Universidade Federal de Goiás
Instituto de Física e Química
An Introduction to the Mathematical Aspects
of Quantum Mechanics:
Course Notes
Petrus Henrique Ribeiro dos Anjos
Paulo Eduardo Gonçalves de Assis
Catalão, GO
2015
Contents

1 Quantum States and Observables
  1.1 Quantum States
      1.1.1 Uncertainty Principle
  1.2 Inner Product and Hilbert Spaces
  1.3 Observables
  1.4 Probability and Functions of Observables
  1.5 Self-adjoint operators
  1.6 Riesz Representation
  1.7 Spectral Theorem
  1.8 Exercises

2 The Spectrum
  2.1 Spectrum and Resolvent
  2.2 Finding the spectrum
      2.2.1 The spectrum of the Hamiltonian
  2.3 Exercises

3 Quantum Dynamics
  3.1 Time evolution and Schrödinger Equation
  3.2 Applications to Two-Level Systems
  3.3 Schrödinger's wave equation
  3.4 Time dependence of Expected Values
      3.4.1 Newton's 2nd Law and Quantum Mechanics
  3.5 Quantum Pictures
  3.6 Exercises

4 Approximation Methods
  4.1 Non-perturbative methods
      4.1.1 Variational Methods
      4.1.2 Extension to Excited States
      4.1.3 A 2 × 2 example
      4.1.4 Method of Successive Powers
      4.1.5 WKB - Semiclassical approximation
  4.2 Time-independent Perturbation Theory
      4.2.1 Time-independent perturbation: Non-degenerate
      4.2.2 Time-independent perturbation: Degenerate
  4.3 The Anharmonic Oscillator
  4.4 Time-dependent Perturbation
      4.4.1 Time-dependent Perturbation Theory
  4.5 Further Applications and Fermi's Golden Rule
  4.6 Dirac's interaction picture
  4.7 Exercises

References
Chapter 1
Quantum States and Observables
In classical physics the mathematical description of a phenomenon is somewhat clear. From the early days of modern science, the movement of a macroscopic body could be completely characterized by the specification of its position at a given instant of time. This was easily achieved with the use of simple tools such as rulers and clocks. Predictions could be computed based on formulations relying on simple mathematical objects called functions. The development of differential and integral calculus, in fact, owes a lot to the works of scientists like Isaac Newton.

With the advance of our understanding of the microscopic world, physicists were forced to abandon the classical formalism to match experiments. Instead, a new theory came to light, and both the basic objects used to describe it and the mathematical tools used to formulate it had to be revised. Part of this mathematical toolbox was already well established at the beginning of the last century, but some of it had to be developed alongside the results of experiments.

Here we present an introduction to the mathematical aspects of quantum mechanics.
1.1 Quantum States
Definition 1.A. The state of a single particle 1-D quantum system is a complex valued continuous function ψ(x, t) such that

i. the probability P_ψ(x ∈ I) of finding that the position of the particle belongs to the interval I ⊂ R at time t is given by
\[ P_\psi(x \in I) = \int_I |\psi(x,t)|^2 \, dx; \]

ii. the probability P_ψ(p ∈ I) of finding that the momentum of the particle belongs to the interval I ⊂ R at time t is given by
\[ P_\psi(p \in I) = \frac{1}{\hbar}\int_I \Bigl|\tilde\psi\Bigl(\frac{p}{\hbar}, t\Bigr)\Bigr|^2 \, dp, \]
where ψ̃ denotes the Fourier transform of ψ, defined for a function f by
\[ \tilde f(k) = \frac{1}{\sqrt{2\pi}}\int_R f(x)\,e^{ikx}\,dx. \]

A general property of the Fourier transform, called Parseval's identity [1], shows us that
\[ \int_R |\psi(x,t)|^2\,dx = \int_R \Bigl|\tilde\psi\Bigl(\frac{p}{\hbar}, t\Bigr)\Bigr|^2 \,\frac{dp}{\hbar}. \tag{1.1.1} \]
For our probability interpretation to hold, we require that
\[ \int_R |\psi(x,t)|^2\,dx = 1, \tag{1.1.2} \]
which means that the particle lies somewhere on the real line.
As an exercise, we suggest that you show that the P_ψ defined in 1.A actually obeys the other rules of probability. More precisely, you should show that, with our definition, for any countable sequence of disjoint intervals I_1, I_2, ..., we have
\[ P_\psi\Bigl(x \in \bigcup_n I_n\Bigr) = \sum_n P_\psi(x \in I_n). \tag{1.1.3} \]
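For readers who want to experiment, here is a short numerical sketch (ours, not part of the formal development) of Definition 1.A: a normalized Gaussian state is sampled on a grid and P_ψ(x ∈ I) is estimated by summing |ψ|² over the interval. The grid, the width σ and the interval I are arbitrary choices, made in arbitrary units.

import numpy as np

# Minimal numerical illustration of Definition 1.A (our sketch; arbitrary units and grid).
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
sigma = 2.0                                   # packet width, an arbitrary choice
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2/(4*sigma**2))

norm = np.sum(np.abs(psi)**2) * dx            # should be ~1, eq. (1.1.2)
mask = (x >= -1.0) & (x <= 1.0)               # the interval I = [-1, 1]
prob = np.sum(np.abs(psi[mask])**2) * dx      # P_psi(x in I), item (i) of Definition 1.A

print(f"normalization  = {norm:.6f}")
print(f"P(x in [-1,1]) = {prob:.6f}")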
So the position x (or the momentum p) of the particle can assume any value in the real line, and to each interval we assign a probability P_ψ(x ∈ I) that the value of x lies in I. We can now show how to compute the mathematical expectation of x. As a warm up, assume that x is restricted to a bounded interval [a, b]. We can divide [a, b] into smaller subintervals I_k and consider the following object:
\[ \sum_k x_k\, P_\psi(x \in I_k), \]
where x_k is an arbitrary point of I_k. We desire that this sum converges to a limit as the maximum length of the subintervals goes to zero, and furthermore that the convergence is independent of our choices of intervals I_k and points x_k. If all this holds, we call the limit x̄ the mathematical expectation of x. If x is not restricted to a bounded interval, we can fix an arbitrary bounded interval [a, b], calculate (as above) a limit for this bounded interval, and finally take the limit where [a, b] grows to the whole real line. In this latter case, if this limit (a) exists, (b) is independent of our choice of interval [a, b] and (c) is independent of how we grow the interval, we call this limit x̄ the mathematical expectation of x. There are a lot of "if"s here, and in order for our physical theory to work we need to start proving some things. Fortunately the following holds:
Lemma 1.1.1. Let the system be in a state ψ and f(x) be a continuous function such that
\[ \int_R |f(x)|\,|\psi(x,t)|^2\,dx < +\infty; \]
then f̄, the mathematical expectation of f(x), is given by
\[ \bar f = \int_R f(x)\,|\psi(x,t)|^2\,dx. \]

Proof. Let I ⊂ R be a bounded interval and I_1, ..., I_N be a partition of I into smaller non-overlapping intervals with maximum length δ. For each k pick an arbitrary x_k ∈ I_k; then
\[ \Bigl| \int_I f(x)|\psi(x,t)|^2\,dx - \sum_k f(x_k)\, P_\psi(x\in I_k) \Bigr| \le \sum_k \int_{I_k} |f(x)-f(x_k)|\,|\psi(x,t)|^2\,dx. \]
The continuity of f(x) implies that we can make |f(x) − f(x_k)| ≤ ε by taking δ sufficiently small. Therefore ∫_{I_k} |f(x) − f(x_k)| |ψ(x,t)|² dx ≤ ε ∫_{I_k} |ψ(x,t)|² dx, and then
\[ \Bigl| \int_I f(x)|\psi(x,t)|^2\,dx - \sum_k f(x_k)\, P_\psi(x\in I_k) \Bigr| \le \epsilon \int_I |\psi(x,t)|^2\,dx. \]
To conclude the demonstration we note that
\[ \int_R f(x)|\psi(x)|^2\,dx = \lim_{l\to+\infty} \int_{-l}^{+l} f(x)|\psi(x)|^2\,dx. \]
A direct consequence of the lemma above is the

Corollary 1.1.1. Let the system be in a state ψ; then
\[ \bar x = \int_R x\,|\psi(x,t)|^2\,dx. \]

Analogous statements hold concerning the momentum, as follows. Their proofs are left as exercises.
Lemma 1.1.2. Let the system be in a state ψ and g(p) denote a continuous function such that
\[ \int_R |g(p)|\,\Bigl|\tilde\psi\Bigl(\frac{p}{\hbar}, t\Bigr)\Bigr|^2 \,\frac{dp}{\hbar} < +\infty; \]
then the following holds:
\[ \bar g = \int_R g(p)\,\Bigl|\tilde\psi\Bigl(\frac{p}{\hbar}, t\Bigr)\Bigr|^2 \,\frac{dp}{\hbar}. \]

Corollary 1.1.2. Let the system be in a state ψ; then
\[ \bar p = \int_R p\,\Bigl|\tilde\psi\Bigl(\frac{p}{\hbar}, t\Bigr)\Bigr|^2 \,\frac{dp}{\hbar}. \]
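The two corollaries above can be checked numerically. The sketch below is our illustration (ℏ = 1 and an arbitrary real Gaussian packet): it builds ψ̃ by direct quadrature in the convention of these notes, verifies Parseval's identity (1.1.1), and computes x̄ and p̄ from the two densities.

import numpy as np

hbar = 1.0                                   # units with hbar = 1 (our choice)
x = np.linspace(-30, 30, 1201); dx = x[1] - x[0]
k = np.linspace(-8, 8, 1201);   dk = k[1] - k[0]

sigma, x0 = 1.5, 1.0                         # width and centre of the packet (arbitrary)
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-(x - x0)**2/(4*sigma**2))

# Fourier transform in the convention of the notes: psi~(k) = (2 pi)^(-1/2) ∫ psi(x) e^{ikx} dx
psi_k = np.exp(1j*np.outer(k, x)) @ psi * dx / np.sqrt(2*np.pi)

# Parseval's identity, eq. (1.1.1): both densities integrate to 1
print(np.sum(np.abs(psi)**2)*dx, np.sum(np.abs(psi_k)**2)*dk)

# Corollaries 1.1.1 and 1.1.2: expectations from the position and momentum densities
x_bar = np.sum(x * np.abs(psi)**2) * dx               # expected: x0
p_bar = np.sum(hbar*k * np.abs(psi_k)**2) * dk        # expected: 0 for a real packet
print(f"x_bar = {x_bar:.4f}, p_bar = {p_bar:.4f}")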
Since we use the Fourier transform to describe the probabilities concerning the momentum, it will be helpful to recall some properties of this integral transformation. We use the Fourier transform here without too much concern about the conditions that f(x) must satisfy in order for the transformed function to exist; for a complete discussion of the Fourier transform see Ref. [1]. First we recall a useful property of the Fourier transform:
\[ \widetilde{\frac{\partial f}{\partial x}}(k) = -ik\,\tilde f(k). \tag{1.1.4} \]
This property follows directly from integration by parts and the observation that if f̃ exists then |f(x)| → 0 as |x| → ∞. We also need an inverse Fourier transform, which is given by
\[ f(x) = \frac{1}{\sqrt{2\pi}}\int_R \tilde f(k)\,e^{-ikx}\,dk. \tag{1.1.5} \]
To have some insight into equation (1.1.5), it is worth considering a very helpful identity:
\[ \delta(x-x') = \frac{1}{2\pi}\int_R e^{-ik(x-x')}\,dk, \tag{1.1.6} \]
where δ(x) denotes the quite ubiquitous entity that physicists call the "Dirac Delta Function". Formally, this object can be defined as a "function" such that for any continuous function f and ε > 0
\[ \int_{-\epsilon}^{+\epsilon} f(x)\,\delta(x)\,dx = f(0). \tag{1.1.7} \]
When dealing with the "Dirac Delta Function", we need to keep in mind that:

1. It is not a function: even an initial analysis shows that δ(x) is not a function; it is actually an entity that mathematicians call a distribution, and we suggest Ref. [2] for a rigorous treatment.

2. Dirac was not the first to use it: for example, an infinitesimal formula for an infinitely tall, unit impulse delta function explicitly appears in a work of Cauchy in 1827 (see Ref. [3]). Indeed, several other authors have dealt with objects with similar characteristics (among them Poisson, Kirchhoff and Lord Kelvin; again see Ref. [3]). In particular, in the late nineteenth century Heaviside had derived the main properties of δ(x).

Despite these considerations, it should be noted that the notation introduced by Dirac in his influential book The Principles of Quantum Mechanics (see Ref. [4]) is not only convenient but insightful and intuitive, which made it readily available to a much larger community.
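To make the defining property (1.1.7) concrete, the following small sketch (ours) integrates a continuous test function against a narrow normalized Gaussian, a standard "nascent delta function"; as its width ε shrinks, the integral approaches f(0). The test function and the widths are arbitrary choices.

import numpy as np

# delta_eps(x) = exp(-x^2/(2 eps^2)) / (eps sqrt(2 pi)) is a "nascent delta function":
# as eps -> 0 the integral ∫ f(x) delta_eps(x) dx approaches f(0), property (1.1.7).
f = lambda x: np.cos(x) + 0.5*x**2            # an arbitrary continuous test function

x = np.linspace(-5, 5, 200001); dx = x[1] - x[0]
for eps in (1.0, 0.3, 0.1, 0.03):
    delta_eps = np.exp(-x**2/(2*eps**2)) / (eps*np.sqrt(2*np.pi))
    approx = np.sum(f(x)*delta_eps) * dx
    print(f"eps = {eps:4.2f}:  integral = {approx:.6f}   (f(0) = {f(0.0):.6f})")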
Putting these ideas into practice, we can use the Fourier transform to connect the x and p representations of the quantum state. This is done in the

Proposition 1.1. Let the system be in a state ψ; then
\[ \bar p = -i\hbar\int_R \frac{\partial\psi(x,t)}{\partial x}\,\psi^\dagger(x,t)\,dx. \]

Proof. It is a straightforward application of eqs. (1.1.5), (1.1.4) and (1.1.6). In fact, we find that
\[
-i\hbar\int_R \frac{\partial\psi(x,t)}{\partial x}\,\psi^\dagger(x,t)\,dx
= \frac{\hbar}{2\pi}\int_R\!\!\int_R\!\!\int_R k\,\tilde\psi(k)\,\tilde\psi^\dagger(k')\,e^{-i(k-k')x}\,dk\,dk'\,dx
= \hbar\int_R\!\!\int_R k\,\tilde\psi(k)\,\tilde\psi^\dagger(k')\,\delta(k-k')\,dk\,dk'
= \hbar\int_R k\,\bigl|\tilde\psi(k)\bigr|^2\,dk
= \int_R p\,\Bigl|\tilde\psi\Bigl(\frac{p}{\hbar}\Bigr)\Bigr|^2\,\frac{dp}{\hbar} = \bar p.
\]
Note that Proposition 1.1 gives us the expected value of the momentum free from the Fourier transform, at the price of introducing partial derivatives. This gives us an important hint about the nature of quantum mechanical systems: we can define a momentum operator P̂ by
\[ \hat P\psi = \frac{\hbar}{i}\,\frac{\partial\psi}{\partial x}. \tag{1.1.8} \]
For instance, a repeated application of the ideas leading to Proposition 1.1 gives us
\[ \overline{p^n} = \bigl(\hat P^n\psi, \psi\bigr), \]
where we use the notation
\[ (f, g) = \int_R f(x)\,g^\dagger(x)\,dx. \tag{1.1.9} \]

Note 1.a. Despite being a well-known fact, it is worth remembering, at least for our most inexperienced readers, that usually \overline{p^n} ≠ (p̄)ⁿ.

For example, eq. (1.1.2) can be written as (ψ, ψ) = 1. We will further discuss this notation and the role of operators like P̂ in quantum mechanics in sections 1.2 and 1.3. But before that, we want to show an intriguing property of quantum systems: the famous Uncertainty Principle.
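The operator formula \overline{p^n} = (P̂ⁿψ, ψ) and the warning in Note 1.a can be illustrated numerically. In the sketch below (ours; ℏ = 1, central finite differences for ∂/∂x, an arbitrary real Gaussian state) we compute (P̂ψ, ψ) and (P̂²ψ, ψ) and compare with the analytic values 0 and ℏ²/(4σ²).

import numpy as np

hbar = 1.0                                        # units with hbar = 1 (our choice)
x = np.linspace(-25, 25, 5001); dx = x[1] - x[0]
sigma = 1.3                                       # arbitrary packet width
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2/(4*sigma**2))    # real Gaussian state

def P(f):
    """Momentum operator of eq. (1.1.8), (hbar/i) d/dx, via central differences."""
    return (hbar/1j) * np.gradient(f, dx)

def pairing(f, g):
    """The pairing (f, g) = ∫ f g† dx of eq. (1.1.9), discretized."""
    return np.sum(f * np.conj(g)) * dx

p_bar  = pairing(P(psi), psi).real                # (P psi, psi); expected 0
p2_bar = pairing(P(P(psi)), psi).real             # (P^2 psi, psi); expected hbar^2/(4 sigma^2)

print(f"p_bar  = {p_bar:.5f}    (expected 0)")
print(f"p2_bar = {p2_bar:.5f}   (expected {hbar**2/(4*sigma**2):.5f})")
# As Note 1.a warns, p2_bar is not p_bar**2.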
1.1.1 Uncertainty Principle
The famous Heisenberg Uncertainty Principle is a theorem about Fourier transforms, once we grant a certain model of quantum mechanics. That is, there is an unavoidable mathematical mechanism that yields an inequality which has a paradigm-shifting interpretation in physics.

First note that, integrating by parts,
\[
(\psi,\psi) = -\int_R x\,\frac{\partial}{\partial x}\bigl(\psi\psi^\dagger\bigr)\,dx
            = -2\,\mathrm{Re}\int_R x\,\psi(x)\,\frac{\partial}{\partial x}\psi^\dagger(x)\,dx.
\]
That is,
\[
(\psi,\psi) \le 2\int_R \Bigl| x\,\psi(x)\,\frac{\partial}{\partial x}\psi^\dagger(x) \Bigr|\,dx.
\]
Furthermore, the Cauchy–Schwarz inequality applies here, implying that
\[
(\psi,\psi) \le 2\int_R \Bigl| x\,\psi(x)\,\frac{\partial}{\partial x}\psi^\dagger(x) \Bigr|\,dx
            \le 2\,\|x\psi\|\,\Bigl\|\frac{\partial}{\partial x}\psi\Bigr\|,
\]
where ‖f‖² = (f, f). Now we use eq. (1.1.4) and Parseval's identity (eq. (1.1.1)) to convert derivatives (in the position) into multiplication (by the momentum p/ℏ):
\[
\Bigl\|\frac{\partial}{\partial x}\psi\Bigr\| = \Bigl\|\widetilde{\frac{\partial}{\partial x}\psi}\Bigr\| = \|k\,\tilde\psi\|.
\]
Since for any quantum state ‖ψ‖² = (ψ, ψ) = 1, we obtain the Heisenberg inequality (using p = ℏk)
\[
\|x\psi\|\,\|p\,\tilde\psi\| \ge \frac{\hbar}{2}.
\]
A similar argument gives, for any x₀, p₀ ∈ R,
\[
\|(x-x_0)\psi\|\,\|(p-p_0)\tilde\psi\| \ge \frac{\hbar}{2}. \tag{1.1.10}
\]
Put x₀ = x̄ and p₀ = p̄; then
\[
\|(x-x_0)\psi\| = \sqrt{\overline{x^2}-\bar x^2} = \Delta x, \qquad
\|(p-p_0)\tilde\psi\| = \sqrt{\overline{p^2}-\bar p^2} = \Delta p.
\]
Applying this to Heisenberg's general inequality (i.e. eq. (1.1.10)) we obtain the famous Heisenberg Principle:
\[
\Delta x\,\Delta p \ge \frac{\hbar}{2}.
\]
Roughly speaking, the quantities Δx and Δp can be interpreted as the errors obtained in measurements of position and momentum respectively. That is, Heisenberg's uncertainty principle is a fundamental limit to the precision with which the pair (x, p) can be known simultaneously; in other words, it gives a lower bound on how spread out the probability distributions of x and p must be. Note that the only relevant physical assumptions are in Definition 1.A, namely: the probabilistic interpretation of the quantum state and that the Fourier transform relates x and p.
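As a numerical illustration of the inequality Δx Δp ≥ ℏ/2 (ours, with ℏ = 1 and the Fourier convention of these notes), the sketch below computes the spreads for a Gaussian packet, which saturates the bound, and for a sech-shaped packet, which does not. Grids and sample states are arbitrary choices.

import numpy as np

hbar = 1.0
x = np.linspace(-40, 40, 1601); dx = x[1] - x[0]
k = np.linspace(-10, 10, 1201); dk = k[1] - k[0]
FT = np.exp(1j*np.outer(k, x)) * dx / np.sqrt(2*np.pi)   # discretized transform, notes' convention

def spreads(psi):
    """Return (Delta x, Delta p) for a state sampled on the grid (normalized here)."""
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2)*dx)
    psi_k = FT @ psi
    rho_x, rho_k = np.abs(psi)**2, np.abs(psi_k)**2
    xb, x2b = np.sum(x*rho_x)*dx, np.sum(x**2*rho_x)*dx
    pb, p2b = np.sum(hbar*k*rho_k)*dk, np.sum((hbar*k)**2*rho_k)*dk
    return np.sqrt(x2b - xb**2), np.sqrt(p2b - pb**2)

for name, psi in [("gaussian", np.exp(-x**2/4)),     # saturates the bound
                  ("sech",     1/np.cosh(x))]:       # does not
    dX, dP = spreads(psi)
    print(f"{name:8s}: dx*dp = {dX*dP:.4f}   (hbar/2 = {hbar/2})")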
1.2 Inner Product and Hilbert Spaces
In this section, we develop the notation introduced in eq. (1.1.9).

Definition 1.B. For all states ψ, φ we define the scalar product on L² by
\[ (\phi, \psi) = \int_R \phi^*(x,t)\,\psi(x,t)\,dx. \]
Lemma 1.2.1. Let a, b ∈ C and φ, ψ, ν ∈ L². The scalar product on L² has the following properties:

1. (φ, aψ + bν) = a(φ, ψ) + b(φ, ν);

2. (ψ, φ) = (φ, ψ)*, where the asterisk denotes complex conjugation;

3. ‖ψ‖² = (ψ, ψ) > 0 unless ψ = 0.

Note 1.b. Note that statement 3 of Lemma 1.2.1 is not quite correct. To be precise, we should say that if ‖ψ‖² = 0 then ψ(x) = 0 almost everywhere, i.e. (in a technical sense) the set on which ψ vanishes takes up nearly all possibilities. However, to simplify our discussion, we identify two functions that agree almost everywhere.
A scalar product obeying Lemma 1.2.1 is called an inner product, and a vector space provided with an inner product is called an inner product space. Lemma 1.2.1 has the following important consequences:

Corollary 1.2.1. Let ψ, φ belong to an inner product space; then the following holds:

i. |(ψ, φ)| ≤ ‖ψ‖ ‖φ‖ (Schwarz inequality);

ii. ‖ψ + φ‖ ≤ ‖ψ‖ + ‖φ‖ (Triangle inequality).

Proof. It is clear that [i.] holds when ψ = 0, so assume that both φ and ψ are non-zero. Also, we assume that (ψ, ϕ) ≠ 0, since otherwise the inequality is obviously true.

Let σ = ψ − [(ψ, ϕ)/‖ϕ‖²] ϕ; then the linearity of the inner product implies that
\[ (\sigma, \varphi) = (\psi, \varphi) - \frac{(\psi, \varphi)}{\|\varphi\|^2}(\varphi, \varphi) = 0. \]
Now,
\[ \|\psi\|^2 = \Bigl\|\sigma + \frac{(\psi, \varphi)}{\|\varphi\|^2}\varphi\Bigr\|^2
   = \|\sigma\|^2 + \frac{|(\psi, \varphi)|^2}{\|\varphi\|^2}
   \ge \frac{|(\psi, \varphi)|^2}{\|\varphi\|^2}, \]
which rearranges to |(ψ, ϕ)| ≤ ‖ψ‖ ‖ϕ‖.

To show [ii.], just expand ‖ψ + ϕ‖² = (ψ + ϕ, ψ + ϕ) and apply the Schwarz inequality. We leave the details as an exercise.
We are now capable of addressing an important question that we have been postponing: what functions are acceptable as quantum states? Our earlier remarks raise some of the issues. Since we need Definition 1.A, we must insist that these functions be square integrable on the real line; so they must belong to the well-known space L²(R). We also desire that these functions satisfy a further property. Suppose there is a sequence of functions ψ_k ∈ L² such that
\[ \|\psi_k - \psi_j\| \to 0 \ \text{ as } \ j, k \to +\infty. \tag{1.2.11} \]
A sequence satisfying eq. (1.2.11) is called a Cauchy sequence. To have things working properly, we want all Cauchy sequences to converge to some element of L². We call this property completeness. More precisely: given a sequence satisfying eq. (1.2.11), there is ψ ∈ L² such that
\[ \|\psi - \psi_k\| \to 0 \ \text{ as } \ k \to +\infty. \tag{1.2.12} \]
We say that a sequence satisfying equation (1.2.12) converges strongly (or in the norm) to ψ and write ψ_k → ψ. We say that a sequence of ψ_k's converges weakly to ψ when
\[ (\psi_k, \varphi) \to (\psi, \varphi) \ \text{ for all } \ \varphi \in L^2. \tag{1.2.13} \]
We point out that, as a consequence of the Schwarz inequality, every strongly convergent sequence is also weakly convergent, since
\[ |(\psi_k - \psi, \varphi)| \le \|\psi_k - \psi\|\,\|\varphi\|. \]
Furthermore, we say that a subset S of L² is closed if every convergent sequence of ψ_k ∈ S converges to some ψ ∈ S (if S is not closed then there are sequences of elements of S that converge to elements of L² that are not in S). A subset S of L² is called dense if for every ψ ∈ L² and ε > 0 there is a φ ∈ S such that ‖ψ − φ‖ < ε. That means that any function in L² can be approximated by functions in S. The following property is very useful.

Lemma 1.2.2. If S is a dense subset of L² and (ψ, ϕ) = 0 for all ϕ ∈ S, then ψ = 0.

Proof. Note that (ψ, ϕ) = 0 implies that ‖ψ − ϕ‖² = ‖ψ‖² + ‖ϕ‖² ≥ ‖ψ‖². Since S is dense, ϕ can be taken arbitrarily close to ψ, so ‖ψ‖² can be made arbitrarily small; therefore ψ = 0.

A vector space with an inner product that is complete with respect to the norm induced by the inner product is called a Hilbert Space (that is, an inner product vector space that satisfies the completeness property). The space L² is a very important example of a Hilbert Space, and many of the statements we make here actually hold for general Hilbert Spaces. However, if you are not familiar with Hilbert Spaces, do not despair: you can still follow our notes without difficulty by considering everything in the L² context.
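A quick numerical sanity check of Definition 1.B and Corollary 1.2.1 on a grid approximation of L²(R) (our sketch; the two sample functions and the grid are arbitrary choices):

import numpy as np

# Grid approximation of L2(R); the two sample functions are arbitrary.
x = np.linspace(-10, 10, 2001); dx = x[1] - x[0]

def inner(phi, psi):
    """Discretized scalar product of Definition 1.B: (phi, psi) = ∫ phi* psi dx."""
    return np.sum(np.conj(phi) * psi) * dx

norm = lambda f: np.sqrt(inner(f, f).real)

psi = np.exp(-x**2/2) * (1 + 1j*x)
phi = np.exp(-np.abs(x)) * np.cos(3*x)

print(abs(inner(psi, phi)), "<=", norm(psi)*norm(phi))   # Schwarz inequality
print(norm(psi + phi), "<=", norm(psi) + norm(phi))      # triangle inequality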
1.3 Observables
Definition 1.C. A physical quantity that can be measured is called an Observable.
Postulate 1. To every observable a there is a corresponding linear operator A with dense domain D(A) ⊂ L² such that ā, the expected value of a for the system in the state ψ ∈ D(A), is given by
\[ \bar a = (\psi, A\psi). \]
Not all operators with the property described in Postulate 1 correspond to an observable. First, we need the observable to assume only real values.

Definition 1.D. A linear operator A is called Hermitian if (ψ, Aφ) = (Aψ, φ) for every ψ, φ ∈ D(A).

Lemma 1.3.1. A is a Hermitian operator ⇔ (ψ, Aψ) ∈ R for every ψ ∈ D(A).

Proof. For ψ ∈ D(A), since A is Hermitian we note that (Aψ, ψ) = (ψ, Aψ) = (Aψ, ψ)*, so (Aψ, ψ) is real. Conversely, assume that Im(Aψ, ψ) = 0 for every ψ ∈ D(A). We have that
\[ (A[i\varphi + \psi], i\varphi + \psi) = (A\varphi, \varphi) + (A\psi, \psi) + i(A\varphi, \psi) - i(A\psi, \varphi); \]
taking imaginary parts, we have that Re(Aψ, ϕ) = Re(ψ, Aϕ). Finally,
\[ \mathrm{Im}(A\psi, \varphi) = \mathrm{Im}\bigl[-i(A\,i\psi, \varphi)\bigr] = -\mathrm{Re}(A\,i\psi, \varphi) = -\mathrm{Re}\,i(\psi, A\varphi) = \mathrm{Im}(\psi, A\varphi). \]
Then (Aψ, ϕ) = (ψ, Aϕ).
Now consider, for example, the momentum operator defined in eq. (1.1.8). The expression
\[ \bar p = (\hat P\psi, \psi) = -i\hbar\int_R \frac{\partial\psi}{\partial x}(x)\,\psi^\dagger(x)\,dx \]
makes sense only if ψ is differentiable with respect to x and if the integral exists. That means that D(P̂), the domain of the momentum operator, cannot be the whole of L². In fact, there are many examples of elements of L² that are not differentiable everywhere. We need ways to extend the domain of an operator to deal with these cases. To do this, it is enough to require the domain of the operator to be dense in L². Furthermore, we need the domain of the operator to be the largest possible. This motivates the

Postulate 2. If A is an operator corresponding to an observable a and B is a Hermitian operator such that D(A) ⊂ D(B) and
\[ \bar a = (\psi, B\psi) \]
for all ψ ∈ D(A), then B = A (i.e. D(B) = D(A) and Aψ = Bψ).
The idea of this postulate is to give us a way to extend the domain of an operator. For instance, suppose that ψ ∉ D(A), but there is a sequence {ψ_n} ⊂ D(A) such that ψ_n → ψ and Aψ_n → f; then we define Aψ = f. This only makes sense if f does not depend on the sequence {ψ_n}. Now, suppose we have another sequence φ_n ∈ D(A) with φ_n → ψ and Aφ_n → f′. Then μ_n = ψ_n − φ_n → 0, and we need that Aμ_n → 0 for this method to work. An operator with this property is called closable. When it works, this allows us to construct an extended operator denoted by Ā, the closure of A.

Note 1.c. It is also worth noting that Ā cannot be extended any further using the method outlined above. In fact, suppose that there is a sequence ψ_k ∈ D(Ā) such that ψ_k → ψ and Āψ_k → f. Then there are sequences ψ_{jk} ∈ D(A) such that ψ_{jk} → ψ_k and Aψ_{jk} → Āψ_k. So we can take one element of each of these sequences and construct a new sequence ψ_{kk} where ψ_{kk} → ψ and Aψ_{kk} → f. Therefore ψ ∈ D(Ā) and Āψ = f.
As our previous discussion suggests, not all operators are closable. Fortunately for the quantum mechanical theory, we are dealing with Hermitian operators with dense domain, and we have the

Lemma 1.3.2. A Hermitian operator with dense domain is closable.
Proof. Let D(A) ∋ ψn → 0 and Aψn → f . For any ϕ ∈ D(A) we have that
(Aψn , ϕ) = (ψn , Aϕ).
Taking the limits, we have that
(f, ϕ) = (0, Aϕ) = 0.
Since D(A) is dense Lemma 1.2.2 implies that f = 0.
Finally, we can put these results together. This leads us to the
Proposition 1.2. If the observable a is real valued then the corresponding operator is a closed Hermitian operator.

Proof. We have already proved that A must be Hermitian. Since we require its domain to be dense, it is closable. We claim that the closure Ā is also Hermitian. In fact, for ψ, ϕ ∈ D(Ā) there are sequences D(A) ∋ ψ_n → ψ and D(A) ∋ ϕ_k → ϕ such that Aψ_n → Āψ and Aϕ_k → Āϕ, and therefore
\[ (\bar A\psi, \varphi) = \lim_{n,k}(A\psi_n, \varphi_k) = \lim_{n,k}(\psi_n, A\varphi_k) = (\psi, \bar A\varphi). \]
Since Ā cannot be further extended, the proposition follows from Postulate 2.
1.4 Probability and Functions of Observables
It is fundamental in any scientific theory that one can make some sort of prediction of the outcome of an experiment. If the theory is not able to make predictions, then it is not of any use to science (and, most importantly, not scientific at all). In particular, in quantum mechanics one needs to determine the probability P_ψ(a ∈ I) that the real observable a lies in the interval I. Clearly this quantity must depend on the quantum state ψ and the corresponding operator A. We now show how one can obtain this information. Our approach is based on the

Note 1.d. A real function of an observable is an observable.

The reason we emphasize this point is that usually physical quantities are not measured directly, but calculated from other quantities; so this is a fundamental requirement of the theory. It is also highly likely that a more careful author would prove this statement or include it as a postulate. It means that for any observable a and f : R → R, there is an observable f(a). Corresponding to the observable f(a) there is a Hermitian operator that (despite the abuse of notation) we denote f(A) (Postulate 1).
Now let χ_I(λ) be the characteristic function of the interval I, i.e.
\[ \chi_I(\lambda) = \begin{cases} 1, & \lambda \in I, \\ 0, & \lambda \notin I. \end{cases} \tag{1.4.15} \]
So let us consider the operator χ_I(A), where A is the operator corresponding to the observable a. Since, for any state ψ, a real valued observable a either lies in I or does not, we find that
\[ \overline{\chi_I(a)} = P_\psi(a\in I)\times 1 + P_\psi(a\notin I)\times 0
\ \Rightarrow\ (\psi, \chi_I(A)\psi) = P_\psi(a\in I). \]
Unfortunately, this hardly answers our question, since we still need to understand what the operator χ_I(A) is and, given A, how to construct it. We now try to deal with these problems by proving some properties of χ_I(A).
Lemma 1.4.1. The following holds:

i. ‖χ_I(A)ψ‖ ≤ ‖ψ‖, for all ψ ∈ D(χ_I(A)²);

ii. D(χ_I(A)) = L².

Proof. Since χ_I(A) is Hermitian, we have that
\[ \overline{\chi_I^2(a)} = (\chi_I^2(A)\psi, \psi) = (\chi_I(A)\psi, \chi_I(A)\psi) = \|\chi_I(A)\psi\|^2 \]
for all normalized ψ in the domain of χ_I²(A). Also \overline{\chi_I^2(a)} = \overline{\chi_I(a)} = P_ψ(a ∈ I) ≤ 1, therefore ‖χ_I(A)ψ‖ ≤ 1. Now, for ϕ ≠ 0, write ψ = ϕ/‖ϕ‖, and this leads to statement [i.].

Lemma 1.4.2. If A is a Hermitian operator with dense domain such that
\[ (A\psi, \psi) = 0 \quad \forall \psi \in D(A), \]
then A = 0.
Lemma 1.4.3. For each I, J ⊂ R, the following holds:

i. χ_I²(A) = χ_I(A);

ii. χ_{I∪J}(A) = χ_I(A) + χ_J(A) − χ_{I∩J}(A);

iii. χ_{I∩J}(A) = χ_I(A)χ_J(A).

Proof. Note that χ_I²(a) = χ_I(a) implies that (χ_I²(A)ψ, ψ) = (χ_I(A)ψ, ψ), so ([χ_I²(A) − χ_I(A)]ψ, ψ) = 0 for all ψ ∈ L². Statement [i.] then follows from Lemma 1.4.2.

We leave the demonstration of the remaining statements as an exercise. (Hint: note that χ_{I∪J} = χ_I + χ_J − χ_{I∩J} and that [χ_I + χ_J]² = χ_I + χ_J + 2χ_{I∩J}.)

We have advanced quite a bit, but we are still not able to construct the operator χ_I(A) given A, and we do not even know whether it is defined for every densely defined closed Hermitian operator. To address these issues, in the next section we shall derive more consequences of our postulates.
1.5 Self-adjoint operators
Our main aim in this section is to prove and discuss the

Proposition 1.3. If A is the operator corresponding to a real observable, then the operator (1 + A²) is onto.

Proposition 1.3 is instrumental. It tells us that when dealing with a quantum mechanical operator, for each f ∈ L² the equation (1 + A²)ψ = f has a solution ψ ∈ D(A²) ⊂ D(A). We should note that, given two operators A and B, D(A + B) = D(A) ∩ D(B), so D(1 + A²) = D(A²). Furthermore, note that (1 + A²) = (A + i)(A − i) = (A − i)(A + i); therefore Proposition 1.3 tells us that for every f ∈ L² there are ψ, ϕ ∈ D(A) such that
\[ (A - i)\psi = (A + i)\varphi = f. \tag{1.5.16} \]
Having noticed this fact, Proposition 1.3 also suggests that not all densely defined closed Hermitian operators can correspond to an observable. This leads us to the

Proposition 1.4. Let A be an operator corresponding to a real observable and let ψ, f ∈ L² be such that for all ϕ ∈ D(A) we have
\[ (\psi, A\varphi) = (f, \varphi). \]
Then ψ ∈ D(A) and Aψ = f.

Proof. What needs to be proved is that ψ ∈ D(A) (if ψ ∈ D(A), we can use that A is Hermitian, so that (Aψ − f, ϕ) = 0, and by Lemma 1.2.2 Aψ = f). Now note that, for every ϕ ∈ D(A), we have
\[ (\psi, [A + i]\varphi) = (\psi, A\varphi) - i(\psi, \varphi) = (f - i\psi, \varphi). \tag{1.5.17} \]
From eq. (1.5.16), there is a w ∈ D(A) such that (A − i)w = f − iψ, which leads to
\[ (\psi, [A + i]\varphi) = (f - i\psi, \varphi) = ([A - i]w, \varphi) = (w, [A + i]\varphi). \]
Again by eq. (1.5.16), there is a ϕ ∈ D(A) such that [A + i]ϕ = ψ − w. Therefore
\[ 0 = (\psi - w, [A + i]\varphi) = (\psi - w, \psi - w) = \|\psi - w\|^2, \]
which shows us that ψ = w ∈ D(A).
The property described in Proposition 1.4 is instrumental to build functions of observables; however, (un)fortunately, this property is not shared by all densely defined closed Hermitian operators. This is one of the crucial points of this chapter, so take a deep breath and allow yourself a moment of reflection on this issue. For any densely defined operator A with domain D(A) ⊂ L² we can define A†, the adjoint of A, by setting A†ψ = f as in Proposition 1.4; this makes perfect sense because D(A) is dense (otherwise there could be more than one function f satisfying the relation). That means
\[ (\psi, A\varphi) = (A^\dagger\psi, \varphi). \tag{1.5.18} \]
The linearity of the inner product implies that A† is also a linear operator. When A is Hermitian, we may be tempted to swap the position of A in the inner product and say that A† = A; however, this is only true if ψ ∈ D(A), while the defining relation of A† holds for ψ ∈ L². Then it is clear that D(A) ⊂ D(A†) and that, restricted to D(A), we have A† = A. For an operator that satisfies Proposition 1.4 we say that the operator is self-adjoint, i.e. D(A†) = D(A) and A† = A.

Putting all these ideas together, we have just proved the

Proposition 1.5. If a is a real observable then the corresponding operator is self-adjoint.
1.6 Riesz Representation
When we are dealing with Hilbert Spaces, adjoints and, more importantly, physicists, it is important to have some rigorous mathematical results to guide us. Usually, when dealing with quantum mechanics, physicists use a very particular "language" that they call "Dirac Notation" or "Bra-Ket Notation" to denote abstract vectors and linear functionals (i.e. linear maps from the Hilbert space to the scalars), leaving inner products in a somewhat secondary role. The name "Bra-Ket" comes from the fact that the inner product of two quantum states is denoted by ⟨ψ|ϕ⟩, where ⟨ψ|, called a "Bra", is a linear functional and |ϕ⟩ is a vector in the Hilbert space.

Note 1.e. It is curious that (again) Dirac was not the first one to introduce this notation; the notation has its roots in Grassmann's algebra calculations nearly 80 years before (see [5]). But, again, Dirac deserves all the merit for disseminating the notation that nowadays is widespread in quantum mechanics: almost every phenomenon that is explained using quantum mechanics (including a large portion of what is called modern physics) is usually explained with the help of bra-ket notation. Beyond all these bras, part of the (sex) appeal of "Dirac Notation" is the abstract representation-independence it encodes, together with its versatility in producing a specific representation (e.g. x, or p) without much ado, or excessive reliance on the nature of the linear spaces involved. So once again the notation disseminated by Dirac is not only convenient but insightful and intuitive, which made it readily available to a much larger community.
The important guideline that we want to discuss is the (Hilbert Space) Riesz Representation lemma. This theorem establishes an important connection between a Hilbert space and its linear functionals, that is, a connection between "Bras" and "Kets". Here we will only describe and discuss the theorem; we will not prove it. For a demonstration, we refer to [6]. First note that for each ϕ in the Hilbert Space H we can define a linear functional F_ϕ using the inner product by setting, for all ψ ∈ H,
\[ F_\varphi\psi = (\varphi, \psi). \tag{1.6.19} \]
A Bra-Ket enthusiast would say that to every Ket (i.e. an element of the Hilbert space) there corresponds a unique Bra (i.e. a linear functional). The Riesz Representation lemma deals with the converse of this statement (i.e. it answers the question of whether for every Bra there is a unique Ket). We say that a linear functional F is bounded if there is a C > 0 such that for all ψ ∈ H
\[ |F\psi| \le C\,\|\psi\|. \tag{1.6.20} \]
The smallest constant C for which eq. (1.6.20) holds is called the norm of F, and we denote it by ‖F‖. Note that not every linear functional is bounded; a straightforward example in quantum mechanics is the following:
\[ F_x\psi = \psi(x) \]
for all ψ ∈ L², i.e. F_x is the linear functional that associates each state with its value at a given point x.

The Riesz Representation lemma states that every bounded linear functional (a "bounded Bra") corresponds to a vector in the Hilbert Space. More precisely,

Theorem. [Riesz Representation Lemma] Let F be a bounded linear functional defined everywhere on a Hilbert space H. Then there is ϕ ∈ H such that ‖ϕ‖ = ‖F‖ and, for all ψ ∈ H,
\[ F\psi = (\varphi, \psi). \]

In other words, the Riesz Representation lemma says that a bounded linear functional on a Hilbert space is just an inner product.
1.7 Spectral Theorem
At this point, after some simple physical-mathematical requirements, we understand that the operator A corresponding to a real observable a is self-adjoint. These requirements are actually enough to characterize the operators in quantum mechanics; that means that if A is self-adjoint we can construct the operators χ_I(A) that allow us to calculate the probability that the observable lies in the interval I. This is due to the Spectral theorem. The Spectral theorem (for self-adjoint operators) basically states that any self-adjoint operator is unitarily equivalent to a multiplication operator. This result is far from trivial, and here we will only state the theorem and explain it; we will not try to give a proof. For a demonstration we refer to Ref. [6].

An orthogonal projection O on a closed subspace S of H is a linear operator such that Oψ = ψ for all ψ ∈ S and Oψ⊥ = 0 for all ψ⊥ ∈ S⊥. Any arbitrary ϕ ∈ H can be written as a unique decomposition ϕ = ψ + ψ⊥, so that linearity extends O to the whole space (and Oϕ = ψ). Clearly, O is idempotent (i.e. O² = O), since for any ϕ = ψ + ψ⊥ we have O²ϕ = O(Oϕ) = Oψ = ψ. Given any two vectors ϕ_i = ψ_i + ψ_{i⊥} (i = 1, 2), we note that
\[ (O\varphi_1, \varphi_2) = (\psi_1, \psi_{2\perp}) + (\psi_1, \psi_2) = (\psi_1, \psi_2)
   = (\psi_1, O\varphi_2) + (\psi_{1\perp}, O\varphi_2) = (\varphi_1, O\varphi_2), \]
which shows that O is Hermitian. Moreover,
\[ \|O\varphi\| = \|\psi\| \le \|\varphi\|, \]
so O is bounded.
We note that orthogonal projections satisfy many of the properties of the operators χ_I of section 1.4. For example, suppose that S and S′ are closed subspaces of H, and let O be the orthogonal projection on C = S ∩ S′. Let O_S and O_{S′} be the orthogonal projections on S and S′ respectively. Clearly, if ψ ∈ C, then Oψ = O_S ψ = O_{S′} ψ = ψ, while Oψ⊥ = 0 for ψ⊥ ∈ C⊥. When O_S and O_{S′} commute (as happens for the spectral projections introduced below), this leads to the conclusion that O = O_S O_{S′}, and we suggest that the reader compare this with Lemma 1.4.3.
For each self-adjoint operator, the Spectral theorem guarantees the existence of a family of orthogonal projections that provides a canonical decomposition, called the spectral decomposition, of the underlying Hilbert space on which the operator acts. More precisely,

Theorem. [Spectral Theorem] Let A be a self-adjoint operator on H. There is a family of orthogonal projections E_A(λ), depending on a real parameter λ and called a spectral family, such that

i. λ₁ < λ₂ ⇒ E_A(λ₁)E_A(λ₂) = E_A(λ₁);

ii. for ε > 0, E_A(λ + ε) → E_A(λ) as ε → 0;

iii. for ψ ∈ H,
\[ E_A(\lambda)\psi \to 0 \ \text{ as } \ \lambda\to-\infty, \qquad E_A(\lambda)\psi \to \psi \ \text{ as } \ \lambda\to+\infty; \]

iv. ψ ∈ D(A) ⇔ ∫_R λ² d‖E_A(λ)ψ‖² < +∞;

v. for ψ ∈ D(A) and φ ∈ H,
\[ (\phi, A\psi) = \int_R \lambda\, d(\phi, E_A(\lambda)\psi); \]

vi. if f is a complex valued function, the operator f(A) is given by
\[ f(A) = \int_R f(\lambda)\, dE_A(\lambda), \]
defined on the domain D(f(A)) consisting of all ψ ∈ H such that
\[ \int_R |f(\lambda)|^2\, d\|E_A(\lambda)\psi\|^2 < +\infty. \]
Now back to business (i.e. proving some results). If a is a real valued observable with corresponding operator A, then from the spectral theorem the mathematical expectation ā of the observable a for a system in the state ψ ∈ D(A) is given by
\[ \bar a = \int_R \lambda\, d(\psi, E_A(\lambda)\psi). \]
Furthermore, the probability P_ψ(a ∈ I) that the observable a lies in the interval I for a system in the state ψ ∈ D(A) is
\[ P_\psi(a \in I) = \int_I d(\psi, E_A(\lambda)\psi). \]
As we pointed out, this theorem gives us a prescription to construct the operator χ_I(A). In fact, using [vi.], we find that
\[ \chi_I(A) = \int_I dE_A(\lambda). \]

Note 1.f. For I = (−∞, λ], we have E_A(λ) = χ_I(A).

For a concrete example, consider the position of the particle. It is clearly a real valued observable, and therefore there is a corresponding self-adjoint operator X̂, given by
\[ [\hat X\psi](x) = x\,\psi(x). \tag{1.7.21} \]
It is worth remembering that an operator is not defined unless its domain is specified. Here this is of double importance, since the operators of interest arise from real observables and therefore are self-adjoint. Even a superficial analysis should convince us that the slightest change in the domain can destroy self-adjointness. So we need to specify a domain where X̂ is self-adjoint. The simplest (and largest) domain D(X̂) we can choose in order for X̂ to be self-adjoint is the set of those ψ ∈ L² such that xψ ∈ L². In fact, it is easy to see that X̂ is Hermitian. Moreover, taking any f ∈ L², we see that ψ± = f/(x ± i) ∈ D(X̂) and (X̂ ± i)ψ± = f, which shows that (X̂ ± i) is onto, so Proposition 1.4 holds. Moreover, X̂ is densely defined, since for any f ∈ L² we can take ψ_ε = f/(εx² + 1) ∈ D(X̂). But
\[ |\psi_\epsilon - f| \le \frac{\epsilon x^2}{\epsilon x^2 + 1}\,|f|, \]
which implies that ψ_ε → f as ε → 0. Therefore X̂ is self-adjoint.
Now, the Spectral theorem states that there is a family of orthogonal projections E(x₀) such that
\[ \hat X = \int_R x_0\, dE(x_0). \]
Our last remark (Note 1.f) shows us that the orthogonal projections E(x₀) are actually given by
\[ [E(x_0)\psi](x) = H(x - x_0)\,\psi(x), \]
where H(x) is the step function (H(x) = 0 for x > 0 and H(x) = 1 for x ≤ 0).
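For the position operator the spectral calculus is very concrete: χ_I(X̂) acts by multiplication with the indicator of I, and E(x₀) by multiplication with the step function above. The sketch below (ours; an arbitrary Gaussian state and interval, on an arbitrary grid) checks that (ψ, χ_I(X̂)ψ) and the increment of the spectral family both reproduce the probability of Definition 1.A.

import numpy as np

# For the position operator, chi_I(X) multiplies by the indicator of I and E(x0)
# multiplies by the step function H(x - x0) of the text (keep the region x <= x0).
x = np.linspace(-15, 15, 3001); dx = x[1] - x[0]
sigma = 1.2
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2/(4*sigma**2))

inner = lambda f, g: np.sum(f*np.conj(g))*dx

a, b = -0.5, 2.0
I = (x > a) & (x <= b)                     # half-open interval (a, b], so it matches E(b) - E(a)
chi_I = I.astype(float)

prob_spectral = inner(chi_I*psi, psi).real                 # (psi, chi_I(X) psi)
prob_direct   = np.sum(np.abs(psi[I])**2)*dx               # Definition 1.A
E = lambda x0: (x <= x0).astype(float)                     # spectral family of X
prob_family   = inner(E(b)*psi, psi).real - inner(E(a)*psi, psi).real

print(prob_spectral, prob_direct, prob_family)             # all three agree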
We close this chapter stating a last postulate, necessary for the logical self-consistency of our physical theory. The postulate, usually referred to as "the wave function collapse", is the following:

Postulate 3. If the measurement of the observable a of a system in a state ψ gives the result λ, then immediately after the measurement the new state of the system becomes
\[ \phi = \frac{E_A(\lambda)\psi}{\|E_A(\lambda)\psi\|}. \]

This is to say that immediately after a measurement of the observable a, we know for sure the result if we perform the same measurement again. That means that if we find that a ∈ I in our first measurement, the second one (performed immediately after) will also give that a ∈ I. So the system cannot remain in the same state ψ; otherwise we could only predict the probability of a ∈ I in the second measurement. This is the essence of measurement in quantum mechanics and connects the wave function with classical observables like position and momentum. The collapse is indeed one of the two processes by which a quantum system evolves in time; the other one is given by the Schrödinger equation, which we will discuss in Chapter 3. For a more detailed discussion of the "wave function collapse" we refer to Ref. [7].
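As an illustration (ours) of this idea, the sketch below uses the projection χ_I(X̂) onto a measured interval, rather than E_A(λ) itself, for a position measurement asking "is x in I?": it projects a Gaussian state after a "yes" answer, renormalizes, and checks that an immediate repetition of the same measurement gives "yes" with probability 1. Grid and interval are arbitrary choices.

import numpy as np

# A position measurement asking "is x in I?": project with chi_I(X) (the interval version
# of Postulate 3), renormalize, and repeat the measurement immediately.
x = np.linspace(-15, 15, 3001); dx = x[1] - x[0]
psi = (2*np.pi)**(-0.25) * np.exp(-x**2/4)            # initial normalized state

I = (x >= 0.0) & (x <= 1.5)
prob_first = np.sum(np.abs(psi[I])**2)*dx
print(f"P(x in I) before the measurement: {prob_first:.4f}")

phi = np.where(I, psi, 0.0)                           # chi_I(X) psi
phi = phi / np.sqrt(np.sum(np.abs(phi)**2)*dx)        # post-measurement state
prob_second = np.sum(np.abs(phi[I])**2)*dx
print(f"P(x in I) right after a 'yes' outcome: {prob_second:.4f}")   # equals 1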
1.8 Exercises
1. Show that eq. 1.1.3 holds.
2. Prove Lemma 1.1.2 and the corollary 1.1.2.
3. Prove the triangle inequality.
4. Give an example of a sequence in which ψk → ψ weakly, but not
strongly.
5. Prove [ii.] and [iii.] of the lemma 1.4.3.
6. Construct the operator χI (P̂ ).
7. Show that the linear functional Fx ψ = ψ(x) is not bounded.
Chapter 2
The Spectrum
Having at our disposal the mathematical formalism capable of describing a quantum system, as discussed in the previous chapter, we are now in a position to aspire to a connection between "our" theory and the results obtained in a laboratory. For this purpose it is not enough, although necessary, to know the quantum state of an experimental object. This is because one hardly measures the state itself but, rather, infers information about it through a set of indirect measurements [8].

Therefore, if we have an electron confined to live in a certain region, a particle in a box, so to speak, in order to extract information about its condition we must design experiments to specify, for instance (and if possible), its position, its momentum, its energy, its (spin) angular momentum, etc. We have seen so far that there exist some restrictions on the outcomes of a measurement, be it due to the uncertainty principle, quantization conditions or the self-adjointness postulate.

It becomes, then, paramount to investigate the possible results measured by an experimentalist. This set of allowed outcomes of an observable measurement constitutes what is called the spectrum of the associated operator. In this chapter we would like to shed light on this question.
2.1 Spectrum and Resolvent
We have seen how, in quantum mechanics, we can predict the probability that a real observable a lies in some interval I. We can now ask what values an observable can assume, or whether it can attain any value. The set of possible values depends on the corresponding self-adjoint operator A, and we call it the spectrum of A, denoted σ(A). To properly answer this question, we shall apply the spectral theorem to our observables. To do this, we give the

Definition 2.A. We say that a scalar λ is in the resolvent set ρ(A) of a closed operator A on H if there is a bounded operator R_λ on H, called the resolvent of A, such that
\[ R_\lambda(\lambda - A)\psi = \psi, \quad \psi \in D(A), \qquad
   (\lambda - A)R_\lambda\varphi = \varphi, \quad \varphi \in H. \]
The definition says that, if it exists, then R_λ = (λ − A)⁻¹. With this definition we have the

Proposition 2.1. If A is a self-adjoint operator then all non-real numbers are in ρ(A).

Proof. For every non-real λ, f(x) = 1/(λ − x) is bounded and continuous over the real line. So by the spectral theorem
\[ f(A) = \int_R \frac{1}{\lambda - x}\, dE_A(x) \]
is a bounded operator defined everywhere. Moreover, f(A) satisfies the requirements of Definition 2.A.
The key point here is that if a real observable cannot attain (in some sense) a certain value λ₀, then λ₀ lies in the resolvent set of the corresponding operator. More precisely, we have the

Proposition 2.2. Let A be a self-adjoint operator and I an open interval of the real line. If χ_I(A) = 0 then I ⊂ ρ(A).
Proof. Let λ₀ ∈ I. Since I is open, δ, the minimum distance between λ₀ and the ends of I, is positive. So define
\[ f(x) = \begin{cases} \dfrac{1}{\lambda_0 - x}, & x \notin I, \\[4pt] 0, & x \in I. \end{cases} \]
Clearly, on the real line, f(x) is piecewise continuous and bounded by 1/δ. Thus by the spectral theorem
\[ f(A) = \int_{R - I} \frac{1}{\lambda_0 - x}\, dE_A(x) \]
is a bounded operator defined everywhere. Furthermore,
\[ f(A)(\lambda_0 - A)\psi = (1 - \chi_I(A))\psi = \psi, \quad \forall \psi \in D(A), \qquad
   (\lambda_0 - A)f(A)\varphi = (1 - \chi_I(A))\varphi = \varphi, \quad \forall \varphi \in H. \]
Therefore λ₀ ∈ ρ(A). Since this holds for all λ₀ ∈ I, we conclude that I ⊂ ρ(A).
The converse of this theorem is also true. This leads us to the

Proposition 2.3. For any real λ₀ ∈ ρ(A) there is an open interval I ⊂ R such that λ₀ ∈ I and χ_I(A) = 0.

Proof. Suppose there is no such interval. Then there must be a sequence of open intervals I_n shrinking to λ₀ (i.e. λ₀ ∈ I_n and |I_n| → 0) such that χ_{I_n}(A) ≠ 0. So there is a normalized ψ_n ∈ H such that χ_{I_n}(A)ψ_n = ψ_n. The Spectral theorem then implies
\[ \|(\lambda_0 - A)\psi_n\| = \|(\lambda_0 - A)\chi_{I_n}(A)\psi_n\| \le \sup_{\lambda\in I_n}|\lambda_0 - \lambda| \to 0. \]
But since λ₀ ∈ ρ(A) this is an absurd, because
\[ \|\psi_n\| = \|(\lambda_0 - A)^{-1}(\lambda_0 - A)\psi_n\| \le \|(\lambda_0 - A)^{-1}\|\,\|(\lambda_0 - A)\psi_n\| \to 0, \]
which contradicts ‖ψ_n‖ = 1.
Corollary 2.1.1. If A is self-adjoint, then ρ(A) is an open set.

Proof. Let λ ∈ ρ(A). If λ is not real, then there is a small open disk D_λ with center in λ which contains no real numbers; so by Proposition 2.1, D_λ ⊂ ρ(A). Now, if λ is real, then Proposition 2.3 says that there is an open interval I ⊂ ρ(A) containing λ, which we may take centered at λ. The length of this interval is the diameter of an open disk D_I centered at λ, and clearly D_I ⊂ ρ(A).
The points that are not in the resolvent set are in the spectrum. In other words:

Definition 2.B. The set σ(A), called the spectrum of A, is the set C − ρ(A).

That means that σ(A) consists of those points λ for which the operator (λ − A) is not invertible (with bounded inverse). So a straightforward consequence of Corollary 2.1.1 is the

Corollary 2.1.2. If A is a self-adjoint operator, then the spectrum σ(A) is a closed set.
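In finite dimensions these notions are easy to visualize. The sketch below (ours; a random 6 × 6 Hermitian matrix) computes the resolvent at a few points and shows its norm growing like 1/dist(λ, σ(A)) as λ approaches the spectrum, while non-real points stay safely in ρ(A).

import numpy as np

# A random Hermitian matrix: its spectrum is real, the resolvent exists off the spectrum,
# and ||R_lambda|| grows like 1/dist(lambda, sigma(A)) as lambda approaches an eigenvalue.
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6)) + 1j*rng.normal(size=(6, 6))
A = (B + B.conj().T) / 2
eigs = np.linalg.eigvalsh(A)
print("sigma(A) =", np.round(eigs, 3))

for lam in (eigs[0] + 2.0, eigs[0] + 0.1, eigs[0] + 1e-3, 1j):
    R = np.linalg.inv(lam*np.eye(6) - A)
    dist = np.min(np.abs(lam - eigs))
    print(f"lambda = {lam}:  ||R|| = {np.linalg.norm(R, 2):.3e},  1/dist = {1/dist:.3e}")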
The main reason we go through all this mess is that the spectrum is a fundamental object in quantum theory. Its role becomes clear in the

Proposition 2.4. An observable can only assume values in the spectrum of the corresponding operator.

Proof. Let A be the operator corresponding to the observable a. For a real λ₀ ∈ ρ(A), by Proposition 2.3 there is an interval I containing λ₀ such that χ_I(A) = 0. Following section 1.4, we have
\[ P_\psi(a \in I) = (\chi_I(A)\psi, \psi) = 0 \]
for any ψ ∈ H. Therefore a cannot assume values in I for any state function.
So when faced with an observable in quantum mechanics (e.g. when we wish to make a measurement on a physical system), we must consider the operator corresponding to this observable, and (possibly) the first thing we should look at is the spectrum of this operator. In fact, as the proposition above states, we can only measure values in the spectrum of the operator! Thus, a criterion for finding the spectrum of a self-adjoint operator is essential to quantum mechanics. To develop this criterion, we will need some preliminary results.
Lemma 2.1.1. A closed vector subspace S of a Hilbert Space H is itself a Hilbert Space.

Proof. We just need to prove that S ⊂ H is complete. Let ψ_k ∈ S be a Cauchy sequence. Since H is complete, ψ_k → ψ ∈ H. But since S is closed it contains all its limit points, therefore ψ ∈ S. So all Cauchy sequences of elements of S converge to an element of S, and S is a complete inner product space, i.e. a Hilbert space.
We now state a very useful theorem for determining the spectrum of a self-adjoint operator.

Proposition 2.5. Let A be a self-adjoint operator. λ ∈ R lies in σ(A) if and only if there is a sequence ψ_k ∈ D(A) such that ‖ψ_k‖ = 1 and
\[ \|(\lambda - A)\psi_k\| \to 0. \tag{2.1.1} \]
Proof. Suppose λ ∉ σ(A) and eq. (2.1.1) holds. Then λ ∈ ρ(A) and therefore
\[ R_\lambda(\lambda - A)\psi_k = \psi_k, \]
so that 1 = ‖ψ_k‖ ≤ ‖R_λ‖ ‖(λ − A)ψ_k‖ → 0, a contradiction with eq. (2.1.1).

Now suppose that eq. (2.1.1) does not hold. Our strategy is to show that (λ − A) is invertible, so λ ∈ ρ(A). To do this we claim that there is a constant C such that
\[ \|\psi\| \le C\,\|(\lambda - A)\psi\|. \tag{2.1.2} \]
If not, there must be a sequence ϕ_k ∈ D(A) such that
\[ \frac{\|\varphi_k\|}{\|(\lambda - A)\varphi_k\|} \to \infty. \]
So we can take ψ_k = ϕ_k/‖ϕ_k‖, and eq. (2.1.1) holds (which contradicts our assumption).

From eq. (2.1.2) we have that (λ − A) is injective. If it were not injective, then there would be ϕ₁ ≠ ϕ₂ such that
\[ (\lambda - A)(\varphi_1 - \varphi_2) = 0, \]
which contradicts eq. (2.1.2).

Furthermore, consider a sequence ψ_n ∈ R(λ − A) such that ψ_n → ψ ∈ H. Since (λ − A) is injective, we call ϕ_n the unique solution of (λ − A)ϕ_n = ψ_n. Now, eq. (2.1.2) shows that the sequence ϕ_n is a Cauchy sequence; in fact,
\[ \|\varphi_k - \varphi_n\| \le C\,\|\psi_k - \psi_n\| \to 0. \]
Thus ϕ_n → ϕ ∈ H. So for f ∈ D(A) we have
\[ (\varphi, (\lambda - A)f) = \lim_n(\varphi_n, (\lambda - A)f) = \lim_n(\psi_n, f) = (\psi, f). \]
Therefore ϕ ∈ D(A) and (λ − A)ϕ = ψ. We conclude that R(λ − A) is closed, and by Lemma 2.1.1 it is a Hilbert Space.

Let f be an arbitrary vector of H and w ∈ R(λ − A). We take v to be the (unique) solution of (λ − A)v = w, and construct the linear functional F on R(λ − A) by defining
\[ Fw = (v, f). \]
This is a bounded linear functional, since by eq. (2.1.2) we have
\[ |Fw| \le \|v\|\,\|f\| \le C\,\|f\|\,\|w\|. \]
So by the Riesz representation lemma, there is a u ∈ R(λ − A) such that Fw = (u, w) for all w ∈ R(λ − A). Thus, taking v ∈ D(A), we find
\[ ((\lambda - A)v, u) = (f, v). \]
The self-adjointness of A implies that u ∈ D(A) and (λ − A)u = f. Therefore f ∈ R(λ − A) and H = R(λ − A). So (λ − A) is one-to-one (injective) and surjective, therefore invertible (with bounded inverse, by eq. (2.1.2)), which implies λ ∉ σ(A).
As a consequence we have

Corollary 2.1.3. Self-adjoint operators are closed.

Proof. We leave this proof as an exercise.

Corollary 2.1.4. If A is a closed operator and ‖v‖ ≤ C‖Av‖, then A is injective and R(A) is closed.

Proof. Set λ = 0 in the proof of Proposition 2.5.
There are some special elements of σ(A) called eigenvalues.

Definition 2.C. λ is said to be an eigenvalue of A if there is a non-vanishing solution of Aψ = λψ. When this non-vanishing solution ψ exists, we call it an eigenvector of the operator A corresponding to the eigenvalue λ.

Note 2.a. The prefix eigen- is adopted from the German word eigen for "own", "unique to", "peculiar to", or "belonging to" in the sense of "idiosyncratic" in relation to the originating object (operator). Therefore it is usual to carry it to more specific objects, for example: eigenfunctions in L² spaces, eigenstates in quantum mechanics, and so on.

With this definition it is clear that

Corollary 2.1.5. If λ is an eigenvalue of A then λ ∈ σ(A).

Proof. It is straightforward: if λ is an eigenvalue then there is ψ ≠ 0 such that (λ − A)ψ = 0. So (λ − A) is not injective, and λ ∈ σ(A).
2.2 Finding the spectrum
We now want to apply these ideas to find the spectrum of some operators of interest. We consider as an initial example the position operator, discussed in eq. (1.7.21). The easiest elements are the eigenvalues. Thus we should look for solutions of
\[ \hat X\psi = \lambda\psi \ \Rightarrow\ x\,\psi(x) = \lambda\,\psi(x). \]
Thus there are no eigenvalues, since any solution of the equation above satisfies ψ(x) = 0 for x ≠ λ. That means ψ(x) = 0 almost everywhere, and to us this means ψ = 0! So we need to use Proposition 2.5. In order to do this, it is worth having some sort of motivation to construct the sequence in the statement of Proposition 2.5. An object that can satisfy ψ = 0 for x ≠ λ is the (infamous) Dirac delta function. So we can think of a sequence such that ψ_k(x) → δ(x − λ) with ‖ψ_k‖ = 1. Such sequences are sometimes called delta sequences. For example, we can take
\[ \psi_k(x) = \frac{1}{N(k)}\,e^{-\frac{k^2(x-\lambda)^2}{2}}, \]
where N(k) is such that ‖ψ_k‖ = 1, which means N²(k) = √π/k. Clearly,
\[ \|(\lambda - \hat X)\psi_k\|^2 = \frac{1}{N^2(k)\,k^3}\int_R y^2 e^{-y^2}\,dy \to 0 \]
as k → ∞. So for any real λ we have that λ ∈ σ(X̂). That means the spectrum of the position operator is the whole real line.

The argument for the momentum operator is similar, but we shall use the Fourier transform and the identity of eq. (1.1.6). We leave this as an exercise. You should find that the spectrum of the momentum operator is also the whole real line.
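The delta sequence argument can be checked numerically. The sketch below (ours; the value of λ and the grid are arbitrary) normalizes the Gaussians ψ_k above and verifies that ‖(λ − X̂)ψ_k‖ = 1/(√2 k) → 0, which is exactly the criterion of Proposition 2.5.

import numpy as np

lam = 0.7                                          # any real number (arbitrary choice)
x = np.linspace(lam - 10, lam + 10, 400001); dx = x[1] - x[0]

for k in (1.0, 4.0, 16.0, 64.0):
    psi = np.exp(-k**2*(x - lam)**2/2)
    psi = psi / np.sqrt(np.sum(psi**2)*dx)         # enforce ||psi_k|| = 1
    res = np.sqrt(np.sum(((lam - x)*psi)**2)*dx)   # ||(lambda - X) psi_k||
    print(f"k = {k:5.1f}:  ||(lam - X) psi_k|| = {res:.5f}   (1/(k sqrt 2) = {1/(k*np.sqrt(2)):.5f})")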
2.2.1 The spectrum of the Hamiltonian
We are now interested in the spectrum of the energy operator, or the Hamiltonian,
\[ H = \frac{P^2}{2m} + V(\hat X). \]
To consider this operator we will need the following lemma.
Lemma 2.2.1. If A is a self-adjoint operator, then A² is a self-adjoint operator.

Proof. We have that A² + 1 = (A + i)(A − i); here we stress that the domain of both operators is D(A²). We claim that if A is self-adjoint, then for each f ∈ H there is u ∈ D(A²) such that (A² + 1)u = f. In fact, since ±i ∈ ρ(A), for each f there are v± ∈ D(A) such that (A − i)v₋ = f and (A + i)v₊ = v₋. Since both v± are in D(A), the same is true for Av₊ = v₋ − i v₊; hence v₊ ∈ D(A²) and (A² + 1)v₊ = f. Therefore R(A² + 1) is the whole space, and A² is self-adjoint.

This shows that
\[ H_0 = \frac{P^2}{2m} \]
is self-adjoint. H₀ is the (Hamiltonian) energy operator of a free particle, that is, a particle that is not subject to an external potential.
To study its spectrum, we write
\[ (\lambda - H_0) = \frac{1}{2m}\bigl(\sqrt{2m\lambda} - P\bigr)\bigl(\sqrt{2m\lambda} + P\bigr), \]
and observe that, since σ(P) = R, √(2mλ) ∈ ρ(P) whenever √(2mλ) is not real. So for λ < 0 both factors are invertible, and we have that λ ∈ ρ(H₀). Now consider the case λ > 0. We write λ = ℏ²k²/(2m), and again we search for eigenvalues:
\[ H_0\psi = \frac{\hbar^2 k^2}{2m}\,\psi
\ \Rightarrow\ \frac{d^2\psi}{dx^2} = -k^2\,\psi
\ \Rightarrow\ \psi(x) = c_+\,e^{+ikx} + c_-\,e^{-ikx}. \]
The only way this ψ(x) can lie in L² is c₊ = c₋ = 0! Thus, again, there are no eigenvalues! Again we can employ Proposition 2.5 to find the spectral points. Not surprisingly, the method is quite analogous to the one you probably used to find the spectrum of the momentum operator. Let
\[ \psi_n(x) = \frac{1}{\sqrt n}\,\varphi\Bigl(\frac{x}{n}\Bigr)e^{ikx}, \]
where φ(x) = (1/N) e^{−x²} and N is such that
\[ \int_R \varphi(x)^2\,dx = 1. \]
So
\[ \|\psi_n\|^2 = \frac{1}{n}\int_R \varphi\Bigl(\frac{x}{n}\Bigr)^2 dx = \int_R |\varphi(y)|^2\,dy = 1. \]
Moreover,
\[ \frac{d}{dx}\psi_n(x) = ik\,\psi_n(x) + \frac{1}{n^{3/2}}\,\varphi'\Bigl(\frac{x}{n}\Bigr)e^{ikx}, \]
\[ \frac{d^2}{dx^2}\psi_n(x) = -k^2\,\psi_n(x) + \frac{2ik}{n^{3/2}}\,\varphi'\Bigl(\frac{x}{n}\Bigr)e^{ikx} + \frac{1}{n^{5/2}}\,\varphi''\Bigl(\frac{x}{n}\Bigr)e^{ikx}. \]
So we have that
\[ \|\psi_n'' + k^2\psi_n\| \le \frac{2k}{n^{3/2}}\Bigl\|\varphi'\Bigl(\frac{x}{n}\Bigr)\Bigr\| + \frac{1}{n^{5/2}}\Bigl\|\varphi''\Bigl(\frac{x}{n}\Bigr)\Bigr\|
 = \frac{2k}{n}\,\|\varphi'\| + \frac{1}{n^{2}}\,\|\varphi''\|, \]
and therefore, as n → ∞, ‖ψ_n'' + k²ψ_n‖ → 0, i.e. ‖(λ − H₀)ψ_n‖ → 0. Thus λ > 0 lies in σ(H₀). Therefore the positive real axis is contained in the energy spectrum of the free particle. Since the spectrum is a closed set, 0 is also an element of σ(H₀). This proves the

Proposition 2.6.
\[ \sigma(H_0) = [0, +\infty). \]
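Proposition 2.6 can be made plausible on a computer by discretizing H₀ with finite differences on a large box (a sketch of ours, with ℏ = m = 1; the box size and grid are arbitrary choices): all eigenvalues of the discretized operator are non-negative, and the low-lying ones crowd down towards 0 as the box grows.

import numpy as np

L, N = 100.0, 1000                       # box size and number of grid points (arbitrary)
dx = L / N
# H0 = -(1/2) d^2/dx^2 with hbar = m = 1, second-order finite differences, Dirichlet box
H0 = (np.diag(np.full(N, 1.0/dx**2))
      + np.diag(np.full(N - 1, -0.5/dx**2), 1)
      + np.diag(np.full(N - 1, -0.5/dx**2), -1))

E = np.linalg.eigvalsh(H0)
print("smallest eigenvalues:", np.round(E[:6], 6))      # all >= 0 and crowding towards 0
print("any negative eigenvalue?", bool(np.any(E < -1e-12)))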
To consider the case of a particle subject to a potential, we must consider the operator V(X̂). Again, the simplest and largest domain D(V) of V is the subset of H consisting of those ψ such that Vψ ∈ H. Once we fix this domain, we know the domain of H, which is given by D(H₀) ∩ D(V). And here comes the problem: is H self-adjoint?

In fact, there are many examples showing that, with no further restrictions on V, it is not true that (in general) H is self-adjoint. At first this seems to be a terrible failure of our theory. But actually this only raises questions, and should be taken as an opportunity for research. This is one of the central problems of modern mathematical physics; one could fill libraries with the mathematics generated trying to answer this question. Mathematically, this is the question of when the sum of the Laplace operator and a multiplication operator on L² is self-adjoint (more precisely, essentially self-adjoint). A more particular and immediate question is the following: is there some potential such that H is self-adjoint?
(Un)Fortunately, the proposition below describes a large class of potentials such that H is self-adjoint.

Proposition 2.7. If there are constants a < 1 and b such that, for all ψ ∈ D(H₀) ∩ D(V),
\[ \|V\psi\| \le a\,\|H_0\psi\| + b\,\|\psi\|, \]
then H = H₀ + V is self-adjoint.
To prove this theorem we use the two following results of general interest.

Lemma 2.2.2. If B is a bounded linear operator on H and ‖B‖ < 1, then there is a bounded linear operator N on H such that
\[ N(1 - B) = (1 - B)N = 1, \qquad \|N\| \le (1 - \|B\|)^{-1}. \]

Proof. The operator N is the inverse of the operator (1 − B), and is given by the Neumann series
\[ N = (1 - B)^{-1} = 1 + B + B^2 + B^3 + \cdots \]

Lemma 2.2.3. Let A be a densely defined Hermitian operator on H. Suppose there is a complex number z such that R(z − A) = R(z* − A) = H. Then A is self-adjoint.

Proof. The hypothesis implies that (1 + A²) is onto, and therefore A is self-adjoint. We leave the details as an exercise.
Now, back to Proposition 2.7.

Proof. It follows from Lemmas 2.2.2 and 2.2.3. In fact, for λ ∈ R and ψ ∈ D(H₀) ∩ D(V), we have
\[ \|(H_0 - i\lambda)\psi\|^2 = \|H_0\psi\|^2 + \lambda^2\|\psi\|^2. \]
The self-adjointness of H₀ and Proposition 2.1 imply that if λ ≠ 0, then for every ϕ ∈ H we can find ψ ∈ D(H₀) such that (H₀ − iλ)ψ = ϕ. Thus
\[ \|\varphi\|^2 = \|H_0(H_0 - i\lambda)^{-1}\varphi\|^2 + \lambda^2\|(H_0 - i\lambda)^{-1}\varphi\|^2. \]
We can see that
\[ \|H_0(H_0 - i\lambda)^{-1}\varphi\| \le \|\varphi\|, \qquad |\lambda|\,\|(H_0 - i\lambda)^{-1}\varphi\| \le \|\varphi\|. \]
If V satisfies the hypotheses of Proposition 2.7, then
\[ \|V(H_0 - i\lambda)^{-1}\varphi\| \le a\,\|H_0(H_0 - i\lambda)^{-1}\varphi\| + b\,\|(H_0 - i\lambda)^{-1}\varphi\| \le \Bigl(a + \frac{b}{|\lambda|}\Bigr)\|\varphi\|. \]
Since a < 1, we can find a sufficiently large |λ| such that
\[ \|V(H_0 - i\lambda)^{-1}\| \le a + \frac{b}{|\lambda|} < 1. \]
So we can write
\[ H - i\lambda = \bigl(1 + V(H_0 - i\lambda)^{-1}\bigr)(H_0 - i\lambda). \]
By Lemma 2.2.2 the first factor is invertible and the second factor is onto, so the right hand side is onto; hence R(H − iλ) = H for |λ| sufficiently large, and the same argument with −λ gives R(H + iλ) = H. The self-adjointness of H then follows from Lemma 2.2.3.
Proposition 2.7 gives a useful way to answer the question of whether H is self-adjoint. To use it, we must know when the potential satisfies the hypotheses of the proposition. This can be done with the following criterion, which we state without demonstration.

Theorem. [A Criterion for Self-Adjointness] The following statements are equivalent:

i. D(H₀) ⊂ D(V);

ii. ‖Vψ‖² ≤ C(‖H₀ψ‖² + ‖ψ‖²);

iii. C₀ = sup_y ∫_y^{y+1} |V(x)|² dx < +∞;

iv. for every ε > 0, there is a constant K such that ‖Vψ‖² ≤ ε‖H₀ψ‖² + K‖ψ‖²;

v. for every ε > 0, there is a constant C such that ‖Vψ‖ ≤ ε‖H₀ψ‖ + C‖ψ‖.

So if any of these statements holds, then H is self-adjoint. In any case, the spectrum of H will depend in a fundamental way on the potential V.
2.3 Exercises
1. Prove Corollary 2.1.3
2. Show that the spectrum of the momentum operator is the real line.
3. Give a detailed proof of Lemma 2.2.3.
4. Show that λ < 0 lies in ρ(H0 ).
5. Prove that 0 is not an eigenvalue of H0 .
Chapter 3
Quantum Dynamics
One important point we should stress is that time is just a parameter in quantum mechanics; that is, it is not an operator. That means time is not an observable in the language of the previous chapters, so it makes no sense to treat time in the same way as we treat the position or momentum operators. In this chapter we analyse how a quantum state ψ changes in time, and how the probabilities and expected values depend on the time parameter.
3.1 Time evolution and Schrödinger Equation
So we need to look for a time evolution operator U such that
$$U(t - t_0)\,\psi(t_0) = \psi(t).$$
We expect time to be a continuous parameter, so that for a quantum state ψ
$$\lim_{t\to t_0}\psi(t) = \psi(t_0).$$
Therefore the time evolution operator U must be continuous in t and
$$\lim_{t\to t_0} U(t - t_0) = U(0) = I.$$
Also, a quantum state remains a quantum state as time goes by, so
$$\|\psi(t)\|^2 = (\psi(t),\psi(t)) = (U(t)\psi(0), U(t)\psi(0)) = \|U(t)\psi(0)\|^2 = \|\psi(0)\|^2.$$
Therefore
$$U^\dagger(t)\,U(t) = I. \qquad (3.1.1)$$
An operator obeying eq. (3.1.1) is called a unitary operator. Another necessary feature is that if we are interested in obtaining the time evolution from t0 to t2, then we can obtain the same result by considering the time evolution first from t0 to t1 and then from t1 to t2,
$$\psi(t_2) = U(t_2 - t_1)\psi(t_1) = U(t_2 - t_1)\,U(t_1 - t_0)\,\psi(t_0).$$
Therefore, for t2 > t1 > t0,
$$U(t_2 - t_0) = U(t_2 - t_1)\,U(t_1 - t_0).$$
A family of operators with this composition property, which is in addition continuous in the sense above, is said to form a strongly continuous one-parameter group.
Another physical requirement concerns energy conservation: if the potential V does not explicitly depend on time, then the total energy must be conserved. In our quantum-mechanical setting this means that, for a given state ψ, the mathematical expectation of the energy and the probability that the energy lies in an interval I do not depend on time.
Note 3.a. A final requirement is the inertia principle: if the potential V is invariant under translations, then the momentum must be conserved. In our quantum-mechanical setting this means that, for a given state ψ, the mathematical expectation of the momentum and the probability that the momentum lies in an interval I do not depend on time.
We summarize our demands in the following postulate.
Postulate 4. The time evolution of a 1-D quantum system from the time 0 to a time t is given by an operator U(t) such that
i. U(t) is continuous in t;
ii. U(t1 + t2) = U(t1)U(t2);
iii. U†(t)U(t) = I;
iv. if the potential does not depend explicitly on t, then
$$(\psi,\, U^\dagger H\,U\,\psi) = (\psi,\, H\,\psi).$$
Now we can ask what U looks like.
Lemma 3.1.1. Let A be a self-adjoint operator and let
$$f_t(A) = \exp(-itA) = \sum_{n=0}^{\infty}\frac{(-it)^n}{n!}\,A^n;$$
then D(f_t(A)) = L² and the following holds:
i. f_{t_1+t_2}(A) = f_{t_2}(A)\,f_{t_1}(A);
ii. f_t(A)ψ → ψ as t → 0;
iii. (f_t(A)ψ − ψ)/t → −iAψ as t → 0, for all ψ ∈ D(A);
iv. if lim_{t→0}(f_t(A)ψ − ψ)/t exists, then ψ ∈ D(A).
Proof. Follows directly from the spectral theorem.
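As a concrete finite-dimensional illustration (not a proof, and with an arbitrarily chosen Hermitian matrix standing in for A), one can check properties (i)-(iii) numerically:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                      # a Hermitian "self-adjoint operator"

def f(t):
    return expm(-1j * t * A)                  # f_t(A) = exp(-itA)

t1, t2 = 0.7, 1.3
print(np.allclose(f(t1) @ f(t1).conj().T, np.eye(4)))   # unitarity
print(np.allclose(f(t1 + t2), f(t2) @ f(t1)))           # property (i)

psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
eps = 1e-6
print(np.allclose((f(eps) @ psi - psi) / eps, -1j * A @ psi, atol=1e-4))  # property (iii)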
Our next result (also known as Stone's theorem on one-parameter unitary groups) establishes a one-to-one correspondence between self-adjoint operators on a Hilbert space H and strongly continuous one-parameter families of unitary operators. To do this we need the converse of Lemma 3.1.1; this result is due to Stone and we state it here without proof (for a proof see [9]).
Theorem. [Stone's theorem] Let (f(t))_{t∈ℝ} be a strongly continuous one-parameter unitary group on a Hilbert space H. Then there exists a unique self-adjoint operator A on H such that f(t) = e^{iAt}.
We know that the time evolution operator U(t) must have the form e^{iAt} for some self-adjoint operator A. The energy conservation requirement will tell us what A is.
Lemma 3.1.2. If the potential does not explicitly depend on t, then H = U†(t)HU(t).
Proof. From Stone's theorem, U(t) = e^{iAt} where A is a self-adjoint operator. Now let H′ = U†(t)HU(t); it is easy to see that the self-adjointness of H implies that H′ is self-adjoint. The energy conservation requirement of Postulate 4 implies that the expected value Ē does not depend on t, that is,
$$\bar{E}(t) = (\psi(t), H\psi(t)) = (U(t)\psi(0), H\,U(t)\psi(0)) = (\psi(0), U^\dagger(t)\,H\,U(t)\psi(0)) = (\psi(0), H'\psi(0))$$
must coincide with
$$\bar{E}(0) = (\psi(0), H\psi(0)).$$
Thus
$$(U(t)\psi, H\,U(t)\psi) = (\psi, U^\dagger(t) H U(t)\psi) = (\psi, H'\psi) = (\psi, H\psi),$$
so for all ψ ∈ D(H) we find that (ψ, (H′ − H)ψ) = 0, and therefore H′ = H.
In fact, taking ψ(0) as an eigenstate of the Hamiltonian,
Hψ(0) = Eψ(0),
then as we let it evolve in time,
U(t) Hψ(0) = E U(t)ψ(0) = E ψ(t),
it should still remain an eigenstate of the system at a later time, if the energy
is to be conserved,
Hψ(t) = Eψ(t).
Comparing these expressions we get
U(t) Hψ(0) = Hψ(t) = H U(t)ψ(0),
that is, the object U(t) must commute with the Hamiltonian,
[H, U(t)] = H U(t) − U(t) H = 0.
Having in mind all these restrictions one has to impose on the evolution operator, it must be of the general form
$$U(t) = e^{-\frac{i}{\hbar}A\,t},$$
with A being a symmetry of the problem,
$$[H, A] = HA - AH = 0.$$
The latter can, in principle, be considered a function of the Hamiltonian H and be expanded as
$$A = A(H) = \sum_n c_n H^n.$$
For simplicity, we make use of the correspondence principle and argue that, similarly to what happens in classical mechanics, where the evolution is governed by the Hamiltonian function itself, the quantum evolution operator can be taken as follows.
Proposition 3.1. $U(t) = e^{-\frac{i}{\hbar}Ht}$, where H is the Hamiltonian.
It acts as a group element on the initial state and the Hamiltonian operator belongs to an algebra. Once the Hamiltonian is determined, from a
physical point of view, in terms of relevant operators - such as those representing the observables for position and momentum, X and P , for instance
- there is still freedom in the choice of representation. The evolution operator itself satisfies
$$i\hbar\,\frac{\partial}{\partial t}U(t) = H\,U(t).$$
On the other hand,
$$i\hbar\,\frac{\partial}{\partial t}\psi(t) = i\hbar\,\frac{\partial}{\partial t}\big(U(t)\psi(0)\big) = i\hbar\,\frac{\partial U(t)}{\partial t}\,\psi(0) = H\,U(t)\,\psi(0),$$
so a quantum state evolves deterministically according to the linear Schrödinger equation
$$H\psi = i\hbar\,\frac{\partial\psi}{\partial t}. \qquad (3.1.2)$$
If the Hamiltonian is time-dependent, one has to consider the following formal solution in terms of a time-ordered exponential,
$$\psi(t) = \mathcal{T}\exp\!\left(-\frac{i}{\hbar}\int_0^t H(t')\,dt'\right)\psi(0)
= \left[1 - \frac{i}{\hbar}\int_0^t H(t')\,dt' - \frac{1}{\hbar^2}\int_0^t dt'\,H(t')\int_0^{t'} dt''\,H(t'') + \cdots\right]\psi(0).$$
In situations where the Hamiltonian is time-independent, this reduces to the simpler expression discussed above, so that the state evolves according to
$$\psi(t) = e^{-\frac{i}{\hbar}Ht}\,\psi(0).$$
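In practice the time-ordered exponential is often approximated by a product of short-time propagators. The following sketch (an illustration only; the two-level Hamiltonian, the midpoint rule and the step count are assumptions, not part of the notes) shows the idea:

import numpy as np
from scipy.linalg import expm

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):                                    # an illustrative time-dependent Hamiltonian
    return 0.5 * sz + 0.3 * np.cos(2.0 * t) * sx

def propagate(psi0, t, steps=2000):
    dt = t / steps
    psi = psi0.astype(complex)
    for k in range(steps):
        tk = (k + 0.5) * dt                  # midpoint of each slice
        psi = expm(-1j * H(tk) * dt / hbar) @ psi   # short-time propagator, applied in time order
    return psi

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = propagate(psi0, t=5.0)
print(np.vdot(psi_t, psi_t).real)            # stays equal to 1: the evolution is unitary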
Note 3.b. It is worth noting that any quantum-mechanical object is completely characterized by its state function, and that the time evolution of the state function is completely deterministic. Everything (the system, the equipment, the environment and the observer) is part of a quite complex state vector of the "universe". Measurements with different results are part of state functions at different points of spacetime; furthermore, a measurement is a complicated process involving interactions between the system and the equipment, and the equipment alone has something of the order of $10^{23}$ degrees of freedom! That means we neither know nor are able to compute the states of the equipment that we use to make the measurement. That is why some people say that quantum mechanics is deterministic but also probabilistic: one can deterministically calculate the probability of the outcome of an experiment. This distinguishes it from non-deterministic (i.e. stochastic) systems, where one does not generally have "one" solution but an entire family of solutions depending on random variables.
Historically, two equivalent representations were introduced at the dawn of the quantum theory. The first was Heisenberg's attempt to reproduce experiments with the introduction of infinite-dimensional matrices to describe coordinates, momenta and energies. Independently, Schrödinger proposed a formalism based on wave equations and differential operators to represent some observables. Later it became clear that Schrödinger's differential operators correspond to a neat way of representing Heisenberg's more cumbersome infinite-dimensional matrices.
There is also a difference in the perspective between both approaches,
which will become more evident in what follows, making that of Schrödinger
more closely related to equation (3.1.2). We start by presenting this formulation.
Note 3.c. In Heisenberg’s formalism the observables vary in time whereas
the vectors remain fixed.
3.2 Applications to Two-Level Systems
Now that we have set out the basic framework to describe quantum phenomena and explored some relevant consequences of the theory, it is time to
discuss in some detail a number of important and elementary physical systems. From a mathematical point of view, it is simpler to tackle a quantum
problem which admits a finite dimensional representation. The first example
to investigate is, thus, a two level problem for which the quantum state can
be in only two states.
A two-dimensional Hilbert space can be used, for example, to reproduce the behaviour of a quantum spin 1/2, which can be in the states up and/or down. Nonetheless, it can also be used to describe, approximately, a larger system for which there are two states that are almost decoupled from the remaining ones.
For a finite-dimensional vector characterizing a particular quantum state, the normalization of the probability is modified according to
$$\int_{\mathbb{R}} |\psi(x,t)|^2\,dx = 1 \;\longrightarrow\; \sum_i |\psi_i(t)|^2 = 1.$$
A suitably normalized basis to describe vectors in this Hilbert space is, for example,
$$\psi_\uparrow = \begin{pmatrix}1\\0\end{pmatrix}, \qquad \psi_\downarrow = \begin{pmatrix}0\\1\end{pmatrix},$$
with $\psi_{\uparrow,\downarrow}^\dagger\psi_{\uparrow,\downarrow} = 1$ and $\psi_{\downarrow,\uparrow}^\dagger\psi_{\uparrow,\downarrow} = 0$. If, in this orthonormal basis, the energy observable, specified by a Hamiltonian operator, has the general Hermitian form, (φ, Hψ) = (Hφ, ψ),
$$H = \begin{pmatrix}\alpha & \beta\,e^{i\delta}\\ \beta\,e^{-i\delta} & \gamma\end{pmatrix},$$
then the evolution of a given state ψ is given by the Schrödinger equation $i\hbar\,\partial_t\psi = H\psi$. Taking the following combinations,
$$\psi_+ = \cos\tfrac{\theta}{2}\,e^{i\frac{\delta}{2}}\,\psi_\uparrow + \sin\tfrac{\theta}{2}\,e^{-i\frac{\delta}{2}}\,\psi_\downarrow,\qquad
\psi_- = \sin\tfrac{\theta}{2}\,e^{i\frac{\delta}{2}}\,\psi_\uparrow - \cos\tfrac{\theta}{2}\,e^{-i\frac{\delta}{2}}\,\psi_\downarrow,$$
with the convenient reparametrization $\theta = \arctan\frac{2\beta}{\alpha-\gamma}$, the equation of motion can be written as
$$i\hbar\,\frac{\partial}{\partial t}\psi_\pm = E_\pm\,\psi_\pm.$$
The advantage of introducing these states, called eigenvectors, is that they evolve in a very simple way,
$$\psi_\pm(t) = e^{-\frac{i}{\hbar}E_\pm t}\,\psi_\pm(0),$$
governed by the associated eigenvalues E±,
$$E_\pm = \frac{\alpha+\gamma}{2} \pm \frac{\alpha-\gamma}{2}\,\sec\theta.$$
Therefore, if one performs a measurement the particle can only be found in either of the two possible states, ψ+ with energy E+ or ψ− with energy E−. But if one does not measure its state, it can be in a linear combination of both eigenstates. The particle can evolve from one state to another, and we can define the transition frequency between the states as
$$\omega = \frac{E_+ - E_-}{\hbar}.$$
If we let the up and down states evolve, it is convenient to express them in terms of the eigenvectors,
$$\psi_\uparrow = e^{-i\frac{\delta}{2}}\left(\cos\tfrac{\theta}{2}\,\psi_+ + \sin\tfrac{\theta}{2}\,\psi_-\right),\qquad
\psi_\downarrow = e^{+i\frac{\delta}{2}}\left(\sin\tfrac{\theta}{2}\,\psi_+ - \cos\tfrac{\theta}{2}\,\psi_-\right),$$
so that the action of the evolution operator becomes almost trivial,
$$U(t)\,\psi_\uparrow = e^{-i\frac{\delta}{2}}\left(\cos\tfrac{\theta}{2}\,e^{-\frac{i}{\hbar}E_+t}\,\psi_+ + \sin\tfrac{\theta}{2}\,e^{-\frac{i}{\hbar}E_-t}\,\psi_-\right),$$
$$U(t)\,\psi_\downarrow = e^{+i\frac{\delta}{2}}\left(\sin\tfrac{\theta}{2}\,e^{-\frac{i}{\hbar}E_+t}\,\psi_+ - \cos\tfrac{\theta}{2}\,e^{-\frac{i}{\hbar}E_-t}\,\psi_-\right).$$
Now we are in a position to compute the probability of an initial state ψ↑ being found in the state ψ+, or ψ−, after an interval t,
$$\left|\psi_+^\dagger U(t)\,\psi_\uparrow\right|^2 = \left|\cos\tfrac{\theta}{2}\,e^{-i\frac{\delta}{2}}\,e^{-\frac{i}{\hbar}E_+t}\right|^2 = \cos^2\tfrac{\theta}{2},\qquad
\left|\psi_-^\dagger U(t)\,\psi_\uparrow\right|^2 = \left|\sin\tfrac{\theta}{2}\,e^{-i\frac{\delta}{2}}\,e^{-\frac{i}{\hbar}E_-t}\right|^2 = \sin^2\tfrac{\theta}{2},$$
respectively, as well as the probability of an initial state ψ↓ transitioning into the state ψ+, or ψ−,
$$\left|\psi_+^\dagger U(t)\,\psi_\downarrow\right|^2 = \left|\sin\tfrac{\theta}{2}\,e^{+i\frac{\delta}{2}}\,e^{-\frac{i}{\hbar}E_+t}\right|^2 = \sin^2\tfrac{\theta}{2},\qquad
\left|\psi_-^\dagger U(t)\,\psi_\downarrow\right|^2 = \left|\cos\tfrac{\theta}{2}\,e^{+i\frac{\delta}{2}}\,e^{-\frac{i}{\hbar}E_-t}\right|^2 = \cos^2\tfrac{\theta}{2}.$$
In both situations, once we have prepared an initial state, be it an up or a down state, and let it evolve, any later energy measurement can only produce the values E±, associated with the states ψ±, respectively. Thus the probabilities of being in either of these eigenstates sum to unity, as is clear from the expressions above. Note that the probability of having an up or a down state transitioning to an eigenstate does not vary with time and depends only on how close the up and down states are to being eigenstates, or ultimately on α, β, γ.
Moreover, we can calculate the transition amplitudes for starting with suitably prepared up and down states, letting them evolve under the Hamiltonian operator, and measuring, after some time has elapsed, whether the particle is again in an up or a down state. For the initial state ψ↑ one finds
$$\psi_\uparrow^\dagger\,U(t)\,\psi_\uparrow = \left(\cos^2\tfrac{\theta}{2}\,e^{-i\frac{\omega}{2}t} + \sin^2\tfrac{\theta}{2}\,e^{+i\frac{\omega}{2}t}\right)e^{-\frac{i}{2\hbar}(E_+ + E_-)t},$$
$$\psi_\downarrow^\dagger\,U(t)\,\psi_\uparrow = -2i\,e^{-i\delta}\,\sin\tfrac{\theta}{2}\cos\tfrac{\theta}{2}\,\sin\tfrac{\omega t}{2}\;e^{-\frac{i}{2\hbar}(E_+ + E_-)t},$$
with analogous expressions when the initial state is ψ↓. These show that the probability of being in the initial state after a while varies with time,
$$P_{\uparrow\uparrow}(t) = 1 - \sin^2\theta\,\sin^2\frac{\omega t}{2} \;\in\; [0,1],$$
and similarly for the probability of transmutation between the states,
$$P_{\uparrow\downarrow}(t) = \sin^2\theta\,\sin^2\frac{\omega t}{2} \;\in\; [0,1].$$
As expected, since there is no further degree of freedom in the system, the total probability is conserved,
$$P_{\uparrow\uparrow}(t) + P_{\uparrow\downarrow}(t) = 1,$$
at any instant, but the interference pattern between initial and final states varies harmonically with time.
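For a concrete check, the following sketch (with arbitrarily chosen values of α, β, γ, δ, which are assumptions and not part of the notes) evolves ψ↑ numerically and compares the transition probability with the formula P↑↓(t) = sin²θ sin²(ωt/2):

import numpy as np
from scipy.linalg import expm

hbar = 1.0
alpha, beta, gamma, delta = 1.0, 0.4, 0.2, 0.3     # illustrative parameters
H = np.array([[alpha, beta * np.exp(1j * delta)],
              [beta * np.exp(-1j * delta), gamma]])

theta = np.arctan2(2 * beta, alpha - gamma)
Em, Ep = np.linalg.eigvalsh(H)                     # E_- <= E_+
omega = (Ep - Em) / hbar

psi_up = np.array([1.0, 0.0], dtype=complex)
psi_down = np.array([0.0, 1.0], dtype=complex)

for t in [0.5, 1.0, 2.5]:
    psi_t = expm(-1j * H * t / hbar) @ psi_up
    P_numeric = abs(np.vdot(psi_down, psi_t)) ** 2
    P_formula = np.sin(theta) ** 2 * np.sin(omega * t / 2) ** 2
    print(t, P_numeric, P_formula)                 # the two columns agree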
3.3 Schrödinger's wave equation
In the previous section we discussed one of the simplest quantum problems, for which the states are described by vectors living in a two-dimensional Hilbert space. Since the particle can be in only two linearly independent states, the physical observables correspond to 2 × 2 matrices. If, on the other hand, the particle has infinitely many degrees of freedom, matrix representations are not the most convenient and vectors are replaced by functions. For particles restricted to move in one dimension, the equation governing the quantum phenomena is the Schrödinger equation given in eq. (3.1.2). Using the momentum operator of chapter 1, this can be written as a linear partial differential equation,
$$i\hbar\,\frac{\partial}{\partial t}\psi(x,t) = \left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x,t)\right]\psi(x,t),$$
which simply corresponds to a representation of the fundamental equation (3.1.2) in terms of space-time coordinates. The problem of determining the evolution of the probability distribution for a quantum particle is equivalent to solving a PDE with suitable boundary conditions. This is a fairly nontrivial task unless the interaction has a simple form.
Our first result in this context is the following.
Lemma 3.3.1. If the potential V is time-independent, the state function evolves as an (in general infinite-dimensional) linear combination of solutions of the form
$$\psi(x,t) = \varphi(x)\,e^{-i\frac{E}{\hbar}t},$$
where ϕ and E are given by
$$-\frac{\hbar^2}{2m}\frac{\partial^2\varphi}{\partial x^2} + V(x)\varphi = E\varphi.$$
Proof. If the potential is time-independent, V(x,t) = V(x), we can separate variables as
$$\psi(x,t) = \varphi(x)\,\phi(t),$$
so that we have
$$-\frac{\hbar^2}{2m}\frac{1}{\varphi}\frac{\partial^2\varphi}{\partial x^2} + V(x) = i\hbar\,\frac{1}{\phi}\frac{\partial\phi}{\partial t} = E.$$
The time dependence can be obtained immediately,
$$\phi(t) = \phi(0)\,e^{-i\frac{E}{\hbar}t},$$
and the complete solution has the general form
$$\psi(x,t) = \varphi(x)\,e^{-i\frac{E}{\hbar}t}.$$
Note 3.d. We can see that the probability amplitude is not affected by the flow of time, since |ψ(x, t)| = |ϕ(x)|.
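Stationary states of this kind are often computed numerically. The following sketch (assumptions: ℏ = m = 1, a uniform grid, Dirichlet boundary conditions, and a harmonic test potential; none of this is part of the notes) discretizes the stationary equation as a tridiagonal matrix eigenproblem:

import numpy as np

hbar = m = 1.0
L, n = 10.0, 1000
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

V = 0.5 * x**2                                       # harmonic well as a test case
main = hbar**2 / (m * dx**2) + V                     # diagonal of -(hbar^2/2m) d^2/dx^2 + V
off = -hbar**2 / (2 * m * dx**2) * np.ones(n - 1)    # off-diagonal of the second derivative
Hmat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(Hmat)[:4]
print(E)                 # close to the quantized values 0.5, 1.5, 2.5, 3.5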
The solution of Schrödinger's time-independent second-order differential equation,
$$-\frac{\hbar^2}{2m}\frac{\partial^2\varphi}{\partial x^2} + V(x)\varphi = E\varphi, \qquad (3.3.3)$$
can be expanded in terms of two linearly independent functions,
$$\varphi(x) = c_1\varphi_1(x) + c_2\varphi_2(x), \qquad (3.3.4)$$
complemented by appropriate boundary conditions, and the following requirements on the wave function:
• ϕ(x) and ϕ′ (x) must be finite
• ϕ(x) and ϕ′ (x) must be single-valued
• ϕ(x) and ϕ′ (x) must be continuous
Note 3.e. As we discussed in chapter 1, to lie in L² the state function must vanish both at x → +∞ and x → −∞ (or, in general, it must satisfy appropriate boundary conditions); therefore not every value of E is allowed. Instead, the boundary conditions usually impose restrictions on the energy eigenvalues, called quantization conditions.
In what follows we will present some properties which are consequences
of Schrödinger’s wave equation.
Lemma 3.3.2. If the potential V(x) is even, then the solutions of the time-independent Schrödinger equation can be chosen to have a definite parity.
Proof. Consider the time-independent Schrödinger equation,
$$-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\varphi(x) + V(x)\varphi(x) = E\varphi(x).$$
Applying a parity transformation to it, we have
$$-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\varphi(-x) + V(-x)\varphi(-x) = E\varphi(-x).$$
If the potential is invariant under parity, V(−x) = V(x), then ϕ(−x) satisfies the same equation as ϕ(x), so both ϕ(x) and ϕ(−x) are admissible solutions of the same problem, up to boundary conditions. Thus, the solutions of a parity-symmetric problem can be expressed in terms of symmetric and antisymmetric combinations,
$$\varphi_\pm(x) = \frac{\varphi(x) \pm \varphi(-x)}{\sqrt{2}},$$
leading to solutions with a definite parity: either even or odd.
Now we would like to show that the structure of the Schrödinger equation implies the existence of an important conservation law, the one associated with the probabilistic interpretation of the quantum theory. In fact, there would be an irreconcilable problem with Max Born's probabilistic interpretation, the so-called Copenhagen interpretation, if it were not compatible with Schrödinger's evolution equation (see Ref. [10]).
Proposition 3.2. If the potential is a real function and ψ ∈ D(H), then
$$-\frac{\hbar^2}{2m}\left(\psi^*\frac{\partial^2\psi}{\partial x^2} - \psi\,\frac{\partial^2\psi^*}{\partial x^2}\right) = i\hbar\,\frac{\partial}{\partial t}\left(\psi^*\psi\right). \qquad (3.3.5)$$
Proof. We take the fundamental equations for ψ(x,t) and ψ†(x,t),
$$H\psi = +i\hbar\,\frac{\partial\psi}{\partial t}, \qquad \psi^\dagger H^\dagger = -i\hbar\,\frac{\partial\psi^\dagger}{\partial t}.$$
We see that, for ψ ∈ D(H),
$$(\psi, H\psi) - (H\psi, \psi) = i\hbar\,\frac{\partial}{\partial t}(\psi,\psi),$$
so the total probability (ψ, ψ) is preserved if the Hamiltonian is Hermitian. On the other hand, applying complex conjugation to Schrödinger's time-dependent equation we obtain the equation satisfied by ψ*,
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi^*}{\partial x^2} + V(x)^*\,\psi^* = -i\hbar\,\frac{\partial\psi^*}{\partial t}.$$
Then multiplying the two equations from the left and right by ψ* and ψ, respectively, and subtracting one from the other, we get
$$-\frac{\hbar^2}{2m}\left(\psi^*\frac{\partial^2\psi}{\partial x^2} - \psi\,\frac{\partial^2\psi^*}{\partial x^2}\right) + \psi^*\left(V(x) - V(x)^*\right)\psi = i\hbar\left(\psi^*\frac{\partial\psi}{\partial t} + \psi\,\frac{\partial\psi^*}{\partial t}\right),$$
and if the potential is a real function, V(x) = V(x)*, it simplifies to
$$-\frac{\hbar^2}{2m}\left(\psi^*\frac{\partial^2\psi}{\partial x^2} - \psi\,\frac{\partial^2\psi^*}{\partial x^2}\right) = i\hbar\,\frac{\partial}{\partial t}\left(\psi^*\psi\right).$$
Notice that the expression above corresponds to a conservation law, since it is of the form of a continuity equation,
$$\frac{\partial\rho}{\partial t} + \frac{\partial J}{\partial x} = 0, \qquad (3.3.6)$$
associated with the conservation of the total charge, here the total quantum probability,
$$Q = \int \rho\,dx = \int \psi^*\psi\,dx = (\psi,\psi), \qquad (3.3.7)$$
since its variation vanishes,
$$\frac{\partial Q}{\partial t} + J\Big|_{-\infty}^{+\infty} = 0, \qquad (3.3.8)$$
if the associated current,
$$J = \frac{i\hbar}{2m}\left(\frac{\partial\psi^*}{\partial x}\,\psi - \psi^*\,\frac{\partial\psi}{\partial x}\right), \qquad (3.3.9)$$
is localized, vanishing at the faraway boundaries. The continuity equation
above shows that there is a flow of the object J (inwards or outwards) if
the density varies (increases or decreases). Notice that if the wave function
is real, as is the case of a decaying exponential solution, there is no flux
of probability, meaning the probability distribution remains unchanged with
time.
Note 3.f. In problems where an incident particle scatters off a potential, one can define the fraction of the wave which is transmitted and that which is reflected. These quantities are given by the so-called transmission and reflection coefficients, $T = J_t/J_i$ and $R = J_r/J_i$.
3.4 Time dependence of Expected Values
It is time now to use our knowledge about the Schrödinger equation to
extract information about the evolution of the expectation values associated
to a certain observable.
Proposition 3.3. Let O be an operator corresponding to a real-valued observable. Then
$$\frac{d}{dt}\overline{O} = \frac{1}{i\hbar}\,\overline{[O,H]} + \overline{\frac{\partial O}{\partial t}},$$
where [A, B] = AB − BA.
Proof. Given a certain operator O, its expected value on a certain state may in principle be time-dependent if the reference state is evolving,
$$\overline{O}(t) = (\psi, O\,\psi) = \int \psi^*(x,t)\,O\,\psi(x,t)\,dx. \qquad (3.4.10)$$
Thus we can write
$$\frac{d}{dt}\overline{O} = \int dx\,\frac{\partial\psi^*}{\partial t}\,O\,\psi + \int dx\,\psi^*\,\frac{\partial O}{\partial t}\,\psi + \int dx\,\psi^*\,O\,\frac{\partial\psi}{\partial t}$$
$$= \int dx\left(-\frac{1}{i\hbar}\right)\psi^*\,H\,O\,\psi + \int dx\,\psi^*\,\frac{\partial O}{\partial t}\,\psi + \int dx\,\psi^*\,O\,\frac{1}{i\hbar}\,H\,\psi$$
$$= \frac{1}{i\hbar}\int dx\,\psi^*\,(OH - HO)\,\psi + \int dx\,\psi^*\,\frac{\partial O}{\partial t}\,\psi.$$
It can then be cast in the more compact form,
$$\frac{d}{dt}\overline{O} = \frac{1}{i\hbar}\,\overline{[O,H]} + \overline{\frac{\partial O}{\partial t}}.$$
Note 3.g. This is the quantum analogue of the Poisson brackets in classical mechanics, suggesting that one interpret the quantum expectation value as the classical function itself in the classical limit, when ℏ → 0 and
$$\frac{1}{i\hbar}\,[\cdot,\cdot] \;\to\; \{\cdot,\cdot\}_{PB}.$$
It becomes evident that if the observables do not depend explicitly on
time, we have the
Corollary 3.4.1. If O does not depend explicitly on time,
$$\frac{d}{dt}\overline{O} = \frac{1}{i\hbar}\,\overline{[O,H]}. \qquad (3.4.11)$$
So if the observable does not depend explicitly on time, its expectation value is preserved in time if and only if [O, H] = 0. When this happens, these objects are called constants of motion. An important message to take from this point is that the Hamiltonian determines not only the evolution of the quantum states, as prescribed by Schrödinger's equation, but also the time evolution of the expectation values of observables, as can be seen from the relation above.
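A small finite-dimensional sketch (illustration only; the random Hermitian matrices and the finite-difference step are assumptions, not from the notes) confirms the relation d⟨O⟩/dt = ⟨[O, H]⟩/(iℏ) for a time-independent observable:

import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(2)

def hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

H, O = hermitian(5), hermitian(5)
psi0 = rng.standard_normal(5) + 1j * rng.standard_normal(5)
psi0 /= np.linalg.norm(psi0)

def expect(A, psi):
    return np.vdot(psi, A @ psi)

psi = lambda s: expm(-1j * H * s / hbar) @ psi0
t, eps = 1.0, 1e-6
lhs = (expect(O, psi(t + eps)) - expect(O, psi(t - eps))).real / (2 * eps)   # d<O>/dt
rhs = (expect(O @ H - H @ O, psi(t)) / (1j * hbar)).real                     # <[O,H]>/(i hbar)
print(lhs, rhs)          # the two numbers agree to finite-difference accuracy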
3.4.1 Newton's 2nd Law and Quantum Mechanics
A remarkable consequence of the general relation between the expectation
of any quantum mechanical operator and the expectation of the commutator
of that operator with the Hamiltonian of the system, as seen in the Proposition 3.3, is the so called Ehrenfest Theorem, which we now discuss.
Proposition 3.3 is particularly useful when we evaluate it for a simple Hamiltonian of the form $H = \frac{P^2}{2m} + V(X)$. It can be shown that
$$[X, H] = \frac{1}{2m}[X, P^2] = \frac{i\hbar}{m}\,P, \qquad [P, H] = [P, V(X)] = -i\hbar\,\frac{\partial V}{\partial X},$$
and since neither X nor P depends explicitly on time, we have
$$\frac{d\overline{X}}{dt} = \frac{1}{i\hbar}\,\overline{[X,H]} = \frac{\overline{P}}{m}, \qquad
\frac{d\overline{P}}{dt} = \frac{1}{i\hbar}\,\overline{[P,H]} = -\overline{\frac{\partial V}{\partial X}}. \qquad (3.4.12)$$
Note 3.h. The relations expressed in eq. (3.4.12) are equivalent to Hamilton's equations of motion (for a conservative potential) [11],
$$\frac{dX}{dt} = \frac{\partial H}{\partial P} = \frac{P}{m}, \qquad \frac{dP}{dt} = -\frac{\partial H}{\partial X} = -\frac{\partial V}{\partial X}.$$
A simple way to interpret eq. (3.4.12) is given by Newton's second law of mechanics,
$$m\,\frac{d\overline{X}}{dt} = \overline{P}, \qquad \frac{d\overline{P}}{dt} = -\overline{\frac{\partial V}{\partial X}},$$
leading to
$$m\,\frac{d^2}{dt^2}\overline{X} = -\overline{\frac{\partial V}{\partial X}} = \overline{F}.$$
The content of the last expression is that Newton's second law, F = ma, is, after all, still valid at the quantum level in some sense: it holds true if one is interested only in expectation values.
The same results can also be obtained from a slightly different approach, starting from the definition of the expectation value of the position observable,
$$\dot{\overline{x}} = \frac{d}{dt}\overline{x} = \frac{d}{dt}\int \psi(x)^*\,x\,\psi(x)\,dx = \int\left(\frac{\partial\psi(x)^*}{\partial t}\,x\,\psi(x) + \psi(x)^*\,x\,\frac{\partial\psi(x)}{\partial t}\right)dx.$$
From Schrödinger's wave equation,
$$\dot{\overline{x}} = \frac{\hbar}{2mi}\int\left(\frac{\partial^2\psi(x)^*}{\partial x^2}\,x\,\psi(x) - \psi(x)^*\,x\,\frac{\partial^2\psi(x)}{\partial x^2}\right)dx = \frac{\hbar}{mi}\int\psi(x)^*\,\frac{\partial}{\partial x}\psi(x)\,dx = \frac{\overline{p}}{m},$$
as expected. Similar calculations provide the corresponding equation for $\dot{\overline{p}}$. In the end, Newton's second law can again be stated: $\overline{F} = m\,\ddot{\overline{x}}$.
3.5 Quantum Pictures
We have seen that, according to Schrödinger's description, the observables are described by constant operators acting on vectors which vary in time according to Ψ_S(t) = U(t)Ψ_S(0) = U(t)Ψ_H, whereas in Heisenberg's framework the states are fixed. We can choose the Heisenberg states to coincide with Schrödinger's at the initial time, Ψ_H = Ψ_S(0) = U(t)†Ψ_S(t).
Not surprisingly, in either picture the time-dependent expectation value of an observable in a given state coincides,
$$\overline{O} = (\Psi_S(t), O_S\,\Psi_S(t)) = (\Psi_H, O_H(t)\,\Psi_H).$$
This requirement imposes restrictions also on the time dependence of the
operators and we can relate them via
OH (t) = U(t)† OS U(t),
so that the operator has a constant form according to Schrödinger,
OS = U(t)OH (t)U(t)† ,
but has a dynamical nature in Heisenberg’s picture.
The time evolution given by Schrödinger's equation has the following counterpart in Heisenberg's picture,
$$i\hbar\,\frac{dO_H(t)}{dt} = [O_H(t), H], \qquad (3.5.13)$$
dictating the dynamics of Heisenberg's operators.
There is, however, a further useful formulation, due to Dirac. To explore it one should be able to decompose the Hamiltonian into a time-independent term H0 and an interacting, time-dependent part V(t),
$$H = H_0 + V(t).$$
We then introduce the simpler evolution operator,
$$U_0(t) = e^{-\frac{i}{\hbar}H_0 t},$$
together with a transformed quantum state, Ψ_I(t) = U0(t)†Ψ_S(t), or Ψ_S(t) = U0(t)Ψ_I(t).
This new vector evolves in time according to the interaction potential,
$$i\hbar\,\frac{\partial}{\partial t}\Psi_I(t) = V_I\,\Psi_I(t),$$
where we have defined $V_I(t) = U_0(t)^\dagger V(t)\,U_0(t)$. Moreover, the transformed observable,
$$O_I(t) = U_0(t)^\dagger O_S(t)\,U_0(t),$$
also evolves in time, but governed by the noninteracting part of the Hamiltonian H0,
$$i\hbar\,\frac{dO_I(t)}{dt} = [O_I(t), H_0].$$
The latter formalism is also called the interaction picture of the quantum description, since the evolution of the states relies essentially on the interaction potential V_I. In the following table we summarize how states and observables evolve in time in the different pictures we have presented.
Picture          State Vector         Observable Operator
Heisenberg       Invariant            Evolves with H
Dirac            Evolves with V_I     Evolves with H_0
Schrödinger      Evolves with H       Invariant

Table 3.1: Quantum pictures
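The equivalence of the Schrödinger and Heisenberg pictures can be checked directly in finite dimension. The sketch below (an illustration with randomly generated Hermitian matrices, which are assumptions and not part of the notes) compares the two expectation values:

import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2
N = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
O_S = (N + N.conj().T) / 2

psi_H = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi_H /= np.linalg.norm(psi_H)

t = 2.0
U = expm(-1j * H * t / hbar)
psi_S = U @ psi_H                          # Schrödinger picture: the state evolves
O_H = U.conj().T @ O_S @ U                 # Heisenberg picture: the observable evolves

print(np.vdot(psi_S, O_S @ psi_S).real, np.vdot(psi_H, O_H @ psi_H).real)   # equal values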
3.6 Exercises
1. Show that, for a discrete spectrum E_n, the general solution of the time-dependent Schrödinger equation is the linear superposition of stationary states
$$\psi(x,t) = \sum_{n=1}^{\infty} a_n\,\psi_n(x)\,e^{-\frac{i}{\hbar}E_n t},$$
where ψ_n(x) are eigenfunctions of H with eigenvalues E_n.
2. Use the spectral theorem to obtain a continuous version of exercise 1.
3. Show that when a particle is confined to a box, its energy levels are quantized.
4. Show that when an electron is confined to a square well, even though there is a nonvanishing probability of finding it outside the well, there is no flux of probability to the outer region.
5. Show (and interpret) that when a Gaussian wave packet of initial width σ is left to evolve according to Schrödinger's equation, one has the following relation,
$$(\Delta x)^2(\Delta p)^2 = \frac{\hbar^2}{4}\left(1 + \frac{\hbar^2 t^2}{m^2\sigma^4}\right).$$
Chapter 4
Approximation Methods
So far we have seen some examples of quantum problems which can be completely solved in closed form. Overall we can mention the problems of a free particle, of a particle in a box, of particles scattering on simple one-dimensional barriers, the harmonic oscillator, and the problem of central forces, like the hydrogen atom, just to name a few. In order to overcome the difficulties in "diagonalizing" a problem, we present in this chapter some useful techniques to help us investigate realistic problems. The methods can be divided into two categories: perturbative and non-perturbative approaches.
4.1 Non-perturbative methods
Here we start the description of problems for which a closed-form solution is either very difficult to find or, more commonly, not possible. In what follows we present some convenient methods which can be applied to various problems, but whose validity is restricted.
4.1.1 Variational Methods
A useful and easily implemented method is one that permits us to determine the ground-state energy of a quantum problem, that is, the minimal value E0 of the energy, given a specific dynamics characterized by a Hamiltonian H.
Proposition 4.1. The energy expectation value of a system is never less than its ground-state energy,
$$\overline{H} \geq E_0.$$
Proof. We start by denoting by ϕ_n the eigenstates of the system,
$$H\,\varphi_n(x) = E_n\,\varphi_n(x),$$
so that any state can be written as an expansion in this eigenbasis,
$$\varphi(x) = \sum_n \varphi_n(x)\int dx'\,\varphi_n^*(x')\,\varphi(x') = \sum_n \varphi_n(x)\,(\varphi_n,\varphi).$$
This general state can be used to evaluate the expectation value of a certain observable. In the case of the energy operator we have
$$\overline{H} = \frac{(\varphi, H\varphi)}{(\varphi,\varphi)},$$
which can be rewritten in the following form,
$$\overline{H} = E_0 + \frac{\sum_{n\geq 1}\left(E_n - E_0\right)\left|(\varphi_n,\varphi)\right|^2}{\sum_{n\geq 0}\left|(\varphi_n,\varphi)\right|^2}.$$
Since we are considering E0 to be associated with the ground state, all other energy levels are higher and all terms in the summation above are non-negative [12].
Note 4.a. If one uses the theorem stating that the ground state of a quantum problem is nondegenerate, then all terms in the summation can be shown to be positive. A proof can be found in the book by Reed and Simon, "Methods of Modern Mathematical Physics", vol. 4, "Analysis of Operators".
Therefore, we have the general result that the expectation value of an energy measurement is greater than or equal to the ground-state energy, $\overline{H} \geq E_0$.
Alternatively, we can expand the generic state as a combination of eigenstates but with unknown coefficients c_n(α),
$$\varphi_\alpha(x) = \sum_n c_n(\alpha)\,\varphi_n(x).$$
In this case we can show that the expression
$$\overline{H}(\alpha) = \frac{(\varphi_\alpha, H\,\varphi_\alpha)}{(\varphi_\alpha, \varphi_\alpha)}$$
reduces to, using $\sum_n |c_n(\alpha)|^2 = 1$,
$$\overline{H}(\alpha) = \sum_n |c_n(\alpha)|^2\,E_n.$$
Once again we can argue that its minimum value is indeed E0, occurring when the expectation value is evaluated on the ground state ϕ0(x),
$$\overline{H}(\alpha) \geq E_0.$$
These results may seem dull at first sight, but they serve a useful purpose, namely that of determining the ground state of a system about which we do not know much. Complicated systems tend to occur in physical situations more frequently than the idealized settings commonly found in textbooks. In general one is a priori not convinced about the best tools to solve a quantum problem, and here we present a fairly universal technique which also has a simple implementation. The knowledge of a single energy eigenvalue in the whole spectrum might sound insignificant in principle. However, in many physical applications, especially when one is concerned with static properties of quantum systems, the ground-state energy contains sufficient information to allow a characterization of the system.
As we minimize the expectation value of a Hamiltonian operator whose spectrum is bounded from below, we are capable of finding the value of the newly introduced parameter α = α0 which satisfies the minimum condition
$$\left.\frac{\partial\overline{H}(\alpha)}{\partial\alpha}\right|_{\alpha_0} = 0.$$
We can then estimate the ground-state energy of the system as
$$E_0 \leq \overline{H}(\alpha_0).$$
Although very general, this approach has the disadvantage that it relies on one's ability to construct the most effective form for the coefficients c_n(α), or ultimately for ϕ_α(x) itself, as some choices will lead to better results than others. In order to optimize the method it is advisable to make use of symmetry properties and physical reasoning to start with a convenient ansatz for these objects. In some cases it is possible to find the exact ground state, ϕ0(x) = ϕ_{α0}(x).
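As a worked numerical illustration (the Gaussian trial family, the units ℏ = m = ω = 1 and the value of α are assumptions, not part of the notes), one can bound the ground-state energy of the quartic oscillator H = p²/2 + x²/2 + αx⁴ by minimizing over the width of a Gaussian trial state:

import numpy as np
from scipy.optimize import minimize_scalar

alpha = 0.1

def energy(a):
    # For a normalised Gaussian ~ exp(-a x^2 / 2): <p^2/2> = a/4, <x^2/2> = 1/(4a), <x^4> = 3/(4a^2)
    return a / 4 + 1 / (4 * a) + alpha * 3 / (4 * a**2)

res = minimize_scalar(energy, bounds=(0.1, 10.0), method="bounded")
print(res.x, res.fun)      # optimal width a_0 and the variational bound E_0 <= H(a_0)

For α = 0 the minimum occurs at a = 1 and gives exactly 1/2, recovering the exact harmonic ground state, as anticipated above.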
Next we investigate alternative non-perturbative methods which are better suited to the other end of the spectrum, that is, to higher eigenvalues, providing a complementary approach.
4.1.2 Extension to Excited States
Proposition 4.2. If the spectrum is enumerable and can be ordered, E0 < E1 ≤ E2 ≤ ⋯, then an upper bound on the n-th energy level is given by
$$E_n \leq (\psi, H\,\psi)$$
for any normalized state ψ which is orthogonal to the eigenstates of all the lower levels E0, …, E_{n−1}.
The proof is left as an exercise.
4.1.3 A 2 × 2 example
Consider the Hamiltonian
$$H = \begin{pmatrix} 1 & \sqrt{2} \\ \sqrt{2} & 0 \end{pmatrix}$$
and the family of trial states
$$|\phi(\alpha)\rangle = \frac{1}{\sqrt{1+\alpha^2}}\begin{pmatrix} 1 \\ \alpha \end{pmatrix},\qquad
E(\alpha) = \langle\phi(\alpha)|H|\phi(\alpha)\rangle = \frac{1 + 2\sqrt{2}\,\alpha}{1+\alpha^2}.$$
Minimizing over the variational parameter,
$$\left.\frac{dE}{d\alpha}\right|_{\alpha_0} = 0 \quad\Longrightarrow\quad \alpha_0 = -\sqrt{2},\qquad
|\phi(\alpha_0)\rangle = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ -\sqrt{2} \end{pmatrix},\qquad
E_0 \leq E(\alpha_0) = -1.$$
Suppose now that we know the ground state of the system,
$$|1\rangle = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ -\sqrt{2} \end{pmatrix}.$$
The orthogonal state is
$$|\phi\rangle = \frac{1}{\sqrt{3}}\begin{pmatrix} \sqrt{2} \\ 1 \end{pmatrix},$$
so that
$$E_1 \leq \langle\phi|H|\phi\rangle = 2.$$
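The example above can be verified in a few lines (a check of the numbers worked out here, not an independent result):

import numpy as np

H = np.array([[1.0, np.sqrt(2)], [np.sqrt(2), 0.0]])

def E(alpha):
    phi = np.array([1.0, alpha]) / np.sqrt(1 + alpha**2)
    return phi @ H @ phi

alpha0 = -np.sqrt(2)
print(E(alpha0))                         # -1, the variational bound on E_0
print(np.linalg.eigvalsh(H))             # exact eigenvalues [-1, 2]

ground = np.array([1.0, -np.sqrt(2)]) / np.sqrt(3)
ortho = np.array([np.sqrt(2), 1.0]) / np.sqrt(3)     # orthogonal to the ground state
print(ortho @ H @ ortho)                 # 2, the bound on E_1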
4.1.4 Method of Successive Powers
Proposition 4.3. Assuming the spectrum is bounded,
$$E_0 \leq E_1 \leq E_2 \leq \cdots \leq E_M,$$
the largest eigenvalue reads
$$E_M = \lim_{m\to\infty}\frac{\langle\phi|H^{m+1}|\psi\rangle}{\langle\phi|H^m|\psi\rangle},$$
where the state φ may be any state which is not orthogonal to the eigenfunctions of H, (φ, ψ_n) ≠ 0.
Note 4.b. If the projection (φ, ψ_n) vanishes, we have to take another state φ.
Proof. Projecting onto a test function φ the m-th action of the Hamiltonian operator on an (unknown) combination ψ = Σ_n c_n ψ_n of its eigenfunctions gives
$$(\phi, H^m\psi) = \sum_n c_n\,E_n^m\,(\phi,\psi_n).$$
Thus, in the limit of large m the contribution of the largest eigenvalue E_M dominates, and
$$\frac{(\phi, H^{m+1}\psi)}{(\phi, H^m\psi)} \;\longrightarrow\; \frac{c_M\,E_M^{m+1}\,(\phi,\psi_M)}{c_M\,E_M^{m}\,(\phi,\psi_M)} = E_M$$
is a well-defined ratio if (φ, ψ_M) ≠ 0, as assumed, and provides the estimate for E_M above.
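This is essentially power iteration. A small sketch (the symmetric matrix, built here as AᵀA so that the largest eigenvalue also dominates in modulus, is an illustrative assumption):

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6))
H = A.T @ A                                # symmetric with non-negative spectrum

psi = rng.standard_normal(6)               # generic state with a component along psi_M
phi = rng.standard_normal(6)               # generic test state

v = psi.copy()
for m in range(1, 60):
    w = H @ v                              # H^m psi, up to an overall normalisation
    ratio = (phi @ w) / (phi @ v)          # (phi, H^{m+1} psi) / (phi, H^m psi)
    v = w / np.linalg.norm(w)              # renormalise to avoid overflow

print(ratio, np.linalg.eigvalsh(H).max())  # the ratio approaches the largest eigenvalue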
4.1.5 WKB - Semiclassical approximation
The method developed by Wentzel, Kramers and Brillouin, from which the acronym stems, considers the Schrödinger equation,
$$-\frac{\hbar^2}{2m}\psi''(x) + \left(V(x) - E\right)\psi(x) = 0,$$
in a semiclassical approach.
Proposition 4.4. A good approximation for the eigenfunctions of a Schrödinger problem in the semiclassical limit is given by waves of the form
$$\psi(x) \sim \frac{e^{\pm\frac{i}{\hbar}\int\sqrt{2m(E-V(x))}\,dx}}{\left[2m\left(E - V(x)\right)\right]^{1/4}}.$$
Proof. One considers a representation similar to the one found in the Hamilton-Jacobi formulation of classical mechanics, re-expressing the wave function in terms of Hamilton's principal (action) function,
$$\psi(x) = e^{\frac{i}{\hbar}S(x)},$$
so the equation of motion becomes
$$-i\hbar\,S''(x) + S'(x)^2 = 2m\left(E - V(x)\right) = p(x)^2,$$
where we have introduced a position-dependent momentum function p(x) in analogy with classical mechanics. Here we are more interested in the excited states, for which the quantum system behaves more similarly to classical mechanics. In this so-called semiclassical limit the wave properties are less important and, similarly to what is done in the geometric limit of physical optics, we can restrict ourselves to the optical (eikonal) approximation,
$$\hbar\,|S''(x)| \ll S'(x)^2.$$
Note 4.c. Also in the semiclassical limit, ~ → 0.
This allows us to express the principal function in terms of the external potential, or the equivalent classical momentum and corresponding wavelength,
$$S(x) \simeq \pm\int dx\,\sqrt{2m\left(E - V(x)\right)} = \pm\int dx\,p(x) = \pm\int dx\,\frac{2\pi\hbar}{\lambda(x)}.$$
One is then able to show that a solution for the wave function is a combination of incoming and outgoing waves,
$$\psi(x) = \frac{c_1}{\sqrt{p(x)}}\,e^{-\frac{i}{\hbar}\int p(x)\,dx} + \frac{c_2}{\sqrt{p(x)}}\,e^{+\frac{i}{\hbar}\int p(x)\,dx},$$
whose form is valid for a general potential.
From this result follows a quantization condition which was used in the early days of the quantum theory.
Corollary 4.1.1. The action associated with a quantum problem, integrated between the classical turning points, must be quantized in half-integer multiples of πℏ,
$$\int_a^b p(x)\,dx = \left(n + \frac{1}{2}\right)\pi\hbar.$$
Proof. Care must be taken if the potential allows for the existence of turning points, which occur when the given energy E makes the particle confined in the potential V(x). The turning points x_TP appear when V(x_TP) = E, so that p(x_TP) = 0 and the classical motion stops. Denoting the turning points on the left and on the right by a and b respectively, it is left as an exercise to show that away from these points one has [13]
$$\psi(x) \underset{a<x\ll b}{\approx} \frac{c}{\sqrt{|p(x)|}}\,\cos\!\left(\frac{1}{\hbar}\int_a^x p(x')\,dx' - \frac{\pi}{4}\right)$$
and
$$\psi(x) \underset{a\ll x<b}{\approx} \frac{c}{\sqrt{|p(x)|}}\,\cos\!\left(\frac{\pi}{4} - \frac{1}{\hbar}\int_x^b p(x')\,dx'\right).$$
The simple requirement that in the region between the turning points these two expressions describe the same well-behaved solution permits us to recover the famous Bohr-Sommerfeld quantization condition,
$$\left(\frac{1}{\hbar}\int_a^x p(x')\,dx' - \frac{\pi}{4}\right) - \left(\frac{\pi}{4} - \frac{1}{\hbar}\int_x^b p(x')\,dx'\right) = m\pi,$$
or, more simply,
$$\int_a^b p(x)\,dx = \left(m + \frac{1}{2}\right)\pi\hbar.$$
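As a quick numerical illustration (assumed units ℏ = m = ω = 1; for the harmonic oscillator the semiclassical condition happens to reproduce the exact spectrum), one can solve the Bohr-Sommerfeld condition for E:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar = 1.0

def action(E):
    b = np.sqrt(2 * E)                     # turning points of V(x) = x^2/2 at x = +-sqrt(2E)
    integrand = lambda x: np.sqrt(np.maximum(2 * (E - 0.5 * x**2), 0.0))
    val, _ = quad(integrand, -b, b)
    return val                             # int_a^b p(x) dx = pi E for this potential

for n in range(4):
    E_n = brentq(lambda E: action(E) - (n + 0.5) * np.pi * hbar, 1e-6, 50.0)
    print(n, E_n)                          # close to 0.5, 1.5, 2.5, 3.5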
It is tricky to state which method is best for a particular problem. Each method has its advantages and its drawbacks. So far we have presented formulations which are more useful in the opposite regimes of low and high energy levels. In the next section we will show a very general way of solving some Schrödinger problems with arbitrary precision, a method based on perturbation theory.
4.2 Time-independent Perturbation Theory
We now return to the fundamental time-independent Schrödinger equation. In the event of having a simple Hamiltonian operator H0 for which we know the eigenvalues E_n^0 and the associated eigenfunctions ϕ_n^0(x), we can write
$$H_0\,\varphi_n^0(x) = E_n^0\,\varphi_n^0(x),$$
as happens for a variety of problems: the particle in a box, the harmonic oscillator, the hydrogen atom, to name a few.
If, on the other hand, we have a Hamiltonian operator we do not know how to diagonalize exactly, we might still be able to solve the problem if we can recognize the challenging operator as a small deviation from a known one. That means we can express the new Hamiltonian as the old one with the simple introduction of a perturbation,
$$H = H_0 + \lambda\,H_1.$$
Then we ask ourselves what the eigenstates ϕ_n(x) and eigenvalues E_n of this deformed operator are, that is, we would like to solve the eigenvalue equation
$$H\,\varphi_n(x) = E_n\,\varphi_n(x).$$
Before proceeding, we must remind ourselves that if the small-perturbation hypothesis is true, in the end we should obtain eigenvalues E_n which are not too different from the reference ones E_n^0, or equivalently,
$$\lambda\,(\varphi_n, H_1\varphi_n) \ll E_n^0.$$
Because the eigenstates of H0 form a complete basis {ϕ_n^0(x)}, we can use those states to expand the corrections to the eigenvectors of H,
$$\varphi_n^k(x) = \sum_{m\neq n} c_m^k\,\varphi_m^0(x),$$
with the normalization (ϕ_m, ϕ_n) = δ_{mn}. In the same way as the Hamiltonian operator is a deviation from an undeformed one, we can expand the deformed wave function as a deviation from the original wave function by including corrections, order by order in λ,
$$\varphi_n(x) = \varphi_n^0(x) + \lambda\,\varphi_n^1(x) + \lambda^2\varphi_n^2(x) + \cdots.$$
The normalization condition on the new eigenfunctions leads to the orthogonality of the correction terms with respect to the original eigenstate,
$$(\varphi_n^0, \varphi_n^j) = \delta_{j0},$$
which is consistent with the expansion above, since
$$(\varphi_n^0, \varphi_n^k) = \sum_{m\neq n} c_m^k\,(\varphi_n^0, \varphi_m^0) = 0 \qquad (k\geq 1).$$
At this point, we need to establish a distinction between systems which
contain and systems which do not contain degenerate states. We begin with
the simplest scenario, of no eigenstates sharing the same eigenvalue.
4.2.1 Time-independent perturbation: Non-degenerate
Proposition 4.5. The first order correction on the energy levels of a perturbed Hamiltonian is given by
En1 = (ϕ0n , H1 ϕ0n ).
Proof. Starting from the perturbed Hamiltonian,
$$H = H_0 + \lambda\,H_1,$$
we can make use of the perturbation expansions of the eigenfunctions and eigenvalues to obtain
$$(H_0 + \lambda H_1)\left(\varphi_n^0(x) + \lambda\varphi_n^1(x) + \cdots\right) = \left(E_n^0 + \lambda E_n^1 + \cdots\right)\left(\varphi_n^0(x) + \lambda\varphi_n^1(x) + \cdots\right).$$
On both sides of the equation one now has a power series in the perturbation parameter λ, and for the equation to be valid for any value of λ we must equate each coefficient of the expansion, giving rise to a set of simpler equations,
$$H_0\,\varphi_n^0(x) = E_n^0\,\varphi_n^0(x),$$
$$H_0\,\varphi_n^1(x) + H_1\,\varphi_n^0(x) = E_n^0\,\varphi_n^1(x) + E_n^1\,\varphi_n^0(x),$$
$$H_0\,\varphi_n^2(x) + H_1\,\varphi_n^1(x) = E_n^0\,\varphi_n^2(x) + E_n^1\,\varphi_n^1(x) + E_n^2\,\varphi_n^0(x),$$
and so on. The terms independent of λ, associated with the coefficients of λ⁰, correspond simply to the reference eigenproblem, for which we know the spectrum. Next come the first-order corrections, corresponding to the coefficients of λ¹,
$$H_0\,\varphi_n^1(x) + H_1\,\varphi_n^0(x) = E_n^0\,\varphi_n^1(x) + E_n^1\,\varphi_n^0(x).$$
Notice that in the above equation we have two unknowns, the first correction to the eigenvalues, E_n^1, and the first correction to the eigenfunctions, ϕ_n^1(x). As we have argued, the latter can be expanded in the basis of the eigenstates of H0,
$$\varphi_n^1(x) = \sum_{m\neq n} c_m^1\,\varphi_m^0(x).$$
Inserting this in the equation above, we are left with
$$\sum_{m\neq n}\left(E_m^0 - E_n^0\right)c_m^1\,\varphi_m^0(x) + H_1\,\varphi_n^0(x) = E_n^1\,\varphi_n^0(x). \qquad (4.2.1)$$
In order to compute the first eigenvalue correction E_n^1 one needs to take the scalar product of this equation with the unperturbed eigenfunction ϕ_n^0(x). This produces
$$E_n^1 = (\varphi_n^0, H_1\,\varphi_n^0),$$
so it is enough to compute the expectation value of the perturbation operator on the corresponding unperturbed eigenstate.
On the other hand, if we want to determine the first-order correction to the eigenvectors, we have the following proposition.
Proposition 4.6. The first nontrivial term in the eigenfunction perturbation expansion is
$$\varphi_n^1(x) = -\sum_{m\neq n}\varphi_m^0(x)\,\frac{(\varphi_m^0, H_1\,\varphi_n^0)}{E_m^0 - E_n^0}.$$
Proof. Projecting the same equation (4.2.1) onto a state ϕ_m^0(x) with m ≠ n allows us to write
$$c_m^1 = -\frac{(\varphi_m^0, H_1\,\varphi_n^0)}{E_m^0 - E_n^0},$$
so the expansion in the original basis becomes the one presented above.
Following a similar path we can compute the second-order corrections as well, based on the equation
$$H_0\,\varphi_n^2(x) + H_1\,\varphi_n^1(x) = E_n^0\,\varphi_n^2(x) + E_n^1\,\varphi_n^1(x) + E_n^2\,\varphi_n^0(x), \qquad (4.2.2)$$
where E_n^1 and ϕ_n^1(x) have been previously determined, leaving us to compute the unknowns E_n^2 and ϕ_n^2(x).
Proposition 4.7. The second-order correction to the ground-state energy is always negative. Besides, close levels contribute more and tend to repel each other.
Proof. Taking the scalar product of (4.2.2) with ϕ_n^0 and using the orthogonality relations above, we obtain
$$E_n^2 = (\varphi_n^0, H_1\,\varphi_n^1),$$
leading to
$$E_n^2 = -\sum_{m\neq n}\frac{\left|(\varphi_m^0, H_1\,\varphi_n^0)\right|^2}{E_m^0 - E_n^0},$$
which for n = 0 is a sum of non-positive terms, indicating the validity of the proposition.
For states in the continuum the sums must be replaced by integrals.
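A finite-dimensional sketch (illustration only; the unperturbed levels, the random symmetric perturbation and the value of λ are assumptions) compares the perturbative expansion with exact diagonalization:

import numpy as np

rng = np.random.default_rng(5)
E0 = np.array([0.0, 1.0, 2.5, 4.0])            # nondegenerate unperturbed levels
H0 = np.diag(E0)
M = rng.standard_normal((4, 4))
H1 = (M + M.T) / 2
lam = 0.05

for n in range(4):
    E1 = H1[n, n]                              # first order: (phi_n, H1 phi_n)
    E2 = sum(H1[m, n]**2 / (E0[n] - E0[m])     # second order: -sum |(phi_m, H1 phi_n)|^2 / (E_m - E_n)
             for m in range(4) if m != n)
    E_pert = E0[n] + lam * E1 + lam**2 * E2
    E_exact = np.linalg.eigvalsh(H0 + lam * H1)[n]
    print(n, E_pert, E_exact)                  # the two columns agree to O(lambda^3)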
4.2.2 Time-independent perturbation: Degenerate
For degenerate states, a more careful analysis is required. Suppose we have an unperturbed system with degenerate eigenstates,
$$H_0\,\varphi_{n,i}^0(x) = E_n^0\,\varphi_{n,i}^0(x), \qquad i = 1,\dots,g,$$
where g denotes the degeneracy associated with each eigenvalue, that is, the number of eigenfunctions sharing the same eigenvalue [14].
As before, we can still propose a perturbation expansion for the characteristic energies,
$$E_n = E_n^0 + \lambda E_n^1 + \lambda^2 E_n^2 + \cdots,$$
and for the characteristic functions,
$$\varphi_n(x) = \varphi_n^0(x) + \lambda\,\varphi_n^1(x) + \lambda^2\varphi_n^2(x) + \cdots,$$
so that at zeroth order we have the familiar relation
$$H_0\,\varphi_n^0(x) = E_n^0\,\varphi_n^0(x).$$
Considering now the undeformed states to be degenerate, one has, more generally,
$$H_0\sum_{i=1}^{g} a_i\,\varphi_{n,i}^0(x) = E_n^0\sum_{i=1}^{g} a_i\,\varphi_{n,i}^0(x),$$
which is automatically satisfied.
Proposition 4.8. In the presence of degeneracies, the first-order corrections to the energy eigenvalues E_n^1 are determined by the following equation,
$$\sum_{i=1}^{g}(\varphi_{n,j}^0, H_1\,\varphi_{n,i}^0)\,a_i = E_n^1\,a_j.$$
Proof. In order to determine the first corrections we consider
$$H_0\,\varphi_n^1(x) + H_1\sum_{i=1}^{g} a_i\,\varphi_{n,i}^0(x) = E_n^0\,\varphi_n^1(x) + E_n^1\sum_{i=1}^{g} a_i\,\varphi_{n,i}^0(x),$$
which leads to
$$H_0\sum_{m\neq n}c_m^1\,\varphi_m^0(x) + H_1\sum_{i=1}^{g} a_i\,\varphi_{n,i}^0(x) = E_n^0\sum_{m\neq n}c_m^1\,\varphi_m^0(x) + E_n^1\sum_{i=1}^{g} a_i\,\varphi_{n,i}^0(x),$$
or
$$\sum_{m\neq n}c_m^1\left(E_m^0 - E_n^0\right)\varphi_m^0(x) + \sum_{i=1}^{g} a_i\,H_1\,\varphi_{n,i}^0(x) = \sum_{i=1}^{g} a_i\,E_n^1\,\varphi_{n,i}^0(x).$$
Projecting this equation onto a function ϕ_{n,j}^0(x), one can show that the proposition is true.
With the introduction of the abbreviation
$$V_{ij} \equiv (\varphi_{n,j}^0, H_1\,\varphi_{n,i}^0),$$
the equation which determines the first eigenvalue corrections can be written in a more convenient form,
$$\begin{pmatrix} V_{11} & V_{12} & \cdots \\ V_{21} & V_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \\ \vdots \end{pmatrix} = E_n^1\begin{pmatrix} a_1 \\ a_2 \\ \vdots \end{pmatrix}.$$
Therefore, the message we get from these computations is that one must
choose a basis in the degenerate space for which the perturbation is diagonal.
We now turn our attention to an explicit simple example, a system presenting a double degeneracy. In the associated degenerate eigenspace we can write
$$\begin{pmatrix} V_{11} & V_{12} \\ V_{21} & V_{22} \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = E_n^1\begin{pmatrix} a_1 \\ a_2 \end{pmatrix},$$
so if we wish to determine the value of the energy correction E_n^1, we must diagonalize the interaction operator,
$$\det\begin{pmatrix} V_{11} - E_n^1 & V_{12} \\ V_{21} & V_{22} - E_n^1 \end{pmatrix} = 0,$$
which gives rise to the characteristic equation
$$\left(E_n^1\right)^2 - \left(V_{11} + V_{22}\right)E_n^1 + V_{11}V_{22} - V_{12}V_{21} = 0.$$
This can be put in the favorable form
$$\left(E_n^1\right)^2 - (\operatorname{Tr}V)\,E_n^1 + \det V = 0,$$
from which we see that the first energy correction represents a level splitting of the spectrum,
$$E_n^1 = \frac{1}{2}\left[\operatorname{Tr}V \pm \sqrt{(\operatorname{Tr}V)^2 - 4\det V}\right].$$
4.3 The Anharmonic Oscillator
As an application of perturbation theory, consider now a small deviation of the harmonic oscillator, with a quartic interaction,
$$H = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{m\omega^2}{2}x^2 + \alpha\,x^4,$$
for which there is no closed-form set of eigenstates and eigenvalues. Our first attempt, based on what we have studied so far, is to assume the Hamiltonian of interest to be decomposed as
$$H = H_0 + \alpha\,H_1,$$
where
$$H_0 = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{m\omega^2}{2}x^2, \qquad H_1 = x^4,$$
with a perturbation parameter α, considered to be relatively small.
Since this operator is not explicitly time-dependent we can apply the time-independent perturbation formalism, with the unperturbed eigenvalues
$$E_n^0 = \left(n + \frac{1}{2}\right)\hbar\omega,$$
and unperturbed eigenfunctions written in terms of Hermite polynomials He_n(x) (in dimensionless units where mω/ℏ = 1),
$$\varphi_n^0(x) = \frac{1}{\pi^{1/4}\sqrt{2^n n!}}\,e^{-x^2/2}\,\mathrm{He}_n(x).$$
The first-order correction to the characteristic energies (the energy shift being α E_n^1) is given by
$$E_n^1 = (\varphi_n^0, x^4\,\varphi_n^0) = \frac{3\hbar^2}{4m^2\omega^2}\left[2n(n+1) + 1\right],$$
and for the eigenfunctions we have to compute the more laborious quantity [15]
$$\varphi_n^1(x) = -\sum_{m\neq n}\varphi_m^0(x)\,\frac{(\varphi_m^0, x^4\,\varphi_n^0)}{E_m^0 - E_n^0} = -\sum_{m\neq n}\varphi_m^0(x)\,\frac{(\varphi_m^0, x^4\,\varphi_n^0)}{\hbar\omega\,(m-n)}.$$
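A numerical sketch of this perturbative result (assumptions: ℏ = m = ω = 1, a small coupling α and a truncated number basis of size N; the truncation makes the "exact" column itself an approximation for the low-lying levels):

import numpy as np

alpha, N = 0.05, 60
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)              # annihilation operator in the number basis
x = (a + a.T) / np.sqrt(2)                  # position operator, x = (a + a^dagger)/sqrt(2)
H0 = np.diag(n + 0.5)
x4 = np.linalg.matrix_power(x, 4)

for k in range(4):
    E1 = x4[k, k]                           # equals (3/4)[2k(k+1) + 1] in these units
    E_pert = (k + 0.5) + alpha * E1
    E_exact = np.linalg.eigvalsh(H0 + alpha * x4)[k]
    print(k, E1, E_pert, E_exact)           # first-order estimate vs. numerical diagonalisation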
4.4 Time-dependent Perturbation
So far we have treated perturbatively the time-independent Schrödinger eigenvalue equation. It is time to employ the same principle of expanding the states in a perturbative series to address the more general problem of the time-dependent Schrödinger equation, which must be considered if, for example, the external potential varies with time [14]. For that matter, our starting point is, not surprisingly, the fundamental equation,
$$i\hbar\,\frac{\partial}{\partial t}\psi(x,t) = H(x,t)\,\psi(x,t).$$
Proposition 4.9. The general solution of Schrödinger's equation can be expressed as a combination of eigenfunctions of the form
$$\psi(x,t) = \sum_n c_n\,\phi_n(x)\,\chi_n(t).$$
Proof. We shall assume that the Hamiltonian has a structure that allows us to separate the operator in the following fashion,
$$H(x,t) = H_0(x) + H_1(t),$$
an additive separability condition on the Hamiltonian. If that is the case, we have separability of the wave function, but in product form,
$$\psi(x,t) = \phi(x)\,\chi(t).$$
Substituting it in the time-dependent Schrödinger equation we can write
$$i\hbar\,\frac{1}{\chi(t)}\chi'(t) - \frac{1}{\chi(t)}H_1(t)\chi(t) = \frac{1}{\phi(x)}H_0(x)\phi(x) = E,$$
where E is a constant that guarantees that χ(t) and φ(x) vary independently. This procedure generates two separate equations,
$$H_0(x)\,\phi(x) = E\,\phi(x),$$
and
$$i\hbar\,\frac{d}{dt}\chi(t) = \left[H_1(t) + E\right]\chi(t).$$
Here φ(x) is, in principle, known, and χ(t) can be solved for similarly to the stationary Schrödinger equation; the general solution can then be expressed as in the proposition.
In what follows we present how the solution can be constructed in the
spirit of perturbation theory.
4.4.1 Time-dependent Perturbation Theory
Considering a Hamiltonian whose time dependence may be, at least partially, separated in the following way,
$$H(x,t) = H_0(x) + V(x,t),$$
we can make use of the perturbation approach if at every instant the deviation from the original Hamiltonian is indeed small,
$$V(x,t) \ll H_0(x).$$
Proposition 4.10. The first time-dependent correction coefficients must satisfy the following equation,
$$\dot{c}_m^1(t) = \frac{1}{i\hbar}\sum_n c_n^0(t)\,e^{i\omega_{mn}t}\,V_{mn}(t).$$
Proof. In the absence of the perturbation we know that the Schrödinger equation associated with
$$H_0(x)\,\psi_n^0(x,t) = E_n^0\,\psi_n^0(x,t)$$
has the general solution
$$\psi_n^0(x,t) = \phi_n^0(x)\,e^{-\frac{i}{\hbar}E_n^0 t}.$$
We can use this complete set of eigenfunctions to expand the perturbed state,
$$\psi(x,t) = \sum_n c_n(t)\,\psi_n^0(x,t) = \sum_n c_n(t)\,\phi_n^0(x)\,e^{-\frac{i}{\hbar}E_n^0 t},$$
and replacing it in
$$i\hbar\,\frac{\partial}{\partial t}\psi(x,t) = H(x,t)\,\psi(x,t)$$
gives
$$i\hbar\,\frac{\partial}{\partial t}\left(\sum_n c_n(t)\,\phi_n^0(x)\,e^{-\frac{i}{\hbar}E_n^0 t}\right) = \left[H_0(x) + V(x,t)\right]\sum_n c_n(t)\,\phi_n^0(x)\,e^{-\frac{i}{\hbar}E_n^0 t}.$$
The above expression simplifies to
$$i\hbar\,\dot{c}_m(t)\,e^{-\frac{i}{\hbar}E_m^0 t} = \sum_n c_n(t)\,e^{-\frac{i}{\hbar}E_n^0 t}\,(\phi_m^0, V(x,t)\,\phi_n^0)$$
and can be written in the compact form,
$$\dot{c}_m(t) = \frac{1}{i\hbar}\sum_n c_n(t)\,e^{i\omega_{mn}t}\,V_{mn}(t),$$
where $\omega_{mn} = (E_m^0 - E_n^0)/\hbar$ and $V_{mn}(t) = (\phi_m^0, V(x,t)\,\phi_n^0)$. To lowest order the coefficients on the right-hand side may be replaced by the unperturbed ones, c_n^0(t), culminating in the proposition.
Below we determine the explicit form of the time dependence of the coefficients c_n(t). We can choose the initial state to obey
$$c_n(t_0) = c_n^0 = \delta_{ns},$$
so we have
$$c_m^1(t) = \frac{1}{i\hbar}\int dt'\,V_{ms}(t')\,e^{i\omega_{ms}t'}.$$
This procedure can be repeated to give rise to higher and higher contributions, forming the corrected coefficients
$$c_m(t) = c_m^0(t) + c_m^1(t) + \cdots = \delta_{ms} + \frac{1}{i\hbar}\int dt'\,V_{ms}(t')\,e^{i\omega_{ms}t'} + \cdots.$$
Proposition 4.11. The second-order correction reads
$$c_m^2(t) = \left(\frac{1}{i\hbar}\right)^2\sum_n\int^{t} dt''\int^{t''} dt'\,V_{ns}(t')\,V_{mn}(t'')\,e^{i\omega_{ns}t'}\,e^{i\omega_{mn}t''}.$$
Proof. Using
$$\dot{c}_m^2(t) = \frac{1}{i\hbar}\sum_n c_n^1(t)\,e^{i\omega_{mn}t}\,V_{mn}(t)$$
with the information we have already gathered produces the equation
$$\dot{c}_m^2(t) = \left(\frac{1}{i\hbar}\right)^2\sum_n\int^{t} dt'\,V_{ns}(t')\,V_{mn}(t)\,e^{i\omega_{ns}t'}\,e^{i\omega_{mn}t},$$
so the second-order correction behaves as proposed.
We can keep going producing higher terms and constructing more and
more precise eigenfunctions.
The simplest example to consider as a time-dependent two-level system is a problem of the familiar form
$$H = H_0 + V(t),$$
for which the equation of motion reads
$$i\hbar\,\frac{\partial}{\partial t}\begin{pmatrix} c_1(t) \\ c_2(t) \end{pmatrix} = \begin{pmatrix} V_{11} & V_{12}\,e^{-i\omega t} \\ V_{21}\,e^{i\omega t} & V_{22} \end{pmatrix}\begin{pmatrix} c_1(t) \\ c_2(t) \end{pmatrix}.$$
Multiplying the two equations by the phases $e^{+i\frac{\omega t}{2}}$ and $e^{-i\frac{\omega t}{2}}$, respectively, and introducing the new variables
$$\varphi_1(t) = c_1(t)\,e^{+i\frac{\omega t}{2}}, \qquad \varphi_2(t) = c_2(t)\,e^{-i\frac{\omega t}{2}},$$
furnishes us with
$$i\hbar\,\frac{\partial}{\partial t}\begin{pmatrix} \varphi_1(t) \\ \varphi_2(t) \end{pmatrix} = \begin{pmatrix} V_{11} - \frac{\hbar\omega}{2} & V_{12} \\ V_{21} & V_{22} + \frac{\hbar\omega}{2} \end{pmatrix}\begin{pmatrix} \varphi_1(t) \\ \varphi_2(t) \end{pmatrix}.$$
With this transformation we were able to circumvent the problem of having a time-dependent Hamiltonian, since we have eliminated the time dependence. By diagonalizing the problem we can express it in the form
$$i\hbar\,\frac{\partial}{\partial t}\begin{pmatrix} \psi_1(t) \\ \psi_2(t) \end{pmatrix} = \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix}\begin{pmatrix} \psi_1(t) \\ \psi_2(t) \end{pmatrix},$$
where the eigenvalues are given by
$$E_{1,2} = \frac{V_{11} + V_{22}}{2} \pm \frac{1}{2}\sqrt{\left(V_{11} - V_{22} - \hbar\omega\right)^2 + 4V_{12}^2}.$$
4.5 Dirac's interaction picture
Previously we discussed Dirac's interaction picture as opposed to Heisenberg's and Schrödinger's formulations. Here we go back to this point and show how it can be a useful tool to tackle time-dependent problems of the form
$$H = H_0 + V(t).$$
We define, as we have already seen, the interaction state,
$$\psi_I(t) = e^{\frac{i}{\hbar}H_0 t}\,\psi(t),$$
and the interaction observable,
$$O_I = e^{\frac{i}{\hbar}H_0 t}\,O\,e^{-\frac{i}{\hbar}H_0 t},$$
so that initially both states coincide,
$$\psi_I(0) = \psi(0).$$
If we also introduce the interaction potential,
$$V_I = e^{\frac{i}{\hbar}H_0 t}\,V\,e^{-\frac{i}{\hbar}H_0 t},$$
we can use it to write the fundamental Schrödinger equation as
$$i\hbar\,\frac{\partial}{\partial t}\psi_I(t) = i\hbar\,\frac{\partial}{\partial t}\left(e^{\frac{i}{\hbar}H_0 t}\,\psi(t)\right) = V_I\,\psi_I(t).$$
Therefore, the time evolution of the interaction state is solely governed by the interaction potential. Projecting it on a given eigenstate of the original problem,
$$i\hbar\,\frac{\partial}{\partial t}(\psi_n, \psi_I) = (\psi_n, V_I\,\psi_I) = \sum_m(\psi_n, V_I\,\psi_m)(\psi_m, \psi_I),$$
gives rise to
$$i\hbar\,\frac{\partial}{\partial t}c_n(t) = \sum_m e^{i\omega_{nm}t}\,V_{nm}\,c_m(t).$$
This result, not surprisingly, coincides with the expression we have just obtained with a different approach.
In the sequel we explore some consequences of this formalism applied to special cases.
4.6 Further Applications and Fermi's Golden Rule
Step Function / Constant Potential
Another problem which can be easily addressed within this framework is that of a potential with a simple time dependence: initially the potential is not present, but at some point it is turned on (by flipping a switch, for example) and it remains constant until it is switched off. Mathematically, the time profile of a potential of this type can be represented by Heaviside step functions,
$$V(t) = \theta(t - t_1) - \theta(t - t_2) = \begin{cases} 1, & t_1 < t < t_2 \\ 0, & \text{otherwise.} \end{cases}$$
The first correction in the eigenfunction expansion leads to, using the expressions found above,
$$c_m^1(t) = \frac{1}{i\hbar}\int_0^t dt'\,V_{ms}\,e^{i\omega_{ms}t'} = -\frac{V_{ms}}{\hbar\omega_{ms}}\left(e^{i\omega_{ms}t} - 1\right) = -\frac{V_{ms}}{E_m - E_s}\left(e^{i\omega_{ms}t} - 1\right).$$
This allows us to compute the transition probability between two states, associated with the energies E_m and E_s, as
$$P_{ms}^1(t) = |c_m^1(t)|^2 = \frac{4|V_{ms}|^2}{(E_m - E_s)^2}\,\sin^2\!\left(\frac{\omega_{ms}t}{2}\right).$$
If we wait for a long time, i.e. a time much greater than the characteristic time scale of the problem, $\tau_{ms}\sim 1/\omega_{ms}$, the transition probability may be approximated by
$$P_{ms}^1 \simeq \frac{2\pi}{\hbar}\,|V_{ms}|^2\,t\,\delta(E_m - E_s), \qquad t \gg \frac{1}{\omega_{ms}},$$
so that the transition rate eventually behaves like
$$\frac{\partial}{\partial t}P_{ms}^1 \simeq \frac{2\pi}{\hbar}\,|V_{ms}|^2\,\delta(E_m - E_s), \qquad t \gg \frac{1}{\omega_{ms}}.$$
This is the so-called Fermi's Golden Rule.
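The step behind the long-time limit is that the kernel $f_t(\omega) = 4\sin^2(\omega t/2)/\omega^2$ concentrates at ω = 0 with total weight 2πt. With the substitution u = ωt/2 this reduces to the statement that $\int \sin^2 u/u^2\,du = \pi$, which can be checked numerically (a rough sketch with a finite integration window, which is an assumption of the check):

import numpy as np
from scipy.integrate import quad

# sin^2(u)/u^2 written via numpy's sinc to avoid the removable singularity at u = 0
integrand = lambda u: np.sinc(u / np.pi) ** 2

val, _ = quad(integrand, -500, 500, limit=2000)
print(val, np.pi)      # close to pi, hence  int f_t(omega) d omega = 2*pi*t  for every t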
Harmonic Potential
A similar result can also be obtained if one considers a potential which varies harmonically with time. In that case the coefficients read
$$c_m^1(t) = \frac{1}{i\hbar}\int dt'\left(V_{ms}\,e^{i\omega t'} + V_{ms}^*\,e^{-i\omega t'}\right)e^{i\omega_{ms}t'}$$
and, as before,
$$P_{ms}^1 \simeq \frac{2\pi}{\hbar}\,|V_{ms}|^2\,t\,\delta(E_m - E_s \pm \hbar\omega), \qquad t \gg \frac{1}{\omega_{ms}}.$$
This analysis is useful to explain important phenomena such as the stimulated emission or absorption of radiation, often associated with lasers.
4.7 Exercises
1. Show that the ground state of a Hamiltonian is nondegenerate.
2. Prove Fermi’s Golden Rule.
3. Calculate explicitly the first order correction to the eigenstate of the
Harmonic Oscillator problem perturbed by H1 = α x4 .
4. Determine the first order correction to the states and energies of the
Harmonic Oscillator problem perturbed by H1 = α x3 .
Bibliography
[1] Reed, Michael, and Barry Simon. Fourier analysis, self-adjointness.
Academic Press, 1975.
[2] Schwartz, Laurent. "Théorie des distributions." Actualités Scientifiques
et Industrielles, Institut de Mathématique, Université de Strasbourg 1
(1966): 2.
[3] Van der Pol, Balth, and Hendricus Bremmer. Operational calculus:
based on the two-sided Laplace integral. University Press, 1955.
[4] Dirac, Paul Adrien Maurice. The principles of quantum mechanics. No.
27. Oxford university press, 1981.
[5] H. Grassmann (1862). Extension Theory. History of Mathematics
Sources. American Mathematical Society, London Mathematical Society, 2000 translation by Lloyd C. Kannenberg.
[6] Reed, Michael, and Barry Simon. "Functional Analysis, volume 1 of
Methods of Modern Mathematical Physics." (1972).
[7] Giacosa, Francesco (2014). "On unitary evolution and collapse in quantum mechanics". Quanta 3, (1), 156.
[8] J. von Neumann, Mathematical Foundations of Quantum Mechanics, 1955.
[9] Stone, M. H. (1932), "On one-parameter unitary groups in Hilbert
Space", Annals of Mathematics 33 (3): 643-648
[10] A. Petersen, Quantum Physics and the Philosophical Tradition, MIT
Press 1968
[11] R. Shankar, Principles of Quantum Mechanics, 2nd ed., 1994.
[12] D.R. Bates, Quantum Theory 1. Elements (AP, 1961).
[13] M.V. Berry, K.E. Mount, Semiclassical approximations in wave mechanics, Rep. Prog. Phys. 35, 315-397, 1972.
[14] L.I. Schiff, Quantum mechanics, 1949.
[15] G. Esposito, G. Marmo, G. Sudarshan, From Classical to Quantum
Mechanics, 2004.