1. Quantum field theory and Green’s function
• Condensed matter physics studies systems with large numbers of identical particles (e.g. electrons, phonons, photons) at finite temperature.
• Quantum field theory arises naturally if we consider a quantum system composed of a large number of identical particles at finite temperature.
1.1. Quantum statistical physics
1.1.1. Classical statistical physics
Classical statistical physics tells us that in the grand canonical ensemble, the expectation value of a thermodynamic quantity (called $X$ here) can be written as
$$\langle X\rangle=\frac{\sum_n X_n \exp[-\beta(E_n-\mu N_n)]}{\sum_n \exp[-\beta(E_n-\mu N_n)]} \tag{1.1}$$
where $\beta=1/T$ (the Boltzmann constant $k_B$ is set to unity) and $\mu$ is the chemical potential. Here, we sum over all possible states of the system ($n$). For the $n$th state, the total energy of the whole system is $E_n$, the particle number is $N_n$, and the value of the quantity $X$ is $X_n$.
1.1.2. Quantum statistical physics
For a quantum system at finite temperature $T$, we can rewrite this formula in terms of quantum operators:
$$\langle X\rangle=\frac{\sum_n \langle n|\hat X \exp[-\beta(\hat H-\mu \hat N)]|n\rangle}{\sum_n \langle n|\exp[-\beta(\hat H-\mu \hat N)]|n\rangle}=\frac{\mathrm{Tr}\,\hat X \exp[-\beta(\hat H-\mu \hat N)]}{\mathrm{Tr}\,\exp[-\beta(\hat H-\mu \hat N)]} \tag{1.2}$$
where $n$ sums over a complete basis $\{|n\rangle\}$ of the Hilbert space, $\hat H$ is the Hamiltonian, $\hat N$ is the particle number operator, and $\hat X$ is the quantum operator of the quantity that we want to compute. The sum $\sum_n \langle n|\cdots|n\rangle$ is also known as a trace, $\mathrm{Tr}$.
Here, we will not prove this operator formula, because it is not part of the main focus of this lecture. Instead, we will only demonstrate that by choosing a proper basis, this (quantum) operator formula recovers the classical formula shown above, and therefore the operator formula is a natural quantum generalization of classical statistical physics.
If $[\hat H,\hat N]=0$, we can choose the common eigenstates of $\hat H$ and $\hat N$ as our basis. In this basis, we have
$$\hat H|n\rangle=E_n|n\rangle \tag{1.3}$$
and
$$\hat N|n\rangle=N_n|n\rangle \tag{1.4}$$
where $E_n$ is the eigen-energy and $N_n$ is the eigenvalue of the particle number operator.
Using this basis, it is easy to check that
$$\exp[-\beta(\hat H-\mu \hat N)]\,|n\rangle=\exp[-\beta(E_n-\mu N_n)]\,|n\rangle \tag{1.5}$$
and therefore,
$$\langle X\rangle=\frac{\sum_n \langle n|\hat X \exp[-\beta(\hat H-\mu \hat N)]|n\rangle}{\sum_n \langle n|\exp[-\beta(\hat H-\mu \hat N)]|n\rangle}=\frac{\sum_n \langle n|\hat X|n\rangle \exp[-\beta(E_n-\mu N_n)]}{\sum_n \exp[-\beta(E_n-\mu N_n)]}=\frac{\sum_n X_n \exp[-\beta(E_n-\mu N_n)]}{\sum_n \exp[-\beta(E_n-\mu N_n)]} \tag{1.6}$$
Here, we define $X_n=\langle n|\hat X|n\rangle$.
Note: the quantum field theory used in particle physics is the $T\to 0$ limit of this finite-temperature theory.
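A minimal numerical sketch of the statement above: for a toy $4\times4$ model in which $\hat H$ and $\hat N$ commute, the trace formula (1.2) agrees with the eigenbasis sum (1.1)/(1.6). The matrices, eigenvalues, and observable below are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, mu = 2.0, 0.3

# Common eigenbasis: H and N are simultaneously diagonalized by the orthogonal U.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))
E  = np.array([0.0, 0.5, 1.2, 2.0])   # eigen-energies E_n (toy values)
Nn = np.array([0.0, 1.0, 1.0, 2.0])   # eigenvalues of the number operator
H = U @ np.diag(E)  @ U.T
N = U @ np.diag(Nn) @ U.T
X = rng.normal(size=(4, 4)); X = 0.5 * (X + X.T)   # some Hermitian observable

# Quantum formula (1.2): Tr[X exp(-beta(H - mu N))] / Tr[exp(-beta(H - mu N))]
rho = U @ np.diag(np.exp(-beta * (E - mu * Nn))) @ U.T
avg_trace = np.trace(X @ rho) / np.trace(rho)

# Eigenbasis sum (1.1)/(1.6): sum over eigenstates |n>
Xn = np.diag(U.T @ X @ U)             # X_n = <n|X|n>
w  = np.exp(-beta * (E - mu * Nn))
avg_sum = np.sum(Xn * w) / np.sum(w)

print(np.allclose(avg_trace, avg_sum))   # True
```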
Q: What happens if $[\hat H,\hat N]\neq 0$?
A: If $[\hat H,\hat N]\neq 0$, the particle number of this system is not conserved. As will be discussed in later chapters, the particle number conservation law is related to the U(1) phase symmetry for charge-neutral particles (or the gauge symmetry for charged particles). The absence of particle number conservation implies the breaking of this phase/gauge symmetry, which means that the system is a superfluid/superconductor.
1.2. Second quantization
• The terminology “second quantization” is due to historical reasons. We are NOT quantizing something for a second time. We are just using a new basis to handle indistinguishable particles.
Q: Why do we want to use second quantization?
A: It is the most convenient way to handle a large number of indistinguishable particles.
1.2.1. wavefunctions for distinguishable particles
It is easy to write down quantum wavefunctions for systems composed of distinguishable particles. For example, if we have two distinguishable particles, particle one in state $\psi_1$ and particle two in $\psi_2$, the wavefunction is
$$\Psi(r_1,r_2)=\psi_1(r_1)\,\psi_2(r_2) \tag{1.7}$$
If we have $n$ distinguishable particles, the wavefunction can be written as
$$\Psi(r_1,r_2,r_3,\ldots,r_n)=\psi_1(r_1)\,\psi_2(r_2)\cdots\psi_n(r_n) \tag{1.8}$$
Here, the $i$th particle is in the quantum state $\psi_i$ for $i=1,2,\ldots,n$.
1.2.2. wave functions for indistinguishable particles
For indistinguishable particles, the wavefunction is very complicated if the particle number is large. This is because we need to ensure that the wavefunction is symmetric (anti-symmetric) for bosons (fermions). If we have two indistinguishable particles, one particle in state $\psi_1$ and the other in state $\psi_2$, the wavefunction is
$$\Psi(r_1,r_2)=\psi_1(r_1)\,\psi_2(r_2)\pm\psi_2(r_1)\,\psi_1(r_2) \tag{1.9}$$
Here, the $+$ sign is for bosons and the $-$ sign is for fermions. It is easy to check that the wavefunction is symmetric or anti-symmetric,
$$\Psi(r_1,r_2)=\pm\,\Psi(r_2,r_1) \tag{1.10}$$
If we have 3 indistinguishable particles, the wavefunction contains $3!=6$ terms,
$$\Psi(r_1,r_2,r_3)=\psi_1(r_1)\psi_2(r_2)\psi_3(r_3)\pm\psi_2(r_1)\psi_1(r_2)\psi_3(r_3)\pm\psi_1(r_1)\psi_3(r_2)\psi_2(r_3)\pm\psi_3(r_1)\psi_2(r_2)\psi_1(r_3)+\psi_2(r_1)\psi_3(r_2)\psi_1(r_3)+\psi_3(r_1)\psi_1(r_2)\psi_2(r_3) \tag{1.11}$$
If we have $n$ indistinguishable particles,
$$\Psi(r_1,r_2,r_3,\ldots,r_n)=\sum_P (\pm1)^P\,\psi_{i_1}(r_1)\,\psi_{i_2}(r_2)\cdots\psi_{i_n}(r_n) \tag{1.12}$$
where $P$ runs over all permutations. For a system with $n$ particles there are $n!$ different permutations, and thus the r.h.s. has $n!$ terms. For a system with a large number of indistinguishable particles, it is extremely complicated to write down its wavefunction in this way. For example, for a system with just ten particles, $n=10$, there are already $10!\approx 3.6$ million terms, which is impractical to write down. In condensed matter physics, a typical system has a huge number of particles ($\sim 10^{23}$), and thus we need a better way to write down our quantum theory.
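To make the bookkeeping concrete, here is a small Python sketch of Eq. (1.12) for three fermions: the $n!$-term antisymmetrized sum equals a determinant of the single-particle orbitals evaluated at the particle positions (a Slater determinant), which is exactly why the term-by-term form becomes hopeless for large $n$. The toy orbitals below are arbitrary and only for illustration.

```python
import numpy as np
from itertools import permutations

def orbitals(r):
    """Three toy single-particle wavefunctions psi_i(r) evaluated at r."""
    return np.array([np.exp(-r**2), r * np.exp(-r**2), (2 * r**2 - 1) * np.exp(-r**2)])

def psi_permutation_sum(rs):
    """Eq. (1.12) for fermions: sum over all permutations with sign (-1)^P."""
    n = len(rs)
    total = 0.0
    for perm in permutations(range(n)):
        sign = np.linalg.det(np.eye(n)[list(perm)])   # parity of the permutation
        total += sign * np.prod([orbitals(rs[k])[perm[k]] for k in range(n)])
    return total

def psi_slater(rs):
    """Same object written as a determinant det[psi_i(r_j)]."""
    M = np.array([[orbitals(r)[i] for i in range(len(rs))] for r in rs])
    return np.linalg.det(M)

rs = np.array([0.3, -0.7, 1.1])
print(np.isclose(psi_permutation_sum(rs), psi_slater(rs)))   # True
```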
Note: in particle physics, although one typically studies systems with a very small number of particles (e.g. two particles colliding with each other in a collider), it is still necessary to consider a large number of particles because of “virtual particles”. For example, the E&M interaction between two electrons is realized by exchanging virtual photons. If one takes these virtual particles into account, the particle number is not very small, and thus we should use second quantization.
1.2.3. Fock space
The reason why it is hard to write down wavefunctions for indistinguishable particles is that when we write down the wavefunction, we need to specify which particle is in which quantum state. For example, $\psi_i(r_j)$ means that particle number $j$ is in the quantum state $\psi_i$. This procedure is natural for distinguishable particles, where we do know which particle is in a particular quantum state. However, for indistinguishable particles, we don’t know which particle is in this state, because we cannot distinguish these particles. In other words, the traditional way to write down a wavefunction is designed for distinguishable particles. For indistinguishable particles, we need to first treat them as distinguishable particles, and then repeat the wavefunction over all different permutations to make the wavefunction symmetric/anti-symmetric. This procedure is not natural and is very complicated.
For indistinguishable particles, it is more natural to use the occupation number basis, which is known as the Fock space. In the Fock space, a many-body quantum state is written in terms of occupation numbers:
$$|\Psi\rangle=|n_1,n_2,n_3,\ldots,n_N\rangle \tag{1.13}$$
where $n_i$ is the number of particles in state $|\psi_i\rangle$ ($n_i$ is known as the occupation number). Here we don’t specify which particle is in the state $|\psi_i\rangle$. Instead, we just count the number of particles in this state. In this approach, the particles are indistinguishable automatically, and thus all the complications mentioned above are avoided.
1.2.4. Creation and annihilation operators:
In the Fock space, physical operators can be written in terms of creation and annihilation operators. The creation operator increases the occupation number by 1 and the annihilation operator reduces the occupation number by 1.
For bosons, we have
$$b_i^\dagger|n_1,n_2,\ldots,n_i,\ldots,n_N\rangle=\sqrt{n_i+1}\,|n_1,n_2,\ldots,n_i+1,\ldots,n_N\rangle \tag{1.14}$$
$$b_i|n_1,n_2,\ldots,n_i,\ldots,n_N\rangle=\sqrt{n_i}\,|n_1,n_2,\ldots,n_i-1,\ldots,n_N\rangle \tag{1.15}$$
and it is easy to check that these operators obey the following commutation relations:
$$[b_i,b_j]=[b_i^\dagger,b_j^\dagger]=0 \tag{1.16}$$
$$[b_i,b_j^\dagger]=\delta_{ij} \tag{1.17}$$
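A quick numerical check of Eqs. (1.14)–(1.17) for a single bosonic mode, using a truncated occupation-number basis. This is only a sketch under the assumption that a finite cutoff $n_{\max}$ is acceptable; the commutator $[b,b^\dagger]=1$ then holds exactly except in the highest (truncated) state.

```python
import numpy as np

nmax = 6
n = np.arange(nmax)                        # occupation numbers 0 .. nmax-1

# b^dagger |n> = sqrt(n+1) |n+1>,  b |n> = sqrt(n) |n-1>
b_dag = np.diag(np.sqrt(n[1:]), k=-1)      # matrix elements <n+1| b^dag |n>
b     = b_dag.T                            # annihilation operator (real matrices)

comm = b @ b_dag - b_dag @ b
print(np.round(np.diag(comm), 3))          # [1. 1. 1. 1. 1. -5.]: identity except at the cutoff
print(np.allclose(b_dag @ b, np.diag(n)))  # True: b^dag b counts the occupation number
```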
For fermions, due to the Pauli exclusion principle, each quantum state can have at most one particle (so $n_i=0$ or $1$):
$$c_i^\dagger|\ldots,1_i,\ldots\rangle=0 \tag{1.18}$$
$$c_i^\dagger|\ldots,0_i,\ldots\rangle=(-1)^{\sum_{j=1}^{i-1}n_j}\,|\ldots,1_i,\ldots\rangle \tag{1.19}$$
$$c_i|\ldots,0_i,\ldots\rangle=0 \tag{1.20}$$
$$c_i|\ldots,1_i,\ldots\rangle=(-1)^{\sum_{j=1}^{i-1}n_j}\,|\ldots,0_i,\ldots\rangle \tag{1.21}$$
It is easy to check that the fermion creation/annihilation operators obey anti-commutation relations:
$$\{c_i,c_j\}=\{c_i^\dagger,c_j^\dagger\}=0 \tag{1.22}$$
$$\{c_i,c_j^\dagger\}=\delta_{ij} \tag{1.23}$$
Note: the factor $(-1)^{\sum_{j=1}^{i-1}n_j}$ is dropped in many textbooks, because $+|\psi\rangle$ and $-|\psi\rangle$ represent the same quantum state. However, this factor is crucial to ensure the anti-commutation relation $\{c_i,c_j^\dagger\}=\delta_{ij}$. We will study this factor later when we examine 1D quantum systems.
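The following sketch builds several fermionic modes as matrices, including the string factor $(-1)^{\sum_{j<i}n_j}$ discussed in the note above (a Jordan–Wigner-type construction), and verifies the anti-commutation relations (1.22)–(1.23) numerically. The mode count and the ordering convention for the string are arbitrary choices for illustration.

```python
import numpy as np
from functools import reduce

def kron_all(mats):
    return reduce(np.kron, mats)

I2 = np.eye(2)
c_local = np.array([[0.0, 1.0],   # annihilates the occupied state of one mode
                    [0.0, 0.0]])
Z = np.diag([1.0, -1.0])          # (-1)^{n_j} on mode j (basis ordered |0>, |1>)

def c(i, nmodes):
    """Annihilation operator for mode i (0-based), with the sign string on modes j < i."""
    mats = [Z] * i + [c_local] + [I2] * (nmodes - i - 1)
    return kron_all(mats)

nmodes = 3
anti = lambda A, B: A @ B + B @ A
ok = True
for i in range(nmodes):
    for j in range(nmodes):
        ci, cj = c(i, nmodes), c(j, nmodes)
        ok &= np.allclose(anti(ci, cj), 0)                               # {c_i, c_j} = 0
        ok &= np.allclose(anti(ci, cj.T), np.eye(2**nmodes) * (i == j))  # {c_i, c_j^dag} = delta_ij
print(ok)   # True; dropping the Z string would break the i != j relations
```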
1.2.5. Particle number operator
For bosons, using the definition above, it is easy to check that
$$b_i^\dagger b_i|\ldots,n_i,\ldots\rangle=n_i\,|\ldots,n_i,\ldots\rangle \tag{1.24}$$
Therefore, the particle number operator for bosons in the quantum state $|\psi_i\rangle$ is
$$\hat n_i=b_i^\dagger b_i \tag{1.25}$$
As a result, the total particle number operator is
$$\hat N=\sum_i \hat n_i=\sum_i b_i^\dagger b_i \tag{1.26}$$
For fermions, it is easy to notice that
$$c_i^\dagger c_i|\ldots,0_i,\ldots\rangle=0 \tag{1.27}$$
$$c_i^\dagger c_i|\ldots,1_i,\ldots\rangle=|\ldots,1_i,\ldots\rangle \tag{1.28}$$
Therefore, the particle number operator for fermions in the quantum state $|\psi_i\rangle$ is
$$\hat n_i=c_i^\dagger c_i \tag{1.29}$$
As a result, the total particle number operator is
$$\hat N=\sum_i \hat n_i=\sum_i c_i^\dagger c_i \tag{1.30}$$
In the next couple of sections, we will only consider fermions as our example ($c$ and $c^\dagger$), but the same conclusions are applicable to bosons ($b$ and $b^\dagger$).
1.2.6. Quantum states
In the Fock space, all quantum states can be written in terms of creation and annihilation operators. First, we need to define the vacuum (the ground state in high-energy physics) $|0\rangle$ by assuming that there is one and only one state in the Fock space that is annihilated by every annihilation operator. This state is our vacuum:
$$c_i|0\rangle=0 \quad\text{for any } c_i \tag{1.31}$$
It is easy to check that this quantum state has zero particles:
$$\hat N|0\rangle=\sum_i c_i^\dagger c_i|0\rangle=0 \tag{1.32}$$
Then, any state with one particle (a one-particle state) can be written as
$$|\psi\rangle=\sum_i a_i\,c_i^\dagger|0\rangle \tag{1.33}$$
Two-particle states can be written as
$$|\psi\rangle=\sum_{ij} a_{ij}\,c_j^\dagger c_i^\dagger|0\rangle \tag{1.34}$$
And $n$-particle states can be written as
$$|\psi\rangle=\sum_{i_1\ldots i_n} a_{i_1\ldots i_n}\,c_{i_n}^\dagger\cdots c_{i_2}^\dagger c_{i_1}^\dagger|0\rangle \tag{1.35}$$
1.2.7. quantum operators
Any quantum operator can be written in terms of creation and annihilation operators ($c$'s and $c^\dagger$'s):
$$X=\sum f_{k_1,k_2\ldots k_m,q_1,q_2\ldots q_{m'}}\,c_{k_1}c_{k_2}\cdots c_{k_m}\,c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger \tag{1.36}$$
If every term in this operator has $m$ annihilation operators and $m'$ creation operators, this operator is known as an $(m+m')$-fermion operator.
1.2.8. Physical observables and correlation functions
Since all quantum operators can be written in terms of creation and annihilation operators, the expectation value of any physical observable can be written in terms of expectation values of creation and annihilation operators:
$$X=\langle X\rangle=\sum f_{k_1,k_2\ldots k_m,q_1,q_2\ldots q_{m'}}\,\langle c_{k_1}c_{k_2}\cdots c_{k_m}\,c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger\rangle \tag{1.37}$$
Therefore, in second quantization, anything we need to compute reduces to objects of the form $\langle c_{k_1}c_{k_2}\cdots c_{k_m}c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger\rangle$. This type of object is known as a correlation function. If there are $N$ creation and annihilation operators in total, it is known as an $N$-point correlation function. Here, $N=m+m'$.
Q: What are the simplest nontrivial correlation functions? Here, “simplest” means that we want the number of creation and annihilation operators to be as small as possible; “nontrivial” means the correlation function must be nonzero.
A: If the particle number is conserved ($[\hat H,\hat N]=0$, which is true for most of the cases we study), they are the two-point correlation functions. This is because, if $[\hat H,\hat N]=0$, we can prove that $\langle c_{k_1}c_{k_2}\cdots c_{k_m}c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger\rangle=0$ if $m\neq m'$. In other words, a nontrivial correlation function must have the same number of creation and annihilation operators. Therefore, we only need to consider $N$-point correlation functions with $N$ even. The smallest positive even integer is 2, so the simplest nontrivial correlation functions are the two-point correlation functions $\langle c_k c_q^\dagger\rangle$. The next one is the four-point correlation function $\langle c_{k_1}c_{k_2}c_{q_1}^\dagger c_{q_2}^\dagger\rangle$.
Proof: As shown above, if $[\hat H,\hat N]=0$, the particle number is a conserved quantity, and thus we can choose the common eigenstates of $\hat H$ and $\hat N$ as our basis. In this basis, we have
$$\langle c_{k_1}c_{k_2}\cdots c_{k_m}c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger\rangle=\frac{\sum_n \langle n|c_{k_1}c_{k_2}\cdots c_{k_m}c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger|n\rangle \exp[-\beta(E_n-\mu N_n)]}{\sum_n \exp[-\beta(E_n-\mu N_n)]} \tag{1.38}$$
Because $|n\rangle$ is an eigenstate of the total particle number operator $\hat N$, the quantum state $|n\rangle$ has $N_n$ particles.
Define
$$|\psi\rangle=c_{k_1}c_{k_2}\cdots c_{k_m}c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger|n\rangle \tag{1.39}$$
Because a creation operator increases the particle number by 1, while an annihilation operator reduces it by 1, it is easy to see that $|\psi\rangle$ has $N_n+m'-m$ particles.
If $m'\neq m$, then $N_n+m'-m\neq N_n$, which means that the quantum states $|n\rangle$ and $|\psi\rangle$ have different numbers of particles, i.e. they are both eigenstates of $\hat N$ but with different eigenvalues. In quantum mechanics, we learned that two eigenstates of the same operator with different eigenvalues are orthogonal, so we know immediately that $\langle n|\psi\rangle=0$, and therefore $\langle n|c_{k_1}c_{k_2}\cdots c_{k_m}c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger|n\rangle=0$. As a result, the correlation function $\langle c_{k_1}c_{k_2}\cdots c_{k_m}c_{q_1}^\dagger c_{q_2}^\dagger\cdots c_{q_{m'}}^\dagger\rangle=0$.
Note: If $[\hat H,\hat N]\neq 0$, we will have nonzero correlation functions with $m\neq m'$. For example, in superfluids $\langle b\rangle\neq 0$, and in superconductors $\langle c_{k_1}c_{k_2}\rangle\neq 0$. These cases will be considered later.
1.2.9. Summary
• The terminology “second quantization” is due to historical reasons. We are NOT quantizing something for a second time. We are just using a new basis to handle indistinguishable particles.
• In both high-energy and condensed matter physics, quantum field theory utilizes the “second quantization” construction. The reason is that quantum field theory deals with many indistinguishable particles, and the second-quantization formalism is the most natural way to describe this type of physics.
• In second quantization (and the same is true for quantum field theory), computing any physical quantity reduces to computing various correlation functions.
• If the particle number is conserved, only correlation functions with the same number of creation and annihilation operators are nontrivial.
• The order matters: $\langle c_k c_q^\dagger\rangle\neq\langle c_q^\dagger c_k\rangle$. Therefore, depending on the way we order the creation/annihilation operators, we can define different correlation functions. This topic will be addressed in the next section.
1.3. Hamiltonian in second quantization
Let’s consider a Hamiltonian with three terms: the kinetic energy $H_K$, the potential energy $H_P$, and the interactions between different particles $H_I$:
$$H=H_K+H_P+H_I \tag{1.40}$$
1.3.1. Potential energy
For particles in a potential $U(\vec r)$, the total potential energy (summed over all particles) is
$$\mathrm{P.E.}=\int d\vec r\; U(\vec r)\,\rho(\vec r) \tag{1.41}$$
where $\rho(\vec r)$ is the particle density at $\vec r$. In second quantization, the particle density is
$$\hat\rho(\vec r)=\psi^\dagger(\vec r)\,\psi(\vec r) \tag{1.42}$$
Here, $\psi^\dagger(\vec r)$ creates an electron at $\vec r$ and $\psi(\vec r)$ annihilates an electron at $\vec r$. Therefore, the potential-energy part of the Hamiltonian is
$$H_P=\int d\vec r\; U(\vec r)\,\psi^\dagger(\vec r)\,\psi(\vec r) \tag{1.43}$$
1.3.2. Kinetic energy
Momentum space
The total kinetic energy for a system with many particles is
$$\mathrm{K.E.}=\int\frac{d\vec k}{(2\pi)^d}\;\epsilon(\vec k)\,n(\vec k) \tag{1.44}$$
Here, $\epsilon(\vec k)$ is the dispersion relation of a single particle and $n(\vec k)$ is the number of particles with momentum $\vec k$. In the denominator, $d$ is the spatial dimension of the system. In second quantization, the quantum operator for $n(\vec k)$ is
$$\hat n(\vec k)=\psi^\dagger(\vec k)\,\psi(\vec k) \tag{1.45}$$
Here, $\psi^\dagger(\vec k)$ creates an electron with momentum $\vec k$ and $\psi(\vec k)$ annihilates an electron with momentum $\vec k$. Therefore,
$$H_K=\int\frac{d\vec k}{(2\pi)^d}\;\epsilon(\vec k)\,\psi^\dagger(\vec k)\,\psi(\vec k) \tag{1.46}$$
If we consider nonrelativistic particles, $\epsilon(\vec k)=k^2/2m$, so
$$H_K=\int\frac{d\vec k}{(2\pi)^d}\;\frac{k^2}{2m}\,\psi^\dagger(\vec k)\,\psi(\vec k) \tag{1.47}$$
Q: What shall we do for relativistic particles?
A: We cannot just set $\epsilon(k)=c|k|$ or $\epsilon(k)=\sqrt{c^2k^2+m_0^2c^4}$. This is because $|x|$ and $\sqrt{x}$ are not analytic functions (they are singular at $k=0$). For fermions, it turns out that we need two species of fermions to obtain a relativistic dispersion. This topic will be discussed later, when we study the Dirac theory.
Real space
The momentum-space and real-space creation/annihilation operators are connected by the Fourier transformation. For the annihilation operator,
$$\psi(\vec k)=\int d\vec r\; e^{-i\vec k\cdot\vec r}\,\psi(\vec r) \tag{1.48}$$
The inverse transformation is
$$\psi(\vec r)=\int\frac{d\vec k}{(2\pi)^d}\;e^{i\vec k\cdot\vec r}\,\psi(\vec k) \tag{1.49}$$
The transformations for the creation operators are obtained by taking the Hermitian conjugate:
$$\psi^\dagger(\vec k)=\int d\vec r\; e^{i\vec k\cdot\vec r}\,\psi^\dagger(\vec r) \tag{1.50}$$
$$\psi^\dagger(\vec r)=\int\frac{d\vec k}{(2\pi)^d}\;e^{-i\vec k\cdot\vec r}\,\psi^\dagger(\vec k) \tag{1.51}$$
Therefore, it is easy to check that
$$H_K=\int\frac{d\vec k}{(2\pi)^d}\;\frac{k^2}{2m}\,\psi^\dagger(\vec k)\,\psi(\vec k)=\int d\vec r\;\frac{\nabla\psi^\dagger(\vec r)\cdot\nabla\psi(\vec r)}{2m} \tag{1.52}$$
Indeed,
$$\int d\vec r\;\frac{\nabla\psi^\dagger(\vec r)\cdot\nabla\psi(\vec r)}{2m}=\int d\vec r\;\frac{1}{2m}\,\nabla\!\left[\int\frac{d\vec k}{(2\pi)^d}e^{-i\vec k\cdot\vec r}\,\psi^\dagger(\vec k)\right]\cdot\nabla\!\left[\int\frac{d\vec k'}{(2\pi)^d}e^{i\vec k'\cdot\vec r}\,\psi(\vec k')\right]$$
$$=\int\frac{d\vec k}{(2\pi)^d}\int\frac{d\vec k'}{(2\pi)^d}\;\frac{\vec k\cdot\vec k'}{2m}\,\psi^\dagger(\vec k)\,\psi(\vec k')\int d\vec r\;e^{i(\vec k'-\vec k)\cdot\vec r}=\int\frac{d\vec k}{(2\pi)^d}\int\frac{d\vec k'}{(2\pi)^d}\;\frac{\vec k\cdot\vec k'}{2m}\,\psi^\dagger(\vec k)\,\psi(\vec k')\,(2\pi)^d\,\delta(\vec k-\vec k')$$
$$=\int\frac{d\vec k}{(2\pi)^d}\;\frac{k^2}{2m}\,\psi^\dagger(\vec k)\,\psi(\vec k)=\int\frac{d\vec k}{(2\pi)^d}\;\epsilon(\vec k)\,n(\vec k) \tag{1.53}$$
1.3.3. Lattice systems
In a lattice, if we ignore the interactions between electrons, the Hamiltonian contains two terms:
$$H=H_K+H_P=\int d\vec r\left[\frac{\nabla\psi^\dagger(\vec r)\cdot\nabla\psi(\vec r)}{2m}+U(\vec r)\,\psi^\dagger(\vec r)\,\psi(\vec r)\right] \tag{1.54}$$
where the first term is the kinetic energy and the second term is the lattice potential, with $U(\vec r)$ a periodic potential. In principle, we could use this Hamiltonian. However, it is not the most convenient way to handle a lattice system. A more convenient way utilizes Bloch waves and the band structure. In a lattice system, the kinetic energy of a single particle is described by the band structure $\epsilon_n(\vec k)$, where $n$ is the band index and $\vec k$ is a momentum point in the (reduced) Brillouin zone. Therefore, the total energy of the system (if we ignore interactions) is
$$E=\sum_n\int_{\mathrm{BZ}}\frac{d\vec k}{(2\pi)^d}\;\epsilon_n(\vec k)\,n_n(\vec k) \tag{1.55}$$
Here, we sum over all bands $n$, and the integral is over the first Brillouin zone (BZ). $n_n(\vec k)$ is the occupation number for the quantum state in band $n$ at momentum $\vec k$. In second quantization, the quantum operator for $n_n(\vec k)$ is
$$\hat n_n(\vec k)=\psi_n^\dagger(\vec k)\,\psi_n(\vec k) \tag{1.56}$$
where $\psi_n^\dagger(\vec k)$ creates a particle in band $n$ with momentum $\vec k$; in other words, it is the creation operator for a Bloch wave, and $\psi_n(\vec k)$ is the corresponding annihilation operator. Therefore, the Hamiltonian for non-interacting electrons is
$$H_K=\sum_n\int_{\mathrm{BZ}}\frac{d\vec k}{(2\pi)^d}\;\epsilon_n(\vec k)\,\psi_n^\dagger(\vec k)\,\psi_n(\vec k) \tag{1.57}$$
For simplicity, we will not consider lattices in this chapter. Instead, we will only consider particles moving in free space with dispersion $k^2/2m$. But please keep in mind that for a lattice system, we just need to substitute $\epsilon=k^2/2m$ by the corresponding dispersion for Bloch waves.
Both the kinetic-energy part of the Hamiltonian and the potential-energy part contain one creation and one annihilation operator, so they are both known as quadratic terms, or two-fermion terms. If the Hamiltonian contains only these two terms, we call the system a non-interacting system, because there is no interaction between particles in such a Hamiltonian.
1.3.4. Interactions
Let’s consider the Coulomb interaction between electrons. The total Coulomb energy is
$$E_{\mathrm{Int}}=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\rho(\vec r)\,\rho(\vec r') \tag{1.58}$$
where $V(r)=e^2/r$ and $\rho(\vec r)$ is the particle density. In second quantization, we know that the particle density is
$$\hat\rho(\vec r)=\psi^\dagger(\vec r)\,\psi(\vec r) \tag{1.59}$$
Therefore, the interaction part of the Hamiltonian is
$$H_I=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r') \tag{1.60}$$
The interaction term contains four creation/annihilation operators, and thus this term is called a quartic term or a four-fermion term.
Typically, we reorder the operators using the anti-commutation relations between creation/annihilation operators:
$$H_I=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r')$$
$$=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r')\,\psi(\vec r)+\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\delta(\vec r-\vec r')\,\psi^\dagger(\vec r)\,\psi(\vec r')$$
$$=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r')\,\psi(\vec r)+\frac{1}{2}\int d\vec r\;V(0)\,\psi^\dagger(\vec r)\,\psi(\vec r)$$
$$=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r')\,\psi(\vec r)+\frac{V(0)}{2}\,N \tag{1.61}$$
where $N$ is the total number of particles in the system. The last term may look problematic (it is singular), because for the Coulomb interaction $V(0)=1/0=\infty$, but it will not cause any trouble. We can simply drop it, because it just shifts the total energy by a constant. In a solid-state material, this term is canceled by the potential energy from the nuclei.
Very typically, when we write down the Hamiltonian, we put the operators in such an order that the creation operators are on the left and the annihilation operators are on the right. This order is called normal order, and the procedure of putting operators into such an order is known as “normal ordering”.
Typically, one uses two colons to represent normal ordering. If we put a product of creation and annihilation operators between two colons, it means that we reorder these operators into normal order. For bosons, we just reorder the operators. For fermions, we need an extra factor $+1$ or $-1$, depending on whether an even or odd number of transpositions is needed to reorder the operators. For example,
$$:\psi^\dagger(r)\,\psi(r'):\;=\psi^\dagger(r)\,\psi(r') \tag{1.62}$$
$$:\psi(r')\,\psi^\dagger(r):\;=-\psi^\dagger(r)\,\psi(r') \tag{1.63}$$
$$:\psi(r')\,\psi(r'')\,\psi^\dagger(r):\;=\psi^\dagger(r)\,\psi(r')\,\psi(r'') \tag{1.64}$$
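A small sketch of the normal-ordering rule used in Eqs. (1.62)–(1.64): inside the colons, creation operators are simply moved to the left, and for fermions each transposition contributes a factor $-1$; no (anti)commutator terms are generated, which is exactly what the colons mean. The token representation below is my own minimal choice.

```python
def normal_order(ops, fermions=True):
    """Return (sign, reordered list) with all 'dag' operators moved to the left.

    Each operator is a tuple ('dag' or 'ann', label).  The relative order among
    creation operators and among annihilation operators is preserved.
    """
    ops = list(ops)
    sign = 1
    changed = True
    while changed:                       # bubble-sort creation operators leftward
        changed = False
        for k in range(len(ops) - 1):
            if ops[k][0] == 'ann' and ops[k + 1][0] == 'dag':
                ops[k], ops[k + 1] = ops[k + 1], ops[k]
                if fermions:
                    sign = -sign         # one transposition -> factor -1
                changed = True
    return sign, ops

# : psi(r') psi^dag(r) :  ->  - psi^dag(r) psi(r')        (Eq. 1.63)
print(normal_order([('ann', "r'"), ('dag', 'r')]))
# : psi(r') psi(r'') psi^dag(r) :  ->  + psi^dag(r) psi(r') psi(r'')   (Eq. 1.64)
print(normal_order([('ann', "r'"), ('ann', "r''"), ('dag', 'r')]))
```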
For the interaction term, instead of using the most straightforward formula,
$$H_I=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r') \tag{1.65}$$
we typically use the normal-ordered interaction term
$$H_I=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\;:\psi^\dagger(\vec r)\,\psi(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r'):\;=\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r')\,\psi(\vec r) \tag{1.66}$$
These two definitions differ by a constant, which doesn’t play any fundamental role. In other words, using normal order shifts the total energy by a constant. In fact, the reason we use normal ordering is that by doing so, the energy of the vacuum state is set to zero. In normal order, the annihilation operators are put on the right side. By definition, the vacuum state is destroyed by any annihilation operator,
$$\psi|0\rangle=0 \tag{1.67}$$
Therefore,
$$:H:|0\rangle=\cdots\psi|0\rangle=0=0\,|0\rangle \tag{1.68}$$
So the vacuum state is an eigenstate of $:H:$ with zero eigen-energy.
1.3.5. Summary
In this chapter, we consider the simplest case
$$H=\int d\vec r\;\frac{\nabla\psi^\dagger(\vec r)\cdot\nabla\psi(\vec r)}{2m}+\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r')\,\psi(\vec r) \tag{1.69}$$
The quadratic term is the kinetic energy for non-relativistic particles, $\epsilon=k^2/2m$. The quartic term is the normal-ordered four-fermion interaction. However, it is important to keep in mind that the conclusions and methods can be generalized easily to other, more complicated cases.
1.4. Equation of motion for correlation functions
In the Heisenberg picture, the equation of motion of an operator $X$ is
$$i\,\frac{\partial X(t)}{\partial t}=[X(t),H(t)] \tag{1.70}$$
Here, we set the Planck constant $\hbar$ to unity for simplicity.
1.4.1. Equation of motion for annihilation operators
For the Hamiltonian
$$H=\int d\vec r\;\frac{\nabla\psi^\dagger(\vec r)\cdot\nabla\psi(\vec r)}{2m}+\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r)\,\psi^\dagger(\vec r')\,\psi(\vec r')\,\psi(\vec r) \tag{1.71}$$
the equation of motion for the annihilation operator $\psi$ is
$$i\,\frac{\partial\psi(\vec r_0,t)}{\partial t}=[\psi(\vec r_0,t),H]=\int d\vec r\;\frac{\nabla\delta(\vec r-\vec r_0)\cdot\nabla\psi(\vec r)}{2m}+\frac{1}{2}\int d\vec r\,d\vec r'\;\delta(\vec r-\vec r_0)\,V(|\vec r-\vec r'|)\,\psi^\dagger(\vec r')\,\psi(\vec r')\,\psi(\vec r)+\frac{1}{2}\int d\vec r\,d\vec r'\;V(|\vec r-\vec r'|)\,\delta(\vec r'-\vec r_0)\,\psi^\dagger(\vec r)\,\psi(\vec r)\,\psi(\vec r')$$
$$=-\frac{\nabla^2\psi(\vec r_0,t)}{2m}+\int d\vec r\;V(|\vec r_0-\vec r|)\,\psi^\dagger(\vec r,t)\,\psi(\vec r,t)\,\psi(\vec r_0,t) \tag{1.72}$$
Similarly, for the conjugate operator $\psi^\dagger$, we have
$$-i\,\frac{\partial\psi^\dagger(\vec r_0,t)}{\partial t}=-\frac{\nabla^2\psi^\dagger(\vec r_0,t)}{2m}+\psi^\dagger(\vec r_0,t)\int d\vec r\;V(|\vec r_0-\vec r|)\,\psi^\dagger(\vec r,t)\,\psi(\vec r,t) \tag{1.73}$$
For noninteracting particles ($V=0$), this equation looks very similar to the Schrödinger equation. But please keep in mind that $\psi$ and $\psi^\dagger$ here are operators, not wavefunctions.
1.4.2. Equation of motion for correlation functions
Define the two-point correlation function
$$G^>(\vec r_1,t_1;\vec r_2,t_2)=\frac{1}{i}\,\langle\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\rangle \tag{1.74}$$
Here, the factor $1/i$ is introduced for historical reasons. For systems with translational symmetry in space and time, $G^>(\vec r_1,t_1;\vec r_2,t_2)$ only depends on the position and time differences between 1 and 2:
$$G^>(\vec r_1,t_1;\vec r_2,t_2)=G^>(\vec r_1-\vec r_2,t_1-t_2)=G^>(\vec r,t) \tag{1.75}$$
Here, we define $\vec r=\vec r_1-\vec r_2$ and $t=t_1-t_2$.
Q: What is the equation of motion for $G^>(\vec r,t)$?
$$\partial_t G^>(\vec r,t)=\partial_{t_1}G^>(\vec r,t_1-t_2)=\partial_{t_1}\frac{1}{i}\langle\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\rangle=-i\,\langle\partial_{t_1}\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\rangle$$
$$=-\Big\langle\Big[-\frac{\nabla^2}{2m}\psi(\vec r_1,t_1)+\int d\vec r'\,V(|\vec r_1-\vec r'|)\,\psi^\dagger(\vec r',t_1)\,\psi(\vec r',t_1)\,\psi(\vec r_1,t_1)\Big]\psi^\dagger(\vec r_2,t_2)\Big\rangle=\frac{i}{2m}\,\nabla^2 G^>(\vec r,t)-\langle V\,\psi^\dagger\psi^\dagger\psi\psi\rangle \tag{1.76}$$
If $V=0$ (a free system), we have a closed partial differential equation for the two-point correlation function:
$$i\,\partial_t G_0^>(\vec r,t)+\frac{\nabla^2}{2m}\,G_0^>(\vec r,t)=0 \tag{1.77}$$
The subscript 0 here indicates that we are considering a non-interacting system. We can solve this partial differential equation (with proper initial and boundary conditions) and obtain the correlation function $G_0^>$.
For interacting systems ($V\neq 0$), the story is not as simple:
$$i\,\partial_t G^>(\vec r,t)+\frac{\nabla^2}{2m}\,G^>(\vec r,t)=-i\,V\langle\psi^\dagger\psi^\dagger\psi\psi\rangle \tag{1.78}$$
So we have an inhomogeneous partial differential equation,
$$i\,\partial_t G^>(\vec r,t)+\frac{\nabla^2}{2m}\,G^>(\vec r,t)=f(\vec r,t) \tag{1.79}$$
The terminology “inhomogeneous equation” means that the r.h.s. of the equation is nonzero. An inhomogeneous equation may look complicated, because for a different $f(\vec r,t)$ it seems that we will need to solve a different equation. However, this is not the case. We just need to solve one equation, and then for any $f(\vec r,t)$ we can get the solution directly.
Q: How do we solve an inhomogeneous partial differential equation?
A: Let’s look at the E&M textbook: the Green’s function method.
1.4.3. E&M: electric potential φ(r) for a charge distribution ρ(r)
Gauss’s law
“ ÿ Er  = rr 
Ø Ø
Ø
(1.80)
Phys620.nb
11
We know that
Er  = -“ fr 
(1.81)
“2 fr  = rr 
(1.82)
Ø Ø
so
Ø
Ø
Ø
This is an inhomogeneous equation. How do we solve it? We first solve a different equation:
“r1 2 Gr1 , r2  = dr2 
Ø
Ø
Ø
(1.83)
Gr1 , r2  is the electric potential at point r1 induced by a point charge located at position r2 . We know the solution of this equation, which is just
Ø
Ø
Ø
Ø
the Coulomb’s law
Gr1 , r2  =
Ø
1
Ø
1
Ø
r1
4p
(1.84)
Ø
- r2
After we find Gr1 , r2 , the solution for the inhomogeneous equation “2 fr  = rr  can be obtained easily as
Ø
Ø
Ø
fr  =  „ r0 Gr , r0  rr0  =  „ r0
Ø
Ø
Ø Ø
Ø
Ø
1
4p
1
Ø
r-
Ø
rr0 
Ø
Ø
r0
(1.85)
Mathematicians call Gr1 , r2  the Green’s function. And this methods of solving inhomogeneous PDEs are known as the Green’s function
Ø
Ø
approach. In general, one first substitute the inhomogeneous part with a delta function. Then, one solve this new PDE, whose solution is the
Green’s function. Once the Green’s function is obtained, one can write down the solution of the inhomogeneous PDE very easily using an
integral. For the correlation function G> , the equation of motion is
 ∑t G> r.t +
“2
2m
G> r, t = -Â V < y† y† y y >
(1.86)
What we will need to do here is to substitute the r.h.s. by a delta function
 ∑t +
“2
Gr, t = dr dt
(1.87)
2m
and Gr, t is our Green’s function.
1.4.4. Time ordering: a trick to get delta functions
Define the time-ordered correlation function (the Green’s function)
$$G(\vec r_1,t_1;\vec r_2,t_2)=\frac{1}{i}\,\langle T\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\rangle \tag{1.88}$$
Here, $T\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)$ is known as the time-ordered product,
$$T\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)=\begin{cases}\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2) & \text{if } t_1>t_2\\ \pm\,\psi^\dagger(\vec r_2,t_2)\,\psi(\vec r_1,t_1) & \text{if } t_1<t_2\end{cases} \tag{1.89}$$
For bosons we use the $+$ sign and for fermions the $-$ sign. Another way to write the same product is
$$T\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)=\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\,\theta(t_1-t_2)\pm\psi^\dagger(\vec r_2,t_2)\,\psi(\vec r_1,t_1)\,\theta(t_2-t_1) \tag{1.90}$$
where $\theta(x)$ is the step function, $\theta(x)=1$ for $x>0$ and $\theta(x)=0$ for $x<0$. Now, let’s consider the EOM for $G(\vec r,t)$:
$$\partial_t G(\vec r,t)=\partial_{t_1}\frac{1}{i}\langle T\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\rangle=-i\,\langle\partial_{t_1}\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\rangle\,\theta(t_1-t_2)\mp i\,\langle\psi^\dagger(\vec r_2,t_2)\,\partial_{t_1}\psi(\vec r_1,t_1)\rangle\,\theta(t_2-t_1)-i\,\langle\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\mp\psi^\dagger(\vec r_2,t_2)\,\psi(\vec r_1,t_1)\rangle\,\delta(t_1-t_2)$$
$$=\frac{1}{2m}\,\langle\nabla_{r_1}^2\,T\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)\rangle-\langle V\,T\,\psi^\dagger\psi^\dagger\psi\psi\rangle-i\,\delta(\vec r_1-\vec r_2)\,\delta(t_1-t_2)=\frac{i}{2m}\,\nabla^2 G(\vec r,t)-\langle V\,T\,\psi^\dagger\psi^\dagger\psi\psi\rangle-i\,\delta(\vec r_1-\vec r_2)\,\delta(t_1-t_2) \tag{1.91}$$
If $V=0$ (non-interacting systems),
$$i\,\partial_t G_0(\vec r,t)+\frac{1}{2m}\,\nabla^2 G_0(\vec r,t)=\delta(\vec r)\,\delta(t) \tag{1.92}$$
Bottom line: with time-ordering, we automatically get a delta function in the equation of motion, which makes $G(\vec r,t)$ a Green’s function.
1.4.5. Green’s function for free particles: $G_0(\vec r,t)$
For the EOM,
$$\Big(i\,\partial_t+\frac{\nabla^2}{2m}\Big)G_0(\vec r,t)=\delta(\vec r)\,\delta(t) \tag{1.93}$$
it is easy to solve in momentum space:
$$\int d\vec r\,dt\;e^{-i\vec k\cdot\vec r+i\omega t}\Big(i\,\partial_t+\frac{\nabla^2}{2m}\Big)G_0(\vec r,t)=\int d\vec r\,dt\;e^{-i\vec k\cdot\vec r+i\omega t}\,\delta(\vec r)\,\delta(t) \tag{1.94}$$
Integrating by parts,
$$\int d\vec r\,dt\;\Big[\Big(-i\,\partial_t+\frac{\nabla^2}{2m}\Big)e^{-i\vec k\cdot\vec r+i\omega t}\Big]G_0(\vec r,t)=1 \tag{1.95}$$
$$\Big(\omega-\frac{k^2}{2m}\Big)\int d\vec r\,dt\;e^{-i\vec k\cdot\vec r+i\omega t}\,G_0(\vec r,t)=1 \tag{1.96}$$
Define
$$G_0(\vec k,\omega)=\int d\vec r\,dt\;e^{-i\vec k\cdot\vec r+i\omega t}\,G_0(\vec r,t) \tag{1.97}$$
It is easy to check that
$$G_0(\vec r,t)=\int\frac{d\vec k}{(2\pi)^d}\frac{d\omega}{2\pi}\;e^{i\vec k\cdot\vec r-i\omega t}\,G_0(\vec k,\omega) \tag{1.98}$$
which is the inverse transformation.
Using $G_0(\vec k,\omega)$, we get
$$\Big(\omega-\frac{k^2}{2m}\Big)G_0(\vec k,\omega)=1 \tag{1.99}$$
and therefore
$$G_0(\vec k,\omega)=\frac{1}{\omega-k^2/2m} \tag{1.100}$$
More generally, for non-interacting particles with a more complicated dispersion,
$$G_0(\vec k,\omega)=\frac{1}{\omega-\epsilon(\vec k)} \tag{1.101}$$
For example, in a lattice system, if we ignore interactions, the Green’s function for particles in band $n$ is
$$G_0^{\,n}(\vec k,\omega)=\frac{1}{\omega-\epsilon_n(\vec k)} \tag{1.102}$$
Here, we assume that both the creation and annihilation operators are for Bloch waves in band $n$. If the two operators are for different bands, the correlation function is zero.
1.4.6. Note: there are many other reasons to use T.
(1) The path integral leads to $T$ naturally.
(2) The evolution operator is time-ordered: $U(t)=T\exp\!\big[-i\int dt'\,H(t')\big]$.
(3) With $T$, bosons and fermions are unified: the same theory with two different boundary conditions.
Due to time limitations, we will not address (1) and (2) in detail in this course. But for theorists, it is very important to understand the path-integral formulation.
The third point will be addressed in the next section.
1.5. Boundary condition and connections between different Green’s functions
As mentioned earlier, because the product of two operators is in general non-commutative, $AB\neq BA$, the order of the operators matters a lot. Depending on the order of the operators, we can define many different correlation functions, for example the time-ordered correlation function defined above. In addition, we can define the following two correlation functions (among others).
1.5.1. Other correlation functions and the boundary condition
G> 1, 2 =
1
Â
G< 1, 2 = ≤
y1 y† 2
1
Â
y† 2 y1
(1.103)
(1.104)
Here, for simplicity, we use “1” to represent r1 , t1 and “2” to represent r2 , t2 . Please notice that we cannot use the simple commutation/anticommutation relation
y1 y† 2 ¡ y† 2 y1 ∫ d1 - 2
(1.105)
This is because the commutation/anti-commutation relations we are familiar with are “equal time” commutation/anti-commutation relations.
yr1 , t y† r2 , t ¡ y† r2 , t yr1 , t ∫ dr1 - r2 
(1.106)
Please notice that all the operators are at the same time (remember that we are using the Heisenberg picture, so that the operators are timedependent).
What we considered here are operators at different time points
y1 y† 2 ≤ y† 2 y1 = yr1 , t1  y† r2 , t2  ≤ y† r2 , t2  yr1 , t1  ∫ d1 - 2
(1.107)
Therefore, in general, it is not d(1-2). And thus
G> 1, 2 ¡ G< 1, 2 ∫
1
Â
d1 - 2
(1.108)
In other words, we cannot simply relate these two Green’s function using equal time commutation/anti-commutation relations.
For statistical average, we have Boltzmann factor ‰ b H . For time-evolution, we have the evolution operator ‰Â H t . It seems that inverse temperature b is just the imaginary time. Let’s try this idea by allowing time to be complex.
For G>
`
Tr exp-bH̀ - m N  yr1 , t1  y† r2 , t2 
G 1, 2 = y1 y 2 =
`
Â
 Tr exp-bH̀ - m N 
`
`
`
`
`
`
= Tr exp-bH̀ - m N  expÂ H t1  yr1  exp-Â H t1  expÂ H t2  y† r2  exp-Â H t2   Â Tr exp-bH̀ - m N 
>
1
†
If we use eigenenergy states to compute the sum
(1.109)
14
Phys620.nb
`
`
`
`
`
`
G> 1, 2 = Tr exp-bH̀ - m N  expÂ H t1  yr1  exp-Â H t1  expÂ H t2  y† r2  exp-Â H t2   Â Tr exp-bH̀ - m N 
`
`
`
= expb m N  expEn Â t1 - b n yr1  exp-Â H t1  expÂ H t2  y† r2  n exp-Â En t2   Â Tr exp-bH̀ - m N 
`
`
n expEn Â t1 - Â t2 - b n yr1  exp-Â H t1  expÂ H t2  y† r2  n
n
= expb m N
(1.110)
`
 Tr exp-bH̀ - m N 
Because En has no upper bound, as En Ø +¶, to keep the factor expEn Â t1 - Â t2 - b converge, we need to have
Re  t1   t2    0, In other words, Im t1 - t2  > -b.
In the same time, using 1 = m m m , we have
G> 1, 2 = expb m N
`
`
`
 expEn Â t1 - Â t2 - b n yr1   m m exp-Â H t1  expÂ H t2  y† r2  n  Â Tr exp-bH̀ - m N  =
n
m
`
expb m N  expEn Â t1 - Â t2 - b n yr1  m m y† r2  n expÂ Em t2 - t1   Â Tr exp-bH̀ - m N 
(1.111)
To make sure that Em Ø +¶ shows no singularity, Imt1 - t2  < 0.
n,m
So, 0 > Im t1 - t2  > -b
For G< , we have 0 < Im t1 - t2  < b
Let’s come back to $G^>$ and shift $t_1\to t_1-i\beta$:
$$G^>(\vec r_1,t_1-i\beta;\vec r_2,t_2)=\frac{\mathrm{Tr}\big[e^{-\beta(\hat H-\mu\hat N)}\,e^{\beta\hat H}e^{i\hat Ht_1}\psi(\vec r_1)e^{-i\hat Ht_1}e^{-\beta\hat H}\,e^{i\hat Ht_2}\psi^\dagger(\vec r_2)e^{-i\hat Ht_2}\big]}{i\,\mathrm{Tr}\,e^{-\beta(\hat H-\mu\hat N)}}=\frac{\mathrm{Tr}\big[e^{\beta\mu\hat N}\,e^{i\hat Ht_1}\psi(\vec r_1)e^{-i\hat Ht_1}\,e^{-\beta\hat H}\,e^{i\hat Ht_2}\psi^\dagger(\vec r_2)e^{-i\hat Ht_2}\big]}{i\,\mathrm{Tr}\,e^{-\beta(\hat H-\mu\hat N)}}$$
$$=\frac{\mathrm{Tr}\big[e^{i\hat Ht_1}\psi(\vec r_1)\,e^{\beta\mu(\hat N-1)}\,e^{-i\hat Ht_1}\,e^{-\beta\hat H}\,e^{i\hat Ht_2}\psi^\dagger(\vec r_2)e^{-i\hat Ht_2}\big]}{i\,\mathrm{Tr}\,e^{-\beta(\hat H-\mu\hat N)}}=e^{-\beta\mu}\,\frac{\mathrm{Tr}\big[e^{i\hat Ht_1}\psi(\vec r_1)e^{-i\hat Ht_1}\,e^{-\beta(\hat H-\mu\hat N)}\,e^{i\hat Ht_2}\psi^\dagger(\vec r_2)e^{-i\hat Ht_2}\big]}{i\,\mathrm{Tr}\,e^{-\beta(\hat H-\mu\hat N)}}$$
$$=e^{-\beta\mu}\,\frac{\mathrm{Tr}\big[\psi(1)\,e^{-\beta(\hat H-\mu\hat N)}\,\psi^\dagger(2)\big]}{i\,\mathrm{Tr}\,e^{-\beta(\hat H-\mu\hat N)}}=e^{-\beta\mu}\,\frac{\mathrm{Tr}\big[e^{-\beta(\hat H-\mu\hat N)}\,\psi^\dagger(2)\,\psi(1)\big]}{i\,\mathrm{Tr}\,e^{-\beta(\hat H-\mu\hat N)}}=\pm\,e^{-\beta\mu}\,G^<(1,2) \tag{1.112}$$
where we used $e^{\beta\mu\hat N}\psi=\psi\,e^{\beta\mu(\hat N-1)}$ and the cyclic property of the trace.
Therefore, we find that
$$e^{\beta\mu}\,G^>(\vec r_1,t_1-i\beta;\vec r_2,t_2)=\pm\,G^<(\vec r_1,t_1;\vec r_2,t_2) \tag{1.113}$$
1.5.2. BC for the time-ordered Green’s function.
The time-ordered Green’s function can also be defined along the imaginary time axis:
$$T\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2)=\begin{cases}\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_2,t_2) & \text{if } \mathrm{Im}(t_1)<\mathrm{Im}(t_2)\\ \pm\,\psi^\dagger(\vec r_2,t_2)\,\psi(\vec r_1,t_1) & \text{if } \mathrm{Im}(t_1)>\mathrm{Im}(t_2)\end{cases} \tag{1.114}$$
So,
$$G=\begin{cases}G^> & \text{if } \mathrm{Im}(t_1)<\mathrm{Im}(t_2)\\ G^< & \text{if } \mathrm{Im}(t_1)>\mathrm{Im}(t_2)\end{cases} \tag{1.115}$$
It is easy to check that for $G(\vec r,t)$ we have
$$G(\vec r,t)=\pm\,e^{\beta\mu}\,G(\vec r,t-i\beta) \tag{1.116}$$
This is a boundary condition in imaginary time. At $\mu=0$, bosons have a periodic BC and fermions have an anti-periodic BC.
1.5.3. The (k, ω) space
Define
$$G^>(\vec k,\omega)=i\int d\vec r\,dt\;e^{-i\vec k\cdot\vec r+i\omega t}\,G^>(\vec r,t)=\langle\psi_{k,\omega}\,\psi_{k,\omega}^\dagger\rangle,\qquad G^<(\vec k,\omega)=\pm\,i\int d\vec r\,dt\;e^{-i\vec k\cdot\vec r+i\omega t}\,G^<(\vec r,t)=\langle\psi_{k,\omega}^\dagger\,\psi_{k,\omega}\rangle \tag{1.117}$$
Using the boundary condition $G^<(\vec r,t)=\pm e^{\beta\mu}G^>(\vec r,t-i\beta)$ and shifting the time integration variable, we find
$$G^<(\vec k,\omega)=\pm\,i\int d\vec r\,dt\;e^{-i\vec k\cdot\vec r+i\omega t}\,(\pm)\,e^{\beta\mu}\,G^>(\vec r,t-i\beta)=e^{-\beta(\omega-\mu)}\,G^>(\vec k,\omega) \tag{1.118}$$
Define the “spectral function”
$$A(\vec k,\omega)=G^>(\vec k,\omega)\mp G^<(\vec k,\omega) \tag{1.119}$$
Because $G^<(\vec k,\omega)=e^{-\beta(\omega-\mu)}\,G^>(\vec k,\omega)$,
$$A(\vec k,\omega)=G^>(\vec k,\omega)\mp G^<(\vec k,\omega)=G^>(\vec k,\omega)\mp e^{-\beta(\omega-\mu)}\,G^>(\vec k,\omega) \tag{1.120}$$
So
$$G^>(\vec k,\omega)=\frac{A(\vec k,\omega)}{1\mp e^{-\beta(\omega-\mu)}}=A(\vec k,\omega)\,\frac{e^{\beta(\omega-\mu)}}{e^{\beta(\omega-\mu)}\mp1}=A(\vec k,\omega)\,[1\pm f(\omega)] \tag{1.121}$$
$$G^<(\vec k,\omega)=e^{-\beta(\omega-\mu)}\,G^>(\vec k,\omega)=A(\vec k,\omega)\,\frac{1}{e^{\beta(\omega-\mu)}\mp1}=A(\vec k,\omega)\,f(\omega) \tag{1.122}$$
Here $f(\omega)=1/[e^{\beta(\omega-\mu)}\mp1]$ is the boson/fermion distribution function.
Note: the only thing we have assumed is that indistinguishable particles either commute or anti-commute with each other. In the Green’s function, these two choices lead to two different boundary conditions, from which the Bose–Einstein and Fermi–Dirac distributions arise naturally.
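The identity behind this note can be checked in a few lines: writing $G^>=A(1\pm f)$ and $G^<=Af$ with $f(\omega)=1/(e^{\beta(\omega-\mu)}\mp1)$ automatically satisfies the boundary-condition relation $G^<=e^{-\beta(\omega-\mu)}G^>$ for both statistics. The parameter values below are arbitrary.

```python
import numpy as np

beta, mu = 2.0, 0.3
w = np.linspace(-3.0, 3.0, 500)
A = np.ones_like(w)                   # any spectral function works for this identity
x = beta * (w - mu)

for sign, name in [(+1, "bosons (Bose-Einstein)"), (-1, "fermions (Fermi-Dirac)")]:
    f = 1.0 / (np.exp(x) - sign)      # distribution function, Eq. (1.122)
    G_gtr  = A * (1.0 + sign * f)     # G^>(k, w) = A (1 +/- f), Eq. (1.121)
    G_less = A * f                    # G^<(k, w) = A f
    # boundary-condition relation, Eq. (1.118): G^< = exp(-beta(w - mu)) G^>
    print(name, np.allclose(G_less, np.exp(-x) * G_gtr))   # True for both
```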
1.5.4. G in the (k, ω) space, part I: the Matsubara frequencies
Remember that $G(\vec r,t)=\pm e^{\beta\mu}G(\vec r,t-i\beta)$, which is (almost) a periodic (anti-periodic) BC in $t$. We know that boundary conditions imply quantization (a discrete $\omega$).
For example, for a periodic function $f(t)=f(t+T)$, we know that in frequency space $f(\omega)$ is defined only on a discrete set of frequency points, $\omega=2\pi n/T$. For an anti-periodic BC, $f(t)=-f(t+T)$, we get $\omega=(2n+1)\pi/T$.
For $G(\vec r,t)=\pm e^{\beta\mu}G(\vec r,t-i\beta)$, if we go to the $(k,\omega)$ space and define
$$G(\vec r,t)=\int d\vec k\,d\omega\;e^{i\vec k\cdot\vec r-i\omega t}\,G(\vec k,\omega) \tag{1.123}$$
the condition $G(\vec r,t)=\pm e^{\beta\mu}G(\vec r,t-i\beta)$ implies that
$$\int d\vec k\,d\omega\;e^{i\vec k\cdot\vec r-i\omega t}\,G(\vec k,\omega)=\pm\,e^{\beta\mu}\,e^{-\omega\beta}\int d\vec k\,d\omega\;e^{i\vec k\cdot\vec r-i\omega t}\,G(\vec k,\omega) \tag{1.124}$$
So the frequencies that contribute must satisfy
$$e^{\beta(\omega-\mu)}=\pm1 \tag{1.125}$$
For bosons,
$$\beta(\omega-\mu)=2n\pi i \tag{1.126}$$
$$\omega=\frac{2n\pi i}{\beta}+\mu \tag{1.127}$$
Following the historical convention, we write $\omega=i\,\omega_n$ with
$$\omega_n=\frac{2n\pi}{\beta}-i\mu \tag{1.128}$$
These discrete frequency points $\omega_n$ are known as the Matsubara frequencies.
For fermions,
$$\beta(\omega-\mu)=(2n+1)\pi i \tag{1.129}$$
$$\omega=\frac{(2n+1)\pi i}{\beta}+\mu \tag{1.130}$$
$$\omega_n=\frac{(2n+1)\pi}{\beta}-i\mu \tag{1.131}$$
After we find $G(\vec k,i\omega_n)$, where $\omega_n$ takes discrete values, let us define a new function $G(\vec k,z)$ by simply replacing the discrete number $i\omega_n$ by a complex number $z$ that varies continuously. The function $G(\vec k,z)$ is well defined at every point of the complex $z$ plane (there may be some singular points). At the Matsubara frequencies, this new function $G(\vec k,z)$ coincides with $G(\vec k,i\omega_n)$. This procedure is known as analytic continuation. This new function is very useful. Here, I will show you that by defining $G(\vec k,z)$ we can find a very easy way to get $G^>$ and $G^<$ from $G$. Later, we will use this $G(\vec k,z)$ to compute $G(\vec k,i\omega_n)$.
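A tiny numerical check of the Matsubara quantization at $\mu=0$: $e^{i\omega_n\beta}=+1$ on the bosonic frequencies and $-1$ on the fermionic ones. The value of $\beta$ and the range of $n$ are arbitrary.

```python
import numpy as np

beta = 1.7
n = np.arange(-3, 4)

w_bose  = 2 * n * np.pi / beta            # bosonic Matsubara frequencies, Eq. (1.128) at mu = 0
w_fermi = (2 * n + 1) * np.pi / beta      # fermionic Matsubara frequencies, Eq. (1.131) at mu = 0

print(np.allclose(np.exp(1j * w_bose  * beta), +1.0))   # True: periodic boundary condition
print(np.allclose(np.exp(1j * w_fermi * beta), -1.0))   # True: anti-periodic boundary condition
```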
1.5.5. G in the (k, ω) space, part II: analytic continuation
Gk, Â wn  = 
-Â b
„ t ‰-wn t Gk, t =
0

-Â b
0
„ t ‰-wn t G> k, t = 
-Â b
0

„ t ‰-wn t 
„ w 1 - ‰-w -Â wn  b
w - Â wn
2p
Ak, w
1 ¡ ‰ - bw-m
„w
2pÂ
=
‰-Â w t G> k, w = 
„ w 1 ¡ ‰-w -m b
 wn - w
2p
„ w ‰-Â w -wn  -Â b - 1
2p
w - Â wn
Ak, w
1 ¡ ‰ - bw-m
=
Ak, w
1 ¡ ‰ - bw-m
=
(1.132)
„ w Ak, w
2 p  wn - w
Therefore,
Gk, z = 
„ w Ak, w
2p
(1.133)
z-w
Now, we substitute z by w + Âd
Gk, w + Â e = 
„W
Ak, W
(1.134)
2p w+Âe-W
=
w-Âe-W
1
„W
Ak, W
„W
1
1
-2 Â e = -
Ak, W = -  „ W dw - W Ak, W = - Ak, w

2
2
2
2
2Â
2 p w - W + e
2 p w - W + e
2
2
ImGk, w + Â e =
Gk, w + Â e - Gk, w - Â e
2Â
=
1
2Â

„W
2p
e

Ak, W
w+Âe-W
-
Ak, W
(1.135)
So,
Ak, w = -2 ImGk, w + Â e
(1.136)
1.5.6. Example: free particles
The EOM
$$\Big(i\,\partial_t+\frac{1}{2m}\nabla^2\Big)G_0(\vec r,t)=\delta(\vec r)\,\delta(t) \tag{1.137}$$
becomes, in the $(k,\omega)$ space,
$$\Big(i\omega_n-\frac{k^2}{2m}\Big)G_0(\vec k,i\omega_n)=1 \tag{1.138}$$
$$G_0(\vec k,i\omega_n)=\frac{1}{i\omega_n-k^2/2m} \tag{1.139}$$
More generally, for a dispersion relation $\epsilon(\vec k)$,
$$G_0(\vec k,i\omega_n)=\frac{1}{i\omega_n-\epsilon(\vec k)} \tag{1.140}$$
Analytic continuation:
$$G_0(\vec k,z)=\frac{1}{z-\epsilon(\vec k)} \tag{1.141}$$
The spectral function:
$$A(\vec k,\omega)=-2\,\mathrm{Im}\,G(\vec k,\omega+i\eta)=-2\,\mathrm{Im}\,\frac{1}{\omega+i\eta-\epsilon(\vec k)}=\frac{2\eta}{[\omega-\epsilon(\vec k)]^2+\eta^2}=2\pi\,\delta(\omega-\epsilon(\vec k)) \tag{1.142}$$
The particle number:
$$n_k(t)=\langle\psi_k^\dagger(t)\,\psi_k(t)\rangle=G^<(\vec k,t-t)=\int\frac{d\omega}{2\pi}\;G^<(\vec k,\omega)=\int\frac{d\omega}{2\pi}\;2\pi\,\delta(\omega-\epsilon(\vec k))\,f(\omega)=f(\epsilon(\vec k)) \tag{1.143}$$
$$1\pm n_k(t)=\langle\psi_k(t)\,\psi_k^\dagger(t)\rangle=G^>(\vec k,t-t)=\int\frac{d\omega}{2\pi}\;G^>(\vec k,\omega)=\int\frac{d\omega}{2\pi}\;2\pi\,\delta(\omega-\epsilon(\vec k))\,[1\pm f(\omega)]=1\pm f(\epsilon(\vec k)) \tag{1.144}$$
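A numerical sketch of this free-particle example: representing the delta function by a Lorentzian of small width $\eta$ (the $\omega+i\eta$ prescription), one can check the sum rule, the peak position at $\epsilon(\vec k)$, and the occupation number of Eq. (1.143). The parameters below are arbitrary.

```python
import numpy as np

beta, mu, eps_k, eta = 5.0, 0.0, 0.7, 1e-3
w = np.linspace(-20.0, 20.0, 400001)
dw = w[1] - w[0]

G_ret = 1.0 / (w + 1j * eta - eps_k)          # G0(k, w + i*eta), Eq. (1.141)
A = -2.0 * G_ret.imag                          # spectral function, Eq. (1.142)
f = 1.0 / (np.exp(beta * (w - mu)) + 1.0)      # Fermi-Dirac distribution

print(dw * A.sum() / (2 * np.pi))              # ~ 1: the sum rule of Sec. 1.8.2
print(w[np.argmax(A)])                         # ~ eps_k: peak at the free dispersion
n_k = dw * (A * f).sum() / (2 * np.pi)         # occupation number, Eq. (1.143)
print(n_k, 1.0 / (np.exp(beta * (eps_k - mu)) + 1.0))   # both ~ f(eps_k)
```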
1.5.7. Interacting particles
Once we have found $G(\vec k,i\omega_n)$, we can get $A(\vec k,\omega)$ using
$$A(\vec k,\omega)=-2\,\mathrm{Im}\,G(\vec k,\omega+i\eta) \tag{1.145}$$
Then we can get all the other correlation functions, like $G^<(\vec k,t-t)$ and $G^>(\vec k,t-t)$, using
$$G^>(\vec k,\omega)=\frac{A(\vec k,\omega)}{1\mp e^{-\beta(\omega-\mu)}}=A(\vec k,\omega)\,\frac{e^{\beta(\omega-\mu)}}{e^{\beta(\omega-\mu)}\mp1}=A(\vec k,\omega)\,[1\pm f(\omega)] \tag{1.146}$$
$$G^<(\vec k,\omega)=e^{-\beta(\omega-\mu)}\,G^>(\vec k,\omega)=A(\vec k,\omega)\,\frac{1}{e^{\beta(\omega-\mu)}\mp1}=A(\vec k,\omega)\,f(\omega) \tag{1.147}$$
Bottom line: if we know one correlation function, we can get the others. Therefore, among all the different correlation functions, we just need to focus on the one that is easiest to compute. The easiest one is the Green’s function (the time-ordered correlation function), because it gives us delta functions.
1.6. Feynman diagram
1.6.1. multi-particle Green’s functions
Single-particle Green’s function:
$$G(1,1')=\frac{1}{i}\,\langle T\,\psi(1)\,\psi^\dagger(1')\rangle \tag{1.148}$$
Two-particle Green’s function:
$$G_2(1,2;1',2')=\frac{1}{i^2}\,\langle T\,\psi(1)\,\psi(2)\,\psi^\dagger(2')\,\psi^\dagger(1')\rangle \tag{1.149}$$
Three-particle Green’s function:
$$G_3(1,2,3;1',2',3')=\frac{1}{i^3}\,\langle T\,\psi(1)\,\psi(2)\,\psi(3)\,\psi^\dagger(3')\,\psi^\dagger(2')\,\psi^\dagger(1')\rangle \tag{1.150}$$
$n$-particle Green’s function:
$$G_n(1,2,\ldots,n;1',2',\ldots,n')=\frac{1}{i^n}\,\langle T\,\psi(1)\,\psi(2)\cdots\psi(n)\,\psi^\dagger(n')\cdots\psi^\dagger(2')\,\psi^\dagger(1')\rangle \tag{1.151}$$
1.6.2. the equations of motion of the Green’s functions
From the equation of motion of the annihilation operator,
$$i\,\frac{\partial\psi(\vec r_0,t)}{\partial t}=[\psi(\vec r_0,t),H]=\int d\vec r\;\frac{\nabla\delta(\vec r-\vec r_0)\cdot\nabla\psi(\vec r)}{2m}+\int d\vec r\;V(|\vec r_0-\vec r|)\,\psi^\dagger(\vec r)\,\psi(\vec r)\,\psi(\vec r_0)=-\frac{\nabla^2\psi(\vec r_0,t)}{2m}+\int d\vec r\;V(|\vec r_0-\vec r|)\,\psi^\dagger(\vec r)\,\psi(\vec r)\,\psi(\vec r_0) \tag{1.152}$$
we obtain for the single-particle Green’s function
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla^2\Big)G(1,1')=\delta(1-1')+\frac{1}{i}\int d\vec r_2\;V(|\vec r_1-\vec r_2|)\,\langle T\,\psi^\dagger(\vec r_2,t_1)\,\psi(\vec r_2,t_1)\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_{1'},t_{1'})\rangle \tag{1.153}$$
Writing the four-operator average in terms of the two-particle Green’s function (with the time argument of $\psi^\dagger(\vec r_2)$ shifted infinitesimally to $t_1+\delta$ to keep the operators in the right order),
$$\frac{1}{i}\,\langle T\,\psi^\dagger(\vec r_2,t_1+\delta)\,\psi(\vec r_2,t_1)\,\psi(\vec r_1,t_1)\,\psi^\dagger(\vec r_{1'},t_{1'})\rangle=\pm\,i\,\frac{1}{i^2}\,\langle T\,\psi(\vec r_1,t_1)\,\psi(\vec r_2,t_1)\,\psi^\dagger(\vec r_2,t_1+\delta)\,\psi^\dagger(\vec r_{1'},t_{1'})\rangle=\pm\,i\,G_2(1,2;1',2^+)\big|_{t_1=t_2} \tag{1.154}$$
we arrive at
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla^2\Big)G(1,1')=\delta(1-1')\pm i\int d\vec r_2\;V(|\vec r_1-\vec r_2|)\,G_2(1,2;1',2^+)\big|_{t_1=t_2} \tag{1.155}$$
Here $2^+$ means that the time argument of $2^+$ is slightly larger than that of $2$, to keep the operators in the right order: $t_{2^+}=t_2+\delta$.
This equation tells us that in order to get $G(1,1')$, we need to know $G_2(1,2;1',2')$. So we need to write down the EOM for $G_2$:
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla^2\Big)G_2(1,2;1',2')=\delta(1-1')\,G(2,2')\pm\delta(1-2')\,G(2,1')\pm i\int d\vec r_3\;V(|\vec r_1-\vec r_3|)\,G_3(1,2,3;1',2',3^+)\big|_{t_1=t_3} \tag{1.156}$$
This equation tells us that in order to get $G_2$, we need to know $G_3$. Repeating the same procedure, we find that if we want to know $G_n$, we need to know $G_{n+1}$:
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla^2\Big)G_n(1,2,\ldots,n;1',2',\ldots,n')=\delta(1-1')\,G_{n-1}(2,\ldots,n;2',\ldots,n')\pm\delta(1-2')\,G_{n-1}(2,\ldots,n;1',3',\ldots,n')+\cdots\pm i\int d\vec r_{n+1}\;V(|\vec r_1-\vec r_{n+1}|)\,G_{n+1}(1,2,\ldots,n+1;1',2',\ldots,(n+1)^+)\big|_{t_1=t_{n+1}} \tag{1.157}$$
So we cannot get a closed set of equations. In other words, the number of unknowns is always the number of equations plus one, and thus there is no way to solve these equations exactly.
1.6.3. Q: How to solve this equation? A: the perturbation theory
G1, 1 ' depends on interaction strength V . Let’s expand G as a power series of V
G1, 1 ' = G0 1, 1 + OV  + OV 2  + OV 3  + …
Notice that Gn-1 is related to V μ Gn . Therefore, to get G up to the order of OV , we just need to keep G2 to the order of OV
OV n-2  … to OV 0  and set Gn+2 = 0
n
(1.158)
n-1
, and G3 to
1.6.4. The zeroth-order approximation (the free-particle, or non-interacting, approximation)
If we want to get $G$ to the zeroth order, we set $G_2=0$:
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_0(1,1')=\delta(1-1') \tag{1.159}$$
One equation, one unknown. It can be solved easily:
$$G_0(\vec k,i\omega_n)=\frac{1}{i\omega_n-k^2/2m} \tag{1.160}$$
If we want to get $G$ to first order, we set $G_3=0$ and keep $G_2$ to order $O(V^0)$.
1.6.5. the first-order approximation (the Hartree-Fock approximation)
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G(1,1')=\delta(1-1')\pm i\int d\vec r_2\;V(|\vec r_1-\vec r_2|)\,G_2(1,2;1',2^+)\big|_{t_1=t_2} \tag{1.161}$$
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_2(1,2;1',2')=\delta(1-1')\,G(2,2')\pm\delta(1-2')\,G(2,1')+O(VG_3)=\delta(1-1')\,G_0(2,2')\pm\delta(1-2')\,G_0(2,1') \tag{1.162}$$
Two equations and two unknowns. The solution of the second equation is very simple:
$$G_2(1,2;1',2')=G_0(1,1')\,G_0(2,2')\pm G_0(1,2')\,G_0(2,1') \tag{1.163}$$
Let’s check it:
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_0(1,1')\,G_0(2,2')=\delta(1-1')\,G_0(2,2') \tag{1.164}$$
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_0(1,2')\,G_0(2,1')=\delta(1-2')\,G_0(2,1') \tag{1.165}$$
So we can go back to the first equation to get $G$:
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G(1,1')=\delta(1-1')\pm i\int d\vec r_2\;V(|\vec r_1-\vec r_2|)\,G_0(1,1')\,G_0(2,2^+)\big|_{t_1=t_2}+i\int d\vec r_2\;V(|\vec r_1-\vec r_2|)\,G_0(1,2^+)\,G_0(2,1')\big|_{t_1=t_2} \tag{1.166}$$
i.e., an equation of the form
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G(1,1')=\delta(1-1')+f(1,1') \tag{1.167}$$
This equation can be separated into two equations,
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_0(1,1')=\delta(1-1') \tag{1.168}$$
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_1(1,1')=f(1,1') \tag{1.169}$$
with $G=G_0+G_1$.
The first equation has been solved before; it is just the zeroth-order equation (the free theory),
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_0(1,1')=\delta(1-1') \tag{1.170}$$
For the second equation, the solution is straightforward using the Green’s function technique:
$$G_1(1,1')=\int d\vec r_3\,dt_3\;G_0(1,3)\,f(3,1') \tag{1.171}$$
Let’s check this:
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)\int d\vec r_3\,dt_3\;G_0(1,3)\,f(3,1')=\int d\vec r_3\,dt_3\;\delta(1-3)\,f(3,1')=f(1,1') \tag{1.172}$$
So
$$G_1(1,1')=\pm i\int d\vec r_3\,dt_3\,d\vec r_2\;G_0(1,3)\,V(|\vec r_3-\vec r_2|)\,G_0(3,1')\,G_0(2,2^+)\big|_{t_3=t_2}+i\int d\vec r_3\,dt_3\,d\vec r_2\;G_0(1,3)\,V(|\vec r_3-\vec r_2|)\,G_0(3,2^+)\,G_0(2,1')\big|_{t_3=t_2} \tag{1.173}$$
For $G(1,1')$,
$$G(1,1')=G_0(1,1')\pm i\int d\vec r_3\,dt_3\,d\vec r_2\;G_0(1,3)\,V(|\vec r_3-\vec r_2|)\,\delta(t_3-t_2)\,G_0(3,1')\,G_0(2,2^+)+i\int d\vec r_3\,dt_3\,d\vec r_2\;G_0(1,3)\,V(|\vec r_3-\vec r_2|)\,\delta(t_3-t_2)\,G_0(3,2^+)\,G_0(2,1') \tag{1.174}$$
The first term is the free propagator, the second term is known as the Hartree term, and the last term is the Fock term.
If we want to get $G$ to order $O(V^2)$, we set $G_4=0$, keep $G_3$ to $O(V^0)$, and keep $G_2$ to $O(V^1)$.
1.6.6. second order:
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G(1,1')=\delta(1-1')\pm i\int d\vec r_2\;V(|\vec r_1-\vec r_2|)\,G_2(1,2;1',2^+)\big|_{t_1=t_2} \tag{1.175}$$
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla^2\Big)G_2(1,2;1',2')=\delta(1-1')\,G(2,2')\pm\delta(1-2')\,G(2,1')\pm i\int d\vec r_3\;V(|\vec r_1-\vec r_3|)\,G_3(1,2,3;1',2',3^+)\big|_{t_1=t_3} \tag{1.176}$$
$$\Big(i\,\partial_{t_1}+\frac{1}{2m}\nabla_{r_1}^2\Big)G_3(1,2,3;1',2',3')=\delta(1-1')\,G_2(2,3;2',3')\pm\delta(1-2')\,G_2(2,3;1',3')+\delta(1-3')\,G_2(2,3;1',2')+O(VG_4) \tag{1.177}$$
The last equation gives (keeping only $O(V^0)$ terms)
$$G_3(1,2,3;1',2',3')=G_0(1,1')\,G_0(2,2')\,G_0(3,3')\pm G_0(1,1')\,G_0(2,3')\,G_0(3,2')\pm G_0(1,2')\,G_0(2,1')\,G_0(3,3')+G_0(1,2')\,G_0(2,3')\,G_0(3,1')+G_0(1,3')\,G_0(2,1')\,G_0(3,2')\pm G_0(1,3')\,G_0(2,2')\,G_0(3,1') \tag{1.178}$$
So we can get $G_2$ and then $G$.
• Too complicated!
• Hard to compute when $n$ is large.
There is a simple way to directly get the final answer we want, thanks to a very smart technique designed by Feynman.
1.6.7. Feynman diagrams and Feynman rules
• Each integration coordinate is represented by a point.
• A propagator $G_0(1,1')$ is represented by a solid line.
• A creation operator is represented by a solid line attached to the point, with an arrow pointing away from the point.
• An annihilation operator is represented by a solid line attached to the point, with an arrow pointing toward the point.
• The interaction $V(1-2)$ is represented by a dashed line connecting two points ($\vec r_1$ and $\vec r_2$); each end point carries a creation operator and an annihilation operator.
• Connect the lines, keeping the direction of the arrows.
Find all the diagrams that obey the rules described above; each of them represents a term in the power-series expansion of $G$. Zeroth-order terms contain no $V$, first-order terms contain just one $V$, and $n$th-order terms contain $n$ factors of $V$. Here we consider the two-point correlation function $G(1,1')$ as an example. Because $G(1,1')=\frac{1}{i}\langle T\,\psi(1)\,\psi^\dagger(1')\rangle$ has one creation and one annihilation operator, we have one ending point and one starting point for the solid lines. At zeroth order we don’t have $V$, and these two points are the only things in our diagram. As a result, the only diagram is a solid line connecting these two points.
The first-order terms have one $V$, and it is easy to see that there are only two ways to connect the diagrams (the Hartree and Fock diagrams; figures not reproduced). Therefore, up to first order,
$$G(1,1')=G_0(1,1')\pm i\int d\vec r_3\,dt_3\,d\vec r_2\;G_0(1,3)\,V(|\vec r_3-\vec r_2|)\,\delta(t_3-t_2)\,G_0(3,1')\,G_0(2,2^+)+i\int d\vec r_3\,dt_3\,d\vec r_2\;G_0(1,3)\,V(|\vec r_3-\vec r_2|)\,\delta(t_3-t_2)\,G_0(3,2^+)\,G_0(2,1') \tag{1.179}$$
If we want higher-order terms, we just add more $V$'s to the diagrams. For example, with two $V$'s (the second-order terms) there are several diagrams (figures not reproduced). We just need to find all the diagrams and then write them as integrals (and then compute the integrals).
1.6.8. The (k, ω) space
If the system has momentum and energy conservation laws, it is typically much easier to compute Green’s functions in momentum–energy space. In the $(k,\omega)$ space, we use the same diagrams, and in addition we:
• Assign a momentum and a frequency to each line, and enforce momentum and energy conservation at each end point or crossing point.
• For each loop, there is an undetermined pair $(\vec q, i\Omega_n)$, and we need to integrate/sum over them.
• For each fermionic loop, we get an extra factor of $(-1)$, which comes from the anti-commutation relations.
• For an $m$th-order diagram, we get a factor $(-1)^m$.
For example, using the same diagrams as above, we can compute the two-point correlation function as
$$G(\vec k,i\omega_n)=G_0(\vec k,i\omega_n)\mp\int\frac{d\vec q}{(2\pi)^3}\frac{1}{\beta}\sum_{i\Omega_n}G_0(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)\,V(0,0)\,G_0(\vec q,i\Omega_n)-\int\frac{d\vec q}{(2\pi)^3}\frac{1}{\beta}\sum_{i\Omega_n}G_0(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)\,G_0(\vec k+\vec q,i\omega_n+i\Omega_n)\,V(\vec q,i\Omega_n)+\ldots \tag{1.180}$$
1.7. The Dyson equation
If we look at the diagrams, there are many repeating structures. These repeating structures can be utilized to simplify the calculation.
1.7.1. The sum of a geometric series
How do we compute the sum of a geometric series
$$X=a+aq+aq^2+aq^3+\cdots\;? \tag{1.181}$$
First, we notice that
$$qX=aq+aq^2+aq^3+aq^4+\cdots \tag{1.182}$$
Then we rewrite $X$ as
$$X=a+aq+aq^2+aq^3+\cdots=a+qX \tag{1.183}$$
$$X-qX=a \tag{1.184}$$
So,
$$X=\frac{a}{1-q} \tag{1.185}$$
We can use the same trick to sum many Feynman diagrams.
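A numerical version of this trick applied to the Dyson resummation below: for a toy frequency-independent self-energy $\Sigma$ (an arbitrary complex number, not computed from any diagram), the partial sums $G_0+G_0\Sigma G_0+\cdots$ converge to $1/(i\omega_n-\epsilon_k-\Sigma)$ whenever $|\Sigma G_0|<1$.

```python
import numpy as np

beta, eps_k, n = 1.0, 0.5, 3
iwn = 1j * (2 * n + 1) * np.pi / beta          # a fermionic Matsubara frequency
Sigma = 0.2 - 0.1j                             # toy (frequency-independent) self-energy

G0 = 1.0 / (iwn - eps_k)
partial = sum(G0 * (Sigma * G0) ** m for m in range(50))   # G0 + G0 S G0 + G0 S G0 S G0 + ...
dyson = 1.0 / (iwn - eps_k - Sigma)            # closed form, as in Eq. (1.188)

print(np.allclose(partial, dyson))   # True
```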
1.7.2. example 1: the Hartree approximation
For diagrams, we can use the same trick. For example, the repeated Hartree-bubble diagrams of first, second, third, ... order can be summed together (diagrams not shown). Here we sum over these diagrams and ignore the others. This approximation is known as the Hartree approximation:
$$G(\vec k,i\omega_n)\approx G_0(\vec k,i\omega_n)+G_0(\vec k,i\omega_n)\,\Sigma_H(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)+G_0(\vec k,i\omega_n)\,\Sigma_H(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)\,\Sigma_H(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)+\cdots \tag{1.186}$$
where
$$\Sigma_H(\vec k,i\omega_n)=\mp\int\frac{d\vec q}{(2\pi)^3}\frac{1}{\beta}\sum_{i\Omega_n}V(0,0)\,G_0(\vec q,i\Omega_n) \tag{1.187}$$
This term is known as the self-energy correction. It is the same diagram we considered above but with the external legs removed. Using the trick mentioned above, we find
$$G_H(\vec k,i\omega_n)=G_0(\vec k,i\omega_n)\,\big[1-\Sigma_H(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)\big]^{-1}=\big[G_0(\vec k,i\omega_n)^{-1}-\Sigma_H(\vec k,i\omega_n)\big]^{-1}=\frac{1}{i\omega_n-\epsilon(\vec k)-\Sigma_H(\vec k,i\omega_n)} \tag{1.188}$$
This technique and this formula are known as the Dyson equation. Here, we find that the Green’s function for interacting particles is very similar to the free Green’s function $G_0$. The only thing the interactions do is to change the single-particle energy $\epsilon(\vec k)$ into $\epsilon(\vec k)+\Sigma_H(\vec k,i\omega_n)$. In other words, the interactions change the “energy” of the particle by $\Sigma_H(\vec k,i\omega_n)$. This is the reason why this term is called a self-energy correction. However, it is important to keep in mind that this term is NOT really a shift of $\epsilon(\vec k)$, because it is in general also a function of frequency.
1.7.3. example 2: the Hartree-Fock approximation
We can get more accurate results by adding more diagrams into our calculation. The approximation that sums over the Hartree and Fock diagrams to all orders (and ignores the others; diagrams not shown) is known as the Hartree-Fock approximation:
$$G(\vec k,i\omega_n)\approx G_0(\vec k,i\omega_n)+G_0(\vec k,i\omega_n)\,\Sigma_{HF}(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)+G_0(\vec k,i\omega_n)\,\Sigma_{HF}(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)\,\Sigma_{HF}(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)+\cdots \tag{1.189}$$
Here the Hartree-Fock self-energy correction $\Sigma_{HF}(\vec k,i\omega_n)$ is
$$\Sigma_{HF}(\vec k,i\omega_n)=\mp\int\frac{d\vec q}{(2\pi)^3}\frac{1}{\beta}\sum_{i\Omega_n}V(0,0)\,G_0(\vec q,i\Omega_n)-\int\frac{d\vec q}{(2\pi)^3}\frac{1}{\beta}\sum_{i\Omega_n}G_0(\vec k+\vec q,i\omega_n+i\Omega_n)\,V(\vec q,i\Omega_n) \tag{1.190}$$
Using the same trick (the Dyson equation), we find that
$$G_{HF}(\vec k,i\omega_n)=G_0(\vec k,i\omega_n)\,\big[1-\Sigma_{HF}(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)\big]^{-1}=\big[G_0(\vec k,i\omega_n)^{-1}-\Sigma_{HF}(\vec k,i\omega_n)\big]^{-1}=\frac{1}{i\omega_n-\epsilon(\vec k)-\Sigma_{HF}(\vec k,i\omega_n)} \tag{1.191}$$
The final result is almost the same as in the Hartree approximation. We just need to change $\Sigma_H(\vec k,i\omega_n)$ into $\Sigma_{HF}(\vec k,i\omega_n)$.
1.7.4. example 3: include more diagrams
If we include more diagrams, the Green’s function still takes the same structure,
$$G(\vec k,i\omega_n)=G_0(\vec k,i\omega_n)+G_0(\vec k,i\omega_n)\,\Sigma(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)+G_0(\vec k,i\omega_n)\,\Sigma(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)\,\Sigma(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)+\cdots \tag{1.192}$$
Here, the self-energy correction $\Sigma(\vec k,i\omega_n)$ contains more diagrams (1.193; diagrams not shown).
Using the Dyson equation, we find that
$$G(\vec k,i\omega_n)=\frac{G_0(\vec k,i\omega_n)}{1-\Sigma(\vec k,i\omega_n)\,G_0(\vec k,i\omega_n)}=\frac{1}{G_0(\vec k,i\omega_n)^{-1}-\Sigma(\vec k,i\omega_n)}=\frac{1}{i\omega_n-\epsilon(\vec k)-\Sigma(\vec k,i\omega_n)} \tag{1.194}$$
1.7.5. One-particle irreducible diagrams
When we use the Dyson equation, the key is to avoid double counting. For example, the second-order diagram consisting of two Hartree bubbles in a row has already been included in the Hartree approximation once the first-order bubble is included in the self-energy. Therefore, when we compute second-order self-energy corrections, we should not include this diagram again. The rule to avoid double counting is to use only one-particle-irreducible (1PI) diagrams in the self-energy $\Sigma$. A one-particle-irreducible diagram is one that remains connected if we cut any single internal (solid) line. Such 1PI diagrams should be included in the self-energy; a diagram that is not one-particle irreducible should not be included in the self-energy correction, because it has already been taken care of by the lower-order terms (diagrams not shown).
1.7.6. Summary
• Draw all possible one-particle-irreducible diagrams.
• Remove the external legs to get the self-energy $\Sigma$ (note: each fermion loop contributes a factor $-1$; for an $n$th-order diagram we need a factor $(-1)^n$ in the momentum-space formula).
• Use the Dyson equation to get the full Green’s function, $G(\vec k,i\omega_n)=\dfrac{1}{i\omega_n-\epsilon(\vec k)-\Sigma(\vec k,i\omega_n)}$.
• Other particles (photons, phonons, etc.) can be treated using the same approach.
1.8. Physical meaning
The physical meaning of the spectral function $A(\vec k,\omega)$ is the following: if we have a particle with momentum $\vec k$, then $A(\vec k,\omega)/2\pi$ is the probability for this particle to have energy $\omega$.
1.8.1. Ak, w ≥ 0
The proof can be found in the book of Mahan (page 151). We will not show it here.
1.8.2. $\int\frac{d\omega}{2\pi}\,A(\vec k,\omega)=1$
$$A(\vec k,\omega)=G^>(\vec k,\omega)\mp G^<(\vec k,\omega) \tag{1.195}$$
Integrating over all $\omega$,
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,A(\vec k,\omega)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,G^>(\vec k,\omega)\mp\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,G^<(\vec k,\omega) \tag{1.196}$$
Notice that
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,e^{-i\omega t}\,G^>(\vec k,\omega)=i\,G^>(\vec k,t)=\langle\psi(\vec k,t_0+t)\,\psi^\dagger(\vec k,t_0)\rangle \tag{1.197}$$
If we set $t=0$, we find that by integrating over all $\omega$ we get the equal-time correlation function
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,G^>(\vec k,\omega)=\langle\psi(\vec k,t_0)\,\psi^\dagger(\vec k,t_0)\rangle \tag{1.198}$$
Similarly,
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,G^<(\vec k,\omega)=\langle\psi^\dagger(\vec k,t_0)\,\psi(\vec k,t_0)\rangle \tag{1.199}$$
Therefore,
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,A(\vec k,\omega)=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,G^>(\vec k,\omega)\mp\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,G^<(\vec k,\omega)=\langle\psi(\vec k,t_0)\,\psi^\dagger(\vec k,t_0)\mp\psi^\dagger(\vec k,t_0)\,\psi(\vec k,t_0)\rangle=\langle1\rangle=1 \tag{1.200}$$
In other words, $\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,A(\vec k,\omega)$ measures the equal-time commutator/anti-commutator between $\psi$ and $\psi^\dagger$, which is unity.
1.8.3. Free particles
For particles without interactions, the Green’s function is
$$G_0(\vec k,i\omega_n)=\frac{1}{i\omega_n-\epsilon(\vec k)} \tag{1.201}$$
The spectral function $A(\vec k,\omega)$ is obtained via
$$A(\vec k,\omega)=-2\lim_{\eta\to0}\mathrm{Im}\,G(\vec k,\omega+i\eta)=-2\lim_{\eta\to0}\mathrm{Im}\,\frac{1}{\omega+i\eta-\epsilon(\vec k)}=\lim_{\eta\to0}\frac{2\eta}{[\omega-\epsilon(\vec k)]^2+\eta^2}=2\pi\,\delta(\omega-\epsilon(\vec k)) \tag{1.202}$$
For a fixed $\vec k$, if we plot $A(\vec k,\omega)$ as a function of $\omega$, we find a delta peak at $\omega=\epsilon(\vec k)$. This delta peak means that for a free particle with momentum $\vec k$, the energy of this particle can only take one value, $\omega=\epsilon(\vec k)$. And it is easy to check that
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,A(\vec k,\omega)=1 \tag{1.203}$$
1.8.4. Interacting particles
When interactions are taken into account,
$$G(\vec k,i\omega_n)=\frac{1}{i\omega_n-\epsilon(\vec k)-\Sigma(\vec k,i\omega_n)}=\frac{1}{i\omega_n-\epsilon(\vec k)-\mathrm{Re}\,\Sigma(\vec k,i\omega_n)-i\,\mathrm{Im}\,\Sigma(\vec k,i\omega_n)} \tag{1.204}$$
Sometimes the real part of $\Sigma$ is labeled $\Sigma_1$ and the imaginary part $\Sigma_2$:
$$\Sigma(\vec k,z)=\Sigma_1(\vec k,z)+i\,\Sigma_2(\vec k,z) \tag{1.205}$$
Therefore, the spectral function is
$$A(\vec k,\omega)=-2\lim_{\eta\to0}\mathrm{Im}\,G(\vec k,\omega+i\eta)=-2\lim_{\eta\to0}\mathrm{Im}\,\frac{1}{\omega+i\eta-\epsilon(\vec k)-\Sigma_1(\vec k,\omega+i\eta)-i\,\Sigma_2(\vec k,\omega+i\eta)}=\frac{-2\,\Sigma_2(\vec k,\omega)}{[\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega)]^2+\Sigma_2(\vec k,\omega)^2} \tag{1.206}$$
1.8.5. The real part of self-energy correction
Here, we consider the real part of $\Sigma(\vec k,\omega)$. For simplicity, we first take the limit $\mathrm{Im}\,\Sigma(\vec k,\omega)\to0$. When $\Sigma_2(\vec k,\omega)\to0$,
$$A(\vec k,\omega)=\frac{-2\,\Sigma_2(\vec k,\omega)}{[\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega)]^2+\Sigma_2(\vec k,\omega)^2}\to 2\pi\,\delta(\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega)) \tag{1.207}$$
If $\Sigma_1(\vec k,\omega)$ only depends on $\vec k$ and is independent of $\omega$,
$$A(\vec k,\omega)=2\pi\,\delta(\omega-\epsilon(\vec k)-\Sigma_1(\vec k)) \tag{1.208}$$
Here, the spectral function is also a delta function, which implies that for a particle with momentum $\vec k$ the energy of this particle is $\epsilon(\vec k)+\Sigma_1(\vec k)$. In other words, the interactions change the energy of a particle from $\epsilon(\vec k)$ into $\epsilon(\vec k)+\Sigma_1(\vec k)$. The physical meaning is that for free particles the energy of a particle is just the kinetic energy $\epsilon(\vec k)$, while for interacting particles the energy contains both the kinetic energy $\epsilon(\vec k)$ and the contribution from interactions, $\Sigma_1(\vec k)$.
In reality, $\Sigma_1(\vec k,\omega)$ also depends on $\omega$. For a fixed $\vec k$, we can then solve the equation $\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega)=0$ and write
$$A(\vec k,\omega)=2\pi\,\delta(\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega))=\frac{2\pi}{1-\partial_\omega\Sigma_1(\vec k,\omega)\big|_{\omega=\tilde\epsilon(\vec k)}}\,\delta(\omega-\tilde\epsilon(\vec k)) \tag{1.209}$$
where $\tilde\epsilon(\vec k)$ is the solution of the equation $\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega)=0$ (we assume for simplicity that there is only one solution). Here we used the fact that
$$\delta(f(x))=\sum_i\frac{\delta(x-x_i)}{|f'(x_i)|} \tag{1.210}$$
Notice that at fixed $\vec k$, $\dfrac{1}{1-\partial_\omega\Sigma_1(\vec k,\omega)\big|_{\omega=\tilde\epsilon(\vec k)}}$ is a function of $\vec k$, which we will call $Z_k$:
$$A(\vec k,\omega)=2\pi\,Z_k\,\delta(\omega-\tilde\epsilon(\vec k)) \tag{1.211}$$
Here, we learn that the real part of the self-energy does two things: (1) it renormalizes the dispersion relation, and (2) it changes the coefficient in front of the delta function.
But remember that
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,A(\vec k,\omega)=1 \tag{1.212}$$
while
$$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\,2\pi\,Z_k\,\delta(\omega-\tilde\epsilon(\vec k))=Z_k\neq1 \tag{1.213}$$
This means that if $Z_k\neq1$, $\mathrm{Im}\,\Sigma(\vec k,\omega)$ cannot be zero at all frequencies. So we must consider $\mathrm{Im}\,\Sigma(\vec k,\omega)$ to fully understand the interaction effects.
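A sketch of Eqs. (1.209)–(1.211) for a toy frequency-dependent real self-energy $\Sigma_1(\omega)$ (an arbitrary model function of my own choosing, not computed from any diagram): solve $\omega-\epsilon(\vec k)-\Sigma_1(\omega)=0$ for the renormalized energy $\tilde\epsilon(\vec k)$ and evaluate the quasiparticle weight $Z_k=1/(1-\partial_\omega\Sigma_1)$ there.

```python
import numpy as np

eps_k = 0.5
g, w0 = 0.3, 2.0
Sigma1  = lambda w: -g * w / (w**2 + w0**2)               # toy real self-energy Sigma_1(w)
dSigma1 = lambda w: -g * (w0**2 - w**2) / (w**2 + w0**2)**2   # its derivative d Sigma_1 / d w

# Solve w - eps_k - Sigma1(w) = 0 by bisection (F is monotonic for this toy model).
F = lambda w: w - eps_k - Sigma1(w)
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
e_tilde = 0.5 * (lo + hi)

Z_k = 1.0 / (1.0 - dSigma1(e_tilde))   # quasiparticle weight, Eq. (1.211)
print(e_tilde)   # renormalized energy, shifted away from eps_k = 0.5
print(Z_k)       # Z_k != 1, consistent with Eq. (1.213)
```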
1.8.6. The imaginary part of self-energy correction
If the imaginary part of the self-energy is nonzero, the spectral function is no longer a delta function:
$$A(\vec k,\omega)=\frac{-2\,\Sigma_2(\vec k,\omega)}{[\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega)]^2+\Sigma_2(\vec k,\omega)^2} \tag{1.214}$$
If $|\Sigma_2|$ is small, $A(\vec k,\omega)$ still shows a peak when we fix $\vec k$ and plot $A(\vec k,\omega)$ as a function of $\omega$. The location of the peak is at $\tilde\epsilon(\vec k)$, where again $\tilde\epsilon(\vec k)$ is the solution of the equation $\omega-\epsilon(\vec k)-\Sigma_1(\vec k,\omega)=0$. The width of the peak is $|\Sigma_2(\vec k,\omega)|$.
The fact that $A(\vec k,\omega)$ is not a delta peak implies that for a fixed $\vec k$ the energy of the particle does not take a unique value, but has a distribution. In other words, there is an uncertainty in the energy. This comes from the uncertainty principle for time and energy. The uncertainty principle tells us that to measure the energy with infinite accuracy, the measurement must take an infinitely long time. Here, what we are trying to do is to measure the energy of a particle with a fixed momentum $\vec k$. If the momentum of this particle remains $\vec k$ for an infinitely long time, we can determine its energy accurately. But if the momentum of the particle varies with time, we will not have enough time to measure the energy, and thus the energy has an uncertainty $\hbar/\tau$, where $\tau$ is the time span over which the momentum remains unchanged. For non-interacting systems, the momentum of the particle never changes, so we can determine the energy exactly; therefore the spectral function is a delta function. For interacting systems, because a particle with momentum $\vec k$ will collide with other particles, its momentum can only remain invariant between two collisions. If the average time span between two collisions is $\tau$, which is known as the collision time or the lifetime of an electron, the energy acquires an uncertainty $\hbar/\tau$. This is the width of the peak, i.e. the imaginary part of the self-energy.
In summary, the real part of the self-energy modifies the dispersion relation, while the imaginary part gives the inverse of the lifetime of the particle.
If $|\Sigma_2(\vec k,\omega)|\ll\tilde\epsilon(\vec k)$, the width of the peak is small, and we can think of these peaks as “almost” free particles. Peaks of this type in the spectral function are called quasi-particles. They are not the particles that we originally considered: they have a different dispersion $\tilde\epsilon(\vec k)$ and they have a finite lifetime $1/|\Sigma_2|$.
If $|\Sigma_2(\vec k,\omega)|\gg\tilde\epsilon(\vec k)$, the concept of a particle becomes ill-defined. There, particles scatter with each other so frequently, i.e. the particles are so strongly correlated with each other, that we cannot separate a single particle from the environment.
Experimentally and theoretically, to determine whether a system contains quasi-particles or not, we plot $A(\vec k,\omega)$ and search for peaks.
1.8.7. Diagrams
In QED, the interaction $V$ in our diagrams is replaced by the Green’s function of photons. There, we can understand the interaction diagram as one electron emitting a photon, which is then absorbed by another electron.