SOLID-STATE PHYSICS III 2007
O. Entin-Wohlman
1. ELECTRONIC TRANSPORT PROPERTIES
Thermal equilibrium
At thermal equilibrium, the number of electrons having energy E is given by the Fermi
distribution,
f(E) = 1/[ e^{β(E−µ)} + 1 ] ,   (1.1)
in which β ≡ 1/(kB T ) is the inverse temperature (kB is the Boltzmann constant) and µ
is the chemical potential. At zero temperature (β → ∞) the chemical potential is equal
to the Fermi energy, EF , and the Fermi function becomes a step-function, such that all
states with energy E ≤ EF are full, and all states with energy above EF are empty. In the
grand-canonical ensemble, where the chemical potential is fixed, the number of electrons
is temperature-dependent. The electronic density (number of electrons per unit volume) is
given by
N = ∫_{−∞}^{∞} dE N(E) f(E) ,   (1.2)
where N (E) is the density of states (number of states having energy E per unit volume).
Calculation of integrals containing the Fermi function
In metals, the relevant energies are around the chemical potential (≃ the Fermi energy), which is much larger than the thermal energy kB T at room temperature (and below). Therefore, one may calculate the integral in Eq. (1.2) explicitly. We present now the general procedure for calculating such integrals.
Suppose that we need to calculate
I = ∫_{−∞}^{∞} dE g(E) f(E) ,   (1.3)
where the function g contains the density of states and is a well-behaved function of the energy E. Integrating by parts, with G(E) = ∫_{−∞}^{E} dE′ g(E′), we have
I = G(E) f(E) |_{−∞}^{∞} − ∫_{−∞}^{∞} dE G(E) ∂f(E)/∂E .   (1.4)
At very low negative energies there are no states and therefore the density of states (which
is included in the function g) tends to zero, while at very high positive energies the Fermi
function, Eq. (1.1), vanishes. Therefore, the first term in Eq. (1.4) vanishes. To treat the
second term there, we expand the function G around the chemical potential,
G(E) = G(µ) + G′(µ)(E − µ) + (1/2) G″(µ)(E − µ)² + … .   (1.5)
Then,
I = Σ_n I_n ,   I_n = −(1/n!) G^(n)(µ) ∫_{−∞}^{∞} dE (E − µ)^n (d/dE)[ 1/(e^{β(E−µ)} + 1) ]
  = G^(n)(µ) (1/β^n) [ (1/n!) ∫_{−∞}^{∞} dx x^n/(e^{x/2} + e^{−x/2})² ] .   (1.6)
The term in the square brackets is a number, which is zero for odd n’s. For even values of
n it can be calculated, to give
I = G(µ) + (π²/6)(kB T)² G″(µ) + … .   (1.7)
For a free (three-dimensional) electron gas, the density of states is given by
N(E) = N(0)(E/EF)^{1/2} ,   E ≥ 0 ,   (1.8)
where N (0) is the density of states at the Fermi energy. Returning to our calculation of the
electronic density, we find
G(E) = (2/3) N(0) E^{3/2}/EF^{1/2} ,   (1.9)
and consequently, assuming that µ ' EF ,
N = (2/3) N(0) EF [ 1 + (π²/8)(kB T/EF)² + … ] .   (1.10)
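The Sommerfeld-type expansion (1.7) is easy to check numerically. The sketch below is an added illustration, not part of the original notes; it takes kB = 1 and g(E) = E^{1/2} (a free-electron-like density of states, up to a constant) and compares a direct numerical evaluation of Eq. (1.3) with the expansion:

```python
import math

# Units: k_B = 1.  Take g(E) = sqrt(E), a free-electron-like density of states,
# so that G(E) = (2/3) E^{3/2} and G''(E) = (1/2) E^{-1/2}.
mu, T = 50.0, 1.0            # chemical potential and temperature, mu >> k_B T

def fermi(E):
    return 1.0 / (math.exp((E - mu) / T) + 1.0)

# Direct midpoint-rule evaluation of I = \int_0^infty dE g(E) f(E);
# the integrand is negligible far above mu, so we cut off at mu + 40 T.
dE = 1e-3
n = int((mu + 40.0 * T) / dE)
numeric = dE * sum(math.sqrt((i + 0.5) * dE) * fermi((i + 0.5) * dE) for i in range(n))

# Sommerfeld expansion, Eq. (1.7): I ~ G(mu) + (pi^2/6) (k_B T)^2 G''(mu)
sommerfeld = (2.0 / 3.0) * mu**1.5 + (math.pi**2 / 6.0) * T**2 * 0.5 / math.sqrt(mu)

print(numeric, sommerfeld)
```

The two values agree to a relative accuracy of order (kB T/µ)⁴, the size of the first neglected term in the expansion.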
∗ ∗ ∗Exercise. Find the total energy of the free electron gas.
The Boltzmann equation
The Boltzmann equation (also called the transport equation) is an equation for the distribution function of the electrons, which replaces the Fermi function when the electrons are
out of equilibrium. The electrons are out of equilibrium since they experience external fields
(i.e., perturbations). The Boltzmann-equation approach assumes that those perturbations
change slowly enough in space and in time. Under such conditions, we may use semiclassical
considerations.
Let us first ignore the collisions of the electrons (with lattice imperfections, other electrons,
etc.) and assume that the motion is due only to external fields. When the perturbations
are smooth enough, the r (position) and the k (momentum, using units in which ℏ = 1)
coordinates of every electron evolve in time according to the semiclassical equations of motion
ṙ = v(k) ,   k̇ = −e[ E + (1/c) v(k) × H ] ≡ F(r, k) .   (1.11)
Here, v is the velocity, e is the electron charge, E is the electric field, H is the magnetic field,
and F is the Lorentz force. Considering an infinitesimal time interval dt, then an electron
which at time t is at r with momentum k must have been at time t − dt at r − v(k)dt, with
momentum k − Fdt. In the absence of collisions, the electron distribution, g(r, k, t) at time
t is identical to the distribution function at time t − dt,
g(r, k, t) = g(r − v(k)dt, k − F dt, t − dt) ,   (1.12)
since no electron has been scattered off (or scattered into) the classical trajectory. Expanding
in the infinitesimal time interval dt, we find
∂g/∂t + v · ∂g/∂r + F · ∂g/∂k = 0 ,   (no collisions) .   (1.13)
Collisions give an additional source for the change of the distribution function with time.
Now g is reduced due to ‘scattering-out’ of electrons away from the classical trajectory, and
it is enhanced due to ‘scattering in’ of electrons,
(∂g/∂t)|_coll = (∂g/∂t)|_in − (∂g/∂t)|_out ,   (1.14)
such that
∂g/∂t + v · ∂g/∂r + F · ∂g/∂k + (∂g/∂t)|_coll = 0 .   (1.15)
This is the Boltzmann equation.
Once one solves the Boltzmann equation and finds the distribution function, one may obtain
various physical quantities. For example, the current density, J, is given by
J = e Σ_k v(k) g(k) .   (1.16)
One can also find the energy current density, namely, analyze the heat transport.
We will discuss this quantity later.
The collision term of the Boltzmann equation
There are various sources contributing to the collision term of the Boltzmann equation. For
example, electrons may be scattered by randomly-distributed static impurities. This will give
rise to elastic scattering processes, during which the electrons can change just the direction
of the momentum, but not their energy. Another source is the electron-phonon interaction,
which gives rise to inelastic scattering processes, in which the electrons change their momenta
and their energies. Likewise, the electrons may be scattered off other electrons, or off
localized magnetic moments.
These various processes can all be described by assigning to each collision process a transition
probability per unit time (related to the scattering cross-section). This quantity gives the rate
for an electron to be scattered from a state k to another state k′. (We interchangeably use the
momenta or the energies to specify electronic states.) Denoting that transition probability
per unit time by W_{kk′}, we have
(∂g(k)/∂t)|_coll = Σ_{k′} { W_{k′k} g(k′)[1 − g(k)] − W_{kk′} g(k)[1 − g(k′)] } .   (1.17)
The first term on the right-hand side of Eq. (1.17) is the scattering-in term: the transition probability per unit time to go from any state k′ to the state k is multiplied by the distribution function of electrons having k′ (ensuring that there are electrons to be scattered in) and by the probability to have available room at k, namely, by 1 − g(k). This is summed over all states k′. Similarly, the second term in Eq. (1.17) is the scattering-out term. Note that the right-hand side of Eq. (1.17) vanishes when summed over k. This is a result of the conservation of the number of electrons.
∗ ∗ ∗Exercise. At equilibrium, the distribution function is the Fermi function. Then the collision term, Eq. (1.17), vanishes. Derive the necessary condition on the transition probability
per unit time for this to happen.
Elastic impurity scattering
The explicit calculation of the collision term in the Boltzmann equation is usually done
up to second-order (in perturbation theory) in the scattering potential. (Remember that
the entire Boltzmann-equation approach is valid when the perturbations in the system are
slowly varying.)
When the impurity concentration, ni , is small, we may calculate the transition probability
per unit time due to a single impurity, and then multiply the result by the impurity concentration. Denoting the potential of each impurity by U (r), the transition probability per
unit time is given by the Fermi golden rule (ℏ = 1),
W_{kk′} = 2π ni δ(Ek − Ek′) |⟨k|U|k′⟩|² .   (1.18)
We note that in this case
W_{kk′} = W_{k′k} .   (1.19)
This is called ‘detailed balance’. When this is the case, the collision term (1.17) becomes
(∂g(k)/∂t)|_coll = Σ_{k′} W_{kk′} [ g(k′) − g(k) ] .   (1.20)
This vanishes at equilibrium, i.e., when g is replaced by the Fermi function, f [because of the energy-conserving delta function in Eq. (1.18)].
Linearization of the Boltzmann equation: the electrical conductivity
Let us consider the solution of the Boltzmann equation in a simple situation, in which the
only perturbation on the electrons is a constant (in space and time) electric field, and the
sole source of collisions is elastic impurity scattering.
Since the perturbation does not depend on time, the distribution function g does not depend on t either, i.e., ∂g/∂t = 0. Such a situation is called ‘steady state’. In the present case, we
also have that ∂g/∂r = 0, since the field is uniform. Consequently, the distribution depends
on the momentum alone. We will also assume that the system is not too far away from
equilibrium, and will seek a solution for g up to first-order in the electric field.
Consider the field-driven term in the Boltzmann equation (1.15). Writing g(k) = f(Ek) +
g^(1)(k), where g^(1) is linear in the field, we have (up to first order in the electric field)
F · ∂g(k)/∂k = −eE · ∂g(k)/∂k ≃ −eE · ∂f(Ek)/∂k
  = eE · (dEk/dk) (−∂f/∂E) = eE · v(k) (−∂f/∂E) .   (1.21)
We therefore conclude that the distribution function must have the form
g(k) = f(E) − (∂f/∂E) eE · v(k) τ(k) ,   (1.22)
where E ≡ Ek . The derivative of the Fermi function restricts the relevant energies to be at
about the Fermi energy. This is physically clear: the electric field affects only the electrons
lying close to the Fermi energy. We therefore may safely assume that the (yet unknown)
function τ(k) ≃ τ(EF) ≡ τ.
We next consider the collision term, using Eqs. (1.18) and (1.20). We note that since the scattering is elastic, necessarily |k| = |k′|, namely, the impurity scattering just changes the direction of the wave vector. This means that the transition probability per unit time (i.e., the scattering cross section) in the case of elastic scattering depends only on the angle between k and k′. Because of the energy-conserving delta function in Eq. (1.18), the sum in Eq. (1.20) reduces to an integration over that angle. Putting all this together, the Boltzmann equation in this case gives
E · v(k) = τ ∫ dΩ′ D(cos θ) [ v(k) − v(k′) ] · E ,   (1.23)
where D is the scattering cross-section, θ is the angle between the directions of k and k′, and dΩ′ marks the angular integration over the angles of k′.
Generally one expects that the direction of v(k) is determined by k itself. Choosing the polar direction along k, the vector E such that E = E(sin α, 0, cos α), and the vector k′ such that k′ = k(sin θ cos φ, sin θ sin φ, cos θ), we find
k′ · E = Ek (sin α sin θ cos φ + cos α cos θ) .   (1.24)
When this is inserted into Eq. (1.23), the angular integration kills the first term. Then Eq. (1.23) yields
1/τ = ∫ dΩ′ D(cos θ) (1 − cos θ) .   (1.25)
We see that the collision rate 1/τ is dominated by back scattering, namely, by scattering
processes which change significantly the angle. Forward scattering (for which cos θ is close
to 1) contributes only little to the relaxation rate. Indeed, had W_{kk′} dictated only forward scattering, the collision rate would have vanished (namely, the collision term would have disappeared from the Boltzmann equation). The relaxation time given by Eq. (1.25) is the transport relaxation time. In fact, our result here is equivalent to replacing the collision term by
(∂g/∂t)|_coll ≃ −[ g(k) − f(Ek) ]/τ .   (1.26)
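The role of the (1 − cos θ) weight in Eq. (1.25) can be illustrated numerically. The sketch below is an added illustration (the two cross-sections are invented for the comparison and are normalized so that the total scattering rate ∫dΩ′ D = 1 in both cases); it shows that a strongly forward-peaked cross-section contributes far less to 1/τ than an isotropic one with the same total rate:

```python
import math

def transport_rate(D, n=200_000):
    """1/tau = integral over dOmega' of D(cos theta) (1 - cos theta),
    midpoint rule in cos theta, with dOmega' = 2 pi d(cos theta)."""
    dc = 2.0 / n
    total = 0.0
    for i in range(n):
        c = -1.0 + (i + 0.5) * dc
        total += 2.0 * math.pi * D(c) * (1.0 - c) * dc
    return total

# Isotropic scattering, normalized to unit total rate: D = 1/(4 pi)
iso = transport_rate(lambda c: 1.0 / (4.0 * math.pi))

# Forward-peaked scattering with the same total rate:
# D proportional to exp(kappa (cos theta - 1)), kappa = 50 (assumed).
kappa = 50.0
norm = 2.0 * math.pi * (1.0 - math.exp(-2.0 * kappa)) / kappa
fwd = transport_rate(lambda c: math.exp(kappa * (c - 1.0)) / norm)

print(iso, fwd)   # forward scattering barely relaxes the current
```

For the isotropic case the transport rate equals the total rate; for the forward-peaked case it is smaller by roughly the mean value of 1 − cos θ, here about 1/κ.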
We have now determined all ingredients of our solution, Eq. (1.22), to the Boltzmann equation in the present simple case. In order to find the (charge) current, it remains to insert the solution into the expression for J, Eq. (1.16). Obviously, only the second term of Eq. (1.22) contributes to the current (the first term does not contribute, because its angular integration vanishes). Hence we have
J = e² Σ_k τ(k) (−∂f(Ek)/∂Ek) [ E · v(k) ] v(k) .   (1.27)
The derivative of the Fermi function restricts the relevant energies contributing to the sum
to be on the Fermi level. The angular integration is a bit more tricky. For a free electron gas, v(k) = k/m (m is the mass of the electron). Then we find
J = (1/3) e² N(0) vF² τ E ,   (1.28)
where the factor 1/3 comes from the angular integration, and vF is the Fermi velocity. The factor vF²τ/3 is the diffusion coefficient (in three dimensions). In general dimension d, the
diffusion coefficient, D, is
D = vF² τ / d .   (1.29)
Comparing the result (1.28) with the one for the electronic density, Eq. (1.10) (omitting the
temperature-dependent correction term, which is usually small) we can re-write the above
equation in the form
J = E e²τN/m ≡ E σD ,   (1.30)
where
σD = e²τN/m = N(0) e² D   (1.31)
is the electrical conductivity. This expression for σD is termed ‘the Drude conductivity’.
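As an order-of-magnitude illustration of Eq. (1.31) (an addition to the notes; textbook copper-like values are assumed for the electron density and relaxation time, and SI units are used, so no factor of c appears):

```python
# Drude conductivity, Eq. (1.31): sigma_D = N e^2 tau / m  (SI units).
# Assumed, copper-like parameters:
e = 1.602176634e-19      # electron charge [C]
m = 9.1093837015e-31     # electron mass [kg]
n = 8.5e28               # conduction-electron density of copper [m^-3]
tau = 2.5e-14            # room-temperature relaxation time [s] (assumed)

sigma = n * e**2 * tau / m          # conductivity [S/m]
rho = 1.0 / sigma                   # resistivity [Ohm m]
print(f"sigma = {sigma:.2e} S/m, rho = {rho:.2e} Ohm m")
```

The result is of the order of 10⁷ S/m, the familiar conductivity scale of good metals.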
∗ ∗ ∗Exercise. Repeat the above calculation for the case when the electric field is still uniform (constant in space) but varies in time, such that E(t) = E cos ωt, and find the frequency-dependent conductivity.
Linearization of the Boltzmann equation: transport coefficients
Let us now include in the previous treatment the effect of a temperature gradient. Namely,
the temperature is not fixed along the sample, but changes along a certain direction. When this gradient is small enough, we may suppose that there is still a well-defined temperature, T(r), at each point, such that
f(Ek) → 1/[ e^{(Ek−µ(r))/(kB T(r))} + 1 ] .   (1.32)
Here we have assigned an r-dependence also to the chemical potential. This should be viewed as adding to the (fixed) chemical potential an electric potential, which is created in the system because, under the effect of the temperature gradient, the electron density changes in order to screen the effect of that gradient. (In fact, we should have done the same also in the treatment above.) The term v · ∂g/∂r in the Boltzmann equation (1.15) now yields
v · ∂/∂r [ 1/(e^{(Ek−µ(r))/(kB T(r))} + 1) ] = (−∂f/∂E) v · [ ((Ek − µ)/T)(−∇T) − ∇µ ] .   (1.33)
Adding the effect of a uniform electric field E, the linearized Boltzmann equation takes the
form
(−∂f/∂E) v(k) · [ ((Ek − µ)/T)(−∇T) + e(E − ∇µ/e) ] = −(∂g/∂t)|_coll .   (1.34)
At this point we assume that the collision term is due to static impurity scattering, so that, similarly to Eq. (1.22) above, the distribution function is given by
g(k) = f(Ek) + (−∂f/∂E) τ v(k) · [ ((Ek − µ)/T)(−∇T) + e(E − ∇µ/e) ] .   (1.35)
(Here, it is implicitly assumed that both ∇T and ∇µ are uniform in space.)
Inserting the solution Eq. (1.35) into the expression for the (charge) current density, Eq. (1.16), we find
J = e² Σ_k τ(k) v(k) (−∂f/∂E) [ v(k) · (E − ∇µ/e) ]
  + e Σ_k τ(k) v(k) (−∂f/∂E) ((Ek − µ)/T) [ v(k) · (−∇T) ] .   (1.36)
The first term here is similar to what we have found above [see Eq. (1.27)]. It gives the
electrical conductivity, describing the response of the electrons to both the electric field E
and the potential gradient included in ∇µ/e. The combination E − ∇µ/e is called the
‘electro-chemical potential’.
The second term in Eq. (1.36) shows that the effect of a temperature gradient on the electrons is to produce an electric current. This is called the ‘thermo-electric effect’. However, there is another effect due to the temperature gradient: it produces a heat current density. The heat current density, denoted U, is defined as
U = Σ_k v(k) (Ek − µ) g(k) .   (1.37)
Since µ represents the free energy of the electron, the heat current is given by the current
density of the ‘internal energy’ (i.e., Ek ), minus the free energy (which is indeed the definition
of heat). Introducing the solution for the distribution function, Eq. (1.35), into Eq. (1.37), we find
U = e Σ_k τ(k) (Ek − µ) v(k) (−∂f/∂E) [ v(k) · (E − ∇µ/e) ]
  + Σ_k τ(k) ((Ek − µ)²/T) v(k) (−∂f/∂E) [ v(k) · (−∇T) ] .   (1.38)
Inspecting Eqs. (1.36) and (1.38), we see that it is natural to define the following quantity
K^(n) = Σ_k τ(k) (Ek − µ)^n (−∂f/∂E) v(k) ⊗ v(k) .   (1.39)
(K is a tensor.) Then, the general transport equations take the form
J = e² K^(0) · E_ef + (e/T) K^(1) · (−∇T) ,
U = e K^(1) · E_ef + (1/T) K^(2) · (−∇T) ,   E_ef = E − ∇µ/e .   (1.40)
For example, suppose that we apply a temperature gradient on an open system, so that
there is no electric current flowing through it. Then the first of Eqs. (1.40) tells us that an electric field is set up in the system, given by
E = (1/eT) K^(0)−1 K^(1) ∇T .   (1.41)
Using this result in the second of Eqs. (1.40), we find that the heat current is
U = κ(−∇T) ,   (1.42)
where κ (usually, a tensor) is the thermal conductivity, given by
κ = (1/T) [ K^(2) − K^(1) K^(0)−1 K^(1) ] .   (1.43)
How does one actually calculate the transport coefficients? To this end we use our trick of
calculating the sums which include the derivative of the Fermi function. We write Eq. (1.39) in the form
K^(n) = ∫ dE (−∂f/∂E) ∫ (dΩk/4π) N(E) τ(E) (E − µ)^n v ⊗ v ≡ ∫ dE Φ(E) (−∂f/∂E) ,   (1.44)
and use Eqs. (1.4), (1.5), and (1.7) to obtain
∫ dE Φ(E) (−∂f/∂E) = Φ(µ) + (π² kB² T²/6) ∂²Φ/∂E² |_{E=µ} .   (1.45)
∗ ∗ ∗Exercise. Show that when only the leading orders are kept, one finds
K^(2) = (π² kB² T²/3) K^(0) ,   (1.46)
and
K^(1) = (π² kB² T²/3) ∂K^(0)/∂E |_{E=µ} .   (1.47)
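Combining Eq. (1.46) with Eq. (1.43) (where, to leading order in T, the K^(1) contribution to κ is negligible) gives κ/(σT) = π²kB²/(3e²), the Wiedemann-Franz relation; the constant is known as the Lorenz number. A quick numerical evaluation in SI units (an added illustration):

```python
import math

kB = 1.380649e-23        # Boltzmann constant [J/K]
e = 1.602176634e-19      # electron charge [C]

# Lorenz number L = kappa/(sigma T) = (pi^2/3)(kB/e)^2, following from
# kappa ~ K^(2)/T = (pi^2 kB^2 T / 3) K^(0) and sigma = e^2 K^(0).
L = (math.pi**2 / 3.0) * (kB / e) ** 2
print(f"L = {L:.3e} W Ohm / K^2")
```

The value, about 2.44 × 10⁻⁸ W Ω/K², is independent of the material parameters, which is why the ratio κ/(σT) is so similar across simple metals.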
Thermo-electric effects
Let us consider a circuit (a simple ring, for concreteness), which is made of two materials.
The ring is connected to a battery, which provides the electric field E, and is kept at a fixed
temperature (namely, there is no temperature gradient). According to Eqs. (1.40),
J = e² K^(0) E ,   U = e K^(1) E ,   (1.48)
so that
U = (1/e) K^(0)−1 K^(1) J ≡ ΠJ .   (1.49)
Since the current driven in the ring, J, is the same in both metals forming the ring, the heat current is necessarily different, being ΠA J in metal A and ΠB J in metal B. At the two junctions, the balance is restored: one junction absorbs heat flux, say (ΠA − ΠB)J, and
the other junction emits heat flux, so that the first one is heated, and the second is cooled.
This is the Peltier effect.
Transport in the presence of a (constant) magnetic field
Let us now consider the combined effect of a constant magnetic field and a constant electric
field on the electrons. Assuming that the collision integral is due only to elastic impurity
scattering [see Eq. (1.26)], the Boltzmann equation pertaining to this configuration reads
−e[ E + (1/c) v(k) × H ] · ∂g(k)/∂k = [ g(k) − f(Ek) ]/τ .   (1.50)
We again seek a solution for the distribution function which is just slightly away from
equilibrium,
g(k) = f(Ek) + g^(1)(k) ,   (1.51)
where, in analogy with Eq. (1.22),
g^(1)(k) = eτ (−∂f(Ek)/∂Ek) v(k) · A(k) .   (1.52)
(The function A(k) has yet to be found.) We will see below why such a form is dictated in
the present case.
Since
∂f(Ek)/∂k = (∂f(Ek)/∂Ek)(∂Ek/∂k) = (∂f(Ek)/∂Ek) v(k) ,   (1.53)
we see that the zeroth-order term of the distribution Eq. (1.51) does not contribute to the magnetic-field part of the Lorentz force (it would include the product v × H · v, which vanishes). However, in the electric-field part of the Lorentz force we may keep just the zeroth-order term (as before). As a result, we find that Eq. (1.50) becomes (to lowest possible order in the electric field)
e (−∂f(Ek)/∂Ek) v(k) · E = [ g(k) − f(Ek) ]/τ + (e/c) [ v(k) × H ] · ∂g(k)/∂k
  ≡ g^(1)(k)/τ + (e/c) [ v(k) × H ] · ∂g^(1)(k)/∂k .   (1.54)
Arranging terms, we find
g^(1)(k) = eτ (−∂f(Ek)/∂Ek) v(k) · E − (eτ/c) [ v(k) × H ] · ∂g^(1)(k)/∂k
  = eτ (−∂f(Ek)/∂Ek) v(k) · E − (e²τ²/mc) (−∂f(Ek)/∂Ek) [ H × E ] · v(k) + … .   (1.55)
The last equality here is obtained upon solving by iterations, and assuming the free electron
gas relation
v(k) = k/m .   (1.56)
We see from the form of Eq. (1.55) that the general solution must indeed have the form (1.52). Moreover, it appears that the vector A there is independent of the momentum. Therefore, using the solution Eqs. (1.51) and (1.52) in Eq. (1.54), we find
v · E = v · A + (eτ/mc) v · (H × A) ,   i.e.,   E = A + (eτ/mc) H × A .   (1.57)
Since the last equality gives
H × E = H × A + (eτ/mc) [ H(H · A) − A H² ] ,   and   H · A = H · E ,   (1.58)
we find
A = [ E − (eτ/mc) H × E + (eτ/mc)² H(H · E) ] / [ 1 + (Heτ/mc)² ] .   (1.59)
Inspecting Eq. (1.59), we see that the quantity Heτ/(mc) must be dimensionless. Indeed,
ωc = eH/(mc) ,   (1.60)
is the cyclotron frequency. Moreover, from our discussion of the electrical conductivity above [see Eq. (1.28)], and the formal analogy between Eq. (1.52) and Eq. (1.22), we find that the electrical current is given by
J = σD A ,   where σD = Ne²τ/m is the Drude conductivity .   (1.61)
Using our result (1.59) in Eq. (1.61), we finally obtain that the electrical current, when the electrons experience both electric and magnetic fields, is given by
J = σD [ E − ωcτ Ĥ × E + (ωcτ)² Ĥ(Ĥ · E) ] / [ 1 + (ωcτ)² ] .   (1.62)
(Here, Ĥ = H/H.)
Without loss of generality, we may take the magnetic field to be along the ẑ direction. Then,
the z component of Eq. (1.62) is just
Jz = σD Ez ,   (1.63)
namely the motion along the magnetic field is unchanged. On the other hand, for the
motion in the plane perpendicular to the magnetic field, we have
Jx = σD [ Ex + ωcτ Ey ] / [ 1 + (ωcτ)² ] ,   Jy = σD [ Ey − ωcτ Ex ] / [ 1 + (ωcτ)² ] ,   (1.64)
which we can put in matrix form,
( Jx )     σD        (   1    ωcτ ) ( Ex )
(    ) = ―――――――――  (            ) (    )   ⇒   J = σE .   (1.65)
( Jy )   1 + (ωcτ)²  ( −ωcτ    1  ) ( Ey )
Here σ is the conductivity tensor. Inverting this tensor, we find that the resistivity tensor ρ is such that
E = ρJ ,   with   ρ = (1/σD) (  1    −ωcτ )   (1.66)
                             ( ωcτ     1  ) .
Writing explicitly Eq. (1.66), we have
Ex = (1/σD) Jx − (ωcτ/σD) Jy ,
Ey = (1/σD) Jy + (ωcτ/σD) Jx .   (1.67)
It follows that the longitudinal resistivity (electric field and current along the same direction) is unaffected by the magnetic field. On the other hand, the transverse resistivity is odd in the magnetic field; this is one of the Onsager relations. In particular, we see that when the current flows along, say, the x direction due to an electric field Ex, and the system is open along the y direction, such that Jy = 0, then the magnetic field (via the Lorentz force) causes the appearance of a voltage along the y direction, given by
Ey = Jx ωcτ/σD = Jx (eHτ/mc)(m/(Ne²τ)) = Jx H/(Nec) .   (1.68)
This is the Hall voltage. One notes that the Hall coefficient, 1/(Nec), is independent of the
relaxation time.
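The inversion leading from Eq. (1.65) to Eq. (1.66) is a two-line check with numpy; the values σD = 1 and ωcτ = 2 below are arbitrary illustrative choices:

```python
import numpy as np

sigma_D, wct = 1.0, 2.0       # assumed illustrative values: sigma_D, omega_c * tau

# Conductivity tensor of Eq. (1.65)
sigma = sigma_D / (1.0 + wct**2) * np.array([[1.0, wct],
                                             [-wct, 1.0]])

# Resistivity tensor: rho = sigma^{-1}, to be compared with Eq. (1.66)
rho = np.linalg.inv(sigma)
expected = (1.0 / sigma_D) * np.array([[1.0, -wct],
                                       [wct, 1.0]])
print(rho)
```

Note that ρ_xx = 1/σD carries no magnetic-field dependence, while ρ_yx = ωcτ/σD is odd in the field, exactly as stated above.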
∗ ∗ ∗Exercise. We have found above that for the free electron gas, the longitudinal resistivity
is not affected at all by a constant magnetic field. Consider a system made up of two electron
species, which differ from one another by their masses and by their relaxation times. Find
the longitudinal resistivity in this case. Make sure that the result is even in the magnetic
field (this is another Onsager relation).
2. WEAK LOCALIZATION
Diffusion of classical and quantum-mechanical particles
In the (classical) Boltzmann approach consecutive scattering events of a certain particle are
assumed to be independent of each other, i.e., collisions are un-correlated. This implies that
multiple scattering of a particle at a particular scattering center is not taken into account.
Consequently, if there is a finite probability for such multiple scattering to occur, the basic
assumption of the independence of scattering events breaks down and the validity of the
Boltzmann equation results, notably the Drude conductivity, becomes, at least, questionable.
To investigate this point we consider the diffusive motion of a particle in a d−dimensional
disordered system (namely, a system containing various scattering centers, in particular
elastic ones). Let the particle be located at r = 0 at time t = 0. Because of its diffusive
motion, the particle will be located at some later time t within a smooth volume, whose size
is determined by the probability distribution P (r, t). That probability distribution obeys
the diffusion equation
∂P/∂t − D∇²P = 0 ,   D = vF²τ/d is the diffusion coefficient .   (2.1)
The solution of the diffusion equation (in an infinite volume) is given by
P(r, t) = (4πDt)^{−d/2} e^{−r²/4Dt} .   (2.2)
∗ ∗ ∗Exercise. Find the average distance that the particle reaches after a time t.
At a time t much longer than the relaxation time τ, t ≫ τ, we may ignore the exponential in Eq. (2.2). Then P(r, t) gives the volume covered by the diffusing particle,
P(r, t) ≃ 1/V_diff ≃ { 1/(Dt)^{1/2} ,   d = 1 ,
                     { 1/(Dt) ,        d = 2 ,
                     { 1/(Dt)^{3/2} ,  d = 3 .   (2.3)
One may view the diffusion volume also as the probability to return to the origin after a time t.
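The Gaussian (2.2) is easy to sanity-check numerically. The sketch below is an added illustration (d = 1 and D = t = 1 are assumed for simplicity); it verifies the normalization and the diffusive spread ⟨x²⟩ = 2Dt:

```python
import math

D, t = 1.0, 1.0

def P(x):
    # d = 1 diffusion propagator, Eq. (2.2)
    return math.exp(-x * x / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

# Midpoint-rule moments over x in [-L, L], with L much larger than sqrt(D t)
L, n = 20.0, 200_000
dx = 2.0 * L / n
xs = [-L + (i + 0.5) * dx for i in range(n)]
norm = dx * sum(P(x) for x in xs)          # should be 1 (probability conservation)
x2 = dx * sum(x * x * P(x) for x in xs)    # should be 2 D t (diffusive spread)
print(norm, x2)
```

The mean-square displacement growing linearly in t, ⟨x²⟩ = 2Dt, is the defining signature of diffusive (as opposed to ballistic) motion.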
A quantum-mechanical particle can diffuse from a certain point a to another point b via
many ‘trajectories’ or ‘tubes’, each of them having a typical ‘width’ given by the Fermi
wavelength λF ,
λF = 1/(mvF) .   (2.4)
We note that a diffusion picture for the electron is valid as long as the mean free path, ℓ = vFτ, is much longer than λF = 1/kF, namely,
kF ℓ ≫ 1 .   (2.5)
To find the probability for the particle to reach point b, we have to sum the probability amplitudes of all the separate paths, and then take the absolute value squared. In so doing,
we are assuming coherent motion of the particle. Namely, we presume that the quantum-mechanical particle retains its phase. Phase coherence may be lost, for example, by inelastic processes. The cross-section for those increases as the temperature increases. We will therefore assume that we are considering very low temperatures, such that the time over which the particle retains its phase coherence, τφ, is much longer than the relaxation time, τφ ≫ τ.
Let us denote the probability amplitude of trajectory i connecting point a to point b by Ai .
The total quantum-mechanical probability to reach point b, denoted W , is
W = | Σ_i A_i |² = Σ_i |A_i|² + Σ_{i≠j} A_i A*_j .   (2.6)
The first term in Eq. (2.6) is just the classical result; the second is due to the interference of the path amplitudes, and therefore it is an exclusively quantum-mechanical effect. It is the second term in Eq. (2.6) which is neglected in the Boltzmann-equation approach. Under many circumstances this neglect is justified, since each trajectory (path) carries a different phase, so that on average the interference is destructive and the quantum-mechanical correction is unimportant.
There is, however, one particular exception, namely if point a and point b coincide (within
a distance λF ). Then the path can be traversed in two opposite directions, forward and
backward. In that case, the probability W is just the return probability (the probability to
return to where the particle came from). The forward path and the backward path have
the same phase and therefore interfere constructively. If we denote the forward probability
by A1 and the backward one by A2, then we have A1 = A2 ≡ A. According to Eq. (2.6) the
classical result will be 2|A|2 , while the quantum-mechanical one is 4|A|2 . It follows that the
quantum mechanical probability to return to the origin is twice that obtained classically.
Consequently, quantum-mechanical diffusion is slower than the classical one. In other words,
quantum-mechanical interference effects reduce the Drude (Boltzmann) conductivity.
This remarkable result follows from the interference of the clock-wise and anti-clock-wise
trajectories, that begin and end roughly (up to order λF) at the same point. This pair of trajectories is related to one another via time-reversal symmetry.
Estimate of the weak-localization correction to the conductivity
Let us estimate the quantum-mechanical correction to the Drude conductivity. We already
know from the discussion above that this change is negative. Furthermore, the change will
be proportional to the probability that during diffusion a closed path occurs at all. (This is
the probability for the trajectory to intersect itself during the diffusion.) We hence look at
a d-dimensional ‘tube’ of diameter λF. During an infinitesimal time dt the diffusing particle will move a distance vF dt and so it will cover a volume given by dV ≃ λF^{d−1} vF dt. On the other hand, the maximal attainable volume is given by Eq. (2.3) above. It follows that the
probability for a particle to stay in the closed tube is given by the ratio of these two volumes.
The relative change in the conductivity, caused by interference, is hence
δσ/σD ≃ −vF λF^{d−1} ∫_τ^{τφ} dt/(Dt)^{d/2} .   (2.7)
We have put the lower limit of the integration at τ, as one cannot discuss diffusion processes on time scales shorter than the mean free time between elastic collisions; the upper limit of the integration is τφ because at longer times phase coherence is lost, and quantum interference is no longer relevant.
Working out the integral in Eq. (2.7), using λF = kF^{−1} and introducing the mean free path, ℓ = vFτ, we find
δσ/σD ≃ −(kF ℓ)^{1−d} ∫_1^{τφ/τ} dx/x^{d/2} ≃ − { (1/(kF ℓ)²) [ 1 − (τ/τφ)^{1/2} ] ,   d = 3 ,
                                               { (1/(kF ℓ)) ln(τφ/τ) ,             d = 2 ,
                                               { (τφ/τ)^{1/2} − 1 ,                d = 1 .   (2.8)
As we have discussed above, the phase-coherence time, τφ, diverges at zero temperature. In three dimensions, the weak-localization correction is negligible. It is proportional to 1/(kF ℓ)² ≪ 1 [see Eq. (2.8)], and is almost temperature-independent, since (at low temperatures) τφ ≫ τ. This is not the case for one and two dimensions. Firstly, we see that the weak-localization correction is larger in two dimensions as compared to three, and does not have any small parameter at all in one dimension; secondly, in both one and two dimensions it diverges as the temperature tends to zero. Making the reasonable assumption that
1/τφ ≃ T^p ,   (2.9)
we find that in two dimensions the correction to the conductivity diverges logarithmically with the temperature T as the latter is decreased.
The fact that the correction to the conductivity diverges (in dimensions lower than three) means that our Drude picture is not adequate in two dimensions and below. In fact, at zero temperature, a two-dimensional system is insulating.
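To make the dimension dependence of Eq. (2.8) concrete, the sketch below (an added illustration; the values kFℓ = 100 and τφ/τ = 10⁴ are assumed) evaluates the integral numerically, reproducing the pattern just discussed: a tiny correction in d = 3, a logarithmic one in d = 2, and a parametrically large one in d = 1, where the perturbative treatment indeed breaks down:

```python
import math

kFl, ratio = 100.0, 1.0e4     # assumed values of kF*l and tau_phi/tau

def wl_correction(d, n=300_000):
    """|delta sigma| / sigma_D  from Eq. (2.8):
    (kF l)^{1-d} * integral_1^{tau_phi/tau} dx / x^{d/2}, midpoint rule."""
    dx = (ratio - 1.0) / n
    integral = sum(dx / ((1.0 + (i + 0.5) * dx) ** (d / 2.0)) for i in range(n))
    return kFl ** (1 - d) * integral

d3, d2, d1 = wl_correction(3), wl_correction(2), wl_correction(1)
print(d3, d2, d1)
```

With these numbers the d = 3 correction is of order 10⁻⁴, the d = 2 one is ln(τφ/τ)/(kFℓ) of order 10⁻¹, and the d = 1 one is of order (τφ/τ)^{1/2} ≫ 1.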
Size-dependence of weak localization corrections
Our discussion above pertains to an unbounded system. The diffusing electron maintains its phase coherence during a time τφ, and therefore traverses a distance ℓφ ≃ (Dτφ)^{1/2} before it loses its phase. The previous discussion is thus valid as long as the size of the system, denoted L, is such that L ≫ ℓφ. However, this condition depends on the temperature, since ℓφ increases as the temperature is decreased. It is hence conceivable that the condition will break down at a certain temperature, and the correction to the conductivity will then depend on the size of the system!
In order to study this effect, we return to Eq. (2.7) and put it into the form
δσ/σD ≃ −(vF λF^{d−1}/D) ∫_ℓ^{ℓφ} dx (x/x^d) ≃ −(1/(kF ℓ)) kF^{2−d} ∫_ℓ^{ℓφ} dx x^{1−d} .   (2.10)
But note that the upper limit of the integration here should be replaced by L once the
phase-coherence length ℓφ becomes longer than the system size.
Performing the integration in two dimensions (the relevant case, for obvious reasons), we obtain
δσ/σD ≃ −(1/(kF ℓ)) × { ln(ℓφ/ℓ) ,   for L > ℓφ ,
                      { ln(L/ℓ) ,    for L < ℓφ .   (2.11)
It follows that, at low enough temperature, the conductivity of a two-dimensional sample depends on its size. For future purposes, we write down the result in one and three dimensions as well,
δσ/σD ≃ − { (1/(kF ℓ)²) (1 − ℓ/ℓφ) ,   d = 3 ,
          { ℓφ/ℓ − 1 ,                 d = 1 .   (2.12)
Again, for L < ℓ_φ, we need to replace ℓ_φ in the expressions above by the size of the system, L.
Note that at zero temperature ℓ_φ is always longer than the size of the system. Using the explicit form for the Drude conductivity,
σ_D = N e² τ/m ≃ e² ℓ k_F^{d−1} ,   (2.13)
we can summarize the size dependence of the conductivity, due to weak-localization corrections, in the form

−δσ ≃ e² × { 1/ℓ − 1/L ,  d = 3 ;   ln(L/ℓ) ,  d = 2 ;   L − ℓ ,  d = 1 } .   (2.14)
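A quick numerical reading of Eq. (2.14) (lengths in units of the mean free path; O(1) prefactors are dropped, so only trends are meaningful): the d = 3 correction saturates at 1/ℓ as the size L grows, while in d = 2 and d = 1 it grows without bound.

```python
import math

def minus_delta_sigma(L, l, d):
    """-delta_sigma / e^2 from Eq. (2.14); L is the system size, l the mean
    free path (L > l assumed; for L > l_phi, L should be replaced by l_phi)."""
    if d == 3:
        return 1.0/l - 1.0/L
    if d == 2:
        return math.log(L / l)
    if d == 1:
        return L - l
    raise ValueError("d must be 1, 2 or 3")

print(minus_delta_sigma(1e3, 1.0, 3))   # -> 0.999 (saturates at 1/l)
print(minus_delta_sigma(1e3, 1.0, 2))   # -> ln(1000) ≈ 6.91
print(minus_delta_sigma(1e3, 1.0, 1))   # -> 999.0 (grows linearly)
```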
The effect of a magnetic field on weak localization corrections
As we have seen, weak-localization corrections originate from quantum coherence, i.e., from
the ‘wavy’ character of the electron that allows it to interfere with itself. The interference is
constructive because the state with k and that of −k are degenerate, due to time-reversal
symmetry. Magnetic fields break this symmetry, and degrade this constructive interference.
Let us ignore for the time-being the coupling of the magnetic field to the spin of the electron,
namely, neglect the Zeeman interaction. Then the magnetic field just modifies the electron
velocity. Introducing the vector potential A, such that the magnetic field, B, is given by
B = ∇ × A, the Hamiltonian of the electron is

H = (1/2m) (p − (e/c) A)² + V(r) ,   (2.15)
where V(r) is the potential energy. The Schrödinger equation is thus

H Ψ(r) = E Ψ(r) .   (2.16)

We can perform a (formal) gauge transformation on the wave function,

Ψ(r) = exp[(ie/c) ∫^r dℓ · A] Ψ̃(r) .   (2.17)
The Schrödinger equation satisfied by the wave function Ψ̃ is then the same as in the absence of the vector potential,

[p²/(2m) + V(r)] Ψ̃(r) = E Ψ̃(r) .   (2.18)
We may interpret the results (2.17) and (2.18) as saying that, in the presence of a (constant) magnetic field, all that happens is that the wave function (at site r) accumulates a phase factor, given by the line integral

(ie/c) ∫^r dℓ · A ,   (2.19)

which starts at an arbitrary point and ends at r. This phase factor is related to the partial magnetic flux accumulated along the path starting at the arbitrary point and ending at r.
This observation is usually not so helpful for a practical solution, except when the electron is confined to move along one-dimensional trajectories. Inspecting Eq. (2.19), we see that the phase is the flux of the magnetic field (times 2π) divided by

Φ_0 = 2πc/e ,   (2.20)

the flux quantum. The above result is a manifestation of the Aharonov-Bohm effect, and the phase is sometimes referred to as the Aharonov-Bohm phase.
∗ ∗ ∗Exercise. Find the wave functions and the energy spectrum of an electron confined to move on a one-dimensional ring of radius R, penetrated by a magnetic field B directed perpendicularly to the plane of the ring. In particular, draw the eigenenergies as a function of the magnetic field.
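For orientation, a numerical sketch of the answer one expects for this exercise (an assumed closed form, in units where ħ²/(2mR²) = 1: plane waves e^{inθ} give E_n = (n − Φ/Φ_0)²; this is not derived in the text):

```python
def ring_levels(phi_over_phi0, nmax=3):
    """Eigenenergies of an electron on a ring threaded by a flux Phi, in
    units where hbar^2/(2 m R^2) = 1: plane waves exp(i n theta) give
    E_n = (n - Phi/Phi0)^2, n integer.  (Assumed form sketching the
    exercise's expected answer.)"""
    return sorted((n - phi_over_phi0) ** 2 for n in range(-nmax, nmax + 1))

# The low-lying spectrum is periodic in the flux with period Phi0,
# and the ground-state energy oscillates, vanishing at integer flux:
print(ring_levels(0.25)[:4] == ring_levels(1.25)[:4])   # -> True
print(ring_levels(0.0)[0], ring_levels(0.5)[0])         # -> 0.0 0.25
```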
Let us now return to our discussion of the quantum probability to return to the origin versus the classical one, and in particular to the two time-reversed paths that start and end at the same point. In the presence of a magnetic field, each of these trajectories acquires a phase factor. In fact, since each of these paths starts and ends at the same point, the phase will simply be 2π times the magnetic flux contained within the path [in units of the flux quantum, Eq. (2.20)],

Φ = S B ,   (2.21)

where S is the area enclosed by the path. However, since one of these paths runs clock-wise while the other runs anti-clock-wise, the phases of the two will be equal in magnitude but opposite in sign. The clock-wise amplitude becomes
A → A_1 = A e^{i2πΦ/Φ_0} ,   (2.22)

while the anti-clock-wise one is now

A → A_2 = A e^{−i2πΦ/Φ_0} .   (2.23)
The total probability to return to the origin coming from this pair is hence

|A_1 + A_2|² = 4A² cos²(2πΦ/Φ_0) = 2A² [1 + cos(4πΦ/Φ_0)] .   (2.24)
It follows that, since the magnetic field causes some destructive interference, the quantum probability to return to the origin is reduced, and as a result the quantum suppression of the diffusion is weakened. In other words, the magnetic field reduces the weak-localization correction; this is called 'anti-weak-localization'. Moreover, we see that the effect of the magnetic field enters via a periodic function: when measured as a function of the magnetic field, the effect oscillates as the field is increased, with a period determined by the area encompassed by the time-reversed paths.
We can estimate the anti-weak-localization correction due to a magnetic field as follows. The area encompassed by the diffusing electron during a time t is about Dt. Therefore the change in the probability entering the integrand of the return-probability integral is 2[1 − cos(4πBDt/Φ_0)]. (Remember that in the absence of the field we had just 4|A|².) It follows that the change in the relative weak-localization correction to the conductivity is

∆σ(B)/σ_D ≃ v_F λ_F^{d−1} ∫_τ^{τ_φ} [dt/(Dt)^{d/2}] [1 − cos(4πBDt/Φ_0)] .   (2.25)
Let us work out the outcome of this expression for two dimensions. We denote

x ≡ 4πB D τ_φ/Φ_0 .   (2.26)
Then,

∆σ(B)/σ_D ≃ (1/(k_F ℓ)) ∫_{τ/τ_φ}^{1} (dy/y) (1 − cos xy) ≃ (1/(k_F ℓ)) ∫_0^{x} (dy/y) (1 − cos y) .   (2.27)
(Note that τ/τ_φ is a very small number at the low temperatures to which our considerations are confined.) We see that for x ≪ 1 the cosine may be expanded, while for x ≫ 1 its contribution to the integral averages to zero. Hence we find that for weak magnetic fields ∆σ(B) ∝ B², while for relatively strong ones it shows a logarithmic dependence on the field.
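The two limits can be checked numerically. The sketch below evaluates F(x) = ∫_0^x (1 − cos y) dy/y from Eq. (2.27) by a simple midpoint rule (the quadrature parameters are arbitrary choices):

```python
import math

def F(x, n=20000):
    """F(x) = integral_0^x (1 - cos y)/y dy by the midpoint rule; x plays
    the role of the (scaled) magnetic field in Eq. (2.27)."""
    h = x / n
    return h * sum((1.0 - math.cos((k + 0.5)*h)) / ((k + 0.5)*h) for k in range(n))

print(F(0.01) / 0.01**2)      # -> ≈ 0.25, i.e. F ≈ x²/4: ∆σ(B) ∝ B² at weak field
print(F(100.0) - F(10.0))     # -> ≈ 2.26 ≈ ln 10: logarithmic growth at strong field
```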
3. THE ANDERSON LOCALIZATION: THE METAL-INSULATOR TRANSITION
Dimensional considerations and the Thouless argument
Let us consider a disordered system of size L. This means that the system is a cube in three
dimensions, a flat square in two dimensions, etc. The first question we ask is what are the
dimensions of the conductance of the system. To this end, we use Ohm’s law, which tells us
that the conductance, G, (the inverse of the resistance) is given by the current divided by
the voltage,

G = I/V = (eN/t)/V = e² × N/(t × eV) = e²/(energy × time) = e²/(Planck constant) .   (3.1)
Namely, the conductance of a system is just the quantum unit of the conductance (we will
use just e2 /2π), times a number.
Next we derive the relation between the Drude conductivity (which does not depend on
the size of the system in the classical limit) and the conductance of a d-dimensional system
of size L. By Ohm’s law, the conductivity is just the ratio between the current density and
the electric field. At three dimensions,
σ_D = j × (1/E) = (I/L²) × (1/(V/L)) = (I/V) × (1/L) = G/L .   (3.2)

At two dimensions,

σ_D = j × (1/E) = (I/L) × (1/(V/L)) = I/V = G .   (3.3)
Hence we have in general
G = σ_D L^{d−2} .   (3.4)
(At one dimension the current density becomes the current.)
Finally we combine the two observations, using Eq. (2.13) in Eq. (3.4),

G = σ_D L^{d−2} = e² (N τ/m) L^{d−2} = e² (D/L²) (N L^d/(m v_F²)) .   (3.5)
(3.5)
Here, the first factor, e2 , is just the quantum unit of the conductance, see Eq. (??). In the
second term appears 1/time, where the relevant time is given by L2 /D. This is the time the
21
diffusing electron needs in order to travel the full length of the system. This time is called
the Thouless time, and it measures the amount of sensitivity to boundary conditions. This
is so since had we changed the boundary conditions, the electrons within the sample would
feel that change only after the time required for them to travel to those boundaries. From
these considerations we see that the inverse of that time measures roughly the uncertainty
of the electronic energy levels (the inverse of time is energy in our units). Namely, the
determination of the energy spectrum depends on the assumed boundary conditions. When
those boundary conditions are changed, so will also the spectrum, and the amount of change
is roughly the inverse of the Thouless time. Finally, the third term in Eq. (??) is the total
number of the electrons in our system, N Ld , divided by the Fermi energy, which is just
the total density of states at the Fermi energy. In other words, the inverse of this factor is
roughly the spacing between the energy levels which are about the Fermi energy.
The above discussion indicates that the conductance of a system of size L, G_L, can be expressed as a ratio of two energy scales (times the quantum unit of the conductance),

G_L ≃ e² W_L/∆_L ,   (3.6)

where ∆_L is the spacing among the energy levels (of a system of size L) located near the Fermi energy, and W_L is the 'uncertainty' in the energy levels, caused by the 'uncertainty' in the boundary conditions,

∆_L = 1/(N(0) L^d) ,   W_L = D/L² .   (3.7)
This expression is due to Thouless.
Now imagine that we combine together systems of size L in order to create a system of size
2L. (In three dimensions, we need 8 cubes of size L in order to create a cube of size 2L,
and so forth.) The question we ask is: will the conductance of the larger system, G2L , be
larger or smaller than the conductance of the L−size system, GL ? As argued by Thouless,
the answer depends on the ratio WL /∆L , or equivalently, on the conductance of the cube of
size L.
Let us consider what happens to the energy spectrum and the wave functions upon combining
several ‘cubes’ of size L to form a bigger ‘cube’ of size 2L. If WL > ∆L , then the energy
spectrum of the combined cubes will merge together nicely. If, on the other hand, WL < ∆L ,
then there will be just a small amount of overlap of the separate spectra of the smaller cubes.
When the energy spectra of the smaller cubes merge together ‘smoothly’, then it will be
‘easier’ for an electron to go from one smaller cube to the other; in other words, the wave
functions of the larger cube will extend over the entire 2L cube. In such a case, we will
expect G2L to be significant, at least as large as GL . On the other hand, when the wave
functions of the smaller cubes do not overlap appreciably, then the electron will be stuck
in the smaller cube and will not move around to another smaller cube. In that case, the
conductance of the larger cube will be smaller than the one of the smaller cube.
We can summarize the above discussion as follows. Firstly, we define the dimensionless conductance, g, to be the conductance divided by the quantum unit of the conductance. According to the Thouless picture,

g_L = W_L/∆_L .   (3.8)

Then we say that if g_L > 1, then g_2L is also larger than unity, and vice versa. In other words, the conductance at a certain length scale depends on the conductance at the smaller length scale alone.
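A minimal numerical sketch of Eqs. (3.7) and (3.8), with the diffusion constant D and the density of states N(0) set to 1 for illustration:

```python
def g_thouless(L, d, D=1.0, dos=1.0):
    """Dimensionless Thouless conductance g_L = W_L / Delta_L, with
    W_L = D/L**2 (inverse Thouless time) and Delta_L = 1/(dos * L**d)
    (level spacing), as in Eqs. (3.7)-(3.8)."""
    W = D / L**2
    Delta = 1.0 / (dos * L**d)
    return W / Delta

# Doubling the size multiplies g by 2**(d-2):
for d in (1, 2, 3):
    print(d, g_thouless(2.0, d) / g_thouless(1.0, d))   # -> 0.5, 1.0, 2.0
```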
Scaling theory
When the wave functions are extended over the entire system, the conductance is necessarily
good. This is the metallic regime. In the extreme metallic regime, where we can ignore the
quantum-mechanical corrections discussed above, the conductivity does not depend on the
system size, while the conductance does. In particular, the dimensionless conductance, g,
depends on the size of the system as

g ∝ L^{d−2} .   (3.9)
Adding the relevant parameters,

g ≃ { L k_F² ℓ ,  d = 3 ;   k_F ℓ ,  d = 2 ;   ℓ/L ,  d = 1 } .   (3.10)
On the other hand, when the wave functions are well localized so that the electron is stuck
at a certain region in the sample and does not move all around it, then the conductance is
'bad'. In other words, the system is an insulator. Making the plausible assumption that the wave functions decay exponentially with distance, with a typical decay length, the localization length ξ, we then expect the conductance to decay exponentially as well,
g ∝ e−L/ξ .
(3.11)
(We do not assign an energy dependence to the localization length ξ since our discussion is
confined to electrons at about the Fermi energy, and therefore we consider only the wave
functions belonging to energies there.)
The results (3.10) and (3.11) give the dimensionless conductance in the extreme metallic limit. But, in fact, we have already worked out the corrections to this result. Indeed, using Eq. (2.14) (noting that e² there just carries the dimensions of the conductance in our units) we have

δg = { 1 − L/ℓ ,  d = 3 ;   ln(ℓ/L) ,  d = 2 ;   ℓ/L − 1 ,  d = 1 } .   (3.12)
Let us now see how we can combine the result of the Thouless argument with Eqs. (3.10), (3.11), and (3.12). Since the dimensionless conductance at length scale bL, where b is a multiplicative factor, depends on the dimensionless conductance at scale L, we define the rate at which the dimensionless conductance changes as the length scale changes,

β(g) = (L/g) dg/dL .   (3.13)

In fact, Eq. (3.13) is the logarithmic derivative of the dimensionless conductance, d ln g/d ln L. (Taking the logarithmic derivative is helpful in eliminating the microscopic scales like ℓ or ξ.)
In the insulating regime we have

β(g) = (L/g) × (−g/ξ) = −L/ξ = ln g ,   (3.14)

in all dimensions.
In the metallic regime, using Eqs. (3.10) and (3.12), we find

β = (L/g) (ℓk_F² − 1/ℓ) = 1 − 1/g ,   d = 3 ,
β = (L/g) (−1/L) = −1/g ,   d = 2 ,
β = −(L/g) (2ℓ/L²) = −1 − 1/g ,   d = 1 ,

→ β(g) = d − 2 − 1/g .   (3.15)
We see that in one and two dimensions, the beta-function as a function of g (or equivalently, as a function of ln g) is always negative. This means that independently of the size of the sample we begin with, the dimensionless conductance decreases as the sample becomes bigger; namely, the system is an insulator. The conclusion is that at zero temperature, all (infinite) systems in two dimensions and less are always insulators.
The situation is different in three dimensions. There the beta-function is positive at relatively large g and negative at smaller ones. This implies that the (infinite) system (at zero temperature) is a conductor for values of g for which the beta-function is positive, and an insulator for values of g for which the beta-function is negative. In particular there is a point, g = g_c, at which the beta-function vanishes. This point is called a 'fixed point', since the dimensionless conductance g does not change when the size of the system is changed. This is the point of the Anderson metal-insulator transition.
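The flow implied by Eq. (3.15) can be integrated numerically. In this sketch (Euler steps; the step size and cutoffs are arbitrary illustrative choices), the d = 2 flow always heads toward the insulator, while in d = 3 the fixed point g_c = 1 separates metallic from insulating flows:

```python
def flow(g0, d, steps=4000, dlnL=0.01):
    """Euler-integrate d(ln g)/d(ln L) = beta(g) = d - 2 - 1/g (Eq. 3.15,
    the metallic form); stop once g leaves the regime of validity."""
    g = g0
    for _ in range(steps):
        g += g * (d - 2 - 1.0/g) * dlnL
        if not 1e-3 < g < 1e6:
            break
    return g

# d=2: beta < 0 for every g, so even a good conductor scales toward an insulator:
print(flow(50.0, 2))        # -> ≈ 10, and it keeps decreasing with more steps
# d=3: g_c = 1 separates growing (metallic) from shrinking (insulating) flows:
print(flow(2.0, 3) > 2.0, flow(0.5, 3) < 0.5)   # -> True True
```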
In the vicinity of the fixed point the beta-function can be approximated as

β = (g − g_c)/(ν g_c) ,   (3.16)

where 1/ν is the slope of the beta-function (as a function of ln g) at g = g_c. By integrating Eq. (3.16) in the vicinity of the fixed point, we find

(L/L_0)^{1/ν} = [(g − g_c)/g] × [g_0/(g_0 − g_c)] ≃ (g − g_c)/(g_0 − g_c) ,   (3.17)
where g0 is the dimensionless conductance of the system at size L0 . (Note that both g and
g_0 are by definition close to g_c.) In particular, we may define the parameter

t = (g_0 − g_c)/g_c ,   (3.18)
which measures the disorder in the system relative to the corresponding critical value. The
parameter t is positive on the metallic (i.e., conducting) side of the transition, and is negative
on the insulating side. One can associate a ‘correlation’ length for this transition, ξc , which
is given by (in dimensionless units)
ξ_c ≃ |t|^{−ν} .   (3.19)
The correlation length measures the extent of the localization. On the insulating side, we find from Eq. (3.17)

g/g_c ≃ 1 − (L/ξ_c)^{1/ν} .   (3.20)

Namely, a small insulating system, whose size L is less than ξ_c, has a finite conductance; but when the size of the insulating system reaches about ξ_c, its conductance vanishes.
∗∗∗Exercise. Scaling theory of a two-dimensional metal-insulator transition [V. Dobrosavljević, E. Abrahams, E. Miranda, and S. Chakravarty, Phys. Rev. Lett. 79, 455 (1997)]: Assume the following form for the β-function,

β(g) ≡ d ln g/d ln L = (d − 2) + A/g^α ,

where d is the dimensionality of the system, g is its dimensionless conductance, measured in units of 2e²/h, L is the system size, A is a positive constant, and the exponent α is assumed to be positive. This form is valid for large conductances g. For very small g, we assume that the system is localized, and the conductance decays exponentially with the system size. (1) Discuss the possibility of a metal-insulator transition at two dimensions, at zero temperature. (2) Assume that at finite temperatures, the longest size of the system for which the scaling theory holds is temperature-dependent,

L_φ(T) ≃ T^{−1/p} .

Discuss the temperature dependence of the conductance.
4. TRANSPORT BY LOCALIZED ELECTRONS: PHONON-ASSISTED HOPPING
Hopping insulator
A ubiquitous picture for an Anderson insulator is the following. We imagine that the relevant
electronic wave functions (those belonging to energies at the vicinity of the Fermi level) are
localized around random ‘sites’ in the system. Thus, the wave function localized around the
i-site, whose radius-vector is Ri , is given (approximately) by
Ψ(r) ≃ e^{−|r−R_i|/ξ} ,   (4.1)
(4.1)
where ξ is the localization length (for wave functions of energy at about the Fermi energy).
Let us measure energies from the Fermi level. Then, when the electron resides on site i [namely, its wave function has the form (4.1)], it has the energy ε_i, and that energy can be either positive (namely, above the Fermi energy) or negative. The probability that an electron resides at site i is obviously temperature-dependent, given by the Fermi distribution,

f(ε_i) = 1/(e^{βε_i} + 1) .   (4.2)

(Remember that we measure energies from the Fermi energy.)
Although the wave function belonging to site i is taken to be rather small at any other site, say site j, it does not vanish there. Therefore the electron has a certain probability amplitude to tunnel from site i to site j and vice versa. This probability amplitude, denoted J_ij, is essentially

J_ij ≃ A e^{−|R_i − R_j|/ξ} ,   (4.3)

where A has dimensions of energy. (This is because the probability amplitude to tunnel from site i to site j is given by the overlap integral of the wave function localized at i, the wave function localized at j, and the potential acting on the electron.)
If the sites i and j are very far from one another (relative to the localization length), this probability amplitude is exponentially small. However, one may still wonder whether it is possible to construct a linear combination of the localized wave functions which is extended over the sample. The condition that this does not occur is that J_ij ≪ |ε_i − ε_j|. In other words, the 'mismatch' in the (random) electronic energies cannot be compensated by the overlap J_ij of two wave functions.
Phonon-assisted hopping
Let us now explore the possibility that the electron actually goes from site i to site j. Since the energy it has at site i, ε_i, is different from the one it has at site j, ε_j, we need a phonon (or phonons) to compensate for the energy difference.
Let us assume that the electronic energies are such that ε_i − ε_j > 0. Then the electron emits a phonon (of energy ε_i − ε_j) while hopping from site i to site j. This does not mean that the process is temperature-independent and always occurs. The temperature dependence of the probability for this process to occur is given by

f(ε_i)[1 − f(ε_j)][1 + N(ε_i − ε_j)] ,   ε_i − ε_j > 0 ,   (4.4)
where N is the Bose distribution,

N(ω) = 1/(e^{βω} − 1) .   (4.5)
The first factor in Eq. (4.4), f(ε_i) [see Eq. (4.2)], gives the probability that there is an electron at site i. Similarly, the second factor, 1 − f(ε_j), is the probability that site j is empty so that the electron can go there (we ignore the spin degree of freedom). Finally, the third factor in Eq. (4.4) is the probability for an 'extra' phonon of energy ε_i − ε_j > 0 to be created.
In a similar fashion we can consider the temperature dependence of the probability to go from site i to site j when the energy of site i is smaller than that of site j, i.e., ε_i − ε_j < 0. All that we have to change in Eq. (4.4) is the last factor, since now we need the probability that there is already a phonon of energy ε_j − ε_i available to be absorbed in the process. Thus,

f(ε_i)[1 − f(ε_j)] N(ε_j − ε_i) ,   ε_i − ε_j < 0 .   (4.6)
The reason that we consider only one phonon which is either absorbed or emitted (and
not several phonons) is that eventually, we will consider very low temperatures, where the
cross-section for multiple phonon processes is very small.
We next examine the temperature dependence of the probabilities, Eqs. (4.4) and (4.6). To this end we write down the limiting behaviors of the Bose and Fermi functions at very low temperatures. We have

N(ω) → e^{−βω} ,   (4.7)

(since the argument of the Bose function is always positive), and

f(ε) → { 1 ,  ε < 0 ;   e^{−βε} ,  ε > 0 } .   (4.8)
Inspecting Eqs. (4.4) and (4.6), we find

Eq. (4.4):   ε_i > ε_j > 0 : e^{−βε_i} ;    ε_i > 0 > ε_j : e^{−β(ε_i + |ε_j|)} ;    0 > ε_i > ε_j : e^{−β|ε_j|} .
Eq. (4.6):   ε_j > ε_i > 0 : e^{−βε_j} ;    ε_j > 0 > ε_i : e^{−β(|ε_i| + ε_j)} ;    0 > ε_j > ε_i : e^{−β|ε_i|} .

Interestingly enough, all the cases in the table can be put together in the form

e^{−(β/2)(|ε_i| + |ε_j| + |ε_i − ε_j|)} .   (4.9)
(4.9)
(One notes that the probability tends to zero as the temperature tends to zero.) Therefore,
we may write down the transition probability per unit time for the electron to hop from site
i to site j as
β
Tij ∝ e−|Ri −Rj |/ξ e− 2 (|i |+|j |+|i −j |) .
(4.10)
Mott Variable-range hopping
The two exponential factors in Eq. (4.10) represent two competing effects. Clearly, the first
term, the tunnelling overlap, is larger when the two sites are closer. But in the strongly
localized system, the mismatch of the electronic energies will then be large. The farther the
electron hops, the better is its chance to find a site with energy close to the initial site energy.
Therefore, from the point of view of the second term, hopping to faraway sites is favorable.
It is clear from the discussion above that the electron will choose some ‘intermediate’ option
to hop to. It turns out that it hops a distance which varies with the temperature (hence the
name ‘variable-range hopping’).
Our aim is to find the conductance of the insulator. To this end, we identify the conductance of the 'bond' between sites i and j, G_ij, by [see Eq. (4.10)]

G_ij = e² γ e^{−R_ij/ξ} e^{−(β/2)(|ε_i| + |ε_j| + |ε_i − ε_j|)} ,   R_ij ≡ |R_i − R_j| ,   (4.11)
where γ includes all the factors coming from the electron-phonon coupling matrix elements, etc. It is useful to think of this model as a network of randomly distributed sites linked to one another by the conductances G_ij. In general, any site on the network is linked to many other sites, but the appreciably large conductances come only from near neighbors in the four-dimensional position-energy space. This is because the individual conductances vary over many orders of magnitude. The characteristic conductance of the system is given by the critical percolation conductance G_c, defined such that the subset of conductors with G_ij > G_c contains a connected network which spans the entire system.
Using Eq. (4.11), the condition G_ij > G_c can be put in the form

R_ij/ξ + (|ε_i| + |ε_j| + |ε_i − ε_j|)/(2k_B T) < ln(e²γ/G_c) .   (4.12)
Changing to dimensionless quantities, with

R_m = ξ ln(e²γ/G_c) ,   E_m = k_B T ln(e²γ/G_c) ,   (4.13)
this condition becomes

R_ij/R_m + (|ε_i| + |ε_j| + |ε_i − ε_j|)/(2E_m) < 1 .   (4.14)
Since the total number of sites per unit volume having energies less than E_m is given by

n ≃ N(0) E_m ,   (4.15)

(N(0) is the density of states per unit volume at the Fermi energy), the criterion (4.14) will be satisfied for R_m³ n ≃ c, where c is a number of order unity. This means [see Eq. (4.13)]

ln⁴(e²γ/G_c) ξ³ N(0) k_B T ≃ c .   (4.16)
It is now useful to define the energy scale

k_B T_0 ≃ 1/(ξ³ N(0)) ,   (4.17)

which gives the energy spacing between two energy levels localized within the same localization volume. Then

G_c = e² γ e^{−(T_0/T)^{1/4}} .   (4.18)
The conductance vanishes at zero temperature: it is phonon-assisted, and therefore requires real electron-phonon processes (as opposed to virtual ones). It increases as the temperature increases, but not like a simple exponential. We also find that the typical hopping distance, R_m, is temperature-dependent,

R_m ≃ ξ (T_0/T)^{1/4} .   (4.19)

The typical hopping distance decreases as the temperature is raised. (When it reduces to about the localization length, the theory is not valid anymore.) The lower the temperature, the farther the electron has to hop in order to encounter a site with energy close to its original energy.
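The T^{−1/4} law can also be seen by directly balancing the two competing exponents: minimize R/ξ + (typical mismatch)/k_B T over the hop length R, estimating the mismatch as 1/(N(0)R³). The grid-search sketch below (ξ = N(0) = 1, temperatures in arbitrary units) is only illustrative; the percolation argument above is the careful version.

```python
def optimal_hop(T, xi=1.0, dos=1.0):
    """Minimize R/xi + 1/(dos * R**3 * T) over the hop length R by a grid
    scan; the second term estimates the energy mismatch (in units of k_B)
    typically available within a radius R.  Returns the optimal R."""
    return min((0.01 * k for k in range(1, 100000)),
               key=lambda R: R/xi + 1.0/(dos * R**3 * T))

r1, r2 = optimal_hop(1e-2), optimal_hop(1e-2 / 16.0)
print(r2 / r1)   # -> ≈ 2: cooling by a factor of 16 doubles R_m, i.e. R_m ∝ T**(-1/4)
```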
∗ ∗ ∗Exercise. Find the hopping conductance of a two-dimensional insulator.
∗ ∗ ∗Exercise. The above derivation, in particular Eq. (4.15), is based on the assumption that the density of states near the Fermi energy does not vary with the energy and can be taken as a constant. Discuss the hopping conductance in the case where it is a function of the energy, for example, N(ε) ≃ N(0)(ε/E_0)^p. (Note that this implies that there are no states at the Fermi energy!)
5. LINEAR RESPONSE AND THE FLUCTUATION-DISSIPATION THEOREM
Introductory considerations
Instead of studying time evolution according to the Schrödinger equation (for the wave
functions), we shift the time-evolution into the operators. This is permitted since ultimately
we are interested in thermal averages, which, in turn, involve matrix elements. Therefore
⟨φ_α(t)|A|φ_α′(t)⟩ = ⟨φ_α e^{iHt}|A|e^{−iHt} φ_α′⟩ = ⟨φ_α|e^{iHt} A e^{−iHt}|φ_α′⟩ ≡ ⟨φ_α|A_H(t)|φ_α′⟩ ,   (5.1)
where the subscript “H” denotes the Heisenberg picture, in which the operator A evolves
in time according to the full Hamiltonian H. We now write the Hamiltonian in the form
H = H0 +H1 . The Hamiltonian of the system before it is perturbed is H0 (and it is implicitly
assumed that we know its full spectrum). We split the time evolution such that
A_H(t) = e^{iHt} A e^{−iHt} = [e^{iHt} e^{−iH_0 t}] [e^{iH_0 t} A e^{−iH_0 t}] [e^{iH_0 t} e^{−iHt}] ≡ U†(t) A_I(t) U(t) ,   (5.2)
where the subscript “I” denotes the interaction picture. (This subscript is usually omitted
for brevity.) Since in general H0 does not commute with H1 , it is not simple to find U (t).
However,

i dU/dt = e^{iH_0 t} (−H_0 + H) e^{−iHt} = [e^{iH_0 t} H_1 e^{−iH_0 t}] [e^{iH_0 t} e^{−iHt}] ≡ H_{1,I}(t) U(t) .   (5.3)
Luckily, linear response theory requires the time-evolution operator only to first order in the perturbing Hamiltonian H_1, and therefore we can solve the differential equation (5.3). We need the initial value of U. To this end, we imagine the perturbation Hamiltonian H_1 to be turned on adiabatically at an initial time, which is customarily taken to be t = −∞. This is accomplished formally by multiplying H_1 by e^{ηt}, where at the end of the calculation we take the limit η → 0⁺. With this adiabatic factor, the Hamiltonian of the system at the initial time is just H_0, and therefore U(−∞) = 1. We now integrate both sides of Eq. (5.3) from −∞ up to time t, to obtain

U(t) = 1 − i ∫_{−∞}^{t} dt′ H_{1,I}(t′) .   (5.4)
Inserting this approximate solution into Eq. (5.2), we finally find the time evolution of the operator A with the full Hamiltonian (to linear order in H_1),

A_H(t) = [1 + i ∫_{−∞}^{t} dt′ H_{1,I}(t′)] A_I(t) [1 − i ∫_{−∞}^{t} dt′ H_{1,I}(t′)]
       = A_I(t) − i ∫_{−∞}^{t} dt′ [A_I(t), H_{1,I}(t′)] .   (5.5)
Formal presentation of linear response
Suppose we have a system whose Hamiltonian is denoted H_0. When an external force, F(t), is applied to the system, it couples to one of the system's variables (= operators), which we denote by A. Then the full Hamiltonian is
H = H0 − F (t)A .
(5.6)
Namely, the perturbing Hamiltonian is just −F(t)A, with the adiabatic factor e^{ηt} kept implicit. We wish to find the thermal average of the observable, namely, we seek ⟨A(t)⟩. (Comment: we could have looked for the thermal average of another observable, B, in a similar fashion.) Let us also confine ourselves to operators A whose thermal average vanishes in the absence of the driving force (for example, there is no current in the absence of a voltage drop, and no magnetic moment in a nonmagnetic system in the absence of a magnetic field). Then, according to Eq. (5.5),

⟨A_H(t)⟩ = i ∫_{−∞}^{t} dt′ F(t′) ⟨[A_I(t), A_I(t′)]⟩ .   (5.7)
This result can be presented in the form

⟨A_H(t)⟩ = ∫_{−∞}^{∞} dt′ F(t′) χ(t − t′) ,   (5.8)

where χ, the (generalized) susceptibility, is given by

χ(t − t′) = iΘ(t − t′) ⟨[A_I(t), A_I(t′)]⟩ .   (5.9)
From now on, we drop the subscripts H and I from our formulae. The susceptibility is a response function, since it 'remembers' the time direction: it is the response of the system at time t to the action of the field at a previous time t′. The susceptibility depends only on the time difference, since we consider a thermally averaged quantity.
Since the thermal average and the time dependences in Eq. (5.9) are with respect to the unperturbed Hamiltonian H_0, we may go ahead and try to find it explicitly. We denote the eigenfunctions and eigenvalues of H_0 by φ_α and ε_α, respectively.
The first step is to rewrite Eq. (5.8) in Fourier (temporal) transform, such that

⟨A(ω)⟩ = F(ω) χ(ω) .   (5.10)
This step is a bit tricky, so we do it in great detail. First, we go back to Eq. (5.7) and take its Fourier transform,

⟨A(ω)⟩ = ∫_{−∞}^{∞} dt e^{iωt} ∫_{−∞}^{t} dt′ F(t′) i⟨[A(t), A(t′)]⟩
       = ∫_{−∞}^{∞} dt e^{iωt} ∫_{−∞}^{t} dt′ ∫_{−∞}^{∞} (dω′/2π) F(ω′) e^{−iω′t′} e^{ηt′} i⟨[A(t), A(t′)]⟩ ,   (5.11)
where the factor e^{ηt′} has been inserted explicitly. Since the thermal average ⟨. . .⟩ depends solely on t − t′, we now change variables, putting t − t′ = τ. Then ⟨[A(t), A(t′)]⟩ = ⟨[A(τ), A]⟩, and we have
⟨A(ω)⟩ = ∫_{−∞}^{∞} (dω′/2π) F(ω′) ∫_{−∞}^{∞} dt e^{i(ω−ω′)t} [ i ∫_0^{∞} dτ e^{i(ω′+iη)τ} ⟨[A(τ), A]⟩ ]
       = F(ω) [ i ∫_0^{∞} dτ e^{i(ω+iη)τ} ⟨[A(τ), A]⟩ ] ,   (5.12)
so that the Fourier transform of the response function is

χ(ω) = i ∫_0^{∞} dτ e^{i(ω+iη)τ} ⟨[A(τ), A]⟩ .   (5.13)

Note that we have omitted the factor e^{ηt}: since the time t is not restricted, we may drop this term.
Next, we use the spectrum of the Hamiltonian H_0 to write the response function explicitly,

χ(ω) = i ∫_0^{∞} dτ e^{i(ω+iη)τ} (1/Z) Σ_{αγ} e^{−βε_α} [⟨φ_α|e^{iH_0τ} A e^{−iH_0τ}|φ_γ⟩⟨φ_γ|A|φ_α⟩ − ⟨φ_α|A|φ_γ⟩⟨φ_γ|e^{iH_0τ} A e^{−iH_0τ}|φ_α⟩] ,   Z ≡ Σ_α e^{−βε_α} .   (5.14)
Since e^{iH_0τ}|φ_α⟩ = e^{iε_ατ}|φ_α⟩, we have

χ(ω) = i ∫_0^{∞} dτ e^{i(ω+iη)τ} (1/Z) Σ_{αγ} e^{−βε_α} |A_{αγ}|² [e^{i(ε_α−ε_γ)τ} − e^{−i(ε_α−ε_γ)τ}] ,   (5.15)
where A_{αγ} ≡ ⟨φ_α|A|φ_γ⟩ is the matrix element of the operator A. We can now carry out the integration over τ, to obtain

χ(ω) = lim_{η→0⁺} (1/Z) Σ_{αγ} e^{−βε_α} |A_{αγ}|² [1/(−ω − iη + ε_γ − ε_α) − 1/(−ω − iη + ε_α − ε_γ)] .   (5.16)
Using the formula

lim_{η→0⁺} 1/(x − iη) = P(1/x) + iπδ(x) ,   (5.17)

where P denotes the principal part (meaning that the integration over 1/x skips the point x = 0), Eq. (5.16) gives the result

Im χ(ω) = (π/Z) Σ_{αγ} e^{−βε_α} |A_{αγ}|² [δ(ε_γ − ε_α − ω) − δ(ε_α − ε_γ − ω)] .   (5.18)
In order to understand the physical meaning of the relation (5.18), let us first make a little manipulation with the indices, interchanging α and γ in the second term, to rewrite it in the form

Im χ(ω) = π(1 − e^{−βω}) [ (1/Z) Σ_{αγ} e^{−βε_α} |A_{αγ}|² δ(ω + ε_α − ε_γ) ] .   (5.19)
Let us now show that the terms in [. . .] form a correlation function. Indeed, those terms are

(1/Z) Σ_{αγ} e^{−βε_α} |A_{αγ}|² δ(ω + ε_α − ε_γ)
   = (1/Z) Σ_{αγ} e^{−βε_α} |A_{αγ}|² ∫_{−∞}^{∞} (dt/2π) e^{i(ω+ε_α−ε_γ)t}
   = ∫_{−∞}^{∞} (dt/2π) e^{iωt} (1/Z) Σ_{αγ} e^{−βε_α} ⟨φ_α|e^{iε_αt} A e^{−iε_γt}|φ_γ⟩⟨φ_γ|A|φ_α⟩
   = ∫_{−∞}^{∞} (dt/2π) e^{iωt} (1/Z) Σ_{αγ} e^{−βε_α} ⟨φ_α|e^{iH_0t} A e^{−iH_0t}|φ_γ⟩⟨φ_γ|A|φ_α⟩
   = ∫_{−∞}^{∞} (dt/2π) e^{iωt} (1/Z) Σ_{αγ} e^{−βε_α} ⟨φ_α|A(t)|φ_γ⟩⟨φ_γ|A|φ_α⟩
   = ∫_{−∞}^{∞} (dt/2π) e^{iωt} (1/Z) Σ_α e^{−βε_α} ⟨φ_α|A(t)A|φ_α⟩ = (1/2π) ∫_{−∞}^{∞} dt e^{iωt} ⟨A(t)A⟩ .   (5.20)
Thus we see that the terms in the square brackets of Eq. (5.19) are just the Fourier transform of the correlation function ⟨A(t)A⟩ (divided by 2π). We thus arrive at the relation
Im χ_AA(ω) = [(1 − e^{−βω})/2] C_AA(ω) ,   (5.21)

which is the fluctuation-dissipation theorem: it relates the imaginary part of the susceptibility, which gives the response of the observable A to a field coupled to A, to the (Fourier transform of the) time-correlation function of the same observable.
In particular, the equal-time correlation function of the observable A is

⟨A²⟩ = ∫ (dω/2π) C_AA(ω) = ∫ (dω/π) Im χ_AA(ω)/(1 − e^{−βω}) .   (5.22)
In the high-temperature limit, this becomes

⟨A²⟩ = k_B T ∫ (dω/π) Im χ(ω)/ω .   (5.23)
Example: driven harmonic oscillator
Let us consider a harmonic oscillator in thermal equilibrium inside a viscous medium. The
equation of motion is

m ẍ + m ω_0² x + η ẋ = F(t) ,   (5.24)
where F(t) is the random force exerted on the oscillator by the thermal fluctuations, and η describes the damping due to the viscous medium. It is easy to solve this equation in Fourier transform. One finds

x(ω) = F(ω)/(m ω_0² − m ω² − iωη) .   (5.25)
In other words, x(ω) = χ(ω) F(ω), where the response function is

χ(ω) = 1/(m ω_0² − m ω² − iωη) .   (5.26)
Referring to Eq. (5.23), we now calculate

∫ (dω/π) Im χ(ω)/ω = 1/(m ω_0²) .   (5.27)
The fluctuation-dissipation theorem therefore tells us that at high temperature

⟨x²⟩ = k_B T/(m ω_0²) ,   (5.28)
which we know from the equipartition theorem. Let us now consider the correlation function of the coordinate, in Fourier space,

C(ω) = ∫ dt e^{iωt} ⟨x(τ + t) x(τ)⟩ = ⟨|x(ω)|²⟩ .   (5.29)
In order to find the correlation function explicitly, we utilize the equipartition theorem, which says that ⟨x²⟩ = k_B T/(m ω_0²). Then

⟨x²⟩ = ∫ (dω/2π) ⟨|x(ω)|²⟩ = k_B T/(m ω_0²) .   (5.30)

On the other hand, from Eqs. (5.25) and (5.29),

∫ (dω/2π) |χ(ω)|² ⟨|F(ω)|²⟩ = k_B T/(m ω_0²) .   (5.31)
We also find from Eq. (5.26) that

Im χ(ω) = ω η |χ(ω)|² ,   (5.32)

so we can rewrite Eq. (5.31) in the form

∫ (dω/2π) [Im χ(ω)/(η ω)] ⟨|F(ω)|²⟩ = k_B T/(m ω_0²) .   (5.33)
In view of the result (5.27), this equation holds provided that

⟨|F(ω)|²⟩ = 2η k_B T .   (5.34)

Namely, the fluctuations that produce the random force F are related to the viscosity η, which is the source of the dissipation, such that the power spectrum of the random force (i.e., its correlation) is white (does not depend on frequency),

⟨F(t) F(t′)⟩ = 2η k_B T δ(t − t′) .   (5.35)
∗ ∗ ∗Exercise. Find the spin susceptibility of the free electron gas, and analyze the result.
Bibliography
1. W. A. Harrison, Solid state theory, McGraw-Hill, 1970.
2. N. W. Ashcroft and N. D. Mermin, Solid state physics, Saunders College, 1975.
3. J. M. Ziman, Principles of the theory of solids, Cambridge University Press, 1965.
4. Y. Imry, Introduction to mesoscopic physics, Oxford University Press, 1997.
5. S. Datta, Electronic transport in mesoscopic systems, Cambridge University Press,
1995.