2. THERMODYNAMICS and ENSEMBLES
(Part B)
R. Bhattacharya, Department of Physics, Jadavpur University, Kolkata – 32
2.4 Ensembles
An ensemble is a collection of replicas of a system, all of which have identical
macroscopic parameters but whose microscopic states may be, and in general are, quite
different. Different ensembles are described below.
2.4.1 Isolated System (The Microcanonical Ensemble)
In trying to give statistical description to a system, one always has some information
available about the physical system under consideration. A representative statistical
ensemble is then constructed in such a way that all the systems in the ensemble satisfy
conditions consistent with one’s information about the system.
An isolated system is clearly an important example for consideration. Whenever one is
dealing with a system that interacts with another, it is always possible to reduce the
situation to the case of an isolated system by considering the combined system.
If V is the only relevant external parameter, then an isolated system consists of a given
number of N particles in a specified volume V, the constant energy of the system lying
in the range [E, E + δE]. Probability statements are then made with reference to an
ensemble which consists of many such systems. At equilibrium, the system is equally
likely to be found in any one of its accessible states. Then if the energy of system in a
state r is Er , the probability Pr of finding the system in this state is
P_r = C \ \text{for} \ E < E_r < E + \delta E, \quad P_r = 0 \ \text{otherwise}      (2.113)
An isolated system in equilibrium can be represented by an ensemble of systems
distributed according to the above prescription. This is called a microcanonical
ensemble.
Example: Ideal Gas in the Classical Limit
For a collection of N non-interacting particles,
H = \frac{1}{2m} \sum_{i=1}^{N} p_i^2 = E      (2.114)
The number of states Ω(E) lying between the energies E and E + δE is equal to the
number of cells in phase space contained between these energies:
\Omega(E) = \frac{1}{h^{3N}} \int_E^{E+\delta E} d^3p_1 \cdots d^3p_N \, d^3q_1 \cdots d^3q_N = \frac{V^N}{h^{3N}} \int_E^{E+\delta E} d^3p_1 \cdots d^3p_N      (2.115)
since E is independent of the co-ordinates of the molecules. Thus
\Omega(E) = \frac{V^N}{h^{3N}} \chi(E), \quad \text{where} \quad \chi(E) = \int_E^{E+\delta E} d^3p_1 \cdots d^3p_N      (2.116)
From Eq. (2.114) we have
2mE = \sum_{i=1}^{N} p_i^2      (2.117)
The sum contains 3N terms. In the 3N-dimensional momentum space this describes a sphere of radius
(2mE)^{1/2}. χ(E) is then equal to the volume of phase space lying in the spherical shell
between the radii R(E) and R(E + δE). Consider first how to calculate the volume of an
'n-sphere' of radius R, i.e.,
\Gamma_n(R) = \int_{\sum_i x_i^2 < R^2} dx_1 \cdots dx_n      (2.118)
Then,
\Gamma_n(R) = C_n R^n      (2.119)
To find C_n, consider
\int_{-\infty}^{+\infty} dx_1 \int_{-\infty}^{+\infty} dx_2 \cdots \int_{-\infty}^{+\infty} dx_n \, e^{-(x_1^2 + x_2^2 + \cdots + x_n^2)}      (2.120)
= \left( \int_{-\infty}^{+\infty} dx \, e^{-x^2} \right)^n = \pi^{n/2}      (2.121)
But Eq. (2.120) is also equal to
\int_0^{\infty} dR \, S_n(R) \, e^{-R^2}      (2.122)
where S_n(R) = d\Gamma_n(R)/dR = n C_n R^{n-1} is the surface area of an n-sphere of radius R. Therefore,
n C_n \int_0^{\infty} dR \, R^{n-1} e^{-R^2} = \frac{n C_n}{2} \int_0^{\infty} dt \, t^{\frac{n}{2}-1} e^{-t} = C_n \left( \frac{n}{2} \right)!      (2.123)
\therefore C_n = \frac{\pi^{n/2}}{(n/2)!}      (2.124)
Therefore, the volume of a shell of thickness s is
V_s = \Gamma_n(R) - \Gamma_n(R - s) = C_n \left[ R^n - (R - s)^n \right] = C_n R^n \left[ 1 - (1 - s/R)^n \right]      (2.125)
If ns >> R, then (1 - s/R)^n \approx e^{-ns/R} \to 0 and V_s \approx C_n R^n, i.e. the thin shell contains
essentially the whole volume of the sphere (for n ~ 10^23 this approximation is excellent).
Therefore,
\chi(E) \propto (2mE)^{3N/2}      (2.126)
and
\Omega(E) \propto V^N (2mE)^{3N/2} \propto V^N E^{3N/2}      (2.127)
\therefore \ln \Omega(E) = \text{const} + N \ln V + \frac{3N}{2} \ln E      (2.128)
\beta \bar{p} = \frac{\partial}{\partial V} \ln \Omega = \frac{N}{V}, \quad \text{or} \quad pV = NkT      (2.129)
and
\beta = \frac{\partial}{\partial E} \ln \Omega(E) = \frac{3N}{2} \frac{1}{E}, \quad \text{so that} \quad E = \frac{3NkT}{2}.
The entropy is given by
S = S(E,V) = k \ln \Omega(E) = k \left[ \ln C_{3N} + N \ln \frac{V}{h^3} + \frac{3N}{2} \ln(2mE) \right]      (2.130)
Now C_{3N} = \frac{\pi^{3N/2}}{(3N/2)!}. Using Stirling's approximation, ln L! = L ln L − L, we get
\ln C_{3N} = \frac{3N}{2} \ln \pi - \frac{3N}{2} \ln \frac{3N}{2} + \frac{3N}{2}      (2.131)
\therefore S = Nk \left[ \ln \frac{V}{h^3} + \frac{3}{2} \ln \pi + \frac{3}{2} \ln(2mE) - \frac{3}{2} \ln \frac{3N}{2} \right] + \frac{3Nk}{2}      (2.132)
= Nk \ln \left[ V \left( \frac{4\pi m E}{3 h^2 N} \right)^{3/2} \right] + \frac{3Nk}{2}      (2.133)
Inverting for the energy,
E(S,V) = \frac{3 h^2 N}{4\pi m} \, V^{-2/3} \exp\left( \frac{2S}{3Nk} - 1 \right)      (2.134)
Then
T = \left( \frac{\partial E}{\partial S} \right)_V = \frac{2E}{3Nk}, \quad C_V = \frac{3}{2} Nk, \quad \bar{p} = -\left( \frac{\partial E}{\partial V} \right)_S = \frac{2E}{3V} = \frac{NkT}{V}      (2.135)
Therefore,
S = Nk \ln\left( V u^{3/2} \right) + N S_0, \quad \text{where} \quad u = \tfrac{3}{2} kT, \quad S_0 = \frac{3k}{2} \left( 1 + \ln \frac{4\pi m}{3 h^2} \right)      (2.136)
Consider two ideal gases with N₁ and N₂ particles occupying volumes V₁ and V₂ at the
same temperature. Consider the change in entropy when the gases are allowed to mix in a
volume V = V₁ + V₂. The final temperature is the same, so u is unchanged. The change
in entropy is given by
\Delta S = (N_1 + N_2) k \ln V - N_1 k \ln V_1 - N_2 k \ln V_2 = N_1 k \ln \frac{V}{V_1} + N_2 k \ln \frac{V}{V_2} > 0      (2.137)
If the two gases are different, this result is correct. However, if the two gases are identical, we
would get the same increase in entropy (Gibbs' paradox). This is clearly wrong, because it
would mean that we could increase the entropy indefinitely by pulling out a large number of
partitions in a gas. The resolution of the paradox lies in the introduction of the factor 1/N! in the
phase volume of the gas. Thus S acquires the additional term −k ln N! = −k(N ln N − N), and
S = Nk \ln\left[ \frac{V}{N} u^{3/2} \right] + N S_0 + Nk      (2.138)
This removes the paradox and makes the entropy additive.
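The effect of the 1/N! factor can be checked numerically. A minimal sketch in reduced units (k = 1, with an arbitrary constant standing in for S₀; all parameter values are illustrative, not from the text): doubling N and V together leaves the corrected entropy exactly doubled, while the uncorrected Eq. (2.136) spuriously gains 2Nk ln 2.

```python
import math

def entropy_no_gibbs(N, V, u, S0=1.0):
    # Eq. (2.136), in units k = 1: S = N [ln(V u^{3/2}) + S0]
    return N * (math.log(V * u**1.5) + S0)

def entropy_gibbs(N, V, u, S0=1.0):
    # Eq. (2.138), with the 1/N! correction: S = N [ln((V/N) u^{3/2}) + S0 + 1]
    return N * (math.log((V / N) * u**1.5) + S0 + 1.0)

N, V, u = 1000.0, 2.0, 1.5

# doubling the system doubles the corrected entropy (extensivity)...
assert abs(entropy_gibbs(2*N, 2*V, u) - 2*entropy_gibbs(N, V, u)) < 1e-9
# ...while the uncorrected entropy spuriously gains 2N ln 2 when identical
# gases "mix" with themselves
excess = entropy_no_gibbs(2*N, 2*V, u) - 2*entropy_no_gibbs(N, V, u)
assert abs(excess - 2*N*math.log(2)) < 1e-9
```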
2.4.2 System in Contact with a Heat Reservoir: The Canonical Distribution
We have already considered the interaction of a system A with a system A′, where A << A′.
Thus A may be a relatively small macroscopic system. We ask the question: under the
conditions of equilibrium, what is the probability P_r of finding the system A in any one
particular microstate r of energy E_r?
The total energy of the system is, of course, fixed and is given by
E^{(0)} = E_r + E'      (2.139)
where E_r is the energy of A and E' is the energy of A′. Thus, if A is in the energy
state r, the number of states accessible to the combined system is just the number of states
accessible to A′ when it has an energy E' = E^{(0)} − E_r. Thus the probability of
occurrence of the state r for A is given by
P_r \propto \Omega'(E^{(0)} - E_r)      (2.140)
or
P_r = C \, \Omega'(E^{(0)} - E_r)      (2.141)
C can be determined from the condition \sum_r P_r = 1. Since A << A′, E_r << E^{(0)}, and we
can expand \ln \Omega'(E^{(0)} - E_r) about E^{(0)}. Thus we get
\ln \Omega'(E^{(0)} - E_r) = \ln \Omega'(E^{(0)}) - E_r \left. \frac{\partial \ln \Omega'}{\partial E'} \right|_{E' = E^{(0)}} + \cdots      (2.142)
We have neglected higher order terms in E_r since E_r << E^{(0)}. But
\beta \equiv \left. \frac{\partial \ln \Omega'}{\partial E'} \right|_{E' = E^{(0)}}      (2.143)
and this is a constant independent of E_r. β = 1/kT, where T is the temperature of the heat
bath A′. Thus,
\ln \Omega'(E^{(0)} - E_r) = \ln \Omega'(E^{(0)}) - \beta E_r, \quad \text{or} \quad \Omega'(E^{(0)} - E_r) = \Omega'(E^{(0)}) \, e^{-\beta E_r}      (2.144)
so that
P_r = C \, e^{-\beta E_r}      (2.145)
Since \sum_r P_r = 1, C^{-1} = \sum_r e^{-\beta E_r}, and
P_r = \frac{e^{-\beta E_r}}{Z}, \quad \text{where} \quad Z = \sum_r e^{-\beta E_r}      (2.146)
The probability distribution is called the canonical distribution, and the ensemble of
systems all of which are distributed over states according to this distribution is called a
canonical ensemble. Z is called the canonical partition function.
Before we try to point out the differences between the canonical and the microcanonical
distributions, let us take a look at two simple examples of the canonical distribution. The
probability of finding A in one particular state of energy E_r is P_r = C e^{-\beta E_r}. The
probability that A has an energy in the range between E and E + δE is then given by summing
P_r over all the states whose energy lies in this range. Therefore,
P(E) = \sum_r{}' P_r = C \, \Omega(E) \, e^{-\beta E}      (2.147)
where Ω(E) is the number of states of A in the energy range between E and E + δE.
Example 2.5 Molecule in an ideal gas
Consider a monatomic gas at temperature T confined in a volume V. If the number
density of the molecules is small, the interaction between them may be neglected.
Thus the total energy equals the sum of the energies of all the molecules. Treating the
problem classically, we can concentrate on a single molecule; the remaining molecules
then constitute a heat reservoir at temperature T. The energy of the molecule is
E = \frac{p^2}{2m}      (2.148)
If the position of the molecule lies between \vec{r} and \vec{r} + d\vec{r} and its momentum between
\vec{p} and \vec{p} + d\vec{p}, then the corresponding volume in phase space is d^3r \, d^3p. The probability that
the molecule lies in this region is obtained by multiplying the number of cells in phase space,
d^3r \, d^3p / h^3, by the canonical probability distribution. Thus,
P(\vec{r}, \vec{p}) \, d^3r \, d^3p \propto \frac{d^3r \, d^3p}{h^3} \, e^{-\beta p^2 / 2m}      (2.149)
Since the probability density is independent of \vec{r},
P(\vec{p}) \, d^3p \propto \int \frac{d^3r \, d^3p}{h^3} \, e^{-\beta p^2 / 2m} \propto e^{-\beta p^2 / 2m} \, d^3p      (2.150)
In terms of the velocity \vec{v} = \vec{p}/m,
P'(\vec{v}) \, d^3v = P(\vec{p}) \, d^3p = C \, e^{-\beta m v^2 / 2} \, d^3v      (2.151)
This is the Maxwell distribution.
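The normalisation of Eq. (2.151) and the equipartition result <v_x²> = kT/m can be verified by direct numerical integration over one velocity component. A minimal sketch in reduced units (the value of βm is an illustrative choice, not from the text):

```python
import math

def trapezoid(f, a, b, n=20000):
    # simple composite trapezoidal rule
    h = (b - a) / n
    return h * (0.5*f(a) + 0.5*f(b) + sum(f(a + i*h) for i in range(1, n)))

beta_m = 2.0                                # beta*m in reduced units, so kT/m = 0.5
C = math.sqrt(beta_m / (2.0 * math.pi))     # 1-D normalisation (beta m / 2 pi)^{1/2}
f = lambda v: C * math.exp(-0.5 * beta_m * v * v)

norm    = trapezoid(f, -10.0, 10.0)
mean_sq = trapezoid(lambda v: v * v * f(v), -10.0, 10.0)

assert abs(norm - 1.0) < 1e-6               # the distribution is normalised
assert abs(mean_sq - 1.0/beta_m) < 1e-6     # <v_x^2> = kT/m (equipartition)
```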
Example 2.6 Paramagnetism
Consider N₀ magnetic atoms per unit volume, each with magnetic moment μ and spin
half, placed in a magnetic field H. Each atom has two states, + and −, corresponding to
energies ε₊ = −μH and ε₋ = +μH. Concentrating again on a single atom,
P_+ = C \, e^{-\beta \varepsilon_+} = C \, e^{\beta \mu H}; \quad P_- = C \, e^{-\beta \varepsilon_-} = C \, e^{-\beta \mu H}      (2.152)
\therefore \bar{\mu} = \frac{P_+ \mu + P_- (-\mu)}{P_+ + P_-} = \mu \, \frac{e^{\beta \mu H} - e^{-\beta \mu H}}{e^{\beta \mu H} + e^{-\beta \mu H}} = \mu \tanh \frac{\mu H}{kT}
Now, tanh y ≈ y for y << 1 and tanh y ≈ 1 for y >> 1. Therefore,
\bar{\mu} = \frac{\mu^2 H}{kT} \ \text{for} \ \frac{\mu H}{kT} << 1, \quad \text{and} \quad \bar{\mu} = \mu \ \text{for} \ \frac{\mu H}{kT} >> 1      (2.153)
The susceptibility χ is given by \bar{M} = \chi H, where \bar{M} is the magnetic moment per unit
volume. Therefore,
\chi = \frac{N_0 \mu^2}{kT} \ \text{for} \ \frac{\mu H}{kT} << 1 \ \text{(Curie's law)}, \quad \text{and} \quad \bar{M} = N_0 \mu \ \text{(saturation) for} \ \frac{\mu H}{kT} >> 1      (2.154)
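The two limits in Eq. (2.153) can be checked directly from the tanh form. A small sketch in reduced units (k = 1; the values of μ, H, and T are hypothetical):

```python
import math

k = 1.0  # Boltzmann constant in reduced units (illustrative)

def mean_moment(mu, H, T):
    # Eq. (2.152)-(2.153): mean moment of a spin-1/2 atom, mu * tanh(mu H / kT)
    return mu * math.tanh(mu * H / (k * T))

mu = 1.0
# weak-field / high-temperature limit: Curie behaviour, mu_bar ~ mu^2 H / kT
assert abs(mean_moment(mu, 0.01, 100.0) - mu**2 * 0.01 / (k * 100.0)) < 1e-9
# strong-field / low-temperature limit: saturation, mu_bar -> mu
assert abs(mean_moment(mu, 1000.0, 0.01) - mu) < 1e-9
```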
2.4.4 Mean Value in a Canonical Ensemble
Using P_r from Eq. (2.146), we get for the mean value of the energy,
\bar{E} = \frac{\sum_r E_r e^{-\beta E_r}}{\sum_r e^{-\beta E_r}}      (2.155)
But
\sum_r E_r e^{-\beta E_r} = -\sum_r \frac{\partial}{\partial \beta} e^{-\beta E_r} = -\frac{\partial}{\partial \beta} \sum_r e^{-\beta E_r} = -\frac{\partial Z}{\partial \beta}      (2.156)
where Z = \sum_r e^{-\beta E_r} is the partition function and is a function of temperature.
\therefore \bar{E} = -\frac{1}{Z} \frac{\partial Z}{\partial \beta} = -\frac{\partial}{\partial \beta} \ln Z      (2.157)
Consider now the fluctuations in the energy.
\overline{\Delta E^2} = \overline{(E - \bar{E})^2} = \overline{E^2} - (\bar{E})^2      (2.158)
\overline{E^2} = \frac{\sum_r E_r^2 e^{-\beta E_r}}{\sum_r e^{-\beta E_r}} = \frac{1}{Z} \frac{\partial^2 Z}{\partial \beta^2}, \quad \text{since} \quad \sum_r E_r^2 e^{-\beta E_r} = \left( -\frac{\partial}{\partial \beta} \right)^2 \sum_r e^{-\beta E_r}      (2.159)
We can also write
\overline{\Delta E^2} + \bar{E}^2 = \frac{1}{Z} \frac{\partial^2 Z}{\partial \beta^2} = \frac{\partial}{\partial \beta} \left( \frac{1}{Z} \frac{\partial Z}{\partial \beta} \right) + \frac{1}{Z^2} \left( \frac{\partial Z}{\partial \beta} \right)^2 = -\frac{\partial \bar{E}}{\partial \beta} + \bar{E}^2      (2.160)
\therefore \overline{\Delta E^2} = -\frac{\partial \bar{E}}{\partial \beta} = \frac{\partial^2}{\partial \beta^2} \ln Z \geq 0      (2.161)
and hence
\frac{\partial \bar{E}}{\partial \beta} \leq 0 \quad \text{or} \quad \frac{\partial \bar{E}}{\partial T} \geq 0      (2.162)
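The fluctuation relation ΔE²‾ = −∂Ē/∂β can be verified numerically for any discrete spectrum. A minimal sketch with an arbitrary three-level spectrum (the level energies and β are illustrative values, not from the text), comparing the direct variance with a finite-difference derivative of Ē:

```python
import math

def Z(beta, levels):
    return sum(math.exp(-beta * e) for e in levels)

def E_mean(beta, levels):
    return sum(e * math.exp(-beta * e) for e in levels) / Z(beta, levels)

def E_var(beta, levels):
    e2 = sum(e * e * math.exp(-beta * e) for e in levels) / Z(beta, levels)
    return e2 - E_mean(beta, levels)**2

levels = [0.0, 1.0, 2.5]     # arbitrary illustrative spectrum
beta, h = 0.7, 1e-6

# central finite difference for -dE/d(beta)
dE = -(E_mean(beta + h, levels) - E_mean(beta - h, levels)) / (2 * h)
assert abs(E_var(beta, levels) - dE) < 1e-6   # variance = -dE/d(beta)
assert E_var(beta, levels) >= 0.0             # hence dE/dT >= 0
```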
Pressure
Suppose that the system is parametrized by a single external parameter x. Consider a
quasi-static change in the external parameter from x to x + dx. The energy of state r then
changes by Δ_x E_r = (∂E_r/∂x) dx, and the work done by the system in this state is
dW_r = -\Delta_x E_r = -\frac{\partial E_r}{\partial x} \, dx      (2.162)
The macroscopic work done by the system as a result of this change is
dW = \frac{1}{Z} \sum_r e^{-\beta E_r} \left( -\frac{\partial E_r}{\partial x} \right) dx      (2.163)
But
\sum_r e^{-\beta E_r} \frac{\partial E_r}{\partial x} = -\frac{1}{\beta} \frac{\partial}{\partial x} \sum_r e^{-\beta E_r} = -\frac{1}{\beta} \frac{\partial Z}{\partial x}      (2.164)
or
dW = \frac{1}{\beta Z} \frac{\partial Z}{\partial x} \, dx = \frac{1}{\beta} \frac{\partial \ln Z}{\partial x} \, dx      (2.165)
If the mean generalised force is \bar{X}, then
dW = \bar{X} \, dx, \quad \bar{X} = -\overline{\frac{\partial E_r}{\partial x}} = \frac{1}{\beta} \frac{\partial \ln Z}{\partial x}      (2.166)
Thus, if x = V, the volume, and \bar{p} the pressure, then
dW = \bar{p} \, dV = \frac{1}{\beta} \frac{\partial \ln Z}{\partial V} \, dV      (2.167)
and
\bar{p} = \frac{1}{\beta} \frac{\partial \ln Z}{\partial V}      (2.168)
which is the equation of state for the system.
Connection with thermodynamics
All important quantities can be expressed in terms of the partition function Z or ln Z.
Considering E = E(V), we have Z = Z(β, V) and
d \ln Z = \frac{\partial \ln Z}{\partial V} dV + \frac{\partial \ln Z}{\partial \beta} d\beta = \beta \, dW - \bar{E} \, d\beta = \beta \, dW - d(\bar{E} \beta) + \beta \, d\bar{E}      (2.169)
\therefore d(\ln Z + \beta \bar{E}) = \beta (dW + d\bar{E}) = \beta \, dQ      (2.170)
where dQ is the heat absorbed by the system. But dS = dQ/T. Therefore,
S = k (\ln Z + \beta \bar{E})      (2.171)
or
TS = kT \ln Z + \bar{E}      (2.172)
or
F = \bar{E} - TS = -kT \ln Z      (2.173)
which is the free energy. Now,
dF = d\bar{E} - T dS - S dT = d\bar{E} - \bar{p} \, dV - d\bar{E} - S dT = -\bar{p} \, dV - S dT      (2.174)
\therefore \bar{p} = -\left( \frac{\partial F}{\partial V} \right)_T; \quad S = -\left( \frac{\partial F}{\partial T} \right)_V      (2.175)
where \bar{E} = F + TS. Thus one can get all the thermodynamic quantities from the
partition function.
The Entropy
We have Z = \sum_r e^{-\beta E_r} = \sum_E \Omega(E) \, e^{-\beta E}, where Ω(E) is the number of states
between E and E + δE. The summand has a sharp maximum at some value \tilde{E} of the
energy. The mean energy is then \bar{E} = \tilde{E}, and the summand is appreciable only in
some narrow range ΔE around the maximum. Therefore,
Z \approx \Omega(\bar{E}) \, e^{-\beta \bar{E}} \, \frac{\Delta E}{\delta E}      (2.176)
or
\ln Z = \ln \Omega(\bar{E}) - \beta \bar{E} + \ln \frac{\Delta E}{\delta E}      (2.177)
But \ln (\Delta E / \delta E) is at most of the order of ln s, where s is the number of degrees of
freedom, while \ln \Omega(\bar{E}) is O(s). Then,
S = k (\ln Z + \beta \bar{E}) = k \ln \Omega(\bar{E})      (2.178)
2.4.5 Weakly Interacting Systems
Consider again a system A^{(0)} divided into two weakly interacting parts at the
same temperature:
A^{(0)} = A + A', \quad E_{rs}^{(0)} = E_r + E_s'      (2.179)
Then,
Z^{(0)} = \sum_{r,s} e^{-\beta (E_r + E_s')} = \sum_r e^{-\beta E_r} \sum_s e^{-\beta E_s'} = Z Z'      (2.180)
\ln Z^{(0)} = \ln Z + \ln Z'      (2.181)
and hence
\bar{E}^{(0)} = \bar{E} + \bar{E}', \quad S^{(0)} = k \left( \ln Z^{(0)} + \beta \bar{E}^{(0)} \right) = S + S'      (2.182)
The Ideal Gas (Canonical Distribution)
We have again
E = \sum_i \frac{p_i^2}{2m}      (2.183)
Z = \frac{1}{N! \, h^{3N}} \int d^3r_1 \cdots d^3r_N \, d^3p_1 \cdots d^3p_N \, e^{-\beta \sum_i p_i^2 / 2m}      (2.184)
= \frac{V^N}{N! \, h^{3N}} \left( \int_{-\infty}^{+\infty} dp \, e^{-\beta p^2 / 2m} \right)^{3N} = \frac{V^N}{N! \, h^{3N}} \left( \frac{2\pi m}{\beta} \right)^{3N/2}      (2.185)
\ln Z = N \left[ \ln V - \frac{3}{2} \ln \beta + \frac{3}{2} \ln \frac{2\pi m}{h^2} \right] - \ln N!      (2.186)
\bar{p} = \frac{1}{\beta} \frac{\partial \ln Z}{\partial V} = \frac{N}{\beta V}, \quad \text{i.e.,} \quad pV = NkT      (2.187)
and, with \bar{E} = 3N/2\beta,
S = k (\ln Z + \beta \bar{E}) = Nk \left[ \ln V - \frac{3}{2} \ln \beta + \frac{3}{2} \ln \frac{2\pi m}{h^2} + \frac{3}{2} \right] - k \ln N!      (2.188)
Using Stirling's approximation, ln N! = N ln N − N,
= Nk \left[ \ln \frac{V}{N} + \frac{3}{2} \ln kT + \frac{3}{2} \ln \frac{2\pi m}{h^2} + \frac{5}{2} \right]      (2.189)
= Nk \left[ \ln \frac{V u^{3/2}}{N} + \frac{3}{2} \ln \frac{4\pi m}{3 h^2} + \frac{5}{2} \right]      (2.190)
= Nk \ln \frac{V u^{3/2}}{N} + N S_0 + Nk, \quad \text{where} \quad u = \frac{3}{2} kT, \quad S_0 = \frac{3k}{2} \left( 1 + \ln \frac{4\pi m}{3 h^2} \right)      (2.191)
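The chain of manipulations from Eq. (2.189) to Eq. (2.191) can be spot-checked numerically. A sketch in reduced units with k = h = 1 (the values of N, V, m, T are arbitrary illustrative choices):

```python
import math

# arbitrary illustrative values in reduced units with k = h = 1
N, V, m, T = 100.0, 50.0, 3.0, 2.0
k, h = 1.0, 1.0

u  = 1.5 * k * T
S0 = 1.5 * k * (1.0 + math.log(4.0 * math.pi * m / (3.0 * h * h)))

# Eq. (2.189): S = Nk [ln(V/N) + (3/2) ln kT + (3/2) ln(2 pi m / h^2) + 5/2]
S_189 = N * k * (math.log(V / N) + 1.5 * math.log(k * T)
                 + 1.5 * math.log(2.0 * math.pi * m / h**2) + 2.5)
# Eq. (2.191): S = Nk ln(V u^{3/2} / N) + N S0 + Nk
S_191 = N * k * math.log(V * u**1.5 / N) + N * S0 + N * k

assert abs(S_189 - S_191) < 1e-8   # the two forms agree term by term
```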
2.4.6 Equivalence of the Canonical and the Microcanonical Distributions
We have seen that the entropy obtained from either distribution gives the same result.
Instead of arguing mathematically, we can also argue physically to show the equivalence
of the distributions. The microcanonical distribution is the appropriate description for an
isolated system with a given energy. The canonical distribution describes a system
interacting with a reservoir, and hence the system does not have a fixed energy. However,
the probability that the system has an energy between E and E + δE,
P(E) \propto \Omega(E) \, e^{-\beta E},      (2.192)
has a sharp maximum at some value E = \bar{E}, the mean energy of the system. One can
approximate even an isolated system by a canonical distribution by choosing β in such a
way that the maximum occurs at the desired value of the energy. Though the energy is not
strictly constant, it deviates negligibly from \bar{E} because of the sharpness of the
distribution. The advantage of the canonical distribution is that there is no restriction on
the value of the energy, and all the quantities of interest are much easier to evaluate.
2.4.7 The Grand Canonical Distribution
Consider now a system A of fixed volume in contact with a reservoir A′ with which it
can exchange not only energy but also particles. Then neither the energy of A nor its
number of particles N is fixed. The total energy of the combined system A^{(0)} = A + A′ and
its total number of particles are fixed:
E^{(0)} = E + E' = \text{constant}      (2.193)
N^{(0)} = N + N' = \text{constant}      (2.194)
We now ask for the probability of finding the system A in a given state r where it has an
energy E_r and contains N_r particles.
The argument is similar to the one used before. Let Ω′(E′, N′) denote the number of states
accessible to A′ when it contains N′ particles in the energy range near E′. If A
is in state r, then the number of states accessible to the entire system is simply Ω′.
Therefore,
P_r(E_r, N_r) \propto \Omega'(E^{(0)} - E_r, \, N^{(0)} - N_r)      (2.195)
If A << A′, then E_r << E^{(0)} and N_r << N^{(0)}. Then,
\ln \Omega'(E^{(0)} - E_r, \, N^{(0)} - N_r) = \ln \Omega'(E^{(0)}, N^{(0)}) - \beta E_r - \alpha N_r + \cdots      (2.196)
where
\beta \equiv \left. \frac{\partial \ln \Omega'}{\partial E'} \right|_{E' = E^{(0)}} \quad \text{and} \quad \alpha \equiv \left. \frac{\partial \ln \Omega'}{\partial N'} \right|_{N' = N^{(0)}}      (2.197)
We have neglected the higher derivatives as well as the cross terms between E_r and N_r. Then
\Omega'(E^{(0)} - E_r, \, N^{(0)} - N_r) = \Omega'(E^{(0)}, N^{(0)}) \, e^{-\beta E_r - \alpha N_r}      (2.198)
and
P_r = \frac{1}{Z} \, e^{-\beta E_r - \alpha N_r}      (2.199)
where
Z = \sum_{r, N_r} e^{-\beta E_r - \alpha N_r}      (2.200)
Z is called the grand partition function. It involves a sum over energies as well as over the
numbers of particles. The average energy and the average particle number in the system are
\bar{E} = \frac{1}{Z} \sum_{r, N_r} E_r \, e^{-\beta E_r - \alpha N_r} \quad \text{and} \quad \bar{N} = \frac{1}{Z} \sum_{r, N_r} N_r \, e^{-\beta E_r - \alpha N_r}      (2.201)
As in the earlier case of the canonical partition function, we have
\bar{E} = -\frac{1}{Z} \frac{\partial Z}{\partial \beta} = -\frac{\partial}{\partial \beta} \ln Z      (2.202)
The average number of particles is given by
\bar{N} = -\frac{1}{Z} \frac{\partial}{\partial \alpha} \sum_{r, N_r} e^{-\beta E_r - \alpha N_r} = -\frac{1}{Z} \frac{\partial Z}{\partial \alpha} = -\frac{\partial}{\partial \alpha} \ln Z      (2.203)
Writing Z = Z(α, β, V) and α = −βμ = −μ/kT, we get
d \ln Z = \frac{\partial \ln Z}{\partial \alpha} d\alpha + \frac{\partial \ln Z}{\partial \beta} d\beta + \frac{\partial \ln Z}{\partial V} dV      (2.204)
= -\bar{N} \, d\alpha - \bar{E} \, d\beta + \beta \bar{p} \, dV = \bar{N} \, d(\beta\mu) - \bar{E} \, d\beta + \beta \bar{p} \, dV      (2.205)
= d(\beta \mu \bar{N}) - \beta \mu \, d\bar{N} - d(\bar{E} \beta) + \beta \, d\bar{E} + \beta \bar{p} \, dV      (2.206)
\therefore d(\ln Z + \bar{E} \beta - \beta \mu \bar{N}) = \beta \, d\bar{E} - \beta \mu \, d\bar{N} + \beta \bar{p} \, dV      (2.207)
Recalling that dQ = T \, dS = d\bar{E} + \bar{p} \, dV - \mu \, d\bar{N}, we have
d\bar{E} + \bar{p} \, dV - \mu \, d\bar{N} = kT \, d(\ln Z + \bar{E} \beta - \beta \mu \bar{N}) = T \, dS      (2.208)
or
S = k (\ln Z + \bar{E} \beta - \beta \mu \bar{N})      (2.209)
kT \ln Z = TS - \bar{E} + \mu \bar{N} = -\Phi      (2.210)
where
\Phi = \bar{E} - TS - \mu \bar{N} = F - \mu \bar{N}      (2.211)
In terms of this new potential Φ, the ensemble averages are given by
\bar{N} = -\left( \frac{\partial \Phi}{\partial \mu} \right)_{V,T}; \quad \bar{p} = -\left( \frac{\partial \Phi}{\partial V} \right)_{T,\mu}; \quad S = -\left( \frac{\partial \Phi}{\partial T} \right)_{V,\mu}      (2.212)
Ideal gas in the grand canonical distribution
The grand partition function may be written as
Z = \sum_{r, N} e^{-\beta E_r - \alpha N} = \sum_{N=0}^{\infty} e^{\beta \mu N} Z_N      (2.213)
where Z_N is the canonical partition function for N particles, which by Eq. (2.185) may be
rewritten as
Z_N = \frac{V^N}{N! \, h^{3N}} \left( \frac{2\pi m}{\beta} \right)^{3N/2} = \frac{1}{N!} \left( \frac{V}{\lambda^3} \right)^N, \quad \text{where} \quad \lambda = h \left( \frac{\beta}{2\pi m} \right)^{1/2} = \frac{h}{\sqrt{2\pi m kT}}      (2.214)
Therefore,
Z = \sum_{N=0}^{\infty} \frac{1}{N!} \left( \frac{V e^{\beta\mu}}{\lambda^3} \right)^N = \exp\left( e^{\beta\mu} \frac{V}{\lambda^3} \right)      (2.215)
so that
\Phi = -kT \ln Z = -kT \, e^{\beta\mu} \frac{V}{\lambda^3}      (2.216)
and
\bar{N} = -\left( \frac{\partial \Phi}{\partial \mu} \right)_{V,T} = e^{\beta\mu} \frac{V}{\lambda^3}, \quad \text{or} \quad \mu = kT \ln \frac{\bar{N} \lambda^3}{V}      (2.217)
and
\bar{p} = -\left( \frac{\partial \Phi}{\partial V} \right)_{T,\mu} = kT \frac{e^{\beta\mu}}{\lambda^3} = \frac{\bar{N}}{V} kT      (2.218)
2.5 Quantum Statistics of Ideal Gases
Consider a gas of N structureless particles within a container of volume V. The state of
the gas may be described by the wavefunction
\Psi = \Psi_{\{s_1, \ldots, s_N\}}(Q_1, Q_2, \ldots, Q_N)      (2.219)
Here, Q_i denotes collectively all the coordinates of the i-th particle (i.e. its position and spin
coordinates, if any). The possible quantum states are labelled by the indices s_i.
Two cases need to be considered. In the classical case (Maxwell-Boltzmann statistics),
the particles are considered distinguishable and any number of particles can be in the same
state s. The ‘classical’ case imposes no symmetry requirements on the wavefunction. In
the quantum mechanical case, quantum mechanics imposes definite symmetry
requirements on the wavefunction under the exchange or interchange of identical
particles. When the particles are indistinguishable, one does not get a new state by
interchanging two particles. When counting the distinct possible states of the system, the
relevant consideration is how many particles are in each state s rather than which particle
is in which state. Moreover, the symmetry properties are different for particles with
integral or half-integral spins (in units of ħ).
(a) Bosons are particles with integral spin. For bosons, Ψ is symmetric under the
interchange of two particles, i.e.,
Ψ(Q1 , Q2 ....Qi .....Q j ......Q N ) = Ψ(Q1 , Q2 ....Q j .....Qi ......Q N )
(2.220)
Examples of bosons are photons, pions, kaons and ⁴He atoms.
(b) Fermions are particles with half-integral spins. Examples are electrons, protons, ³He
atoms, the Δ⁺⁺, and so on. Quantum mechanics requires that Ψ be antisymmetric under the
interchange of two particles:
Ψ(Q1 , Q2 ....Qi .....Q j ......Q N ) = −Ψ(Q1 , Q2 ....Q j .....Qi ......Q N )
(2.221)
If two particles are in the same state, then clearly, Ψ = 0. This is the familiar Pauli
exclusion principle.
Example 2.7
Consider the different ways of distributing two particles among three quantum states in the
three cases.
(i) Maxwell-Boltzmann statistics: The particles A and B are considered distinguishable,
and any number can be in a given state. The distribution can be seen in the first section of
the following table (2.2).
Table 2.2 Particle occupancies of two particles in three states in the three cases.

  Maxwell-Boltzmann      Bose-Einstein       Fermi-Dirac
   1    2    3            1    2    3         1    2    3
  ------------------     ----------------    ----------------
   AB   --   --           AA   --   --        A    A    --
   --   AB   --           --   AA   --        A    --   A
   --   --   AB           --   --   AA        --   A    A
   A    B    --           A    A    --
   B    A    --           A    --   A
   A    --   B            --   A    A
   B    --   A
   --   A    B
   --   B    A

(ii) Bose-Einstein statistics: Particles are indistinguishable, and one or more can be in the
same state.
(iii) Fermi-Dirac statistics: Particles are indistinguishable and no two can be in the same
state.
Thus in the Bose-Einstein case there is a greater relative tendency for the particles to
bunch together. This phenomenon is called Bose-Einstein condensation; it was predicted
in 1924-25 and experimentally realised in a dilute atomic gas in 1995.
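The counting in Table 2.2 can be reproduced with a few lines of enumeration. This sketch labels the three states 0, 1, 2, confirms the 9 : 6 : 3 counts for the three statistics, and exhibits the bosonic tendency to bunch:

```python
from itertools import product, combinations, combinations_with_replacement

states = range(3)
mb = list(product(states, repeat=2))                   # distinguishable A, B
be = list(combinations_with_replacement(states, 2))    # indistinguishable, any occupancy
fd = list(combinations(states, 2))                     # indistinguishable, occupancy <= 1

assert (len(mb), len(be), len(fd)) == (9, 6, 3)

# fraction of configurations with both particles in the same state
same_mb = sum(1 for s in mb if s[0] == s[1]) / len(mb)  # 3/9
same_be = sum(1 for s in be if s[0] == s[1]) / len(be)  # 3/6: bosons "bunch"
assert same_be > same_mb
```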
Now we turn to the formulation of the problem in quantum statistics. Consider a system
of N particles in a volume V in equilibrium at temperature T. Let ε_r be the energy of a
particle in state r and n_r be the number of particles in state r. If R denotes a state of the
whole system, then, assuming no interaction between the particles,
E_R = n_1 \varepsilon_1 + n_2 \varepsilon_2 + \cdots = \sum_r n_r \varepsilon_r      (2.222)
N = n_1 + n_2 + \cdots = \sum_r n_r      (2.223)
The partition function is then
Z = \sum_R e^{-\beta E_R} = \sum_R e^{-\beta \sum_r n_r \varepsilon_r}      (2.224)
This has to be evaluated keeping in mind \sum_r n_r = N. In general this is a difficult thing to
do, and it is more convenient to work with the grand canonical distribution, for which this
condition does not apply. The grand partition function is
Z = \sum_{R, N_R} e^{-\beta E_R + \beta \mu N_R} = \sum_{n_1, n_2, \ldots} e^{\beta \mu (n_1 + n_2 + \cdots)} \, e^{-\beta (n_1 \varepsilon_1 + n_2 \varepsilon_2 + \cdots)}      (2.225)
= \sum_{n_1} e^{\beta (\mu - \varepsilon_1) n_1} \sum_{n_2} e^{\beta (\mu - \varepsilon_2) n_2} \cdots      (2.226)
\therefore \ln Z = \ln Z_1 + \ln Z_2 + \cdots, \quad \text{where} \quad Z_i = \sum_{n_i} e^{\beta (\mu - \varepsilon_i) n_i}      (2.227)
But \Phi = -kT \ln Z. Therefore,
\Phi = \sum_i \Phi_i; \quad \Phi_i = -kT \ln \sum_{n_i} e^{\beta (\mu - \varepsilon_i) n_i}      (2.228)
for n_i particles in a given state i with energy ε_i.
2.5.1 Fermi-Dirac Statistics
The n_i are called the occupation numbers of the states i. According to the Pauli
principle, the occupation numbers for fermions are either 0 or 1. We have, therefore,
\Phi_i = -kT \ln \sum_{n_i = 0,1} e^{\beta (\mu - \varepsilon_i) n_i} = -kT \ln \left( 1 + e^{\beta (\mu - \varepsilon_i)} \right)      (2.229)
The mean number of particles in the i-th quantum state is therefore given by
\bar{n}_i = -\frac{\partial \Phi_i}{\partial \mu} = \frac{e^{\beta (\mu - \varepsilon_i)}}{1 + e^{\beta (\mu - \varepsilon_i)}} = \frac{1}{e^{\beta (\varepsilon_i - \mu)} + 1}      (2.230)
This is the Fermi-Dirac distribution for the mean number of particles in a given quantum
state at temperature T. If we have a system with a total number of particles N, then we
must have
\sum_i \bar{n}_i = N, \quad \text{or} \quad \sum_i \frac{1}{e^{\beta (\varepsilon_i - \mu)} + 1} = N      (2.231)
This implicitly determines the chemical potential μ in terms of T and N. The
thermodynamic potential for the whole system is given by
\Phi = \sum_i \Phi_i = -kT \sum_i \ln \left( 1 + e^{\beta (\mu - \varepsilon_i)} \right)      (2.232)
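The main features of the Fermi-Dirac occupation — the value 1/2 at ε = μ, the step-function form at low temperature, and the Pauli bound n̄ ≤ 1 — can be sketched in a few lines (reduced units; the parameter values are illustrative):

```python
import math

def fermi_dirac(eps, mu, kT):
    # Eq. (2.230): mean occupation of a single-particle state
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

mu = 1.0
assert abs(fermi_dirac(mu, mu, 0.1) - 0.5) < 1e-12      # n = 1/2 at eps = mu
# low temperature: nearly a step function at the chemical potential
assert fermi_dirac(mu - 0.5, mu, 0.01) > 0.999
assert fermi_dirac(mu + 0.5, mu, 0.01) < 0.001
# the occupation never exceeds 1 (Pauli principle)
assert all(0.0 <= fermi_dirac(e, mu, 0.5) <= 1.0 for e in (0.0, 0.5, 1.0, 2.0))
```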
2.5.2 Bose Statistics
For an ideal system of bosons, the occupation numbers of the quantum states are
unrestricted and can take any values. Therefore
\Phi_i = -kT \ln \sum_{n_i = 0}^{\infty} e^{\beta (\mu - \varepsilon_i) n_i}      (2.233)
The series \sum_{n_i} e^{\beta (\mu - \varepsilon_i) n_i} is a geometric series that converges provided
e^{\beta (\mu - \varepsilon_i)} < 1. Since this must be satisfied for all ε_i, including ε_i = 0, we must
have μ < 0. Then,
\Phi_i = -kT \ln \frac{1}{1 - e^{\beta (\mu - \varepsilon_i)}} = kT \ln \left( 1 - e^{\beta (\mu - \varepsilon_i)} \right)      (2.234)
\bar{n}_i = -\frac{\partial \Phi_i}{\partial \mu} = \frac{e^{\beta (\mu - \varepsilon_i)}}{1 - e^{\beta (\mu - \varepsilon_i)}} = \frac{1}{e^{\beta (\varepsilon_i - \mu)} - 1}      (2.235)
This is the Bose (or Bose-Einstein) distribution. Again,
\sum_i \frac{1}{e^{\beta (\varepsilon_i - \mu)} - 1} = N      (2.236)
and
\Phi = \sum_i \Phi_i = kT \sum_i \ln \left( 1 - e^{\beta (\mu - \varepsilon_i)} \right)      (2.236)
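Equation (2.235) can be checked against a direct summation of the geometric series, computing <n> = Σ n P_n with P_n ∝ e^{β(μ−ε)n}. A minimal sketch in reduced units (the values of ε, μ, kT are illustrative, with μ < ε as required for convergence):

```python
import math

def bose_einstein(eps, mu, kT):
    # Eq. (2.235); requires mu < eps
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def occupation_from_sum(eps, mu, kT, nmax=2000):
    # <n> computed directly from P_n ~ e^{beta (mu - eps) n}, truncating the series
    x = math.exp((mu - eps) / kT)          # < 1, so the series converges
    weights = [x**n for n in range(nmax)]
    return sum(n * w for n, w in enumerate(weights)) / sum(weights)

eps, mu, kT = 1.0, 0.0, 0.5
assert abs(bose_einstein(eps, mu, kT) - occupation_from_sum(eps, mu, kT)) < 1e-10
```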
2.5.3 Thermodynamics of Boson and Fermion Systems
We can use the relations we have derived to discuss the thermodynamics of systems of
noninteracting bosons and fermions. The energy of a particle is
\varepsilon = \frac{1}{2m} \left( p_x^2 + p_y^2 + p_z^2 \right)      (2.237)
In the distribution function we make the usual change to the distribution in the phase space
of the particle. For a given position and momentum, the state of the particle still depends on
its spin. Hence the number of particles in a given volume d³p dV of phase space is found by
multiplying the Fermi or the Bose distribution by g d³p dV / h³, where g = 2s + 1 and s is
the spin of the particle. Therefore,
dN = \frac{g \, d^3p \, dV}{h^3} \, \frac{1}{e^{\beta (\varepsilon - \mu)} \pm 1} \quad (+ \text{ for F.D. and } - \text{ for B.E.})      (2.238)
The number of particles with magnitude of momentum between p and p + dp is
dN_p = \frac{4\pi g V}{h^3} \, \frac{p^2 \, dp}{e^{\beta (\varepsilon - \mu)} \pm 1}      (2.239)
Since \varepsilon = p^2/2m, we have d\varepsilon = p \, dp / m, i.e. p \, dp = m \, d\varepsilon and p = \sqrt{2m\varepsilon}, so that
p^2 \, dp = \sqrt{2} \, m^{3/2} \sqrt{\varepsilon} \, d\varepsilon      (2.240)
The energy distribution is therefore
dN_\varepsilon = \frac{4\pi g V}{h^3} \sqrt{2} \, m^{3/2} \, \frac{\sqrt{\varepsilon} \, d\varepsilon}{e^{\beta (\varepsilon - \mu)} \pm 1}      (2.241)
The total number of particles is given by
N = \int_0^\infty dN_\varepsilon = \frac{4\pi g \sqrt{2} \, m^{3/2} V}{h^3} \int_0^\infty \frac{\sqrt{\varepsilon} \, d\varepsilon}{e^{\beta (\varepsilon - \mu)} \pm 1}      (2.242)
or, with z = \beta \varepsilon and h = 2\pi\hbar,
\frac{N}{V} = \frac{g (mkT)^{3/2}}{\sqrt{2} \, \pi^2 \hbar^3} \int_0^\infty \frac{\sqrt{z} \, dz}{e^{z - \beta\mu} \pm 1}      (2.243)
Similarly, the grand potential Ω (the potential Φ of Eq. (2.211), equal to −\bar{p}V) is
\Omega = \mp \frac{g V kT \, m^{3/2}}{\sqrt{2} \, \pi^2 \hbar^3} \int_0^\infty \sqrt{\varepsilon} \, \ln \left[ 1 \pm e^{\beta (\mu - \varepsilon)} \right] d\varepsilon      (2.244)
with the upper sign for F.D. and the lower sign for B.E. Integrating by parts,
\Omega = -\frac{2}{3} \, \frac{g V m^{3/2}}{\sqrt{2} \, \pi^2 \hbar^3} \int_0^\infty \frac{\varepsilon^{3/2} \, d\varepsilon}{e^{\beta (\varepsilon - \mu)} \pm 1}      (2.245)
Finally,
\bar{E} = \int_0^\infty \varepsilon \, dN_\varepsilon = \frac{g V m^{3/2}}{\sqrt{2} \, \pi^2 \hbar^3} \int_0^\infty \frac{\varepsilon^{3/2} \, d\varepsilon}{e^{\beta (\varepsilon - \mu)} \pm 1}      (2.246)
Comparing (2.245) and (2.246), and recalling Ω = −\bar{p}V, we get \bar{p}V = \frac{2}{3}\bar{E} for both
gases. These relations allow us to study thermodynamics.
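The integration by parts connecting (2.244) and (2.245) — and hence the relation Ω = −(2/3)Ē — can be verified numerically for the Fermi case with a simple trapezoidal quadrature. A sketch in reduced units (the cutoff at ε = 60 and the values of β, μ are illustrative choices):

```python
import math

def trapezoid(f, a, b, n=40000):
    h = (b - a) / n
    return h * (0.5*f(a) + 0.5*f(b) + sum(f(a + i*h) for i in range(1, n)))

beta, mu = 1.0, 0.3   # illustrative reduced-unit values (Fermi case, upper signs)

# left side: integral of sqrt(eps) * ln(1 + e^{beta(mu - eps)})
lhs = trapezoid(lambda e: math.sqrt(e) * math.log(1.0 + math.exp(beta*(mu - e))),
                0.0, 60.0)
# right side after integration by parts: (2 beta / 3) * integral of eps^{3/2} * n_FD
rhs = (2.0*beta/3.0) * trapezoid(lambda e: e**1.5 / (math.exp(beta*(e - mu)) + 1.0),
                                 0.0, 60.0)
assert abs(lhs - rhs) < 1e-4
```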
2.5.4 Fermi and Bose Gases not in Equilibrium
We can derive the entropy of the Fermi and the Bose gases not in equilibrium and get the
Fermi and Bose distribution functions from the condition that the entropy is maximum at
equilibrium.
Let us distribute the single particle quantum states of an ideal gas among groups
containing neighbouring states or ‘cells’ as shown. Let the cells be numbered j = 1,2,…
and let the number of states in each cell be Gj and the number of particles in each cell be
Nj .
Figure 2.6 Distribution of quantum particles in cells.
The entropy of the gas can be calculated from the number of accessible states Ω of a
given microscopic state. This is the number of ways in which the given state can be
realised. Regarding each group of particles Nj as independent,
\Omega = \prod_j \Omega_j      (2.247)
Where Ωj is the number of ways of distributing Nj particles in Gj states.
2.5.5 Fermi Gas
Here Ω_j is given by the number of ways of distributing N_j particles in G_j states with not
more than one particle in each state. This is simply the number of ways of picking N_j objects
out of G_j objects. Thus,
\Omega_j = {}^{G_j}C_{N_j} = \frac{G_j!}{N_j! \, (G_j - N_j)!}      (2.248)
Therefore,
S = k \ln \Omega = k \sum_j \ln \Omega_j = k \sum_j \left[ \ln G_j! - \ln N_j! - \ln (G_j - N_j)! \right]
Using Stirling's approximation, ln N! = N ln N − N, we have
S = k \sum_j \left[ G_j \ln G_j - G_j - N_j \ln N_j + N_j - (G_j - N_j) \ln (G_j - N_j) + (G_j - N_j) \right]      (2.249)
= k \sum_j \left[ G_j \ln G_j - N_j \ln N_j - (G_j - N_j) \ln (G_j - N_j) \right]      (2.250)
The mean occupation number of the quantum states in cell j is \bar{n}_j = N_j / G_j. This gives
S = -k \sum_j G_j \left[ \bar{n}_j \ln \bar{n}_j + (1 - \bar{n}_j) \ln (1 - \bar{n}_j) \right]      (2.251)
This must be maximised subject to
\sum_j N_j = \sum_j G_j \bar{n}_j = N \quad \text{and} \quad \sum_j \varepsilon_j N_j = \sum_j \varepsilon_j G_j \bar{n}_j = E      (2.252)
Introducing Lagrange multipliers kα and kβ, this gives
\frac{\partial}{\partial \bar{n}_j} \left( S - k\alpha N - k\beta E \right) = 0      (2.253)
or
G_j \left[ \ln \frac{1 - \bar{n}_j}{\bar{n}_j} - \alpha - \beta \varepsilon_j \right] = 0      (2.254)
or
\frac{1 - \bar{n}_j}{\bar{n}_j} = e^{\alpha + \beta \varepsilon_j}      (2.255)
or
\bar{n}_j = \frac{1}{e^{\alpha + \beta \varepsilon_j} + 1}      (2.256)
This is the Fermi distribution, with β = 1/kT and α = −μ/kT.
2.5.6 Bose Gas
Here each state may contain any number of particles. The number of ways of distributing
N_j particles in G_j states is given by counting the arrangements of the particles and the
partitions between the states, as in the following figure. Thus,
\Omega_j = \frac{(N_j + G_j - 1)!}{N_j! \, (G_j - 1)!}      (2.257)
Figure 2.7: N_j particles in G_j partitions.
This gives
S = k \sum_j G_j \left[ (1 + \bar{n}_j) \ln (1 + \bar{n}_j) - \bar{n}_j \ln \bar{n}_j \right]      (2.258)
Maximising S subject to the conditions \sum_j N_j = N and \sum_j \varepsilon_j N_j = E gives
\bar{n}_j = \frac{1}{e^{\alpha + \beta \varepsilon_j} - 1}      (2.259)
= \frac{1}{e^{\beta (\varepsilon_j - \mu)} - 1}      (2.260)
This is the Bose distribution.
2.5.7 The Classical Limit
The Fermi and the Bose distributions are given by
\bar{n}_j = \frac{1}{e^{\beta (\varepsilon_j - \mu)} \pm 1}      (2.261)
Both reduce to the Maxwell-Boltzmann distribution when e^{\beta (\varepsilon_j - \mu)} >> 1. In that case,
\bar{n}_j = e^{\beta \mu} e^{-\beta \varepsilon_j}      (2.262)
The Boltzmann distribution applies when \bar{n}_j << 1. This corresponds to an extremely
rarefied gas. From the grand canonical distribution, the probability that there are n_j
particles in state j with energy ε_j is
P_{n_j} = C \, e^{\beta (\mu - \varepsilon_j) n_j}      (2.263)
Then P_0 = C is the probability that there are no particles in state j. Since \bar{n}_j << 1, C ≈ 1.
Also, P_1 = e^{\beta (\mu - \varepsilon_j)}. The probabilities P_2, P_3, \ldots, i.e. those of finding more
than one particle in the same state, can be taken to be zero. Therefore,
\bar{n}_j = \sum_{n_j} P_{n_j} n_j \approx P_1 = e^{\beta (\mu - \varepsilon_j)}      (2.264)
This is the Boltzmann distribution. Thus the classical limit corresponds to (ε_j − μ) >> kT.
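The convergence of both quantum distributions to the Boltzmann form for (ε − μ) >> kT can be seen numerically. A minimal sketch in terms of x = (ε − μ)/kT:

```python
import math

def n_fd(x): return 1.0 / (math.exp(x) + 1.0)   # Fermi-Dirac, x = (eps - mu)/kT
def n_be(x): return 1.0 / (math.exp(x) - 1.0)   # Bose-Einstein
def n_mb(x): return math.exp(-x)                # Maxwell-Boltzmann

x = 10.0   # (eps - mu) >> kT: all three occupations agree to ~e^{-x}
assert abs(n_fd(x) - n_mb(x)) / n_mb(x) < 1e-4
assert abs(n_be(x) - n_mb(x)) / n_mb(x) < 1e-4
# for x ~ 1 the quantum corrections are visible and have opposite signs
assert n_fd(1.0) < n_mb(1.0) < n_be(1.0)
```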
2.6 Summary
In this chapter, the behaviour of a large collection of particles is studied using the
methods of thermodynamics as well as those of different ensembles. The chapter begins
with probability, the random walk problem, and the associated limiting distributions.
Equilibrium and the approach to equilibrium are considered next, using the methods of
probability and by maximising the number of accessible states. Properties of the absolute
temperature, the direction of heat flow, quasi-static processes, the limiting behaviour of
the entropy, and the laws of thermodynamics are then considered. The different
ensembles, the mean values in these ensembles, and the equivalence of the ensembles are
considered in the following sections. The quantum statistics of ideal gases is then studied
using the grand canonical distribution function. Fermi and Bose statistics are considered,
and their approach to the classical limit at high temperature is shown at the end of the
chapter.
Problems
1) de Mere's Paradox: A French gambler, de Mere, observed that when 3 dice are thrown
simultaneously, the probability of the total number on the three faces turning out to be 11
(event 1) is slightly greater than that of the total being 12 (event 2). From a common
probability argument, both events should occur with the same probability. Event 1 can
occur in 6 ways through the outcomes 6,4,1; 6,3,2; 5,5,1; 5,4,2; 5,3,3 and 4,4,3. Event 2
can also occur in 6 ways as 6,5,1; 6,4,2; 6,3,3; 5,5,2; 5,4,3; 4,4,4. Therefore, the
probability of event 1 should be equal to the probability of event 2. Pascal found out the
error in this argument and showed that P(1) was indeed greater than P(2). Prove Pascal’s
assertion. (Hint: Think of the degeneracies of each of the outcomes listed above and
calculate the probabilities of events 1 and 2 considering the total number of 216
possibilities when 3 dice are thrown).
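The counting in the hint can be checked by brute-force enumeration. The following sketch (an illustrative Python snippet, not part of the original problem) tallies all 6³ = 216 ordered outcomes of three dice:

```python
from itertools import product
from collections import Counter

# Enumerate all 216 ordered outcomes of three dice and tally the sums.
sums = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))

# Permutations matter: 6,4,1 contributes 6 ordered outcomes, but 5,5,1 only 3.
print(sums[11], sums[12])            # 27 25
print(sums[11] / 216, sums[12] / 216)
print(sums[11] > sums[12])           # True: Pascal's assertion
```

The degeneracies of the listed outcomes give 27 ordered outcomes summing to 11 but only 25 summing to 12, which is precisely Pascal's point.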
2) In an experiment, there are N equally likely outcomes. Consider two events A and B.
The number of outcomes in which only the event A occurs is NA, and the number of outcomes
in which only B occurs is NB. NAB is the number of outcomes in which both A and B occur,
and N- is the number of outcomes in which neither A nor B occurs. Then, show that
(a) (i) N = NA + NB + NAB + N-, (ii) P(A) = (NA + NAB)/N and (iii) P(B) = (NB + NAB)/N,
where P(A) and P(B) are the probabilities of the occurrence of A and B respectively.
(b) Calculate P(A+B), the probability of the occurrence of either A or B.
(c) Calculate the probability P(AB), the probability of the occurrence of both A and B.
(d) Calculate the conditional probability P(A|B), the probability of the occurrence of A
given that B has occurred. Similarly, calculate P(B|A).
(e) Show that P(A+B) = P(A) + P(B) - P(AB) and that P(AB) = P(B)P(A|B)=P(A)P(B|A)
(f) Prove Bayes' theorem, which states that

P(B|A) / P(C|A) = [P(B) P(A|B)] / [P(C) P(A|C)]

A simple introduction to Bayes' theorem and its applications can be found on Wikipedia.
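The identities in parts (a)-(e) can be sanity-checked numerically on a small sample space. In the sketch below, the counts are hypothetical values chosen purely for illustration:

```python
# Hypothetical counts on N equally likely outcomes (chosen for illustration).
N_A, N_B, N_AB, N_none = 30, 20, 10, 40
N = N_A + N_B + N_AB + N_none        # part (a)(i)

P_A = (N_A + N_AB) / N               # part (a)(ii)
P_B = (N_B + N_AB) / N               # part (a)(iii)
P_AorB = (N_A + N_B + N_AB) / N      # part (b)
P_AB = N_AB / N                      # part (c)
P_A_given_B = N_AB / (N_B + N_AB)    # part (d)
P_B_given_A = N_AB / (N_A + N_AB)

# Part (e): inclusion-exclusion and the product rule hold exactly.
assert abs(P_AorB - (P_A + P_B - P_AB)) < 1e-12
assert abs(P_AB - P_B * P_A_given_B) < 1e-12
assert abs(P_AB - P_A * P_B_given_A) < 1e-12
print("identities hold")
```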
3) In a binomial distribution, the probability of n occurrences of an event of probability
p in a total of N trials is given by

W_N(n) = [N! / (n! (N − n)!)] p^n (1 − p)^(N−n)

For small p (p << 1), W_N(n) is very small except when n << N. In this small-n limit,
show that the distribution W_N(n) becomes the Poisson distribution,

W_N(n) → P(n) = (λ^n / n!) e^(−λ),

where λ = Np is the mean number of events. Is P(n) normalised? Calculate < n > and
< Δn² > for the Poisson distribution. What are some examples of a Poisson distribution?
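The limit can also be checked numerically. The sketch below (with illustrative parameter values) compares the binomial weights with the Poisson form for small p and verifies the normalisation and the mean:

```python
from math import comb, exp, factorial

# Illustrative values: N large, p small, lambda = N*p moderate.
N, p = 1000, 0.003
lam = N * p  # mean number of events, lambda = 3.0

def binomial(n):
    # Exact binomial weight W_N(n).
    return comb(N, n) * p**n * (1 - p)**(N - n)

def poisson(n):
    # Poisson limit P(n) = lambda^n e^{-lambda} / n!
    return lam**n * exp(-lam) / factorial(n)

# The two distributions agree closely for n << N.
for n in range(8):
    print(n, binomial(n), poisson(n))

# Poisson is normalised and has mean lambda (to numerical accuracy).
print(sum(poisson(n) for n in range(100)))
print(sum(n * poisson(n) for n in range(100)))
```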
4) In a random walk in one dimension, let sj be the displacement per step. After N steps
from the origin, the displacement is x = Σ_{j=1}^{N} sj. If the sj's are drawn from the
Lorentzian distribution

w(s) = (1/π) a / (s² + a²),  with a > 0,

obtain the probability distribution associated with the variable x. For large N, does the
distribution for x approach the Gaussian limit?
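A quick numerical experiment (an illustrative sketch, not part of the problem) exposes the heavy-tailed behaviour: unlike the Gaussian case, the sample mean x/N of Lorentzian steps does not sharpen as N grows, because x/N is again Lorentzian with the same width a:

```python
import random
from math import tan, pi
from statistics import median

random.seed(0)
a = 1.0  # width of the Lorentzian

def cauchy_step():
    # Inverse-CDF sampling of a Lorentzian (Cauchy) variate of width a.
    return a * tan(pi * (random.random() - 0.5))

def median_abs_mean(N, trials=2000):
    # Median of |x/N| over many independent walks of N steps each.
    return median(abs(sum(cauchy_step() for _ in range(N)) / N)
                  for _ in range(trials))

# For Gaussian-like steps this would shrink as 1/sqrt(N);
# for Lorentzian steps it stays of order a for every N.
print(median_abs_mean(10), median_abs_mean(1000))
```

The median absolute mean stays near a for both N = 10 and N = 1000, consistent with the central limit theorem failing because the Lorentzian has no finite variance.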
5) An ideal gas mixture contains NA molecules of A and NB molecules of B in a volume V.
How does the number of states Ω(E) in the range between E and E + δE depend on the
volume of the system? Compare this with the result for a one-component system.
6) A small mass m is suspended from a spring with spring constant k. The mass is acted on
by the restoring force −kx due to the spring and by gravity, with acceleration g. What is
the mean position < x > of the mass and the mean thermal fluctuation < (x − < x >)² >?
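Here the potential is harmonic, U(x) = kx²/2 + mgx, so the classical Boltzmann average gives < x > = −mg/k and, by equipartition, < (x − < x >)² > = kB T / k. The sketch below (with illustrative parameter values in arbitrary consistent units) checks this against direct numerical Boltzmann averages:

```python
from math import exp

# Illustrative values (arbitrary consistent units); kBT is k_B times T.
k, m, g, kBT = 2.0, 0.5, 9.8, 1.0

def boltzmann_average(f, lo=-20.0, hi=20.0, steps=40000):
    # <f(x)> = ∫ f(x) e^{-U(x)/kBT} dx / ∫ e^{-U(x)/kBT} dx (midpoint rule).
    dx = (hi - lo) / steps
    num = den = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        w = exp(-(0.5 * k * x * x + m * g * x) / kBT)
        num += f(x) * w
        den += w
    return num / den

x_mean = boltzmann_average(lambda x: x)
x_var = boltzmann_average(lambda x: (x - x_mean) ** 2)
print(x_mean, -m * g / k)   # both about -2.45
print(x_var, kBT / k)       # both about 0.5
```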
7) Consider a similar problem, where the external field is a magnetic field varying
linearly with the vertical distance z. A dilute solution of atoms, each of which can have
a spin of +1/2 or −1/2, is placed in such a field. Let the height of the container in
which the solution is placed be z. The magnetic field is H(z), with H(0) = 0. Calculate
n+(z) and n−(z), where n+ and n− are the mean numbers of spins pointing in the +z and −z
directions.
8) Among N molecules, if there are n1 molecules with energy ε1, n2 molecules with energy
ε2, ..., then the weight of this configuration (i.e., the number of ways in which it can
be accomplished) is given by

W(n1, n2, ...) = N! / (n1! n2! ...)

Since W is quite large, it is best to consider ln W. We are interested in the maximum of
W subject to the constraints that the total number of molecules N = Σ_i ni and the total
energy E = Σ_i εi ni are conserved. In the method of Lagrange undetermined multipliers,
the constrained maximum is found by solving for the multipliers α and β in the equation

d ln W = Σ_i [ (∂ ln W / ∂ni) + α − β εi ] dni = 0

Setting the coefficient of each dni to zero and using Stirling's approximation for ln W
(i.e., ln N! ≈ N ln N − N), show that in the optimal, or most probable, distribution

ni / N = e^(α − β εi),  with e^(−α) = q = Σ_i e^(−β εi),

where q is the partition function.
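The "most probable distribution" claim can be tested by brute force for a toy system. The sketch below (illustrative energies and totals, not part of the problem) enumerates every occupation set consistent with fixed N and E, locates the one of maximal weight W, and checks that it has the Boltzmann shape ni ∝ e^(−β εi), i.e. n1² ≈ n0·n2 for equally spaced levels:

```python
from math import lgamma

def ln_W(ns):
    # ln of the multinomial weight N! / (n0! n1! ...), computed via log-gamma.
    N = sum(ns)
    return lgamma(N + 1) - sum(lgamma(n + 1) for n in ns)

# Toy system: three equally spaced levels with energies 0, 1, 2,
# N = 30 molecules and total energy E = 20 (illustrative choices).
N_total, E_total = 30, 20
configs = []
for n2 in range(0, E_total // 2 + 1):
    n1 = E_total - 2 * n2       # fixed by the energy constraint
    n0 = N_total - n1 - n2      # fixed by the number constraint
    if n0 >= 0 and n1 >= 0:
        configs.append((n0, n1, n2))

best = max(configs, key=ln_W)
n0, n1, n2 = best
print(best)
# For a Boltzmann distribution over equally spaced levels, n1/n0 = n2/n1,
# so n1^2 / (n0 * n2) should be of order 1 at the maximum.
print(n1 ** 2 / (n0 * n2))
```

The ratio is not exactly 1 because the occupation numbers are small integers; it approaches 1 as N grows.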
9) Defining S = −k Σ_i Pi ln Pi subject to the constraint Σ_i Pi = 1, use the method of
Lagrange undetermined multipliers to show that S is a maximum when all the Pi's are equal
to a constant. Similarly, maximise S = −k Σ_i Pi ln Pi when it is subject to the two
constraints Σ_i Pi = 1 and Σ_i Pi Ei = E = constant.
10) For a free particle confined to a volume V in three dimensions, the number of
microstates with momentum p less than P is given by

Σ(P) ≈ (1/h³) ∫..∫_{p ≤ P} d³q d³p = (V/h³) (4π/3) P³

The number of microstates lying between p and p + dp, g(p) dp, is given by
(dΣ(p)/dp) dp. This can be expressed in terms of energy as well by noting that
E = p²/2m. Show that the density of states g(p) is given by (V/h³) 4π p² and that,
expressed in terms of energy, it equals (V/h³) 2π (2m)^(3/2) E^(1/2).
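The phase-space estimate Σ(P) = (V/h³)(4π/3)P³ can be checked against a direct count of quantum states. For a particle in a cubical box of side L, the levels are labelled by positive integers (nx, ny, nz) with p = (h/2L)√(nx² + ny² + nz²), so Σ(P) should equal the number of lattice points in the positive octant of a sphere of radius R = 2LP/h, namely (1/8)(4π/3)R³ = πR³/6. A small counting sketch (the count falls slightly below the volume estimate because of surface corrections of order R²):

```python
from math import pi, isqrt

def count_states(R):
    # Count positive-integer triples (nx, ny, nz) with nx^2+ny^2+nz^2 <= R^2.
    R2 = R * R
    count = 0
    for nx in range(1, R + 1):
        for ny in range(1, R + 1):
            rem = R2 - nx * nx - ny * ny
            if rem >= 1:
                count += isqrt(rem)  # nz runs from 1 to floor(sqrt(rem))
    return count

R = 50
exact = count_states(R)
estimate = pi * R ** 3 / 6  # octant-of-sphere volume, i.e. Sigma(P)
print(exact, estimate, exact / estimate)
```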
11) In this problem, you are asked to derive the equation of motion for the density
matrix. The matrix element of the density operator ρ̂(t) is defined by

ρ_mn(t) = (1/𝒩) Σ_{k=1}^{𝒩} a_m^k(t) a_n^{k*}(t),

where the summation over k runs over the 𝒩 systems of the ensemble. Here, the a_n^k(t)
are the coefficients of the time-dependent wavefunction ψ^k(t) when it is expressed as a
linear combination of a complete set of orthonormal basis functions φ_n (time
independent); i.e., ψ^k(t) = Σ_n a_n^k(t) φ_n. The time dependence is carried by the
coefficients and not by the φ_n's. The time evolution of ψ^k(t) is governed by the
time-dependent Schrödinger equation, Ĥ ψ^k(t) = iℏ ψ̇^k(t). Note the dot on ψ^k(t).
Show that the coefficients obey

iℏ ȧ_n^k(t) = Σ_m H_nm a_m^k(t).

For this, use a_n^k(t) = ∫ φ_n* ψ^k(t) dτ together with the Schrödinger equation for
ψ^k(t). The matrix element H_nm is defined as ∫ φ_n* Ĥ φ_m dτ.

Note that the density matrix element is an ensemble average, and also that Σ_n ρ_nn = 1.
Now,

ρ̇_mn(t) = (1/𝒩) Σ_{k=1}^{𝒩} { ȧ_m^k(t) a_n^{k*}(t) + a_m^k(t) ȧ_n^{k*}(t) }.

Substituting for ȧ_m^k(t) and ȧ_n^{k*}(t) from the equation above, show that

iℏ ρ̇_mn(t) = (Ĥρ̂ − ρ̂Ĥ)_mn,  or  iℏ dρ̂/dt = [Ĥ, ρ̂]−.

This is the quantum mechanical analogue of the classical Liouville equation. Note also
that the classical Poisson bracket has been replaced by the commutator (Ĥρ̂ − ρ̂Ĥ)/iℏ.
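The resulting von Neumann equation iℏ dρ̂/dt = [Ĥ, ρ̂] can be verified numerically for a two-level system. In the sketch below (units with ℏ = 1 and an illustrative diagonal Hamiltonian), the equation is integrated with a fourth-order Runge-Kutta step and compared with the exact solution, in which the off-diagonal element simply acquires a phase e^(−i(E1−E2)t):

```python
import cmath

HBAR = 1.0
E1, E2 = 0.3, 1.1                       # illustrative energy levels
H = [[E1, 0.0], [0.0, E2]]              # diagonal 2x2 Hamiltonian

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rhs(rho):
    # d(rho)/dt = (H rho - rho H) / (i hbar)  -- the von Neumann equation.
    C, D = matmul(H, rho), matmul(rho, H)
    return [[(C[i][j] - D[i][j]) / (1j * HBAR) for j in range(2)]
            for i in range(2)]

def rk4_step(rho, dt):
    add = lambda R, K, h: [[R[i][j] + h * K[i][j] for j in range(2)]
                           for i in range(2)]
    k1 = rhs(rho)
    k2 = rhs(add(rho, k1, dt / 2))
    k3 = rhs(add(rho, k2, dt / 2))
    k4 = rhs(add(rho, k3, dt))
    return [[rho[i][j] + dt / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
             for j in range(2)] for i in range(2)]

# Pure-state density matrix for the superposition (|1> + |2>)/sqrt(2).
rho = [[0.5, 0.5], [0.5, 0.5]]
t, dt = 0.0, 0.001
while t < 2.0 - 1e-9:
    rho = rk4_step(rho, dt)
    t += dt

exact_01 = 0.5 * cmath.exp(-1j * (E1 - E2) * t / HBAR)
print(rho[0][1], exact_01)              # agree closely
print(rho[0][0] + rho[1][1])            # trace is conserved (= 1)
```

Because the right-hand side is a commutator, the trace of ρ̂ (and, for this diagonal Ĥ, each population ρ_nn) is conserved exactly; only the phases of the off-diagonal elements evolve.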