Chapter 8
Microcanonical ensemble
8.1 Definition
We consider an isolated system with N particles and energy E in a volume V . By
definition, such a system exchanges neither particles nor energy with the surroundings.
In thermal equilibrium, the distribution function ρ(q, p) of the system is a function of its
energy:
ρ(q, p) = ρ(H(q, p)).
According to the ergodic hypothesis and the postulate of a priori equal probabilities, ρ is
constant over an energy surface and defines the microcanonical ensemble:

$$\rho(q,p) = \begin{cases} \dfrac{1}{\Gamma(E,V,N)} & \text{if } E < H(q,p) < E+\Delta, \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where
$$\Gamma(E,V,N) = \int d^{3N}q \int d^{3N}p\; \Theta(E < H(q,p) < E+\Delta) = \iint\limits_{E<H(q,p)<E+\Delta} d^{3N}q\; d^{3N}p \cdot 1 \qquad (8.1)$$
is the volume occupied by the microcanonical ensemble.
This is the volume of the shell bounded by the two energy surfaces with energies E and E + ∆ (see Fig. 8.1).
In Γ(E, V, N), eq. (8.1), the dependence on the spatial volume V comes from the limits of the integration over the $dq_i$.

Figure 8.1: Volume Γ(E, V, N) in phase space.

We denote by Φ(E, V) the volume of phase space enclosed by the energy surface E (Fig. 8.1). Then,
Γ(E) = Φ(E + ∆) − Φ(E)
and
$$\Gamma(E) = \frac{\partial \Phi(E)}{\partial E}\,\Delta \equiv \Omega(E)\cdot\Delta$$
for $\Delta \ll E$.
The quantity Ω(E),
$$\Omega(E) = \frac{\partial \Phi(E)}{\partial E} = \int d^{3N}q \int d^{3N}p\; \delta(E - H(q,p))\,,$$
is the density of states at energy E. Now, in terms of Ω(E), ρ is given as

$$\rho(q,p) = \begin{cases} \dfrac{1}{\Omega(E)\,\Delta} & \text{for } E < H(q,p) < E+\Delta, \\[4pt] 0 & \text{otherwise.} \end{cases}$$
8.2 Entropy
In this section, we want to explore the concept of entropy in classical statistical physics.
The average value of any classical observable O(q, p) can be obtained by averaging over
the probability density ρ(q, p) of the microcanonical ensemble:
$$\langle O \rangle = \int d\Gamma\; \rho(q,p)\, O(q,p) = \frac{1}{\Gamma(E,V,N)} \iint\limits_{E<H(q,p)<E+\Delta} d^{3N}q\; d^{3N}p\; O(q,p).$$
BUT, there exists a macroscopic thermodynamic variable that cannot be obtained as an
average of a classical observable. This variable is the entropy.
According to Boltzmann, the entropy is defined to be proportional to the logarithm of
the number of available states, as measured by the phase space volume Γ:
$$S(E,V) = k_B \ln \Gamma(E,V,N)\,. \qquad (8.2)$$
Since Γ has arbitrary units, S is defined up to an arbitrary additive constant. In order to
take into account the arbitrary constant, we write S as
$$S = k_B \ln \frac{\Gamma(E,V,N)}{\Gamma_0(N)}\,.$$
The constant $\Gamma_0(N)$ introduced above has dimensions of $(p\cdot q)^{3N}$, so that the argument of the logarithm is dimensionless.
In classical statistics, there is no way to determine Γ0 (N ), while in quantum statistics it
is given by
$$\Gamma_0(N) = h^{3N}\, N!\,, \quad \text{with } h = \text{Planck constant}. \qquad (8.3)$$
We adopt this value also in classical statistics: $h^{3N}$ defines the reference measure in phase space, and $N!$ is the counting factor for states obtained by permuting particles.
→ The factor N! arises from the fact that particles are indistinguishable. Even though one may be in a temperature and density range where the motion of molecules can be treated to a very good approximation by classical mechanics, one cannot go so far as to disregard the essential indistinguishability of the molecules; one cannot observe and label individual atomic particles as though they were macroscopic billiard balls. Without the factor N! we are faced with the Gibbs paradox (see section 8.4).
Using expression (8.3), we now define the microcanonical ensemble as
$$\bar\rho(q,p) = h^{3N} N!\;\rho(q,p) = \begin{cases}\dfrac{h^{3N} N!}{\Gamma} & \text{for } E < H(q,p) < E+\Delta,\\[4pt] 0 & \text{otherwise,}\end{cases}$$
and formally define the entropy as the ensemble average of the logarithm of the phase space density:
$$S = \left\langle -k_B \ln \bar\rho(q,p)\right\rangle = -k_B \int d\Gamma\;\rho(q,p)\,\ln\bar\rho(q,p) = -k_B\,\frac{1}{\Gamma}\,\ln\frac{h^{3N}N!}{\Gamma}\;\underbrace{\iint\limits_{E<H<E+\Delta} d^{3N}q\;d^{3N}p}_{=\,\Gamma} = k_B\ln\frac{\Gamma}{\Gamma_0}\,,$$
and we recover the definition given above as a function of the phase space volume.
The definition of entropy is equivalent to
$$S = k_B \ln(\text{Multiplicity})\,.$$
Since we have introduced the entropy definition in an ad hoc way, we need to convince ourselves that it describes the thermodynamic state function entropy; it must therefore fulfill the requirements of
1) additivity;
2) consistency with the second law of thermodynamics;
3) adiabatic invariance.
1) Additivity
Consider two statistically independent isolated systems with the distributions ρ1 and ρ2 ,
respectively. They occupy volumes Γ1 and Γ2 in their phase spaces. The total volume
occupied by the systems is the product of their individual volumes since the probability
of two independent events is the product of the individual probabilities. Then, the total
entropy is
$$S = k_B \ln(\Gamma_1\,\Gamma_2) = k_B \ln\Gamma_1 + k_B \ln\Gamma_2 = S_1 + S_2\,.$$
However, when two systems are brought together in thermal equilibrium, the entropy is only approximately additive. It can be shown that
$$S(E,V,N) = S_1(\hat E_1, V_1, N_1) + S_2(\hat E_2, V_2, N_2) + O(\ln N), \qquad (8.4)$$
where $\hat E_1 + \hat E_2 = E$ and $\hat E_i$ is the value at which the function
$$f(E_1) = \ln\Gamma_1(E_1) + \ln\Gamma_2(E - E_1)$$
has a maximum (for the derivation, see Nolting, "Grundkurs Theoretische Physik 6", page 41).

Figure 8.2: The wall between the two systems allows for heat exchange but not for particle exchange.
2) Consistency with the second law of thermodynamics
We would like to confirm here that the statistical entropy, as it is defined above, fulfills
the second law of thermodynamics, which reads:
⋆ if an isolated system undergoes a process between two states at equilibrium, the
entropy of the final state cannot be smaller than that of the initial state.
As an example, let us look at the free expansion of an ideal gas:
During the free expansion (irreversible process) there is no heat transfer,
$$\Delta Q = 0, \qquad \Delta T = 0,$$
therefore, according to the second law,
$$\Delta S = nR\,\ln\frac{V_2}{V_1} > 0.$$

⋆ The second law is equivalent to saying that, when dynamical constraints are eliminated, the entropy rises.
In the case of the ideal-gas free expansion, eliminating dynamical constraints meant increasing the volume. The phase space volume is a monotonically increasing function of the configuration volume V and, since S is by definition a monotonically increasing (logarithmic) function of the phase space volume (eq. (8.2)), the statistical S increases when dynamical constraints are eliminated.
3) Adiabatic invariance of the entropy
⋆ The entropy does not change if the energy of the system is changed only by performing work: if $E \to E + dE$ with $dE = \delta W$, then $dS = 0$. Indeed,
$$\delta Q = dE - \delta W = 0, \qquad dS = \frac{\delta Q}{T} = 0 \quad\Rightarrow\quad S = \text{constant}. \qquad (8.5)$$
The statistical definition of S,
$$S = k_B \ln\frac{\Gamma}{\Gamma_0}\,,$$
fulfills this requirement if the phase space volume does not change in the process.
⇒ The phase space volume does not change under such a process: it is an adiabatic invariant.
The proof of statements 2) and 3) will be given in class.
8.3 Calculations with the microcanonical ensemble
In order to perform calculations in statistical physics one proceeds through the following
steps.
1) Formulation of the Hamilton function
$$H(q,p) = H(q_1, \ldots, q_{3N}, p_1, \ldots, p_{3N}, z),$$
where z is some external parameter, e.g., the volume V. H(q, p) specifies the microscopic interactions.
2) Determination of the phase space volume Γ(E, V, N) and calculation of the density of states Ω(E, V, N):
$$\Omega(E,V,N) = \int d^{3N}q \int d^{3N}p\;\delta(E - H(q,p)).$$
3) Calculation of the entropy from the density of states:
$$S(E,V,N) = k_B \ln\frac{\Omega(E,V,N)\,\Delta}{\Gamma_0}\,.$$
4) Calculation of P, T, µ:
$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N}\,, \qquad -\frac{\mu}{T} = \left(\frac{\partial S}{\partial N}\right)_{E,V}\,, \qquad \frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{E,N}\,.$$
5) Calculation of the internal energy:
$$U = \langle H\rangle = E(S,V,N).$$
6) Calculation of other thermodynamical potentials and their derivatives by application
of the Legendre transformation:
F (T, V, N ) = U − T S,
H(S, P, N ) = U + P V,
G(T, P, N ) = U + P V − T S.
7) One can calculate quantities other than the thermodynamic potentials, for instance probability distribution functions of certain properties of the system, e.g., momentum/velocity distribution functions. If the phase space density of a system of N particles is given by
$$\rho(q,p) = \rho(\vec q_1, \ldots, \vec q_N, \vec p_1, \ldots, \vec p_N),$$
then the probability density of finding particle i with momentum $\vec p$ is
$$\rho_i(\vec p) = \left\langle\delta^3(\vec p - \vec p_i)\right\rangle = \int d^3q_1 \ldots d^3q_N\; d^3p_1 \ldots d^3p_N\;\rho(\vec q_1, \ldots, \vec q_N, \vec p_1, \ldots, \vec p_i, \ldots, \vec p_N)\;\delta^3(\vec p - \vec p_i).$$
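As an illustration of steps 3)–5), here is a minimal numerical sketch (not part of the original notes) that assumes a hypothetical power-law density of states $\Omega(E) = E^\alpha$ in reduced units and recovers the temperature by finite differences:

```python
import numpy as np

# Minimal sketch of steps 3)-5), assuming a hypothetical power-law
# density of states Omega(E) = E**alpha, in reduced units where
# k_B = Delta = Gamma_0 = 1. (For the ideal gas of section 8.4 one
# would have alpha = 3N/2 - 1.)
alpha = 1.5

def S(E):
    # Step 3: S = k_B ln(Omega(E) * Delta / Gamma_0) = alpha * ln E here
    return alpha * np.log(E)

def T(E, h=1e-6):
    # Step 4: 1/T = (dS/dE)_{V,N}, via a central finite difference
    return 1.0 / ((S(E + h) - S(E - h)) / (2 * h))

E = 2.0
print(T(E), E / alpha)   # for Omega ~ E**alpha, step 5 gives E = alpha * k_B * T
```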
8.4 The classical ideal gas
We consider now the steps given in section 8.3 in order to analyze an ideal gas of N
particles with mass m in a volume V . The Hamiltonian function of such a system is
$$H(q_i, p_i) = \sum_{i=1}^{N}\frac{\vec p_i^{\;2}}{2m}\,,$$
and the total energy lies in the shell
$$E < H < E + \Delta.$$
Let us first calculate the phase space volume Γ(E, V, N ), which is the volume enclosed
within the energy hypersurfaces E and E + ∆ and the container walls (section 8.1.). It is
given as
$$\Gamma(E,V,N) = \Phi(E+\Delta) - \Phi(E) = \frac{\partial\Phi}{\partial E}\,\Delta \equiv \Omega(E)\cdot\Delta,$$
where
$$\Phi(E) = \iint\limits_{\sum_{i=1}^{3N} p_i^2\,\le\,2mE} d^{3N}q\; d^{3N}p = V^N \int\limits_{\sum_{i=1}^{3N} p_i^2\,\le\,2mE} d^{3N}p. \qquad (8.6)$$
The last integral in eq. (8.6) gives the volume of a 3N-dimensional sphere with radius $\sqrt{2mE}$. In order to evaluate it, we consider first the volume $V_N(R)$ of an N-dimensional sphere with radius R, which we can write as
$$V_N(R) = \int\limits_{\sum_{i=1}^{N} x_i^2\,<\,R^2} dx_1 \ldots dx_N = R^N \underbrace{\int\limits_{\sum_{i=1}^{N} y_i^2\,<\,1} dy_1 \ldots dy_N}_{C_N}\,,$$
with $C_N$ being the volume of the N-dimensional sphere with radius 1. In order to calculate $C_N$, we use the following well-known integrals:
$$\int_{-\infty}^{+\infty} dx\; e^{-x^2} = \sqrt{\pi} \qquad (8.7)$$
and, as an extension of (8.7) to N dimensions,
$$\int_{-\infty}^{+\infty} dx_1 \ldots \int_{-\infty}^{+\infty} dx_N\; e^{-(x_1^2 + \ldots + x_N^2)} = \pi^{N/2}. \qquad (8.8)$$
Since the integrand in (8.8) depends only on $R = \sqrt{x_1^2 + \ldots + x_N^2}$, it proves advantageous to switch to spherical coordinates, where we have
$$dx_1 \ldots dx_N = dV_N(R) = N R^{N-1} C_N\, dR.$$
Then, eq. (8.8) can be written as
$$N C_N \int_0^{\infty} R^{N-1}\, dR\; e^{-R^2} = \pi^{N/2}.$$
We now use this result, rewriting it as
$$\frac{1}{2}\, N C_N \int_0^{\infty} dx\; x^{\frac{N}{2}-1} e^{-x} = \pi^{N/2} \qquad (R^2 = x).$$
Recognizing the Γ-function,
$$\Gamma(z) = \int_0^{\infty} dx\; x^{z-1} e^{-x},$$
we get the expression for $C_N$:
$$C_N = \frac{\pi^{N/2}}{\frac{N}{2}\,\Gamma\!\left(\frac{N}{2}\right)}\,.$$
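As a quick sanity check (not in the notes), this formula reproduces the familiar values $C_2 = \pi$ and $C_3 = 4\pi/3$, and agrees with a Monte Carlo estimate of the unit-sphere volume:

```python
import math, random

def C(N):
    # C_N = pi^(N/2) / [ (N/2) * Gamma(N/2) ], the unit N-sphere volume
    return math.pi ** (N / 2) / ((N / 2) * math.gamma(N / 2))

print(C(2), math.pi)          # disk area:   pi
print(C(3), 4 * math.pi / 3)  # ball volume: 4*pi/3

# Monte Carlo check: fraction of the cube [-1, 1]^N lying inside the sphere
def C_mc(N, samples=200_000):
    hits = sum(
        sum(random.uniform(-1, 1) ** 2 for _ in range(N)) <= 1.0
        for _ in range(samples)
    )
    return 2.0 ** N * hits / samples

print(C(4), C_mc(4))          # both ~ pi^2/2 = 4.93
```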
Going back to Φ(E), eq. (8.6), we now have
$$\Phi(E) = V^N C_{3N}\left(\sqrt{2mE}\right)^{3N},$$
from which we obtain the following expression for the density of states Ω(E):
$$\Omega(E) = \frac{\partial\Phi}{\partial E} = V^N C_{3N}\,(2mE)^{\frac{3N}{2}-1}\cdot\frac{3N}{2}\cdot 2m\,.$$
Entropy
We calculate the entropy. For large N we can replace the shell volume between E and
E + ∆ by the volume enclosed by the surface of energy E (see the exercises):
$$S(E,V,N) = k_B \ln\frac{\Phi(E)}{h^{3N}\,N!} = k_B \ln\!\left[\frac{V^N\,C_{3N}\left(\sqrt{2mE}\right)^{3N}}{h^{3N}\,N!}\right]. \qquad (8.9)$$
It is easy to check that the argument of the logarithm in (8.9) is dimensionless, as it should be.
For $N \gg 1$, one may use the Stirling formula,
$$\ln\Gamma(n) \approx (n-1)\ln(n-1) - (n-1) \approx n\ln n - n,$$
in order to simplify the expression for S. We perform the following algebraic transformations on $C_{3N}$:
$$C_{3N} = \frac{\pi^{3N/2}}{\frac{3N}{2}\,\Gamma\!\left(\frac{3N}{2}\right)}\,,$$
$$\ln C_{3N} = \frac{3N}{2}\ln\pi - \ln\!\left[\frac{3N}{2}\,\Gamma\!\left(\frac{3N}{2}\right)\right] = \frac{3N}{2}\ln\pi - \frac{3N}{2}\ln\frac{3N}{2} + \frac{3N}{2} = N\left[\ln\left(\frac{2\pi}{3N}\right)^{3/2} + \frac{3}{2} + O\!\left(\frac{\ln N}{N}\right)\right],$$
where we used Γ(n + 1) = nΓ(n). We insert this expression in eq. (8.9) and obtain:
$$S = k_B N\left[\ln\frac{V\,(2mE)^{3/2}}{h^3} + \ln\left(\frac{2\pi}{3N}\right)^{3/2} + \frac{3}{2} - \underbrace{(\ln N - 1)}_{\ln N!/N}\right]$$
and, finally,
$$S = k_B N\left\{\ln\!\left[\left(\frac{4\pi m E}{3h^2 N}\right)^{3/2}\frac{V}{N}\right] + \frac{5}{2}\right\} \qquad\longrightarrow\qquad \text{Sackur-Tetrode equation.}$$
Now we can differentiate the above expression and thus obtain
- the caloric equation of state for the ideal gas:
$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N} = \frac{3}{2}\,N k_B\cdot\frac{1}{E} \quad\Rightarrow\quad E = \frac{3}{2}\,N k_B T = U;$$
- the thermal equation of state for the ideal gas:
$$\frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{E,N} = \frac{k_B N}{V} \quad\Rightarrow\quad P V = N k_B T.$$
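Both equations of state follow from differentiating the Sackur-Tetrode entropy; the sketch below (illustrative only, with hypothetical parameter values for a gas of protons) checks them numerically:

```python
import numpy as np

k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J s
m   = 1.67262192e-27  # kg (proton mass)

def S(E, V, N):
    # Sackur-Tetrode: S = k_B N { ln[(4 pi m E / (3 h^2 N))^{3/2} V/N] + 5/2 }
    return k_B * N * (np.log((4 * np.pi * m * E / (3 * h**2 * N))**1.5 * V / N) + 2.5)

N, V, T = 1e22, 1e-3, 300.0     # hypothetical: 1e22 particles in one litre
E = 1.5 * N * k_B * T           # caloric equation of state

dE, dV = 1e-6 * E, 1e-6 * V
T_num = 2 * dE / (S(E + dE, V, N) - S(E - dE, V, N))            # 1/T = dS/dE
P_num = T_num * (S(E, V + dV, N) - S(E, V - dV, N)) / (2 * dV)  # P/T = dS/dV

print(T_num)                   # ~ 300 K
print(P_num, N * k_B * T / V)  # both ~ N k_B T / V
```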
Gibbs Paradox
If we hadn't considered the factor N! when working out the entropy, one would obtain
$$S_{\text{classical}} = S + k_B \ln N! = k_B N\left\{\ln\!\left[\left(\frac{4\pi m E}{3h^2 N}\right)^{3/2} V\right] + \frac{3}{2}\right\}.$$
With this definition, the entropy is non-additive, i.e., the extensivity relation
$$S(E,V,N) = N\, s\!\left(\frac{E}{N}, \frac{V}{N}\right)$$
is not fulfilled. This was realized by Gibbs, who introduced the factor N ! and attributed
it to the fact that the particles are indistinguishable.
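The paradox is easy to exhibit numerically. In the following sketch (an assumed example, using the same hypothetical gas as above), doubling the system (E, V, N → 2E, 2V, 2N) doubles the entropy only when the N! factor is included; without it, a spurious extra term $2k_B N \ln 2$ appears:

```python
import numpy as np

k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J s
m   = 1.67262192e-27  # kg

def S_with(E, V, N):     # Sackur-Tetrode, with the N! factor (extensive)
    return k_B * N * (np.log((4 * np.pi * m * E / (3 * h**2 * N))**1.5 * V / N) + 2.5)

def S_without(E, V, N):  # "classical" entropy without N!
    return k_B * N * (np.log((4 * np.pi * m * E / (3 * h**2 * N))**1.5 * V) + 1.5)

E, V, N = 62.0, 1e-3, 1e22          # hypothetical values
for S in (S_with, S_without):
    # compare the doubled system with twice the single-system entropy
    print(S.__name__, S(2*E, 2*V, 2*N) - 2 * S(E, V, N))
# S_with:    ~ 0                      (additive)
# S_without: ~ 2 k_B N ln 2 ~ 0.19 J/K (non-additive)
```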
8.5 Harmonic oscillators
Consider a system of N classical distinguishable one-dimensional harmonic oscillators with frequency ω under microcanonical conditions. The oscillators can be treated as distinguishable if one assumes that they are localized in real space. The Hamiltonian function of this system is given by
$$H(q_i, p_i) = \sum_{i=1}^{N}\left(\frac{p_i^2}{2m} + \frac{1}{2}\,m\omega^2 q_i^2\right).$$
The normalized phase space volume Σ(E, V, N) is
$$\Sigma(E,V,N) = \frac{1}{h^N}\int\limits_{H(q_i,p_i)\,\le\,E} d^N q\; d^N p\,; \qquad (8.10)$$
note that in eq. (8.10) we didn't include the factor N!, since our harmonic oscillators are distinguishable. We express eq. (8.10) in terms of $x_i = m\omega q_i$:
$$\Sigma(E,V,N) = \frac{1}{h^N}\left(\frac{1}{m\omega}\right)^{N}\int\limits_{\sum_{i=1}^{N}\left(p_i^2 + x_i^2\right)\,\le\,2mE} d^N x\; d^N p.$$
The resulting integral corresponds to a 2N-dimensional sphere of radius $\sqrt{2mE}$. Therefore,
$$\Sigma(E,V,N) = \frac{1}{h^N}\left(\frac{1}{m\omega}\right)^{N}\frac{\pi^N}{N\,\Gamma(N)}\,(2mE)^N = \frac{1}{N\,\Gamma(N)}\left(\frac{E}{\hbar\omega}\right)^{N}.$$
The entropy is then
$$S(E,V,N) = N k_B\left(1 + \ln\frac{E}{N\hbar\omega}\right), \qquad (8.11)$$
where we used, as in the previous example, the Stirling formula.
Discussion:
1 - Even though we are considering classical harmonic oscillators, the factor $E/(N\hbar\omega)$ establishes the relation between the energy per particle and the characteristic energy $\hbar\omega$ of the harmonic oscillator (where ħ comes from the phase space volume unit, here $h^N$).

2 - It is typical for many systems that their thermodynamic properties depend on the ratios between the total energy and the characteristic energies of the system.
From eq. (8.11), we obtain the caloric and thermal equations of state for the system of
oscillators:
$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{V,N} = N k_B\,\frac{1}{E} \quad\Rightarrow\quad E = U = N k_B T,$$
$$\frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{E,N} = 0 \quad\Rightarrow\quad P = 0.$$
The absence of pressure (P = 0) is a consequence of the fact that the oscillators are
localized: they cannot move freely and, as a result, don’t create any pressure. S does not
depend on the volume of the container. From
$$-\frac{\mu}{T} = \left(\frac{\partial S}{\partial N}\right)_{E,V} = k_B \ln\frac{E}{N\hbar\omega}$$
and
$$C_P = C_V = \left(\frac{\partial E}{\partial T}\right)_{V} = N k_B$$
we see that CP = CV since the system cannot produce work.
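These relations can again be checked by differentiating eq. (8.11) numerically; a short sketch in reduced units (an illustration, with $k_B = \hbar\omega = 1$):

```python
import numpy as np

def S(E, N):
    # Eq. (8.11) in units with k_B = hbar*omega = 1
    return N * (1 + np.log(E / N))

N, E = 1000.0, 2500.0
dE, dN = 1e-6 * E, 1e-6 * N

T  = 2 * dE / (S(E + dE, N) - S(E - dE, N))         # 1/T   = dS/dE
mu = -T * (S(E, N + dN) - S(E, N - dN)) / (2 * dN)  # -mu/T = dS/dN

print(T, E / N)                # caloric equation: T = E/(N k_B)
print(mu, -T * np.log(E / N))  # mu = -k_B T ln(E/(N hbar omega))
```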
8.6 Fluctuations and correlation functions
Our starting point is a classical observable B(q, p),
$$B(q,p) = B(q_1, \ldots, q_{3N}, p_1, \ldots, p_{3N}),$$
which is measured M times, so that for each measurement $l = 1, \ldots, M$ we have the information about the state, $(q^{(l)}, p^{(l)})$, and the value of B, $B(q^{(l)}, p^{(l)})$.
Out of these measurements we can calculate the distribution function ρM (q, p) in phase
space as
$$\rho_M(q,p) = \frac{1}{M}\sum_{l=1}^{M}\delta^{(3N)}\!\left(q - q^{(l)}\right)\delta^{(3N)}\!\left(p - p^{(l)}\right).$$
Following the statistical postulate of equal a priori probabilities, for an isolated system in thermodynamical equilibrium with $E < H < E + \Delta$ we will have
$$\lim_{M\to\infty}\rho_M(q,p) = \rho(q,p) = \begin{cases}\dfrac{1}{\Gamma(E)} & \text{if } E < H(q,p) < E+\Delta,\\[4pt] 0 & \text{otherwise.}\end{cases}$$
By knowing ρ(q, p), we can obtain the following statistical quantities.
• Average value of B(q, p):
$$\langle B(q,p)\rangle = \frac{1}{M}\sum_{l=1}^{M} B\!\left(q^{(l)}, p^{(l)}\right) = \int_\Gamma d^{3N}q\; d^{3N}p\;\rho_M(q,p)\, B(q,p).$$
• Variance:
$$(\Delta B)^2 = \left\langle(B - \langle B\rangle)^2\right\rangle = \langle B^2\rangle - \langle B\rangle^2 = \sigma_B^2\,.$$
The variance provides information about the fluctuation effects around the average
value.
• Moments of higher order,
$$\mu_n = \langle B^n\rangle\,,$$
are calculated by making use of the characteristic function
$$\varphi_B(\kappa) = \left\langle e^{i\kappa B}\right\rangle.$$
The characteristic function of any random variable B completely defines its probability distribution:
$$\varphi_B(\kappa) = \left\langle e^{i\kappa B}\right\rangle = \int d^{3N}q\; d^{3N}p\;\rho_M(q,p)\, e^{i\kappa B},$$
so that
$$\mu_n = \frac{1}{i^n}\,\frac{d^n}{d\kappa^n}\,\varphi_B(\kappa)\Big|_{\kappa=0} \quad\Rightarrow\quad \varphi_B(\kappa) = \sum_{n=0}^{\infty}\frac{(i\kappa)^n}{n!}\,\mu_n\,. \qquad (8.12)$$
• Cumulants. If, instead of expanding $\varphi_B(\kappa)$ in powers of κ, we expand the logarithm of $\varphi_B(\kappa)$ in powers of κ, we obtain
$$\ln\varphi_B(\kappa) = \sum_{n=1}^{\infty}\zeta_n\,\frac{(i\kappa)^n}{n!} = \mu_1\, i\kappa - \sigma^2\,\frac{\kappa^2}{2} + \ldots\,; \qquad (8.13)$$
the coefficients $\zeta_n$,
$$\zeta_1 = \mu_1 = \langle B\rangle\,, \qquad \zeta_2 = \sigma^2 = \langle B^2\rangle - \langle B\rangle^2\,,$$
are called cumulants.
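A brief numerical sketch (not in the notes) of these definitions: estimate the first two moments of a sampled observable B and check the cumulant relations $\zeta_1 = \mu_1$ and $\zeta_2 = \mu_2 - \mu_1^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.exponential(scale=2.0, size=100_000)  # samples of some observable B

mu1 = B.mean()           # mu_1 = <B>
mu2 = (B**2).mean()      # mu_2 = <B^2>

zeta1 = mu1              # first cumulant:  zeta_1 = mu_1 = <B>
zeta2 = mu2 - mu1**2     # second cumulant: zeta_2 = sigma^2

# For an exponential distribution with scale 2: <B> = 2, sigma^2 = 4
print(zeta1, zeta2)
```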
Probability distribution of the observable B
Consider the probability that a measurement of the observable B gives a value between b and b + ∆b, which is given as
$$\rho(b)\,\Delta b = \int\limits_{b<B(q,p)<b+\Delta b} d\Gamma\;\rho(q,p) = \int d\Gamma\;\rho(q,p)\left[\Theta(b + \Delta b - B) - \Theta(b - B)\right],$$
where Θ(x) is the step function. For ∆b → 0,
$$\Theta(b + \Delta b - B) = \Theta(b - B) + \delta(b - B)\,\Delta b \quad\Rightarrow\quad \rho(b) = \int d\Gamma\;\rho(q,p)\,\delta(b - B(q,p)) = \left\langle\delta(b - B(q,p))\right\rangle.$$
From this expression we can define the average value of B as
$$\langle B\rangle = \int db\; b\,\rho(b).$$
Correlations between observables
Suppose that we want to analyze whether two observables A and B are statistically independent. For that purpose, we define the correlation function $C_{AB}$ as
$$C_{AB} = \left\langle(A - \langle A\rangle)(B - \langle B\rangle)\right\rangle = \int da\int db\;\rho(a,b)\,(a - \langle A\rangle)(b - \langle B\rangle),$$
where the distribution function ρ(a, b) describes the mutual probability density of the observables A and B and is given by
$$\rho(a,b) = \left\langle\delta(a - A)\,\delta(b - B)\right\rangle = \int_\Gamma d^{3N}q\; d^{3N}p\;\rho(q,p)\,\delta(a - A(q,p))\,\delta(b - B(q,p)).$$
If A and B are statistically independent, then
$$\rho(a,b) = \rho_A(a)\,\rho_B(b)$$
and therefore the correlation function in this case is zero:
$$C_{AB} = \left\langle(A - \langle A\rangle)(B - \langle B\rangle)\right\rangle = \left\langle A - \langle A\rangle\right\rangle\left\langle B - \langle B\rangle\right\rangle = 0.$$
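As an assumed numerical example (not in the notes), sampled independent observables give $C_{AB} \approx 0$, while an observable built partly out of A does not:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000
A = rng.normal(size=M)
B_indep = rng.normal(size=M)                 # statistically independent of A
B_corr = 0.8 * A + 0.6 * rng.normal(size=M)  # built partly from A

def C(X, Y):
    # C_XY = <(X - <X>)(Y - <Y>)>
    return np.mean((X - X.mean()) * (Y - Y.mean()))

print(C(A, B_indep))  # ~ 0, up to sampling noise
print(C(A, B_corr))   # ~ 0.8
```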
8.7 The central limit theorem
Consider M statistically independent observables $B_m(q,p)$. The central limit theorem states that the probability density of the observable
$$B(q,p) = \sum_{m=1}^{M} B_m(q,p)$$
will show a normal distribution centered at $\langle B\rangle$, with a relative width proportional to $1/\sqrt{M}$.
8.7.1 Normal distribution
An observable Y is said to show a normal distribution if the probability distribution function $\rho(y) = \langle\delta(y - Y)\rangle$ has the form of a Gauss curve:
$$\rho(y) = \langle\delta(y - Y)\rangle = \frac{1}{\sqrt{2\pi\sigma^2}}\; e^{-\frac{(y - y_0)^2}{2\sigma^2}}\,,$$
with
$$\langle Y\rangle = \int_{-\infty}^{+\infty} dy\;\rho(y)\, y = y_0\,, \qquad \left\langle(Y - \langle Y\rangle)^2\right\rangle = \int_{-\infty}^{+\infty} dy\;\rho(y)\,(y - y_0)^2 = \sigma^2,$$
$$\left\langle(Y - y_0)^{2n}\right\rangle = \frac{(2n)!}{2^n\, n!}\,\sigma^{2n}, \qquad \left\langle(Y - y_0)^{2n+1}\right\rangle = 0, \qquad n = 1, 2, \ldots$$
The corresponding characteristic function is also a Gauss function:
$$f(\kappa) = \left\langle e^{i\kappa Y}\right\rangle = \int_{-\infty}^{+\infty} dy\; e^{i\kappa y}\,\rho(y) = e^{i\kappa y_0}\, e^{-\frac{1}{2}\kappa^2\sigma^2}.$$
We consider now the central limit. We define the variable Y as
$$Y \equiv \frac{1}{\sqrt{M}}\,(X_1 + X_2 + \ldots + X_M), \qquad (8.14)$$
where $X_1, X_2, \ldots, X_M$ are M mutually independent variables such that $\langle X_i\rangle = 0$. The characteristic function of Y is then
$$\left\langle e^{-i\kappa Y}\right\rangle = \left\langle e^{-i\kappa(X_1 + X_2 + \ldots + X_M)/\sqrt{M}}\right\rangle = \left\langle\prod_{j=1}^{M} e^{-i\kappa X_j/\sqrt{M}}\right\rangle = \prod_{j=1}^{M}\left\langle e^{-i\kappa X_j/\sqrt{M}}\right\rangle = \exp\left[\sum_{j=1}^{M} A_j\!\left(\kappa/\sqrt{M}\right)\right], \qquad (8.15)$$
with
$$A_j\!\left(\kappa/\sqrt{M}\right) \equiv \ln\left\langle e^{-i\kappa X_j/\sqrt{M}}\right\rangle. \qquad (8.16)$$
If M is a large number, $A_j(\kappa/\sqrt{M})$ can be expanded as
$$A_j\!\left(\kappa/\sqrt{M}\right) \simeq A_j(0) + \frac{\kappa^2}{2M}\,A_j''(0) + \frac{\kappa^3}{M}\, O(M^{-1/2}). \qquad (8.17)$$
Here, the first-order term in κ is 0 because it is proportional to $\langle X_j\rangle = 0$. The quadratic term is obtained by differentiating eq. (8.16) twice:
$$A_j''(0) = -\left\langle X_j^2\right\rangle.$$
Now, substituting eq. (8.17) into eq. (8.15), one has
$$\left\langle e^{-i\kappa Y}\right\rangle = \exp\left[-\frac{1}{2}\,\kappa^2\sigma^2 + O\!\left(\frac{1}{\sqrt{M}}\right)\right], \qquad (8.18)$$
where
$$\sigma^2 \equiv \frac{1}{M}\sum_{j=1}^{M}\left\langle X_j^2\right\rangle.$$
The quadratic term in the exponent (eq. (8.18)) is the central limit term. We have thus shown that Y is normally distributed if we neglect terms of $O(1/\sqrt{M})$ (compare eqs. (8.18) and (8.14)).
If $\langle X_j\rangle \neq 0$, then
$$\langle Y\rangle = \frac{1}{\sqrt{M}}\sum_{j=1}^{M}\langle X_j\rangle\,,$$
and $Y - \langle Y\rangle$ is normally distributed.

Remarks:
1) As Y is a normal distribution with width σ, $Y/\sqrt{M}$ is also a normal distribution, with width $\sigma/\sqrt{M}$. The variable
$$\frac{Y}{\sqrt{M}} = \frac{X_1 + X_2 + \ldots + X_M}{M}$$
can be regarded as the mean value of M different $X_i$'s. If M is large, $\sigma/\sqrt{M}$ is small, which means that the mean value differs very little from the average value, i.e., that fluctuations are small. If we increase the number of variables M, the fluctuations will increase only as $\sqrt{M}$ and not as M.
2) For our M statistically independent observables $B_m(q,p)$,
$$B(q,p) = \sum_{m=1}^{M} B_m(q,p),$$
with the average value
$$\langle B\rangle = \sum_{m=1}^{M}\langle B_m\rangle\,,$$
the distribution function is
$$\rho(b) = \left\langle\delta\!\left(b - \sum_{m=1}^{M} B_m\right)\right\rangle \;\overset{M\to\infty}{\sim}\; \frac{1}{\sqrt{2\pi\sigma^2 M}}\; e^{-\frac{Y^2}{2\sigma^2}}\,,$$
where we defined
$$Y = \frac{B - \langle B\rangle}{\sqrt{M}}\,.$$

Figure 8.3: The width is proportional to $1/\sqrt{M}$; $y_0 = \langle B\rangle$.

Note that the normal distribution of Y is limited to the calculation of average values of lower powers of Y.
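Both remarks are easy to verify by simulation; the following sketch (illustrative) builds Y as in eq. (8.14) from zero-mean uniform variables and compares its histogram with the corresponding Gauss curve:

```python
import numpy as np

rng = np.random.default_rng(2)
M, trials = 400, 50_000

# X_j uniform on [-1/2, 1/2]: <X_j> = 0, sigma^2 = <X_j^2> = 1/12
X = rng.uniform(-0.5, 0.5, size=(trials, M))
Y = X.sum(axis=1) / np.sqrt(M)                 # eq. (8.14)

sigma2 = 1.0 / 12.0
print(Y.var(), sigma2)                         # variance approaches sigma^2

# Compare the empirical density with the normal distribution
hist, edges = np.histogram(Y, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
gauss = np.exp(-centers**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
print(np.abs(hist - gauss).max())              # small for large M
```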
Example of a normal distribution: the binomial distribution. Consider
$$n = \sum_{i=1}^{N} n_i \qquad\text{with } n_i = 0 \text{ or } 1.$$
The probability of ni = 1 is p, and the ni ’s are mutually independent. The resulting
distribution of n is the binomial distribution:
$$\rho(n) = \binom{N}{n}\, p^n\, q^{N-n} \qquad\text{with } q = 1 - p. \qquad (8.19)$$
Using the central limit theorem, we get
$$\rho(n) \simeq \frac{1}{\sqrt{2\pi N\sigma^2}}\; e^{-(n - \langle n\rangle)^2/2N\sigma^2}, \qquad (8.20)$$
with
$$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\left(\left\langle n_i^2\right\rangle - \langle n_i\rangle^2\right) = p(1 - p) = pq.$$
Let us compare eqs. (8.19) and (8.20) for N = 2500 and p = 0.02:

n            25       40       50       70
eq. (8.19)   0.0000   0.0212   0.0569   0.0013
eq. (8.20)   0.0001   0.0205   0.0570   0.0010

with $\langle n\rangle = 50$ and $N\sigma^2 = 49$.
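The comparison table can be reproduced with a few lines of standard-library Python (a sketch, rounding to four decimals as above):

```python
import math

N, p = 2500, 0.02
q = 1 - p
mean, var = N * p, N * p * q   # <n> = 50, N sigma^2 = N p q = 49

for n in (25, 40, 50, 70):
    binom = math.comb(N, n) * p**n * q**(N - n)                                  # eq. (8.19)
    gauss = math.exp(-(n - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)  # eq. (8.20)
    print(n, round(binom, 4), round(gauss, 4))
```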
This theorem is very important in statistical physics since it allows one to describe the probability distributions of observables!