0.1 Minimum Principles and Thermodynamic Potentials
The Second Law turns out to have certain very useful consequences for calculating the behavior of macroscopic systems. These are most usefully formulated as
variational principles. The law states that, in any thermodynamic transformation,
∫_i^f dQ/T ≤ S_f − S_i, in an obvious notation. Equality holds for a reversible process.
Now think about a system with some internal adjustable parameter, perhaps a gas
with a movable partition. Let the initial configuration have the partition fixed.
Then let it loose to find the equilibrium position. The total change in heat ∆Q
and the total entropy change ∆S satisfy ∆Q/T ≤ ∆S; since ∆Q = ∆U + ∆W (first law), this gives ∆U + ∆W ≤ T ∆S.
Define A = U − T S. Then for fixed total volume ∆W = 0. If T is also fixed then
∆A = ∆U − T ∆S. The second law then states that ∆A ≤ 0. Simply stated, in
an irreversible process at constant temperature and volume A decreases. When the
system has found equilibrium by means of this process A will have its minimum value.
It is a good exercise to show this explicitly for the partition in the ideal gas, which we
will do in the problem sessions. The quantity A (also often denoted by F ) is called
the Helmholtz free energy. It is one of a set of four quantities called thermodynamic
potentials.
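A minimal numerical sketch of the partition example (the particle numbers, temperature and total volume below are arbitrary illustrative choices, not values from these notes): for an ideal gas at fixed T the energy U does not depend on V , so the partition-dependent part of A = U − T S is just −T S = −N1 kT ln V1 − N2 kT ln(V − V1 ) plus terms independent of the partition position.

```python
# Sketch: A(V1) for an ideal gas split by a movable partition at fixed T.
# Illustrative parameters only, not values from the notes.
import numpy as np

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # temperature, K (arbitrary)
V = 1.0                   # total volume, m^3 (arbitrary)
N1, N2 = 1.0e22, 3.0e22   # particle numbers on the two sides (arbitrary)

V1 = np.linspace(0.01, 0.99, 9999) * V
A = -N1 * k * T * np.log(V1) - N2 * k * T * np.log(V - V1)  # + V1-independent terms

V1_min = V1[np.argmin(A)]
print(f"A is smallest at V1/V = {V1_min / V:.3f}")
print(f"pressure balance N1/(N1+N2) = {N1 / (N1 + N2):.3f}")
```

Both numbers come out at 0.25: A is minimized exactly where the pressures N1 kT /V1 and N2 kT /(V − V1 ) balance, which is the equilibrium position of the partition.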
A summary is best expressed in a table.
Symbol   Name                    Definition          Differential
U        Internal Energy         (from mechanics)    T dS − P dV
H        Enthalpy                U + P V             T dS + V dP
A        Helmholtz Free Energy   U − T S             −S dT − P dV
G        Gibbs Free Energy       U − T S + P V       −S dT + V dP
The minimum principle for G then states that for all states at a fixed T and P ,
the equilibrium state is that for which G is a minimum. The proof is very similar to
that for A: the second law states that ∆Q ≤ T ∆S, or 0 ≥ ∆U + P ∆V − T ∆S, if P
is held fixed. But at fixed T and P , ∆G = ∆U − T ∆S + P ∆V , so ∆G ≤ 0 in an irreversible process. The
Gibbs free energy is very useful because most practical experiments do take place at
constant temperature and pressure. As we shall see later, it is also the most important
quantity in the theory of phase transitions.
The minimization principles may be understood physically by starting with the
fact that, in equilibrium, a completely closed (constant energy) system will maximize
its entropy. As you probably know, and as we will show later, this means that the
system tends toward the most probable state, consistent with having constant energy
and volume. We should think of equilibrium as the state of highest statistical weight.
Conversely, the system wishes to minimize its energy if volume and entropy are fixed.
The other minimum principles follow from this idea, with the sole modification that
other variables are held fixed. Maximization of probability is what underlies all these
ideas. The thermodynamic potentials are also important for a separate reason, this
time somewhat more formal. All four of them are state functions and their differentials
have simple forms:
dU = T dS − P dV    (1)
dH = T dS + V dP    (2)
dA = −S dT − P dV    (3)
dG = −S dT + V dP.    (4)

0.2 Maxwell Relations
The first equation implies that
(∂U/∂S)_V = T    (5)

and

(∂U/∂V)_S = −P.    (6)

The mixed second derivatives are equal, however:

∂²U/∂S∂V = ∂²U/∂V∂S,    (7)

so

(∂T/∂V)_S = −(∂P/∂S)_V.    (8)

In a similar way, we find

(∂T/∂P)_S = (∂V/∂S)_P,    (9)

(∂S/∂V)_T = (∂P/∂T)_V,    (10)

and

(∂S/∂P)_T = −(∂V/∂T)_P    (11)
from the differential equations for the other three potentials. These four equations are
called the Maxwell relations. From them we can deduce immediately some interesting
facts. For example, for most (but not all) systems V increases on heating at constant
pressure: (∂V /∂T )P ≥ 0. This means that (∂S/∂P )T ≤ 0. So the entropy normally
decreases on compression at constant temperature, which is not nearly so obvious as the first
statement. A second example relates to the specific heat at constant volume, Cv .
Cv = (dQ/dT)_V = (∂U/∂T)_V = T (∂S/∂T)_V.    (12)

Hence

(∂Cv/∂V)_T = T (∂/∂T)(∂S/∂V)_T = T (∂²P/∂T²)_V.    (13)
The right-hand side depends only on the equation of state of the system. Once
this is known, the volume dependence of the specific heat is known.
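For instance, for the ideal gas P = N kT /V the right-hand side of (13) vanishes, so Cv cannot depend on the volume. A short symbolic check (an illustrative sketch using sympy):

```python
# Check (dCv/dV)_T = T (d^2 P/dT^2)_V for the ideal gas P = N k T / V.
# Illustrative sketch; sympy is my choice of tool, not part of the notes.
import sympy as sp

T, V, N, k = sp.symbols("T V N k", positive=True)
P_ideal = N * k * T / V

print(sp.simplify(T * sp.diff(P_ideal, T, 2)))   # -> 0: Cv is independent of V
```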
More complicated examples of uses of the Maxwell relations abound. One of the
most often encountered is the equation relating the specific heat at constant volume
Cv = T (∂S/∂T )v (which is easily calculated) to the specific heat at constant pressure
Cp = T (∂S/∂T )P (which is easily measured). This requires a short digression into
the subject of Jacobians. Consider two functions w(y, z) and x(y, z). The area dw dx
in the w − x plane is related to the area dy dz in the y − z plane by
dw dx = [∂(w, x)/∂(y, z)] dy dz,    (14)

where

∂(w, x)/∂(y, z) = | (∂w/∂y)_z   (∂w/∂z)_y |
                  | (∂x/∂y)_z   (∂x/∂z)_y |   = J.    (15)
J is the Jacobian of the transformation from w, x to y, z. The straight brackets
indicate the determinant. Note that J = J(y, z). A special case is
∂(w, x)/∂(y, x) = (∂w/∂y)_x.    (16)
In case there is yet a third set of variables s, t, the Jacobian satisfies a product rule
∂(w, x)/∂(y, z) = [∂(w, x)/∂(s, t)] [∂(s, t)/∂(y, z)].    (17)

We wish to make a change of variables. Cv = T (∂S/∂T)_V is expressed in terms of
temperature and volume. We wish to transform to temperature and pressure. Write
Cv = T ∂(S, V)/∂(T, V) = T [∂(S, V)/∂(T, P)] [∂(T, P)/∂(T, V)]    (18)

   = T [ (∂S/∂T)_P (∂V/∂P)_T − (∂S/∂P)_T (∂V/∂T)_P ] (∂P/∂V)_T    (19)

   = T (∂S/∂T)_P − T (∂S/∂P)_T (∂V/∂T)_P (∂P/∂V)_T.    (20)
Now for any substance, we define the thermal expansion coefficient α ≡ (1/V)(∂V/∂T)_P, the fractional change in volume per degree, and the isothermal compressibility κT ≡ −(1/V)(∂V/∂P)_T. Both depend only on the equation of state. Putting these results together
with the Maxwell relation
(∂S/∂P)_T = −(∂V/∂T)_P,    (21)
we find
Cv = T (∂S/∂T)_P + T (∂V/∂T)_P² (∂P/∂V)_T.    (22)
Substituting the definitions above gives
Cv = Cp − T V α²/κT,    (23)

or, as is more often seen,

Cp − Cv = T V α²/κT.    (24)
Since κT > 0 (as we shall see below), Cp > Cv . For an ideal gas
α = (1/V)(∂V/∂T)_P = (1/V)(∂/∂T)(N kT /P) = N k/(P V) = 1/T,    (25)

and

κT = −(1/V)(∂V/∂P)_T = −(1/V)(∂/∂P)(N kT /P) = N kT /(P²V) = 1/P.    (26)
Hence Cp − Cv = T V α²/κT = T V (1/T²) P = P V /T = N k. In the dimensionless heat capacities per
particle cp = Cp /N k and cv = Cv /N k, we have cp − cv = 1. For solids, it is almost
always the case that the volume changes little in, say, doubling the temperature at
constant pressure. (Think of the metal grill in an oven.) Hence (T /V )(∂V /∂T )_P << 1
and cp ≈ cv .
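The chain of manipulations above can also be checked symbolically. The sketch below (sympy again, purely illustrative) uses the equivalent form Cp − Cv = −T [(∂P/∂T )_V ]² / (∂P/∂V )_T , which follows from T V α²/κT together with (∂V /∂T )_P = −(∂P/∂T )_V /(∂P/∂V )_T , and recovers N k for the ideal gas:

```python
# Cp - Cv expressed through the equation of state alone:
#   Cp - Cv = -T (dP/dT)_V**2 / (dP/dV)_T
# Illustrative sympy sketch, checked here for the ideal gas.
import sympy as sp

T, V, N, k = sp.symbols("T V N k", positive=True)
P = N * k * T / V

dPdT = sp.diff(P, T)          # (dP/dT)_V
dPdV = sp.diff(P, V)          # (dP/dV)_T
print(sp.simplify(-T * dPdT**2 / dPdV))   # -> N*k, i.e. cp - cv = 1
```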
0.3 The Third Law
This law of nature is not as firmly based or as universal as the first two laws of
thermodynamics. It states that the entropy of any system at zero temperature is
zero: S(T = 0) = 0. We will discuss the conditions of validity of this law later on.
For now, let us just mention some of the consequences of the law.
Consider heating a substance at constant volume starting at T = 0 and ending
up in some final state at temperature Tf . The final entropy is
S(Tf ) = ∫_0^{Tf} (CV /T) dT.
Now if CV = a + bT + cT² + ..., then the third law implies that a = 0. The same
holds true for a process at constant pressure. All heat capacities must vanish at
T = 0. This is not consistent with the relation we just deduced for ideal gases that
cp − cv = 1. At very low temperatures, ideal gases cannot exist. Indeed, gases do not
exist at very low temperatures, as experiment has shown. The lowest boiling point for
any substance at atmospheric pressure is 4.2K for helium. Near absolute zero, only
helium even remains liquid; all other substances solidify. For liquids and perfectly
crystalline insulating solids, it is found that CV ∼ T³ and CP ∼ T³. For metals and
some glassy insulators, CV ∼ T and CP ∼ T.
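The requirement a = 0 found above is simply the condition that the entropy integral converge at its lower limit: a constant term in CV contributes ∫ a dT /T , which diverges logarithmically as the lower limit goes to zero, while the bT and cT² terms integrate to something finite. A small symbolic illustration (sympy, with an explicit lower cutoff ε):

```python
# Why the constant term in CV = a + bT + cT^2 + ... must vanish:
# its contribution to S(Tf) = int_0^Tf CV dT / T diverges logarithmically.
# Illustrative sympy sketch.
import sympy as sp

T, Tf, eps, a, b, c = sp.symbols("T T_f epsilon a b c", positive=True)

finite_part = sp.integrate((b * T + c * T**2) / T, (T, 0, Tf))
print(finite_part)                                  # b*T_f + c*T_f**2/2 : finite

constant_part = sp.integrate(a / T, (T, eps, Tf))
print(constant_part)      # a*log(T_f) - a*log(epsilon): blows up as epsilon -> 0
```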
Other consequences of the third law may be found by using the Maxwell relations. Consider an expansion process at zero temperature. S = 0 at all points,
hence (∂S/∂P )T = 0. But since (∂S/∂P )T = − (∂V /∂T )P , the thermal expansion
coefficient
α(T = 0) = (1/V)(∂V/∂T)_P = 0
vanishes at T = 0.
1 Applications of Thermodynamics

1.1 Adiabatic demagnetization
This is a practical way to construct a refrigerator at low temperatures. The working
substance is a paramagnetic salt with magnetization M subjected to an external
magnetic field H (not the enthalpy). We are no longer dealing with a gas which does
mechanical work. Instead, the substance does work on the external currents which
produce the field.
This work is dW = −HdM from electromagnetic theory. Here is the proof.
A changing magnetic field creates an electric field because
∇ × E = −(1/c) ∂B/∂t.
The work done by this field in a small time interval δt is
δW = δt ∫ J · E d³r,
and since the current density J is given by

J = (c/4π) ∇ × H,
we have
δW = (c δt/4π) ∫ (∇ × H) · E d³r
   = (c δt/4π) [ ∫ H · (∇ × E) d³r − ∫ ∇ · (E × H) d³r ]
   = −(δt/4π) ∫ H · (∂B/∂t) d³r
   = −(1/4π) ∫ H · dB d³r
   = −(1/4π) ∫ H · d(H + 4π m) d³r
   = −(1/8π) d ∫ H² d³r − H · dM,
Figure 1: Magnetization as a function of temperature for a system of N independent
spins.
where m is the dipole moment per unit volume and M is the total dipole moment. (The divergence term above has been dropped because it integrates to a surface term that vanishes far from the sample.) Since H is the field associated with the external currents alone, a change in magnetization does work dW = −H · dM. Normally H and M are parallel, so dW = −HdM.
Thus the first law is dU = dQ − dW = T dS + HdM . Once the first law has been
determined, then everything is as for the gas except that P is replaced by −H and
V by M . Our table becomes
dU = T dS + H dM    (27)
dH = T dS − M dH    (28)
dA = −S dT + H dM    (29)
dG = −S dT − M dH.    (30)
The equation of state and the Gibbs free energy for a paramagnetic salt containing
S = 1/2 spins are found experimentally to be well approximated in the range 10⁻² K to
1K by the expressions:
M = µB N tanh(µB H/kT )    (31)

and

G = −kT N log[2 cosh(µB H/kT )].    (32)
Here µB is the Bohr magneton. We will derive these expressions later. The Gibbs
free energy as a function of temperature for different fields looks like Figure 2.

Figure 2: Gibbs free energy as a function of temperature for various fields.

The energy crosses over from flat to downward linear at T = µB H/k. For our purposes the most
important quantity is the entropy
S = −(∂G/∂T )_H = kN log[2 cosh(µB H/kT )] − (µB H/T )N tanh(µB H/kT ).    (33)
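As a consistency check, (31) and (33) both follow from (32) through M = −(∂G/∂H)_T and S = −(∂G/∂T )_H , the magnetic analogues of the entries in the table of differentials above. An illustrative symbolic sketch:

```python
# Recover M and S from the Gibbs free energy (32) by differentiation.
# Illustrative sympy sketch.
import sympy as sp

T, H, N, k, muB = sp.symbols("T H N k mu_B", positive=True)
G = -k * T * N * sp.log(2 * sp.cosh(muB * H / (k * T)))

M = sp.simplify(-sp.diff(G, H))   # should reproduce (31), up to rearrangement
S = sp.simplify(-sp.diff(G, T))   # should reproduce (33), up to rearrangement
print(M)
print(S)
```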
Cooling by adiabatic demagnetization proceeds by first cooling in liquid He to about 1 K in a low field. The system is then magnetized isothermally by turning the field up to a high value (say H2 ), keeping the system in contact with the bath. Then the bath is removed so that the system is thermally isolated, and the field is reduced to a low value (say H1 ), so that the substance is adiabatically demagnetized. The process appears graphically below. The result is that the system ends up at a temperature much lower than 1 K.
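Because the entropy (33) depends on T and H only through the ratio H/T , the adiabatic (constant-S) step simply rescales the temperature, Tf = Ti H1 /H2 . A rough numerical sketch (the starting temperature and the two field values are illustrative choices):

```python
# Final temperature after adiabatic demagnetization: solve S(Tf, H1) = S(Ti, H2)
# using the entropy (33).  Illustrative numbers; H in tesla, T in kelvin.
import numpy as np
from scipy.optimize import brentq

muB = 9.274e-24   # Bohr magneton, J/T
k = 1.381e-23     # Boltzmann constant, J/K

def s_per_spin(T, H):
    """Entropy per spin in units of k, equation (33), written in the
    overflow-safe form log(2 cosh x) = x + log(1 + exp(-2x))."""
    x = muB * H / (k * T)
    return x + np.log1p(np.exp(-2 * x)) - x * np.tanh(x)

Ti, H2, H1 = 1.0, 1.0, 0.01    # magnetize to 1 T at 1 K, then drop to 0.01 T
s0 = s_per_spin(Ti, H2)

Tf = brentq(lambda T: s_per_spin(T, H1) - s0, 1e-6, 10.0)
print(f"Tf = {Tf:.4f} K  (compare Ti*H1/H2 = {Ti * H1 / H2:.4f} K)")
```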
Before leaving this topic it is interesting to think about the specific heat C_H = T (∂S/∂T )_H . Looking at the slope in the graph, we see that the specific heat vanishes at high temperatures as well as at low temperatures. The latter is a consequence of the third law, but the high-temperature behavior is a little surprising. In a gas and most other systems, the heat capacity goes to a nonzero constant as T → ∞. C → 0
is characteristic of systems with a finite number of quantum mechanical states per
particle. For spin 1/2 particles, there are precisely two states, up and down. The
peak in the specific heat is called a Schottky anomaly.
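Quantitatively, differentiating (33) gives C_H /N k = x²/cosh² x with x = µB H/kT , which indeed vanishes in both limits. A quick numerical look at the peak (illustrative sketch):

```python
# Schottky anomaly of the two-level system: c_H = C_H/(N k) = x^2 / cosh(x)^2,
# with x = muB*H/(k*T).  Small x means high temperature.  Illustrative sketch.
import numpy as np

x = np.linspace(0.01, 10.0, 100000)
c = x**2 / np.cosh(x)**2

i = np.argmax(c)
print(f"peak c_H ~ {c[i]:.3f} (units of N k) at x ~ {x[i]:.2f}")
# The maximum sits near x ~ 1.2, i.e. where k*T is comparable to muB*H,
# between the two vanishing limits.
```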
Figure 3: Entropy as a function of temperature for different fields.
Figure 4: In cooling the system by adiabatic demagnetization, we take it along the
path shown.
Figure 5: Typical interparticle potential
1.2 Gas-liquid transition
We will first derive the van der Waals equation of state for a gas. As a thought
experiment, start with an ideal gas and turn on an interaction V (r) between particles
separated by a distance r. A reasonable model for V (r) has a short-range (range r0 )
repulsion and a long-range attractive part.
The depth of the attractive well is Vmin . How must the ideal gas equation of state
P = N kT /V,    (34)
which assumes that the particles do not interact, be modified? First, the particles
are excluded from a volume b ∼ r₀³ around each other. Since V in the ideal gas is the
volume in which the particles are free to move, we should replace V by V − N b in
Figure 6: Isotherms of the van der Waals equation of state, including unphysical
sections where the compressibility is negative.
the equation of state:

P = N kT /(V − N b).    (35)
Second, the effect of the attractive part of the interaction is to reduce the pressure.
This arises because a molecule near the wall is attracted back to its fellows and strikes
the wall with a smaller velocity than if it were free, as in the ideal gas. The number
of molecules striking the wall per unit time is proportional to N/V. The force on any
one of them is also proportional to N/V. Hence we must subtract aN 2 /V 2 from the
right-hand side. The van der Waals equation of state is
P = N kT /(V − N b) − aN²/V².    (36)
Our very rough derivation does not give a reasonable way to calculate a and b, so let us first treat them as parameters characterizing a given substance. Note that nothing restricts us to gases in this argument. The van der Waals equation might apply to liquids as well. That would be the regime of high density: N/V ∼ 1/b.
Let us plot some isotherms.
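Before doing so, it is useful to know where the critical temperature Tc referred to below lies. It is fixed by requiring that (∂P/∂V )_T and (∂²P/∂V²)_T vanish simultaneously; a symbolic sketch (sympy, illustrative) applied to (36):

```python
# Critical point of the van der Waals equation (36): solve dP/dV = 0 and
# d^2P/dV^2 = 0 simultaneously.  Illustrative sympy sketch.
import sympy as sp

V, T, N, k, a, b = sp.symbols("V T N k a b", positive=True)
P = N * k * T / (V - N * b) - a * N**2 / V**2

T_spin = sp.solve(sp.diff(P, V), T)[0]                # T making dP/dV = 0 at given V
Vc = sp.solve(sp.diff(P, V, 2).subs(T, T_spin), V)[0]
Tc = sp.simplify(T_spin.subs(V, Vc))
Pc = sp.simplify(P.subs({V: Vc, T: Tc}))

print(Vc, Tc, Pc)   # expect Vc = 3*N*b, Tc = 8*a/(27*b*k), Pc = a/(27*b**2)
```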
At high temperatures kT >> aN/V and low densities N/V << 1/b, this reduces
to the ideal gas hyperbola. At lower temperatures, T < Tc , the curves have a peculiar
form. There is a range of volumes where κT = −(1/V)(∂V/∂P)_T < 0, which
means that the system should expand when pressure is applied. This is obviously
impossible, and the equation of state must be incorrect in this regime. Actually we
can show rigorously from the second law that the equation of state can not hold
everywhere. Plot Avdw (V ) at a fixed T < Tc , where Avdw is what you get from the van der Waals equation of state. Let (V1 , V2 ) be the range of volumes where κT < 0. Then (∂A/∂V )_T = −P implies that A(V ) = −∫ P dV , where the integration takes place along an isotherm. In the range (V1 , V2 ), Avdw (V ) has downward curvature. Now consider the point Vm = (V1 + V2 )/2. We have that Avdw (Vm ) > Avdw (V1 )/2 + Avdw (V2 )/2 = Avdw (V1 /2) + Avdw (V2 /2). (The last equation follows because A is an extensive quantity.) Thus the free energy of the state at Vm could be reduced by dividing the system into two parts with volumes V1 /2 and V2 /2 in their corresponding states. Experimentally, this is exactly what happens. Let us do
an experiment at constant temperature T < Tc , pressing on a gas. It contracts along
the van der Waals curve, then suddenly begins to move along a horizontal line, i. e. ,
P = constant. The system is highly compressible; indeed, the isothermal compressibility is infinite here:
κT = −(1/V)(∂V/∂P)_T → ∞    (37)
along this line.
What is happening at this point is liquefaction. The system is part gas, part liquid - the two phases coexist. That is the separation of phases that we deduced from the second law above. As we continue to apply pressure, we reach a phase of low compressibility (κT very small), which is finally the completely liquid phase.
The second law actually tells us at what pressure Pe the phase coexistence sets in.
Referring back to the A(V ) plot, we see that the derivatives at V1 and V2 are equal
and related to the A(V1 ) and A(V2 ) by
(A2 − A1 )/(V2 − V1 ) = (∂A(V1 )/∂V )_T = (∂A(V2 )/∂V )_T = −Pe .    (38)
Figure 7: The path followed by the system has a horizontal section.
However A2 − A1 = −∫_{V1}^{V2} P dV , where P (V ) is taken from the equation of state.
Comparing, we have Pe (V2 − V1 ) = ∫_{V1}^{V2} P dV . Geometrically, this means that Pe is
such that the area under the actual experimental path of the system on the P − V
diagram is equal to the area under the equation of state. This is known as the
Maxwell equal area construction. The Maxwell construction gives us the locus of
points at which coexistence begins on the P − V diagram.
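A rough numerical sketch of the equal area construction, written in the standard reduced variables p = P/Pc , v = V /Vc , t = T /Tc , for which the van der Waals equation becomes p = 8t/(3v − 1) − 3/v². The particular temperature and the bracketing strategy are illustrative choices:

```python
# Maxwell equal-area construction for the reduced van der Waals isotherm
# p(v) = 8t/(3v - 1) - 3/v**2 at a fixed t = T/Tc < 1.  Illustrative sketch.
import numpy as np
from scipy.optimize import brentq

t = 0.9  # reduced temperature (illustrative choice)

def liquid_gas_volumes(p):
    """Smallest and largest real roots of p(v) = p (a cubic in v)."""
    roots = np.roots([3 * p, -(p + 8 * t), 9.0, -3.0])
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    real = real[real > 1 / 3]          # physically allowed volumes only
    return real[0], real[-1]

def area_mismatch(p):
    """Integral of p(v) between the outer roots minus p*(v2 - v1)."""
    v1, v2 = liquid_gas_volumes(p)
    integral = (8 * t / 3) * np.log((3 * v2 - 1) / (3 * v1 - 1)) + 3 / v2 - 3 / v1
    return integral - p * (v2 - v1)

# Scan for a sign change of the mismatch, then refine it with brentq.
ps = np.linspace(0.30, 0.95, 200)
vals = [area_mismatch(p) for p in ps]
for pa, pb, fa, fb in zip(ps, ps[1:], vals, vals[1:]):
    if fa * fb < 0:
        p_e = brentq(area_mismatch, pa, pb)
        break
v1, v2 = liquid_gas_volumes(p_e)
print(f"coexistence pressure p_e/Pc ~ {p_e:.3f}; v_liquid/Vc ~ {v1:.3f}, v_gas/Vc ~ {v2:.3f}")
```

The printed Pe is the height of the horizontal coexistence segment in Figure 7 at this temperature, and v1 , v2 are the liquid and gas volumes it connects.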
Note that there is a coexistence region which separates liquid and gas but it is
also possible to go around this region of the P − V diagram. Along such a path, we go
continuously from gas to liquid. A gas and a liquid are not truly qualitatively different.
Rather, they have quantitatively very different densities. On a P − T diagram, on
the other hand, the coexistence region is a curve. Remember that P was constant for
coexistence at a fixed temperature. The general rule is that coexistence region is a
curve on a plot of intensive variables, and a region of finite area if one of the variables
Figure 8: The Maxwell construction
is extensive (like V ). The endpoint (Tc , Pc ) is called the critical point. Notice that
the volume changes discontinuously as the system goes across the line. When a first
derivative of the free energy such as V = (∂G/∂P )T changes discontinuously at a
phase transition, it is described as a "first-order" transition.
1.3 Clausius-Clapeyron Equation
There is a general equation for the phase boundary. Let us suppose that we have an
expression for the free energy of the liquid A` (V, T ) and an expression Ag (V, T ) for
the gas. At the boundary, we know that P` = Pg because the system is in mechanical
equilibrium and T` = Tg because the system is in thermal equilibrium. Also from the
Maxwell construction
(A` − Ag )/(V` − Vg ) = (∂A` /∂V )_T = (∂Ag /∂V )_T = −P` = −Pg .    (39)
Hence A` − Ag = −P (V` − Vg ) or A` + P V` = Ag + P Vg . By definition, this is
G` (P, T ) = Gg (P, T ), i.e., along the phase boundary the Gibbs free energies of the
two phases are equal. The equilibrium phase has the lesser free energy of the two.
The whole phase diagram for a simple substance might have several phases. In the
generic case, two phases are separated by a line, and three phases meet at a point,
because the equilibrium of two phases is determined by one equation in two unknowns:
G` (T, P ) = Gg (T, P ), while the equilibrium of three phases is determined by two
equations in two unknowns: Gs (Tt , Pt ) = G` (Tt , Pt ) = Gg (Tt , Pt ), where Gs is the
Figure 9: Phase diagram in terms of the intensive variables P and T. The coexistence
region becomes a line.
Gibbs free energy of the solid and (Tt , Pt ) is called the triple point. The equations
determine Tt and Pt . An important aspect of the phase diagram is that the solid-liquid line doesn't end, unlike the liquid-gas line. A solid and a liquid are qualitatively
different in a sense we will make precise later on. It is not possible to go continuously
from one to another.
The entropy is also discontinuous as we cross the phase boundary. In fact there is
a relation between the discontinuity in entropy ∆S, the discontinuity in the volume
∆V , and the slope of the phase boundary on a P − T diagram, given by dP/dT . We
have that G` = Gg along the line. G` and Gg are themselves continuous, as we saw
above. They can be differentiated along the boundary, to get the change in entropy
of the individual phases:
(∂G`/∂T)_P dT + (∂G`/∂P)_T dP = (∂Gg/∂T)_P dT + (∂Gg/∂P)_T dP,    (40)
or

−S` + V` (dP/dT) = −Sg + Vg (dP/dT),    (41)

so

dP/dT = (S` − Sg)/(V` − Vg) = ∆S/∆V.    (42)
This is more often seen in terms of the latent heat per mole ` = T ∆S/Nm and
the change in molar volume ∆v = ∆V /Nm (where Nm is the number of moles of the
substance):

dP/dT = ` /(T ∆v).    (43)
This applies to any first-order transition, not only liquid-gas. Heat is absorbed by the system as it melts (` > 0). When dP/dT > 0, the system expands on melting, which is the usual case. Ordinary ice is an exception. It contracts on melting, and dP/dT < 0 for the solid-liquid phase transition boundary.
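As an order-of-magnitude illustration of (43) (the numbers below are approximate handbook values for water near its normal boiling point, quoted from memory rather than taken from these notes):

```python
# Rough Clausius-Clapeyron estimate for water boiling at atmospheric pressure.
# Approximate input values; the vapor is treated as an ideal gas.
R = 8.314            # J/(mol K)
T = 373.0            # K
P = 101325.0         # Pa
ell = 40.7e3         # J/mol, latent heat of vaporization (approximate)
v_liquid = 1.9e-5    # m^3/mol (approximate)

dv = R * T / P - v_liquid        # molar volume change on vaporization
dP_dT = ell / (T * dv)           # equation (43)
print(f"dP/dT ~ {dP_dT / 1000:.1f} kPa/K")
print(f"boiling-point shift per extra atmosphere ~ {P / dP_dT:.0f} K (linearized)")
```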