Statistical Mechanics of Chemical and Biochemical Systems
Professor: Biman Bagchi ([email protected])
List of Topics
1 Discussion of Scope of Methods and Brief Review
 Phase space, Trajectory and Time averages
 Ensemble averages
 Postulates of Statistical Mechanics
 Microcanonical ensemble
2 Canonical Ensemble
 Construction from microcanonical ensemble
 Derivation of canonical partition function
 Connection with thermodynamics
 Fluctuations, response functions
 Derivation of ideal gas laws
 Quantum-mechanical canonical distribution; classical statistics as the limiting
case of quantum statistics
 Harmonic oscillator, Vibration of solids and phonons, phonon spectra
 Statistical-mechanical perturbation theory
 Nonideal gases; Mayer’s cluster expansion, virial expansion
 Correlation function of liquids, density functional approach
 Electrolytes: Debye-Hückel Theory
 General methods: BBGKY hierarchy and its possible closures
 Numerical methods: molecular dynamics and Monte Carlo (Assignments)
3 Grand Canonical Ensemble
 Derivation from canonical ensemble
 Connection with thermodynamics
 Fluctuations (Gaussian)
4 Phase Transitions
 Ehrenfest classification
 Landau theory (mean-field approximation)
 First-order transitions
 Nucleation
 Second-order transitions and critical phenomena
 Example: Ising model, binary alloy
 Bose-Einstein Condensation
 Scaling hypothesis and critical indices; universality
5 Applications of equilibrium statistical mechanics to chemical and biophysical problems
 Statistical mechanics of polymers: coils, globules and transitions between them; Flory-type and mean-field theories. Scaling in polymers
 Protein Folding, Levinthal paradox, energy landscapes
 Statistical mechanics of liquid crystals, isotropic-nematic phase transition
II Nonequilibrium Statistical Mechanics
6. Diffusion Phenomena
 Langevin equation, Fokker-Planck equation and its solution
 Time correlation function, dynamic structure factor
 Fluctuation-dissipation theorem and its implications
 Linear response theory
 Diffusion coefficient from a time-dependent correlation function
 Generalized hydrodynamics
 Introduction to mode coupling theory
7 Applications
 Dynamical light scattering, Raman (vibrational) spectroscopy, etc.
 Application: electrical conductivity
8 Chemical Kinetics
 Transition state theory
 Chemical reactions in solution: role of diffusion, Smoluchowski equation
 Kramers’ theory, Grote-Hynes Generalization (overview)
 Solvation dynamics, electron transfer reactions
9 Texts
 T. Hill, Introduction to Statistical Thermodynamics (Dover, 1986)
 D. Chandler, Introduction to Modern Statistical Mechanics (Oxford U Press,
1987)
 D. McQuarrie, Statistical Mechanics (Harper & Row, 1976)
Additional References:
 R. Zwanzig, Nonequilibrium Statistical Mechanics (Oxford)
 L. Landau and E. Lifshitz, Statistical Physics (Pergamon, 1979)
 R. Balescu, Equilibrium and Non-equilibrium Statistical Mechanics (Wiley, 1976)
 S. K. Ma, Modern Theory of Critical Phenomena (Benjamin, 1978)
10 Course information
 Homework assignments, one mid-term exam
 Students are expected to present a term paper/small research project in lieu of
final exam
Why Study Statistical Mechanics?
It is one of the most interesting and exciting branches of the theoretical sciences. The subject of statistical mechanics provides a microscopic description of collective phenomena, such as phase transitions or protein folding. These phenomena involve many atoms and molecules and are a consequence of the interactions among them. That is, a system of non-interacting atoms, such as a classical ideal gas, does not show any phase transition. For example, why do we have to supercool most liquids below the freezing point to grow crystals? Why does liquid sodium freeze into a bcc crystalline phase instead of an fcc phase, while iron freezes into an fcc phase? Or, how does a protein fold to its unique native state? How do we understand the folding funnel? For many other processes, such as a chemical reaction in solution, the reaction coordinate, that is, the coordinate along which the primary change occurs, can be coupled to many other solvent degrees of freedom. So, if you are interested in studying the viscosity dependence of rotational and translational motion in solution, you need to deal with many-body interactions. Vibrational relaxation of a given bond in solution is also coupled to many degrees of freedom. Statistical mechanics is used to understand all such phenomena.
Statistical mechanics is usually divided into two parts: equilibrium and non-equilibrium statistical mechanics. Both are very big subjects and active areas of research. They can be thought of as microscopic generalizations of the two divisions of mechanics: statics and dynamics. One outstanding triumph of equilibrium stat mech is that it provides us with microscopic, calculable expressions for thermodynamic functions like entropy and free energy. As you know, thermodynamics is kind of lame because, while it provides fundamental relations among important functions, it does not allow microscopic calculation of quantities. Statistical mechanics not only provides exact expressions for such quantities but also provides valuable additional insight into such functions as specific heat, compressibility and dielectric constant, which are collectively called the response functions of the system. Phase transitions are an important area of equilibrium stat mech. Non-equilibrium stat mech deals with dynamics: such topics as the kinetics of chemical reactions (which means rates), solvent effects, and the kinetics of phase change, such as nucleation.
Both equilibrium and non-equilibrium stat mech can be further divided into two categories – phenomenological and microscopic – and both have a long history. Because of the complexity of the systems involved, phenomenology here has a place of honor.
The name "statistics" in statistical mechanics implies the involvement of many particles. Actually, in mechanics we cannot even exactly solve a three-particle system with an arbitrary potential. In chemistry we are often faced with an Avogadro number of atoms and molecules, and we need to use statistical methods to make any progress. The laws of mechanics, in terms of atomic positions, momenta, etc., are now to be replaced by collective quantities. One wants to retain, as much as possible, the laws of mechanics – of energy and forces – and replace them with average laws.
Since much of chemistry happens at room temperature or slightly below, quantum statistical mechanics has not been greatly popular in chemistry, although quantum effects are often relevant, such as in the solvation of an electron and in electron and proton transfer reactions. But by and large, quantum statistical mechanics has remained in the realm of physics.
Brief Review of Thermodynamics
Discussion of SM often begins with thermodynamics. The reason is that SM provides a microscopic basis for thermodynamics and gives meaning to such terms as entropy and free energy, which are otherwise rather hard to understand. Another important reason, not often emphasized enough, is that thermodynamics by itself is not very useful, because it does not have the capacity to generate the numbers needed to understand experiments. For example, the first two laws define all sorts of relations between thermodynamic variables and functions, but they do not tell you how to calculate them. This deficiency is partly the reason for the third law, which tells us that the entropy of a perfectly crystalline solid is zero at absolute zero. Remember that this law is used to obtain the entropy and enthalpy, and then the free energy, by integrating the temperature-dependent specific heat all the way from zero kelvin.
Let us go through the three laws quickly. The first law has to do with the conservation of energy. But in practical terms it gives a relation between energy, work and heat,
$$dE = \delta Q + \delta W$$
where $\delta W$ is the work done on the system, and work and heat are not exact differentials because they depend on the path and are not state functions.
There are several nearly equivalent statements of the second law, but ultimately they all boil down to statements about entropy. The extensive state function entropy is an increasing function of energy. Entropy obeys the Clausius inequality in the form
$$\frac{\delta Q}{T} \leq dS$$
where the equality sign holds for a reversible process. The success of the second law of thermodynamics lies of course in the introduction of the free energies
$$\Delta A = \Delta E - T\Delta S$$
$$\Delta G = \Delta H - T\Delta S$$
and the important variational statement that for a system at equilibrium, the free energy of the system is a minimum. That is, any change will increase the free energy of the system.
Students are encouraged to read the article by Frank Lambert in the Journal of Chemical Education, available in a more recent form at http://www.entropysite.com/students_approach.html, for a shorter approach to understanding the second law and entropy.
The third law is simple: the entropy of a perfectly crystalline substance is zero at absolute zero.
However, it is the third law that allows the calculation of the entropy:
$$S(T) = \int_0^T \frac{C_P(T')}{T'}\, dT'$$
We can find the temperature dependence of the specific heat, in the form of a series in T, in many handbooks, and these expansions are widely used by geologists, metallurgists and chemical thermodynamics researchers in the evaluation of free energies.
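To make the third-law route concrete, here is a minimal numerical sketch of the $S = \int C_P/T\, dT$ integration. The Debye-like cubic form of $C_P$ and the value of the coefficient are assumptions invented for illustration, not handbook data.

```python
import numpy as np

# Hypothetical low-temperature heat capacity: a Debye-like C_P = a*T^3.
# The coefficient a is invented purely for illustration.
def cp(T, a=2.0e-6):                 # C_P in J/(mol K), a in J/(mol K^4)
    return a * T**3

T = np.linspace(1e-6, 300.0, 100_000)             # from (nearly) 0 K up to 300 K
f = cp(T) / T                                      # the integrand C_P / T
S = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T))    # trapezoidal rule

print(f"S(300 K) = {S:.2f} J/(mol K)")             # analytic check: a*T^3/3 = 18
```

For this simple cubic form the integral can be done by hand, $S = aT^3/3$, which the numerical result reproduces; in practice one would splice together the handbook series for $C_P(T)$ over the relevant temperature ranges.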
Phase space, Trajectory and Ensembles
Let us now begin with the basics of statistical mechanics. We shall start at the very beginning – the a, b, c of stat mech. And these are phase space, trajectory and ensembles.
Let us consider a single atom in one-dimensional space. To specify its future trajectory we need two coordinates – one position coordinate (x) and one momentum (px). We now plot the position against the momentum. This coordinate space is called phase space. The motion of the atom can now be depicted in this phase space. Such a depiction is called a trajectory. These are concepts from classical mechanics.
To gain some insight into the phase space, let us consider a harmonic oscillator,
H = KE + PE = p²/2m + ½kx², ………………..(1.1)
We now plot the trajectory of the harmonic oscillator in phase space. For constant energy, the trajectory is an ellipse. Such closed trajectories are signatures of a bound state.
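One can verify the elliptical orbit numerically. The following minimal sketch (the values of m, k and E are illustrative choices, not from the lecture) propagates the analytic solution over one period and confirms that the representative point stays on the chosen energy surface:

```python
import numpy as np

# One period of the harmonic-oscillator orbit of Eq. (1.1); the values of
# m, k and E below are illustrative choices.
m, k, E = 1.0, 4.0, 1.0
omega = np.sqrt(k / m)
A = np.sqrt(2 * E / k)                       # amplitude fixed by the energy

t = np.linspace(0, 2 * np.pi / omega, 500)   # one full period
x = A * np.sin(omega * t)                    # analytic solution
p = m * A * omega * np.cos(omega * t)

H = p**2 / (2 * m) + 0.5 * k * x**2          # Hamiltonian along the orbit
print(np.allclose(H, E))                     # True: the point stays on E
# (x, p) traces the ellipse x^2/A^2 + p^2/(m*A*omega)^2 = 1
```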
Let us now generalize the phase space and trajectory in two ways: first, we consider N particles, and second, we consider three-dimensional space. We then have a 6N-dimensional phase space spanned by 3N position coordinates and 3N momentum coordinates. Sometimes this is called the Γ-space, while the molecular phase space is called the μ-space. Now, a point in this Γ-space denotes the instantaneous state of the system, given by all the momenta and coordinates of the system. This point is called a representative point.
Now, as the molecules of the system execute their natural thermal motion, the representative point in the phase space also executes a motion. This motion is called the trajectory of the system. We now consider a system with a constant number of particles N at constant volume V and energy E. Such a system is called an (NVE) system. In such a system, the trajectory of the molecules travels on a constant-energy surface. Therefore, all the points on this trajectory are equally likely. An average of any property of the system, such as pressure or kinetic energy, over this trajectory is the time average.
Clearly, the trajectory of the system could be obtained if we could solve the N-body Newton's equations of motion in classical mechanics, or the far more complicated N-body Schroedinger equation in quantum mechanics. However, this is not possible in general. In recent years, we have been able to obtain this trajectory using computers, by carrying out MD simulations. But even such a huge amount of data needs to be analyzed with a proper tool. Such a tool comes from probability theory. Remember that the use of the concept of probability to describe mechanical properties of a system was first initiated by Boltzmann in his famous H-theorem, and he was heavily criticized for it. He died a bitter man.
Probability theory is now used to describe distribution functions in the phase space, so that we do not need to follow the motion of each individual molecule. Also, we can now talk of the distribution and fluctuation of thermodynamic quantities directly --- not possible in the fully molecular and mechanistic description of the system. This is where the next concept comes in – the concept of ensembles.
An ensemble is simply a mental collection of a very large number, N, of systems, each constructed to be an exact replica, on a thermodynamic level, of the actual thermodynamic system under consideration. This number N never enters the final expressions; it is something like the smallness parameter h that we use in defining differentiation and integration, but it serves a very important purpose in formulating exact relations. In fact, this analogy with h goes even further. In the present case, we take the limit that N goes to infinity. That is, we have billions and billions of mental replicas of our actual system, identical in their thermodynamic state but possibly differing in their microscopic states. This collection is called an ensemble.
Time Average and Ensemble Average: The First Postulate
Time average of any property P is defined as an average over a long trajectory. Mathematically,
$$\bar{P} = \lim_{T \to \infty} \frac{1}{T} \int_0^T ds\, P(s), \quad (1.2)$$
That is, we evaluate the value of the property P along the trajectory, add the values up, and then divide the sum by the total time spent.
The ensemble average is a simple average over all the members of the ensemble,
$$\langle P \rangle = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} P_i, \quad (1.3)$$
The first postulate of statistical mechanics, called the Ergodic Hypothesis, says that
$$\bar{P} = \langle P \rangle, \quad (1.4)$$
That is, the time average is equal to the ensemble average. The motivation behind this hypothesis is that, left to itself, a system will go through all the microscopic states of the system, which are also represented in the ensemble. This condition requires that both N and T must be very, very large!
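For the one-dimensional harmonic oscillator of Eq. (1.1), the constant-energy surface is just the phase-space ellipse, so Eq. (1.4) can be checked directly. A minimal sketch, with parameter values chosen purely for illustration, compares the time average of x² along one long trajectory against the average over ensemble members spread uniformly over the ellipse:

```python
import numpy as np

# Compare the time average (Eq. 1.2) with the ensemble average (Eq. 1.3)
# for P = x^2 in the oscillator of Eq. (1.1); all parameters illustrative.
m, k, E = 1.0, 1.0, 1.0
omega, A = np.sqrt(k / m), np.sqrt(2 * E / k)

# Time average of x^2 along one long trajectory (many periods)
t = np.linspace(0.0, 1000 * 2 * np.pi / omega, 2_000_001)
time_avg = np.mean((A * np.sin(omega * t))**2)

# Ensemble average over members spread uniformly in phase on the ellipse
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2 * np.pi, 2_000_000)
ens_avg = np.mean((A * np.sin(phi))**2)

print(time_avg, ens_avg)    # both converge to A^2/2
```

Both averages converge to A²/2, as expected for this simple bound motion.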
Today this is called the Boltzmann-Sinai ergodic hypothesis. It was first proposed by Boltzmann and proved for several hard-disk systems by Sinai and coworkers. It has been the object of a great many discussions. Real proof of the ergodic hypothesis, and of its limitations, has come from computer simulations. One finds that both in the gas phase and
Second Postulate --- Equal a priori probability
In an ensemble of isolated (NVE) microcanonical systems, the systems of the ensemble are distributed uniformly, that is, with equal probability or frequency, over all possible quantum states of the system. In the language of trajectories, each state is visited an equal number of times if we wait for a very long time.
Actually, the postulate of equal a priori probability is kind of obvious and unavoidable, because all the microscopic states of an NVE system are equally likely.
We now show how these two postulates are used to derive the Boltzmann law and the connection with thermodynamics. But before that, let us “derive” the famous Boltzmann equation. Write
$$S = -k_B \sum_j P_j \ln P_j, \quad (1.5)$$
where Pj is the probability of the j-th microscopic state. Now, let Ω be the total number of microscopic states. Since all states are equally probable, Pj = 1/Ω, and we get
$$S = k_B \ln \Omega, \quad (1.6)$$
which is one of the best-known equations of natural science.
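As a quick use of Eq. (1.6), consider a toy model not discussed in the notes: N independent two-state spins, n of which point "up", so that Ω is a binomial coefficient. A sketch:

```python
from math import lgamma

# Toy use of S = kB ln Omega (Eq. 1.6): N independent two-state spins
# with n "up" have Omega = N! / (n! (N-n)!).
kB = 1.380649e-23                    # J/K
N, n = 1_000_000, 500_000

ln_Omega = lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)
S = kB * ln_Omega
print(S)                             # ~ kB * N * ln 2 for n = N/2
```

For n = N/2 this reproduces S ≈ N k_B ln 2, the expected result for maximally disordered spins.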
We now turn to the canonical ensemble.
CANONICAL ENSEMBLE
Here the experimental system is characterized by N, V, T, which is closer to reality than the microcanonical ensemble. We now form an ensemble of these NVT systems – a very large collection, Λ in number, of NVT systems. We let Λ → ∞. This large collection of NVT systems, our ensemble, is put in a large bath to keep the temperature fixed. Specifically, the following arrangement is constructed. All the systems of the ensemble are placed next to each other, with walls that are heat-conducting but impermeable to molecular exchange. The whole ensemble is put in a massive heat bath. After equilibration is reached, thermal insulation is placed around the whole ensemble, which is then removed from the heat bath. Now we can consider the whole ensemble of NVT systems as one single NVE system with a total of ΛN atoms/molecules and a total energy Et.
In this great microcanonical system, the systems of our canonical ensemble have the same number N, the same volume V and the same temperature T, but not the same energy. The systems of the canonical ensemble will be distributed over the available energy levels of the system. We now denote by nj the number of systems in the energy level Ej. Clearly, nj is a distribution – we shall sometimes put curly brackets, {nj}, to denote that fact. However, there are restrictions on this distribution. It obeys the following conditions:
$$\sum_j n_j = \Lambda, \quad (1.7)$$
$$\sum_j n_j E_j = E_t, \quad (1.8)$$
where nj is the number of systems (among all Λ) that occupy the energy state Ej (that is, Ej is the energy of the j-th state available to a system of the ensemble). Let us assume that we have a pre-assigned total ensemble energy, Et. We want to find the distribution of energy, P(Ej). Note that Et is very large.
The reason for the above construction of the isolated system is that we can apply the hypothesis of equal a priori probability: each microstate is equally probable. We can then use the methods of statistics to construct the probability distribution.
The number of ways of distributing {nj} among Λ systems is the well-known multinomial expression
$$\Omega(\{n_j\}) = \frac{\Lambda!}{n_1!\, n_2!\, n_3! \cdots}, \quad (1.9)$$
Note that many sets {nj} are possible, and different sets {nj} will have different Ω({nj}). We now define the probability of observing a given state with energy Ej as
$$P_j = \frac{\bar{n}_j}{\Lambda} = \frac{1}{\Lambda}\, \frac{\sum_{\{n\}} n_j\, \Omega(\{n\})}{\sum_{\{n\}} \Omega(\{n\})}, \quad (1.10)$$
where the sums run over all allowed distributions {n}.
There are many different possible distributions {nj} that are consistent with the conditions (1.7) and (1.8). It is nearly impossible to find all the sets of distributions satisfying these constraints.
Here one uses a beautiful and surprising result from probability theory, known as the Maximum Term Method. This result says that when the occupation numbers in a distribution are very large, the most probable distribution gives an accurate representation of all the average values.
Digression 1: Maximum Term Method
Let us consider a sum of the following form:
$$\Sigma = \sum_{N=0}^{M} T_N, \quad (1.11)$$
where
$$T_N = \frac{M!\, x^N}{N!\,(M-N)!}, \quad (1.12)$$
x is a number of the order of unity and M is very large, of the order of Avogadro's number. Now, (1.11) with TN given by (1.12) can of course be evaluated exactly, giving
$$\Sigma = (1+x)^M, \quad (1.13)$$
We shall need its logarithm: ln Σ = M ln(1+x).
We now evaluate the term TN* which makes the maximum contribution to the sum in Eq. (1.11). We find N* in the usual way, by taking the derivative and setting it equal to zero. We work with the logarithm of TN. Stirling's approximation for the factorials gives
$$\ln T_N = M \ln M - N \ln N - (M-N)\ln(M-N) + N \ln x. \quad (1.14)$$
In order to find the N* that maximizes TN, we take the derivative and set it equal to zero:
$$\frac{\partial \ln T_N}{\partial N} = 0 = -\ln N + \ln(M-N) + \ln x, \quad (1.15)$$
which is solved to obtain
$$\frac{N^*}{M - N^*} = x, \quad \text{so} \quad N^* = \frac{xM}{1+x}, \quad (1.16)$$
We now substitute this N* into Eq. (1.14) to obtain
$$\ln T_{N^*} = M \ln M - N^* \ln N^* - (M-N^*)\ln(M-N^*) + N^* \ln x, \quad (1.17)$$
which is equal to M ln(1+x), the exact value of ln Σ. Actually, one can make the above observation a bit more general by expanding ln TN around TN* and showing that the difference goes to zero as M^(-1/2).
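The claim is easy to test numerically. The sketch below (M and x are arbitrary illustrative values) compares ln TN* against the exact ln Σ = M ln(1+x), using log-gamma functions so that no Stirling approximation enters:

```python
from math import lgamma, log

# Numerical check of the maximum term method, Eqs. (1.11)-(1.13);
# M and x below are arbitrary illustrative values.
M, x = 1_000_000, 1.3

def ln_T(N):                         # ln T_N via log-gamma, no Stirling needed
    return lgamma(M + 1) - lgamma(N + 1) - lgamma(M - N + 1) + N * log(x)

N_star = int(x * M / (1 + x))        # the maximizing N* of Eq. (1.16)
print(ln_T(N_star))                  # ln of the single largest term
print(M * log(1 + x))                # exact ln Sigma from Eq. (1.13)
```

For M of order 10⁶ the two numbers agree to about five significant figures: the logarithm of a sum of a million terms is captured by its single largest term.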
What makes this result so useful is that we can find the most probable distribution by
using a method called Lagrange’s method of undetermined multipliers.
=======================================================
Digression 2 : Lagrange’s Method of Undetermined Multipliers
(LMUM)
The most probable distribution that we need to find is subject to the two constraints (1.7) and (1.8). We shall now discuss the physics behind the LMUM. (This is from Hill and Ronis.)
Let us first consider an arbitrary function $F(x_1, \ldots, x_n)$ of n variables. As you all know, extrema of F are found by solving the simultaneous set of equations
$$\frac{\partial F}{\partial x_i} = 0, \quad i = 1, \ldots, n.$$
In general, this will determine the function's minima, maxima, and saddle (or inflection) points. Do you know why this works? Consider the Taylor series expansion of F for small displacements $\delta x_i$:
$$F(x + \delta x) = F(x) + \sum_i \frac{\partial F}{\partial x_i}\, \delta x_i + O(\delta x^2).$$
Since the linear term changes sign when $\delta x_i \to -\delta x_i$, the only way we can have an extremum is to have the linear term vanish, no matter what we choose for the $\delta x_i$'s (other than them being small enough to be able to neglect the second-order terms). Whether we have found a minimum, maximum or saddle point is determined by the sign of the second-order terms, but this won't concern us here.
The preceding discussion is for the case where there are no constraints imposed on the variations $\delta x_i$. Suppose there are; specifically, let's assume that only those $x_i$'s which satisfy
$$g_j(x_1, \ldots, x_n) = c_j, \quad j = 1, \ldots, m,$$
where m is the number of constraints, are of interest. We can still use the Taylor expansion to describe the change in F near any point, although now only those variations which are consistent with the constraints must be considered. For small variations about a point which is consistent with the constraints, we may approximately rewrite the constraint conditions as
$$\sum_i \frac{\partial g_j}{\partial x_i}\, \delta x_i = 0, \quad j = 1, \ldots, m.$$
This has a nice geometric interpretation; namely, if we introduce the n-dimensional vectors $\nabla g_j$ and $\delta \mathbf{x}$, it becomes $\nabla g_j \cdot \delta \mathbf{x} = 0$, and hence the allowed variations $\delta \mathbf{x}$ are orthogonal to the vectors $\nabla g_j$, and are hence orthogonal to the subspace spanned by the $\nabla g_j$'s. Similarly, we can represent the linear variation of F as $\delta F = \nabla F \cdot \delta \mathbf{x}$. Unlike the unconstrained case, we cannot demand that $\delta F$ vanish for any $\delta \mathbf{x}$; instead, it must vanish only for those $\delta \mathbf{x}$'s that are consistent with the constraints, i.e., those that satisfy the orthogonality condition above. For this to happen, it is necessary and sufficient that $\nabla F$ lie in the subspace spanned by the $\nabla g_j$'s; thus, we take the $\nabla g_j$'s as a basis for this subspace and write
$$\nabla F = \sum_{j=1}^{m} \lambda_j \nabla g_j,$$
where the $\lambda_j$'s are called the undetermined multipliers, and are found by demanding that any solutions also obey the constraints. If you use the definitions given above, it's relatively easy to show that this is equivalent to setting the partial derivatives of the function
$$F - \sum_{j=1}^{m} \lambda_j g_j$$
to zero (where we treat the $\lambda_j$'s as constants). This is known as Lagrange's method of undetermined multipliers.
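Here is a toy application of the LMUM, not from the lecture: extremize F(x, y) = xy subject to the single constraint x + y = 1, treating λ as a constant exactly as prescribed above. The sketch uses sympy:

```python
import sympy as sp

# Toy LMUM example (not from the lecture): extremize F(x, y) = x*y
# subject to the single constraint g(x, y) = x + y - 1 = 0.
x, y, lam = sp.symbols('x y lambda')
F = x * y
g = x + y - 1

L = F - lam * g                                # treat lambda as a constant
eqs = [sp.diff(L, v) for v in (x, y)] + [g]
print(sp.solve(eqs, (x, y, lam), dict=True))   # x = y = 1/2, lambda = 1/2
```

Setting the partials of F − λg to zero together with the constraint gives x = y = 1/2 with λ = 1/2, the constrained maximum.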
Derivation of Boltzmann Distribution
We now apply the above two methods to derive the Boltzmann distribution. First, we note that use of Stirling's approximation in Eq. (1.9) gives
$$\ln \Omega(\{n\}) = \Big(\sum_i n_i\Big) \ln\Big(\sum_i n_i\Big) - \sum_i n_i \ln n_i$$
We now incorporate the constraints given by Eqs. (1.7) and (1.8) using the LMUM. The resulting equation is
$$\frac{\partial}{\partial n_j}\Big[\ln \Omega(\{n\}) - \alpha \sum_i n_i - \beta \sum_i n_i E_i\Big] = 0$$
where α and β are the two undetermined multipliers. We can easily carry out the differentiation to obtain the following expression:
$$\ln\Big(\sum_i n_i\Big) - \ln n_j^* - \alpha - \beta E_j = 0, \quad j = 1, 2, \ldots,$$
or, using $\sum_i n_i = \Lambda$,
$$n_j^* = \Lambda\, e^{-\alpha} e^{-\beta E_j}, \quad j = 1, 2, \ldots$$
This is the required most probable distribution. The constants α and β are to be determined using the conditions (1.7) and (1.8). Since the sum over the nj* equals the total number Λ of (NVT) systems in the ensemble, we get from the above the following expression for α:
$$e^{\alpha} = \sum_j e^{-\beta E_j}$$
which can be used to obtain the following expression for the probability Pj of the j-th state being occupied:
$$P_j = \frac{n_j^*}{\Lambda} = \frac{e^{-\beta E_j(N,V)}}{\sum_j e^{-\beta E_j(N,V)}}, \quad j = 1, 2, \ldots$$
This is the well-known Boltzmann distribution over energy levels, with β = 1/kBT. (It can be shown that β is indeed 1/kBT.) The sum in the denominator is called the canonical partition function, Q:
$$Q = \sum_j e^{-\beta E_j(N,V)}$$
Thermodynamic quantities can be expressed in terms of Q. These relations form an important part of statistical thermodynamics.
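As a small numerical illustration of the distribution just derived, consider a hypothetical three-level system; the energy values below are invented purely for illustration:

```python
import numpy as np

# Hypothetical three-level system; the energy values are invented
# purely to illustrate the Boltzmann distribution and Q.
kB = 1.380649e-23                        # J/K
E = np.array([0.0, 1.0e-21, 2.5e-21])    # energy levels E_j in joules
T = 300.0

beta = 1.0 / (kB * T)
w = np.exp(-beta * E)                    # Boltzmann factors e^{-beta E_j}
Q = w.sum()                              # canonical partition function
P = w / Q                                # probabilities P_j (sum to 1)
print(Q, P)
```

The probabilities sum to one, and lowering T concentrates the weight in the ground state, as the distribution requires.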
Relationship with Thermodynamics
We next proceed to establish the important thermodynamic relations for the canonical ensemble. We first define the average energy $\bar{E}$ as
$$\bar{E} = \sum_j P_j E_j = \frac{\sum_j E_j\, e^{-\beta E_j(N,V)}}{\sum_j e^{-\beta E_j(N,V)}},$$
We now evaluate the total differential of $\bar{E}$:
$$d\bar{E} = \sum_j (E_j\, dP_j + P_j\, dE_j)$$
One now uses a nice trick: recognize that Ej can be obtained from the definition of Pj by taking the logarithm, $E_j = -\frac{1}{\beta}(\ln P_j + \ln Q)$. Using this, and writing $dE_j = \left(\frac{\partial E_j}{\partial V}\right)_N dV$, we get
$$d\bar{E} = -\frac{1}{\beta} \sum_j (\ln P_j + \ln Q)\, dP_j + \sum_j P_j \left(\frac{\partial E_j}{\partial V}\right)_N dV$$
Now we use several simple relations:
$$\sum_j P_j = 1 \;\Rightarrow\; \sum_j dP_j = 0,$$
$$d\Big(\sum_j P_j \ln P_j\Big) = \sum_j (P_j / P_j)\, dP_j + \sum_j \ln P_j\, dP_j = \sum_j \ln P_j\, dP_j$$
All of the above can be combined: since $\sum_j dP_j = 0$, the ln Q term drops out, and identifying the pressure as $p = -\sum_j P_j \left(\frac{\partial E_j}{\partial V}\right)_N$, we obtain
$$-\frac{1}{\beta}\, d\Big(\sum_j P_j \ln P_j\Big) = d\bar{E} + p\, dV$$
Compare this with
$$T\, dS = dE + p\, dV$$
We can therefore deduce
$$T\, dS = -\frac{1}{\beta}\, d\Big(\sum_j P_j \ln P_j\Big)$$
Since β = 1/kBT, we have
$$S = -k_B \sum_j P_j \ln P_j$$
Using the canonical definition of Pj in the above equation, we get
$$S = \bar{E}/T + k_B \ln Q = \bar{E}/T - A/T$$
This gives an important relation between the Helmholtz free energy and the canonical partition function:
$$A = -k_B T \ln Q(N, V, T)$$
Other thermodynamic relations
An advantage of the above relation is that we can find all the thermodynamic properties by taking simple derivatives of the canonical partition function. We use the thermodynamic relation
$$dA = -S\, dT - p\, dV$$
which allows us to write down the following expressions for the entropy and pressure:
$$S = -\left(\frac{\partial A}{\partial T}\right)_{V,N} = k_B T \left(\frac{\partial \ln Q}{\partial T}\right)_{V,N} + k_B \ln Q$$
$$p = -\left(\frac{\partial A}{\partial V}\right)_{T,N} = k_B T \left(\frac{\partial \ln Q}{\partial V}\right)_{T,N}$$
The energy $\bar{E}$ can be found from the relation $\bar{E} = A + TS$.
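Continuing the three-level toy system from above, these relations can be exercised numerically; here the temperature derivative of ln Q is taken by a finite difference, and since these hypothetical E_j carry no volume dependence, the pressure relation is omitted:

```python
import numpy as np

# Thermodynamics from Q for the three-level toy system used above.
# (dlnQ/dT)_{V,N} is approximated by a central finite difference.
kB = 1.380649e-23
E = np.array([0.0, 1.0e-21, 2.5e-21])

def lnQ(T):
    return np.log(np.exp(-E / (kB * T)).sum())

T, dT = 300.0, 1e-3
A = -kB * T * lnQ(T)                                               # A = -kB T ln Q
S = kB * T * (lnQ(T + dT) - lnQ(T - dT)) / (2 * dT) + kB * lnQ(T)  # entropy
E_bar = A + T * S                                                  # mean energy
print(A, S, E_bar)
```

The printed $\bar{E}$ should agree with $\sum_j P_j E_j$ computed directly from the Boltzmann probabilities, which is a useful consistency check.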
Grand Canonical Ensemble
Now we allow exchange of particles, by introducing the chemical potential. This is the (V, T, μ), or grand canonical, ensemble.
The analysis follows the same route as in the canonical case. We first construct the grand canonical ensemble by putting the systems in a large bath which is open not only to the exchange of energy but also to the exchange of molecules. Each system is characterized by a constant volume V, constant temperature T and constant chemical potential μ, but can have a different energy and a different number N of molecules. Just as in the case of the canonical ensemble, we put these systems in contact with each other, where the walls separating them permit exchange of energy and molecules. The ensemble itself is put in a large heat-and-number bath. After equilibration is reached, we put insulation around the whole ensemble, such that no exchange of energy or number is possible. That is, we again construct a very big microcanonical system.
This great microcanonical system consists of a large number of systems that constitute our grand canonical ensemble. The systems of our grand canonical ensemble are all characterized by the same V, T and μ. However, these systems may be characterized by different amounts of energy and different numbers of molecules. Let us denote the number of systems with energy Ej and number of molecules N by nj(N). That is, nj(N) is a distribution, and we again have to find the distribution that maximizes the total number of microstates Ω({nj(N)}), which is given by a similar multinomial expression. Now Ω({nj(N)}) needs to be maximized with three constraints: the total number of systems, the total energy and the total number of molecules in our great microcanonical system. The same exercise as carried out before [left as a homework exercise] leads to the following expression for the most probable distribution:
$$n_j^*(N) = \Lambda\, e^{-\alpha}\, e^{-\beta E_j(N,V)}\, e^{-\gamma N}$$
which leads to the following expression for α:
$$e^{\alpha} = \sum_{j,N} e^{-\beta E_j(N,V)}\, e^{-\gamma N}$$
So the probability of observing a system at energy level Ej with a total number of molecules N is
$$P_j(N) = \frac{n_j^*(N)}{\Lambda} = \frac{e^{-\beta E_j(N,V)}\, e^{-\gamma N}}{\sum_{j,N} e^{-\beta E_j(N,V)}\, e^{-\gamma N}}$$
where the denominator in the above expression is the grand canonical partition function, often denoted Ξ(V, T, μ). It can easily be seen that the grand canonical and canonical partition functions are related to each other by
$$\Xi(V, T, \mu) = \sum_N Q(N, V, T)\, e^{-\gamma N} = \sum_N Q(N, V, T)\, z^N$$
where $z = e^{-\gamma} = e^{\beta\mu}$ is the fugacity; the identification γ = −βμ follows from the connection with thermodynamics.
It can be shown that the main relation connecting the grand canonical ensemble to thermodynamics is
$$pV = k_B T \ln \Xi(V, T, \mu)$$
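As a standard worked example (not part of the original notes), suppose the canonical partition function factorizes as $Q(N,V,T) = q^N/N!$, as for an ideal gas of independent, indistinguishable molecules. The grand sum then closes:
$$\Xi = \sum_{N=0}^{\infty} \frac{(zq)^N}{N!} = e^{zq}, \qquad pV = k_B T \ln \Xi = k_B T\, zq, \qquad \langle N \rangle = z\left(\frac{\partial \ln \Xi}{\partial z}\right)_{V,T} = zq,$$
so that $pV = \langle N \rangle k_B T$: the grand canonical route recovers the ideal gas law.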
Fluctuations, Response Functions and Free Energy Surface
Ideal Gas : Quantum and Classical
Harmonic Oscillator Partition Function, Einstein’s Solid
Imperfect gases, Mayer’s Theory and Virial Series
Radial Distribution Function and BBGKY Hierarchy
Ehrenfest Classification
Nucleation
Landau Theory
Isotropic-nematic phase transition
Liouville Equation
Langevin equation
Derivation of Fokker-Planck Equation
TST and Smoluchowski equation rates