Thermodynamics: Part Two: State of play in living systems
In this second part of the article on thermodynamics we shall examine how thermodynamics is
featured in living systems.
Energy flow in living systems
Living systems are highly ordered systems with low entropy. A high order, low entropy state can only be
maintained by expenditure of energy and the ultimate source of that energy is the sun which provides a
relatively low-entropy source of energy. Plants have a direct means of harvesting the solar energy. The leaves
of the plant have tiny energy factories called chloroplasts containing the pigment chlorophyll which trap the
solar energy and in a process called photosynthesis convert this solar energy into the necessary useful work to
maintain the plant in its complex, high-energy configuration. Mass, such as water and carbon dioxide, also
flows through plants, providing necessary raw materials, but not energy. In collecting and storing useful
energy, plants serve the entire biological world.
For animals, the energy flow through the system is provided by the consumption of high energy biomass,
either plant or animal. The chemical energy available in a metabolic fuel in the form of carbohydrate, lipid or
protein, is converted to another form of chemical energy, ATP. This molecule has been aptly named the energy
currency of cells. Its chemical energy may be converted to useful mechanical energy such as during the process
of contraction of skeletal muscles. This interconversion of energy is the thrust of the first law of
thermodynamics, which, however, does not state whether energy in one form can be entirely converted to
another. That there is always some energy that is unavailable to perform useful work is the basis of the second
law which we have already seen.
In discussing the second law of thermodynamics, the term entropy was introduced and defined as a measure
of the degree of disorder or randomness in a system. All processes, whether chemical or biological, tend to
progress toward a situation of maximum entropy. Hence living systems that are highly ordered are never at
equilibrium with their surroundings as equilibrium in a system will result when the randomness or disorder
(entropy) is at a maximum.
For a living system to remain viable, the mass and/or energy flow through it must be uninterrupted so that it
can perform the necessary useful work to maintain its complex, high-energy configuration far from equilibrium
with its surroundings. This implies that, besides the requirement of energy, all known living organisms [with the
exception of ribonucleic acid (RNA) viruses] must have the intrinsic ‘machinery’ to direct the work that needs
to be done by the energy flow to synthesise DNA (deoxyribonucleic acid) and protein from simple biomonomers, and thus maintain themselves with their unique capacity to replicate. Without the innate
ability to replicate, natural selection, a key mechanism of evolution, would not be possible. But the
‘machinery’ that causes DNA and protein synthesis is the DNA/enzyme system itself! How did these
biopolymers get there in the first place? This is still a mystery that continues to baffle scientists. We shall
return to this point at the end of this article.
Specified Complexity
Only recently has it been appreciated that the distinguishing feature of living systems is “specified complexity”
rather than order. This distinction has come from the observation that the essential ingredients for a
replicating system---enzymes (which are essentially protein biocatalysts) and nucleic acids (constituent units of
RNA and DNA) ---are all information-bearing molecules. Unlike man-made polymers such as nylon and
polypropylene, nucleic acids and proteins are aperiodic polymers, and this aperiodicity is what makes them
able to carry much more information {Ref: Thaxton, Bradley & Olsen: The Mystery of Life's Origin: Reassessing
Current Theories, http://www.ldolphin.org/mystery/chapt7&8.html}.
The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine
(C), and thymine (T). Human DNA consists of about 3 billion bases, and more than 99% of those bases are the
same in all people. The order, or sequence, of these four bases encodes the information available for building
and maintaining an organism, similar to the way in which letters of the alphabet appear in a certain order to
form words and sentences.
DNA bases pair up with each other, A with T and C with G, to form units called base pairs. Each base is also
attached to a sugar molecule and a phosphate molecule. Together, a base, sugar, and phosphate are called a
nucleotide. A strand of polynucleotide results by condensation reactions between nucleotides. In DNA two
such strands which are anti-parallel couple together to form a spiral called a double helix. Within cells, DNA is
organized into long structures called chromosomes. During cell division these chromosomes are duplicated in
the process of DNA replication, providing each cell its own complete set of chromosomes. DNA replication is
the basis for biological inheritance.
The genetic code is composed of nucleotide triplets. The code defines how sequences of three nucleotides,
called codons, specify which amino acid will be added next during protein synthesis. The code is read by
copying stretches of DNA into the related nucleic acid RNA in a process called transcription. Protein-coding
sequences are interrupted by non-coding regions. Non-coding interruptions are known as intervening
sequences or introns. Coding sequences that are expressed are called exons.
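The triplet-codon reading described above can be illustrated with a short sketch. The snippet below uses only a small hypothetical excerpt of the standard 64-entry codon table, purely for illustration:

```python
# Minimal sketch: translating a short coding sequence three bases at a
# time, using only an excerpt of the standard genetic code (a real
# translation would use the full 64-codon table).
CODON_TABLE = {
    "ATG": "Met",  # also the usual start codon
    "TTT": "Phe", "GGC": "Gly", "AAA": "Lys",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(dna):
    """Read the DNA sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "?")  # '?' = not in excerpt
        if residue == "Stop":
            break
        protein.append(residue)
    return "-".join(protein)

print(translate("ATGTTTGGCAAATAA"))  # Met-Phe-Gly-Lys
```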
Proteins are polypeptides, meaning they are polymers constructed from monomer units of amino acids linked
together by peptide bonds between the carboxyl and amino groups of adjacent amino acid residues. There are
various levels of organization of protein molecules. The primary structure refers to amino acid linear sequence
of the polypeptide chain. The secondary structure of the protein can be either the alpha helix or the beta
sheets; the tertiary structure is its three-dimensional structure whilst its quaternary structure is the
arrangement of multiple folded protein or coiling protein molecules in a multi-subunit complex. The
sequence of amino acids in a protein is defined by the sequence of a gene, which is encoded in the genetic
code. In general, the genetic code specifies 20 standard amino acids.
Only certain sequences of amino acids in polypeptides and bases along polynucleotide chains correspond to
useful biological functions. Thus, informational macro-molecules may be described as being both complex and in a specified
sequence.
Gibbs Free Energy Change in biological systems
As biological systems are rarely at equilibrium, it is difficult to quantitate entropy changes, and one frequently
uses the Gibbs free energy change to describe that portion of the total energy that is available for useful work.
For a system proceeding towards equilibrium, the Free Energy change (ΔG ) is defined (see Appendix section)
by the relation, ΔG = ΔH -T ΔS, where ΔH is the change in enthalpy or the heat content, T is the absolute
temperature, and ΔS is the change in entropy of the system. ΔG =0 means a system is at equilibrium, a
negative ΔG means the process will proceed spontaneously towards equilibrium in the direction written, in
part due to an increase in entropy; a positive ΔG means the process will proceed spontaneously in the reverse direction as
written. For chemical reactions in solution, the standard free energy change, ΔG°, is that for converting 1 mol/L
of reactants into 1 mol/L of products:
ΔG° = ΔH° − TΔS°
Chemical reactions can be “coupled” together if they share intermediates. In this case, the overall Gibbs Free
Energy change is simply the sum of the ∆G values for each reaction. Therefore, an unfavourable reaction
(positive ∆G1) can be driven by a second, highly favourable reaction (negative ∆G2 where the magnitude of
∆G2 > magnitude of ∆G1).
For example, the synthesis of glutamine from glutamate and ammonium ions is thermodynamically
unfavourable:

Glutamate + NH4⁺ → Glutamine (ΔG°1 = +14.2 kJ/mol)

The breakdown of ATP to form ADP and inorganic phosphate has a ΔG° value of −30.5 kJ/mol. These two
reactions can be coupled together, thus:

Glutamate + ATP → [5-Phosphoglutamate] + ADP
[5-Phosphoglutamate] + NH4⁺ → Glutamine + Phosphate

The sum of these two reactions is:

Glutamate + ATP + NH4⁺ → Glutamine + ADP + Phosphate

ΔG° for this reaction is −16.3 kJ mol⁻¹, and hence it is thermodynamically favourable. This reaction is catalysed by
the enzyme glutamine synthetase in animal cells.
This principle of coupling reactions to alter the change in Gibbs Free Energy is the basic principle behind all
enzymatic action in biological organisms {Ref: http://en.wikipedia.org/wiki/Biological_thermodynamics}.
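The arithmetic of reaction coupling can be sketched in a few lines; the ΔG° values below are those quoted above for glutamine synthesis and ATP hydrolysis:

```python
# Coupled reactions: the overall standard free energy change is simply
# the sum of the component reactions' values (kJ/mol, as in the text).
dG_glutamine_synthesis = +14.2   # Glutamate + NH4+ -> Glutamine (unfavourable)
dG_atp_hydrolysis = -30.5        # ATP -> ADP + Pi (favourable)

dG_overall = dG_glutamine_synthesis + dG_atp_hydrolysis
print(f"Overall dG = {dG_overall:.1f} kJ/mol")  # Overall dG = -16.3 kJ/mol
print("Spontaneous" if dG_overall < 0 else "Non-spontaneous")  # Spontaneous
```

The favourable ATP step outweighs the unfavourable synthesis step in magnitude, so the coupled reaction proceeds.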
Polymerization of macromolecules such as DNA and proteins from their corresponding bio-monomers requires
an input of work that must be accomplished by energy flow through the living system. It has been noted that
such polymerizations result in a decrease in the thermal and configurational entropies which effectively serve
to increase ∆G, and thus the work required to make these molecules. Morowitz {Ref: Harold J. Morowitz,
Entropy for Biologists. An Introduction to Thermodynamics, Academic Press, 1970} has estimated more
generally that the chemical work, or average increase in enthalpy, for macromolecule formation in living
systems is 16.4 cal/gm.
Entropy is an important consideration in determining the folding pattern of proteins, the complete
understanding of which has eluded scientists. Each protein exists in its native and functional folded state under
physiological conditions. A region of protein molecule dissolved in water induces a solvent shell of water in
which the water molecules are highly ordered. When two non-polar side chains come together on folding a
polypeptide, the surface area exposed to the solvent water is reduced and some of the highly ordered water
molecules in the solvation shell are released to bulk solvent. The entropy of the system (i.e., disorder of water
molecules in system) is increased. The increase in entropy is thermodynamically favourable and is the driving
force causing non-polar moieties to come together in aqueous solvent. Of the various conformations
accessible to the folding polypeptide chain, the preferred configuration is the one of lowest Gibbs free energy
within the physiological time frame. Sometimes a protein will fold into a wrong shape, and some proteins,
aptly named chaperones, help keep their target proteins from straying off the correct folding path. An improperly
folded protein in humans can be the cause of such severe diseases as Alzheimer’s disease, cystic fibrosis, mad
cow disease and many cancers.
At some point, organisms normally decline and die even while remaining in environments that contain
sufficient nutrients to sustain life. An intrinsic controlling factor other than sunlight or nutrients must therefore
be operative for each organism in determining its existence. And this must also carry a hereditary
characteristic. This blueprint of biological life as we all know is DNA which transfers hereditary information
from generation to generation, controls the production of proteins and even determines the structure of the
cell, meaning whether it would be a nerve cell or eye cell, etc.
Life and Boltzmann’s entropy law
The internal order preserved by living organisms with low entropy appears to be in sharp conflict with
Boltzmann's perspective of the second law where the more probable state is one in which there is less order
and high entropy. DNA's apparent information processing function provides a resolution of the paradox posed
by life and the entropy requirement of the second law. However, it has also been pointed out by Albert
Lehninger (1982) {Ref: http://en.wikipedia.org/wiki/Entropy} that the "order" produced within cells as they
grow and divide is more than compensated for by the "disorder" they create in their surroundings in the
course of growth and division. "Living organisms preserve their internal order by taking from their
surroundings free energy, in the form of nutrients or sunlight, and returning to their surroundings an equal
amount of energy as heat and entropy."
Perhaps, one of the reasons why our body temperature has to be higher than the surrounding air, or that we
have to sweat off water if it isn't, is that we have to get rid of the extra entropy (otherwise, we would become
disorganized and eventually die). The energy that our warm body radiates carries away the extra entropy. It
does this because losing this energy decreases the number of microscopic states that the atoms and molecules
of our body can be in. {Ref: http://www.nmsea.org/Curriculum/Primer/what_is_entropy.htm}
Bioinformatics
Entropy has also found a conceptual application in information theory where it is used as a measure of
unpredictability or uncertainty associated with a random variable. For a given context, entropy is a measure
of the order or disorder in a sequence that can be regarded as information. The definition of the information
entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities pi: H = −Σi pi log pi.
Since its first introduction by Shannon (1948), entropy in the information sense has taken on many forms,
namely topological, metric, Kolmogorov-Sinai and Renyi. These definitions have been applied with varying
levels of success, for example, to estimating DNA sequence entropy (of introns and exons) {Ref:
http://www.math.psu.edu/koslicki/entropy.nb}
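As a toy illustration of the definition above, the sketch below computes the Shannon entropy of a short DNA sequence from its observed base frequencies (serious intron/exon entropy estimates, as in the cited study, use far more elaborate estimators):

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A sequence using all four bases equally reaches the maximum 2 bits/base;
# a single-letter sequence carries zero entropy (no uncertainty).
print(shannon_entropy("ACGTACGTACGT"))  # 2.0
print(shannon_entropy("AAAAAAAA"))      # zero entropy
```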
Such studies are intrinsic to the field of bioinformatics which focusses on developing and applying
computationally intensive techniques to achieve a better understanding of biological processes that span
research areas such as sequence alignment (see below), genome assembly, drug design, protein structure
alignment, prediction of gene expression and protein–protein interactions, and the modelling of evolution.
A basic tenet of the protein folding problem is that the information contained in the primary sequence is
sufficient to dictate the three-dimensional structure. From an information theoretical point of view, protein
folding can be envisioned as a communication process by which the sequence information is transmitted to
the three-dimensional structure. How much information is transferred from sequence to structure? How
redundant is the information? Is information transfer via protein folding a "noisy" or "noiseless"
communication channel? Any attempt to quantitatively address such questions must of necessity first establish
what the total information content of a protein sequence is. Additionally, one would like to develop
techniques to describe the information in a three-dimensional structure, the folded protein. Since
experimental determination of protein folding pathways remains difficult, computational techniques are often
used to simulate protein folding. Most current techniques to predict protein folding pathways are
computationally intensive and are suitable only for small proteins.
In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to
identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships
between the sequences {Ref: Mount DM. (2004): Bioinformatics: Sequence and Genome Analysis, Cold Spring
Harbor Lab. Press, NY}. “In sequence alignments of proteins, the degree of similarity between amino
acids occupying a particular position in the sequence can be interpreted as a rough measure of
how conserved a particular region or sequence motif is among lineages. The absence of substitutions, or the
presence of only very conservative substitutions (that is, the substitution of amino acids whose side chains have
similar biochemical properties) in a particular region of the sequence, suggest that this region has structural or
functional importance. Although DNA and RNA nucleotide bases are more similar to each other than are amino
acids, the conservation of base pairs can indicate a similar functional or structural role. ……
Multiple sequence alignments try to align all of the sequences in a given query set and are often used in
identifying conserved sequence regions across a group of sequences hypothesized to be evolutionarily related.
Such conserved sequence motifs can be used in conjunction with structural and mechanistic information to
locate the catalytic active sites of enzymes.” {Ref: http://en.wikipedia.org/wiki/Sequence_alignment}.
Several entropy-based methods have been recently developed for scoring sequence conservation in protein
multiple sequence alignments. High scoring amino acid positions may correlate with structurally or functionally
important residues. Studies have shown that using amino acid background frequencies in the entropy-based
scoring schemes results in improved performance in identifying functional sites from protein multiple
sequence alignments {Ref: http://www.biomedcentral.com/1471-2105/7/385; B J Strait and T G Dewey: The
Shannon Information Entropy of Protein Sequences, Biophys J. 1996 July; 71(1): 148–155}.
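A toy sketch of entropy-based conservation scoring on alignment columns follows. The four-sequence alignment is invented for illustration; real scoring schemes additionally weight by background amino-acid frequencies, as the cited study describes:

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of one alignment column; low = conserved."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical toy multiple sequence alignment (one sequence per row).
alignment = ["MKVL", "MKIL", "MRVL", "MKVF"]

for i, column in enumerate(zip(*alignment)):
    h = column_entropy(column)
    flag = "fully conserved" if h == 0 else ""
    print(f"column {i}: {''.join(column)}  H = {h:.2f} bits  {flag}")
```

Column 0 (all M) scores zero entropy and would be flagged as a candidate structurally or functionally important position.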
Pre-biotic life
We conclude this discussion by examining briefly some theories that reflect on the question of how life arose
on Earth. The nature of the earliest forms of life on our planet is unknown and may remain that way, though we
keep coming closer to understanding what those forms may have been like; there are both religious and
scientific views about the origin of life. It is conceivable that the first genetic code was a mere stripped-down version of our
present code based on the limited set of available amino acids then present in the prebiotic environment.
These amino acids, numbering no more than ten, probably arose (along with other molecules) from electric
discharges in the atmosphere, which then comprised a mixture of gases rich in hydrogen. Their
relative abundance can be predicted by thermodynamics and this was reflected in the composition of the early
proteins that helped shape the genetic code (Ref: Higgs and Pudritz: arxiv.org/pdf/0904.0402). Two other
major sources of prebiotic amino acid synthesis have also been proposed, namely their formation in
hydrothermal vents and their extra-terrestrial delivery to Earth via meteorites. One prominent theory about
the origins of life, called the RNA World model, postulates that because RNA can function as both a gene and
an enzyme, RNA might have come before DNA and protein and acted as the ancestral molecule of life with a
simpler mechanism for replication that probably did not require supporting proteins and other cellular
components. Most scientists tentatively accept the theory of spontaneous origin, which is that life evolved
from inanimate matter. In this view (referred to as abiogenesis), the combined action of thermodynamics and
subsequent natural selection was the force leading to life. As changes in molecules caused them to persist
longer in the environment, they became more enabled to form increasingly complex associations, culminating
in the evolution of cells.
Recent scientific studies have added credibility to the process of molecular self-assembly. One of these is a
report by scientists at the Scripps Research Institute which is in support of the RNA World model. The report
discloses an interesting first synthesis of RNA enzymes that can replicate themselves without the help of any
proteins or other cellular components. The replicating system actually involves two enzymes, each composed
of two subunits and each functioning as a catalyst that assembles the other. This cross-replication process is
cyclic and proceeds indefinitely requiring only a small starting amount of the two enzymes and a steady supply
of the subunits {Ref: http://www.sciencedaily.com/releases/2008/12/081218213634.htm}.
In a related study, scientists have addressed the question of how ancient RNA could have joined together to
reach a biologically relevant length without a plethora of supporting enzymes {Ref: http://www.
sciencedaily.com/releases/2008/12/081218213634.htm}. They found that under favourable conditions (acidic
environment and temperature lower than 70 °C), pieces ranging from 10–24 bases in length could naturally
fuse into larger fragments, generally within 14 hr. This spontaneous fusing, or ligation, would afford a simple
way for RNA to overcome initial barriers to growth and reach a biologically important size at around 100 bases
long.
In another study, scientists at the Georgia Institute of Technology have hypothesised that before there were
protein enzymes to make DNA and RNA, there were small molecules present on the pre-biotic Earth that could
have helped make these polymers by promoting molecular self-assembly. In support of this, they note their
experimental finding that the molecule ethidium bromide (a well-known DNA intercalator) assists short
oligonucleotides in forming long polymers and also selects the structure of the base pairs that hold together
two strands of DNA {Ref: http://www.sciencedaily.com/releases/2010/03/100308151043.htm}.
Appendix
Recall that the first law can be written in the form:
dU = dQ - dW
For reversible processes only, work or heat may be rewritten as
dW = PdV
dQ = TdS
Substitution leads to other forms of the first law true for reversible processes only:
dU = dQ - PdV, substituted for a reversible dW
dU = TdS – dW, substituted for a reversible dQ
Given the formulation above, the first and second laws can be combined to yield the fundamental
thermodynamic relation:
dU = TdS – PdV (heat + work)
The above relation is always true because it is a relation between properties and is now independent of
process.
In considering the total energy of a thermodynamic system, we define a new state function called
enthalpy (H) which is the sum of the internal energy (U) and the product of pressure and volume (PV):
H = U + PV
When a process occurs at constant pressure, the heat evolved (either released or absorbed) is equal to the
change in enthalpy. Enthalpy is usually expressed as the change in enthalpy (ΔH) for a process between
initial and final states:
ΔH = ΔU + Δ(PV)
If temperature and pressure remain constant through the process and the work is limited to pressure-volume work, then the enthalpy change is given by the equation:
ΔH = ΔU + PΔV
The increase in enthalpy of a system is exactly equal to the energy added through heat, provided that the
system is under constant pressure and that the only work done on the system is expansion work:
ΔH = q
If the reaction is endothermic (that is, absorbs heat from the surroundings), q > 0 (positive) and hence ΔH is
also positive. The reverse applies for an exothermic reaction.
Differentiation of the defining enthalpy equation yields
dH = dU + d(PV) = dU + PdV + VdP
Substituting, dU = TdS –PdV, we have
dH = TdS +VdP
Some reactions are spontaneous because they give off energy in the form of heat (ΔH < 0). Others are
spontaneous because they lead to an increase in the disorder of the system (ΔS > 0). Calculations of ΔH and ΔS
can be used to probe the driving force behind a particular reaction. What happens when one of the potential
driving forces behind a chemical reaction is favourable and the other is not? We can answer this question by
defining a new quantity known as the Gibbs free energy (G) of the system, which reflects the balance between
these forces.
The Gibbs free energy of a system at any moment in time is defined as
G = H – TS
The Gibbs free energy of the system is a state function because it is defined in terms of thermodynamic
properties that are state functions. The change in the Gibbs free energy of the system that occurs during a
reaction is: ΔG = ΔH – Δ(TS). At constant temperature, this can be written as ΔG = ΔH – TΔS.
Differentiation of the defining equation for G gives
dG = dH – TdS – SdT = TdS + VdP – TdS – SdT = VdP – SdT
For a constant temperature process, the change of G with pressure is dG =VdP, and for a constant pressure
process, the change of G with temperature is dG = - SdT.
In the case of an ideal gas, V = RT/P; hence, dG = (RT/P) dP = RT d(ln P), or ΔG = RT ln(P2/P1).
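As a quick numerical check of the ideal-gas result ΔG = RT ln(P2/P1), a minimal sketch for one mole at 298 K:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_G_ideal_gas(T, P1, P2):
    """Free energy change per mole for an isothermal pressure change of an
    ideal gas: integrating dG = (RT/P) dP gives dG = RT ln(P2/P1)."""
    return R * T * math.log(P2 / P1)

# Doubling the pressure at 298 K costs RT ln 2, about +1.72 kJ/mol;
# halving it releases exactly the same amount.
print(delta_G_ideal_gas(298, 1.0, 2.0))  # about +1717 J/mol
print(delta_G_ideal_gas(298, 2.0, 1.0))  # about -1717 J/mol
```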
The change in the free energy of a system that occurs during a reaction can be measured under any set of
conditions. If the data are collected under standard-state conditions, the result is the standard-state free
energy of reaction (ΔG°). For chemical reactions in solution, the standard free energy change, ΔG°, is that for
converting 1 mol/L of reactants into 1 mol/L of products:
ΔG° = ΔH° − TΔS°
The advantage of the equation defining the free energy of a system is its ability to determine the relative
importance of the enthalpy and entropy terms as driving forces behind a particular reaction. The change in the
free energy of the system that occurs during a reaction measures the balance between the two driving forces
that determine whether a reaction is spontaneous. As we have seen, the enthalpy and entropy terms have
different sign conventions:
Favourable: ΔH° < 0 and ΔS° > 0; Unfavourable: ΔH° > 0 and ΔS° < 0
The entropy term is therefore subtracted from the enthalpy term when calculating ΔG° for a reaction. Because
of the way the free energy of the system is defined, ΔG° is negative for any reaction for which ΔH° is negative
and ΔS° is positive. ΔG° is therefore negative for any reaction that is favoured by both the enthalpy and
entropy terms. We can therefore conclude that any reaction for which ΔG° is negative should be favourable, or
spontaneous. Such a reaction is labelled exergonic.
Favourable, or spontaneous reactions: ΔG° < 0

Conversely, ΔG° is positive for any reaction for which ΔH° is positive and ΔS° is negative. Any reaction for
which ΔG° is positive is therefore unfavourable. Such a reaction is labelled endergonic.

Unfavourable, or non-spontaneous reactions: ΔG° > 0

For equilibrium reactions, ΔG° = 0.
How does this equation relate to the entropy change of the universe, ΔS univ, which we already know is the
sole criterion for spontaneous change?
The defining equation for ΔSuniv is:
ΔSuniv = ΔSsurr + ΔSsys
Because most reactions (a change in the system) are either exothermic or endothermic, they are accompanied
by a flow of heat q across the system boundary. The enthalpy change of the reaction ΔH is defined as the flow
of heat into the system from the surroundings when the reaction is carried out at constant pressure, so the
heat withdrawn from the surroundings will be –q which will cause the entropy of the surroundings, ΔSsurr, to
change by –q / T = –ΔH/T. We can therefore rewrite the equation for ΔSuniv as
ΔSuniv = (– ΔH/T) + ΔSsys
Multiplying through by –T , we obtain
–TΔSuniv = ΔH – TΔSsys
This expresses the entropy change of the universe in terms of thermodynamic properties of
the system exclusively. If –TΔSuniv is denoted by ΔG, then we have the Gibbs free energy change for the
process. It is to be noted that although a statement that the spontaneity of a reaction is dependent on both
enthalpy and entropy changes of a reaction is technically correct, it, however, disguises the important fact that
ΔSuniv, which this equation expresses in an indirect way, is the only criterion of spontaneous change. Note that
ΔG is only considered as entropy change (when divided by T) when no useful work of any kind is done by the
heat transfer in the system or in the surroundings.
Since most chemical and phase changes of interest to chemists take place under conditions of constant
temperature and pressure, the Gibbs energy is the most useful of all the thermodynamic properties of a
substance, and it is closely linked to the equilibrium constant.
We measure the position of equilibrium in a reversible reaction by the equilibrium constant, designated by the
letter K.
aA + bB ⇌ cC + dD
K = ([C]^c [D]^d) / ([A]^a [B]^b)
The free energy change for the reaction is given by
ΔGrxn = ΔGpdts – ΔGreactants = (cGC + dGD) – (aGA + bGB)
The more negative is ΔGrxn , the more the domination of products over reactants, and the bigger is the value of
the equilibrium constant. We now proceed to derive this relationship mathematically, by considering the
equilibrium case where the reactants and products are gaseous and assumed to be ideal.
As we have seen previously, the molar free energy is given by the expression dG = RT dP/P. Integrating this at
constant temperature T gives G = G° + RT ln(P/P°), where G° is the molar free energy of the gas in its standard
or reference state of 1 atm. Since P° is unity, the above equation simplifies to G = G° + RT ln P and is valid at all
times. For non-ideal gases, fugacity replaces P. It, therefore, follows that

ΔGrxn = (cG°C + dG°D) – (aG°A + bG°B) + RT [(c ln PC + d ln PD) – (a ln PA + b ln PB)]
Expressing the G° terms collectively as ΔG° and combining the logarithmic terms into a single fraction, we get:

ΔGrxn = ΔG° + RT ln [(PC^c PD^d) / (PA^a PB^b)] = ΔG° + RT ln Q,

where ΔG° is the free energy change when the reaction occurs with the reactants and products in their
standard states of 1 atm, and Q may be called the reaction quotient at any period in time until the equilibrium
situation is reached, when we designate Q by K. The general case will not be one of chemical equilibrium, and
reaction in one direction or the other will occur spontaneously so as to reduce the free energy of the system to
a minimum. At this point ΔGrxn will be zero for any small degree of reaction in either direction and the system
will be in equilibrium. Thus, ΔG° = –RT ln Kp, where Kp is the equilibrium value of Q.
The above considerations also apply to reactions in solution, where the standard free energy change ΔG° is the
change in free energy for converting 1 mol/L of reactants into 1 mol/L of products.
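The relation ΔG° = −RT ln K can be inverted to estimate an equilibrium constant from a standard free energy change. A minimal sketch, reusing for illustration the glutamine synthetase value quoted earlier in this article (−16.3 kJ/mol):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def equilibrium_constant(dG0_kJ, T=298.0):
    """K = exp(-dG0 / RT); dG0 supplied in kJ/mol."""
    return math.exp(-dG0_kJ * 1000.0 / (R * T))

def standard_dG(K, T=298.0):
    """Inverse relation: dG0 = -RT ln K, returned in kJ/mol."""
    return -R * T * math.log(K) / 1000.0

K = equilibrium_constant(-16.3)   # favourable reaction -> K >> 1
print(f"K ≈ {K:.0f}")             # on the order of several hundred
print(standard_dG(K))             # round-trips back to about -16.3
```

A modestly negative ΔG° thus translates into an equilibrium lying heavily on the product side.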
The practical importance of the Gibbs energy is that it allows us to make predictions based on the properties
(ΔG° values) of the reactants and products themselves, eliminating the need to experiment. But bear in mind
that while thermodynamics always correctly predicts whether a given process can take place (is spontaneous
in the thermodynamic sense), it is unable to tell us if it will take place at an observable rate.
By way of example, consider the reaction, H2(g) + Cl2(g) → 2HCl(g). This has a free energy change given by:
ΔGrxn = ΣΔG°f(products) – ΣΔG°f(reactants) = 2ΔG°f(HCl) = 2(–22.77 kcal/mol) = –45.54 kcal/mol

Note that all elements in their standard states have zero free energy of formation, ΔG°f, as there is no change
involved.

The reaction is exergonic, but no perceptible reaction occurs! On the other hand, the presence of a small
amount of light of the right frequency will catalyse the reaction, which then proceeds explosively.
One particularly useful worked example for students is the following {Ref: http://www.chem1.com/
acad/webtext/thermeq/TE4.html}:
“The reaction ½ O2(g) + H2(g) → H2O(l) is used in fuel cells to produce an electrical current. The reaction can
also be carried out by direct combustion.
Thermodynamic data: molar entropies, S°, in J mol⁻¹ K⁻¹: O2(g) 205.0; H2(g) 130.6; H2O(l) 70.0; H2O(l) ΔH°f =
–285.9 kJ mol⁻¹.
Use this information to find
a) The amount of heat released when the reaction takes place by direct combustion;
b) The amount of electrical work the same reaction can perform when carried out in a fuel cell at 298K under
reversible conditions;
c) The amount of heat released under the same conditions.”
Solution: First, we need to find ΔH° and ΔS° for the process. Recalling that the standard enthalpy of formation
of the elements is zero, ΔH° = ΔH°f(products) – ΔH°f(reactants) = –285.9 kJ mol⁻¹ – 0 = –285.9 kJ mol⁻¹.
Similarly, ΔS° = S°(products) – S°(reactants) = (70.0) – (½ × 205.0 + 130.6) = –163.1 J K⁻¹ mol⁻¹.

a) When the hydrogen and oxygen are combined directly, the heat released will be ΔH° = –285.9 kJ mol⁻¹.

b) The maximum electrical work the fuel cell can perform is given by ΔG° = ΔH° – TΔS° = –285.9 kJ mol⁻¹ –
(298 K)(–163.1 J K⁻¹ mol⁻¹) = –237.3 kJ mol⁻¹.

c) The heat released in the fuel cell reaction is the difference between the enthalpy change (the total energy
available) and the reversible work that was expended:

ΔH° – ΔG° = TΔS° = (298 K)(–163.1 J K⁻¹ mol⁻¹) = –48,604 J mol⁻¹ ≈ –48.6 kJ mol⁻¹.
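The arithmetic of this worked example can be checked in a few lines, using only the thermodynamic data quoted in the problem statement:

```python
# Verifying the fuel-cell example: 1/2 O2(g) + H2(g) -> H2O(l) at 298 K.
S = {"O2": 205.0, "H2": 130.6, "H2O": 70.0}  # molar entropies, J mol^-1 K^-1
dHf_H2O = -285.9                              # kJ/mol (elements: dHf = 0)
T = 298.0                                     # K

dS = S["H2O"] - (0.5 * S["O2"] + S["H2"])     # J K^-1 mol^-1
dH = dHf_H2O                                  # kJ/mol
dG = dH - T * dS / 1000.0                     # kJ/mol: max electrical work
q_rev = T * dS / 1000.0                       # kJ/mol: reversible heat released

print(f"dS = {dS:.1f} J/K/mol")   # -163.1
print(f"dG = {dG:.1f} kJ/mol")    # about -237
print(f"q  = {q_rev:.1f} kJ/mol") # about -48.6
```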
vg kumar das (10 August 2012)
[email protected]