Quantum Foundations of Life Sciences, Bio- and Nanotechnology.
PART 1. Quantum foundations.
Conceptual or fundamental problems in quantum mechanics.
Quantum States.
STM, SPM, AFM.
Can Proximal Probes Move Atoms?
Time irreversible. Models of Dissipation. Keldysh, Kantorovich, Razavy.
Problems with more precise many particles theories.
Introduction to tunnelling based on path integrals.
2.1 Quantum escape problem. The Crossover from Classical to Quantum Escape.
2.3 Path Integral Formulation
PART 2. Mesoscopic Level. Self-Assembly and Self-organization
Introduction to statistics.
Path integral approach to random motion with nonlinear friction
Nanotechnology.
Isn't Nanotechnology Just Very Small Microtechnology?
What Are the Main Tools Used for Molecular Engineering?
How Can Protein Engineering Build Molecular Machines?
Is There Anything Special About Proteins?
If Chemists Can Make Molecules, Why Aren't They Building Molecular Machines?
What is the Infoenergy? Is it Free Energy? Is it Negentropy?
Application in dentistry
Part 3. Life as Infoenergy.
Life is Infoenergy.
About Entropy and Biology
Part 4. Infoenergy: Time, Symmetry and Structure.
Thermodynamics of Healthy Self-replicating Organisms and Sustainable Systems. What is Schrödinger's Negentropy?
How Organisms Make a Living
Cycles of Time
Redefining the Second Law for Living Systems
Time of Development. Infoenergy.
Time and Information. Thermodynamic Entropy
Logical Entropy
Example of applications: Probing water structures in nanopores by tunnelling.
PART 1. Quantum foundations.
Conceptual or fundamental problems in quantum mechanics.
We first review the conceptual problems of quantum mechanics at a fundamental level. It is shown that the proposed theory solves a large part of these problems, such as those related to the status and meaning of wavefunctions, their statistical properties, non-locality, dissipation, and information and measurement theories on both the quantum and the classical (macroscopic) levels. The basic situation is this: we do have a theory that describes what we measure, at least in time-reversible (dissipation-free) approximations; it just does not make sense. First of all, we still have poorly understood gaps between the life and non-life sciences. In so-called fundamental theories many well-known authors still claim that time is reversible in all important phenomena, and are thereby unable to include life, as well as other real phenomena, in the picture. We will see that our time of development (consistent with information theories) solves these problems. There are generally two different categories of approaches to these problems:
a. A formal "mathematical" approach: we accept the axioms and merely manipulate their interpretations, trying, not very successfully, to formulate theorems. So-called superstring theories play at this level of understanding, but have no new results that can be compared not only with precise experiments but even with many interesting fundamental facts available in biology, engineering, economics, etc. One of the problems here is that a "time of development", as well as information theories and the life and economic sciences, is still far from a complete axiomatics. We discuss this later in detail. Not all the basic concepts are yet known, infoenergy being one example. They still await their own Galileo, Newton, Einstein, Schrödinger and many others. We are in a pre-revolutionary situation now, and we will review this from a historical point of view too.
b. A realistic (pragmatic) "physical" approach: we do not a priori accept the whole system of axioms, which in any case does not yet exist, but think of microphysical objects in terms of 20th-century physical concepts, such as the wave function, the uncertainty and complementarity principles, and the well-known quantum formalisms of Schrödinger, Heisenberg, Feynman, Pauli, Dirac, Bohm, Penrose and others. This second approach seems to be based on physical concepts. And since the main concepts of matter in classical physics are particles or waves, the existing theories generally use one or the other, or a combination of both. We analyse the information aspects of the wave function and show their connection with the origins of life and mind. The uncertainty relations, if interpreted as a physical axiom, prohibit any local interpretation, because their immediate consequence is a spreading of wave packets: these waves therefore cannot be physical [Werner Hofer]. In this case
the interpretation is not even self-consistent. There exist several experimental loopholes in the Bell-test experiments which restrict their validity. But assuming that even more precise experiments yield the same result, i.e. a violation of Bell's inequalities and thus a contradiction with any local framework, the problem must be solved from a theoretical rather than an experimental angle. We will
reformulate the “wave packets” as information or infoenergy properties of matter.
We show here that some of these problems can be overcome within a theoretical
framework of matrix algebra.
Conventional reasoning usually claims that there is no way to describe our
experiences in microphysics by a logically consistent framework of classical
concepts. But even if this were the case, and we are forced to accept the
peculiarities of non-locality [1, 2], individual particles being in fact ensembles of
particles [3], measurements without any interference with measured objects [4],
infinities not being infinite [5], or spreading wave packets not really spreading [6],
the question seems legitimate, whether all these effects occur in reality, or rather
in some logical representation by a theory, which at once employs and
contradicts classical physics [7]. In the usual course of scientific development
there exists an intricate balance between speculative or theoretical concepts and
experimental results: and it is usually thought that progress is made by an
interplay between theory (or creativity) and measurement. In scientific
revolutions, an idea forces the reorientation of most of the - already accumulated
- knowledge, and in this case the meaning of the facts may be subject to sudden
changes [7].
Quantum States.
We consider here (rather briefly) the interrelation of such topics as Heisenberg uncertainty and the scanning tunnelling microscope. [Werner Hofer] showed, by a statistical analysis of high-resolution scanning tunnelling microscopy (STM) experiments, that the interpretation of the density of electron charge as a statistical quantity leads to a conflict with the Heisenberg uncertainty principle. Given the precision of these experiments, he found that the uncertainty principle would be violated by close to two orders of magnitude if this interpretation were correct. Today, STMs have reached a level of precision which is quite
astonishing. While it was barely possible to resolve the positions of single atoms
in early experiments it is now routine not only to resolve the atomic positions but
also e.g. the standing wave pattern of surface state electrons on metal surfaces,
and even very subtle effects like surface charging.
What the STM measures, in these experiments, is the current between a surface
and a very sharp probe tip. The current itself is proportional to the density of
electron charge at the surface. While one may dispute this claim for some special
cases, and while it can be shown for specific situations that an explicit simulation of the scattering process does improve the agreement between experiment and theory (see, for example [8]), in measurements on metal surfaces the bias range is so low and the dominance of single electron states at the tip so high, that the Tersoff-Hamann approximation [9], which assumes tunnelling into a single tip state with radial symmetry, is a very good approximation. Then the map of
tunnelling currents at a surface is, but for a constant, equal to the map of electron
charge densities at the same surface. A standard deviation of the density of
charge due to the uncertainty of position and momentum can thus be mapped
identically onto a standard deviation of the tunnelling current, which can
immediately be compared to experimental results. In density functional theory
(DFT) a many-electron system is comprehensively described by the density of
electron charge [11, 12]. However, the density itself, within the framework of
second quantization, is thought to be a statistical quantity [13]. In principle, this
statement can be tested by a statistical analysis of high resolution STM
measurements including the uncertainty relations [14].
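Schematically (our notation, not taken from the text): within the Tersoff-Hamann picture the constant-height current map is proportional to the surface charge-density map,

I(x,y) \simeq C\,\rho(x,y;E_F) \quad\Longrightarrow\quad \sigma_I(x,y) = C\,\sigma_\rho(x,y),

so any statistical spread in ρ implied by the uncertainty relations would appear, scaled by the same constant C, as a spread in the measured tunnelling current.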
How does quantum mechanics describe an isolated atom, a molecule or, in general, any other isolated object? We are assuming that there are no external forces
acting on the object and that it remains localized, but there could be internal
forces acting within it. As an important feature of the description of such an
object, we separate this description into (i) the external characterization of the
object as a whole and (ii) its internal detailed “system” with subsystems and
geometrical structure.
For a general quantum system, these values would only be approximations,
because higher-order terms could start to become important. However, various
systems can be very well approximated in this way. Moreover, rather
remarkably, it turns out that the quantum field theory of photons—or of any
other particle of the kind referred to as a boson—can be treated as though the
entire system of bosons were a collection of oscillators. These oscillators are
exactly of simple harmonic type (where there are no higher terms in the
Hamiltonian) when the bosons are in a stationary state with no interactions
between them. Accordingly, this 'harmonic oscillators’ picture provides a quite
broadly applicable scheme. Nevertheless, to proceed more thoroughly, a
detailed knowledge of the interactions is needed. For example, a hydrogen atom consists of an electron in orbit about a proton nucleus (the latter usually taken to be fixed, as a good approximation). This approach was developed by Bohr, Heisenberg and Pauling, and after the Second World War it was developed further with the help of "second" quantization. We review it later, when describing the STM device and
experiments.
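For reference, the "collection of oscillators" statement above corresponds to the standard free-boson Hamiltonian (textbook form, written in our notation):

H = \sum_k \hbar\omega_k \left( a_k^\dagger a_k + \tfrac{1}{2} \right), \qquad [a_k, a_{k'}^\dagger] = \delta_{kk'},

each mode k being an independent simple harmonic oscillator; interactions between the bosons add higher-order terms to this Hamiltonian.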
STM, SPM, AFM.
Can Proximal Probes Move Atoms?
To those thinking in terms of nanotechnology, STMs immediately looked promising
not only for seeing atoms and molecules but for manipulating them. IBM quickly saw
a commercial use, as explained by Paul M. Horn, director of physical sciences at the
Thomas J. Watson Research Center: "This means you can create a storage element
the size of an atom. Ultimately, the ability to do that could lead to storage that is ten
million times more dense than anything we have today." A broader vision was given
by another researcher, J. B. Pethica, in the issue of Nature in which the work
appeared: "The partial erasure reported by Foster et al. implies that molecules may
have pieces deliberately removed, and in principle be atomically 'edited,' thereby
demonstrating one of the ideals of nanotechnology." The challenge will be how to
build up structures atom by atom.
As usual (see e.g. the book [Chen, STM]), we look for solutions of the Schrödinger equation that are linear combinations of the left- and right-localized states. A solution depends on the initial condition; here we take the electron to be in the left-hand-side state at t = 0.
We compared this with the path-integral instanton formulation and first-order phase transitions (see e.g. [Schulman]). The concept of complex "paths" and the idea of complex time, which we study in connection with the instantons, are closely related. The motion of a particle under the barrier can be found from the Schrödinger equation provided that we replace the time t with the imaginary time τ = it; the most probable escape path is then found from the corresponding imaginary-time (Euclidean) action.
This solution describes a back-and-forth migration of the electron between the two protons. At t = 0, the electron is revolving about the left-hand-side proton with a frequency ν = |E0|/h. Then the electron starts migrating to the right-hand side. At t = h/(2|M|), the electron has migrated entirely to the right-hand side; and at t = h/|M|, the electron comes back to the left-hand side, etc. In other words, the electron migrates back and forth between the two protons with a frequency ν = |M|/h. Similarly, we have another solution which starts from the right-hand-side state at t = 0. Linear combinations of these solutions are also "good" solutions of the time-dependent Schrödinger equation. For
example, there is a state symmetric with respect to the median plane, as well as an antisymmetric state (beatings!). This discussion is a quantitative
formulation of the concept of resonance introduced by Heisenberg (1926) for
treating many-body problems in quantum mechanics. Heisenberg illustrated
this concept with a classical mechanics model (see Pauling and Wilson
1935): two similar pendulums connected by a weak spring.
At t < 0, the right-hand-side pendulum is held still, and the left hand-side
pendulum is set to oscillate with a frequency |E0|/h. At t=0, the right-hand-side
pendulum is released. Because of the coupling through the weak spring, the
left-hand-side pendulum gradually ceases to oscillate, transferring its
momentum to the right-hand-side pendulum, which now begins its oscillation.
At t = h/(2|M|), the right-hand-side pendulum reaches the maximum amplitude,
and the left-hand-side pendulum stops. Then the process reverses. This
mechanical system has two normal modes, with the two pendulums
oscillating in the opposite directions or in the same direction, with frequencies
|E0 + M|/h and |E0 – M|/h, respectively. These two normal modes correspond
to the symmetric and antisymmetric states of the hydrogen molecular ion,
respectively. The two curves are the exact solutions of the two low-energy
solutions of the H2+ problem. To a good approximation, these solutions can
be represented by the symmetric and antisymmetric superpositions of the
distorted hydrogen wavefunctions [Chen].
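To make the beating explicit, here is a minimal numerical sketch (ours, not from [Chen]) that propagates the two-level model with on-site energy E0 and hopping matrix element M; note that different texts define M and the migration frequency with different factor-of-two conventions, so the sketch only illustrates the qualitative back-and-forth migration.

```python
import numpy as np

# Two-level model of the H2+ electron in the basis {|L>, |R>}, with hbar = 1
# and illustrative (assumed) parameter values.
E0, M = -1.0, -0.1
H = np.array([[E0, M],
              [M, E0]], dtype=complex)

# Exact propagation via the eigendecomposition of H.
evals, evecs = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0], dtype=complex)   # electron starts on the left proton

for t in np.linspace(0.0, np.pi / abs(M), 9):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi = U @ psi0
    print(f"t = {t:6.2f}   P_left = {abs(psi[0])**2:.3f}   P_right = {abs(psi[1])**2:.3f}")
```

The printed populations oscillate between the two protons, and an equal-weight superposition of the symmetric and antisymmetric stationary states reproduces exactly this beating.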
Time irreversible. Models of Dissipation. Keldysh, Kantorovich, Razavy.
The main problem in addressing dissipation at the quantum level is how to envisage the mechanism of irreversible loss of energy. Quantum mechanics usually deals with the Hamiltonian formalism, in which the total energy of the system is a conserved quantity, so in principle it would not be possible to describe dissipation in this framework. The idea for overcoming this issue consists in splitting the total system into two parts: the quantum system where dissipation occurs, and a so-called environment or bath into which the energy of the former flows. The way the two systems are coupled depends on the details of the microscopic model, and hence on the description of the bath. To include an irreversible flow of energy (i.e., to avoid Poincaré recurrences in which the energy eventually flows back to the system) requires that the bath contain an infinite number of degrees of freedom. Notice that, by virtue of the principle of universality, it is expected that the particular description of the bath will not affect the essential features of the dissipative process, as long as the model contains the minimal ingredients needed to produce the effect [Kantorovich]. The simplest way to model the bath was proposed by Feynman and Vernon in 1963. In this description the bath is a sum of an infinite number of harmonic oscillators, which in quantum mechanics represents a set of free bosonic particles. The aim of the model is to study the effects of dissipation on the dynamics of a particle that can hop between two different positions. The dissipative two-level system is also a paradigm in the study of quantum phase transitions, computation and renormalization.
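As a concrete sketch of this system-plus-bath construction (the standard Caldeira-Leggett form usually associated with the Feynman-Vernon model; the notation is ours, the text itself gives no formula):

H = \frac{p^{2}}{2m} + V(q) + \sum_{j}\left[ \frac{p_j^{2}}{2m_j} + \frac{m_j\omega_j^{2}}{2}\left( x_j - \frac{c_j\,q}{m_j\omega_j^{2}} \right)^{2} \right],

where the bath oscillators (x_j, p_j) influence the reduced dynamics of the particle only through the spectral density J(\omega) = \frac{\pi}{2}\sum_j \frac{c_j^{2}}{m_j\omega_j}\,\delta(\omega-\omega_j), which is why the detailed microscopic description of the bath is largely immaterial.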
Problems with more precise many particles theories.
How are we to treat many-particle systems according to standard non-relativistic
Schrödinger picture? It is well known from solving the Schrödinger equation that we have a single Hamiltonian, in which all momentum variables appear
for all the particles in the system. Each of these momenta gets replaced, in the
quantization prescription of the position-space representation, by a partial
differentiation operator with respect to a relevant position coordinate of that
particular particle. All these operators have to act on something and, for
consistency of their interpretation, they must all act on the same thing. This is the
wavefunction. As stated above, we must indeed have one wavefunction for the
entire system, and this wavefunction must indeed be a function of the different
position coordinates of all the separate particles [R. Penrose].
Let us pause here to elaborate the enormity of this apparently simple last
requirement. If it were the case that each particle had its own separate
wavefunction, then for n scalar (i.e. non-spinning) particles, we should have n
different complex functions of position. Although this is a bit of a stretch of our
visual imaginations, for n little particles, it is something that we can perhaps just
about cope with. For visualization purposes, we could have a picture not so
unlike that of a field in space with n different components, where each
component could itself be thought to describe a separate 'field'.
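Schematically, in standard notation (ours): for n scalar particles the theory uses one wavefunction on configuration space,

\Psi = \Psi(\mathbf{x}_1,\ldots,\mathbf{x}_n,t), \qquad i\hbar\,\frac{\partial\Psi}{\partial t} = \left[ -\sum_{i=1}^{n}\frac{\hbar^{2}}{2m_i}\nabla_i^{2} + V(\mathbf{x}_1,\ldots,\mathbf{x}_n) \right]\Psi,

rather than n separate single-particle functions \psi_1(\mathbf{x},t),\ldots,\psi_n(\mathbf{x},t); every momentum operator -i\hbar\nabla_i acts on this single object.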
A noteworthy feature of standard quantum theory is that, for a system of many
particles, there is only one time coordinate, whereas each of the independent
particles involved in the quantum system has its own independent set of position
coordinates. This is a curious feature of non-relativistic quantum mechanics if we
like to think of it as some kind of limiting approximation to a 'more complete'
relativistic theory. For, in a relativistic scheme, the way that we treat space is
essentially the way that we should also treat time. Since each particle has its own
space coordinates, it should also have its own time coordinate. But this is not
how ordinary quantum mechanics works. There is only one time for all the
particles.
When we think about physics in an ordinary 'non-relativistic' way, this may indeed
seem sensible, since in non-relativistic physics, time is external and absolute,
and it simply 'ticks away' in the background, independently of the particular
contents of the universe at any one moment. But, since the introduction of
relativity, we know that such a picture can only be an approximation. What is the
'time' for one observer is a mixture of space and time for another, and vice versa.
Ordinary quantum theory demands that each particle individually must carry its
own space coordinate. Accordingly, in a properly relativistic quantum theory, it
should also individually carry its own time coordinate. Indeed, this viewpoint has
been adopted from time to time by various authors (see [Eddington, 1929], [Mott, 1929], [Dirac, 1932]), going back to the late 1920s, but it was never developed into a full-blown relativistic theory. A basic difficulty with allowing each particle its own
separate time is that then each particle seems to go on its merry way off into a
separate time dimension, so further ingredients would be needed to get us back
to reality.
Introduction to tunnelling based on path integrals.
2.1 Quantum escape problem. The Crossover from Classical to Quantum
Escape.
The classical formulation of the problem of escape from a metastable state (which in practice implies the close proximity of a phase transition, as for a supercooled vapour near the condensation transition) is based substantially on the fact that relaxation is sufficiently fast—much faster than the escape rate. Otherwise, there
would be no such thing as the rate of escape from a metastable state, and the
outgoing current would depend on the initial preparation of the system. Quite
differently, the “standard” quantum formulation is based on the notion of the
escape from a given quantum state. In quasiclassical approximation a particle
escapes from the level n with the probability

W(E_n) = \frac{\omega(E_n)}{2\pi}\,\exp\!\left[-2S(E_n)\right],        (2.1)

where the tunnelling action is

S(E) = \int_{q_1}^{q_2} dq\,\left[\,2m\left(U(q)-E\right)\right]^{1/2},        (2.2)

q_{1,2} are the turning points, and we set the Planck constant ℏ = 1. The
preexponential factor tells how often the particle “hits” the potential wall, and S is
the “action” for the motion under the barrier. If the system is coupled to a bath
then, strictly speaking, its energy spectrum becomes continuous, and the Gibbs
distribution over the energy levels is formed. We shall ignore the continuity of the
spectrum for a moment and consider only the consequences of the equilibrium
thermal energy distribution. At finite temperatures the system occupies not only
the lowest energy level, but also the excited ones, and it may have energies
higher than the barrier height, too. Therefore, there always exists a possibility of
a purely activation escape over the barrier. If this is not the dominating
mechanism, however, we may write the total escape rate as the sum of the
tunnelling rates for different levels weighted with the populations of the levels. In our range of temperatures the quantum correction decreases the tunnelling exponent R, and thus facilitates the escape, as one would naively expect.
Unfortunately, the argument works only in a rather narrow range of the relaxation
rates. The dissipation must be not too small so that the quantized energy levels
near the barrier top would not be depleted because of the escape, and not too
large so that the energy levels remain well-defined and the quantum-mechanical
expression for the tunnelling rate is applicable.
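As a rough numerical illustration of Eqs. (2.1)-(2.2) as written above (with ℏ = 1, an illustrative cubic potential of our own choosing, and the well frequency used as the attempt frequency), one can evaluate the tunnelling action and the quasiclassical escape rate at an energy below the barrier:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative metastable potential U(q) = (1/2) m w^2 q^2 - a q^3 (hbar = 1).
m, w, a = 1.0, 1.0, 0.1
U = lambda q: 0.5 * m * w**2 * q**2 - a * q**3

q_top = m * w**2 / (3.0 * a)      # barrier top, where dU/dq = 0
E = 0.5 * U(q_top)                # pick an energy halfway up the barrier

# Turning points q1 < q2 under the barrier, solving U(q) = E on either side of q_top.
q1 = brentq(lambda q: U(q) - E, 1e-9, q_top)
q2 = brentq(lambda q: U(q) - E, q_top, m * w**2 / a)

# Tunnelling action, Eq. (2.2), and escape rate, Eq. (2.1).
S, _ = quad(lambda q: np.sqrt(2.0 * m * max(U(q) - E, 0.0)), q1, q2)
W = (w / (2.0 * np.pi)) * np.exp(-2.0 * S)
print(f"turning points: q1 = {q1:.3f}, q2 = {q2:.3f}")
print(f"S(E) = {S:.3f},  W(E) ~ {W:.3e}")
```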
Let us consider now escape probability and partition function [see e.g. Dynkin et
al.]. Strictly speaking, if a particle is capable of tunnelling out of the potential well,
it must have a continuous energy spectrum. The discrete energy levels
associated with the original closed well are transformed into poles of the
scattering amplitude analytically continued to complex values of energy. If the
escape rate is small enough, these singularities are very close to the real axis,
and they can be observed in various spectra as sharp absorption or emission
peaks. If the properties of the system are dominated by such peaks, we can
ignore the fact that the spectrum is continuous, and describe the system in terms
of eigenstates of complex energy. In order to benefit from such a description, we
need to reformulate the escape problem as a non-Hermitian one.
This can be done by imposing the boundary conditions of unidirectional flow far
enough outside the well. Such conditions agree with our intuitive understanding
of escape as the motion away to infinity with no chance of scattering back. Now,
it is easy to relate the escape rate, i.e. the rate of reduction of the population of the state inside the well, to the imaginary part of the corresponding energy.
This relation can be obtained by multiplying the equations
H\psi = E\psi, \qquad H^{*}\psi^{*} = E^{*}\psi^{*},

where

H = -\frac{1}{2m}\nabla^{2} + U(q),

by \psi^{*} and \psi, respectively, and integrating the difference over an interval (q_0, q) that includes the well and, partly, the area outside the barrier,

-2\,\mathrm{Im}\,E \int_{q_0}^{q} dq\,|\psi|^{2} \;=\; \frac{1}{2im}\left(\psi^{*}\,\frac{\partial\psi}{\partial q} - \psi\,\frac{\partial\psi^{*}}{\partial q}\right),        (2.3)
Since the wave function is mostly concentrated inside the integration interval, the
integral gives the population of the well, while the r.h.s. of the obtained
expression is exactly the outgoing current j. Defining the escape rate as the ratio
of the current to the population of the well,
W = \frac{j}{\int dq\,|\psi|^{2}},

we obtain

W(E_n) = -2\,\mathrm{Im}\,E_n.        (2.4)
For a multi-level many-dimensional system the expression for the overall
probability of escape may be obtained by averaging the obtained equation over
the states of the system. Assuming that the imaginary parts of the energies are
small, we get
W = 2T\,\mathrm{Im}\sum_{n} Z^{-1}\exp(-E_n/T) = 2T\,\mathrm{Im}\,\ln Z = -2\,\mathrm{Im}\,F.        (2.5)
It is not surprising that the partition function and the free energy are complex,
since we are analyzing the problem where a particle can escape to infinity.
Another question is how good and how general the obtained expressions are.
First, we assumed that Im E is small for all energies, which is certainly not true for the states close to or above the barrier top, and one would expect that Eq. (2.5) should be modified when such energies become substantial. For not too small damping the modification becomes important in the range T > T0 [where T0 is the crossover temperature], but it amounts just to an extra prefactor. This prefactor can be found by accounting for the overbarrier reflection, which changes the
expression for the escape probability, and also for the modification of the
expression for the partition function. The limit of very small dissipation rate,
although mathematically rather involved, can be easily understood on physical grounds: the tunnelling rate is governed by the depletion of the region close to (or even not very close to) the barrier top, that is, by the difference between the real distribution and the equilibrium Gibbs distribution used in Eq. (2.5). Generally, the matter is delicate when the particle is coupled to a bath,
since in this case we have an infinite number of degrees of freedom, and all time
scales are present; the very fact that the particle is away from the well does not
mean that it will not be drawn back as a result of relaxation of the bath. From a
formal point of view, these problems arise because the current is determined by
a two-particle Green’s function, whereas the partition function is determined by
the one-particle Green’s function, and, in general, a two-particle Green’s function
is not expressed in terms of the one-particle one. Clearly, the approach does not
apply to the case of nonequilibrium systems like optically bistable systems, or
electrons in a Penning trap excited by a cyclotron radiation, or bistable chemical
and biochemical systems with in- and outgoing flows, etc.
However, in many cases of physical interest Eq. (2.5) (or its modification for T >
T0) does apply to systems coupled to a thermal bath, and enables us to answer a
few important physical questions including the following:
Does the dissipation increase or decrease the tunnelling probability? What is the
effect of small dissipation?
How does escape occur in the case of a heavily damped (overdamped) motion?
In order to answer these questions, it is convenient to analyze Eq. (2.5) in the
path integral representation.
2.3 Path Integral Formulation
In the physics of systems away from thermal equilibrium, one is interested primarily in understanding large quantum and classical fluctuations, in particular
tunnelling and activated processes. Large fluctuations play a key role in a broad
range of physical phenomena, from diffusion in solids to nucleation at phase
transitions, mode switching in lasers, and protein folding. Among important
applications are bifurcation amplifiers used in quantum measurements. No
generally accepted principles have been found that describe probabilities of large
fluctuations in nonequilibrium systems. A key to the theoretical analysis is that, in
a large fluctuation to a given state, a classical system is most likely to move
along a certain optimal path. Optimal paths for activated processes are physically
observable. We revealed generic features of the distribution of fluctuational paths
and showed that it may display critical behaviour.
Many predictions of the theory of activated processes, including the onset of
several types of the scaling behaviour of the escape rates, have been recently
confirmed by experiments on well-characterized systems. This includes
observation of a sharply peaked distribution of fluctuational paths in lasers and
the characteristic parameter dependence of the switching rates between different
types of coexisting periodic states of electrons in Penning traps, atoms in
modulated traps, micro- and nano-mechanical resonators, Josephson junction
based systems, and particles in modulated optical traps. Understanding
dynamics of activated processes paves the way to controlling them. We
developed a general nonadiabatic theory of the response of fluctuation
probabilities to external fields. This response can be exponentially strong. In a
broad parameter range the activation energy is linear in the field amplitude. The
response is then described in terms of the logarithmic susceptibility.
Decay of a metastable state is usually considered as resulting from tunnelling or
thermal activation. We have predicted that periodically modulated systems
display a different decay mechanism, quantum activation. Like tunnelling, quantum activation is due to quantum fluctuations; like thermal activation, it involves diffusion over an effective barrier confining the metastable state. It is often more probable than tunnelling, even at low temperatures.
Following Dynkin works, let us mention also that large fluctuations play an
important role in many physical phenomena, an example being spontaneous
switching between coexisting stable states of a system, like switching
between the magnetization states in magnets, or voltage/current states in
Josephson junctions, or macromolecule configurations or populations.
Typically, large fluctuations are rare events on the dynamical time scale of the
system. An important feature of large rare fluctuations in Markovian (no delay)
systems in thermal equilibrium with a bath is that the optimal fluctuational
path is the time reversed path in the absence of noise. This can be
understood from the argument that, in relaxation in the absence of noise, the
energy of the system goes into the entropy of the thermal reservoir, whereas
in a large fluctuation the entropy of the reservoir goes into the system energy.
The minimal entropy change corresponds to a time-reversed process [13]. In
other words, the optimal trajectory for a large fluctuation corresponds to the
noise-free trajectory with the inverted sign of the friction coefficient. One can
view this property also as a consequence of the symmetry of transition rates
in systems with detailed balance discussed for diffusive systems described by
the Fokker-Planck equation by Kolmogorov [22]. The situation is more
complicated if the dissipative force is delayed (linear coupling to a thermal
reservoir, which leads to a delayed viscous friction).
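For a concrete (and standard) illustration of this time-reversal property, consider an overdamped coordinate obeying \dot q = -U'(q) + \xi(t) with white noise of intensity D (our example, not the text's). The probability of a path is controlled by the Onsager-Machlup action

S[q] = \frac{1}{4D}\int dt\,\bigl(\dot q + U'(q)\bigr)^{2},

which vanishes on the noise-free relaxation path \dot q = -U'(q), while the least-action path for a fluctuation climbing uphill is \dot q = +U'(q), i.e. the time-reversed relaxation path, with S = \Delta U / D, reproducing the familiar activation exponent.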
Macroscopic Quantum Systems and the Quantum Theory of Measurement.
We discuss now the next question: How far do experiments on the so called
"macroscopic quantum systems" such as superfluids and superconductors
test the hypothesis that the linear Schrodinger equation may be extrapolated
to arbitrarily complex systems (see [Hawking publications] as well)? It is
shown that the familiar "macroscopic quantum phenomena" such as flux
quantization and the Josephson effect are irrelevant in this context, because
they correspond to states having a very small value of a certain critical
property ("disconnectivity") while the states important for a discussion of the
quantum theory of measurement have a very high value of this property.
Various possibilities for verifying experimentally the existence of such states
are discussed, with the conclusion that the most promising is probably the
observation of quantum tunnelling between states with macroscopically
different properties. It is shown that because of their very high "quantum purity" and consequent very low dissipation at low temperatures, superconducting systems offer good prospects for such an observation [Leggett].
This was a first step into an area which is of fundamental interest but fraught with
great conceptual difficulties. The question [Leggett] discusses is: What experimental
evidence do we have that quantum mechanics is valid at the macroscopic level? In
particular, do the so-called "macroscopic quantum phenomena" which are actually
observed in superconductors and superfluids constitute such evidence? If not, are
there other ways in which we can exploit many-body systems in general, and
superfluids in particular, to answer the question? In one sense the answer to our
question is rather obvious and not very interesting. There clearly is a sense in which
many-body systems afford very strong experimental evidence that quantum-mechanical effects are not confined to single atoms or to atomic scales of length and
time. For example, the Debye prediction for the low-temperature specific heat of an
insulating solid contains the quantum constant of action h, and experimental
confirmation of it therefore offers at least circumstantial evidence that collective
motions over the scale of many atomic spacings are as subject to the laws of quantum
mechanics as those on an atomic scale. More spectacularly, effects such as circulation
quantization in superfluid helium or flux quantization in superconductors indicate that
quantum coherence effects can operate over distance scales of the order of
millimeters, while the Josephson effect shows (among other things) that the purely
quantum phenomenon of tunnelling through a potential barrier can produce a current
of macroscopic magnitude. Perhaps most spectacularly of all, the Aharonov-BohmMercereau shows that the characteristically quantum effect sometimes called the
"physical reality of the vector potential" is reflected in the behaviour of macroscopic
currents. But none of these effects really extend our experimental evidence for
quantum mechanics in qualitative way; what they show in essence is that atoms in
large assemblies satisfy the laws of quantum mechanics in much the same way as
they do in isolation, and that sometimes (generally because of Bose condensation or
the analogous phenomenon (Cooper pairing) in Fermi systems) a macroscopic
number of atoms can behave in phase so as to produce macroscopic results. However,
there is a much more subtle and interesting sense in which the question can be
interpreted. To motivate this interpretation it is necessary to recall one of the most famous paradoxes in the foundations of quantum mechanics: if the linear laws of quantum mechanics apply without restriction, then a macroscopic apparatus coupled to a microscopic system should end up in a superposition of macroscopically distinct states. Once an observation or "measurement" is made, however, the system immediately collapses into a state with definite macroscopic properties. Now whatever one's reaction to the paradox, it is
clear that it only arises at all because one has implicitly assumed that the linear laws
of quantum mechanics, in particular the superposition principle, apply to the
description of any physical system, even when it is of macroscopic dimensions and
complexity. [Werner Hofer] The question then arises whether there is any
experimental evidence for this assumption: In particular, is there actually any
evidence that macroscopic systems can under appropriate conditions be in quantum
states?
PART 2. Mesoscopic Level. Self-Assembly and Self-organization
Introduction to statistics.
An attempt to describe the first step of the nucleation process is provided by
the theory of Becker and Döring. Since the first step occurs on the timescale
of the thermal fluctuations, no explicit time-dependent considerations will be
introduced. Instead, we consider homogeneous nucleation. The most general
mechanism of phase transformation is provided by the process of nucleation.
If one starts from an initial phase, say phase 1, without any impurities, one
qualifies the nucleation as being homogeneous. Although the initial creation
of the nuclei could be described in thermodynamic terms, their subsequent
growth or decay must involve genuine kinetic aspects. The latter will be
described here in the context of the theory of Zeldovich and Frenkel. In this
theory, the elementary step involved in the growth or decay of the nuclei is considered to be the attachment or detachment of a molecule of phase 1 to or from an existing nucleus. This process can then be represented as a "polymerization" reaction, which leads to the Zeldovich-Frenkel kinetic equation for the cluster distribution N(p, t) (a sketch of its standard form is given below). This equation shows that the evolution is governed by the competition between a diffusion process which is purely kinetic and a friction process which is controlled by the thermodynamic force.
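For orientation, the standard form of the Zeldovich-Frenkel kinetic equation (our notation, with p the number of molecules in a nucleus, D(p) an attachment-rate coefficient and \Delta G(p) the free energy of formation of a p-cluster) reads

\frac{\partial N(p,t)}{\partial t} = \frac{\partial}{\partial p}\left[ D(p)\left( \frac{\partial N}{\partial p} + \frac{N}{k_B T}\,\frac{\partial \Delta G}{\partial p} \right) \right],

in which the first term in the bracket is the purely kinetic "diffusion" along the size axis and the second is the drift driven by the thermodynamic force -\partial\Delta G/\partial p.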
Spinodal Decomposition. In the preceding sections it has been seen how the
new phase 2 nucleates in the midst of the old phase 1 when the latter is in a
metastable state (a concept which is also useful in the thermodynamic theory of energy storage systems). An alternative scenario is possible for phase transitions
which involve unstable states. An isostructural phase transition can give rise
to a van der Waals loop, in which case the coexistence region will contain an
unstable region, delimited by two spinodal lines. In such a case, one can first
transform the old phase 1 from a stable initial state into a final state inside the
spinodal region. Since in the spinodal region the thermodynamic state of
phase 1 is unstable, it will immediately undergo a transformation, which is
called a "spinodal decomposition" of the old phase 1 and, which will result in
the creation of a new stable state, the new phase 2. This scenario is different
from a nucleation but in order to realize it the transformation from initial to
final state must proceed sufficiently rapidly. Indeed, between the spinodal and
the binodal there is always a region where phase 1 is metastable and where a
nucleation could start. To avoid this, one has to transform the initial stable state of phase 1 rapidly into an unstable state, so that along the intermediate metastable states (relevant also for energy storage systems) the nucleation has no time to proceed. Such a rapid transformation of the initial into the final state is called
a "quench." Although the spinodal decomposition process is more easily
observed in mixtures, it will, for simplicity, be illustrated here for the case of
the liquid-vapour transition of a one-component system. We recall the classification of these processes by their time scales, as given by the well-known equations of barrier-crossing theory, using first-order phase transitions as the example. In energy storage problems, infoenergy means the overcoming of barriers (in science, technology and economics). We show further that the above "classical" results may be derived from our most probable evolution (dynamics) theory as well. The final step will be to generalize these results from the 1st and 2nd time, where they were traditionally elaborated by Gibbs and his followers, and to apply them to biological time, which was the goal of the works of L. Boltzmann, E. Schrödinger, I. Prigogine, E. Jaynes, P. Samuelson and others. In this time, the concept of infoenergy is quite necessary and useful and cannot be replaced by more traditional concepts, because flows of information go forward in time, modelling and organizing subsequent flows of energy.
Energy Storage, Entropy and Reversibility.
Consider, for instance, the free expansion of an ideal gas. As initial condition,
one has a container of volume V divided into two regions of volume V/2, one
of which is occupied by a classical ideal gas of N particles and energy E. If
the wall separating the two regions is removed, experience shows that the
gas expands freely until it reaches a final state in which the gas occupies the
whole container. A related example of symmetry and instability is the highly symmetrical state called "the false vacuum". Although "the false vacuum" appears quite symmetrical, it is not stable. Picture space as a stretched bed sheet: the sheet does not want to be in this stretched condition. There is too much tension; the energy is too high. Thus the bed sheet curls up. The symmetry is broken, and the bed sheet has gone to a lower-energy state with less symmetry. By rotating the curled-up bed sheet 180 degrees around an axis, we no longer return to the same sheet. Now let us replace the bed sheet
with ten-dimensional space-time, the space-time of ultimate symmetry. At the
beginning of time, the universe was perfectly symmetrical. If anyone was
around at that time, he could freely pass through any of the ten dimensions
without problem, and could also travel back and forth in time! At that time, gravity and the weak, strong and electromagnetic forces were all unified by the superstring. All matter and forces were part of the same string multiplet.
However, this symmetry couldn't last. The ten-dimensional universe, although
perfectly symmetrical, was unstable, just like the bed sheet. Thus tunnelling to a lower-energy state was inevitable. When tunnelling finally occurred, a phase
transition took place, and symmetry was lost.
Action principle and Jaynes’ guess method as introduction to information and
infoenergy.
A path information is defined in connection with the probability distribution of paths of nonequilibrium Hamiltonian systems moving in phase space from an initial "cell" to different final cells. On the basis of the assumption that these
paths are physically characterized by their action, we show that the maximum
path information leads to an exponential probability distribution of action
which implies that the most probable paths are just the paths of stationary
action. We also show that the averaged (over initial conditions) path
information between an initial “cell” and all the possible final cells can be
related to the entropy change defined with natural invariant measures for
dynamical systems. Hence the principle of maximum path information suggests maximum entropy and entropy change, which is, in other words, just an application of the action principle of classical mechanics to the case of stochastic or unstable dynamics. The principles of maximum information and entropy are thus investigated for nonequilibrium Hamiltonian systems in connection with the action principle of classical mechanics.
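A schematic version of this statement (our notation, following the usual maximum-entropy treatment): maximizing the path information, i.e. the Shannon entropy over paths, subject to a fixed mean action gives

p_k = \frac{1}{Z}\,e^{-\eta A_k}, \qquad Z = \sum_k e^{-\eta A_k},

where A_k is the action of path k and \eta a Lagrange multiplier; the most probable paths are then those that make A_k stationary, recovering the classical action principle in the stochastic setting.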
The nature of most probable paths at finite temperatures
We determine the most probable length of paths at finite temperatures, with a
preassigned end-to-end distance and a unit of energy assigned to every step
on a D-dimensional hypercubic lattice. The asymptotic form of the most
probable path-length shows a transition from the directed walk nature at low
temperatures to the random walk nature as the temperature is raised to a
critical value Tc (we can use equation: Tc = 1/(ln2 + lnD) ). Below Tc the most
probable path-length shows a crossover from the random walk nature for
small end-to-end distance to the directed walk nature for large end-to-end
distance; the crossover length diverges as the temperature approaches Tc.
For every temperature above Tc we find that there is a maximum end-to-end
distance beyond which a most probable path-length does not exist.
Path integral approach to random motion with nonlinear friction
We study an old but still only very partially understood problem: the dynamics
of a solid object moving over a solid surface. In practice this is a very
complicated and as yet unsolved problem, although there is a wealth of
experiments, since the general problem is very old and ubiquitous in nature,
ranging from geology to physics and biology. The basic difficulty lies in the
very complex nature and behaviour of the solid/solid interface, which leads to
a complicated stick-slip motion of the object. Following P.-G. de Gennes we
study in detail one of the simplest phenomenological models, far from those
of most practical interest, but as a starting point to develop a new theoretical
approach to describe basic aspects of the above mentioned problem. Ignoring
all details of the solid/solid interfacial layer, de Gennes proposed a simple
Langevin equation for the velocity. [A. Baule, E. G. D. Cohen, and H.
Touchette, 24 Oct 2009]. We show that the optimal (or most probable) paths
of this model can be divided into two classes of paths, which correspond
physically to a sliding or slip motion, where the object moves with a nonzero
velocity over the underlying surface, and a stick-slip motion, where the object
is stuck to the surface for a finite time. These two kinds of basic motions
underlie the behavior of many more complicated systems with solid/solid
friction and appear naturally in de Gennes’ model in the path integral
framework.
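As a rough numerical illustration (with toy parameters of our own, not those of de Gennes or of Baule, Cohen and Touchette), an Euler-Maruyama simulation of a Langevin equation with dry (Coulomb) plus viscous friction already shows the two regimes: the velocity relaxes towards v = 0 and then spends most of its time in a narrow noise-dominated layer around v = 0, interrupted by excursions. In the weak-noise path-integral analysis this layer becomes a genuine "stick" phase with v = 0 for a finite time, while the excursions correspond to "slip".

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama scheme for  dv/dt = -Delta*sign(v) - gamma*v + xi(t),
# with <xi(t) xi(t')> = 2*D*delta(t-t').  Toy parameter values.
Delta, gamma, D = 1.0, 0.5, 0.05
dt, n_steps = 1e-3, 500_000

v = 1.0                      # start in a "slip" state with finite velocity
layer = D / Delta            # width of the noise-dominated layer around v = 0
time_in_layer = 0
for _ in range(n_steps):
    v += -(Delta * np.sign(v) + gamma * v) * dt + np.sqrt(2.0 * D * dt) * rng.normal()
    if abs(v) < layer:
        time_in_layer += 1

print(f"fraction of time with |v| < D/Delta: {time_in_layer / n_steps:.2f}")
```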
Nanotechnology.
L. Kantorovich: Classical Molecular Dynamics (MD) simulations play an
important role in modern condensed matter physics giving direct access to a
wide range of statistical properties of the systems under study. In MD simulations, atoms, which are treated classically, follow in time Newton's equations of motion. The latter are solved numerically using the atomic forces. In
ab initio MD simulations the forces on atoms are calculated from the first
principles by considering electrons (at each atomic configuration) entirely
quantum mechanically, usually within the density functional theory (DFT).
This approach is also sometimes called the mean-field approximation (MFA).
Probably, the simplest quantum-mechanical justification of the MFA is based
on a factorisation of the density operator for the whole system into a product
of individual operators for the nuclei and electrons, and then using this Ansatz
in the quantum Liouville equation with subsequent replacement of the
quantum bracket with the classical Poisson bracket for the classical degrees
of freedom. Then, a classical trajectory is introduced by adopting a special
Delta-function representation for the density operator of the classical
subsystem. The important message here is that the ionic coordinates and
momenta in the usual MD equations appear as statistical averages calculated
at every time step. Thus, usual MD equations constitute the dynamical
equations of motion (EoM) for averages as proposed originally a long time
ago by Ehrenfest.
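The classical propagation referred to above is typically done with the velocity Verlet algorithm; a minimal generic sketch follows (a one-dimensional harmonic force stands in for the ab initio forces, which in practice would come from a DFT calculation):

```python
import numpy as np

def forces(x):
    """Stand-in force; in ab initio MD this would be computed from DFT."""
    k = 1.0
    return -k * x

def velocity_verlet(x, v, m, dt, n_steps):
    """Propagate classical nuclei with the velocity Verlet integrator."""
    f = forces(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / m) * dt**2      # position update
        f_new = forces(x)                            # forces at the new positions
        v = v + 0.5 * (f + f_new) / m * dt           # velocity update
        f = f_new
    return x, v

x, v = velocity_verlet(np.array([1.0]), np.array([0.0]), m=1.0, dt=0.01, n_steps=1000)
print(x, v)   # for a unit harmonic oscillator after t = 10: close to (cos 10, -sin 10)
```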
Kantorovich proposed a general statistical mechanical consideration of a
system consisting of slow and fast degrees of freedom assuming nuclei and
electrons as a particular example. We derive equations of motion (EoM) for
slow degrees of freedom (nuclei) which interact and exchange energy with
the fast degrees of freedom (electrons). Contrary to conventional approaches
based on the Liouville equation which possesses the time-reversal symmetry
and is thus intrinsically equilibrium, our method is based on an entirely non-equilibrium consideration within the Non-equilibrium Statistical Operator
Method.
Current studies on molecular electronics are focused on such fundamental
questions:
How to synthesize functional molecules?
How to prepare electrical circuits and to incorporate hybrid
structures?
What are the electrical properties of such circuits?
What is the correlation between molecular structure and functionality?
How to use molecular electronic devices in real-life applications?
To address these questions, joint efforts of physicists, chemists and electrical
engineers; experimentalists and theoreticians, are required. Studies of
electron transfer (ET) properties of single molecular junctions are of
importance for fundamental science as well. They challenge both
experimental and theoretical branches of modern science. At present, the major part of our knowledge of the properties of single molecules is still based on the results of investigations carried out under ultra-high vacuum and/or at low temperatures. These conditions are not appropriate for production systems, so the influence of water is important.
Experiments at electrified solid-liquid interface (i.e., in electrochemical
environment) represent an interesting alternative approach for addressing
fundamental concepts and principles of the ET processes. Aqueous
electrolytes represent a natural environment for “living” functional
macromolecules. And last, but not least: electrochemists have more than a
century of experience in studying molecular systems at interfaces. The meeting and cooperation of electrochemists with mesoscopic and surface-science physicists opens new perspectives in the field of molecular electronics.
Isn't Nanotechnology Just Very Small Microtechnology?
For many years, it was conventional to assume that the road to very small devices led
through smaller and smaller devices: a top-down path. On this path, progress is
measured by miniaturization: How small a transistor can we build? How small a
motor? How thin a line can we draw on the surface of a crystal? Miniaturization
focuses on scale and has paid off well, spawning industries ranging from
watchmaking to microelectronics. The differences run deeper, though.
Microtechnology dumps atoms on surfaces and digs them away again in bulk, with no
regard for which atom goes where. Its methods are inherently crude. Molecular
nanotechnology, in contrast, positions each atom with care. As Bill DeGrado, a
protein chemist at Du Pont, says, "The essence of nanotechnology is that people have
worked for years making things smaller and smaller until we're approaching
molecular dimensions. At that point, one can't make smaller things except by starting
with molecules and building them up into assemblies." The difference is basic: In
microtechnology, the challenge is to build smaller; in nanotechnology, the challenge
is to build bigger—we can already make small molecules.
What Are the Main Tools Used for Molecular Engineering?
Almost by definition, the path to molecular nanotechnology must lead through
molecular engineering. Working in different disciplines, driven by different goals,
researchers are making progress in this field. Chemists are developing techniques able
to build precise molecular structures of sorts never before seen. Biochemists are
learning to build structures of familiar kinds, such as proteins, to make new molecular
objects. Chemists and biochemists advance their field chiefly by developing new
molecules that can serve as tools, helping to build or study other molecules. Further
advances come from new instrumentation, new ways to examine molecules and
determine their structures and behaviors. Yet more advances come from new software
tools, new computer-based techniques for predicting how a molecule with a particular
structure will behave. Many of these software tools let researchers peer through a
screen into simulated molecular worlds much like those toured in the last two
chapters. Physicists have contributed new tools of great promise for molecular
engineering. These are the proximal probes, including the scanning tunneling
microscope (STM) and the atomic force microscope (AFM). A proximal-probe
device places a sharp tip in proximity to a surface and uses it to probe (and sometimes
modify) the surface and any molecules that may be stuck to it.
Proximal-probe instruments may be a big help in building the first generation of
nanomachines, but they have a basic limit: Each instrument is huge on a molecular
scale, and each could bond only one molecular piece at a time. To make anything
large—say, large enough to see with the naked eye—would take an absurdly long
time. A device of this sort could add one piece per second, but even a pinhead
contains more atoms than the number of seconds since the formation of Earth.
Building a Pocket Library this way would be a long-term project. The techniques of
chemistry and biomolecular engineering already have enormous parallelism, and
already build precise molecular structures. Their methods, however, are less direct
than the still hypothetical proximal probe-based molecule-positioners. They use
molecular building blocks shaped to fit together spontaneously, in a process of self-assembly.
David Biegelsen, a physicist who works with STMs at the Xerox Palo Alto Research
Center, put it this way at the nanotechnology conference: "Clearly, assembly using
STMs and other variants will have to be tried. But biological systems are an existence
proof that assembly and self-assembly can be done. I don't see why one should try to
deviate from something that already exists." A huge technology base for molecular
construction already exists. Tools originally developed by biochemists and
biotechnologists to deal with molecular machines found in nature can be redirected to
make new molecular machines. The expertise built up by chemists in more than a
century of steady progress will be crucial in molecular design and construction. Both
disciplines routinely handle molecules by the billions and get them to form patterns
by self-assembly. Biochemists, in particular, can begin by copying designs from
nature. Molecular building-block strategies could work together with proximal probe
strategies, or could replace them, jumping directly to the construction of large
numbers of molecular machines. Either way, protein molecules are likely to play a
central role, as they do in nature.
How Can Protein Engineering Build Molecular Machines?
Proteins can self assemble into working molecular machines, objects that do
something, such as cutting and splicing other molecules or making muscles contract.
They also join with other molecules to form huge assemblies like the ribosome (about
the size of a washing machine, in our simulation view). Ribosomes—programmable
machines for manufacturing proteins—are nature's closest approach to a molecular
assembler. The genetic-engineering industry is chiefly in the business of
reprogramming natural nanomachines, the ribosomes, to make new proteins or to
make familiar proteins more cheaply. Designing new proteins is termed protein
engineering. Since biomolecules already form such complex devices, it's easy to see
that advanced protein engineering could be used to build first-generation
nanomachines. Although scientists do the work, the work itself is really a form of
engineering, as shown by the title of the field's journal, Protein Engineering. Bill
DeGrado's description of the process makes this clear: "After you've made it, the next
step is to find out whether your protein did what you expected it to do. Did it fold?
Did it pass ions across bilayers [such as cell membranes]? Does it have a catalytic
function [speeding specific chemical reactions]? And that's tested using the
appropriate experiment. More than likely, it won't have done what you wanted it to
do, so you have to find out why. Now, a good design has in it a contingency plan for
failure and helps you learn from mistakes. Rather than designing a structure that
would take a year or more to analyze, you design it so that it can be assayed for a given
function or structure in a matter of days."
Is There Anything Special About Proteins?
The main advantage of proteins is that they are familiar: a lot is known about them,
and many tools exist for working with them. Yet proteins have disadvantages as well.
Just because this design work is starting with proteins—soft, squishy molecules that
are only marginally suitable for nanotechnology—doesn't mean it will stay within
those limits. Like the IBM physicists, protein designers are moved by a vision of
molecular engineering. Catalysts are molecular machines that speed up chemical
reactions: they form a shape for the two reacting molecules to fit into and thereby
help the reaction move faster, up to a million reactions per second. New ones, for
reactions that now go slowly, will give enormous cost savings to the chemical
industry.
This prediction was borne out just a few months later, when Denver researchers John
Stewart, Karl Hahn, and Wieslaw Klis announced their new enzyme, designed from
scratch over a period of two years and built successfully on the first try. It's a catalyst,
making some reactions go about 100,000 times faster. Nobel Prize-winning
biochemist Bruce Merrifield believes that "if others can reproduce and expand on this
work, it will be one of the most important achievements in biology or chemistry."
Chemists, most of whom do not work on proteins, are the traditional experts in
building molecular objects. Chemists mix molecules on a huge scale (in our
simulation view, a test tube holds a churning molecular swarm with the volume of an
inland sea), yet they still achieve precise molecular transformations. Given that they
work so indirectly, their achievements are astounding. This is, in part, the result of the
enormous amount of work poured into the field for many decades. An engineer would
say that chemists (at least those specializing in synthesis) are doing construction
work, and would be amazed that they can accomplish anything without being able to
grab parts and put them in place. Chemists, in effect, work with their hands tied
behind their backs. Molecular manufacturing can be termed "positional chemistry" or
"positional synthesis," and will give chemists the ability to put molecules where they
want them in three-dimensional space. Rather than trying to design puzzle pieces that
will stick together properly by themselves when shaken together in a box, chemists
will then be able to treat molecules more like bricks to be stacked. The basic
principles of chemistry will be the same, but strategies for construction may become
far simpler.
If Chemists Can Make Molecules, Why Aren't They Building Molecular Machines?
Molecular engineers working toward nanotechnology need a set of molecular
building blocks for making large, complex structures. Systematic building-block
construction was pioneered by Bruce Merrifield, winner of the 1984 Nobel Prize in
Chemistry. His approach, known as "solid phase synthesis," or simply "the Merrifield
method," is used to synthesize the long chains of amino acids that form proteins. In
the Merrifield method, cycles of chemical reaction each add one molecular building
block to the end of a chain anchored to a solid support. This happens in parallel to
each of trillions of identical chains, building up trillions of molecular objects with a
particular sequence of building blocks. Chemists routinely use the Merrifield method
to make molecules larger than palytoxin, and related techniques are used for making
DNA in so-called gene machines.
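The logic of the method is essentially an iterated loop, which can be sketched schematically (a toy Python sketch only; the step comments name the standard deprotect/couple/wash stages of solid-phase synthesis, and nothing here talks to real laboratory hardware):

# Toy sketch of the Merrifield solid-phase synthesis cycle: one building block
# is added per cycle to a chain anchored to a solid support, and the same cycle
# runs in parallel on trillions of identical anchored chains.
def solid_phase_synthesis(sequence):
    chain = []                    # residues already attached to the anchored chain
    for block in sequence:        # one reaction cycle per building block
        # deprotect: expose the reactive end of the anchored chain (no-op in this sketch)
        # couple:    attach the next protected building block
        chain.append(block)
        # wash:      flush excess reagents before the next cycle (no-op in this sketch)
    return chain                  # cleave: release the finished chain from the support

print(solid_phase_synthesis(["Gly", "Ala", "Ser", "Lys"]))
# ['Gly', 'Ala', 'Ser', 'Lys'] -- the same programmed sequence on every chain at once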
While it's hard to predict how a natural protein chain will fold—they weren't designed
to fold predictably—chemists could make building blocks that are larger, more
diverse, and more inclined to fold up in a single, obvious, stable pattern. With a set of
building blocks like these, and the Merrifield method to string them together,
molecular engineers could design and build molecular machines with greater ease.
One may have played with molecular models in chemistry class: colored plastic balls
and sticks that fit together. Each color represents a different kind of atom: carbon,
hydrogen, and so on. Even simple plastic models can give you a feel for how many
bonds each kind of atom makes, how long the bonds are, and at what angles they are
made. A more sophisticated form of model uses only spheres and partial spheres,
without sticks. These colorful, bumpy shapes are called CPK models, and are widely
used by professional chemists. Nobel laureate Donald Cram remarks that "We have
spent hundreds of hours building CPK models of potential complexes and grading
them for desirability as research targets." His research, like that of fellow Nobelists
Charles J. Pedersen and Jean-Marie Lehn, has focused on designing and making
medium-sized molecules that self-assemble.
What is the Infoenergy? Is it Free Energy? Is it Negentropy?
Infoenergy summarizes the concepts of information, free energy, energy storage and negentropy. The interconnections of “information” and “energy” are crucially important for our modern civilisation: as a result of the information revolution we already have huge amounts of information available (processors, controllers, etc., which were historically developed for organizing energy flows in every industry where that matters: the military, energy, transportation and all the other industries that use engines and energy). And we still have plenty of energy of every kind, relatively available and cheap: oil, gas, nuclear, renewables, etc.
The key elements we summarize in our infoenergy concept are energy storage, which is organized inside energy flows and their circulation, and the highest qualities and levels of information – the works of information, such as planning, organization, coordination and control. Information and energy storage were necessary for the very emergence of life: no living organism is constantly connected to a power source, so energy storage is absolutely necessary for the existence of every organism – from viruses, DNA, RNA and the simplest bacteria to people (and societies).
Application in dentistry
Nanostructured Surfaces of Dental Implants. The structural and functional fusion of
the surface of the dental implant with the surrounding bone (osseointegration) is
crucial for the short- and long-term outcome of the device. In recent years, the enhancement of bone formation at the bone-implant interface has been achieved through the modulation of osteoblast adhesion and spreading, induced by structural modifications of the implant surface, particularly at the nanoscale. In this context, chemical and physical processes find new applications in achieving the best dental implant technology. [Eriberto Bressan, et al.] stressed the importance of
modifications on dental implant surfaces at the nanometric level. Nowadays, there is
still little evidence of the long-term benefits of nanofeatures, as the promising results
achieved in vitro and in animals have still to be confirmed in humans. However, the
increasing interest in nanotechnology is undoubted and more research is going to be
published in the coming years. [see Review: Nanostructured Surfaces of Dental
Implants, Eriberto Bressan, et al.]
Part 3. Life as Infoenergy.
The evolution of the biophysical paradigm over the 65 years since the publication, in 1944, of Erwin Schrödinger’s What is life? can be summarized as follows: based on the advances in molecular genetics, it is argued that all the features characteristic of living systems can also be found in nonliving ones. Ten paradoxes in logic and physics are analyzed that allow life to be defined in terms of a spatial–temporal hierarchy of structures and combinatory probabilistic logic. From the perspective of physics, life can be defined as resulting from a game of interacting matter, one part of which acquires the ability to remember the success (or failure) probabilities from the previous rounds of the game, thereby increasing its chances of survival in the next round. This part of matter is currently called living matter [Ivanicky, UFN, 2013].
Within biological science, free energy is generally regarded as the most relevant quantity for biochemical reactions. The change ΔF in the free energy F is
ΔF = ΔE - TΔS
The energy content ΔE is thereby partitioned into the entropic term TΔS - which is related to the random thermal motion (molecular chaos) of the molecules, is not available for work, and tends to vanish at absolute zero temperature - and the free energy ΔF, which is available for work. But as there need be no entropy generated in adiabatic processes - which occur frequently in living systems (see below) - the division into available and non-available energy cannot be absolute [Ho].
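For completeness, this relation follows directly from the standard (Helmholtz) definition of free energy, assuming the change takes place at constant temperature:

F = E - TS, \qquad
\Delta F = \Delta E - T\,\Delta S - S\,\Delta T
\;\xrightarrow{\;\Delta T = 0\;}\;
\Delta F = \Delta E - T\,\Delta S .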
The time of traditional sciences is reversible time (or imaginary time, as defined by Feynman and Hawking). Determinism was the basic methodology: Einstein, Newton and all traditional physics before entropy was introduced by Boltzmann, Gibbs and others at the end of the 19th century.
The equilibrium, or equilibration, time (more precisely: the time of tending towards equilibrium). [Prigogine] in fact devoted his [From Being to Becoming] to this time, and Boltzmann, who wanted to introduce evolution into physics (much as Darwin had introduced it into biology), devoted his whole life to it [Prigogine]. This time is determined by entropy and the second law of thermodynamics, elaborated in the 19th century by Gibbs, Boltzmann, Carnot, Maxwell, Kelvin and many others. This is the time of dissipation (Prigogine), of fluctuations and of (quasi-)equilibrium phenomena. This is the time of dying (the time of medical science and practice, of all illnesses).
The time of development. This is the new scientific concept which is emerging now. Information is necessary, and infoenergy is useful, in order to describe this third time. On the other hand, a full scientific (and physical) definition of information is impossible without the time of development. This is the time of scientific and economic innovations, of scientific economics and theoretical biology, as well as of all the sciences of Life, Humans and Society. Living, and especially human, nature cannot be understood as merely a struggle with death and competition. Non-living nature tends towards equilibrium, which means death. Living nature has quite the opposite tendency, at least when considered as a systemic phenomenon. Several billion years of biological evolution and about one million years of human evolution have demonstrated exponential development, which is strikingly different from everything in the non-living Universe. The first and probably most important difference between the living and non-living worlds is in their “relationships” with time. Life means the time of development (together with the time of dying, which is rather a trivial fact, as is the well-known “application” of the second law of thermodynamics).
These classifications are a useful basis for the theory of Modern Energy Systems, as well as for sustainable development in energy sciences, technologies and economics. In defining the science appropriate for the 21st century - energy science, technologies and industries (renewable and traditional, including here the problems of economics, ecology, sociology and even politics) - we propose the following key words:
processes and development (hopefully sustainable - we define these below).
statistical physics, chemistry, thermodynamics and economics.
the correlated flows of energy, information, finances, technology, substances (oil, gas, water, wind, etc.).
all the interconnections between the information and energy industries.
As noted above, the interconnections of “information” and “energy” are crucially important for our modern civilisation: the information revolution has already given us huge amounts of information (processors, controllers, etc., historically developed for organizing energy flows wherever that matters: the military, energy, transportation and all the other industries that use engines and energy), and we still have plenty of energy of every kind, relatively available and cheap: oil, gas, nuclear, renewables, etc. We formulate the main problem here as the lack, or insufficiency, of such interconnections. The modern energy industry is still like a monstrous body without a mind, and the information industry is a huge mind without a body.
The fundamental interconnection of these two concepts (information and energy) at the scientific level should be even stronger than that of “energy” and “mass”, because in most everyday problems “only energy” or “only mass” is important (even remembering Einstein’s equation - energy equals mass multiplied by the square of the speed of light - the speed of light is such an extreme constant that the equation is never important in our everyday life: we need really huge concentrations of energy in order to make it work, as happens, e.g., in nuclear reactions).
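As a rough illustration of the scale involved (standard constants, order of magnitude only):

E = mc^{2}, \qquad c \approx 3\times 10^{8}\ \mathrm{m/s}, \qquad
\text{so}\ \ 1\ \mathrm{kg} \;\longleftrightarrow\; \approx 9\times 10^{16}\ \mathrm{J},

an amount of energy that is liberated only in nuclear (or more extreme) processes, never in everyday mechanics or chemistry.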
But the flows of information, let us repeat again, do not influence our stable three-dimensional world without resulting flows of energy. So, when we think about information, we in fact deal with infoenergy. As was mentioned when describing the present “energy revolution”, we propose below the concept of infoenergy. The main fundamentals of science, technology, economics, modern civilization, etc. are: 1. energy, 2. space, 3. time, 4. mass (matter, substance: atoms, molecules, elementary particles, etc.). The computer revolution attracted the greatest attention to the fifth one - information - for which physics gave precise equations. Information means choices (more precisely, the logarithm of the number of choices - a number which, as is well known from the analogy with chess, can be greater than the number of atoms in the whole Universe), and these choices greatly influence the very essence of time and, through time, everything else.
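A minimal numerical sketch of “information as the logarithm of the number of choices” (the two large numbers below are commonly cited rough estimates, used here purely for illustration):

# Information, in bits, needed to single out one choice among n possibilities.
import math

def bits(n_choices):
    return math.log2(n_choices)

chess_games    = 1e120  # Shannon's rough estimate of the number of possible chess games
atoms_universe = 1e80   # commonly quoted order of magnitude for atoms in the observable Universe

print(round(bits(chess_games)))     # ~399 bits
print(round(bits(atoms_universe)))  # ~266 bits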
Photosynthesis embodies a sunlight-driven electron pump. By harvesting solar energy, the blue-greens (cyanobacteria) managed to strip electrons from water molecules, which yields hydrogen nuclei (protons) and molecular oxygen. The released hydrogen nuclei acquired an essential role in the blue-greens' biosynthetic activities, but the oxygen was simply released as a waste gas. The impact of this emission can hardly be overestimated, because in those days the environment contained no oxygen. But before going more deeply into the fate of oxygen, we will follow the stripped electrons, as they hold the key to photosynthetic energy storage.
Crude sugar is completely combustible and therefore a fuel. But microbes need matter to grow and reproduce. To this end they, by and large, exploit photosynthetically fixed sugar-carbon for in-cell bio-construction work. Noticeably, airborne carbon dioxide is the only carbon source the blue-greens consume. In effect, blue-greens create organic tissue from inorganic matter, or life from inanimate nature. The emergence of such “energy harvesters”, as well as the emergence of life itself, is one of the greatest problems still unsolved scientifically. Let us notice first that every sustainable system has all the essential characteristics of an organism - an irreducible whole that develops, maintains and reproduces, or renews, itself by mobilizing material and energy captured from the environment. What is the nature of the material and energy mobilization that makes a sustainable system or organism?
The healthy organism excels in maintaining its organisation and keeping away from
thermodynamic equilibrium – death by another name – and in reproducing and
providing for future generations. In those respects, it is the ideal sustainable system
(Ho, 1998b,c; Ho and Ulanowicz, 2005). Looking at sustainable systems as
organisms provides fresh insights on sustainability, and offers diagnostic criteria that
reflect the system’s health. We consider this in detail in order to apply these basic
sustainability and development principles to energy and economic systems. We show that increasing infoenergy - which is, roughly, energy storage multiplied by information - explains the ways of development of many systems in energy, biology and economics.
Life is Infoenergy.
Living organisms are so enigmatic from the physical and thermodynamic point of
view that Lord Kelvin, co-inventor of the second law of thermodynamics, specifically
excluded them from its dominion. As distinct from heat engines which require
constant input of heat to do work, organisms are able to work without a constant
energy supply, and moreover, can mobilize energy at will, whenever and wherever
required, and in a perfectly coordinated way. Similarly, E. Schrödinger (1944) was
impressed with the ability of organisms to develop and evolve as a coherent whole,
and in the direction of increasing organization, in defiance of the second law of
thermodynamics [Schrödinger E., 1944]. He suggested that they feed upon "negative
entropy" (infoenergy) to free themselves from all the entropy they cannot help
producing. The intuition of both physicists is that energy and living organization are
intimately linked.
The idea that open systems can "self-organize" under energy flow became more concrete in the concept of dissipative structures (developed over their whole careers by Prigogine (1967 and later) and Haken (1977), and by their successors), structures that depend on the flow and dissipation of energy, such as the Bénard convection cells and the laser, among many other examples. In both cases, energy input results in a phase transition to global dynamic order in which all the molecules or atoms in the system move coherently. From these and other considerations, Mae-Wan Ho has identified Schrödinger's "negative entropy" (which is in fact our infoenergy) as "stored mobilizable energy in a space-time structured system", which begins to offer a possible solution to the enigma of living organization [Ho M. W., 1993, 1994a, 1995b].
The cyclic non-dissipative branch will include most living processes because of the
ubiquity of coupled cycles, for which the net entropy production most probably does
balance out to zero, as Schrödinger had surmised [Schrödinger E.,1944]. In this way,
the organism achieves dynamic closure to become a self-sufficient energetic domain
[Ho M. W., 1996c]. The dynamic closure of the living system has a number of
important consequences. First and foremost, it frees the organism from the immediate
constraints of energy conservation - the first law - as well as the second law of
thermodynamics, thus offering a solution to the enigma of the organism posed by
Lord Kelvin and Schrödinger. There is (or rather should be) always energy available
within the system, for it is stored and mobilized at close to maximum efficiency over
all space-time domains. Two other consequences of dynamic closure are that it frees
the organism from mechanistic constraints, and creates, at least, some of the basic
conditions for quantum coherence. Stored energy is coherent energy capable of doing
work. That implies the organism is a highly coherent domain, possessing a full range
of coherence times and coherence volumes of energy storage. In the ideal, it can be
regarded as a quantum superposition of activities - organized according to their
characteristic space-times - each itself coherent, so that it can couple coherently, i.e.,
non-dissipatively, to the rest. We formulate here the main problems as lack or
insufficiency of such coherence interconnections.
About Entropy and Biology
A rare example of the use of mathematics to combine the two kinds of entropy is
given in The Mystery of Life's Origin, published in 1984. Its authors acknowledge
two kinds of entropy, which they call "thermal" and "configurational." To count the
"number of ways" for the latter kind of entropy they use restrictions which they later
admit to be unrealistic. They count only the number of ways a string of amino acids
of fixed length can be sequenced. They admit in the end, however, that the string
might never form. To impose the units joules per degree onto "configurational"
entropy, they simply multiply by Boltzmann's constant. Nevertheless, they ultimately
reach the following conclusion (p 157-158):
In summary, undirected thermal energy is only able to do the chemical and thermal
entropy work in polypeptide synthesis, but not the coding (or sequencing) portion of
the configurational entropy work.... It is difficult to imagine how one could ever
couple random thermal energy flow through the system to do the required
configurational entropy work of selecting and sequencing.
In Evolution, Thermodynamics and Information, Jeffrey S. Wicken also adopts the
terms "thermal" and "configurational." But here they both pertain only to the nonenergetic "information content" of a thermodynamic state, and "energetic"
information is also necessary for the complete description of a system. Shannon
entropy is different from all of these, and not a useful concept to Wicken.
Nevertheless, he says that evolution and the origin of life are not separate problems
and, "The most parsimonious explanation is to assume that life always existed" !
An ambitious treatment of entropy as it pertains to biology is the book Evolution as
Entropy, by Daniel R. Brooks and E. O. Wiley. They acknowledge that the
distinction between the different kinds of entropy is important:
It is important to realize that the phase space, microstates, and macrostates described
in our theory are not classical thermodynamic constructs. The entropies are array
entropies, more like the entropies of sorting encountered in considering an ideal gas
than like the thermal entropies associated with steam engines.
In another book entitled Life Itself, mathematical biologist Robert Rosen of Columbia
University seems to have grasped the problem when he writes, "The Second Law thus
asserts that... a system autonomously tending to an organized state cannot be closed".
But immediately he veers away, complaining that the term "organization" is vague.
Intent on introducing terms he prefers, like "entailment," he does not consider the
possibility that, in an open system, life's organization could be imported into one
region from another.
One of the most profound and original treatments of entropy is that by the Nobel
prize-winning chemist Ilya Prigogine. He begins by noticing that some physical
processes create surprising patterns such as snowflakes, or exhibit surprising behavior
such as oscillation between different states. In From Being To Becoming he says, in
effect, that things sometimes do, under certain circumstances, organize themselves.
He reasons that these processes may have produced life: It seems that most biological
mechanisms of action show that life involves far-from-equilibrium conditions beyond
the threshold of stability of the thermodynamic branch. It is therefore very tempting to suggest that the origin of life may be related to successive instabilities somewhat analogous to the successive bifurcations that have led to a state of matter of increasing coherence.
In 1999's The Fifth Miracle, theoretical physicist and science writer Paul Davies
devotes a chapter, "Against the Tide," to the relationship between entropy and
biology. In an endnote to that chapter he writes, "'higher' organisms have higher (not
lower) algorithmic entropy..." (p 277, Davies' italics) — another reversal of the usual
understanding. He concludes, "The source of biological information, then, is the
organism's environment" (p 57). Later, "Gravitationally induced instability is a source
of information" (p 63). But this "still leaves us with the problem.... How has
meaningful information emerged in the universe?" (p 65). He gives no answer to this
question.
The Touchstone of Life (1999) follows Prigogine's course, relying on Boltzmann's
constant to link thermodynamic and logical entropy. Author Werner Loewenstein
often strikes the chords that accompany deep understanding. "As for the origin of
information, the fountainhead, this must lie somewhere in the territory close to the big
bang" (p 25). "Evidently a little bubbling, whirling and seething goes a long way in
organizing matter... That understanding has led to the birth of a new alchemy..." (p
48-49). It is surprising that mixing entropy and biology still fosters confusion. The
relevant concepts from physics pertaining to the second law of thermodynamics are at
least 100 years old. The confusion can be eradicated if we distinguish thermodynamic
entropy from logical entropy, and admit that Earth's biological system is open to
organizing input from outside.
Part 4. Infoenergy: Time, Symmetry and Structure.
“Energy storage depends on the highly differentiated space-time structure of the life cycle, whose predominant modes of activity are themselves cycles of different sizes, spanning many orders of magnitude of space-times, which are all coupled together, and feeding off the one-way energy flow [Ho, 1993; 1995b, 1996c]. The more coupled cycles there are in the system, the more energy is stored, and the longer it takes for the energy to dissipate. The average residence time of energy in the system is therefore a measure of the organized complexity of the system [Morowitz, 1968]. It was proposed that open systems capable of storing energy tend to evolve towards an extremum, or end-state, in which all space-time modes become equally populated with energy under energy flow [Ho M. W., 1994a; 1995b]. This implies an evolution towards increasing complexity, which we shall come back to later.”
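The residence-time statement can be read as a simple stock-over-flow estimate for a steady state (the symbols here are introduced only for illustration):

\tau_{\mathrm{residence}} \;\approx\; \frac{E_{\mathrm{stored}}}{J_{\mathrm{flow}}},

where E_stored is the energy held in the coupled cycles and J_flow is the one-way energy throughput; more coupled cycles mean more stored energy at a given throughput, and hence a longer residence time.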
Symmetrical coupling of energy mobilization is a hallmark of the healthy organism. It is what enables the system to mobilize energy at will and in a perfectly coordinated way. It involves a certain reversibility of flows and reciprocity in
relationships. What would these be for the economic system? It would be a
relationship of trust and goodwill, of cooperation, of connectivity in the system, so
that intercommunication is optimized. It would be a smooth and balanced coupling of
production to consumption, of employer to employee, of lending to borrowing, of
investment to modest and reasonable profit-taking. It is based on a differentiated
space-time structure, so that debts and surpluses can be properly distributed and redistributed to offset one another and to maintain the system as a whole.
A slight digression into the living system will illustrate this. For our muscles to work,
even under the extreme exertion of, say, long-distance running or, better yet, a man
running away from a tiger, energy has to be efficiently mobilized over a range of
space-time scales. The immediate energy supply is in the form of the universal energy
intermediate, ATP, which biochemists themselves have likened to "energy currency".
The important thing in a healthy system is that the ATP is never allowed to become
depleted in the working muscle (otherwise the man will be mauled and killed by the
tiger). How is that accomplished? It is accomplished by a cascade of energy
'indebtedness' to more and more distant, longer term energy stores, which are
replenished after the crisis is over, and the man can recover and have a hearty meal
(see Ho, 1995a). Thus, for the economic system to function effectively and
efficiently, it has to achieve a smooth distribution and redistribution of surplus and
indebtedness in space and time as the need arises.
Thermodynamics of Healthy Self-replicating Organisms and Sustainable Systems.
What is Schrödinger’s Negentropy?
Schrödinger (1944) wrote: “It is by avoiding the rapid decay into the inert state of
“equilibrium” that an organism appears so enigmatic… What an organism feeds
upon is negative entropy. Or, to put it less paradoxically, the essential thing in
metabolism is that the organism succeeds in freeing itself from all the entropy it
cannot help producing while alive.” Schrödinger was struggling to make explicit the
intimate relationship between energy and organisation or information/infoenergy in
the modern context. To make progress in many sciences (not only in biology!), we
need to see life with fresh eyes.
How Organisms Make a Living
The first thing to take note of is that organisms do not make their living by heat transfer. Instead, they are isothermal systems (cf. Morowitz, 1968) dependent on the direct
transfer of molecular energy, by proteins and other macromolecules acting as
“molecular energy machines”. The organism as a whole keeps far away from
thermodynamic equilibrium, but how does it free itself from “all the entropy it cannot
help producing while alive”? That’s the point of departure for the “thermodynamics
of organised complexity”.
The pre-requisite for keeping away from thermodynamic equilibrium – the state of
maximum entropy or death by another name – is to be able to capture energy and
material from the environment to develop, grow and recreate oneself from moment to
moment during one's lifetime, and also to reproduce and provide for future
generations, all part and parcel of sustainability.
The key to understanding the thermodynamics of the living system is not so much
energy flow (see e.g. Prigogine, 1967 and later) as energy capture and storage under
energy flow. Energy flow is of no consequence unless the energy can be trapped and
stored within the system, where it is mobilised to give a self-maintaining, self-reproducing life cycle coupled to the energy flow. By energy, we include material
flow, which enables the energy to be stored and mobilised.
Cycles of Time
The perfect coordination (organisation) of the organism depends on how the captured
energy is mobilised within the organism. It turns out that energy is mobilised in
cycles, or more precisely, quasi-limit cycles, which can be thought of as dynamic
boxes; and they come in all sizes, from the very fast to the very slow, from the global
to the most local. Cycles provide the dynamic closure that’s absolutely necessary for
life, perhaps much more so than physical closure. Biologists have long puzzled over
why biological activities are predominantly rhythmic or cyclic, and much effort has
gone into identifying the centre of control, and more recently to identifying master
genes that control biological rhythms, to no avail.
Redefining the Second Law for Living Systems
The authors [Liapin R.] showed that this is the Most Probable Evolution principle, which can be derived straightforwardly from the basic least-action principle and which implies the tendency towards energy storage (in steady non-equilibrium states) as one of the basic laws of Nature. The old scientific dogma that everything tends to equilibrium (which goes back to Aristotle) is thus not a universal law, nor is it helpful for understanding the most interesting systems: the Life Sciences, Economics and Modern Technology, such as those we described in the first chapters.
McClare (1971) proposed that, “Useful work is only done by a molecular system
when one form of stored energy is converted into another”. In other words,
thermalised energies cannot be used to do work, and thermalised energy cannot be
converted into stored energy. This raised obvious objections: as critics pointed out,
automobiles do run on thermalised energy from burning petrol, so the proposal could
not be right.
Time of Development. Infoenergy.
Historically, Ludwig Boltzmann, Ilya Prigogine, the physiologist Colin McClare and the biologist Mae-Wan Ho (see references) made important contributions towards reformulating thermodynamics so that it can apply to living systems. They proposed (in Ho's short formulation) that in a system defined by some macroscopic parameter, such as the temperature θ, its energies can be separated into two categories: stored (“coherent”) energies that remain in a non-equilibrium state over a characteristic time, t, and thermal (random) energies that exchange with each other and reach equilibrium (equilibrate) in a time less than t.
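A minimal sketch of this classification criterion (illustrative only; the mode names and relaxation times below are invented order-of-magnitude placeholders):

# Partition energy "modes" into stored vs. thermal according to whether they
# equilibrate slower or faster than the characteristic time t of the process
# of interest (McClare/Ho criterion; all values below are hypothetical).
def classify_modes(modes, t):
    stored  = [name for name, tau in modes.items() if tau >= t]  # stay out of equilibrium over t
    thermal = [name for name, tau in modes.items() if tau <  t]  # equilibrate faster than t
    return stored, thermal

example = {"bond vibration": 1e-13,
           "protein conformational change": 1e-6,
           "transmembrane ion gradient": 1e-1}
print(classify_modes(example, t=1e-9))
# -> stored: conformational change, ion gradient; thermal: bond vibration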
Time and Information. Thermodynamic Entropy
The use of thermodynamics in biology has a long history rich in confusion.
[Harold J. Morowitz]. The first opportunity for confusion arises when we
introduce the term entropy into the mix. Clausius invented the term in 1865.
He had noticed that a certain ratio was constant in reversible, or ideal, heat
cycles: the ratio of heat exchanged to absolute temperature. Clausius
decided that the conserved ratio must correspond to a real, physical quantity,
and he named it "entropy".
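In modern notation, Clausius's observation and definition read:

dS = \frac{\delta Q_{\mathrm{rev}}}{T},
\qquad
\oint \frac{\delta Q_{\mathrm{rev}}}{T} = 0 \ \ \text{around any reversible cycle},

so the entropy S is a state function, measured in joules per degree (joules per kelvin).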
Logical Entropy
Richard Feynman supposed that there is a difference between the two
meanings of entropy. He discussed thermodynamic entropy in the section
called "Entropy" of his Lectures on Physics published in 1963 (7), using
physical units, joules per degree, and over a dozen equations (vol I section
44-6). He discussed the second meaning of entropy in a different section
titled "Order and entropy" (vol. I section 46-5) as follows:
So we now have to talk about what we mean by disorder and what we mean
by order. Suppose we divide the space into little volume elements. If we have
black and white molecules, how many ways could we distribute them among
the volume elements so that white is on one side and black is on the other?
On the other hand, how many ways could we distribute them with no
restriction on which goes where? Clearly, there are many more ways to
arrange them in the latter case. We measure "disorder" by the number of
ways that the insides can be arranged, so that from the outside it looks the
same. The logarithm of that number of ways is the entropy. The number of
ways in the separated case is less, so the entropy is less, or the "disorder" is
less.
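A tiny numerical illustration of “the logarithm of the number of ways”, with thermodynamic units attached through the standard Boltzmann relation S = k_B ln W (the cell count below is an arbitrary small choice):

# Count the "number of ways" in Feynman's black/white example and take its logarithm.
import math

N = 50                        # volume elements, filled with N/2 black and N/2 white molecules
k_B = 1.380649e-23            # Boltzmann constant, J/K

W_mixed = math.comb(N, N // 2)   # unrestricted: choose which cells hold the black molecules
W_separated = 1                  # all black on one side, all white on the other

print("ln W (mixed)   =", round(math.log(W_mixed), 1))   # ~32.5; approaches N*ln2 for large N
print("S (mixed), J/K =", k_B * math.log(W_mixed))        # ~4.5e-22 J/K
print("S (separated)  =", k_B * math.log(W_separated))    # 0: fewer ways, less "disorder"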
Thermodynamics is the central branch of modern science. The most
interesting processes in Nature are irreversible. A good example is provided
by living organisms which consume chemical energy in the form of nutrients,
perform work and excrete waste as well as give off heat to the surroundings
without themselves undergoing changes; they represent what is called a
stationary or steady state. The boiling of an egg provides another example,
and still another is a thermocouple with a cold and a hot junction
connected to an electrical measuring instrument.
TIME, STRUCTURE AND FLUCTUATIONS
The problem of time in physics and chemistry is closely related to the
formulation of the second law of thermodynamics. Therefore another possible
subtitle could have been: “the macroscopic and microscopic aspects of the
second law of thermodynamics”. It is a remarkable fact that the second law of
thermodynamics has played in the history of science a fundamental role far
beyond its original scope. Suffice it to mention Boltzmann’s work on kinetic
theory, Planck’s discovery of quantum theory or Einstein’s theory of
spontaneous emission, which were all based on the second law of
thermodynamics. It was the main thesis of I. Prigogine's Nobel Prize lecture
that we are only at the beginning of a new development of biology, theoretical
chemistry and physics in which thermodynamic concepts will play an even
more basic role. Because of the complexity of the subject Prigogine had to
limit himself mainly to conceptual problems, both macroscopic and
microscopic. For example, from the macroscopic point of view classical
thermodynamics has largely clarified the concept of equilibrium structures
such as crystals. Are most types of “organisations” around us of this nature?
It is enough to ask such a question to see that the answer is negative.
Obviously in a town, in a living system, we have a quite different type of
functional order. To obtain a thermodynamic theory for this type of structure
we have to show that non-equilibrium may be a source of order. Irreversible
processes may lead to a new type of dynamic state of matter, which Prigogine called “dissipative structures”; he worked out the thermodynamic theory of such structures.
ENTROPY PRODUCTION
At the very core of the second law of thermodynamics we find the basic distinction
between “reversible” and “irreversible processes”. This leads ultimately to the
introduction of entropy S and the formulation of the second law of thermodynamics.
The classical formulation due to Clausius refers to isolated systems exchanging
neither energy nor matter with the outside world. The second law then merely
ascertains the existence of a function, the entropy S, which increases monotonically
until it reaches its maximum at the state of thermodynamic equilibrium.
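For later reference, the modern formulation used by Prigogine splits the entropy change of any system into an exchange term and a production term:

dS = d_{e}S + d_{i}S, \qquad d_{i}S \ge 0,

where d_eS is the entropy exchanged with the surroundings and d_iS is the entropy produced by irreversible processes inside the system; for an isolated system d_eS = 0 and the classical statement dS ≥ 0 is recovered.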
Although these problems have existed since the 19th century, the way they are formulated keeps changing together with the development of physics. The first formulation was: does classical statistical mechanics have its complete foundation in Newton's dynamical laws?
Nowadays another formulation is more interesting; in some sense it is the reverse formulation. Now we can formulate the problem of the connection and coexistence of dynamical and statistical laws in this way: which mathematical formalism for the basic probabilistic laws is most compatible, consistent and convenient for predicting the dynamical behaviour of a given physical system?
We propose in this work one such formulation, which is convenient for application to the problems of water on solid-state surfaces, first-order phase transitions, etc.
Example of applications: Probing water structures in nanopores by
tunnelling.
We study the effect of volumetric constraints on the structure and electronic
transport properties of distilled water in a nanopore with embedded
electrodes. Combining classical molecular dynamics simulations with
quantum scattering theory, we show that the structural motifs water assumes inside the pore can be probed directly by tunnelling. In particular, we show that the current does not follow a simple exponential curve at a critical pore diameter of about 8 Å; rather, it is larger than the one expected from
simple tunnelling through a barrier. This is due to a structural transition from
bulk-like to “nanodroplet” water domains. Our results can be tested with
present experimental capabilities to develop our understanding of water as a
complex medium at nanometre length scales.
Liquid water is a very common and abundant substance that is considered a
fundamental ingredient for life more than any other. Yet we do not fully
understand many of its properties, especially when we probe it at the nanometre
scale, although a lot of research has been done on this important system in this
regime. Some of the first experimental studies of water on the nanoscale have
been done using a scanning-tunnelling microscope (STM), in which the tunnelling
barrier height was found to be unusually low. This was hypothesized to be a
result of the three-dimensional nature of electron tunnelling in water. Some STM
experiments actually studied the tunnelling current as a function of distance to
understand the solid/liquid interface and found that the tunnelling current
oscillates with a period that agrees with the effective spacing of the Helmholtz
layers. Water has also been studied when encapsulated by single-walled carbon
nanotubes in which, via neutron scattering, the water was observed to form a
cylindrical “square-ice sheet” which enclosed a more freely moving chain of
molecules. These structures are related to the fact that these carbon nanotubes
have cylindrical symmetry and are hydrophobic. More recently, the dynamics of
water confined by hydrophilic surfaces were studied by means of inelastic X-ray
scattering, showing a phase change at a surface separation of 6 Å. Well above 6 Å there are two deformed surface layers that sandwich a layer of bulk-like water, but below 6 Å the two surface layers combine into one layer that switches between a localized “frozen” structure and a delocalized “melted” structure [4].
On the computational side, many molecular dynamics (MD) simulations have
been done to study water in a variety of environments. Of particular interest has
been the study of hydrophobic channels because in this case water has been
shown to escape from the channel altogether for entropic gain [5, 6]. However,
these structures, and in particular the formation of water nanodroplets, are
difficult to probe experimentally. Recent interest in fast DNA sequencing
approaches has been crucial to the advancement of novel techniques to probe
polymers in water environments at the nanometre scale. In particular, the
proposal to sequence DNA via tunnelling [7, 8] has been instrumental for the
development of sub-nanometre electrodes embedded into nanochannels [9–11].
These techniques open the door to investigating the properties of liquids
volumetrically constrained by several materials by relating the local structure of
the liquid to electrical (tunnelling) currents. We can take advantage of these
newly developed experimental techniques and propose the study of water in
nanopores with embedded electrodes. We find that indeed the structural motifs
water assumes inside pores of different diameters can be probed directly by
tunnelling. In fact, we predict that the tunnelling current does not follow a simple
exponential curve at a critical pore diameter of about 8 A as simple tunnelling
through a barrier would produce. Instead, water domains form a specific density
of states which in turn gives rise to these peculiar features. The findings can be
tested with the available experimental capabilities on similar systems.
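For orientation, the “simple exponential curve” used as the baseline is the usual one-dimensional tunnelling estimate; a minimal sketch (the effective barrier height below is an assumed illustrative parameter, not a value fitted to the study):

# Baseline for comparison: 1D tunnelling transmission ~ exp(-2*kappa*d),
# with kappa = sqrt(2*m*phi)/hbar for a rectangular barrier of height phi.
import math

hbar = 1.0545718e-34    # J*s
m_e  = 9.1093837e-31    # electron mass, kg
eV   = 1.6021766e-19    # J per electronvolt

def relative_current(d_angstrom, phi_eV=1.0):
    """Relative tunnelling current through a bare gap of width d (barrier phi assumed)."""
    kappa = math.sqrt(2.0 * m_e * phi_eV * eV) / hbar    # inverse decay length, 1/m
    return math.exp(-2.0 * kappa * d_angstrom * 1e-10)

for d in (4, 6, 8, 10):                                  # gap widths in angstroms
    print(f"d = {d:2d} Å  ->  I/I0 ~ {relative_current(d):.1e}")
# A measured current lying noticeably above this curve near d ~ 8 Å is the signature,
# in the discussion above, of water-derived states in the gap (the bulk-to-nanodroplet transition).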