

archived as http://www.stealthskater.com/Documents/TGDBlog_2015.doc
(also …TGDBlog_2015.pdf)
more from Matti Pitkänen is on the /Pitkanen.htm page
note: because important websites are frequently "here today but gone tomorrow", the following was
archived from http://matpitka.blogspot.com on November 16, 2015. This is NOT an attempt to
divert readers from the aforementioned website. Indeed, the reader should only read this backup copy if the updated original cannot be found at the original author's site.
[note: listed in descending date order (latest is listed first and oldest is listed last) ]
11/18/2015 - http://matpitka.blogspot.com/2015/11/does-riemann-zeta-code-for-generic.html#comments
Does Riemann Zeta Code for Generic Coupling Constant Evolution?
Understanding coupling constant evolution and predicting it is one of the greatest challenges of TGD. Over the years I have made several attempts to understand it.
1. The first idea dates back to the discovery of WCW Kähler geometry defined by the Kähler function given by the Kähler action (this happened around 1990) (see this). The only free parameter of the theory is the Kähler coupling strength αK, postulated to be analogous to a critical temperature. Whether only a single value or an entire spectrum of values of αK is possible remained an open question.
About a decade ago, I realized that the Kähler action is complex, receiving a real contribution from space-time regions of Euclidian signature of the metric and an imaginary contribution from the Minkowskian regions. Euclidian regions would give the Kähler function and Minkowskian regions the analog of the QFT action of the path integral approach, defining also a Morse function. Zero energy ontology (ZEO) (see this) led to the interpretation of quantum TGD as a complex square root of thermodynamics, so that the vacuum functional as the exponent of Kähler action could be identified as a complex square root of the ordinary partition function.
The Kähler function would correspond to the real contribution to the Kähler action from Euclidian space-time regions. This led me to ask whether also the Kähler coupling strength might be complex, in analogy with the complexification of the gauge coupling strength in theories allowing magnetic monopoles. Complex αK could explain CP breaking. I proposed that an instanton term, also reducing to a Chern-Simons term, could be behind CP breaking.
2. p-Adic mass calculations from two decades ago (see this) inspired the idea that length scale evolution is discretized, so that the real version of the p-adic coupling constant would have a discrete set of values labelled by p-adic primes. The simple working hypothesis was that the Kähler coupling strength is a renormalization group (RG) invariant and only the weak and color coupling strengths depend on the p-adic length scale. The alternative ad hoc hypothesis considered was that the gravitational constant is RG invariant. I made several number theoretically motivated ad hoc guesses about coupling constant evolution, in particular a guess for the formula for the gravitational coupling in terms of the Kähler coupling strength, the action for a CP2 type vacuum extremal, and the p-adic length scale as a dimensional quantity (see this). Needless to say, these attempts were premature and ad hoc.
3. The vision about a hierarchy of Planck constants heff = n×h and the connection heff = hgr = GMm/v0, where v0 < c = 1 has dimensions of velocity (see this), forced me to consider very seriously the hypothesis that the Kähler coupling strength has a spectrum of values in one-to-one correspondence with p-adic length scales. A separate coupling constant evolution associated with heff, induced by αK ∝ 1/ℏeff ∝ 1/n, looks natural and was motivated by the idea that Nature is theoretician friendly: when the situation becomes non-perturbative, Mother Nature comes to the rescue and an heff-increasing phase transition makes the situation perturbative again.
Quite recently, the number theoretic interpretation of coupling constant evolution (see this or this) in terms of a hierarchy of algebraic extensions of rational numbers inducing those of p-adic number fields encouraged me to think that 1/αK has a spectrum labelled by primes and values of heff. Two coupling constant evolutions suggest themselves: they could be assigned to length scales and angles, which in the p-adic sectors are necessarily discretized and describable using only algebraic extensions involving roots of unity, replacing angles with discrete phases.
4. A few years ago, the relationship between TGD and GRT was finally understood (see this). GRT space-time is obtained as an approximation as the sheets of the many-sheeted space-time of TGD are replaced with a single region of space-time. The gravitational and gauge potentials of the sheets add together, so that linear superposition corresponds geometrically to a set theoretic union. This forced me to consider the possibility that gauge coupling evolution takes place only at the level of the QFT approximation and αK has only a single value. This is nice, but if true, one does not have much to say about the evolution of gauge coupling strengths.
5. The analogy of the Riemann zeta function with the partition function of a complex square root of thermodynamics suggests that the zeros of zeta have an interpretation as inverses s = 1/β of complex temperatures. Also 1/αK is analogous to a temperature. This led to a radical idea to be discussed in detail in the sequel.
Could the spectrum of 1/αK reduce to that for the zeros of Riemann zeta or, more plausibly, to the spectrum of poles of the fermionic zeta ζF(ks) = ζ(ks)/ζ(2ks), which for k = 1/2 gives poles at the zeros of zeta and at the point s = 2? ζF is motivated by the fact that fermions are the only fundamental particles in TGD, and by the fact that poles of the partition function are naturally associated with quantum criticality, whereas the vanishing of ζ and its varying sign allow no natural physical interpretation.
The poles of ζF(s/2) define the spectrum of 1/αK and correspond to the zeros of ζ(s) and to the pole of ζ(s/2) at s = 2. The trivial poles for s = 2n, n = 1, 2, ..., correspond naturally to the values of 1/αK for different values of heff = n×h with n an even integer. Complex poles would correspond to ordinary QFT coupling constant evolution. The zeros of zeta in increasing order would correspond to p-adic primes in increasing order, and the UV limit to the smallest value of the poles at the critical line. One can distinguish the pole s = 2 as the extreme UV limit at which the QFT approximation fails totally. The CP2 length scale indeed corresponds to the GUT scale.
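The pole structure described above can be inspected numerically. A minimal sketch (assuming the mpmath library is available; ζF is just the ratio defined in the text):

```python
import mpmath as mp

# Fermionic zeta as defined in the text: zeta_F(s) = zeta(s) / zeta(2s).
def zeta_F(s):
    return mp.zeta(s) / mp.zeta(2 * s)

# zeta_F(s/2) = zeta(s/2) / zeta(s): its poles sit at the zeros of zeta(s)
# (plus the point s = 2, where zeta(s/2) itself has its pole).
rho = mp.zetazero(1)          # first nontrivial zero, 1/2 + 14.1347...i
print(abs(mp.zeta(rho)))      # essentially 0: zeta vanishes at rho
print(abs(zeta_F(rho / 2)))   # very large: pole of zeta_F(s/2) at s = rho
```

This only illustrates the analytic structure; it does not test the physical identification of the poles with values of 1/αK.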
6. One can test this hypothesis. 1/αK corresponds to the electroweak U(1) coupling strength, so that the identification 1/αK = 1/αU(1) makes sense. One also knows a lot about the evolutions of 1/αU(1) and of the electromagnetic coupling strength 1/αem = 1/[cos²(θW)αU(1)]. What does this predict?
It turns out that at the p-adic length scale k = 131 (p ≈ 2^k by the p-adic length scale hypothesis, which can now be understood number theoretically (see this)), the fine structure constant is predicted with 0.7 per cent accuracy if the Weinberg angle is assumed to have its value at atomic scale! It is difficult to believe that this could be a mere accident, because also the predicted evolution of αU(1) is qualitatively correct. Note however that for k = 127, labelling the electron, one can reproduce the fine structure constant with a Weinberg angle deviating about 10 per cent from the measured value. Both models will be considered.
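The electroweak relation quoted above, αem = cos²(θW)·αU(1), can be made concrete with assumed input numbers (the measured 1/αem and an illustrative low-energy value of sin²θW; neither number is a TGD output):

```python
# Electroweak relation used in the text: alpha_em = cos^2(theta_W) * alpha_U(1),
# i.e. 1/alpha_U(1) = cos^2(theta_W) * (1/alpha_em).
# Input values are assumptions for illustration:
alpha_em_inv = 137.035999      # measured inverse fine structure constant
sin2_thetaW  = 0.2397          # illustrative low-energy weak mixing angle

cos2_thetaW  = 1.0 - sin2_thetaW
alpha_U1_inv = cos2_thetaW * alpha_em_inv
print(alpha_U1_inv)            # roughly 104: the 1/alpha_U(1) the model must hit
```

This is only the bookkeeping step connecting the two coupling strengths; the TGD prediction itself comes from the pole spectrum discussed above.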
7. What about the evolution of the weak, color, and gravitational coupling strengths? Quantum criticality suggests that the evolution of these coupling strengths is universal and independent of the details of the dynamics. Since one must be able to compare various evolutions and combine them together, the only possibility seems to be that the spectra of gauge coupling strengths are given by the poles of ζF(w), but with the argument w = w(s) obtained by a global conformal transformation of the upper half plane, that is a Möbius transformation (see this) with real coefficients (an element of GL(2,R)), so that one has ζF((as+b)/(cs+d)). Rather general arguments force it to be an element of GL(2,Q), GL(2,Z), or maybe even SL(2,Z) (ad-bc = 1) satisfying additional constraints. Since TGD predicts several scaled variants of the weak and color interactions, these copies could perhaps be parameterized by some elements of SL(2,Z) and by a scaling factor K.
Could one understand the general qualitative features of the color and weak coupling constant evolutions from the properties of the corresponding Möbius transformation? On the critical line there can be no poles or zeros, but could asymptotic freedom be assigned with a pole of cs+d and color confinement with the zero of as+b on the real axis? A pole makes sense only if the Kähler action for the preferred extremal vanishes. Vanishing can occur, and does so for massless extremals characterizing a conformally invariant phase. For a zero of as+b the vacuum functional would be equal to one unless the Kähler action is allowed to be infinite: does this make sense?
One can, however, hope that the values of the parameters allow one to distinguish between the weak and color interactions. It is certainly possible to get an idea about the values of the parameters of the transformation, and one ends up with a general model predicting the entire electroweak coupling constant evolution successfully.
To sum up, the big idea is the identification of the spectra of coupling constant strengths as poles of ζF((as+b)/(cs+d)) identified as a complex square root of a partition function, with motivation coming from ZEO, quantum criticality, and super-conformal symmetry; the discretization of the RG flow made possible by the p-adic length scale hypothesis p ≈ 2^k, k prime; and the assignment of the complex zeros of ζ to p-adic primes in increasing order. These assumptions reduce the coupling constant evolution to four real rational or integer valued parameters (a,b,c,d). One can say that one of the greatest challenges of TGD has been overcome.
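As a side note on the hypothesis p ≈ 2^k with k prime: the electron's k = 127 mentioned above corresponds to the Mersenne prime M127 = 2^127 - 1, while a prime k does not by itself guarantee a Mersenne prime. A quick probabilistic primality check (a generic Miller-Rabin sketch, nothing TGD-specific):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

print(is_probable_prime(2**127 - 1))   # True:  M127, labelling the electron
print(is_probable_prime(2**11 - 1))    # False: 2047 = 23 * 89, prime k is not enough
```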
For details see the article Does Riemann Zeta Code for Generic Coupling Constant Evolution?.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:31 PM
9 Comments:
At 2:50 PM,
Anonymous said...
The part I have doubts about is the validity of assigning a prime to each zero. I really doubt there is a one-to-one correspondence... unless I missed something.
--Stephen
At 7:19 PM,
[email protected] said...
I do not see this as a question of whether to believe or not.
Number theoretical universality (one of the basic principles of Quantum-TGD) states that for a given prime p, p^iy exists for some set C(p) of zeros y. The strong form (supported now by the stunning success of the identification of zeros as inverses of the U(1) coupling constant strength) states that the correspondence is 1-1: C(p) contains only one zero.
Another support for the hypothesis is that it works: it predicts the U(1) coupling at electron scale with an accuracy of 0.7 per cent without any further assumptions, and it leads to a parametrisation of generic coupling constant evolution in terms of a real Möbius transformation with rational or integer parameters. This is an incredibly powerful prediction: number theoretical universality would provide a highly detailed overall view about physics in all length scales. No one has dared even to dream of anything like this.
Dyson speculated that zeros and primes and their powers form quasicrystals. An ordinary crystal is such, and zeros and primes would be analogous to a lattice and its reciprocal lattice and therefore naturally in 1-1 correspondence.
At 11:35 PM,
Anonymous said...
So the relation is the ordering in which they appear and not some other permutation?
At 1:55 PM,
Stephen said...
https://en.wikipedia.org/wiki/Fej%C3%A9r_kernel
Unitary Correlations and the Fejér kernel
https://statistics.stanford.edu/sites/default/files/2001-01.pdf
You might be on to something here from what I can tell with my mathematical
understanding...
Wikipedia has something about "almost-Hermitian operators". I think this might be found in the last section where I briefly mention the possibility:
http://vixra.org/pdf/1510.0475v6.pdf on the last page in section 2.3
𝒥^(2,+)x(t)={(p,X)|x(t+z)⩽x(t)+p⋅z+(X:z⊗z)/2+o(|z|^2) as z→0}
𝒥^(2,-)x(t)={(p,X)|x(t+z)⩾x(t)+p⋅z+(X:z⊗z)/2+o(|z|^2) as z→0}
what I think is so cool is that the error term just so happens to be small-o |z|^2. Maybe the 'approximation error' is also a (randomly... at what level?) complex wavefunction?
At 6:58 PM,
Matti Pitkanen said...
Ordering by size is essential for obtaining realistic coupling constant evolution.
About almost-Hermitian operators. The problem of the standard approach is that the zeros s = 1/2 + iy are not eigenvalues of a unitary operator. In Zero Energy Ontology the wave function is replaced with a complex square root of a density matrix and the vacuum functional with a complex square root of a partition function. This interpretational problem disappears. It is a pity that a too restricted view about quantum theory leads to misguided attempts to understand RH in terms of physical analogies. But this is not my problem;-).
At 7:06 PM,
[email protected] said...
The Fejér kernel seems to be an average of approximations to the delta function at zero. Easier to remember.
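That remark can be checked numerically: the Fejér kernel is exactly the arithmetic mean of the Dirichlet kernels D_0, ..., D_n, each of which is an approximation to the delta function at zero. A minimal sketch:

```python
import math

def dirichlet(k, x):
    """Dirichlet kernel D_k(x) = sin((k + 1/2) x) / sin(x / 2)."""
    return math.sin((k + 0.5) * x) / math.sin(x / 2)

def fejer_avg(n, x):
    """Fejer kernel as the average of the Dirichlet kernels D_0..D_n."""
    return sum(dirichlet(k, x) for k in range(n + 1)) / (n + 1)

def fejer_closed(n, x):
    """Closed form: F_n(x) = (1/(n+1)) * (sin((n+1)x/2) / sin(x/2))^2."""
    return (math.sin((n + 1) * x / 2) / math.sin(x / 2)) ** 2 / (n + 1)

n, x = 12, 0.7
print(abs(fejer_avg(n, x) - fejer_closed(n, x)))   # ~ 0: the two forms agree
```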
11/04/2015 - http://matpitka.blogspot.com/2015/11/about-fermi-dirac-and-bose-einstein.html#comments
About Fermi-Dirac and Bose-Einstein statistics, negentropic entanglement,
Hawking radiation, and firewall paradox in TGD framework
In quantum field theories (QFTs) defined in 4-D Minkowski space, the spin statistics theorem forces the spin-statistics connection: fermions/bosons with half-odd-integer/integer spin correspond to totally antisymmetric/symmetric states. TGD is not a QFT, and one is forced to challenge the basic assumptions of 4-D Minkowskian QFT.
1. In the TGD framework the fundamental reason for the fermionic statistics is the anticommutation relations for the gamma matrices of the "World of Classical Worlds" (WCW). This naturally gives rise to a geometrization of the anticommutation relations of induced spinor fields at space-time surfaces. The only fundamental fields are second quantized space-time spinors, which implies that the statistics of bosonic states is induced from the fermionic one, since bosons can be regarded as many-fermion states. At WCW level spinor fields are formally classical.
The strong form of holography (SH), forced by the strong form of General Coordinate Invariance (SGCI), implies that induced spinor fields are localized at string world sheets. The 2-dimensionality of the basic objects (string world sheets and partonic 2-surfaces inside space-time surfaces) makes possible braid statistics, which is more complex than the ordinary one. The phase corresponding to a 2π rotation is not ±1 but a root of unity, and the phase can even be replaced with a non-commuting analog of a phase factor.
What about the ordinary statistics of QFTs, expected to hold true at the level of the imbedding space H = M4×CP2? Can one deduce it from the q-variants of the anticommutation relations for fermionic oscillator operators, perhaps by a suitable transformation of the oscillator operators? Is Fermi/Bose statistics at imbedding space level an exact notion, or does it emerge only at the QFT limit when many-sheeted space-time sheets are lumped together and approximated as a slightly curved region of empty Minkowski space?
2. Zero energy ontology (ZEO) means that physical systems are replaced by pairs of positive and negative energy states defined at the opposite boundaries of a causal diamond (CD). CDs form a fractal hierarchy. Does this mean that the usual statistics must be restricted to coherence regions defined by CDs rather than assumed in the entire H? This assumption looks reasonable, since it would soften the rather paradoxical looking implications of statistics and quantum identity for particles.
Interesting questions relate to the notion of negentropic entanglement (NE).
1. Two states are negentropically entangled if their density matrix is proportional to a projection operator and thus proportional to a unit matrix. This requires also algebraic entanglement coefficients. For bipartite entanglement this is guaranteed if the entanglement coefficients form a matrix proportional to a unitary matrix. The so-called quantum monogamy theorem (see ___) has a highly non-trivial implication for NE. In its mildest form, it states that if two entangled systems are in a 2-particle state which is pure, the entire system must be de-entangled from the rest of the Universe. As a special case, this applies to NE. A stronger form of monogamy states that two maximally entangled qubits cannot have entanglement with a third system. It is essential that one has qubits. For 3-valued color, one can have maximal entanglement for 3-particle states (baryons). For instance, the negentropic entanglement associated with N identical fermions is maximal for subsystems in the sense that the density matrix is proportional to a projection operator.
Quantum monogamy could be highly relevant for the understanding of living matter. Biology is full of binary structures (DNA double strand, lipid bi-layer of cell membrane, epithelial cell layers, left and right parts of various brain nuclei and hemispheres, right and left body parts, married couples, ...). Could a given binary structure correspond at some level to a negentropically entangled pure state, and could the system at this level be conscious? Could the loss of consciousness accompany the formation of a system consisting of a larger number of negentropically entangled systems, so that the 2-particle system ceases to be a pure state and is replaced by a larger pure state? Could something like this take place during sleep?
2. NE seems to relate also to statistics. Totally antisymmetric many-particle states, with permutations of the states in the tensor product regarded as different states, can be regarded as negentropically entangled for any subsystem, since the density matrix is a projection operator. Here one could of course argue that the configuration space must be divided by the permutation group of n objects, so that permutations do not represent different states. It is difficult to decide which interpretation is correct, so let us consider the first one.
The traced-out states for subsystems of a many-fermion state are not pure. Could fermionic statistics emerge at imbedding space level from the braid statistics for fundamental fermions and the Negentropy Maximization Principle (NMP) favoring the generation of NE? Could a CD be identified as a region inside which the statistics has emerged? Are also more general forms of NE possible, assignable to more general representations of the permutation group? Could ordinary fermions and bosons also be in states for which the entanglement is not negentropic and does not have special symmetry properties? Quantum monogamy plus the purity of the state of a conscious system demands a decomposition into de-entangled subsystems: could one identify them as CDs? Does this demand that the entanglement due to statistics is present only inside CDs/selves?
3. At space-time level, space-time sheets (or space-like 3-surfaces, or partonic 2-surfaces and string world sheets by SH) serve as natural candidates for conscious entities. At imbedding space level, various space-time sheets inside a given CD would contain elementary particles having NE forced by statistics. But doesn't this imply that space-time sheets cannot define separate conscious entities?
The notion of finite resolution for quantum measurement, cognition, and consciousness suggests a manner to circumvent this conclusion. One has entanglement hierarchies assignable to the length scale hierarchies defined by p-adic length scales, the hierarchy of Planck constants, and the hierarchy of CDs. Entanglement is defined in a given resolution, and the key prediction is that two systems unentangled in a given resolution can be entangled in an improved resolution. The space-time correlate for this kind of situation is space-time sheets which are disjoint in a given resolution but contain topologically condensed smaller space-time sheets connected by thin flux tubes serving as correlates for entanglement.
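The claim in item 1 that the single-particle density matrix of a state of identical fermions is proportional to a projection operator can be checked in miniature: two fermions in three modes, built as a Slater determinant from two orthonormal orbitals (the orbitals are arbitrary illustrative choices):

```python
import math

# Two orthonormal single-particle orbitals in a 3-mode space.
phi0 = [1 / math.sqrt(2),  1 / math.sqrt(2), 0.0]
phi1 = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]

# Antisymmetric (Slater determinant) two-particle amplitude Psi(i, j).
def psi(i, j):
    return (phi0[i] * phi1[j] - phi1[i] * phi0[j]) / math.sqrt(2)

# Single-particle reduced density matrix rho(i, k) = sum_j Psi(i,j) Psi(k,j).
rho = [[sum(psi(i, j) * psi(k, j) for j in range(3)) for k in range(3)]
       for i in range(3)]

# For a Slater determinant of N fermions, rho = P / N with P a projector,
# so (N * rho)^2 = N * rho.  Here N = 2:
P = [[2 * rho[i][k] for k in range(3)] for i in range(3)]
P2 = [[sum(P[i][m] * P[m][k] for m in range(3)) for k in range(3)]
      for i in range(3)]
print(max(abs(P2[i][k] - P[i][k]) for i in range(3) for k in range(3)))  # ~ 0
```

The projector property P² = P is exactly the "density matrix proportional to a projection operator" condition that defines NE in the text.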
The paradoxical looking prediction is that at a given level of the hierarchy, characterized by the size scale of the CD or space-time surface, two systems can be un-entangled although their subsystems are entangled. This is impossible in standard quantum theory. If the sharing of mental images by NE between subselves of separate selves makes sense, the contents of consciousness are not completely private, as often assumed in theories of consciousness. For instance, stereo vision could rely on the fusion and sharing of visual mental images assignable to the left and right brain hemispheres. This generalizes to a notion of stereo consciousness and makes it natural to consider the possibility of shared collective consciousness. An interpretation suggesting itself is that selves correspond to space-time sheets and collective levels of consciousness to CDs.
Encouragingly, dark elementary particles would provide a basic example about the sharing of mental images. Dark variants of elementary particles could be negentropically entangled by the statistics condition in macroscopic scales and form part of a kind of stereo consciousness, a kind of pool of fundamental mental images shared by conscious entities. This could explain why, for instance, the secondary p-adic time scale for the electron, equal to T = 0.1 seconds, corresponds to a fundamental biorhythm.
Quantum monogamy relates also to the firewall problem of blackhole physics.
4. There are two entanglements involved. There is the entanglement between Alice, entering the blackhole, and Bob, remaining outside it. There is also the entanglement between the blackhole and the Hawking radiation, implied if Hawking radiation is only apparently thermal and the blackhole plus radiation defines a pure quantum state. If so, Hawking evaporation does not lead to a loss of information. In this picture the blackhole and the Hawking radiation are assumed to form a single pure system.
Since Alice enters the blackhole (or its boundary), one can identify Alice as part of the modified blackhole, entangled with the original blackhole and forming a pure state. Thus Alice would form an entangled pure quantum state with both Bob and the Hawking blackhole. This is in conflict with quantum monogamy. The assumption that Alice and the blackhole are un-entangled does not look reasonable. But why could Alice, Bob, and the blackhole not form a pure entangled 3-particle state or belong to a larger entangled state?
In the TGD framework, the firewall problem seems to be mostly due to the use of poorly defined terms. The first poorly defined notion is the blackhole as a singularity of GRT. In the TGD framework the analogs of blackhole interiors are space-time regions with Euclidian signature of the induced metric, and they accompany all physical systems. A second poorly defined notion is that of information. In the TGD framework one can define a measure for conscious information using p-adic mathematics, and it is non-vanishing for NE. This information always characterizes a two-particle system, either as a pure system or as part of a larger system. Thermodynamical negentropy characterizes a single particle in an ensemble, so that the two notions are not equivalent albeit closely related.
Further, in the case of a blackhole one cannot speak of information going down into the blackhole with Alice, since information is associated with a pair formed by Alice and some other system outside the blackhole, or perhaps at its surface. Finally, the notion of the hierarchy of Planck constants allows NE even in astrophysical scales. Therefore entangling Bob, Alice, and the TGD counterpart of the blackhole is not a problem. Hence the firewall paradox seems to dissolve.
5. The hierarchy of Planck constants heff = n×h connects also with dark quantum gravity via the identification heff = hgr, where hgr = GMm/v0, v0/c ≤ 1, is the gravitational Planck constant. v0/c < 1 is a velocity parameter characterizing the system formed by the central mass M and a small mass m, say an elementary particle.
This allows us to generalize the notion of Hawking radiation (see this, this, and this), and one can speak about a dark variant of Hawking radiation and assign it to any material object rather than only to blackholes. The generalized Hawking temperature is proportional to the mass m of the particle at the gravitational flux tubes of the central object and to the ratio RS/R of the Schwarzschild radius RS and the radius R of the central object. Amazingly, the Hawking temperature for solar Hawking radiation in the case of the proton corresponds to physiological temperature. This finding conforms with the vision that bio-photons result from dark photons with heff = hgr. Dark Hawking radiation could be very relevant for living matter in the TGD Universe!
Even more (see this and this), one ends up via SH with the suggestion that the Hawking temperature equals the Hagedorn temperature assignable to flux tubes regarded as string-like objects! This assumption fixes the value of the string tension and is highly relevant for living matter in the TGD Universe, since it guarantees that subsystems can become time-reversed with high probability in state function reduction. The frequent occurrence of time-reversed mental images makes possible long term memory and planned action, and one ends up with a thermodynamics of consciousness. This is actually not new: living systems are able to defy the second law, and the notion of syntropy was introduced a long time ago by Fantappiè.
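An order-of-magnitude sketch of hgr = GMm/v0 for the Sun-proton pair shows how large the effective quantum numbers n = heff/h become; the value v0 ≈ c/2^11 is an assumption borrowed from the planetary-orbit application of hgr, not derived here:

```python
# Order-of-magnitude estimate of h_gr = G*M*m / v0 for the Sun-proton pair.
G     = 6.674e-11       # m^3 kg^-1 s^-2
hbar  = 1.055e-34       # J s
c     = 2.998e8         # m / s
M_sun = 1.989e30        # kg
m_p   = 1.673e-27       # kg

v0 = c / 2**11                  # assumed velocity parameter
h_gr = G * M_sun * m_p / v0     # has units of J s, same as hbar
n = h_gr / hbar                 # h_eff = n * h: "darkness" of the phase
print(f"{n:.3e}")               # enormous n: a macroscopically quantum phase
```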
6. Does one get rid of the firewall paradox in the TGD Universe? It is difficult to answer the question, since it is not at all clear that there exists any paradox anymore. For instance, the assumption that a blackhole represents a pure state looks in the TGD framework rather ad hoc, and the NE between the blackhole and other systems outside it looks rather natural if one accepts the hierarchy of Planck constants.
It would however seem to me that the TGD analog of dark Hawking radiation along flux tubes is quite essential for communications and, even more, for what it is to be Alice and Bob, and even for their existence! The flux tube connections of living systems to the central star and planets could be an essential part of what it is to be alive, as I have already suggested earlier with the inspiration coming from heff = hgr. In this framework biology and astrophysics would meet in a highly non-trivial manner.
posted by Matti Pitkanen @ 4:08 AM
8 Comments:
At 10:27 AM,
Anonymous said...
See section 4, entitled "Physical Perspectives", of http://arxiv.org/abs/0903.4321 "Eigenvalue Density, Li's Positivity, and the Critical Strip", where the Hamilton-Jacobi (no Bellman here!! :) conditions of classical mechanics form the basis of the quantization conditions. The idea is related to the "H=xp" Hamiltonian, with a twist. From the paper: "This tells us that the Riemann ξ-function (symmetrized Zeta function), up to a factor which does not vanish in the critical strip, is the Mellin transform of a Fermi–Dirac distribution". Can you please have a look and comment?
--crow
At 11:29 PM,
[email protected] said...
A nice paper, written in such a manner that a physicist understands.
*The finding that zeta can be written as a WKB wave function, as a product of modulus and phase with the phase given by the exponent of an action defined by the function S(z) giving the number of zeros at the critical line, is very interesting, since at the zeros of zeta S(z) as the number of zeros is a quantized action!
*The authors propose an interpretation of the zeros as conformal weights: this is natural, since the angle is generalized to a complex coordinate z and the integrals INT p.dq along a closed classical orbit become residue integrals, with S interpreted as a complex 1-form. The interpretation of zeros as conformal weights is also the TGD interpretation.
*The authors mention also the doubling formula of zeta and deduce that Riemann zeta is proportional to the Mellin transform of the fermionic zeta function. Here I did not understand at all. I see it as the Mellin transform of the inverse 1/(1+exp(x)) of the fermionic zeta, not of the fermionic zeta 1+exp(x) itself!! My first reaction is that they have made a mistake. I cannot hope that I am wrong, since in that case I myself would have made a horrible blunder!;-).
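For reference, the standard Mellin-transform identity for the Fermi-Dirac factor 1/(e^x + 1), which is what the discussion above turns on, can be checked numerically (assuming the mpmath library):

```python
import mpmath as mp

# Standard identity: the Mellin transform of 1/(exp(x) + 1) equals
# Gamma(s) * (1 - 2^(1-s)) * zeta(s)  (Gamma(s) times the Dirichlet eta function).
def mellin_fd(s):
    return mp.quad(lambda x: x**(s - 1) / (mp.exp(x) + 1), [0, mp.inf])

s = mp.mpf(2)
lhs = mellin_fd(s)
rhs = mp.gamma(s) * (1 - 2**(1 - s)) * mp.zeta(s)
print(lhs, rhs)   # both equal pi^2 / 12 = 0.8224...
```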
Let me explain why I believe that I am right. I have proposed a different interpretation for the fermionic zeta, based on the fact that the fermionic zeta ζF equals Z_F = Prod_p (1+p^-s) and Riemann zeta equals Z_B = Prod_p 1/(1-p^-s), as formal partition functions of a number theoretic many-fermion/many-boson system with energies coming as multiples of log(p) (for fermions only occupation n = 0, 1 is of course possible, for bosons all values n = 0, 1, 2, ... are possible, and this gives the form of Z_F and Z_B). The product runs over all primes.
In this framework, the poles (not zeros!!) of the fermionic zeta ζF(s) = ζ(s)/ζ(2s) (this identity is trivial to deduce, do it!!) correspond to the trivial zeros of zeta, the pole of zeta at s=1, and to trivial poles at negative integers.
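The identity ζF(s) = ζ(s)/ζ(2s) = Prod_p (1+p^-s) follows because each Euler factor satisfies (1 - p^-2s)/(1 - p^-s) = 1 + p^-s. A quick stdlib-only numerical check at s = 3:

```python
# Check zeta(s)/zeta(2s) = prod_p (1 + p^-s) by truncated sums and products.
def zeta_sum(s, terms=200000):
    """Truncated Dirichlet series for zeta(s), s > 1."""
    return sum(n ** -s for n in range(1, terms))

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, ok in enumerate(sieve) if ok]

s = 3.0
lhs = zeta_sum(s) / zeta_sum(2 * s)      # zeta(3) / zeta(6)
rhs = 1.0
for p in primes_up_to(1000):
    rhs *= 1 + p ** -s                   # truncated Euler product for zeta_F
print(lhs, rhs)                          # both ~ 1.1816
```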
The interpretation of the **poles** (much more natural in physics, and even more so in TGD, where fundamental particles involve only fermions!) of ζF as conformal weights associated with the representations of the extended super-conformal symmetry associated with the super-symplectic algebra, defining the symmetries of TGD at the level of the "world of classical worlds", is natural.
"Conformal confinement", stating that the sum of the conformal weights is real, is a natural assumption in this picture.
The fact that the superconformal algebra has a fractal structure implies a direct connection with quantum criticality: an infinite hierarchy of symmetry breakings to sub-symmetries isomorphic to the original one!! Needless to say, the conformal structure is infinitely richer than the ordinary one, since the algebra in question has an infinite number of generators given by all zeros of zeta rather than a handful with conformal weights n = -2, ..., +2. A kind of Mandelbrot fractal realized physically.
*The problem with all attempts to interpret the zeros of zeta is that the zeros are not purely imaginary: they have the troublesome real part Re(s)=1/2. This led me long ago to consider coherent states instead of eigenstates of a Hamiltonian in my proposal for a strategy to prove the Riemann hypothesis.
Also the interpretation as a partition function suffers from the same disease: a genuine partition function should be real.
In the TGD framework, the solution of the problem is zero energy ontology (ZEO). Quantum theory is a "complex square root" of thermodynamics, which means that the partition function indeed becomes a complex entity having also a phase. The ordinary partition function is the modulus squared of it.
At 11:10 AM,
Anonymous said...
yes... if the authors had done the same analysis with the Hardy Z function (it is totally real valued when its parameter is real), they would have gotten farther or made even deeper insights.
I will think more about the Mellin transform aspect, I am quite comfortable with them by
now.
[MP] *The finding that zeta can be written as a WKB wave function - a product of modulus and phase, with the phase given by the exponent of an action defined by the function S(z) counting the number of zeros on the critical line - is very interesting, since at the zeros of zeta S(z), as a number of zeros, is a quantized action!
[SC] Yes, it is quite cool! I am adding more content related to the Hamilton-Jacobi
equations to my paper on this topic at http://vixra.org/abs/1510.0475
p.s. see page 5 of the paper at http://www.math.waikato.ac.nz/~kab/papers/zeta2.pdf . The flow near the zeros is graphed... I think, if you look at the appendix of my paper at http://vixra.org/abs/1510.0475
I suggest an extension of the Berry-Keating Hamiltonian

H(x(t)) = (x(t)x'(t) + x'(t)x(t))/2 = -i(x(t)x'(t) + 1/2)

to something like

𝒥^(2,+)x(t) = {(p,X) | x(t+z) ⩽ x(t) + p⋅z + (X:z⊗z)/2 + o(|z|^2) as z→0}

and

𝒥^(2,-)x(t) = {(p,X) | x(t+z) ⩾ x(t) + p⋅z + (X:z⊗z)/2 + o(|z|^2) as z→0},

which are the sets called the second-order superjets and subjets of x(t), where p∈ℝ^n is the Jacobian of x(t∈Ω)∈C^0(Ω⊆ℝ^n) and X∈𝕊^n is the Hessian of x(t), 𝕊^n being the set of n×n symmetric matrices. The main idea is that the canonical momentum x˙(t) is replaced with the generalized pointwise derivative, which can be written as 𝒥^2x(t) in case both the superjet and the subjet exist and actually define the same set.
The "X" in that superjet and subjet definition should really be X_n, where n is the n-th Riemann zero. I think that the flow around each zero can be specified in terms of a symmetric operator X... it would algebraically/numerically encode the flow around each zero.
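The superjet condition can be illustrated numerically (my toy example, not from the paper): for a smooth scalar x(t), any pair (p, X) with p = x'(t) and X strictly above x''(t) satisfies the defining inequality for small z, since x(t+z) - [x(t) + p z + X z^2/2] = (x''(t) - X) z^2/2 + O(z^3) ≤ 0.

```python
# Toy check of the second-order superjet inequality for x(t) = sin(t).
import math

x, dx, ddx = math.sin, math.cos, lambda t: -math.sin(t)

t = 0.3
p = dx(t)
X = ddx(t) + 0.05  # any X strictly above the (here scalar) Hessian works

for k in range(1, 200):
    z = k * 1e-4 * (-1) ** k          # sample small z on both sides of 0
    lhs = x(t + z)
    rhs = x(t) + p * z + X * z ** 2 / 2
    assert lhs <= rhs + 1e-15, (z, lhs - rhs)
print("superjet inequality holds for all sampled z")
```

The 1e-15 slack only absorbs floating-point rounding; analytically the inequality is strict for small nonzero z.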
anyway... I'm not personally that interested, but it would be something someone else could do. I really want to work with the Hamilton-Jacobi aspects some more and try applying their analysis to the Z function, since that will get rid of some of the objects you stated. Hardy's Z function is most clearly expressed as Z(t) = exp(I*theta(t))*Zeta(1/2+I*t) with theta(t) = (lnGAMMA(1/4+I*t/2) - lnGAMMA(1/4-I*t/2))/(2*I) - (1/2)*ln(Pi)*t
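As a sketch of why Z is convenient (stdlib-only numerics of my own, not from the comment): theta(t) is the standard Riemann-Siegel theta, here via its asymptotic series, and zeta on the critical line is computed from the alternating eta series with repeated averaging. Z(t) comes out real, and its sign flips at each zero on the critical line.

```python
# Hardy's Z(t) = exp(i*theta(t)) * zeta(1/2 + i t) is real for real t.
import cmath, math

def zeta_crit(t, terms=120):
    # zeta(1/2 + i t) via the eta series with Euler-style repeated averaging
    s = complex(0.5, t)
    partial, total = [], 0j
    for n in range(1, terms + 1):
        total += (-1) ** (n - 1) * n ** (-s)
        partial.append(total)
    while len(partial) > 1:
        partial = [(a + b) / 2 for a, b in zip(partial, partial[1:])]
    return partial[0] / (1 - 2 ** (1 - s))

def theta(t):
    # Riemann-Siegel theta, asymptotic series (good for t of this size)
    return (t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 \
        + 1 / (48 * t) + 7 / (5760 * t ** 3)

def Z(t):
    return (cmath.exp(1j * theta(t)) * zeta_crit(t)).real

print(Z(14.0), Z(15.0))  # opposite signs: the first zero sits at t ~ 14.1347
```

The imaginary part of exp(i*theta(t))*zeta(1/2+it) stays at the level of the numerical error, which is the "totally real valued" property referred to above.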
At 4:56 PM,
[email protected] said...
The Mellin transform representation is of course completely OK mathematically. But the physical interpretation in terms of fermionic statistics seems utterly wrong. The fermionic partition function is the sum of n^-s, where n is a square-free integer, and I cannot see a connection with their representation.
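The square-free characterization can be checked directly (my own sketch, truncations ad hoc): expanding Prod_p (1+p^-s) picks each square-free integer exactly once, so the sum of n^-s over square-free n should equal zeta(s)/zeta(2s).

```python
# Sum over square-free n of n^-s versus zeta(s)/zeta(2s).

def squarefree_flags(N):
    # sieve out every multiple of a square k^2, k >= 2
    flags = [True] * (N + 1)
    k = 2
    while k * k <= N:
        flags[k * k::k * k] = [False] * len(flags[k * k::k * k])
        k += 1
    return flags

def zeta(s, terms=200000):
    return sum(n ** (-s) for n in range(1, terms + 1))

s = 3.0
N = 200000
flags = squarefree_flags(N)
sqfree_sum = sum(n ** (-s) for n in range(1, N + 1) if flags[n])
print(sqfree_sum, zeta(s) / zeta(2 * s))  # agree to many decimals
```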
At 7:54 PM,
Anonymous said...
More talk here http://empslocal.ex.ac.uk/people/staff/mrwatkin//zeta/physics2.htm
At 11:52 PM,
[email protected] said...
Corrected version of the comment, which contained something that did not belong in it.
########
I was thinking further about the idea that physically the poles of the fermionic zeta are the appropriate notion. The partition function would diverge at the poles, as intuition suggests. I want to explain this intuition more precisely.
Since the temperature interpreted as 1/s is not infinite, this means that one has an analog of the Hagedorn temperature, at which the degeneracy of states increases exponentially so as to compensate the exponential decrease of the Boltzmann weight, so that the partition function is a sum of an infinite number of terms approaching unity. The Hagedorn temperature relates by strong form of holography to magnetic flux tubes behaving as strings with an infinite number of degrees of freedom. One would have a quantum critical system. Supersymplectic invariance, etc.
The real part of the temperature is the real part of 1/s, given by T_R = (1/2)/(1/4+y^2), and approaches zero for large y, as it should. Also the imaginary part T_Im approaches zero. One has an infinite number of critical temperatures.
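This behavior of the complex temperature 1/s is elementary to verify (my sketch; the zero ordinates 14.134..., 21.022..., 25.010... are standard table values, assumed here):

```python
# For s = 1/2 + i y, the "complex temperature" T = 1/s has
# Re T = (1/2)/(1/4 + y^2) and Im T = -y/(1/4 + y^2), both -> 0 as y grows.
zeros_y = [14.134725, 21.022040, 25.010858]  # gamma_1..gamma_3 from tables

for y in zeros_y:
    s = complex(0.5, y)
    T = 1 / s
    assert abs(T.real - 0.5 / (0.25 + y ** 2)) < 1e-12
    assert abs(T.imag + y / (0.25 + y ** 2)) < 1e-12
    print(y, T.real, T.imag)
```

Both components shrink like 1/y for large y, which is the "infinite number of critical temperatures approaching zero" above.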
An interesting question is whether the zeros of zeta correspond to critical values of the Kähler coupling strength, with an interpretation as inverse critical temperatures?!
But what about the poles of zeta_F at negative values s=-n, n=1,2,...? They would correspond to negative temperatures T=-1/n. No problem! In p-adic thermodynamics the p-adic temperature has just these values if one defines the p-adic Boltzmann weight via exp(-E/T) → p^(-E/T), with E=n the conformal weight!! The condition that the weight approaches zero requires negative T! Trivial poles would correspond to p-adic thermodynamics and non-trivial poles to ordinary real thermodynamics!
This would be a marvellous connection between TGD and number theory. Riemann zeta would code for basic TGD - and of course, all of fundamental physics! ;-)
It would be interesting to look at the behavior of the inverse of the complex temperature defined by the zeros of zeta and its pole: this for the fermionic partition function.
I have assumed that the Kähler coupling strength is real, but one might - in the spirit of electric-magnetic duality - consider also complex values. This would allow one to consider the identification of the poles of the fermionic zeta as the values of 1/alpha_K. Just for fun, of course!
The trivial poles would correspond to p-adic temperatures T=-1/n in the convention defined by a strict formal correspondence of the p-adic Boltzmann weight with the real Boltzmann weight. I have earlier defined the p-adic temperature as T=1/n.
The inverse of the real critical temperature corresponds to 1/alpha_K=T=1 (the pole of Riemann zeta), the p-adic temperatures to T=1/n, and the inverses of the complex temperatures to the non-trivial zeros of zeta, so that alpha_K approaches zero at the limit y→ infty. This could have an interpretation as asymptotic freedom, since alpha_K would go to zero. In the infrared, 1/alpha_K would approach the lowest zero 1/2+iy, y≈14.1, so that alpha_K does not diverge anywhere.
A very naive guess would be that the real or imaginary part of some non-trivial zero corresponds to the fine structure constant: this guess might of course be wrong. A first estimate shows that this cannot be the case very accurately. The smallest value of alpha_K would correspond to 1/14: color coupling strength?
One obtains also 1/alpha_K ≈ 127 for one of the zeros - the inverse fine structure constant at the electroweak scale. Could the values of 1/alpha_K be identified as imaginary parts of zeros of zeta and assigned with p-adic length scales?
The magnetic coupling would correspond to the real part and be equal to -1/2, -1, and n=1,2,3,.... The Kähler electric coupling would have values vanishing for the real zeros and the pole, and equal to the imaginary part of a zero on the critical line. Does this make any sense? Difficult to say!
If this crazy conjecture makes sense, then both the super-symplectic conformal weights and the complex inverse of the Kähler coupling strength would have the poles of zeta_F as their value spectrum. The different values of the zeros of zeta could in turn naturally correspond to a number theoretical coupling constant evolution, with the values of the coupling strength associated with different algebraic extensions of rationals.
At 1:16 PM,
Ulla said...
http://www.nature.com/ncomms/2015/151117/ncomms9818/full/ncomms9818.html
The new way to look at symmetry and deformation. Compare to
http://scitation.aip.org/content/aip/magazine/physicstoday/article/68/11/10.1063/PT.3.2980
and the older
http://scitation.aip.org/content/aip/magazine/physicstoday/article/62/1/10.1063/1.3074260
and http://motls.blogspot.fi/2015/11/fq-hall-effect-has-vafa-superseded.html .
Moreover, the big strength of Chern-Simons and topological field theories is that you
may put them on spaces of diverse topologies so even the weaker topological claim about
the AdS space can't be considered an absolute constraint for the theory.
What is your opinion of AdS space, other than that you don't understand it :) How then to interpret results gained with it?
At 6:16 PM,
[email protected] said...
Thank you. A lot of reading. I hope that I have time. The notion of distortion is new to me. It looks interesting. I did not understand what antisymmetry in this context could mean.
AdS is a mathematical representation of holography but with too small a conformal symmetry. One cannot actually forget that space-time is 10-D in it, and this makes condensed matter applications questionable. For instance, in applications one must assume that the additional dimensions are large. This makes no sense. One cannot take just some features of the system and forget things like the actual space-time dimension. This is the typical left-brainy thinking plaguing theoretical physics today.
One could see AdS as an alternative description of a 2-D conformal theory with 2-surfaces imbedded in a 10-D spacetime with boundary. This is a purely mathematical approach, and personally I see the attempts to describe condensed matter as 10-D blackholes as a horrible waste of time. AdS has made no breakthroughs where it has been applied. There is a diplomatically incorrect bread-and-butter view about this, and I cannot prevent my alter ego from stating it;-): string theory was a failure, but string theorists have a lot of methods, and they want to apply them and also receive funding for this activity: why not condensed matter!
To get something better, one must generalise the notion of conformal symmetry from 2-D to 4-D. The light-cone boundary allows a hugely extended superconformal symmetry and supersymplectic symmetry. Now the 10-D space is replaced with a 4-D space-time surface in M^4xCP_2, and also the twistors essential for conformal invariance enter the game. Holography becomes strong form of holography.
In holography, 3-D surfaces would code for 4-D physics. Now 2-D partonic 2-surfaces and string world sheets do it. Strings are of course essential also now, and in TGD inspired quantum biology and thermodynamics of consciousness the Hagedorn temperature seems to be a key player. I must also mention the magnetic flux tubes, which are everywhere.
If I were a dictator or Ed Witten, it would take 5 minutes and people would be busily doing TGD;-).
At 8:08 AM,
Ulla said...
https://books.google.fi/books?id=hyx6BjEX4U8C&pg=PR9&hl=sv#v=onepage&q&f=false
A Quantum Approach to Condensed Matter Physics, by Philip L. Taylor, Olle Heinonen, 2002
At 8:14 AM,
Ulla said...
"AdS is a mathematical representation of holography but with too small conformal
symmetry. One cannot actually forget that space-time is 10-D in it and this makes
condensed matter applications questionable."
This I don't get. Condensed matter is not in 10-D?
At 8:22 AM,
Ulla said...
This is a book, http://users.physik.fu-berlin.de/~kleinert/kleiner_reb1/pdfs/1.pdf gauge
fields in condensed matter
At 9:56 AM,
Ulla said...
"The most promising support of this theory is the AdS/CFT correspondence [25], which explicitly connects via a one-to-one correspondence the framework of a 5D string theory in anti-de Sitter space with a conformal quantum field theory on the 4D boundary."
Maldacena, J., Adv. Theor. Math. Phys. 2 (1998) 231;
Maldacena, J., Int. J. Theor. Phys. 38 (1999) 1113;
Petersen, J. L., Int. J. Mod. Phys. A 14 (1999) 3597;
+ additional introductions to and overviews of the AdS/CFT
+ http://arxiv.org/abs/hep-th/0309246
At 10:33 AM,
[email protected] said...
Condensed matter is certainly not 10-D: this is what I am saying. String theorists however want some use for their methods;-)! So it is not so big a deal to decide that it is 10-D! A lot of papers and a long curriculum vitae: this is after all more important than physics;-).
Condensed matter gives a lot of applications for high-level mathematical physics: I have been developing them for more than a decade.
I have explained again and again that the problem of the string theory approach is that the realisation of holography is doomed to fail because the conformal invariance is too restricted. The extended conformal invariance of TGD requires 4-D space-time and 4-D Minkowski space. This is physics.
Maldacena has done nice mathematical work, but unfortunately it has very little to do with physics of any kind.
10/22/2015 - http://matpitka.blogspot.com/2015/10/could-one-applythermodynamical.html#comments
Could one apply the thermodynamical approach of England in TGD framework?
It turns out to be possible to gain amazing additional insights about the TGD-inspired view of Life and Consciousness by generalizing England's approach discussed in the previous posting. Several puzzling coincidences find an explanation in the thermodynamical framework, and the vision about the solar system as a living quantum coherent entity gains additional support.
1. The situation considered in England's approach is a system - say a biomolecule - in a heat bath, so that energy is not conserved due to the transfer of energy between reactants and heat bath.
2. The basic equation is the equilibrium condition for the reaction i→ f and its time reversal f*→ i*. The initial and final states can be almost anything allowing a thermodynamical treatment: states of a biomolecule or even a gene and its mutation. The ratio of the rates for the reaction and its time reversal is given by the ratio of the Boltzmann weights in thermal equilibrium:

R(i→ f)/R(f*→ i*) = R, R = e^(-(E_i-E_f)/T) .
E_i and E_f denote the energies of the initial and final states. This formula is claimed to hold true even in non-equilibrium thermodynamics. It is important that the ratio of the rates does not depend at all on the various coupling constant parameters. The equilibrium condition must be modified if the initial and final states are fermions, but it is assumed that the states can be described as bosons. Note that in a heat bath even fermion number need not be conserved.
3. If the energy eigenstates are degenerate, the ratio R of Boltzmann factors must be modified to include the ratio of state degeneracies: R → (D(E_i)/D(E_f)) × e^(-(E_i-E_f)/T). This generalization is essential in the sequel.
One can imagine two possible reasons for the presence of exponentially large degeneracy factors D(E_i) compensating the Boltzmann weights. The first reason is that for heff=n× h an n-fold degeneracy is present, due to the n-fold covering of the space-time surface reducing to a 1-fold covering at the ends of the CD. A second possible reason is that the basic objects are magnetic flux tubes, modellable as strings with an exponentially increasing density of states. These mechanisms could quite well be one and the same.
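The role of the degeneracy factor in the rate formula can be illustrated with a toy master equation (my sketch with hypothetical energies and degeneracies; I use the sign convention under which populations relax to D(E)·exp(-E/T)):

```python
# Two-state master equation: rates chosen with
# R(i->f)/R(f->i) = (D_f/D_i) * exp(-(E_f - E_i)/T)
# drive any initial population to the degeneracy-weighted Boltzmann weights.
import math

E = {"i": 1.0, "f": 2.5}   # energies, arbitrary units (hypothetical values)
D = {"i": 2.0, "f": 8.0}   # state degeneracies (hypothetical values)
T = 1.3

R_if = 1.0                 # one rate may be chosen freely
R_fi = R_if * (D["i"] / D["f"]) * math.exp((E["f"] - E["i"]) / T)

p = {"i": 1.0, "f": 0.0}   # start far from equilibrium
dt = 1e-2
for _ in range(20000):     # relax dp/dt by forward Euler
    flow = R_if * p["i"] - R_fi * p["f"]
    p["i"] -= flow * dt
    p["f"] += flow * dt

w = {k: D[k] * math.exp(-E[k] / T) for k in E}
Zsum = sum(w.values())
for k in E:
    assert abs(p[k] - w[k] / Zsum) < 1e-6
print(p)
```

The point of the toy model: a large enough D(E_f) can outweigh the Boltzmann suppression of a higher-energy state, which is exactly the compensation mechanism invoked above.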
Consider now the basic idea inspired by this formula in TGD framework.
1. Since magnetic flux tubes are key entities in TGD inspired quantum biology, stringy dynamics suggests itself strongly. The situation thus differs dramatically from the standard biochemical situation because of the presence of dark matter at magnetic flux tubes, to which one can assign fermion-carrying strings connecting partonic 2-surfaces defining correlates for particles in a very general sense.
2. The key aspect of stringy dynamics is the Hagedorn temperature. Slightly below the Hagedorn temperature the density of states factor, which increases exponentially, compensates for the Boltzmann factor. The Hagedorn temperature is given by T_Hag = (6^(1/2)/2π) × (1/α')^(1/2), where α' is the string tension parameter. In superstring models the value of the string tension is huge, but in the TGD framework the situation is different. As a matter of fact, the temperature can be rather small and even in the range of physiological temperatures.
3. What makes T_Hag so special is that in the equilibrium condition a reaction and its reversal can have nearly the same rates. This could have profound consequences for life - and even more, make it possible.
In ZEO-based quantum measurement theory and theory of consciousness, time reversal indeed plays a key role: self dies in the state function reduction to the opposite boundary of CD and experiences re-incarnation as a time-reversed self. This process is an essential element of memory, intentional action, and also remote metabolism, which all rely on negative energy signals travelling to the geometric past, assignable to time-reversed sub-selves (mental images). The above formula suggests that intelligent life emerges near T_Hag, where time-reversed selves are generated at a high rate, so that the system remembers and pre-cognizes the geometric future as it sleeps, making memory and planned action possible.
4. String tension cannot be determined by the Planck length, as in string models, if it is to be important in biology. This is indeed the case in TGD based quantum gravity. The gravitational interaction between partonic 2-surfaces is mediated by fermionic strings connecting them. If string tension were determined by the Planck length, only gravitational bound states of size of order Planck length would be possible. The solution of the problem is that the string tension for gravitational flux tubes behaves like 1/heff^2.
In the TGD framework, string tension can be identified as an effective parameter in the expression of Kähler action as a stringy action for a preferred extremal, strongly suggested by strong form of holography (SH), allowing the description of the situation either in terms of fermionic strings and partonic 2-surfaces or in terms of the interiors of space-time surfaces and Kähler action. The 1/heff^2 dependence can be derived from strong form of holography by assuming electric-magnetic duality for the Kähler form and using the fact that the monopoles associated with the ends have the same magnetic and electric charges.
5. The discussion of the analog of Hawking radiation in the TGD framework led to an amazing prediction: the TGD counterpart of the Hawking temperature turns out to be, in the case of the proton, very near to the physiological temperature if the big mass is the solar mass (see this). This suggests that the entire solar system should be regarded as a quantum coherent living system. This is also suggested by the general vision about EEG. Could the Hawking temperature be near to the Hagedorn temperature but below it?
One can make this vision more detailed.
1. In ZEO the notion of heat bath requires that one considers the reactants as subsystems. The basic mathematical entity is the density matrix obtained by tracing over the entanglement with the environment. The assumption that dark matter is in thermal equilibrium with ordinary matter can be made but is not absolutely crucial. The reactions transforming visible photons to dark photons should take care of the equilibrium. One could even assume that the description applies in the case of negentropic entanglement, since thermodynamical entropy is different from entanglement entropy, which is negative for negentropic entanglement.
2. In TGD-inspired quantum biology, one identifies the gravitational Planck constant introduced by Nottale with heff=n× h. The idea is simple: as the strength of the gravitational interaction becomes so strong that the perturbation series fails to converge, a phase transition increasing the Planck constant takes place. hgr=GMm/v0= heff=n× h implies that v0/c<1 becomes the parameter defining the perturbative expansion. hgr is assigned with the flux tubes mediating gravitational interaction, and one can say that gravitons propagate along them.
Note that this assumption makes sense for any interaction - say for the Coulomb interaction in heavy atoms: this assumption is indeed made in the model of leptohadrons (see this), predicting colored excitations of leptons lighter than the weak bosons. This leads to a contradiction with the decay widths of weak bosons unless the colored leptons are dark. They would be generated in heavy ion collisions when the situation is critical for overcoming the Coulomb wall.
The cyclotron energy spectrum of dark particles at magnetic flux tubes is proportional to hgr/m and thus does not depend on the particle mass, being universal. In living matter cyclotron energies are assumed to be in the energy range of bio-photons, which includes visible and UV energies, and this gives a constraint on hgr if one makes reasonable assumptions about the strengths of the magnetic fields at the flux tubes (see this). Bio-photons are assumed to be produced in the transformation of dark photons to ordinary photons. Also the (gravitational) Compton length is independent of the particle mass, being equal to Lgr=GM/v0: this is crucial for macroscopic quantum coherence at gravitational flux tubes.
3. The basic idea is that Hawking radiation in the TGD sense is associated with all magnetic flux tubes mediating gravitational interaction between a large mass M, say the Sun, and a small mass m of, say, an elementary particle. How large m can be must be left open. This leads to a generalization of
Hawking temperature (see this) assumed to make sense for all astrophysical objects at the flux
tubes connecting them to external masses:
T_GR = (hbar/2π) × (GM/R_S^2) = hbar/(8π GM).

For the Sun, with Schwarzschild radius R_S = 2GM ≈ 3 km, one has T_GR = 3.2× 10^-11 eV.
The Planck constant is replaced with hgr=GMm/v0= heff=n× h in the defining formula for the Hawking temperature. Since the Hawking temperature is proportional to the surface gravity of the blackhole, one must replace the surface gravity with that at the surface of the astrophysical object with mass M, so that the radius R_S=2GM of the blackhole is replaced with the actual radius R of the astrophysical object in question. This gives T_Haw = (m/(8π v0)) × (R_S/R)^2.
The amazing outcome is that for the proton the estimate for the resulting temperature, for M the solar mass, is 300 K (27 C), near the room temperature crucial for Life!

Could the Hagedorn temperature correspond to the highest temperature at which life is possible - something like 313 K (40 C)? Could it be that the critical range of temperatures for life is defined by the interval [T_Haw, T_Hag]? This would require that T_Haw is somewhat smaller than T_Hag. Note that the Hawking temperature contains the velocity parameter v0 as a control parameter, so that the Hawking temperature could be controllable. Of course, also T_Haw=T_Hag can be considered. In this case the temperature of the environment would be different from that of the dark matter at flux tubes.
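One can also turn the formula T_Haw = (m/(8π v0))(R_S/R)^2 around and ask which value of the control parameter v0 puts the proton's T_Haw at 300 K for the solar mass (my back-of-envelope numbers, not from the post; units c = hbar = k_B = 1 with energies in eV):

```python
# Solve T_Haw = (m/(8*pi*v0)) * (R_S/R)^2 for v0 at T_Haw = 300 K.
import math

m_p = 938.272e6          # proton mass in eV
R_S = 2.95               # Schwarzschild radius of the Sun, km
R_sun = 6.957e5          # solar radius, km
T_300K = 300 * 8.617e-5  # 300 K in eV, ~0.0259 eV

v0 = (m_p / (8 * math.pi * T_300K)) * (R_S / R_sun) ** 2
print(v0)  # ~0.026, i.e. v0/c of order 1/40
```

So a dimensionless v0 of a few percent of c would do the job, illustrating that v0 indeed acts as a tuning knob for T_Haw in this formula.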
4. The condition T_Haw ≤ T_Hag allows one to pose a bound on the value of the effective string tension:

(α')^(-1/2) ≥ (m/(4× 6^(1/2) v0)) × (R_S/R)^2 .
For details see the article Jeremy England's vision about life and evolution: comparison with TGD
approach. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 3:50 AM
10/21/2015 - http://matpitka.blogspot.com/2015/10/jeremy-englands-vision-aboutlife.html#comments
Jeremy England's vision about Life
We had an intensive discussion with my son-in-law Mikko about the work of Jeremy England. The article of the link is probably the most aggressive hyping I have ever seen, but this should not lead one to think that mere hype is in question. There is also another, not so heavily hyped popular article. The material at the England lab homepage gives a good view of the work of England for those who cannot tolerate hyping. England's work is indeed very interesting also from the TGD point of view, although it is based on standard physics.
Basic ideas of England's theory
I try first to summarize England's vision.
1. Non-equilibrium thermodynamics (NET) is the starting point. NET has for decades been the theoretical framework underlying the attempts to understand living matter using the principles of self-organization theory. Living matter is never an isolated system: dissipation would take it to a totally dead state in this case - nothing would move. Water in a pond when there is no wind is a good example.
Self-organization requires an external energy feed - gravitational potential energy liberated in water flowing in a river, or electric power fed to the hot plate below a teapot. This energy feed drives the system to a non-stationary state far from thermal equilibrium. Dissipation polishes out all details and leads to asymptotic spatio-temporal self-organization patterns: the flow in a river and the convection in a heated teapot. With a high enough energy feed, chaos emerges: a waterfall, or the boiling of the teapot.
2. The basic hypothesis of England is that evolution means an increase in the ability to dissipate. This looks intuitively rather obvious. The evolving system tends to get into resonance with the energy feed by oscillating with the same frequency, so that the energy feed - and therefore also dissipation - becomes maximal. The basic rule is simple: choose the easy option, ride on the wave rather than fighting against it! For instance, the emergence of photosynthesis means that the systems we call plants become very effective in absorbing the energy of sunlight. In this framework essentially all systems are alive to some degree.
Dissipation means generation of entropy. Evolution of life and conscious intelligence would mean maximal effectiveness in the art of producing disorder. Now I am perhaps exaggerating. One should speak about the "system's maximal ability to transfer entropy out of it": life is not possible without paper baskets. One could argue that the development of civilization during the last decades demonstrates convincingly that evolution indeed generates systems producing disorder at a maximal rate.
One could argue that the definition is too negative. Living matter is conscious and there is
genuine conscious information present. The fact is that evolution involves a continual increase of
conscious information: the exponential explosion of science is the best proof for this. England's
vision says nothing about it. Something is missing.
It is however quite possible to imagine that the principle of maximal entropy generation is true, and that the increase of the ability to produce entropy is implied by some deeper principle allowing one to speak about living matter as something tending to increase its conscious information resources. To formulate this idea one needs a theory of consciousness; thermodynamics is not enough.
3. England has a further idea. The evolution of life is not climbing to Mount Everest but coming down from it. Life emerges spontaneously. This is definitely in conflict with the standard wisdom, in particular with the thermodynamical belief in the thermal death of the Universe as all gradients disappear. Darwinian evolution would be a special case of a more general phenomenon, which could be called dissipation driven adaptation (DDA). I made a head-on collision with this principle in a totally different framework by starting from the quantum criticality of TGD: it took time to fully realize that indeed, evolution could be seen as a sequence of phase transitions in which a certain infinite-dimensional symmetry was spontaneously broken to become just the same symmetry but in a longer scale!
Standard thermodynamics predicts the heat death of the Universe as all gradients gradually disappear. This prediction is problematic for England's argument, which suggests that differentiation occurs instead of homogenization. Here the standard view about space-time might be quite too simplistic to overcome the objection. In TGD, many-sheeted space-time comes to the rescue.
Here is an example of England's argumentation. It seems intuitively clear that replication increases entropy (it is not however clear whether just splitting into pieces would be an even more effective manner to increase entropy!). This would suggest that DDA forces the emergence of replication. Very effective dissipators able to replicate would increase the total effectiveness of dissipation and be the winners. The proposal to be tested is that the bacterial mutations which are the best replicators are also the best dissipators.
What is missing from England's theory?
What is missing from England's theory? The answer is the same as the answer to the question of what is missing from standard physics.
1. What is the conscious observer - self?
The observer remains an outsider to the physical world in present-day physics - both classical and quantum. Hence one does not have a theory of consciousness and cannot speak about conscious information. Thermodynamics gives only the notion of entropy as a measure of ignorance.
Therefore there is a long list of questions that England's theory does not address. What are the physical correlates of attention, sensory perception, cognition, and emotions relating closely to information, etc.? Is there some variational principle behind conscious existence, and does it imply evolution? Could the second law and DDA be seen as consequences of this variational principle?
England does not say much about quantum theory, since he talks only about thermodynamics, but his hypothesis is consistent with quantum theory. The restriction to thermodynamics allows only a statistical description, and notions like macroscopic quantum coherence are left outside.
2. What is Life?
Again, one has a long list of questions.
What is it to be alive? What distinguishes living from inanimate systems? What is it to die? How general a phenomenon is evolution: does it apply to all matter? Also notions like self-preservation and death are present only implicitly, in an example about a population of wine glasses whose members might gradually evolve to survive in an environment populated by opera sopranos.
One can also ask other kinds of questions. What really happens in replication? What is behind the genetic code? Etc...
England is a spiritual person and has made clear that the gulf between science and spirituality is something which bothers him. England even has the courage to use the word "God". Therefore it sounds somewhat paradoxical that England avoids using concepts related to consciousness and life. This is however the only option if one does not want to lose academic respectability.
How England's theory relates to TGD?
It is interesting to see whether England's vision is consistent with the TGD inspired theory of consciousness, which can also be seen as a generalization of quantum measurement theory, achieved by making the observer a part of the quantum physical world. In the TGD framework several new principles are introduced, and they relate to the new physics implied by the new view about space-time.
1. The new physics involves a generalization of quantum theory by introducing a hierarchy of Planck constants heff =n×h, with the various quantal length and time scales proportional to heff. The heff hierarchy predicts a hierarchy of quantum coherent systems with increasing size scale and time span of memory and planned action. heff, defining a kind of intelligence quotient, labels the levels of a hierarchy of conscious entities.

The heff hierarchy actually labels a fractal hierarchy of quantum criticalities: a convenient analogy is a ball at the top of a ball at the top... The quantum phase transitions increasing heff occur spontaneously: this is the TGD counterpart for the spontaneous evolution in England's theory.
Dark matter is what makes a system alive and intelligent, and the thermodynamical approach can describe only what we see at the level of visible matter.
2. The second key notion is zero energy ontology (ZEO). Physical states are replaced by events, one might say. An event is a pair of states: initial state and final state. In ZEO these states correspond to states with opposite total conserved quantum numbers: positive and negative energy states. This guarantees that ZEO based quantum theory is consistent with the fundamental conservation laws and the laws of physics as we understand them, although it allows non-determinism and free will.

Positive and negative energy states are localized at the opposite boundaries of a causal diamond (CD). The Penrose diagram - the diamond symbol - is a good visualization and enough for getting the idea.
State function reduction (SFR) is what happens in quantum measurement. The first SFR leads to one state in a set of states determined once the measurement is characterized. One can only predict the probabilities of the various outcomes. Repeated quantum measurements leave the state as such. This is the Zeno effect - a watched kettle does not boil.
In ZEO, something new emerges. The SFR can be performed at either boundary of the CD. SFR can occur several times at the same boundary, so that the state at it does not change. The state at the opposite boundary however changes - one can speak of the analog of unitary time evolution - and that boundary also moves farther away. The CD therefore increases, as does the temporal distance between its tips.
The interpretation is as follows. The sequence of reductions at a fixed boundary corresponds to a conscious entity, a self. The self experiences the sequence of state function reductions as a flow of time. Sensory experience and the thoughts, emotions, etc. induced by it come from the moving boundary of the CD. The constant, unchanging part of the self which meditators try to experience corresponds to the static boundary - the kettle that does not boil.
The self dies in the first reduction to the opposite boundary of the CD. The self however re-incarnates. The boundaries of the self change their roles, and the geometric time, identified as the distance between the tips of the CD, now increases in the opposite direction. A time-reversed self is generated.
3. The third principle is the Negentropy Maximization Principle (NMP), stating roughly that the information content of consciousness is maximal. The weak form of NMP states that the self has free will and can also choose a non-maximal negentropy gain. The basic principle of ethics would be "Increase negentropy". p-Adic mathematics is needed to construct a measure for conscious information, and the notion of negentropic entanglement (NE) emerges naturally as algebraic entanglement.
The negentropy to which NMP refers is not the negative of the thermodynamical entropy describing an outsider's lack of information about the state of the system. This negentropy characterizes the conscious information assignable to negentropic entanglement (NE), characterized by algebraic entanglement coefficients, with a measure identified as a number theoretic variant of Shannon entropy. Hence NMP is consistent with the second law implied by the mere non-determinism of SFR.
NMP demands that the self, during the sequence of reductions at the same boundary, generates the maximum negentropy gain at the changing CD boundary. If the self fails, it dies and re-incarnates (in the reduction to the opposite CD boundary more negentropy is generated). Selves do not want to die and usually do not believe in re-incarnation, and therefore do their best to avoid what they see as mere death. This is the origin of self-preservation. The self must collect negentropy somehow: gathering negentropic sub-selves (mental images) is one way to achieve this. Plants achieve this by photosynthesis, which means the generation of negentropy and its storage in various biomolecules. Animals are not so saintly and simply eat plants and even other animals. We are all negentropy thieves.
Reincarnation also means an increase of heff and getting to a higher level in the hierarchy, and it occurs unavoidably. As in England's theory, evolution occurs spontaneously: it is not climbing Mount Everest but just dropping down.
4. England says "Some things we consider inanimate actually may already be 'alive'." This conforms with the TGD view. Even elementary particles could have a self: it is however not clear whether their SFR sequences contain more than one reduction at a fixed boundary - necessary for having a sense of the flow of time. Elementary particles would even cognize: in adelic physics every system has both real and p-adic space-time surfaces as its correlates. It can even happen that a system has only p-adic space-time correlates but not the real one: this kind of system would be only an imagination of a real system! This is one of the most fascinating implications of the strong form of holography, which follows from the strong form of General Coordinate Invariance forced by the new view about space-time.
Clearly, in TGD the notion of evolution generalizes from the biological context to the whole of physics. One can speak about p-adic evolution and about evolution as the increase of heff. The most abstract formulation is number theoretical: evolution corresponds to the increase of the complexity of the extension of rationals to which the parameters characterizing space-time surfaces belong.
5. Does DDA emerge in the TGD framework? NMP demands a lot of SFRs - also at the level of visible matter. The non-determinism of SFR alone means a loss of knowledge about the state of the system and an increase of thermodynamical entropy, so that living systems would generate entropy very effectively also in the TGD Universe at the level of visible matter. If one believes that the second law and NET imply DDA, as England argues, then also TGD implies it at the level of visible matter. For dark matter the situation is different, since the outcome of SFR is not random anymore.
Seen from the TGD perspective, England's vision misses what is essential for life - the generation of phases of matter identifiable as the mysterious dark matter.
6. England talks about God. In a theory of consciousness predicting an infinite self hierarchy, it is easy to assign the attribute "divine" to the levels of consciousness above a given level of the hierarchy. Personally I have nothing against calling the Entire Universe "God".
One could give NMP the role of God. For the strong form of NMP, SFR would be almost deterministic except for ordinary matter, for which entanglement is not algebraic and is therefore entropic: the universe would be the best possible one in the dark sectors and the worst one in the visible matter sector - Heaven and Hell! The weak form of NMP makes possible even more effective generation of negentropy than the strong form but also allows the self to do stupid things and even SFRs with a vanishing negentropy gain: the outcome is a state with no entanglement (the system is in a very literal sense alone in this state). The world in the dark matter sectors is no longer the best possible one, but it can become better and does so in a statistical sense.
7. Replication is a crucial aspect of being alive. England argues that DDA makes it possible to understand its emergence but says nothing about its mechanism. In the TGD framework replication can be understood as an analog of particle decay - say photon emission by an electron. This however requires a new notion: the magnetic body. In Maxwell's theory one cannot assign any field identity to a physical system, but the TGD view about space-time forces one to assign to a given system its field/magnetic body. Replication occurs primarily at the level of the magnetic body carrying dark matter as large heff phases. The magnetic body replicates, and ordinary visible matter self-organizes around the resulting copies of it. The dynamics of dark matter would also induce DNA replication, transcription, and mRNA translation, and there are some indications that it is indeed
"dark DNA" (dark proton sequences having DNA, RNA, amino-acids, and tRNA as biochemical
counterparts), which determines what happens in transcription.
For details see the article Jeremy England's vision about life and evolution: comparison with TGD
approach. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 1:48 AM
10/17/2015 - http://matpitka.blogspot.com/2015/10/quantum-criticality-and-proteinfolding.html#comments
Quantum Criticality and Protein Folding
In Thinking Allowed Original there was a link to a new observation about the protein folding process. During folding, some proteins hold single building blocks in shapes that were thought to be impossible to find in stable form. The stable shapes contained parts that were trapped like mosquitoes in amber.
A concrete TGD-based model relies on the general ideas of TGD inspired quantum biology.
1. Biomolecules containing aromatic rings play a fundamental role. All DNA nucleotides contain them, and there are four amino acids which also have them. trp and phe are of special importance and form a pair structurally analogous to a base pair in a DNA strand. The rings are assumed to carry the analog of a supra current and to be in, or at least be able to make a transition to, a state with large heff. The delocalization of electron pairs in an aromatic ring could be a signature of heff/h > 1.
2. trp-phe pairing would be responsible for the information molecule-receptor pairing. The information molecule and the receptor would be at the ends of flux tubes serving as communication lines, and the attachment of the info molecule to the receptor would fuse the two flux tubes into a longer one. After that, communication would become possible as dark photon signals and dark supra currents. The formation of the info molecule-receptor complex would be like clicking an icon that generates a connection between computers in a network. Info molecules would generate the communication channels - they would not be the signals. This is the distinction from standard neuroscience.
3. All quantum critical phenomena involve the generation of large heff phases. Folding emerges or disappears at quantum criticality (QC), possible in a certain temperature range of width about 40 K and depending on pH. The flux tubes associated with phe and trp, containing aromatic rings carrying "supra current", would become dark (either h → heff, or an already large heff increases further) and thus much longer, reconnect temporarily, and force phe and trp into close contact after the reverse transition inducing shortening. This is a general mechanism making biomolecules able to find each other in what looks like molecular soup in the eyes of a standard biochemist. The contacts between the amino-acids phe and trp formed in this manner are structurally identical with the hydrogen bonding between the members of DNA base pairs, and they would fix the final folding pattern to a high degree.
There was also a link to a very interesting article about possible topological phenomena related to protein folding. The authors are Henrik and Jakob Bohr (relatives of the great Niels Bohr?) and Sören Brunak. The article is very easy to read and explains the basic topological concepts like winding in a simple manner. The proposal is that excitations of so-called wringing modes of proteins are involved in the generation and disappearance of protein folding. Excitation of wringing modes twisting the protein (think about how one wrings water from a wet cloth) would make the cold denatured (CD) state unstable and transform it into the stable folded (F) state. In the same manner, their excitation would transform the stable hot denatured (HD) state to the folded state. Wringing modes could be excited by radiation.
In the TGD framework, the folding phase diagram CD-F-HD could also be understood in terms of quantum criticality (QC). Perhaps the simplest option is that the transitions CD-F and HD-F involve the generation of critical states leading to long range correlations (large heff) inducing the folding pattern. Absorption of photons to wringing modes would induce the criticality, and the folding would proceed by the mechanism discussed above.
For background see the chapter More Precise TGD Based View about Quantum Biology and Prebiotic Evolution or the article with the same title. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:50 PM 3 comments
3 Comments:
At 12:47 AM,
Leo Vuyk said...
About the folding aspects of the micro world, I would suggest that we are dealing with rotations in steps of 90 degrees of the internal quantum structure. See:
https://www.flickr.com/photos/93308747@N05/albums/72157633110734398
At 12:31 PM, Ulla said...
https://www.newton.ac.uk/files/seminar/20120905115012101-153268.pdf
At 1:29 AM,
[email protected] said...
The DNA folding mechanism looks very interesting, since it also distinguishes between prokaryotes and eukaryotes with a much larger genome. Histones are proposed to contain a kind of figure eight giving rise to 10 nm sized, very loopy but un-knotted balls, if I understood correctly. Does the reconnection of a closed circular protein loop, through which DNA goes and returns back, give rise to these figure eights? Does the re-connection giving rise to a figure eight involve hydrogen bonding between the aromatic rings of trp and phe, so that it would be analogous to base pairing in DNA?
10/16/2015 - http://matpitka.blogspot.com/2015/10/further-progress-in-numbertheoretic.html#comments
Further progress in the number theoretic anatomy of zeros of zeta
I wrote some time ago an article about number theoretical universality (NTU) in the TGD framework and ended up with the conjecture that the non-trivial zeros of zeta can be divided into classes C(p) labelled by primes p such that for the zeros y in a given class C(p) the phases p^(iy) are roots of unity. The impulse leading to the idea came from an argument of Dyson referring to the evidence that the Fourier transform of the locus of the non-trivial zeros of zeta is a distribution concentrated on powers of primes.
There is a very interesting blog post by Mumford, which led to a much more precise formulation of the idea and an improved view about the Fourier transform hypothesis: the Fourier transform must be defined for all zeros, not only the non-trivial ones, and the trivial zeros give a background term making it possible to understand the properties of the Fourier transform better.
Mumford essentially begins from Riemann's "explicit formula" in von Mangoldt's form.
∑_p ∑_{n≥1} log(p) δ_{p^n}(x) = 1 - ∑_k x^(s_k - 1) - 1/[x(x^2 - 1)],

where p denotes a prime and s_k a non-trivial zero of zeta. The left hand side represents the distribution associated with the powers of primes. The right hand side contains a sum over cosines,

∑_k x^(s_k - 1) = 2 ∑_k cos(log(x) y_k)/x^(1/2),

where y_k is the imaginary part of the non-trivial zero s_k. Apart from the factor x^(-1/2), this is just the Fourier transform over the distribution of the zeros.

There is also a slowly varying term 1 - 1/[x(x^2 - 1)], which has an interpretation as the analog of the Fourier transform term but with the sum taken over the trivial zeros of zeta at s = -2n, n > 0. The entire expression is analogous to a "Fourier transform" over the distribution of all zeros.
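As a quick numerical sanity check (my own illustration, not from the post): summing x^(s-1) over the trivial zeros s = -2n gives the geometric series ∑_{n≥1} x^(-2n-1), whose closed form is exactly the 1/[x(x^2-1)] term above.

```python
def trivial_zero_sum(x, n_terms=200):
    """Partial sum of x^(s-1) over the trivial zeros s = -2n, n >= 1,
    i.e. the geometric series sum of x^(-2n-1)."""
    return sum(x ** (-2 * n - 1) for n in range(1, n_terms + 1))

def closed_form(x):
    """The slowly varying term 1/[x(x^2-1)] in the explicit formula."""
    return 1.0 / (x * (x * x - 1.0))

# The two agree to high precision for any x > 1.
for x in (1.5, 2.0, 5.0):
    print(x, trivial_zero_sum(x), closed_form(x))
```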
Therefore the distribution for the powers of primes is expressible as a "Fourier transform" over the distribution of both trivial and non-trivial zeros, rather than only over the non-trivial zeros as suggested by the numerical data to which Dyson referred. The trivial zeros give a slowly varying background term, large for small values of the argument x (poles at x = 0 and x = 1 - note that also p = 0 and p = 1 appear effectively as primes), so that the peaks of the distribution are higher for small primes.
The question was how one can obtain this kind of delta function distribution concentrated on the powers of primes from a sum over the terms cos(log(x) y_k) appearing in the Fourier transform of the distribution of zeros. Consider x = p^n. One must get constructive interference. Stationary phase approximation is the notion in terms of which a physicist thinks about this.
The argument was that for a given x = p^n a destructive interference occurs for those zeros for which the cosine does not correspond to the real part of a root of unity as one sums over such y_k: random phase approximation gives more or less zero. To get something non-trivial, y_k must be proportional to 2π n(y_k)/log(p) in the class C(p) to which y_k belongs. If the number of these y_k:s in C(p) is infinite, one obtains a delta function in good approximation, with destructive interference taking care of the other values of the argument x.
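The interference phenomenon is easy to see numerically. The sketch below (my own illustration, not from the post) sums cos(log(x) y_k) over the tabulated imaginary parts of the first ten non-trivial zeros: already with ten zeros the sum is large in magnitude at the prime powers x = 2 and x = 3 (large and negative, matching the minus sign in front of the zeros-sum in the explicit formula), while it roughly cancels at the non-prime-power x = 2.5.

```python
import math

# Imaginary parts of the first ten non-trivial zeros of zeta
# (standard tabulated values, truncated to six decimals).
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
         37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def interference(x):
    """Sum of cos(log(x) * y_k) over the tabulated zeros y_k.

    At prime powers the phases interfere constructively (the sum is
    large in magnitude); at other arguments they roughly cancel."""
    return sum(math.cos(math.log(x) * y) for y in ZEROS)

for x in (2.0, 2.5, 3.0):
    print(f"x = {x}: {interference(x):+.2f}")
```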
The guess that the number of zeros in C(p) is infinite is encouraged by the behaviors of the densities of the primes on one hand and of the zeros of zeta on the other hand. The number of primes smaller than a real number x goes like π(x) = #(primes < x) ∼ x/log(x) in the sense of distribution. The number of zeros along the critical line goes like #(zeros < t) = (t/2π) × log(t/2π) in the same sense.
If the real axis and the critical line have the same metric measure, then one can say that the ratio of the number of zeros to the number of primes in the interval [0, T] behaves roughly like

#(zeros < T)/#(primes < T) = log(T/2π) × log(T)/2π,

so that in the limit T → ∞ the number of zeros associated with a given prime is infinite. This assumption of course makes the argument a poor man's argument only.
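These counting estimates are easy to check numerically. The sketch below (an illustration of the counts quoted above, not from the post) counts primes with a simple sieve and estimates the zero count with the Riemann-von Mangoldt formula, whose leading term is the (t/2π) log(t/2π) quoted above; the ratio of zeros to primes indeed grows with T.

```python
import math

def prime_count(T):
    """pi(T): count primes below T with a basic Eratosthenes sieve."""
    sieve = [True] * T
    sieve[0:2] = [False, False]
    for i in range(2, int(T ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

def zero_count(T):
    """Riemann-von Mangoldt estimate N(T) = (T/2pi) log(T/2pi) - T/2pi + 7/8.

    The post quotes only the leading term (T/2pi) log(T/2pi); the extra
    terms make the estimate accurate already for small T."""
    u = T / (2 * math.pi)
    return u * math.log(u) - u + 7.0 / 8.0

for T in (100, 1000):
    print(T, zero_count(T) / prime_count(T))
```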
See the chapter Unified Number Theoretic Vision of "Physics as Generalized Number Theory" or the
article Could one realize number theoretical universality for functional integral?.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 12:44 AM
3 Comments:
At 3:25 PM, Anonymous said...
Matti, what do you think of J-F Geneste's LENR theory:
https://drive.google.com/file/d/0B1_tFmz65k8BZ3RXejdnZm9ZN3ljZzN4YjZvXzI2dEktUVVV/view?usp=sharing
Sources and background: http://egooutpeters.blogspot.ro/2015/10/all-papers-presented-atairbus-iscmns.html
https://www.linkedin.com/pulse/major-breakthrough-field-energy-jean-francois-geneste
At 8:59 PM,
[email protected] said...
Thank you for the interesting question. I have proposed a TGD-based interpretation for the findings of Urutskoev and Fredericks: http://tgdtheory.fi/public_html/neuplanck/neuplanck.html#darkforces . The traces seem to be often created in the presence of biological material.
To me the experimental findings of Urutskoev, Fredericks, and Gariaev are extremely interesting, since they could be among the first direct detections of dark matter in the TGD sense. I do not like some elements of his theory - say aether. Also the identification of the traces as a very special kind of surfaces is ad hoc, but it shows that Urutskoev is already close to realizing that the traces are not actually tracks of particles but static geometric objects.
The theory of Urutskoev interprets the traces as tracks created by some exotic particles - the identification in terms of ordinary particles or particles of any kind does not work. I would identify them as dark magnetic flux tubes becoming visible in the emulsion. For instance, it takes time for the tracks to appear: the tubes must be present for a long enough time. Particle tracks would be generated immediately.
Dark matter with large h_eff would be associated with the magnetic flux tubes, and the emulsion would generate a kind of "photograph" of the tube. In Gariaev's experiment one also obtains an image consisting of tubular structures identifiable as flux tubes containing dark matter: visible photons transform to dark ones, reflect from the charged dark matter in the flux tube, and transform back to ordinary ones, and the image is created. Now the first step is not present: part of the dark photon radiation from the flux tube is transformed to ordinary photons (the interpretation is as biophotons in biology) and creates the image of the flux tube in the photo-emulsion.
In the TGD framework, dark matter is generated at quantum criticality, which has ordinary criticality (maybe also thermal criticality) as a correlate. In the case of Urutskoev the "powders" were created in an electrical discharge - criticality against dielectric breakdown is in question. Tesla's experiments also involved similar criticality, and I have proposed the generation of dark matter as an explanation for the strange findings.
The TGD model for LENR relies on this picture too: the weak interaction length scale would be scaled up from 10^-17 meters to about the atomic length scale, and weak bosons would be effectively massless below that scale, so that the weak interaction would no longer be weak in this range. A proton shot at the target would emit a dark charged weak boson (W^+) absorbed by the target and make its way to the target as a neutron feeling no Coulomb wall. After that the strong reaction would proceed and lead to cold fusion.
Biofusion could also take place in a different manner: by the generation of dark proton sequences at dark flux tubes as Pollack's exclusion zones are generated. These dark proton sequences - dark nuclei - could transform to ordinary nuclei with some probability, liberating quite a large energy if the nuclear binding energy scales like 1/h_eff! This could occur in the splitting of water and explain the strange properties assigned to Brown's gas, such as the ability to melt metals at low temperatures.
At 9:08 PM, [email protected] said...
Sorry for a stupid error!! Since the author was missing from the beginning, I thought that the theory was by Urutskoev. It was Geneste's theory about the findings of Urutskoev!
It is mentioned that the imbedding of the Lobatchevski plane to Euclidian 3-space fails globally (I think this is just a negative constant curvature surface): corals consist of pieces of this kind of surface. The imbeddability to a non-Archimedean variant of E^3 is mentioned. I do not however think that this applies to the model in which the traces are identified as flux tubes, that is closed tubular surfaces.
10/09/2015 - http://matpitka.blogspot.com/2015/10/blog-post.html#comments
How did life evolve during pre-Cambrian period?
The formulation of a more detailed TGD-inspired vision about how life might have evolved in the TGD Universe during the pre-Cambrian era is a fascinating challenge. In the Cambrian Explosion, a relatively rapid expansion of the Earth's size by a factor of 2 is assumed in the TGD version of the Expanding Earth model. TGD indeed predicts that cosmic expansion takes place in a given scale as rapid jerks rather than continuously as in ordinary cosmology. The key ingredients besides standard facts are the TGD inspired interpretation of the Cambrian Explosion (CE) (see this and this), of dark matter as large heff phases (see this), and the notion of magnetic flux tubes. These provide the TGD view about Pollack's Exclusion Zones (EZs) as key factors in the evolution of life.
I have gathered useful links from the web to build a more detailed version of the TGD vision, and it is perhaps appropriate to give a list of some useful links - they appear also as references. These links might help the reader considerably in getting in touch with the problems involved, and the reader can easily find more.
1. Data related to Mars: Two generations of windblown sediments on Mars; Sedimentary Mars; Liquid flows in Mars today: NASA confirms evidence.
2. Metabolism: Microbial metabolism; Electron transport chain; Metal-eating microbes in African lake could solve mystery of the planet's iron deposits.
3. When did photosynthesis emerge? Ancient rocks record first evidence for photosynthesis that made oxygen; Cyanobacteria.
4. When did oxygenation really occur? Great Oxygenation Event; Mass-Independent Sulfur Isotopic Compositions in Stratospheric Volcanic Eruptions; Neoarchean carbonate-associated sulfate records positive Δ33S anomalies; Great Oxidation Event "a misnomer"; An Oxygen-poor "Boring" Ocean Challenged Evolution of Early Life.
5. The role of iron: Evidence for a persistently iron-rich ocean changes views on Earth's early history.
1. What happened before Cambrian explosion?
The story about the evolution of life is constructed from empirical findings based on certain geological, chemical, and isotope signatures. The study of sediment rocks makes reasonably reliable age determinations possible but involves assumptions about the rate of sedimentation. Water, ice, acids, salt, plants, animals, and changes in temperature contribute to weathering; erosion involves water, ice, snow, wind, waves, and gravity as agents and leads to sedimentation. Also organic material forms sediments, both on land and at ocean floors.
Isotope ratios serve as signatures since they are different in inanimate and living matter, because those for living matter reflect those in the atmosphere and are affected by cosmic rays. The concentrations of various elements are important signatures: to mention only oxygen, nitrogen, sulphur compounds such as sulphide, hydrogen sulphide, and sulphate, iron, and molybdenum.
The story involves great uncertainties and should be taken as just that - a story. In the following, the TGD view about how life evolved before the Cambrian Explosion (CE) about 0.6 gy ago is summarized. The pre-Cambrian part of the TGD inspired story differs dramatically from the official narrative, since only lakes would have been present, whereas the official story assumes oceans and continents. Earth would have been very much like Mars before CE - even its radius would have been essentially the same (half of the recent radius of Earth). This suggests that Mars could teach us a lot about the period before CE. The deviations seem to explain the paradoxical looking aspects of the standard story.
1. Life according to TGD evolved in underground oceans and at the surface of Earth, which contained lakes but no oceans. The lifeforms at the surface of Earth were prokaryotes, whereas the life in the underground oceans consisted of relatively complex photo-synthesizing eukaryotes.
2. The recent data from Mars gives an idea of what the situation at Earth was during CE, since the radius of Earth at that time was very nearly the same as that of Mars now. Evidence for sedimentation and for water near and even at the surface has been provided quite recently. The life at the surface of Earth before CE consisted mainly of prokaryotes and very simple mono-cellular eukaryotes, and something like this is expected at the surface of Mars now.
3. Already around 3.5 gy ago, prokaryotes using sulphate as an energy metabolite were present. Photo-synthesizing cyanobacteria emerged about 3.2 gy ago. They later became the plastids of plant cells responsible for photo-synthesis. The problem of the standard story is that this did not lead to the oxygenation of the hypothetical oceans and a rapid evolution of eukaryotes and multi-cellulars.
In the standard model vision, one can explain the absence of oxygen based life in the hypothetical oceans by the presence of oxygen sinks. It is known that the ancient oceans (shallow oceans, lakes, or ponds in TGD) were oxygen poor and iron rich. The data about Mars - the red planet because of iron rusting - makes it possible to test the feasibility of this hypothesis. The oxygen produced by the cyanobacteria was used in the formation of rusted iron layers giving rise to iron ores. About 1.8 gy ago the formation of rusted iron layers ceased. A possible explanation is that all the iron was used. The ores could also have been generated by bacteria using iron as a metabolite and transforming it to iron oxide. There are however no iron ores after 1.8 gy: did these bacteria lose the fight for survival?
In TGD physics, the Earth's atmosphere remained oxygen poor, since the small lakes could not produce enough oxygen to induce the oxygenation of the atmosphere. The lakes however gradually gained oxygen. First it went to the oxidation of iron.
4. A general belief has been that about 2.4 gy ago the Great Oxidation Event (GOE) occurred. The basic evidence for GOE is from volcano eruptions, which seem to have produced an anomalously small amount of sulphur after 2.4 gy. The reason would have been the formation of sulphate SO4 from atmospheric oxygen and the sulphur emanating from volcanoes.
This evidence has however been challenged by measuring the sulphur anomalies for recent volcanic eruptions. Their sign varies on a time scale of months, changing from positive to negative. It is quite possible that GOE is an illusion.
5. There is also a problem related to the "boring period" 1.8-0.8 gy. It seems that the hypothetical oceans remained still oxygen poor and iron rich. It has also been suggested that the boring period continued up to CE: the first animals after CE could have oxygenated Earth's oceans. In the TGD Universe, GOE is indeed an illusion for the simple reason that oceans did not exist! Life was boring at the surface of Earth from 3.5 gy to 0.6 gy.
6. Life would have evolved in underground seas containing oxygenated water, probably already 3.2 gy ago, making possible photo-synthesis and cellular respiration. Animal cells were formed by eukaryotes with a nucleus carrying the genome engulfing prokaryotes, which later became mitochondria. Plant cells emerged when these eukaryotes engulfed also cyanobacteria, which made photo-synthesis possible. The highly developed eukaryotes burst to the surface as the radius of Earth increased by a factor of two in a geologically short time scale. Oceans containing oxygen rich water were formed. CE can be equated with GOE in the TGD picture.
Plants are divided into green and red algae, a small group of freshwater monocellulars, the glaucophytes, and land plants. Land plants must have emerged after CE. Red algae are multi-cellulars (corals are a representative example). Also green algae can be multi-cellulars, and land plants are thought to have developed from them. An interesting question is whether multi-cellular plants and animals emerged already before CE, as the findings would suggest.
The basic objection against this vision is that photo-synthesis is not possible underground. Did photo-synthesis occur in shallow lakes storing chemical energy that was transferred to the underground seas? This does not seem a plausible option but cannot be excluded. The volcanoes and hydrothermal vents bring water from underground. The water contains ground water and ordinary sea water, which ended up underground in various manners, and also a magmatic component. The geothermal vents and most volcanoes are however associated with the regions where tectonic plates meet and should not have existed before CE.
The TGD-inspired model for Pollack's exclusion zones (EZs) suggests a solution to the problem. The formation of these negatively charged regions of water is induced by solar radiation, by IR radiation at energies which correspond to the metabolic energy quantum, and also at energies corresponding to THz frequencies. The TGD based model proposes that the protons from an EZ become large heff protons at the magnetic flux tubes associated with the EZ. These flux tubes could be quite long and extend to the underground oceans. Dark photons with an energy spectrum containing that of bio-photons could travel along these flux tubes. This suggests that solar radiation transforms partially to dark photons, which travel along the flux tubes to the underground sea and transform to ordinary photons caught by photo-synthesizing cells.
Interestingly, also the temperature of Earth is such that thermal radiation would be in the visible region, and one cannot exclude the possibility that dark photons emerge also from this source. This would also make possible cell respiration and oxygen rich water. The skeptic is of course wondering whether the flux tubes were long enough.
1. The basic idea about dark matter residing at magnetic flux tubes emerged in TGD from Blackman's findings about quantal looking effects of ELF em fields on the vertebrate brain, by assigning them to the cyclotron frequencies of Ca++ ions in the endogenous magnetic field Bend = 0.2 Gauss, which is by a factor 2/5 weaker than the recent magnetic field of Earth, and by assigning a large non-standard value of Planck constant to the flux tubes so that the energies of ELF quanta are above thermal energies.
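The numbers behind this argument are easy to reproduce (a sketch of the arithmetic under the post's assumptions; the quoted heff/h value is an order-of-magnitude illustration, not a claim from the text): the cyclotron frequency of Ca++ in a 0.2 Gauss field lands at about 15 Hz, squarely in the ELF band Blackman studied, while the corresponding photon energy is far below thermal energy at physiological temperature unless heff/h is enormous.

```python
import math

# Physical constants (SI, CODATA values)
E_CHARGE = 1.602176634e-19   # elementary charge, C
H_PLANCK = 6.62607015e-34    # Planck constant, J s
K_B = 1.380649e-23           # Boltzmann constant, J/K
AMU = 1.66053907e-27         # atomic mass unit, kg

def cyclotron_frequency(charge, mass, B):
    """f_c = qB / (2 pi m) for a charge q of mass m in field B (Tesla)."""
    return charge * B / (2 * math.pi * mass)

# Ca++ (charge 2e, mass ~40 u) in Bend = 0.2 Gauss = 2e-5 Tesla
B_END = 2e-5
f_ca = cyclotron_frequency(2 * E_CHARGE, 40 * AMU, B_END)
print(f"Ca++ cyclotron frequency: {f_ca:.1f} Hz")  # ELF range

# Factor by which heff/h must exceed 1 for E = heff * f to reach
# thermal energy at body temperature (~310 K), on the post's hypothesis.
ratio = (K_B * 310) / (H_PLANCK * f_ca)
print(f"required heff/h of order {ratio:.1e}")
```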
2. The value spectrum of the magnetic field at the flux tubes of the "personal" magnetic bodies of organisms contains Bend. Bend could be conserved in evolution, somewhat like the salinity of the ancient (underground) ocean. The flux tubes of Bend would have transformed the photons of solar radiation to dark cyclotron photons, allowing them to travel to the underground sea and transform back to ordinary photons to be absorbed by pre-plant cells. I have proposed that a similar mechanism is at work in the biological body and could explain the reported ability of some people to survive without any obvious metabolic energy feed.
2. How could cellular life have evolved before CE?
In the following I summarize what looks like the most plausible view about the evolution of life in the TGD framework. I first present the basic classification to make reading easier.
2.1 Basic classification of lifeforms
Lifeforms are classified into prokaryotes (no cell nucleus) and eukaryotes (cell nucleus).
1. Prokaryotes are mono-cellular and have no separate cell nucleus. They are divided into bacteria and archaea. Bacteria do not have a genome but only a circular DNA strand, usually accompanied by an almost palindrome. Archaea also have genes. Cyanobacteria are the simplest photo-synthesizing cells: these prokaryotes have been engulfed by eukaryotes to form plant cells containing them as plastids. Plant cells also contain mitochondria, believed to be ancient prokaryotes which have been "eaten" by eukaryotes. Plant cells contain both mitochondria and plastids, whereas animal cells contain only mitochondria.
2. Eukaryotes have a cell nucleus containing the genome. Eukaryotes divide into three kingdoms: animals, plants, and fungi. Fungi can be said to lie between animals and plants: they do not perform photo-synthesis but have cell walls.
2.2 Prokaryote-eukaryote distinction
From the existing data one can conclude that during the pre-Cambrian period only prokaryotes existed at the surface of Earth - presumably in small lakes in the TGD Universe and on ocean floors in the standard Universe. The first photo-synthesizing prokaryotes - cyanobacteria - emerged about 3.2 billion years ago, and their predecessors were prokaryotes extracting metabolic energy from sulphate. Cyanobacteria are able to survive in practically any imaginable environment:
Cyanobacteria are arguably the most successful group of microorganisms on earth. They are the
most genetically diverse; they occupy a broad range of habitats across all latitudes, widespread in
freshwater, marine, and terrestrial ecosystems, and they are found in the most extreme niches such as
hot springs, salt works, and hypersaline bays. Photoautotrophic, oxygen-producing cyanobacteria
created the conditions in the planet's early atmosphere that directed the evolution of aerobic metabolism
and eukaryotic photo-synthesis. Cyanobacteria fulfil vital ecological functions in the world's oceans,
being important contributors to global carbon and nitrogen budgets.
It is therefore natural to assume that cyanobacteria migrated to the underground ocean through pores and fractures at the floors of lakes. They would have fused with pre-eukaryotes having only a cell nucleus but no metabolic machinery, becoming chloroplasts. This would have given rise to the first eukaryotes able to perform photo-synthesis. The primitive prokaryote cells defining pre-mitochondria would also have fused with these pre-eukaryotes, so that both pre-plant and pre-animal cells would have emerged. Why is there no evidence for the existence of pre-mitochondria as independent cells at the surface of Earth? Did they emerge first in underground oceans, where photo-synthesis was not possible, and disappear in the fusion with pre-eukaryotes, therefore leaving no trace of their existence on the surface of Earth?
Both photosynthesis and cell respiration involve the so-called electron transport chain (ETC) as a basic structural element. It is associated with any membrane structure: in photo-synthesis it captures the energy of a photon, and in cell respiration it catches the biochemical energy which could be emitted as a photon, so that the fundamental mechanism is the same. This suggests that cell respiration emerged first as a modification of photo-synthesis at the level of prokaryotes. Before the emergence of mitochondria and plastids, the ETC associated with the pre-eukaryote membrane would have served the role of mitochondria or plastids. Using business language, mitochondria and plastids meant "outsourcing" of photosynthesis and cellular respiration.
For background see the chapter More Precise TGD Based View about Quantum Biology and
Prebiotic Evolution or article with the same title. See also the article About evolution before Cambrian
Explosion.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:46 PM
10/06/2015 - http://matpitka.blogspot.com/2015/10/neil-turok-about-recentdepressing.html#comments
Neil Turok about recent depressing situation in Theoretical Physics
Neil Turok is one of the very few theoreticians whose existence raises hopes that we might get over the stagnation period of theoretical physics that has now lasted for four decades. It relies basically on sloppy thinking initiated with the advent of GUTs and amplified via the emergence of SUSY, superstring models, and eventually M-theory. Loop gravity, starting from a naive canonical quantization attempt of general relativity, is also a representative example of this sloppy thinking and of the naive belief that it is possible to understand Nature without considering deep metaphysical questions.
The outcome is an endless variety of extremely contrived "models" and "effective theories" creating the illusion of understanding and, most importantly, generating huge amounts of curriculum vitae. Academic theoretical physics has become very much like playing the market.
The standard excuses for the failure of the theories are that there is no data, we need a bigger accelerator, gravitation is such an extremely weak interaction, we cannot detect dark matter, etc. The fact is that there is more than enough refined experimental data challenging existing physics in all length scales - condensed matter, biology, ..., even astrophysics in the solar system.
The problems are on the theory side, due to reductionistic dogmatism and the refusal to see physics from a wider perspective including also biology and consciousness. The extremely narrow professional specialization optimal for career building is one stumbling block. The professional arrogance of theoreticians enjoying a monthly salary, combined with unforeseen conformism, is a second problem. The typical theoretician simply refuses to touch anything not published in a so-called respected journal and is even proud of this narrow-mindedness, calling it professionalism!
Turok does not hesitate to articulate the truth known by many people in the field. "Our existing theories are too contrived, arbitrary, ad hoc," Turok complains. "I think we could be on the verge of the next big revolution in theory. It will be inspired by the data, by the failure of traditional paradigms. It will change our view of the universe."
I cannot but agree (and I even have some rather detailed ideas about what this revolution might be;-)). But we must patiently wait until the lost generations retire.
See the article To Explain the Universe, Physics Needs a Revolution in Scientific American.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 9:36 PM
5 Comments:
At 4:06 AM,
Leo Vuyk said...
I think we need more simple imaginative 3D design power at the quantum scale and above. For my personal architectural 3D focus see: https://www.flickr.com/photos/93308747@N05/albums/
At 8:13 PM, [email protected] said...
Certainly an important aspect, especially so if one is ready to give up the naive picture of 3-space as a flat space slightly curved and make it a many-sheeted structure with particles identified in terms of topological inhomogeneities.
At 7:22 AM,
Leo Vuyk said...
How simple can it be?
https://www.flickr.com/photos/93308747@N05/albums/72157658993403568
At 12:38 AM,
Leo Vuyk said...
or the birth of a planet-sized plasma sphere expelled by the Sun?
https://www.facebook.com/photo.php?fbid=912389065506704&set=a.116217121790573.
22660.100002068560151&type=3&theater
At 9:42 PM, donkerheid said...
http://www.weissensee-verlag.de/autoren/Volkamer/Volkamer-226/volkamer226_kurz.pdf from p. 20
http://www.gbv.de/dms/goettingen/390971308.pdf
seems impressive.
10/04/2015 - http://matpitka.blogspot.com/2015/10/neurons-have-their-privategenomes.html#comments
Do neurons have their private genomes?
Genetics is experiencing a revolution as information technology has made possible new research methods, and old dogmas must be given up. Before continuing, thanks to Ulla for giving links (see this and this) explaining the results of the article discussed in more detail: this led to the correction of some misunderstandings. See also this for background.
It has been discovered that brain cells have a mosaic-like distribution of genomes (see this, this, and this). In the standard framework, this mosaic should be created by random mutations. The mechanism of mutation is reported to involve transcription rather than DNA replication. The mutation would take place for DNA when it is copied to RNA after the opening of the DNA double strand. The mutations would have occurred during the period when neurons replicate, and the mutation history can be read by studying the distributions of changes in the genome.
This brings to mind the finding that a "knockout" - that is, removing a part of a gene - does not affect transcription (see the earlier blog posting). This suggests that the dark DNA is not changed in these modifications and that mRNA is determined by the dark DNA, which would serve as the template for transcription rather than ordinary DNA. If this were the case also for neurons, the mutations of neuronal genes should not affect gene transcription at all, and there would be no negative (or positive) effects on brain function. This seems too conservative. The mutations should have some more active role.
One can also consider a different interpretation. The mutations of DNA could be induced by the dark DNA. As dark DNA changes, the ordinary DNA associated with it is forced to change too - sooner or later, especially when the genome is in a state in which mutations can take place easily. Neurons during the replication stage could have such quantum critical genomes.
Evolution would not be mere selection of random mutations by the external environment in a time scale much longer than the lifetime of the individual, but a controlled process, which can occur in a time scale shorter than the lifetime and differently inside parts of, say, the brain. This is what TGD-inspired biology suggests. The modified DNA could be dark DNA serving as a template for transcription and also inducing the transformation of the ordinary DNA associated with it.
Whether this change can be transferred to the germ cells and thus to the offspring of course remains an open question. One can imagine that dark DNA strands (magnetic flux tubes) can penetrate germ cells, replace the earlier dark DNA sections, and induce a change of ordinary DNA. Or is a more delicate mechanism involving dark photons in question? With inspiration coming from the findings reported by Peter Gariaev, I have proposed a model of remote DNA replication suggesting that DNA can be replicated remotely if the needed nucleotides are present: the information about DNA could be transferred as dark photons, which can be transformed to ordinary photons identified as bio-photons. Could Lysenko have been at least partially right, despite being a swindler basing his views on ideology?
In any case, TGD-inspired biology allows one to imagine a controlled evolution of DNA, in analogy to what occurs in the R&D departments of modern technological organizations. The notion of dark DNA suggests that biological systems indeed have an "R&D department" in which new variants of DNA are studied as "dark DNA" sequences realised as dark proton sequences - the same applies to dark RNA, amino acids, and even tRNA. The possibility to transcribe RNA from dark DNA would mean that the testing can be carried out in real-life situations.
There indeed exists evidence that traumatic - and thus highly emotional - memories may be passed down through generations in the genome. Could the modifications of brain DNA represent long-term memories, as the above-described experiment suggests? Could the memories be transferred to the germ cells using the mechanism sketched above?
For background see the new chapter Dark matter, quantum gravity, and prebiotic evolution or the
article "Direct Evidence for Dark DNA?!". For a summary of earlier postings see Links to the latest
progress in TGD.
posted by Matti Pitkanen @ 9:46 PM
10 Comments:
At 11:43 AM,
Ulla said...
I put the links here for future:
http://www.sciencedaily.com/releases/2015/10/151001153931.htm
http://www.prnewswire.com/news-releases/tracking-subtle-brain-mutationssystematically-300017369.html
At 11:46 AM,
Ulla said...
Also this. http://www.sciencemag.org/content/341/6141/1237758 a review article.
At 8:01 PM, [email protected] said...
Thank you for the links. Very valuable!
At 12:48 PM, Ulla said...
Maybe some ideas concerning gravitational lensings http://arxiv.org/PS_cache/arxiv/pdf/1108/1108.3793v1.pdf would be useable?
At 12:56 PM, Ulla said...
Actually the creation of negative refraction is a clue to this? How does it bear on the 'negative energy' concept?
At 12:35 AM,
Ulla said...
This is about regulating DNA loops, which has bothered me much. Maybe the loops are
like gravitational lenses??? Smiley. It would explain the dark matter regulation?
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC545818/
At 1:04 PM, [email protected] said...
That topology is involved with DNA regulation is very interesting. I am however unable to say anything interesting except that magnetic flux loops should be involved, as in any bio-control: reconnections, length changes induced by changes of h_eff, etc. - a lot of mechanisms. Dark matter as large-h_eff phases would certainly be involved in the TGD Universe.
The idea about gravitational lensing does not look compelling;-) to me (why would it be needed?), although the h_eff = h_gr identity means that flux tubes mediating gravitational interactions - at least those between dark matter - should be involved with biology.
h_eff = h_gr is nice for other reasons: the Compton length does not depend on particle mass, and the energy spectrum of dark photons is universal - in the range of biophotons, which is in the visible and UV and corresponds to molecular transition energies. Dark photons can induce transitions of biomolecules, and control by the magnetic body becomes possible.
At 1:10 PM,
[email protected] said...
The negative energy concept is a basic element at all levels of the self hierarchy, DNA included. Translated to a theory of consciousness, it says that sequences of self - its time reversal - self - ... are fundamental. This is new, and I have not yet found a manner to express this mathematical result in a manner appealing to intuition. Reincarnation is a familiar notion. But how could I make reincarnation as time reversal, with sensory input somewhere in the Geometric Past, understandable? We are used to thinking that Time always has the same arrow: how to overcome this barrier?
New theories lead to highly counterintuitive predictions if they are really new theories. If they are also working theories, these predictions turn out to be correct! The jury has to decide;-)
At 12:08 PM,
Ulla said...
It is the gravitational hierarchy as Planck's constant that made me think of a 'blown up state' like a gravitational lens for the regulation and for quantum states. In a way this would give Einstein right?
At 7:36 PM, [email protected] said...
Einstein was right in a certain resolution: when you approximate many-sheetedness with a single sheet consisting of a single piece of curved Minkowski space (like replacing a sandwich with a single infinitely thin sheet), Einstein was right. Also his two principles (general coordinate invariance and equivalence principle) still hold true, albeit in a different framework. Even better, they are consistent with special relativity. TGD physics is more Einsteinian than Einstein himself!;-)
09/28/2015 - http://matpitka.blogspot.com/2015/09/how-fourth-phase-of-water-discoveredby.html#comments
How is the fourth phase of water discovered by Pollack formed?
The fourth phase of water is a new phase whose existence has been demonstrated convincingly by Pollack. This phase might be behind various other phases of water, such as ordered water, the Brown gas, etc., claimed by free energy researchers.
1. The fourth phase is formed in the experiments of Pollack in a system consisting of water bounded by a gel phase, and the new phase itself is gel-like. It consists of negatively charged regions, which can be as thick as 200 micrometers; even thicker exclusion zones (EZs) have been reported. The name "EZ" comes from the observation that the negatively charged regions exclude some atoms, ions, and molecules. The stoichiometry of an EZ is H1.5O, meaning that it contains hydrogen-bonded pairs of H2O molecules with one proton kicked away.
2. Where do the protons go? The TGD proposal is that they become dark protons at the flux tubes outside the exclusion zone. This generates a nanoscopic or even macroscopic quantum phase. DNA double strands with negative charge would represent an example of this kind of phase, with 2 negative charges per base pair. The cell is negatively charged and would also represent an example of the fourth phase inside the cell. In TGD-inspired prebiology, protocells would be formed by bubbles of the fourth phase of water, as I have proposed in the article "More Precise TGD Based View about Quantum Biology and Prebiotic Evolution". One ends up with a rather detailed model of prebiotic evolution involving also clay minerals known as phyllosilicates.
3. Irradiation of water using IR photons with energies in the range of metabolic energy quanta (the nominal value is about .5 eV) generates the fourth phase. This suggests that metabolic energy might be used to generate EZs around DNA strands, for instance.
Also microwave radiation in the range of a few GHz generates the fourth phase, and many other mechanisms have been suggested. Microwaves also generate the burning of water, which might therefore also involve the formation of EZs, and microwave frequencies affect microtubules strongly (see this and this). It seems that energy feed is the key concept. Maybe a spontaneously occurring self-organization process induced by a feed of energy is in question.
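Two small arithmetic checks of the numbers quoted in the list above: the H1.5O stoichiometry and the IR character of the 0.5 eV metabolic quantum (the hc = 1239.84 eV·nm conversion factor is standard):

```python
# H1.5O stoichiometry: start from a hydrogen-bonded pair of H2O molecules
# (H4O2) and remove one proton, leaving H3O2, i.e. H:O = 1.5
h_atoms = 2 * 2 - 1   # one proton kicked out of the (H2O)2 pair
o_atoms = 2
ratio = h_atoms / o_atoms

# A metabolic energy quantum of ~0.5 eV corresponds to an IR wavelength:
# lambda(nm) = hc/E ≈ 1239.84 eV*nm / E(eV)
HC_EV_NM = 1239.84
wavelength_nm = HC_EV_NM / 0.5   # mid-infrared

print(f"H:O ratio after proton removal = {ratio}")
print(f"0.5 eV photon wavelength ≈ {wavelength_nm:.0f} nm (infrared)")
```

The ~2.5 micrometer wavelength confirms that photons at the metabolic energy scale indeed lie in the IR region mentioned in item 3.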
How is the fourth phase formed? I have considered several alternative answers.
1. Physicist del Giudice has proposed the existence of coherence domains of size of order 1 micrometer in water. The proposal would be that these are coherence domains of water containing water molecules with one O-H bond per two molecules at criticality for splitting, so that a metabolic energy quantum can split the bond and kick the proton out as a dark proton. The problem is how to achieve the almost-ionization in such a manner that the highly regular H1.5O stoichiometry is obtained.
2. I have also proposed that so-called water clathrates might be involved and serve at least as seeds for the generation of the fourth phase of water. Water clathrates can be seen as cages containing hydrogen-bonded water, forming a crystal structure analogous to ice. Maybe the fourth phase - involving also hydrogen bonds between two water molecules before the loss of a proton - could be formed from clathrates by a phase transition liberating the dark protons.
I have not considered earlier a proposal inspired by the fact that the sequences of dark protons at
dark magnetic flux tubes outside the fourth phase can be regarded as dark nuclei.
1. Dark nuclei are characterized by a dark binding energy. If the dark binding energy scales like Coulomb energy, it behaves like 1/(size scale) and thus like 1/h_eff. The model for the dark genetic code as dark proton sequences - generated as DNA becomes negatively charged by the generation of the fourth phase of water around it - suggests that the size of dark protons is of order nanometer. This implies that the dark nuclear binding energy is in the UV region: just the energy of about 5 eV needed to kick an O-H bond near criticality against splitting by a small energy dose such as the metabolic energy quantum.
2. Could it be that the formation of EZs proceeds as a chain reaction, that is, dark nuclear fusion? If so, the basic entities of life - EZs - would be generated spontaneously, as Negentropy Maximization Principle indeed predicts! Note that this involves also the formation of dark nucleus analogs of DNA, RNA, and amino acids, and a realization of the genetic code! As a dark proton is added to a growing dark proton sequence, a dark or ordinary photon with energy of about 5 eV is liberated as binding energy and could (after transforming to an ordinary photon) kick a new O-H bond near criticality or over it, and external IR radiation at the metabolic energy takes care of the rest. If the system is near criticality, many other manners of getting it over the border can be imagined. Just a feed of energy generating IR radiation is enough.
3. The resulting dark nuclei can transform to ordinary nuclei, and this would give rise to ordinary cold fusion (see "Biological transmutations, and their applications in chemistry, physics, biology, ecology, medicine, nutrition, agriculture, geology" by Kervran and "The secret life of plants" by Tompkins and Bird). Biofusion of nuclei of biologically important ions such as Ca in living matter has been reported. The same phenomenon is also reported in systems in which hydrolysis splits water to yield hydrogen gas: for instance, Kanarev and Mizuno report cold fusion in this kind of system. The explanation would be in terms of the fourth phase of water: in the splitting of water, not only hydrogen atoms but also dark protons and dark nuclei would be formed.
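The 1/(size scale) scaling of item 1 can be checked with a one-line estimate. In the sketch below I scale a representative ordinary nuclear binding energy of ~5 MeV (my illustrative value for fm-sized nuclei) by the fm/nm size ratio, which lands in the eV range quoted above:

```python
# Hedged estimate: if the dark nuclear binding energy scales like
# 1/(size scale), i.e. like 1/h_eff, then scaling an ordinary nuclear
# binding energy of a few MeV (fm-sized nuclei) down to nm-sized dark
# proton sequences gives an eV-scale, i.e. UV, binding energy.
ORDINARY_BINDING_EV = 5e6    # ~5 MeV per nucleon, illustrative value
ORDINARY_SIZE_M = 1e-15      # femtometer: ordinary nuclear size scale
DARK_SIZE_M = 1e-9           # nanometer: dark proton size from dark genetic code

dark_binding_ev = ORDINARY_BINDING_EV * (ORDINARY_SIZE_M / DARK_SIZE_M)
print(f"estimated dark nuclear binding energy ≈ {dark_binding_ev:.1f} eV (UV region)")
```

The estimate is consistent with the ~5 eV UV quantum invoked in the chain-reaction mechanism of item 2.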
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 9:48 PM
09/28/2015 - http://matpitka.blogspot.com/2015/09/are-zeros-of-riemann-zetauniversal.html#comments
Are the zeros of the Riemann zeta number theoretically universal?
I have already posted the following piece of text as part of an earlier posting. Since the outcome of a simple argument leads to a very powerful statement about the number-theoretic anatomy of the zeros of Riemann zeta, I thought that it would be appropriate to present it separately.
Dyson's comment about the Fourier transform of Riemann zeta is very interesting concerning Number Theoretic Universality (NTU) for Riemann zeta.
1. The numerical calculation of the Fourier transform for the distribution of the imaginary parts iy of zeros s = 1/2 + iy of zeta shows that it is concentrated at a discrete set of frequencies coming as log(p^n), p prime. This translates to the statement that the zeros of zeta form a 1-dimensional quasicrystal, a discrete structure whose Fourier spectrum is by definition also discrete (this of course holds for ordinary crystals as a special case). Also the logarithms of powers of primes would form a quasicrystal, which is very interesting from the point of view of the p-adic length scale hypothesis. Primes label the "energies" of elementary fermions and bosons in arithmetic number theory, whose repeated second quantization gives rise to the hierarchy of infinite primes. The energies for general states are logarithms of integers.
2. Powers p^n label the points of the quasicrystal defined by the points log(p^n), and Riemann zeta has an interpretation as a partition function for the boson case with this spectrum. Could p^n label also the points of the dual lattice defined by iy?
3. The existence of the Fourier transform at points log(p_i^n) for any zero y_a requires p_i^{iy_a} to be a root of unity. This could define the sense in which the zeros of zeta are universal. This condition also guarantees that the factors n^{-1/2-iy} appearing in zeta at the critical line are number-theoretically universal (p^{1/2} is problematic for Q_p: the problem might be solved by eliminating from the p-adic analog of zeta the factor 1 - p^{-s}).
1. One obtains for the pair (p_i, s_a) the condition log(p_i) y_a = q_{ia} 2π, where q_{ia} is a rational number. Dividing the conditions for (i,a) and (j,a) gives
p_i = p_j^{q_{ia}/q_{ja}}
for every zero s_a, so that the ratios q_{ia}/q_{ja} do not depend on s_a. Since the exponent is a rational number, one obtains p_i^M = p_j^N for some integers, which cannot be true.
2. Dividing the conditions for (i,a) and (i,b) one obtains
y_a/y_b = q_{ia}/q_{ib}
so that the ratios q_{ia}/q_{ib} do not depend on p_i. The ratios of the imaginary parts of the zeros would therefore be rational numbers, which is a very strong prediction, and the zeros could be mapped to rationals by the scaling y_a → y_a/y_1, where y_1 is the zero with the smallest imaginary part.
3. The impossible consistency conditions for (i,a) and (j,a) can be avoided if each prime and its powers correspond to their own subset of zeros and these subsets of zeros are disjoint: one would have an infinite union of sub-quasicrystals labelled by primes, and each p-adic number field would correspond to its own subset of zeros: this might be seen as an abstract analog of the decomposition of a rational into powers of primes. This decomposition would be natural if, for ordinary complex numbers, the contribution of the complement of this set to the Fourier transform vanishes. The conditions for (i,a) and (i,b) now require that the ratios of zeros are rationals only in the subset associated with p_i.
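The step "p_i^M = p_j^N for some integers, which cannot be true" in item 1 above follows from unique factorization. A brute-force illustration (the search bound of 40 is an arbitrary illustrative choice):

```python
def power_coincidence(p, q, max_exp=40):
    """Return (M, N) with p**M == q**N for 1 <= M, N <= max_exp, else None.
    By unique factorization this is impossible for distinct primes: the
    two sides would have different prime factorizations."""
    powers_q = {q**n: n for n in range(1, max_exp + 1)}
    for m in range(1, max_exp + 1):
        if p**m in powers_q:
            return (m, powers_q[p**m])
    return None

# No coincidences for distinct prime pairs
for pair in [(2, 3), (3, 5), (5, 7), (2, 19)]:
    assert power_coincidence(*pair) is None
print("no solutions p^M = q^N found for distinct primes, as unique factorization requires")
```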
For the general option, the Fourier transform can be a delta function for x = log(p^k), and the set {y_a(p)} contains N_p zeros. The following argument inspires the conjecture that for each p there is an infinite number N_p of zeros y_a(p) satisfying p^{iy_a(p)} = u(p) = e^{(r(p)/m(p)) i2π}, where u(p) is a root of unity, that is y_a(p) = 2π (M(a) + r(p)/m(p))/log(p), forming a subset of a lattice with lattice constant y_0 = 2π/log(p), which itself need not be a zero.
In terms of the stationary phase approximation, the zeros y_a(p) associated with p would have a constant stationary phase, whereas for y_a(p_i ≠ p) the phase p^{iy_a(p_i)} would fail to be stationary. The phase e^{ixy} would be non-stationary also for x ≠ log(p^k) as a function of y.
1. Assume that for x = q log(p), q not a rational, the phases e^{ixy} fail to be roots of unity and are random, implying the vanishing/smallness of F(x).
2. Assume that for a given p all powers p^{iy} for y not in {y_a(p)} fail to be roots of unity and are also random, so that the contribution of the set of y not in {y_a(p)} to F(p) vanishes/is small.
3. For x = log(p^{k/m}) the Fourier transform should vanish or be small for m different from 1 (rational roots of primes) and give a non-vanishing contribution for m = 1. One has
F(x = log(p^{k/m})) = ∑_{1 ≤ n ≤ N(p)} e^{[kM(n,p)/(mN(n,p))] i2π} .
Obviously one can always choose N(n,p) = N(p).
4. For the simplest option N(p) = 1 one would obtain a delta function distribution for x = log(p^k). The sum of the phases associated with y_a(p) and -y_a(p) from the two half-axes of the critical line would give
F(x = log(p^n)) ∝ X(p^n) = 2cos(n (r(p)/m(p)) 2π) .
The sign of F would vary.
5. The rational r(p)/m(p) would characterize a given prime (one can require that r(p) and m(p) have no common divisors). F(x) is non-vanishing for all powers x = log(p^n) for m(p) odd. For p = 2, also m(2) = 2 allows |X(2^n)| = 2. An interesting ad hoc ansatz is m(p) = p or p^{s(p)}. One has periodicity in n with period m(p), that is, a logarithmic wave. This periodicity serves as a test and in principle allows one to deduce the value of r(p)/m(p) from the Fourier transform.
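A crude numerical illustration of the quasicrystal claim: using the tabulated imaginary parts of the first ten non-trivial zeros (standard published values), one can evaluate the truncated Fourier sum F(x) = ∑_a 2cos(x y_a) and compare its magnitude at x = log 2, a predicted peak point of the form log(p^n), with a generic x. Ten zeros is of course far too few for a reliable spectrum; this only sketches the computation, not the published result:

```python
import math

# Imaginary parts of the first ten non-trivial zeros of zeta (tabulated values)
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
         37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def F(x, zeros=ZEROS):
    """Truncated Fourier transform of the zero distribution on the critical
    line; the zeros come in pairs ±y, so each pair contributes 2*cos(x*y)."""
    return sum(2.0 * math.cos(x * y) for y in zeros)

x_peak = math.log(2)   # expected quasicrystal point log(p^n): p = 2, n = 1
x_generic = 1.0        # a generic point, not of the form log(p^n)

print(f"|F(log 2)| = {abs(F(x_peak)):.2f}")
print(f"|F(1.0)|   = {abs(F(x_generic)):.2f}")
```

Already with ten zeros the magnitude at log 2 visibly exceeds that at the generic point, and the peak value is negative, in accordance with the explicit-formula intuition.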
What could one conclude from the data (see this)?
1. The first graph gives |F(x = log(p^k))| and the second graph displays a zoomed-up part of |F(x)| for small powers of primes in the range [2,19]. In the first graph the eighth peak (p = 11) is the largest one, but in the zoomed graphs this is not the case. Hence something is wrong, or the graphs correspond to different approximations, suggesting that one should not take them too seriously. In any case, the modulus is not constant as a function of p^k. For small values of p^k the envelope of the curve decreases and seems to approach a constant for large values of p^k (one has x < 15, e^15 ≈ 3.3 × 10^6).
2. According to the first graph, |F(x)| decreases for x = k log(p) < 8, is largest for small primes, and remains below a fixed maximum for 8 < x < 15. According to the second graph, the amplitude decreases for powers of a given prime (say p = 2). Clearly, the small primes and their powers have a much larger |F(x)| than large primes.
There are many possible reasons for this behavior. The most plausible reason is that the sums involved converge slowly and the approximation used is not good. The inclusion of only 10^4 zeros would show the positions of the peaks but would not allow a reliable estimate of their intensities.
1. The distribution of zeros could be such that for small primes and their powers the number of zeros is large in the set of 10^4 zeros considered. This would be the case if the distribution of zeros y_a(p) is fractal and gets "thinner" with p, so that the number of contributing zeros scales down with p as a power of p, say 1/p, as suggested by the envelope in the first figure.
2. The infinite sum, which should vanish, converges only very slowly to zero. Consider the contribution ΔF(p^k, p_1) of the zeros in the class of p_1 ≠ p to F(x = log(p^k)) = ∑_{p_i} ΔF(p^k, p_i), which includes also p_i = p. ΔF(p^k, p_1), p_1 ≠ p, should vanish in an exact calculation.
1. By the proposed hypothesis this contribution reads as
ΔF(p^k, p_1) = ∑_a cos[X(p^k, p_1)(M(a, p_1) + r(p_1)/m(p_1)) 2π] ,
X(p^k, p_1) = log(p^k)/log(p_1) .
Here a labels the zeros associated with p_1. If p^k is "approximately divisible" by p_1, in other words p^k ≈ np_1, the sum over a finite number of terms gives a large contribution since interference effects are small, and a large number of terms is needed to give the nearly vanishing contribution suggested by the non-stationarity of the phase. This happens in several situations.
2. The number π(x) of primes smaller than x goes asymptotically like π(x) ≈ x/log(x), and the prime density approximately like 1/log(x) - 1/log(x)^2, so that the problem is worst for small primes. The problematic situation is encountered most often for powers p^k of a small prime p near a larger prime, and for primes p (also large) near a power of a small prime (the envelope of |F(x)| seems to become constant above x ∼ 10^3).
3. The worst situation is encountered for p = 2 and p_1 = 2^k - 1, a Mersenne prime, and for p_1 = 2^{2^k} + 1, k ≤ 4, a Fermat prime. For (p, p_1) = (2^k, M_k) one encounters the factor X(2^k, M_k) = log(2^k)/log(2^k - 1), very near to unity for large Mersenne primes. For (p, p_1) = (M_k, 2) one encounters X(M_k, 2) = log(2^k - 1)/log(2) ≈ k. Examples of Mersennes and Fermats are (3,2), (5,2), (7,2), (17,2), (31,2), (127,2), (257,2), ... Powers 2^k, k = 2,3,4,5,7,8, ... are also problematic.
4. Also twin primes are problematic, since in this case one has the factor X(p = p_1 + 2, p_1) = log(p_1 + 2)/log(p_1). The region of small primes contains many twin prime pairs: (3,5), (5,7), (11,13), (17,19), (29,31), ...
These observations suggest that the problems might be understood as resulting from the inclusion of too small a number of zeros.
3. The predicted periodicity of the distribution with respect to the exponent k of p^k is not consistent with the graph for small values of the prime, unless the period m(p) for small primes is large enough. The above-mentioned effects can quite well mask the periodicity. If the first graph is taken at face value for small primes, r(p)/m(p) is near zero, and m(p) is so large that the periodicity does not become manifest for small primes. For p = 2 this would require m(2) > 21, since the largest power 2^n ≈ e^15 corresponds to n ∼ 21.
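The near-resonant factors X(p^k, p_1) = log(p^k)/log(p_1) discussed above are easy to evaluate numerically. The sketch below (with illustrative prime pairs of my own choosing) shows how close to unity they come for a Mersenne pair and a twin-prime pair compared to a generic pair:

```python
import math

def X(x_num, x_den):
    """Ratio X = log(x_num)/log(x_den) appearing in the cross terms of ΔF."""
    return math.log(x_num) / math.log(x_den)

# Mersenne prime M_7 = 127 against 2^7 = 128: X very near unity
x_mersenne = X(2**7, 2**7 - 1)

# Twin primes (29, 31): X also near unity
x_twin = X(31, 29)

# A non-resonant pair for comparison
x_generic = X(11, 2)

print(f"X(2^7, M_7) = {x_mersenne:.4f}")   # barely above 1
print(f"X(31, 29)   = {x_twin:.4f}")       # close to 1
print(f"X(11, 2)    = {x_generic:.4f}")    # far from 1
```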
To summarize, the prediction is that the zeros of zeta should divide into disjoint classes {y_a(p)} labelled by primes such that within the class labelled by p one has p^{iy_a(p)} = e^{(r(p)/m(p)) i2π}, so that y_a(p) = [M(a,p) + r(p)/m(p)] 2π/log(p). What does this speculative picture look like from the point of view of TGD?
1. A possible formulation of number-theoretic universality for the poles of the fermionic Riemann zeta ζ_F(s) = ζ(s)/ζ(2s) could be the condition that the exponents p^{ks_n(p)/2} = p^{k/4} p^{iky_n(p)/2} exist in a number-theoretically universal manner for the zeros s_n(p) for a given p-adic prime p and for some subset of integers k. If the proposed conditions hold true, the exponent reduces to p^{k/4}, requiring that k is a multiple of 4. The number of non-trivial generating elements of the super-symplectic algebra in the monomial creating a physical state would be a multiple of 4. These monomials would have real part of conformal weight -1. Conformal confinement suggests that these monomials are products of pairs of generators for which the imaginary parts cancel. The conformal weights are however effectively real for the exponents automatically. Could the exponential formulation of number-theoretic universality effectively reduce the generating elements to those with conformal weight -1/4 and make the operators in question hermitian?
2. The quasicrystal property might have an application to TGD. The functions of the light-like radial coordinate appearing in the generators of the supersymplectic algebra could be of the form r^s, with s a zero of zeta, or rather its imaginary part. The eigenstate property with respect to the radial scaling r d/dr is natural by radial conformal invariance.
The idea that arithmetic QFT assignable to infinite primes is behind the scenes in turn suggests
light-like momenta assignable to the radial coordinate have energies with the dual spectrum
log(pn). This is also suggested by the interpretation of ζ as square root of thermodynamical
partition function for boson gas with momentum log(p) and analogous interpretation of ζF.
The two spectra would be associated with radial scalings and with light-like translations of
light-cone boundary respecting the direction and light-likeness of the light-like radial vector.
log(pn) spectrum would be associated with light-like momenta whereas p-adic mass scales would
characterize states with thermal mass. Note that generalization of p-adic length scale hypothesis
- 38 -
raises the scales defined by pn to a special physical position: this might relate to ideal structure of
adeles.
3. Finite measurement resolution suggests that the approximations of Fourier transforms over the
distribution of zeros taking into account only a finite number of zeros might have a physical
meaning. This might provide additional understand about the origins of generalized p-adic length
scale hypothesis stating that primes p≈ p1k, p1 small prime - say Mersenne primes - have a special
physical role.
See the chapter Unified Number Theoretic Vision of "TGD as Generalized Number Theory" or the
article Could one realize number theoretical universality for functional integral?. For a summary of
earlier postings, see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 5:09 AM
09/24/2015 - http://matpitka.blogspot.com/2015/09/where-they-are-gravitationalwaves.html#comments
Where they are - the Gravitational Waves?
A hundred years after Einstein proposed gravitational waves as part of his general theory of relativity, an 11-year search performed with CSIRO's Parkes telescope has failed to detect them, casting doubt on our understanding of galaxies and black holes. The work, led by Dr Ryan Shannon (of CSIRO and the International Centre for Radio Astronomy Research), is published today in the journal Science; see the article Gravitational waves from binary supermassive black holes missing in pulsar observations. See also the popular article 11-year cosmic search leads to black hole rethink.
This finding is consistent with the TGD view about blackhole-like entities (I wrote 3 blog articles inspired by the most recent Hawking hype: see this, this and this).
In the TGD Universe, an ideal blackhole would be a space-time region with Euclidian(!) signature of the induced metric, and its horizon would correspond to the light-like 3-surface at which the signature of the metric changes. An ideal blackhole (or rather its surface) would consist solely of dark matter. The large values of the gravitational Planck constant hgr = GMm/v0, where M is the mass of the blackhole and m is the mass of, say, an electron, would be associated with the flux tubes mediating gravitational interaction and gravitational radiation. v0 is a parameter with dimensions of velocity - some characteristic rotational velocity (say the rotation velocity of the blackhole) would be in question.
The quanta of dark gravitational radiation would have much larger energies E = heff×f than one would expect on the basis of the rotation frequency f, which corresponds to a macroscopic time scale. Dark gravitons would arrive as highly energetic particles along flux tubes and could decay to bunches of ordinary low-energy gravitons in the detection. These bunches would be bursts rather than a continuous background and would probably be interpreted as noise. I have considered a model for this here.
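The order of magnitude implied by hgr = GMm/v0 can be illustrated with a back-of-the-envelope estimate. The sketch below is purely illustrative: the blackhole mass, the choice v0 ~ 10^-3 c, and the rotation frequency are assumed values picked for the example, not TGD predictions.

```python
import math

# Toy estimate of the gravitational Planck constant h_gr = G*M*m/v0.
# All parameter choices below are illustrative assumptions.
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
M = 2.0e30           # kg: a solar-mass blackhole (assumed)
m = 9.109e-31        # kg: electron mass
v0 = 1.0e-3 * 3.0e8  # m/s: assuming v0 ~ 10^-3 c for illustration

h_gr = G * M * m / v0
f = 1.0e-4           # Hz: a macroscopic rotation frequency (assumed)

E_dark = h_gr * f                     # energy of a single dark graviton
E_ordinary = 2 * math.pi * hbar * f   # h*f for an ordinary graviton

print(f"h_gr/hbar ~ {h_gr / hbar:.2e}")
print(f"dark graviton energy ~ {E_dark:.2e} J vs ordinary ~ {E_ordinary:.2e} J")
```

Even with these rough inputs the ratio hgr/ħ is astronomically large, which is why a single dark graviton would deposit a detectable burst of energy instead of an individually undetectable quantum.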
See the article TGD view about blackholes and Hawking radiation. For a summary of earlier postings
see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 6:12 PM
2 Comments:
At 12:07 PM, Ulla said...
Dark gravitation is a difficult concept... even dark radiation, if not from some noisy effect
like ether or 'grid'.
At 7:35 PM, [email protected] said...
The basic condition is that the verbal shorthands have mathematically well-defined counterparts. Of course an intuitive picture, which is difficult to even verbalize and develops only gradually, is necessary. What is completely trivial after 38 years with TGD is often completely incomprehensible to a newcomer.
This is the general problem. Language forms linear representations - "stories" - and forgets most of the incredibly complex web-like structure. The reader knows nothing about the web in which the story is contained as a small thread. Hence the problems.
Dark gravitons are large h_eff gravitons for which one has E = h_eff*f rather than E = h*f as usual.
One can also say something concrete if there is an idea about how dark gravitons transform to ordinary ones. Concerning the modelling of graviton emission, the situation differs from that of modelling atomic emissions only by the fact that h is replaced with h_eff. I considered this kind of simple model about a decade ago in http://tgdtheory.fi/public_html/tgdclass/tgdclass.html#qastro .
09/24/2015 - http://matpitka.blogspot.com/2015/09/some-applications-of-numbertheoretical.html#comments
Some applications of Number Theoretical Universality
Number theoretic universality (NTU) in the strongest form says that all numbers involved at the "basic level" (whatever this means!) of adelic TGD are products of roots of unity and of powers of a root of e defining finite-dimensional extensions of p-adic numbers (e^p is an ordinary p-adic number). This is an extremely powerful physics-inspired conjecture with a wide range of possible mathematical applications.
1. For instance, the vacuum functional defined as an exponent of Kähler action for preferred extremals would be a number of this kind. One could define the functional integral as an adelic operation in all number fields: essentially as a sum of exponents of Kähler action for stationary preferred extremals, since the Gaussian and metric determinants potentially spoiling NTU would cancel each other, leaving only the exponent.
2. The implications of NTU for the zeros of Riemann zeta, expected to be closely related to super-symplectic conformal weights, will be discussed below.
3. NTU generalises to all Lie groups. Exponents exp(i·ni·Ji/n) of Lie-algebra generators define generalisations of number theoretically universal group elements and generate a discrete subgroup of a compact Lie group. Also hyperbolic "phases" based on the roots e^(m/n) are possible and make possible discretized NTU versions of all Lie groups, expected to play a key role in the adelization of TGD.
NTU generalises also to quaternions and octonions and allows one to define them as number theoretically universal entities. Note that ordinary p-adic variants of quaternions and octonions do not give rise to a number field: the inverse of a quaternion can have a vanishing p-adic variant of the norm squared satisfying ∑n xn^2 = 0.
NTU allows one to define also the notion of Hilbert space as an adelic notion. The exponents of the angles characterising a unit vector of Hilbert space would correspond to roots of unity.
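The root-of-unity discretization of a Lie group can be made concrete with a toy example. The sketch below is a minimal illustration, not TGD machinery: the choice of the diagonal (Cartan) subgroup of SU(2) and the order n=12 are arbitrary assumptions. Exponentiating a generator with the rational angle 2π/n makes every matrix entry a root of unity, so the element generates a finite cyclic subgroup.

```python
import cmath

def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Diagonal SU(2) element with rational angle 2*pi/n: both entries are
# n:th roots of unity, so the generated subgroup is finite (order n).
n = 12
g = [[cmath.exp(2j * cmath.pi / n), 0],
     [0, cmath.exp(-2j * cmath.pi / n)]]

acc = [[1, 0], [0, 1]]
for _ in range(n):
    acc = matmul(acc, g)

# After n steps the element returns to the identity.
err = max(abs(acc[i][j] - (1 if i == j else 0))
          for i in range(2) for j in range(2))
print(f"||g^{n} - I|| = {err:.2e}")
```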
Super-symplectic conformal weights and Riemann zeta
The existence of WCW geometry is highly nontrivial already in the case of loop spaces. A maximal group of isometries is required, and it is infinite-dimensional. The super-symplectic algebra is an excellent candidate for the isometry algebra. There is also an extended conformal algebra associated with δCD. These algebras have a fractal structure: the conformal weights for an isomorphic subalgebra are n-multiples of those for the entire algebra, giving an infinite hierarchy labelled by integers n>0. The generating conformal weights could be poles of the fermionic zeta ζF. This demands n>0. There would be an infinite number of generators with different non-vanishing conformal weights with the other quantum numbers fixed. For ordinary conformal algebras there is only a finite number of generating elements (n=1).
If the radial conformal weights for the generators of the algebra consist of poles of ζF, the situation changes. ζF is suggested by the observation that fermions are the only fundamental particles in TGD.
1. Riemann zeta ζ(s) = ∏p 1/(1-p^(-s)), identifiable formally as the partition function ζB(s) of an arithmetic boson gas with bosons of energy log(p) and temperature 1/s = 1/(1/2+iy), should be replaced with that of an arithmetic fermionic gas, given in the product representation by ζF(s) = ∏p (1+p^(-s)), so that the identity ζB(s)/ζF(s) = ζB(2s) follows. This gives ζF(s) = ζB(s)/ζB(2s).
ζF(s) has zeros at the zeros sn of ζ(s) and at the pole s=1/2 of ζ(2s). ζF(s) has poles at the zeros sn/2 of ζ(2s) and at the pole s=1 of ζ(s).
The spectrum of 1/T for the generators of the algebra would be {(-1/2+iy)/2, n>0, -1}. In p-adic thermodynamics the p-adic temperature is 1/T = 1/n and corresponds to the "trivial" poles of ζF. Complex values of temperature do not make sense in ordinary thermodynamics, but in ZEO quantum theory can be regarded as a square root of thermodynamics, and a complex temperature parameter makes sense.
2. If the spectrum of conformal weights of the generators of the algebra (not the entire algebra!) corresponds to poles serving as analogs of propagator poles, it consists of the "trivial" conformal weights h=n>0 - the standard spectrum with h=0 assignable to massless particles excluded - and the "non-trivial" weights h = -1/4+iy/2. There is also a pole at h=-1.
Both the non-trivial pole with real part hR = -1/4 and the pole h=-1 correspond to tachyons. I have earlier proposed conformal confinement, meaning that the total conformal weight for the state is real. If so, one obtains for conformally confined two-particle states corresponding to conjugate non-trivial zeros, in the minimal situation, hR = -1/2, assignable to the N-S representation.
In p-adic mass calculations the ground state conformal weight must be -5/2. The negative fermion ground state weight could explain why the ground state conformal weight must be tachyonic, -5/2. With the required 5 tensor factors one would indeed obtain this with minimal conformal confinement. In fact, an arbitrarily large tachyonic conformal weight is possible, but the physical states should always have conformal weights h>0.
3. h=0 is not possible for generators, which reminds of the Higgs mechanism, for which the naive ground state corresponds to a tachyonic Higgs. h=0 conformally confined massless states are necessarily composites obtained by applying the generators of the Kac-Moody algebra or the super-symplectic algebra to the ground state. This is the case according to p-adic mass calculations, and would suggest that the negative ground state conformal weight can be associated with the super-symplectic algebra while the remaining contribution comes from ordinary super-conformal generators. Hadronic masses, whose origin is poorly understood, could come from super-symplectic degrees of freedom. There is no need for p-adic thermodynamics in super-symplectic degrees of freedom.
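The identity ζF(s) = ζ(s)/ζ(2s) used above is easy to check numerically for real s > 1, where both the Euler product and the Dirichlet series converge. A minimal sketch (the prime cutoff and the number of series terms are arbitrary accuracy choices for this illustration):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def zeta(s, terms=100000):
    """Truncated Dirichlet series for zeta(s), valid for Re(s) > 1."""
    return sum(n**-s for n in range(1, terms + 1))

s = 2.0
# Fermionic Euler product prod_p (1 + p^-s) over primes up to a cutoff ...
zf_product = 1.0
for p in primes_up_to(10000):
    zf_product *= 1 + p**-s
# ... should agree with zeta(s)/zeta(2s); at s=2 this is (pi^2/6)/(pi^4/90) = 15/pi^2.
zf_ratio = zeta(s) / zeta(2 * s)
print(zf_product, zf_ratio, 15 / math.pi**2)
```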
Are the zeros of Riemann zeta number theoretically universal?
Dyson's comment about the Fourier transform of Riemann zeta is very interesting concerning NTU for Riemann zeta.
1. The numerical calculation of the Fourier transform for the distribution of the imaginary parts iy of the zeros s=1/2+iy of zeta shows that it is concentrated at a discrete set of frequencies coming as log(p^n), p prime. This translates to the statement that the zeros of zeta form a 1-dimensional quasicrystal, a discrete structure whose Fourier spectrum is by definition also discrete (this of course holds for ordinary crystals as a special case). Also the logarithms of powers of primes would form a quasicrystal, which is very interesting from the point of view of the p-adic length scale hypothesis. Primes label the "energies" of elementary fermions and bosons in arithmetic number theory, whose repeated second quantization gives rise to the hierarchy of infinite primes. The energies for general states are logarithms of integers.
2. Powers p^n label the points of the quasicrystal defined by the points log(p^n), and Riemann zeta has an interpretation as the partition function for the boson case with this spectrum. Could p^n label also the points of the dual lattice defined by iy?
3. The existence of the Fourier transform at the points log(pi^n) for any zero ya requires pi^(iya) to be a root of unity. This could define the sense in which the zeros of zeta are universal. This condition also guarantees that the factors n^(-1/2-iy) appearing in zeta at the critical line are number theoretically universal (p^(1/2) is problematic for Qp: the problem might be solved by eliminating from the p-adic analog of zeta the factor 1-p^(-s)).
1. One obtains for the pair (pi,sa) the condition log(pi)·ya = qia·2π, where qia is a rational number. Dividing the conditions for (i,a) and (j,a) gives pi = pj^(qia/qja) for every zero sa, so that the ratios qia/qja do not depend on sa. Since the exponent is a rational number, one obtains pi^M = pj^N for some integers M and N, which cannot be true.
2. Dividing the conditions for (i,a) and (i,b) one obtains ya/yb = qia/qib, so that the ratios qia/qib do not depend on pi. The ratios of the imaginary parts of the zeros would therefore be rational numbers, which is a very strong prediction, and the zeros could be mapped to rationals by the scaling ya → ya/y1, where y1 is the zero with the smallest imaginary part.
3. The impossible consistency conditions for (i,a) and (j,a) can be avoided if each prime and its powers correspond to their own subset of zeros and these subsets of zeros are disjoint: one would have an infinite union of sub-quasicrystals labelled by primes, and each p-adic number field would correspond to its own subset of zeros. This might be seen as an abstract analog for the decomposition of a rational to powers of primes. The decomposition would be natural if for ordinary complex numbers the contribution from the complement of this set to the Fourier transform vanishes. The conditions for (i,a) and (i,b) require now that the ratios of zeros are rationals only within the subset associated with pi.
For the general option, the Fourier transform can be a delta function for x = log(p^k), and the set {ya(p)} contains Np zeros. The following argument inspires the conjecture that for each p there is an infinite number Np of zeros ya(p) satisfying p^(iya(p)) = u(p) = e^(i2π·r(p)/m(p)), where u(p) is a root of unity, that is ya(p) = [M(a,p) + r(p)/m(p)]·2π/log(p), forming a subset of a lattice with lattice constant y0 = 2π/log(p), which itself need not be a zero.
In terms of stationary phase approximation, the zeros ya(p) associated with p would have a constant stationary phase, whereas for ya(pi ≠ p) the phase p^(iya(pi)) would fail to be stationary. The phase e^(ixy) would be non-stationary also for x ≠ log(p^k) as a function of y.
1. Assume that for x = q·log(p), q not a rational number, the phases e^(ixy) fail to be roots of unity and are random, implying the vanishing/smallness of F(x).
2. Assume that for a given p all powers p^(iy) for y not in {ya(p)} fail to be roots of unity and are also random, so that the contribution of the set of y not in {ya(p)} to F(x) vanishes/is small.
3. For x = log(p^(k/m)) the Fourier transform should vanish or be small for m different from 1 (rational roots of primes) and give a non-vanishing contribution for m=1. One has F(x = log(p^(k/m))) = ∑(1≤n≤N(p)) e^(i2π·kM(n,p)/(mN(n,p))). Obviously one can always choose N(n,p) = N(p).
4. For the simplest option N(p)=1 one would obtain a delta function distribution for x = log(p^k). The sum of the phases associated with ya(p) and -ya(p) from the two half-axes of the critical line would give F(x = log(p^n)) ∝ X(p^n) = 2cos(n·(r(p)/m(p))·2π). The sign of F would vary.
5. The rational r(p)/m(p) would characterize a given prime (one can require that r(p) and m(p) have no common divisors). F(x) is non-vanishing for all powers x = log(p^n) for m(p) odd. For p=2, also m(2)=2 allows one to have |X(2^n)|=2. An interesting ad hoc ansatz is m(p) = p or p^(s(p)). One has periodicity in n with period m(p), that is, a logarithmic wave. This periodicity serves as a test and in principle allows one to deduce the value of r(p)/m(p) from the Fourier transform.
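The concentration of the Fourier transform at x = log(p^n) can be glimpsed numerically even with a handful of zeros. A rough sketch follows; the 15 imaginary parts below are the standard published values for the first non-trivial zeros, and with so few zeros only the qualitative effect is visible: |F| at x = log 2 is much larger than at a generic x.

```python
import math

# Imaginary parts of the first 15 non-trivial zeros of the Riemann zeta
# function (standard published values, rounded to 6 decimals).
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
         37.586178, 40.918719, 43.327073, 48.005151, 49.773832,
         52.970321, 56.446248, 59.347044, 60.831779, 65.112544]

def F(x):
    """Fourier sum over the zeros +/-y on the critical line: sum of 2*cos(x*y)."""
    return sum(2 * math.cos(x * y) for y in ZEROS)

# |F| should be enhanced at x = log(p^k) and small at a generic x.
for x, label in [(math.log(2), "log 2"), (math.log(3), "log 3"), (1.0, "generic")]:
    print(f"x = {label:8s}: F(x) = {F(x):+.2f}")
```

With the full distribution of zeros the peaks sharpen into the delta-function-like spikes at log(p^n) discussed above; the truncation to 15 zeros is exactly the kind of "too small number of zeros" approximation whose artifacts are analyzed next.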
What could one conclude from the data (see this)?
1. The first graph gives |F(x=log(p^k))| and the second graph displays a zoomed-up part of |F(x)| for small powers of primes in the range [2,19]. For the first graph the eighth peak (p=11) is the largest one, but in the zoomed graph this is not the case. Hence something is wrong, or the graphs correspond to different approximations, suggesting that one should not take them too seriously. In any case, the modulus is not constant as a function of p^k. For small values of p^k the envelope of the curve decreases and seems to approach a constant for large values of p^k (one has x < 15, e^15 ≈ 3.3×10^6).
2. According to the first graph, |F(x)| decreases for x = k·log(p) < 8, is largest for small primes, and remains below a fixed maximum for 8 < x < 15. According to the second graph the amplitude decreases for powers of a given prime (say p=2). Clearly, the small primes and their powers have much larger |F(x)| than large primes.
There are many possible reasons for this behavior. The most plausible reason is that the sums involved converge slowly and the approximation used is not good. The inclusion of only 10^4 zeros would show the positions of the peaks but would not allow a reliable estimate for their intensities.
1. The distribution of zeros could be such that for small primes and their powers the number of zeros is large in the set of 10^4 zeros considered. This would be the case if the distribution of zeros ya(p) is fractal and gets "thinner" with p, so that the number of contributing zeros scales down with p as a power of p, say 1/p, as suggested by the envelope in the first figure.
2. The infinite sum, which should vanish, converges only very slowly to zero. Consider the contribution ΔF(p^k,p1) of the zeros belonging to a class p1 ≠ p to F(x=log(p^k)) = ∑pi ΔF(p^k,pi), which includes also pi = p. ΔF(p^k,p1), p1 ≠ p, should vanish in the exact calculation.
1. By the proposed hypothesis this contribution reads as
ΔF(p^k,p1) = ∑a cos[X(p^k,p1)·(M(a,p1) + r(p1)/m(p1))·2π], X(p^k,p1) = log(p^k)/log(p1).
Here a labels the zeros associated with p1. If p^k is "approximately divisible" by p1 - in other words, p^k ≈ n·p1 - the sum over a finite number of terms gives a large contribution since interference effects are small, and a large number of terms is needed to give the nearly vanishing contribution suggested by the non-stationarity of the phase. This happens in several situations.
2. The number π(x) of primes smaller than x goes asymptotically like π(x) ≈ x/log(x) and the prime density approximately like 1/log(x) - 1/log(x)^2, so that the problem is worst for the small primes. The problematic situation is encountered most often for powers p^k of small primes p near a larger prime, and for primes p (also large) near a power of a small prime (the envelope of |F(x)| seems to become constant above x ∼ 10^3).
3. The worst situation is encountered for p=2 and p1 = 2^k-1 - a Mersenne prime - and p1 = 2^(2^k)+1, k ≤ 4 - a Fermat prime. For (p,p1) = (2^k,Mk) one encounters the factor X(2^k,Mk) = log(2^k)/log(2^k-1), very near to unity for large Mersenne primes. For (p,p1) = (Mk,2) one encounters X(Mk,2) = log(2^k-1)/log(2) ≈ k. Examples of Mersennes and Fermats are (3,2), (5,2), (7,2), (17,2), (31,2), (127,2), (257,2), ... Powers 2^k, k = 2,3,4,5,7,8,... are also problematic.
4. Also twin primes are problematic since in this case one has the factor X(p=p1+2,p1) = log(p1+2)/log(p1). The region of small primes contains many twin prime pairs: (3,5), (5,7), (11,13), (17,19), (29,31), ...
These observations suggest that the problems might be understood as resulting from the inclusion of too small a number of zeros.
3. The predicted periodicity of the distribution with respect to the exponent k of p^k is not consistent with the graph for small values of the prime unless the period m(p) for small primes is large enough. The above-mentioned effects can quite well mask the periodicity. If the first graph is taken at face value for small primes, r(p)/m(p) is near zero, and m(p) is so large that the periodicity does not become manifest for small primes. For p=2 this would require m(2) > 21, since the largest power 2^n ≈ e^15 corresponds to n ∼ 21.
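The near-resonances blamed above for slow convergence are one-line arithmetic to verify: for large Mersenne primes Mk = 2^k - 1 the factor X(Mk,2) = log(Mk)/log(2) lies very close to the integer k, so the phases for the classes of 2 and Mk interfere constructively over many terms. A quick illustration:

```python
import math

# X(Mk, 2) = log(2^k - 1)/log(2) for Mersenne primes Mk: nearly the
# integer k, which is the near-resonance discussed in the text.
for k in (5, 7, 13, 17):          # Mersenne prime exponents
    mk = 2**k - 1
    x = math.log(mk, 2)
    print(f"M_{k} = {mk}: X = {x:.6f} (deviation from {k}: {k - x:.2e})")
```

The deviation from an integer shrinks roughly like 1/Mk, so the larger the Mersenne prime, the longer the two classes of zeros stay in phase.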
To summarize, the prediction is that the zeros of zeta should divide into disjoint classes {ya(p)} labelled by primes such that within the class labelled by p one has p^(iya(p)) = e^(i2π·r(p)/m(p)), so that ya(p) = [M(a,p) + r(p)/m(p)]·2π/log(p).
What is this speculative picture from the point of view of TGD?
1. A possible formulation of number theoretic universality for the poles of the fermionic Riemann zeta ζF(s) = ζ(s)/ζ(2s) could be the condition that the exponents p^(k·sn(p)/2) = p^(k/4)·p^(ik·yn(p)/2) exist in a number theoretically universal manner for the zeros sn(p) for a given p-adic prime p and for some subset of integers k. If the proposed conditions hold true, the exponent reduces to p^(k/4), requiring that k is a multiple of 4. The number of non-trivial generating elements of the super-symplectic algebra in the monomial creating a physical state would be a multiple of 4. These monomials would have a real part of conformal weight -1. Conformal confinement suggests that these monomials are products of pairs of generators for which the imaginary parts cancel. The conformal weights are however effectively real for the exponents automatically. Could the exponential formulation of number theoretic universality effectively reduce the generating elements to those with conformal weight -1/4 and make the operators in question hermitian?
2. The quasicrystal property might have an application to TGD. The functions of the light-like radial coordinate appearing in the generators of the super-symplectic algebra could be of the form r^s, s a zero of zeta or, rather, its imaginary part. The eigenstate property with respect to the radial scaling r·d/dr is natural by radial conformal invariance.
The idea that the arithmetic QFT assignable to infinite primes is behind the scenes in turn suggests that the light-like momenta assignable to the radial coordinate have energies with the dual spectrum log(p^n). This is also suggested by the interpretation of ζ as a square root of the thermodynamical partition function for a boson gas with momenta log(p), and by the analogous interpretation of ζF.
The two spectra would be associated with radial scalings and with light-like translations of the light-cone boundary respecting the direction and light-likeness of the light-like radial vector. The log(p^n) spectrum would be associated with light-like momenta, whereas p-adic mass scales would characterize states with thermal mass. Note that the generalization of the p-adic length scale hypothesis raises the scales defined by p^n to a special physical position: this might relate to the ideal structure of adeles.
3. Finite measurement resolution suggests that the approximations of the Fourier transforms over the distribution of zeros taking into account only a finite number of zeros might have a physical meaning. This might provide additional understanding about the origins of the generalized p-adic length scale hypothesis stating that primes p ≈ p1^k, p1 a small prime - say a Mersenne prime - have a special physical role.
See the chapter Unified Number Theoretic Vision of "TGD as Generalized Number Theory" or the
article Could one realize number theoretical universality for functional integral?. For a summary of
earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 3:43 AM
09/20/2015 - http://matpitka.blogspot.com/2015/09/invisible-magnetic-fields-asdark.html#comments
"Invisible magnetic fields" as dark magnetic fields
A further victory for the notion of dark magnetic fields: scientists create a "portal" that conceals
electromagnetic fields (see this). The popular article talks about wormholes and invisible magnetic
fields. A couple of comments about the official terminology are in order.
"Wormhole" is a translation of "flux tube carrying monopole current". Since TGD is "bad science", "wormhole" is the correct wording, although it does not have much to do with reality. A similar practice is applied by stringy people when they speak of wormholes connecting blackholes. The original TGD terminology talks about partonic 2-surfaces and magnetic flux tubes, but again: this is "bad" science. The reader can invent an appropriate translation for "bad".
"Invisible magnetic field" translates to "dark magnetic field carrying monopole flux". Dark magnetic fields give rise to one of the basic differences between TGD and Maxwellian theory: these magnetic fluxes can exist without generating currents. This makes them especially important in early cosmology, since they explain the long-range magnetic fields having no explanation in standard cosmology. Superconductivity is a second application, central in TGD-inspired quantum biology.
What is fantastic is that technology based on TGD is now being created without the slightest idea that it is technology based on TGD (really ;-) - sorry for my alter ego, which does not understand the importance of political correctness). Experimenters have now created a magnetic field whose flux travels from one position to another as invisible flux. The idea is to use a flux which propagates radially from a point (in good approximation) through a spherical superconductor and is forced to run along dark flux tubes by the Meissner effect.
Indeed, in the TGD-based model of the superconductor the supracurrents flow along dark flux tubes as dark Cooper pairs through the superconductor (see the earlier posting). Since the magnetic field cannot penetrate into the superconductor, the flux travels along dark flux tubes carrying magnetic monopole fluxes and supracurrents.
One of the applications is to guide magnetic fluxes to desired positions in MRI - precisely targeted communication and control in general. This is one of the basic ideas of TGD-inspired quantum biology.
See the chapter Criticality and dark matter. For a summary of earlier postings see Links to the latest
progress in TGD.
posted by Matti Pitkanen @ 9:57 PM
2 Comments:
At 1:47 PM, Ulla said...
http://www.nature.com/ncomms/2015/150701/ncomms8446/full/ncomms8446.html
At 8:34 PM, [email protected] said...
Thank you.
09/19/2015 - http://matpitka.blogspot.com/2015/09/macroscopically-quantum-coherentfluid.html#comments
Macroscopically quantum coherent fluid dynamics at criticality?
Evidence for the hierarchy of Planck constants implying macroscopic quantum coherence in quantum critical systems is rapidly accumulating. Also people having the courage to refer to TGD in their articles are gradually emerging. The most recent fluid dynamics experiment providing this kind of evidence was performed by Yves Couder and Emmanuel Fort (see for instance the article Single particle diffraction in macroscopic scale). Mathematician John W. M. Bush has commented on these findings in the Proceedings of the National Academy of Sciences, and his article provides references to a series of papers by Couder and collaborators.
The system studied consists of a tray containing water whose surface oscillates. The intensity of the vibration is just below the critical value inducing so-called Faraday waves at the surface of the water. Although the water surface is calm, a water droplet begins to bounce and generates waves propagating along the water surface - "walkers". Walkers behave like classical particles at Bohr orbits. As they pass through a pair of slits they choose a random slit, but a series of experiments produces an interference pattern. Walkers exhibit an effect analogous to quantum tunneling, and even the analogs of quantum mechanical bound states of walkers, realized as circular orbits, emerge as the water tray rotates!
The proposed interpretation of the findings is in terms of Bohm's theory. Personally I find it very
difficult to believe in this since Bohm's theory has profound mathematical difficulties. Bohm's theory
was inspired by Einstein's belief in classical determinism and the idea that quantum non-determinism is
not actual but reduces to the presence of hidden variables. Unfortunately, this idea led to no progress.
TGD is analogous to Bohm's theory in that classical theory is exact, but classical theory is now only an exact correlate of quantum theory: there is no attempt to eliminate quantum non-determinism. Quantum jumps occur between superpositions of entire classical time evolutions rather than their time=constant snapshots: this solves the basic paradox of the Copenhagen interpretation. A more refined formulation is in terms of zero energy ontology, which in turn forces one to generalize quantum measurement theory to a theory of consciousness.
Macroscopic quantum coherence associated with the behavior of droplets bouncing on the surface of water is suggested by the experiments. For instance, quantum measurement theory seems to apply to the behavior of a single droplet as it passes through a slit. In TGD the prerequisite for macroscopic quantum coherence would be quantum criticality, at which large heff = n×h is possible. There indeed is an external oscillation of the tray containing the water with an amplitude just below the criticality for the generation of Faraday waves at the surface of the water. Quantum-classical correspondence states that the quantum behavior should have a classical correlate.
The basic structure of classical TGD is that of hydrodynamics in the sense that the dynamics reduces to conservation laws plus conditions expressing the vanishing of an infinite number of so-called super-symplectic charges - the conditions guarantee the strong form of holography and express quantum criticality. The generic solution of the classical field equations could reduce to Frobenius integrability conditions guaranteeing that the conserved isometry currents are integrable and thus define global coordinates varying along the flow lines.
One should of course be very cautious. For the ordinary Schrödinger equation the system is closed; now the system is open. This is not a problem if the only function of the external vibration is to induce quantum criticality. The experiment brings to mind Fröhlich's old vision of external vibrations as inducers of what looks like quantum coherence.
In the TGD framework, this coherence would be forced coherence at the level of visible matter, but the oscillation itself would correspond to genuine macroscopic quantum coherence and a large value of heff. A standard example is a set of pendulums, which gradually start to oscillate in unison in the presence of a weak synchronizing signal. In the brain, neurons would start to oscillate synchronously in the presence of dark photons with large heff.
See the chapter Criticality and dark matter of "Hyperfinite factors and dark matter hierarchy". For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:03 PM
09/17/2015 - http://matpitka.blogspot.com/2015/09/algebraic-universality-and-value-ofk.html#comments
Algebraic universality and the value of Kähler coupling strength
With the development of the number theoretically universal view of functional integration in WCW, a concrete vision about the exponent of Kähler action in Euclidian and Minkowskian space-time regions has emerged. The basic requirement is that the exponent of Kähler action belongs to an algebraic extension of rationals and therefore to that of p-adic numbers, and does not depend on the ordinary p-adic numbers at all - this at least for sufficiently large primes p. The functional integral would reduce in Euclidian regions to a sum over maxima, since the troublesome Gaussian determinants that could spoil number theoretic universality are cancelled by the metric determinant of WCW.
The adelically exceptional properties of the Neper number e, the Kähler metric of WCW, and the strong form of holography posing extremely strong constraints on preferred extremals could make this possible. In Minkowskian regions the exponent of the imaginary Kähler action would be a root of unity. In Euclidian space-time regions it would be expressible as a power of some root of e, which is unique in the sense that e^p is an ordinary p-adic number so that e is p-adically an algebraic number - the p:th root of e^p.
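The stated p-adic property of e is standard number theory and easy to check: the series e^p = Σ p^n/n! converges p-adically because the p-adic valuation of the n-th term, n - v_p(n!) with v_p(n!) given by Legendre's formula, grows without bound. A small sketch (nothing TGD-specific):

```python
def vp_factorial(n, p):
    """p-adic valuation of n! by Legendre's formula."""
    v = 0
    pk = p
    while pk <= n:
        v += n // pk
        pk *= p
    return v

def term_valuation(n, p):
    """p-adic valuation of the n-th term p**n / n! of the series for e**p."""
    return n - vp_factorial(n, p)

p = 5
vals = [term_valuation(n, p) for n in range(1, 31)]
print(vals)
# The valuations grow without bound, so the series for e**p converges in the
# p-adic metric: e**p is an ordinary p-adic number, and e is the p:th root of it.
```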
These conditions give conditions on the Kähler coupling strength αK = gK^2/4π (hbar=1), identifiable as an analog of critical temperature. Quantum criticality of TGD would thus make possible number theoretical universality (or vice versa).
1. In Euclidian regions the natural starting point is the CP2 vacuum extremal, for which the maximum value of Kähler action is SK = π^2/(2gK^2) = π/(8αK).
The condition reads SK = q = m/n if one allows roots of e in the extension. If one requires a minimal extension involving only e and its powers, one would have SK = n. One obtains 1/αK = 8q/π, where the rational q = m/n can also reduce to an integer. One cannot exclude the possibility that q depends on the algebraic extension of rationals defining the adele in question.
For CP2-type extremals, the value of the p-adic prime should be larger than pmin = 53. One can consider a situation in which a large number of CP2 type vacuum extremals contribute, and in this case the condition would be more stringent. The condition that the action for a CP2 extremal is smaller than 2 gives 1/αK ≤ 16/π ≈ 5.09.
It seems that there is a lower bound for the p-adic prime assignable to a given space-time surface inside CD, suggesting that the p-adic prime is larger than 53 × N, where N is particle number. This bound has no practical significance. In condensed matter, particle number is proportional to (L/a)^3 - the volume divided by atomic volume. On the basis of p-adic mass calculations, the p-adic prime can be estimated to be of order (L/R)^2. Here a is atomic size of about 10 Angstroms and R the CP2 radius.
Using R ≈ 10^4 LPlanck this gives as an upper bound for the size L of a condensed matter blob a completely super-astronomical distance L ≤ a^3/R^2 ∼ 10^25 ly, to be compared with the distance of about 10^10 ly travelled by light during the lifetime of the Universe. For a blackhole of radius rS = 2GM with p ∼ (2GM/R)^2 and consisting of particles with mass above M ≈ hbar/R one would obtain the rough estimate M > (27/2) × 10^-12 mPlanck ∼ 13.5 × 10^3 TeV, trivially satisfied.
2. The physically motivated expectation from earlier arguments - not necessarily consistent with the recent ones - is that the value of αK is quite near to the fine structure constant at electron length scale: 1/αK ≈ 1/αem ≈ 137.035999074(44).
The latter condition gives n = 54 = 2 × 3^3 and 1/αK ≈ 137.51. The deviation from the fine structure constant is Δα/α = 3 × 10^-3, i.e. 0.3 per cent. For n = 53 one obtains 1/αK = 134.96 with an error of 1.5 per cent. For n = 55 one obtains 1/αK = 140.06 with an error of 2.2 per cent. Could the relatively good prediction be a mere accident, or is there something deeper involved?
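The arithmetic of this comparison is easy to reproduce; the value of the inverse fine structure constant is the one quoted above:

```python
import math

alpha_em_inv = 137.035999074  # inverse fine structure constant quoted in the text

# 1/alpha_K = 8n/pi follows from S_K = n together with S_K = pi/(8 alpha_K)
for n in (53, 54, 55):
    alphaK_inv = 8 * n / math.pi
    rel_err = abs(alphaK_inv - alpha_em_inv) / alpha_em_inv
    print(f"n={n}: 1/alpha_K = {alphaK_inv:.2f}, deviation {100 * rel_err:.1f}%")
```

The output reproduces the three cases of the text: n = 54 gives 137.51 (0.3 per cent off), n = 53 gives 134.96 (1.5 per cent), n = 55 gives 140.06 (2.2 per cent).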
What about Minkowskian regions? It is difficult to say anything definite. For cosmic string like objects the action is non-vanishing but proportional to the area A of the string like object, and the conditions would give quantization of the area. The area of a geodesic sphere of CP2 is proportional to π. If the value of gK is the same for Minkowskian and Euclidian regions, gK^2 ∝ π^2 implies SK ∝ A/(R^2 π), so that A/R^2 ∝ π^2 is required.
This approach leads to a different algebraic structure of αK than the earlier arguments.
1. αK is a rational multiple of π so that gK^2 is proportional to π^2. At the level of Quantum-TGD, the theory is completely integrable by the very definition of WCW integration(!) and there are no radiative corrections in WCW integration. Hence αK does not appear in vertices and therefore does not produce any problems in p-adic sectors.
2. This approach is consistent with the proposed formula relating the gravitational constant and p-adic length scale. G/Lp^2 for p = M127 would now be a rational power of e, number theoretically universally. A good guess is that G does not depend on p. As found, this could be achieved also if the volume of the CP2 type extremal depends on p so that the formula holds for all primes. αK could also depend on the algebraic extension of rationals to guarantee the independence of G on p. Note that preferred p-adic primes correspond to ramified primes of the extension so that extensions are labelled by collections of ramified primes, and the ramified prime corresponding to gravitonic space-time sheets should appear in the formula for G/Lp^2.
3. Also the speculative scenario for coupling constant evolution could remain as such. Could the p-adic coupling constant evolution for the gauge coupling strengths be due to the breaking of number theoretical universality bringing in dependence on p? This would require a mapping of p-adic coupling strengths to their real counterparts, and the variant of canonical identification used is not unique.
4. A more attractive possibility is that coupling constants are algebraically universal (no
dependence on number field). Even the value of αK, although number theoretically universal,
could depend on the algebraic extension of rationals defining the adele. In this case coupling
constant evolution would reflect the evolution assignable to the increasing complexity of
algebraic extension of rationals. The dependence of coupling constants on p-adic prime would be
induced by the fact that so called ramified primes are physically favored and characterize the
algebraic extension of rationals used.
5. One must also remember that the running coupling constants are associated with the QFT limit of TGD obtained by lumping the sheets of many-sheeted space-time to a single region of Minkowski space. Coupling constant evolution would emerge at this limit. Whether this evolution reflects number theoretical evolution as a function of the algebraic extension of rationals is an interesting question.
See the chapter Coupling Constant Evolution in Quantum TGD and the chapter Unified Number
Theoretic Vision of "Physics as Generalized Number Theory" or the article Could one realize number
theoretical universality for functional integral?. For a summary of earlier postings see Links to the
latest progress in TGD.
posted by Matti Pitkanen @ 11:00 PM
5 Comments:
At 2:38 AM,
Ulla said...
Compare to the prime-thread in Joseph's simulations. What defines the distance between the mirrors? The squeezed states? How is acceleration created or expanded at microscale, how is it regulated so that we get a border condition?
You say the distance between borders is a timescale, or a p-adic hierarchy, a squaring of Planck's constant, etc. How are these related? The electron timescale comes from its winding?
At 5:30 AM,
Ulla said...
Wrong links. See the chapter Coupling Constant Evolution in Quantum TGD "Towards
M-matrix", and the chapter Unified Number Theoretic Vision of "Physics as Generalized
Number Theory"
At 5:41 AM,
[email protected] said...
Thank you. Corrected.
At 5:42 AM,
[email protected] said...
Sorry. I could not understand the question.
At 12:10 AM,
L. Edgar Otto said...
Matti, nice to see you are re-evaluating the general questions.
09/17/2015 - http://matpitka.blogspot.com/2015/09/number-theoretical-vision-relieson.html#comments
Could one realize number theoretical universality for functional integral?
Number theoretical vision relies on the notion of number theoretical universality (NTU). In the fermionic sector NTU is necessary: one cannot speak about real and p-adic fermions as separate entities, and fermionic anti-commutation relations are indeed number theoretically universal.
By supersymmetry, NTU should apply also to functional integral over WCW (or its sector defined by
given causal diamond CD) involved with the definition of scattering amplitudes. The expression for the
integral should make sense in all number fields simultaneously. At first this condition looks horrible but
the Kähler structure of WCW and the identification of vacuum functional as exponent of Kähler
function, and the unique adelic properties of Neper number e give excellent hopes about NTU and also
predict the general forms of the functional integral and of the value spectrum of Kähler action for
preferred extremals.
See the chapter Unified Number Theoretic Vision of "Physics as Generalized Number Theory" or the
article Could one realize number theoretical universality for functional integral?. For a summary of
earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 5:31 AM
09/15/2015 - http://matpitka.blogspot.com/2015/09/the-effects-of-psychedelics-as-keyto.html#comments
The effects of Psychedelics as a key to the understanding of Remote Mental Interactions?
There is a book about psychedelics titled Inner Paths to Outer Space: Journeys to Alien Worlds through Psychedelics and Other Spiritual Technologies, written by Rick Strassman, Slawek Wojtowicz, Luis Eduardo Luna and Ede Frecska (see this). The basic message of the book is that psychedelics might make possible instantaneous remote communications with distant parts of the Universe.
The basic objection is that light velocity sets stringent limits on classical communications. A second objection is that the communications require a huge amount of energy unless they are precisely targeted. The third objection is that quantum coherence in very long, even astrophysical, scales is required. In the TGD framework this argument does not apply.
In Zero Energy Ontology (ZEO), communications in both directions of geometric time are possible, and a kind of time-like zig-zag curve makes possible apparent superluminal velocities. Negentropic quantum entanglement provides a second manner to share mental images, say sensory information, remotely.
The proposed model leads to a general idea: the attachment of information molecules such as neurotransmitters and psychedelics to a receptor serves as a manner to induce a remote connection involving transfer of dark photon signals in both directions of geometric time to arbitrarily long distances. The formation of a magnetic flux tube contact is a prerequisite for the connection, having an interpretation as direct attention or sense of presence.
One can see living organisms as systems continually trying to build this kind of connections, created by reconnection of U-shaped flux tubes serving as magnetic tentacles. Dark matter as a hierarchy of phases with arbitrarily large values of Planck constant guarantees quantum coherence in arbitrarily long scales.
The natural TGD-inspired hypothesis about what happens at the level of the brain, to be discussed in the sequel in detail, goes as follows.
1. Psychedelics bind to the same receptors as the neurotransmitters with similar aromatic rings (a weaker assumption is that the neurotransmitters in question possess aromatic rings). This is presumably consistent with the standard explanation of the effect of classical psychedelics as a modification of serotonin uptake. This binding replaces the flux tube connection via the neurotransmitter to some part of the personal magnetic body with a connection via the psychedelic to some other system, which might even be in outer space. A communication line is created, making among other things possible remote sensory experiences.
Magnetic fields extending to arbitrarily large distances in Maxwell's theory are replaced with flux tubes in the TGD framework. The magnetic bodies of psychedelics would carry very weak magnetic fields and would have very large heff - maybe serving as a kind of intelligence quotient.
2. This would be like replacing the connection to the nearby computer server with a connection to a server at the other side of the globe. This would affect the usual function of the transmitter and possibly induce negative side effects. Clearly, the TGD inspired hypothesis gives the psychedelics a much more active role than the standard hypothesis.
3. Psychedelics can be classified into two groups depending on whether they contain a derivative of the amino-acid trp with two aromatic rings, or of phe with one aromatic ring. Also DNA nucleotides resp. their conjugates have 2 resp. 1 similar aromatic rings. This suggests that the coupling between information molecule and receptor is universal and the same as the coupling between the two bases in DNA double strand, consisting of hydrogen bonds. This hypothesis is testable since it requires that the trp:s/phe:s of the information molecule can be brought to the same positions as the phe:s/trp:s in the receptor. If also protein folding relies on this coupling, one might be able to predict the folding to a high degree.
4. A highly suggestive idea is that molecules with aromatic rings are fundamental conscious entities at the level of molecular biology, and that more complex conscious entities are created from them by reconnection of flux tubes. DNA/RNA sequences and microtubules would be basic examples of this architecture of consciousness. If so, protein folding would be dictated by the formation of trp-phe contacts giving rise to larger conscious entities.
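The proposed pairing rule can be written as a toy predicate: a unit with two aromatic rings docks with a unit with one ring, just as purines pair with pyrimidines. The ring counts below are standard biochemistry; the application of the rule to trp-phe receptor contacts is the hypothesis of this posting, not established chemistry:

```python
# Aromatic ring counts (standard biochemistry); the pairing rule below is
# the hypothesis of the text, not established chemistry.
AROMATIC_RINGS = {
    "trp": 2, "phe": 1,   # amino acids behind the two psychedelic families
    "A": 2, "G": 2,       # purines (two rings)
    "C": 1, "T": 1,       # pyrimidines (one ring)
}

def pairs(a, b):
    """Hypothesized universal coupling: a two-ring unit docks with a one-ring unit."""
    return AROMATIC_RINGS[a] + AROMATIC_RINGS[b] == 3

print(pairs("A", "T"), pairs("G", "C"))   # Watson-Crick pairs satisfy the rule
print(pairs("trp", "phe"))                # the proposed trp-phe receptor contact
print(pairs("trp", "trp"))                # two bicyclic units would not pair
```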
See the chapter Meditation, Mind-Body Medicine and Placebo: TGD point of view of "TGD-Based
View about Consciousness, Living Matter, and Remote Mental Interactions" or the revised article
Psychedelic induced experiences as key to the understanding of the connection between magnetic body
and information molecules?. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 3:13 AM
09/07/2015 - http://matpitka.blogspot.com/2015/09/first-indications-for-breakingof.html#comments
First indications for the breaking of lepton universality due to the higher weak boson generations
Lepton and quark universality of weak interactions is a basic tenet of the standard model. Now the first indications for the breaking of this symmetry have been found.
1. Lubos tells that LHCb has released a preprint with title Measurement of the ratio of branching ratios (Bbar0 → Dbar*+ τ ντ)/(Bbar0 → Dbar*+ μ νμ). The news is that the measured branching ratio is about 33 per cent instead of the 25 per cent determined by mass ratios if the standard model is correct. The outcome differs by 2.1 standard deviations from the prediction, so that it might be a statistical fluke.
2. There are also indications for a second Bbar0 anomaly (see this). B mesons have two variants, long- and short-lived, oscillating to their antiparticles and back - this relates to CP breaking. The surprise is that the second B meson - I could not figure out whether it was the short- or long-lived one - prefers to decay to eν instead of μν.
3. There are also indications for the breaking of universality (see this) from B+ → K+ e+ e- and B+ → K+ μ+ μ- decays.
In the TGD framework, my first (and wrong) guess for an explanation was CKM mixing for leptons. TGD predicts that also leptons should suffer CKM mixing, induced by the different mixings of the topologies of the partonic 2-surfaces assignable to charged and neutral leptons. The experimental result would give valuable information about the values of the leptonic CKM matrix. What is new is that the decays of W bosons to lepton pairs involve the mixing matrix and the CKM matrix, whose deviation from the unit matrix brings effects anomalous in the standard model framework.
The origin of the mixing would be topological - usually it is postulated in a completely ad hoc manner for fermion fields. Particles correspond to partonic 2-surfaces - actually several of them, but in the case of fermions the standard model quantum numbers can be assigned to one of the partonic surfaces, so that its topology becomes especially relevant. The topology of this partonic 2-surface at the end of the causal diamond (CD) is characterized by its genus - the number of handles attached to a sphere - and by its conformal equivalence class characterized by conformal moduli.
Electron and its neutrino correspond to spherical topology before mixing, muon and its neutrino to torus before mixing, etc. Leptons are modelled assuming conformal invariance, meaning that the leptons have wave functions (elementary particle vacuum functionals) in the moduli space of conformal equivalence classes known as Teichmueller space.
Contrary to the naive expectation, mixing alone does not explain the experimental finding. Taking into account mass corrections, the rates should be the same for different charged leptons since the neutrinos are not identified. That mixing does not have any implications follows from the unitarity of the CKM matrix.
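The unitarity argument can be checked numerically: if the final-state neutrino is not observed, the rate to a given charged lepton is proportional to the row sum Σ_j |U_ij|², which equals 1 for every row of a unitary mixing matrix, whatever the mixing angles. A sketch using the standard CKM/PMNS parametrization with arbitrary illustrative angles:

```python
import cmath, math

def mixing_matrix(theta12, theta23, theta13, delta):
    """Standard-parametrization 3x3 unitary mixing matrix (CKM/PMNS form)."""
    c12, s12 = math.cos(theta12), math.sin(theta12)
    c23, s23 = math.cos(theta23), math.sin(theta23)
    c13, s13 = math.cos(theta13), math.sin(theta13)
    e = cmath.exp(-1j * delta)
    ec = e.conjugate()
    return [
        [c12 * c13, s12 * c13, s13 * e],
        [-s12 * c23 - c12 * s23 * s13 * ec,
         c12 * c23 - s12 * s23 * s13 * ec, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ec,
         -c12 * s23 - s12 * c23 * s13 * ec, c23 * c13],
    ]

U = mixing_matrix(0.3, 0.7, 0.1, 1.2)   # arbitrary angles
for i, row in enumerate(U):
    # decay rate to charged lepton i, summed over the unobserved neutrinos
    print(i, sum(abs(x) ** 2 for x in row))   # equals 1 for any angles
```

The mixing matrix therefore drops out of the summed rates, which is why a unitary leptonic CKM matrix alone cannot produce the anomaly.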
The next trial is based on the prediction of 3 generations of weak bosons suggested by TGD.
1. The TGD based explanation of the family replication phenomenon in terms of genus-generation correspondence forces one to ask whether gauge bosons, identifiable as pairs of fermion and antifermion at opposite throats of a wormhole contact, could have a bosonic counterpart of family replication. Dynamical SU(3) assignable to the three lowest fermion generations/genera, labelled by the genus of the partonic 2-surface (wormhole throat), means that fermions are combinatorially SU(3) triplets. Could the 2.9 TeV state (if it exists) correspond to this kind of state in the tensor product of triplet and antitriplet? The mass of the state should depend besides the p-adic mass scale also on the structure of the SU(3) state, so that the mass would be different. This difference should be very small.
2. Dynamical SU(3) could be broken so that wormhole contacts with different genera for the throats would be more massive than those with the same genera. This would give an SU(3) singlet and two neutral states, which are analogs of η′, η and π0 in Gell-Mann's quark model. The analogs of η and π0 and the analog of η′, which I have identified as the standard weak boson, would have different masses. But how large is the mass difference?
3. These 3 states are expected to have identical mass for the same p-adic mass scale, if the mass comes mostly from the analog of hadronic string tension assignable to the magnetic flux tube connecting the two wormhole contacts associated with any elementary particle in the TGD framework (this is forced by the condition that the flux tube carrying monopole flux is closed and makes a very flattened square shaped structure with the long sides of the square at different space-time sheets). p-Adic thermodynamics would give a very small genus dependent contribution to mass if the p-adic temperature is T=1/2, as one must assume for gauge bosons (T=1 for fermions). Hence the 2.95 TeV state for which there are some indications could indeed correspond to a second Z generation. W should have a similar state at 2.5 TeV.
The orthogonality of the 3 weak bosons implies that their charge matrices are orthogonal. As a consequence, the higher generations of weak bosons do not have universal couplings to leptons and quarks. The breaking of universality implies a small breaking of universality in weak decays of hadrons due to the presence of a virtual MG,79 boson decaying to a lepton pair. These anomalies should be seen both in the weak decays of hadrons producing Lν pairs via the decay of virtual W or its partner WG,79, and via the decay of virtual Z or its partner ZG,79 to L+ L-. Also γG,79 could be involved.
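The orthogonality argument can be made concrete with diagonal charge matrices in the 3-dimensional "genus space". The singlet couples universally; any combination orthogonal to it - chosen here in analogy with π0 and η of the quark model - necessarily has generation-dependent entries. The explicit vectors are illustrative:

```python
import math

# Diagonal charge matrices in "genus space" (e, mu, tau generations).
# The lowest boson couples universally; the higher two must be orthogonal
# to it, which forces generation-dependent (non-universal) couplings.
universal = [1 / math.sqrt(3)] * 3                              # analog of eta'
gen2 = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]               # analog of pi0
gen3 = [1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6)]  # analog of eta

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for name, q in (("gen2", gen2), ("gen3", gen3)):
    print(name, "orthogonal to universal:", abs(dot(universal, q)) < 1e-12)
    print(name, "universal couplings?", len(set(round(x, 9) for x in q)) == 1)
```

Any vector orthogonal to (1,1,1)/sqrt(3) has entries summing to zero, so it cannot couple identically to all three generations: orthogonality and universality exclude each other.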
This could explain the three anomalies associated with the neutral B mesons, which are analogs of
neutral K mesons having long- and short-lived variants.
1. The two anomalies involving W bosons could be understood if some fraction of decays takes place via the decay b → c + WG,79 followed by WG,79 → L + ν. The charge matrix of WG,79 is not universal and CP breaking is involved. Hence one could have interference effects, which increase the branching fraction to τν or eν relative to μν depending on whether the state is the long- or short-lived B meson.
2. The anomaly in decays of B+ producing charged lepton pairs does not involve CP breaking and would be due to the non-universality of the ZG,79 charge matrix.
TGD also allows one to consider leptoquarks as pairs of leptons and quarks, and there is some evidence for them too! I wrote a blog posting about this as well (for an article see this). Also indications for M89 and MG,79 hadron physics with scaled-up mass scales are accumulating, and QCD is shifting to the verge of revolution (see this).
It seems that TGD is really there and nothing can prevent it from showing up. I predict that the next decades in physics will be a New Golden Age of both experimental and theoretical physics. I am eagerly and impatiently waiting for theoretical colleagues to finally wake up from their 40-year-long sleep, so that CERN will again be full of working physicists also during weekends (see this);-).
See the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis".
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 8:46 AM
09/03/2015 - http://matpitka.blogspot.com/2015/09/indication-for-scaled-up-variant-ofz.html#comments
Indication for a scaled up variant of Z boson
Both Tommaso Dorigo and Lubos Motl tell about a spectacular 2.9 TeV di-electron event not observed in previous LHC runs. A single event of this kind is of course most probably just a fluctuation, but the human mind is such that it tries to see something deeper in it - even if practically all trials of this kind are chasing of mirages.
Since the decay is leptonic, the typical question is whether the dreamed-for state could be an exotic Z boson. This is also the reaction in the TGD framework. The first question to ask is whether the weak bosons assignable to Mersenne prime M89 have scaled-up copies assignable to the Gaussian Mersenne MG,79. The scaling factor for mass would be 2^((89-79)/2) = 32. When applied to the Z mass of about 0.09 TeV one obtains 2.88 TeV, not far from 2.9 TeV. Eureka!? Looks like a direct scaled-up version of Z!? W should have a similar variant around 2.6 TeV.
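The scaling arithmetic is easy to check; the W and Z masses used below are the measured values (an input, not a TGD prediction):

```python
# Naive p-adic scaling from Mersenne M89 to Gaussian Mersenne M_G,79:
# mass scales as sqrt(p) ~ 2^(k/2), so the factor is 2^((89-79)/2) = 2^5 = 32.
scale = 2 ** ((89 - 79) / 2)

m_Z = 0.09119   # TeV, measured Z mass
m_W = 0.08038   # TeV, measured W mass

print(scale)                  # 32.0
print(round(scale * m_Z, 2))  # ~2.92 TeV, close to the 2.9 TeV di-electron event
print(round(scale * m_W, 2))  # ~2.57 TeV, the predicted scaled-up W
```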
TGD indeed predicts exotic weak bosons and also gluons. The TGD based explanation of the family replication phenomenon in terms of genus-generation correspondence forces one to ask whether gauge bosons, identifiable as pairs of fermion and antifermion at opposite throats of a wormhole contact, could have a bosonic counterpart of family replication. Dynamical SU(3) assignable to the three lowest fermion generations/genera, labelled by the genus of the partonic 2-surface (wormhole throat), means that fermions are combinatorially SU(3) triplets. Could the 2.9 TeV state - if it exists - correspond to this kind of state in the tensor product of triplet and antitriplet? The mass of the state should depend besides the p-adic mass scale also on the structure of the SU(3) state, so that the mass would be different. This difference should be very small.
Dynamical SU(3) could be broken so that wormhole contacts with different genera for the throats would be more massive than those with the same genera. This would give an SU(3) singlet and two neutral states, which are analogs of η′, η and π0 in Gell-Mann's quark model. The analogs of η and π0 and the analog of η′, which I have identified as the standard weak boson, would have different masses. But how large is the mass difference?
These 3 states are expected to have identical mass for the same p-adic mass scale if the mass comes mostly from the analog of hadronic string tension assignable to the magnetic flux tube connecting the two wormhole contacts associated with any elementary particle in the TGD framework (this is forced by the condition that the flux tube carrying monopole flux is closed and makes a very flattened square shaped structure with the long sides of the square at different space-time sheets). p-Adic thermodynamics would give a very small genus dependent contribution to mass if the p-adic temperature is T=1/2, as one must assume for gauge bosons (T=1 for fermions). Hence the 2.95 TeV state could indeed correspond to this kind of state.
Can one imagine any pattern for the Mersennes and Gaussian Mersennes involved? Charged leptons correspond to electron (M127), muon (MG,113) and tau (M107): Mersenne - Gaussian Mersenne - Mersenne. Does one have a similar pattern for gauge bosons too: M89 - MG,79 - M61?
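The primality pattern can be verified with standard number theory: the Lucas-Lehmer test for the Mersenne exponents, and a Miller-Rabin test for the Gaussian Mersenne norms 2^k - 2^((k+1)/2) + 1 (this sign holds for k ≡ 1, 7 mod 8, which covers k = 79 and 113). Nothing here is TGD-specific:

```python
def lucas_lehmer(p):
    """True iff M_p = 2**p - 1 is a Mersenne prime (p an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test with fixed small-prime bases."""
    if n < 2:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in bases:
        if a % n == 0:
            continue
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gaussian_mersenne_norm(k):
    """Norm of the Gaussian Mersenne (1+i)**k - 1 for k = 1, 7 mod 8."""
    return 2 ** k - 2 ** ((k + 1) // 2) + 1

# The pattern in the text: charged leptons M127 - M_G,113 - M107,
# conjectured weak bosons M89 - M_G,79 - M61.
print([p for p in (61, 89, 107, 127) if lucas_lehmer(p)])
print([k for k in (79, 113) if is_prime(gaussian_mersenne_norm(k))])
```

All four Mersenne exponents and both Gaussian Mersenne exponents pass, so the conjectured boson pattern uses the same kind of primes as the established lepton pattern.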
Recall that Lubos reported a dijet at 5.2 TeV: see the earlier posting. The dijet structure suggests some meson. One can imagine several candidates, but no perfect fit if one assumes an M89 meson and applies naive scaling. For instance, if the kaon mass is scaled by a factor 2^10 rather than 512 - just like the mass of the pion to get the mass of the proposed M89 pion candidate - one obtains 4.9 TeV. Naive scaling of the 940 MeV mass of the nucleon by 512 would predict that the M89 nucleon has a mass of 4.8 TeV.
See the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis".
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:28 PM
5 Comments:
At 1:58 AM,
Leo Vuyk said...
IMHO, there could be a different explanation of the W Z and H mass jumps: “local
vacuum chirality reversal”. Our material universe could be equipped with a left hand
chirality vacuum and our mirror (anti-matter) universe equipped with a right hand chirality
, (For an alternative symmetric big bang) Then inside the LHC collider a local anti
material vacuum could have formed around the colliding process, with much higher mass
readings. see http://vixra.org/abs/1312.0143 and http://vixra.org/pdf/1306.0065v2.pdf
At 3:52 AM,
[email protected] said...
You are clearly referring to the model discussed by Lubos in which the possible bumps at 2.1, 2.9, and 5.2 TeV would correspond to W, Z, and H. My model is different. To my best knowledge, the identification of the quantum numbers for these states is open.
My proposal is that 2.1 TeV could correspond to the pion of M_G,79 hadron physics: both neutral and charged pions would be there. This interpretation is strongly supported by the dominance of the decays to quarks, suggesting strongly meson-like states. For a W-like state one should not have this kind of quark dominance. That leads to the un-natural assumption of leptophoby, which is completely ad hoc.
The 2.9 TeV state could correspond to the analog of Z predicted by genus-generation correspondence. Also W should be there at about 2.5 TeV. Maybe also the analog of Higgs at 4 TeV - by ultra naive scaling.
The 5.2 TeV state could have several identifications as M_89 mesons or even the M_89 nucleon.
At 1:49 PM, Ulla said...
Look at this! http://arxiv.org/pdf/math/0110072.pdf
At 3:31 AM,
Ulla said...
...suggesting that neutrino mass scale depends on environment can be understood if neutrinos can suffer topological condensation in several p-adic length scales...
environment = primes? hence they are chaotic? defines fractality?
http://arxiv.org/abs/0708.2567
compare to "we derive the area law for the ground state of a scalar field on a generic lattice in the limit of small speed of sound" http://arxiv.org/abs/1507.01567
At 5:32 AM,
[email protected] said...
p-Adicity means fractality. A basic aspect of fractality is the existence of several scales, and this is also what p-adicity means. The possibility that a particle can correspond to several p-adic primes and mass scales would be a manifestation of this. I would not assign chaoticity to this.
08/31/2015 - http://matpitka.blogspot.com/2015/08/evidence-of-ancient-life-discoveredin.html#comments
Evidence of ancient life discovered in mantle rocks deep below the seafloor
A physicalist who has learned his lessons sees life, evolution, the generation of the genetic code, etc. as random thermal fluctuations. Empirical facts suggest that the situation is just the opposite. The emergence of life seems to be unavoidable, but water seems to be a prerequisite for it. Now researchers have found evidence for ancient life in deep mantle rocks from about 125 million years ago. The emergence of life in the mantle is believed to involve interaction of rocks with hydrothermal water originating from seawater and circulating in the mantle (see the illustration).
A serious objection against the successful Urey-Miller experiments as a guideline to how prebiotic life emerged is that the atmosphere was not reducing at that time (reducing means that there are atoms able to donate electrons; oxygen does just the reverse).
This objection could serve as a motivation for assuming that prebiotic life evolved in the mantle. For a detailed vision about underground prebiology see the article More Precise TGD Based View about Quantum Biology and Prebiotic Evolution. This model predicts that the Cambrian Explosion, lasting for 20-25 million years, was associated with a sudden expansion of the Earth radius by a factor 2 about 542 million years ago.
The Expanding Earth hypothesis would be reduced in the TGD framework to the replacement of continuous cosmic expansion for astrophysical objects with a sequence of short expanding periods followed by long non-expanding periods, in accordance with the finding that astrophysical objects do not participate in cosmic expansion (see this). This sudden expansion would have led to a burst of underground oceans to the surface of Earth and generated the oceans as we know them now. This prediction is consistent with the assumption that the hydrothermal water explaining the above described finding originated from seawater.
A killer prediction is that underground life would have developed photosynthesis, and the lifeforms would have been rather highly evolved as they burst onto the surface of Earth. How this was possible is one of the questions for which an answer is proposed in the article More Precise TGD Based View about Quantum Biology and Prebiotic Evolution. The new physics predicted by TGD (in particular, the hierarchy of Planck constants identified in terms of dark matter) is an essential element of the model.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 8:29 PM
08/31/2015 - http://matpitka.blogspot.com/2015/08/ontology-epistomology-duality.html#comments
Ontology-Epistemology duality?
A theorist suffering from a tendency to philosophize sooner or later encounters the strange self-referential question "Does my theory about the Universe represent also itself as an aspect of the Universe?". For a dualist, theory and reality are of course two different things. But are they? Could one make sense of the statement that theory is the reality which it describes? Or, more or less equivalently: epistemology is dual to ontology. This would be very nice: theory would not be something outside the reality. This must also relate closely to the question about the observer in physics: in the recent quantum measurement theory the observer is an outsider affecting reality by quantum measuring it, but is not described by the quantum theory.
TGD-inspired theory of Consciousness generalizes quantum measurement theory to a theory of consciousness and makes the physicist part of the physical system. Indeed, the new view about state function reduction inspired by Zero Energy Ontology allows one to identify "self" as a quantum physical entity and makes several testable killer predictions. In ZEO, zero energy states are pairs of positive and negative energy states at the opposite light-like boundaries of CD. Self is identified as a sequence of state function reductions at a given boundary of CD - the members of state pairs at this passive boundary are not changed, and the passive boundary itself remains unaffected: obviously this corresponds to the Zeno effect.
What is new is that the members of state pairs at the opposite (active) boundary are changed, and the active boundary recedes from the passive boundary reduction by reduction, so that the size of CD increases. This gives rise to the experienced arrow of geometric time, identified as the proper time distance between the tips of CD. The first reduction to the opposite boundary is forced by Negentropy Maximization Principle to eventually occur, and means the "death" of self and re-incarnation at the opposite boundary. A time reversal of the original self is generated, and geometric time begins to flow in the opposite direction.
This suggests that one could indeed see also theory as something not outside the physical world, understood in a sufficiently general sense.
What do I mean with theory? Can I embed it in my tripartite ontology with three forms of existence: space-time surfaces as counterparts of classical states; quantum states as mathematical objects in ZEO; quantum jumps as building bricks of conscious existence giving rise to moments of consciousness and integrating to selves? This trinity is analogous to the shape-matter-mind trinity. Let us call this holy trinity just A-B-C to reduce the amount of typing.
I want the equation Theory = Reality. There would be no separate reality behind the theory. What would
I mean by this statement?
The first attempt to give content to this equation is the equation Theory = quantum state as a
mathematical object. Theory would be something restricted to compartment B in A-B-C. A quantum
state as a quantum superposition of space-time surfaces (implied by holography, which in turn is implied
by General Coordinate Invariance) would be a theory about reality, but there would be no distinct
"physical reality" behind it. As far as conscious experience is concerned, this is enough, since conscious
experience resides in the quantum jump between these mathematical objects.
One can however develop objections.
1. A quantum state in ZEO is the counterpart of only one possible quantal time evolution. The theory is
therefore very restricted and not enough in a quantum Universe in which quantum jumps re-create
this reality again and again. A real theory must be able to describe the counterparts of all possible
time evolutions: the collection of these evolutions should define a kind of unified theory. The
space of WCW spinor fields would be the next trial for a theory, and quantum jumps between
different evolutions (points of WCW by holography) make it possible to gather conscious
information about this landscape.
2. Theories also involve self-referentiality: statements about statements. The Boolean algebra of a set,
defined by its exponent set 2^S, is the basic example and corresponds to binary-valued functions on the
set. Second quantization is what gives rise to a mathematical structure analogous to statements about
statements. Many-fermion Fock states have the structure of a quantum Boolean algebra.
But this is not enough. Theorists also make statements about statements - in particular, very
strong statements about the theories of other theorists. This can generate an entire hierarchy of highly
emotional statements about statements about..., known as scientific debate. This suggests that one
should allow iterated second quantization, emerging from the notion of infinite primes obtained
by a repeated second quantization of an arithmetic quantum field theory with supersymmetry,
starting from boson and fermion states labeled by finite primes.
In a given quantization, one obtains the analogs of both Fock states and even bound states
purely number theoretically, and one can repeat this procedure again and again. This hierarchical
process corresponds, at the level of Boolean algebras, to the formation of statements about statements
about... It can also be seen as a hierarchy of logics whose order is labeled by a non-negative integer n.
Theories about theories about... This hierarchy would have many-sheeted space-time as a space-time correlate, with the hierarchy of quantizations assigned to the hierarchy of sheets.
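The "statements about statements" hierarchy can be illustrated, purely as a combinatorial analogy to iterated second quantization (the code and names here are my own illustration, not part of TGD proper), by iterating the power set: each level is the Boolean algebra of the previous one, and the cardinalities explode accordingly.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s: the Boolean algebra 2^S of 'statements' about s."""
    items = list(s)
    return frozenset(
        frozenset(c)
        for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))
    )

# Start from a single element and iterate: statements, statements about
# statements, ... One more iteration would already give 2**16 = 65536 elements.
level = frozenset({0})
sizes = []
for _ in range(3):
    level = power_set(level)
    sizes.append(len(level))
print(sizes)  # [2, 4, 16]
```

The doubling-exponential growth of the levels is the combinatorial shadow of the hierarchy of logics labeled by n.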
3. But can "theory" really reside only inside compartment B? Theory should also contain the mental
images of theoreticians and documentations of these. The documentations are represented in
terms of classical space-time as a huge generalization of written language - this forces one to include
compartment A. Also the subselves defining the mental images of theoreticians, and thus the entire self
hierarchy, must be there: therefore compartment C is also needed.
Our equation would become Epistemology = Ontology in the tripartistic sense. The theory about what
can be known would be equivalent to the theory about what can exist. This duality is self-duality
rather than duality: the latter identification bothered me originally, since the next step would be to
construct a theory for the theory, and the reader can guess the rest. Notice that this identification is not the
physicalist's view saying that consciousness is an epiphenomenon, since the ontology is monistic.
Consider now possible objections.
1. Theories are never complete: they have all kinds of failures. How can reality = theory be
incomplete? How can it have failures? This is possible: the incompleteness is in the conscious
experience about the theory, not in the theory itself. For some reason theorists have a strong tendency to
mistakenly call the former the theory. In the tripartistic view about theory, the incompleteness of the
theory would be located in sector C. Conscious experience contains a limited amount of information due
to the presence of finite measurement resolution and cognitive resolution.
Finite resolution is necessary in order to avoid drowning in a sea of irrelevant information.
Finite resolution leads to an ordering of bits (more generally, pinary digits) by the 2-adic (p-adic)
norm. The realization of finite measurement resolution is in terms of quantum criticality
involving the hierarchy of Planck constants and the hierarchy of inclusions of hyper-finite factors, to
which one can assign a hierarchy of dynamical symmetry groups represented physically.
Finite measurement resolution is therefore actually something very useful - many beautiful
things follow just by being sloppy (but only with the non-significant bits!).
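The 2-adic ordering of bits can be made concrete with a few lines of code (my own illustration): flipping bit k of an integer changes it by ±2^k, whose 2-adic norm is 2^-k, so the high bits are the 2-adically non-significant ones - the opposite of the ordering given by the real norm.

```python
def two_adic_norm(n):
    """|n|_2 = 2**-v, where 2**v is the largest power of 2 dividing n; |0|_2 = 0."""
    if n == 0:
        return 0.0
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return 2.0 ** -v

x = 0b10111  # 23
# 2-adic size of flipping bit k: |±2^k|_2 = 2^-k, shrinking as k grows.
dists = [two_adic_norm((x ^ (1 << k)) - x) for k in range(5)]
print(dists)  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Being "sloppy with the non-significant bits" thus means tolerating errors of small 2-adic norm, i.e. errors in high powers of 2.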
What Ontology = Epistemology implies is that quantum states themselves provide a
representation of the finite measurement resolution. It is not something characterizing only the
measurement but also the target of the measurement. This is a very radical change of viewpoint. It
is realized quite concretely in the representation of quantum states in terms of partonic 2-surfaces and
strings connecting them: the larger the number of partonic 2-surfaces and the number of strings
connecting them, the better the measurement resolution.
2. There is also an objection relating to self-referentiality. Quantum states provide a
representation/theory of themselves; they are their own mirror images. Doesn't this lead to a kind of
self-referential loop and infinite regress? If a self is conscious about being conscious about
something, one ends up with a similar infinite regress.
The resolution of the problem is that a self becomes conscious about the contents of
consciousness of the previous moment of consciousness. The infinite regress is replaced with
endless evolution. Zero energy states become more and more complex as the information about
previous moments of consciousness is represented quantum mechanically and classically. By
NMP, the Universe is generating negentropic entanglement giving rise to a kind of Akashic
records, and negentropy resources are increasing. Biological evolution and the evolution of sciences
are not just random thermodynamical fluctuations but are coded into the basic laws of quantum
physics and consciousness.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 3:36 AM
1 Comments:
At 6:18 PM, Anonymous said...
View that Ontology=Epistemology connects, funnily enough, with quantum approach to
liar's paradox: http://www.vub.ac.be/CLEA/aerts/publications/1999BostonLiar.pdf
08/30/2015 - http://matpitka.blogspot.com/2015/08/sharpening-of-hawkings-argument.html#comments
Sharpening of Hawking's argument
I already told about Hawking's latest argument for solving the information paradox associated with
blackholes (see this and this).
There is now a popular article explaining the intuitive picture behind Hawking's proposal. The
blackhole horizon would involve a tangential flow of light, and the particles of the infalling matter would
induce supertranslations on the pattern of this light, thus coding information about their properties into this
light. After that this light would be radiated away as an analog of Hawking radiation and carry away this
information.
The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity.
The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish. This argument has
been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why
would light rotate around it? There is no reason for this!
The answer in TGD would be obvious: for the TGD analog of a blackhole, the horizon is replaced with a
light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front
carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface
but remain at it!
The objection now is that the photons of a light front should propagate in the direction normal to it, not
parallel to it. The point is however that this light-like 3-surface is a surface at which the induced 4-metric
becomes degenerate: hence massless particles can live on it.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:01 PM
1 Comments:
At 7:06 PM, Anonymous said...
The image of non-euclidean spacetimes within euclidean embedding spaces, which
contain euclidean embedding spaces, etc. etc. comes relatively(sic!) easily. But (in terms
of ontology=epistemology and quantum liars paradox, etc.), can you imagine a
superposition or qubit of 5th and/or (non)parallel axiom, and connect it with notion of
spin(s) - and spin doctors? ;)
08/26/2015 - http://matpitka.blogspot.com/2015/08/tgd-view-about-black-holes-and-hawking.html#comments
TGD view about black holes and Hawking radiation: Part II
In this second part of the posting, I discuss the TGD view about blackholes and Hawking radiation.
There are several new elements involved, but concerning blackholes the most relevant new element is
the assignment of Euclidian space-time regions as lines of generalized Feynman diagrams, implying that
also blackhole interiors correspond to this kind of region. Negentropy Maximization Principle is also an
important element and predicts that the number theoretically defined blackhole negentropy can only
increase. The real surprise was that the temperature of the variant of Hawking radiation at the flux tubes
of the proton-Sun system is room temperature! Could the TGD variant of Hawking radiation be a key player in
Quantum Biology?
The basic ideas of TGD relevant for blackhole concept
My own basic strategy is to not assume anything not necessitated by experiment or not implied by
general theoretical assumptions - these of course represent the subjective element. The basic
assumptions/predictions of TGD relevant for the recent discussion are the following.
1. Space-times are 4-surfaces in H = M4×CP2, and ordinary space-time is replaced with many-sheeted
space-time. This solves what I call the energy problem of GRT by lifting gravitationally
broken Poincare invariance to an exact symmetry at the level of the imbedding space H.
The GRT type description is an approximation obtained by lumping together the space-time
sheets into a single region of M4, with the various fields as sums of the fields induced at the space-time
surfaces and geometrized in terms of the geometry of H.
The space-time surface has both Minkowskian and Euclidian regions. Euclidian regions are
identified in terms of what I call generalized Feynman/twistor diagrams. The 3-D boundaries
between Euclidian and Minkowskian regions have a degenerate induced 4-metric, and I call them
light-like orbits of partonic 2-surfaces or light-like wormhole throats, analogous to blackhole
horizons and actually replacing them. The interiors of blackholes are replaced with the Euclidian
regions, and every physical system is characterized by this kind of region.
Euclidian regions are identified as slightly deformed pieces of CP2 connecting two
Minkowskian space-time regions. The partonic 2-surfaces defining their boundaries are connected to
each other by magnetic flux tubes carrying monopole flux.
Wormhole contacts connect two Minkowskian space-time sheets already at the elementary
particle level, and appear in pairs by the conservation of the monopole flux. A flux tube can be
visualized as a highly flattened square traversing along and between the space-time sheets
involved. Flux tubes are accompanied by fermionic strings carrying fermion number. Fermionic
strings give rise to string world sheets carrying vanishing induced charged weak fields
(otherwise em charge would not be well-defined for the spinor modes). String theory in the space-time
surface thus becomes part of TGD. Fermions at the ends of the strings can get entangled, and the
entanglement can carry information.
2. Strong form of General Coordinate Invariance (GCI) states that the light-like orbits of partonic
2-surfaces on one hand, and the space-like 3-surfaces at the ends of causal diamonds on the other hand,
provide equivalent descriptions of physics. The outcome is that partonic 2-surfaces and string
world sheets at the ends of CDs can be regarded as the basic dynamical objects.
Strong form of holography states the correspondence between the quantum description based on
these 2-surfaces and the 4-D classical space-time description, and generalizes AdS/CFT
correspondence. Conformal invariance is extended to the huge super-symplectic symmetry
algebra acting as isometries of WCW and having a conformal structure. This explains why 10-D
space-time can be replaced with ordinary space-time and 4-D Minkowski space with
partonic 2-surfaces and string world sheets. This holography looks very much like the one
we are accustomed to!
3. Quantum criticality of the TGD Universe fixes the value(s) of the only coupling strength of TGD
(Kähler coupling strength) as an analog of critical temperature. Quantum criticality is realized in
terms of an infinite hierarchy of sub-algebras of the super-symplectic algebra acting as isometries of
WCW, the "World of Classical Worlds", consisting of 3-surfaces or, by holography, the preferred
extremals associated with them.
A given sub-algebra is isomorphic to the entire algebra, and its conformal weights are n ≥ 1
multiples of those for the entire algebra. This algebra acts as conformal gauge transformations,
whereas the generators with conformal weights m < n act as dynamical symmetries defining an
infinite hierarchy of simply laced Lie groups with rank n-1 acting as dynamical symmetry groups
defined by McKay correspondence, so that the number of degrees of freedom becomes finite.
This relates very closely to the inclusions of hyper-finite factors - WCW spinors provide a
canonical representation for them.
This hierarchy corresponds to a hierarchy of effective Planck constants heff = n×h defining an
infinite number of phases identified as dark matter. For these phases the Compton length and time
are scaled up by n, so that they give rise to macroscopic quantum phases. Superconductivity is
one example of this kind of phase - the charge carriers could be dark variants of ordinary electrons.
Dark matter appears at quantum criticality, and this serves as an experimental manner to produce
dark matter. In living matter, dark matter identified in this manner would play a central role.
Magnetic bodies carrying dark matter at their flux tubes would control ordinary matter and carry
information.
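The content of the heff = n×h scaling can be made concrete with a two-line computation (a minimal sketch; the chosen n values are arbitrary illustrations, not predictions of the theory): the Compton length simply scales by n and becomes macroscopic for large n.

```python
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
m_e = 9.109e-31    # electron mass, kg

# Ordinary (reduced) electron Compton length, about 3.9e-13 m.
lambda_e = hbar / (m_e * c)

# heff = n*h multiplies the Compton length by n: for large enough n the
# quantum scale of a "dark electron" becomes macroscopic.
for n in (1, 10**6, 10**12):
    print(f"n = {n:.0e}: scaled Compton length = {n * lambda_e:.2e} m")
```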
4. I started the work with the hierarchy of Planck constants from the proposal of Nottale stating that
it makes sense to talk about a gravitational Planck constant hgr = GMm/v0, v0/c ≤ 1 (the
interpretation of the symbols should be obvious). Nottale found that the orbits of the inner and outer
planets could be modelled reasonably well by applying Bohr quantization to planetary orbits,
with the values of the velocity parameter differing by a factor 1/5. In the TGD framework hgr would be
associated with magnetic flux tubes mediating the gravitational interaction between the Sun with mass
M and a planet or any object, say an elementary particle, with mass m. The matter at the flux tubes
would be dark, as would also the gravitons involved. The Compton length of a particle would be given by
GM/v0 and would not depend on the mass of the particle at all.
The identification hgr = heff is an additional hypothesis motivated by quantum biology, in
particular the identification of biophotons as decay products of dark photons satisfying this
condition. As a matter of fact, one can also talk about hem assignable to electromagnetic
interactions: its values are much lower. The hypothesis is that when the perturbative expansion
for a two-particle system does not converge anymore, a phase transition increasing the value of the
Planck constant occurs and guarantees that the coupling strength, proportional to 1/heff, decreases.
This is one possible interpretation for quantum criticality. TGD provides a detailed geometric
interpretation for the space-time correlates of quantum criticality.
Macroscopic gravitational bound states are not possible in TGD without the assumption that the
effective string tension associated with fermionic strings, dictated by strong form of
holography, is proportional to 1/heff². Without this assumption the bound states would have a size scale of
order Planck length, since for longer systems the string energy would be huge. heff = hgr makes astroscopic
quantum coherence unavoidable. Ordinary matter is condensed around dark matter. The counterparts of
blackholes would be systems consisting of dark matter only.
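As a back-of-the-envelope check of hgr = GMm/v0 (my own sketch; Nottale's value v0 ≈ 4.6×10^-4 in units of c is an assumed input), the gravitational Compton length GM/(v0 c²) for the Sun is indeed macroscopic and, as stated above, independent of the particle mass m, which cancels out of λ = ħgr/(mc).

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
v0 = 4.6e-4        # Nottale's velocity parameter in units of c (assumption)

# Schwarzschild radius of the Sun, about 3 km.
r_S = 2 * G * M_sun / c**2

# Gravitational Compton length L = GM/(v0 c^2): both hbar and m cancel in
# lambda = hbar_gr/(m c) with hbar_gr = G*M*m/(v0*c).
L = G * M_sun / (v0 * c**2)
print(f"r_S = {r_S/1e3:.1f} km, gravitational Compton length = {L/1e3:.0f} km")
```

With this assumed v0 the length comes out at roughly 3×10^3 km, a macroscopic scale of the order of the solar radius divided by a few hundred.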
5. Zero energy ontology (ZEO) is a central element of TGD. There are many motivations for it. For
instance, Poincare invariance in the standard sense cannot make sense, since in standard cosmology
energy is not conserved. The interpretation is that the various conserved quantum numbers are length
scale dependent notions.
Physical states are zero energy states with positive and negative energy parts assigned to the ends
of space-time surfaces at the light-like boundaries of causal diamonds (CDs). A CD is defined as
the Cartesian product of CP2 with the intersection of future and past directed lightcones of M4. CDs
form a fractal length scale hierarchy. A CD defines the region about which a single conscious entity
can have conscious information, a kind of 4-D perceptive field. There is a hierarchy of WCWs
associated with CDs. Consciously experienced physics is always in the scale of a given CD.
Zero energy states, identified as formally purely classical WCW spinor fields, replace positive
energy states and are analogous to pairs of initial and final states; the crossing symmetry of
quantum field theories gives the mathematical motivation for their introduction.
6. Quantum measurement theory can be seen as a theory of consciousness in ZEO. The conscious
observer, or self as a conscious entity, becomes part of physics. ZEO gives up the assumption
about a unique universe of classical physics and restricts it to the perceptive field defined by the CD.
In each quantum jump a re-creation of the Universe occurs. Subjectively experienced time
corresponds to state function reductions at the fixed, passive boundary of the CD, leaving invariant
both the boundary and the state at it. The state at the opposite, active boundary changes, and also its
position changes, so that the CD increases, state function reduction by state function reduction, while
nothing happens to the passive boundary. This gives rise to the experienced flow of geometric time,
since the distance between the tips of the CD increases and the size of the space-time surfaces in the
quantum superposition increases. This sequence of state function reductions is the counterpart of the
unitary time evolution of ordinary quantum theory.
Self "dies" as the first state function reduction to the opposite boundary of the CD occurs, meaning
re-incarnation of the self at it, and a reversal of the arrow of geometric time takes place: the CD size now
increases in the opposite time direction as the opposite boundary of the CD recedes to the geometric past,
reduction by reduction.
Negentropy Maximization Principle (NMP) defines the variational principle of state function
reduction. The density matrix of the subsystem is the universal observable, and the state function
reduction leads to its eigenspaces - eigenspaces, not only eigenstates as usual.
Number theoretic entropy makes sense for algebraic extensions of rationals and can be
negative, unlike ordinary entanglement entropy. NMP can therefore lead to a generation of NE
(negentropic entanglement) if the entanglement corresponds to a unitary entanglement matrix, so that
the density matrix of the final state is a higher-D unit matrix. Another possibility is that the entanglement
matrix is algebraic but its diagonalization in the algebraic extension of rationals used is not possible. This
is expected to reduce the rate for the reduction, since a phase transition increasing the size of the
extension is needed.
The weak form of NMP does not demand that the negentropy gain is maximal: this allows
the conscious entity responsible for the reduction to decide whether to increase maximally the NE
resources of the Universe or not. It can also allow a larger NE increase than otherwise. This
freedom brings in the quantum correlates of ethics, morals, and good and evil. p-Adic length scale
hypothesis and the existence of preferred p-adic primes follow from the weak form of NMP, and one
ends up naturally with adelic physics.
The analogs of blackholes in TGD
Could blackholes have any analog in TGD? What about Hawking radiation? The following
speculations are inspired by the above general vision.
1. Ordinary blackhole solutions are not appropriate in TGD. The interior space-time sheet of any
physical object is replaced with an Euclidian space-time region - also that of a blackhole, by a
perturbation argument based on the observation that if one requires that the radial component of the
blackhole metric is finite, the horizon becomes a light-like 3-surface analogous to the light-like
orbit of a partonic 2-surface, and the metric in the interior becomes Euclidian.
2. The analog of a blackhole can be seen as a limiting case of an ordinary astrophysical object, which
already has blackhole-like properties due to the presence of heff = n×h dark matter particles, which
cannot appear in the same vertices with visible matter. The ideal analog of a blackhole consists of dark
matter only, and is assumed to satisfy the condition hgr = heff already discussed. It corresponds to a region
with a radius equal to the Compton length for an arbitrary particle, R = GM/v0 = rS/2v0, where rS is the
Schwarzschild radius. A macroscopic quantum phase is in question, since the Compton radius of a
particle does not depend on its mass. The blackhole limit would correspond to v0/c → 1 and dark
matter dominance. This would give R = rS/2. The naive expectation would be R = rS (maybe a factor of
two is missing somewhere: blame me!).
3. NMP implies that information cannot be lost in the formation of a blackhole-like state but tends to
increase. Matter becomes totally dark, and the NE with the partonic surfaces of the external world is
preserved or increases. The ingoing matter does not fall to a mass point but resides at the
partonic 2-surface, which can have an arbitrarily large area. It can also have wormholes
connecting different regions of a spherical surface and in this manner increase its genus. NMP,
negentropy, and negentropic entanglement between heff = n×h dark matter systems would become the
basic notions instead of second law and entropy.
4. There is now a popular article explaining the intuitive picture behind Hawking's proposal. The
blackhole horizon would involve a tangential flow of light, and the particles of the infalling matter
would induce supertranslations on the pattern of this light, thus coding information about their
properties into this light. After that this light would be radiated away as an analog of Hawking
radiation and carry away this information.
The objection would be that in GRT the horizon is in no way special - it is just a coordinate
singularity. The curvature tensor does not diverge either, and the Einstein tensor and Ricci scalar vanish.
This argument has been used in the firewall debates to claim that nothing special should occur as
the horizon is traversed. So: why would light rotate around it? There is no reason for this! The answer in
TGD would be obvious: for the TGD analog of a blackhole, the horizon is replaced with a light-like
3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light
front carrying not only photons but all kinds of elementary particles. Particles do not fall inside
this surface but remain at it!
5. The replacement of second law with NMP leads one to ask whether a generalization of blackhole
thermodynamics makes sense in the TGD Universe. Since blackhole thermodynamics
characterizes Hawking radiation, the generalization could make sense at least if there exists an
analog of Hawking radiation. Note that also a geometric variant of the second law makes sense.
Could the analog of Hawking radiation be generated in the first state function reduction to
the opposite boundary, and perhaps be assigned with the sudden increase of the radius of the
partonic 2-surface defining the horizon? Could this burst of energy release the energy
compensating the generation of gravitational binding energy? This burst would however have a
totally different interpretation: even gamma ray bursts from quasars could be considered as
candidates for it, and the temperature would be totally different from the extremely low general
relativistic Hawking temperature of order TGR = hbar/(8π GM), which corresponds to an energy
assignable to a wavelength equal to 4π times the Schwarzschild radius. For the Sun, with Schwarzschild
radius rS = 2GM = 3 km, one has TGR = 3.2×10^-11 eV.
One can of course have fun with formulas to see whether the generalization of blackhole
thermodynamics assuming the replacement h → hgr could make sense physically. Also the replacement
rS → R, where R is the real radius of the star, will be made.
1. The blackhole temperature can be formally identified as surface gravity:
T = (hgr/hbar) × [GM/2πR²] = (hgr/h) × (rS²/R²) × TGR = [1/(4πv0)] × (rS²/R²) (in units of the particle mass m).
For the Sun, with radius R = 6.96×10^5 km, one has T/m = 3.2×10^-11, giving T = 3×10^-2 eV for the proton.
This is 9 orders of magnitude higher than the ordinary Hawking temperature. Amazingly, this temperature
equals room temperature! Is this a mere accident? If one takes seriously the TGD-inspired
quantum biology in which quantum gravity plays a key role (see this), this does not seem to be
the case. Note that for the electron the temperature would correspond to an energy 1.5×10^-5 eV, which
corresponds to a 4.5 GHz frequency for the ordinary Planck constant.
It must however be made clear that the value of v0 for dark matter could differ from that
deduced by assuming that the entire gravitational mass is dark. For M → MD = kM and v0 → k^(1/2)v0 the
orbital radii remain unchanged, but the velocity of the dark matter object at the orbit scales to k^(1/2)v0.
This kind of scaling is suggested by the fact that the value of hgr seems to be too large as
compared with the identification of biophotons as decay results of dark photons with heff = hgr (some
arguments suggest the value k ≈ 2×10^-4).
Note that for the radius R = rS/(2v0π) the thermal energy exceeds the rest mass of the particle. For
neutron stars this limit might be reached.
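Taking the quoted ratio T/m = 3.2×10^-11 at face value, the room temperature claim reduces to elementary arithmetic (a sketch only; the small discrepancies with the electron figures quoted above presumably reflect rounding):

```python
k_B = 8.617e-5      # Boltzmann constant, eV/K
h = 4.136e-15       # Planck constant, eV*s
m_p = 938.272e6     # proton rest energy, eV
m_e = 0.511e6       # electron rest energy, eV
ratio = 3.2e-11     # T/m for the Sun, quoted in the text above

T_p = ratio * m_p       # ~3e-2 eV for the proton
T_room = k_B * 300.0    # ~2.6e-2 eV: thermal energy at room temperature
print(f"T(proton) = {T_p:.2e} eV vs k_B * 300 K = {T_room:.2e} eV")

# Electron: the same ratio gives an energy of order 1e-5 eV, i.e. a few GHz
# when converted to a frequency with the ordinary Planck constant.
T_el = ratio * m_e
print(f"T(electron) = {T_el:.2e} eV ~ {T_el / h / 1e9:.1f} GHz")
```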
2. The blackhole entropy
SGR = A/(4 hbar G) = 4πGM²/hbar = 4π (M²/MPl²)
would be replaced with the negentropy for dark matter, making sense also for systems containing
both dark and ordinary matter. The negentropy N(m) associated with a flux tube of a given type
would be a fraction h/hgr of the total area of the horizon using the Planck area as a unit:
N(m) = (h/hgr) × A/(4 hbar G) = (h/hgr) × (R²/rS²) × SGR = v0 × (M/m) × (R²/rS²).
The dependence on m makes sense, since a given flux tube type characterized by the mass m
determining the corresponding value of hgr has its own negentropy, and the total negentropy is the
sum over the particle species. The negentropy of the Sun is numerically much smaller than the
corresponding blackhole entropy.
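For comparison, a rough numerical sketch (again with Nottale's v0 ≈ 4.6×10^-4 as an assumed input) of the ordinary Bekenstein-Hawking entropy of a solar-mass blackhole against the proton flux-tube negentropy estimate above does confirm that the latter is many orders of magnitude smaller:

```python
import math

M_sun = 1.989e30   # solar mass, kg
M_Pl = 2.176e-8    # Planck mass, kg
m_p = 1.673e-27    # proton mass, kg
v0 = 4.6e-4        # assumed velocity parameter, units of c
R = 6.96e8         # solar radius, m
r_S = 2.95e3       # solar Schwarzschild radius, m

S_GR = 4 * math.pi * (M_sun / M_Pl) ** 2     # ~1e77, the usual BH entropy
N_p = v0 * (M_sun / m_p) * (R / r_S) ** 2    # ~3e64, flux-tube negentropy
print(f"S_GR ~ {S_GR:.1e}, N(proton) ~ {N_p:.1e}")
```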
3. The horizon area is proportional to (GM/v0)² ∝ heff² and should increase in discrete jumps by
integer scalings, being proportional to n².
How does the analog of a blackhole evolve in time? The evolution consists of sequences of repeated
state function reductions at the passive boundary of the CD, followed by the first reduction to the opposite
boundary of the CD, followed by a similar sequence. These sequences are analogs of unitary time
evolutions. This defines the analog of a blackhole state as a repeatedly re-incarnating conscious entity
having a CD whose size gradually increases. During a given sequence of state function reductions the
passive boundary has constant size. About the active boundary one cannot say this, since it corresponds to a
superposition of quantum states.
The reduction sequences consist of life cycles at a fixed boundary, and the size of the blackhole-like state
- as of any state - is expected to increase in discrete steps if it participates in cosmic expansion in an average
sense. This requires that the mass of the blackhole-like object gradually increases. The interpretation is that
ordinary matter gradually transforms to dark matter and increases the dark mass M = v0R/G.
Cosmic expansion is not observed for the sizes of individual astrophysical objects, which only co-move.
The solution of the paradox is that they suddenly increase their size in state function reductions. This
hypothesis allows one to realize the Expanding Earth hypothesis in the TGD framework (see this). Number
theoretically preferred scalings of the blackhole radius come as powers of 2, and this would be the scaling
associated with the Expanding Earth hypothesis.
See the chapter "Criticality and dark matter" or the article TGD view about black holes and Hawking
radiation. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:48 PM
08/26/2015 - http://matpitka.blogspot.com/2015/08/tgd-view-about-blackholes-and-hawking.html#comments
TGD view about blackholes and Hawking radiation: part I
The most recent revelation of Hawking came at the Hawking radiation conference held at the KTH Royal
Institute of Technology in Stockholm. The title of the posting of Bee telling about what might have been
revealed is "Hawking proposes new idea for how information might escape from black holes". Also
Lubos has a - rather aggressive - blog post about the talk. A collaboration of Hawking, Andrew
Strominger, and Malcom Perry is behind the claim, and the work should be published within a few months.
The first part of the posting gives a critical discussion of the existing approach to blackholes and
Hawking radiation. The intention is to demonstrate that a pseudo problem, following from the failure
of General Relativity below the blackhole horizon, is in question.
In the second part of the posting I will discuss the TGD view about blackholes and Hawking radiation.
There are several new elements involved, but concerning blackholes the most relevant new element is
the assignment of Euclidian space-time regions as lines of generalized Feynman diagrams, implying that
also blackhole interiors correspond to this kind of region. Negentropy Maximization Principle is also an
important element and predicts that the number theoretically defined blackhole negentropy can only
increase. The real surprise was that the temperature of the variant of Hawking radiation at the flux tubes
of the proton-Sun system is room temperature! Could the TGD variant of Hawking radiation be a key player in
Quantum Biology?
Is information lost or not in blackhole collapse?
The basic problem is that classically the collapse to a blackhole seems to destroy all information about
the matter collapsing into it. The outcome is just an infinitely dense mass point. There is also a
theorem of classical GRT stating that a blackhole has no hair: a blackhole is characterized by only a few
conserved charges.
Hawking has predicted that a blackhole loses its mass by generating radiation which looks thermal.
As the blackhole radiates its mass away, all information about the material which entered the
blackhole seems to be lost. If one believes in standard quantum theory and unitary evolution preserving
the information, and also forgets the standard quantum theory's prediction that state function reductions
destroy information, one has a problem. Does the information really disappear? Or is the GRT
description incapable of coping with the situation? Could information find a new representation?
Superstring models and AdS/CFT correspondence have inspired the proposal that a hologram results
at the horizon, and that this hologram somehow catches the information by defining the hair of the blackhole.
Since the radius of the horizon is proportional to the mass of the blackhole, one can however wonder what
happens to this information as the radius shrinks to zero when all the mass has been Hawking radiated away.
What Hawking suggests is that a new kind of symmetry known as super-translations - a notion
originally introduced by Bondi and Metzner - could somehow save the situation. Andrew Strominger
has recently discussed the notion. The information would be "stored to super-translations".
Unfortunately, this statement says nothing to me, nor did it to Bee or the New Scientist reporter. The
idea however seems to be that the information carried by Hawking radiation emanating from the
black hole interior would be caught by the hologram defined by the black hole horizon.
Super-translation symmetry acts on the surface of a sphere of infinite radius in asymptotically flat
space-times, which look like empty Minkowski space in very distant regions. The action would consist of
translations along the sphere plus Poincaré transformations.
What comes to mind in the TGD framework is the group of conformal transformations of the boundary of the
4-D light-cone, which act as scalings of the radius of the sphere and conformal transformations of the sphere.
Translations however translate the tip of the light-cone, and Lorentz transformations transform the sphere
to an ellipsoid, so that one should restrict to the rotation subgroup of the Lorentz group. Besides this, TGD
allows a huge group of symplectic transformations of δCD×CP2 acting as isometries of WCW and
having the structure of a conformal algebra with generators labelled by conformal weights.
Sharpening of Hawking's argument
There is now a popular article explaining the intuitive picture behind Hawking's proposal. The
black hole horizon would involve a tangential flow of light, and particles of the infalling matter would
induce super-translations on the pattern of this light, thus coding information about their properties into
it. Afterwards this light would be radiated away as an analog of Hawking radiation and would carry out this
information.
The objection is that in GRT the horizon is in no way special: it is just a coordinate singularity.
The curvature tensor does not diverge there, and the Einstein tensor and Ricci scalar vanish. This argument has
been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why
would light rotate around it? I see no reason for this! The answer in the TGD framework would be obvious:
for the TGD analog of a black hole, the horizon is replaced by a light-like 3-surface at which the induced metric
becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of
elementary particles. Particles do not fall inside this surface but remain at it!
What are the problems?
My fate is to be an aggressive dissident listened to by no one, and I find it natural to continue in the
role of angry old man. Be cautious: I am arrogant, I can bite, and my bite is poisonous!
1. With all due respect to the Big Guys, to me the problem looks like a pseudo problem caused
basically by the breakdown of classical GRT. Irrespective of whether Hawking radiation is
generated, the information about matter (apart from mass and some charges) is lost if the matter
indeed collapses to a single infinitely dense point. This is of course very unrealistic, and the
question should be: how should we proceed beyond GRT?
The black hole is simply too strong an idealization, and it is no wonder that Hawking's calculation
using the black hole metric as a background gives rise to blackbody radiation. One might hope that
Hawking radiation is a genuine physical phenomenon and might somehow carry the information
by not being genuinely thermal. Here a theory of quantum gravitation might help. But
we do not have one!
2. What do we know about black holes? We know that there are objects which can be well
described by the exterior Schwarzschild metric. Galactic centers are regarded as candidates for
giant black holes. Binary systems in which one member is invisible are candidates for stellar
black holes. One can however ask whether these candidates actually consist of dark matter rather
than being black holes. Unfortunately, we do not understand what dark matter is!
3. Hawking radiation is extremely weak, and there is no experimental evidence pro or con. Its
existence assumes the existence of black holes, which presumably represent a failure of
classical GRT. Therefore we might be seeing a lot of trouble and heated debates about
something which does not exist at all! This includes black holes, Hawking radiation, and
various problems such as the firewall paradox.
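As a quantitative footnote to point 3, the standard (non-TGD) Hawking temperature formula shows just how weak the radiation is. The sketch below is plain textbook physics, not part of the TGD argument:

```python
# A plain textbook estimate (Hawking 1974), not TGD-specific: the Hawking
# temperature T = hbar*c^3 / (8*pi*G*M*k_B) for a solar-mass black hole,
# illustrating why the radiation is far too weak to observe.
import math

HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
K_B = 1.380649e-23       # J/K
M_SUN = 1.98892e30       # kg

def hawking_temperature(mass_kg):
    """Blackbody temperature of Hawking radiation for a Schwarzschild black hole."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

T = hawking_temperature(M_SUN)
print(f"T_H for a solar-mass black hole: {T:.2e} K")  # ~6e-8 K, far below the 2.7 K CMB
```

Since T scales as 1/M, even much lighter stellar black holes stay many orders of magnitude colder than the cosmic microwave background, so they absorb more than they emit.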
There are also profound theoretical problems.
1. Contrary to the intensive media hype of the last three decades, we still do not have a generally
accepted theory of quantum gravity. Superstring models and M-theory failed to predict anything
at the fundamental level and just postulate an effective quantum field theory limit, which assumes the
analog of GRT at the level of the 10-D or 11-D target space to define spontaneous
compactification as a solution of this GRT-type theory. Not much is gained.
The AdS/CFT correspondence is an attempt to do something in the absence of this kind of theory, but it
involves 10- or 11-D black holes and does not help much. Reality looks much simpler to an
innocent non-academic outsider like me. Effective field theorizing allows intellectual laziness,
and many problems of present-day physics will probably be seen in the future as caused by this
lazy approach, which avoids attempts to build explicit bridges between physics at different scales.
Something very similar has occurred in hadron physics and nuclear physics, and one has a kind of
Augean stable to clean up before one can proceed.
2. A mathematically well-defined notion of information is lacking. We can talk about
thermodynamical entropy, a single-particle observable, and also about entanglement entropy, basically a 2-particle observable. We do not have a genuine notion of information, and the second law
predicts that the best one can achieve is no information at all!
Could it be that our view of information as a single-particle characteristic is wrong? Could
information be associated with entanglement and be a 2-particle characteristic? Could information
reside in the relationship of an object with the external world, in the communication line? Not inside the
black hole, not at the horizon, but in the entanglement of the black hole with the external world.
3. We do not have a theory of quantum measurement. The deterministic unitary time evolution of the
Schrödinger equation and the non-deterministic state function reduction are in blatant conflict.
The Copenhagen interpretation escapes the problem by saying that no objective reality or realities exist.
An easy trick once again! A closely related Pandora's box is that experienced time and geometric
time are very different, but we pretend that this is not the case.
The only way out is to make the observer part of quantum physics: this requires nothing less than a
quantum theory of consciousness. But the gurus of theoretical physics have shown no interest in
consciousness. It is much easier and much more impressive to apply mechanical algorithms to
produce complex formulas. If one takes consciousness seriously, one ends up with the question
about the variational principle of consciousness. Yes, your guess was correct: Negentropy
Maximization Principle! Conscious experience tends to maximize conscious information gain.
But how is information represented?
In the second part I will discuss the TGD view of black holes and Hawking radiation. See the chapter
"Criticality and dark matter" or the article TGD view about black holes and Hawking radiation. For a
summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:47 PM
08/25/2015 - http://matpitka.blogspot.com/2015/08/field-equations-as-conservationlaws.html#comments
Field equations as conservation laws, Frobenius integrability conditions, and a connection with
quaternion analyticity
The following represents a qualitative picture of the field equations of TGD, trying to emphasize the
physical aspects. What is new is the discussion of the possibility that Frobenius integrability conditions
are satisfied and correspond to quaternion analyticity.
1. Kähler action is the Maxwell action for the induced Kähler form and metric, expressible in terms of
imbedding space coordinates and their gradients. The field equations reduce to those for the imbedding
space coordinates, which define the primary dynamical variables. By GCI (general coordinate invariance)
only four of them are independent dynamical variables analogous to classical fields.
2. The solution of the field equations can be interpreted as a section of a fiber bundle. In TGD the fiber
bundle is just the Cartesian product X4×CD×CP2 of the space-time surface X4 and the causal diamond
CD×CP2. CD is the intersection of future and past directed light-cones having two light-like
boundaries, which are cone-like pieces of the light-cone boundary δM4+/-×CP2. The space-time surface
serves as the base space and CD×CP2 as the fiber. The bundle projection Π is the projection to the factor X4.
A section corresponds to the map x→hk(x) giving imbedding space coordinates as functions of
space-time coordinates. The bundle structure is now trivial and rather formal.
By GCI, one could also take four suitably chosen coordinates of CD×CP2 as space-time
coordinates and identify CD×CP2 as the total space of the fiber bundle. The choice of the base space depends on
the character of the space-time surface. For instance CD, CP2, or M2×S2 (S2 a geodesic sphere of
CP2) could define the base space. The bundle projection would be the projection from CD×CP2 to
the base space. Now the fiber bundle structure can be non-trivial and makes sense only in some
space-time region with the same base space.
3. The field equations derived from the Kähler action must be satisfied. Even more: one must have a
preferred extremal of the Kähler action. One poses boundary conditions at the 3-D ends of space-time surfaces and at the light-like boundaries of CD×CP2.
One can fix the values of the conserved Noether charges at the ends of CD (the total charges are the
same) and require that the Noether charges associated with a sub-algebra of the super-symplectic
algebra, isomorphic to it and having conformal weights coming as n-multiples of those of the entire
algebra, vanish. This would realize the effective 2-dimensionality required by SH (strong form of holography). One must
pose boundary conditions also at the light-like partonic orbits. The so-called weak form of electric-magnetic duality is at least part of these boundary conditions.
It seems that one must restrict the conformal weights of the entire algebra to be non-negative,
r ≥ 0, and those of the sub-algebra to be positive. The condition that also the commutators of
sub-algebra generators with those of the entire algebra give rise to vanishing Noether charges
implies that all algebra generators with conformal weight m ≥ n vanish, so that the dynamical algebra
becomes effectively finite-dimensional. This condition generalizes to the action of super-symplectic algebra generators on physical states.
The M4 time coordinate cannot have a vanishing time derivative dm0/dt, so that four-momentum is
non-vanishing for non-vacuum extremals. For CP2 coordinates the time derivatives dsk/dt can vanish,
and for space-like Minkowski coordinates dmi/dt can be assumed to be non-vanishing if the M4
projection is 4-dimensional. For CP2 coordinates dsk/dt = 0 implies the vanishing of the electric parts
of the induced gauge fields. The non-vacuum extremals with the largest conformal gauge symmetry
(very small n) would correspond to cosmic string solutions for which the induced gauge fields have
only magnetic parts. As n increases, electric parts are also generated. The situation becomes
increasingly dynamical as the conformal gauge symmetry is reduced and the dynamical conformal
symmetry increases.
4. The field equations involve besides the imbedding space coordinates hk also their partial derivatives
up to second order. The induced Kähler form and metric involve the first partial derivatives ∂αhk, and the
second fundamental form appearing in the field equations involves the second-order partial derivatives
∂α∂βhk.
The field equations are hydrodynamical; in other words, they represent conservation laws for the
Noether currents associated with the isometries of M4×CP2. By GCI there are only 4
independent dynamical variables, so that the conservation of m ≤ 4 isometry currents, chosen to be
independent, is enough. The dimension m of the tangent space spanned by the conserved
currents can be smaller than 4: for vacuum extremals one has m = 0 and for massless extremals
(MEs) m = 1! The conservation of these currents can also be interpreted as the existence of m ≤ 4
closed 3-forms defined by the duals of these currents.
5. The hydrodynamical picture suggests that in some situations it might be possible to assign global
flow lines to the conserved currents. They would define m ≤ 4 global
coordinates for some subset of the conserved currents (4+8 for four-momentum and color quantum
numbers). Without additional conditions the individual flow lines are well-defined but do not
organize into a coherent hydrodynamic flow; they are more like orbits of randomly moving gas
particles. To achieve a global flow, the flow lines must satisfy the condition ∂φA/∂xμ = kABJBμ, or
dφA = kABJB, so that one can speak of a 3-D family of flow lines parallel to kABJB at each point. I
have considered this kind of possibility in detail earlier, but the treatment was not as general as in
the present case.
Frobenius integrability conditions follow from the condition d2φA = dkAB∧JB + kABdJB = 0
and imply that dJB belongs to the ideal of the exterior algebra generated by the JA appearing in kABJB. If the
Frobenius conditions are satisfied, the field equations can define coordinates for which the
coordinate lines are along the basis elements of a sub-space of the at most 4-D space defined by the
conserved currents. Of course, the possibility that for preferred extremals there exist m ≤ 4
conserved currents satisfying the integrability conditions is only a conjecture.
It is quite possible to have m < 4. For instance, for vacuum extremals the currents vanish
identically. For MEs the various currents are parallel and light-like, so that only a single light-like
coordinate can be defined globally from the flow lines. For cosmic strings (Cartesian products of
minimal surfaces X2 in M4 and geodesic spheres S2 in CP2) 4 independent currents exist. This is
expected to be true also for the deformations of cosmic strings defining magnetic flux tubes.
6. Cauchy-Riemann conditions in the 2-D situation represent a special case of the Frobenius conditions.
Now the gradients of the real and imaginary parts of a complex function w = w(z) = u+iv define two
conserved currents by the Laplace equations. In TGD the isometry currents would be gradients apart
from scalar function multipliers, and one would have a generalization of the C-R conditions. In
[allb/prefextremals, twistorstory] I have considered the possibility that a generalization of the
Cauchy-Riemann-Fueter conditions could define quaternion analyticity (having many non-equivalent variants) as a defining property of preferred extremals. The integrability conditions
for the isometry currents would be the natural physical formulation of the CRF conditions. Different
variants of the CRF conditions would correspond to a varying number of independent conserved
isometry currents.
7. This picture allows one to consider a generalization of the notion of a solution of the field equations to that
of an integral manifold. If the number of independent isometry currents is smaller than 4 (possibly only
locally) and the integrability conditions hold true, lower-dimensional sub-manifolds of the space-time surface define integral manifolds as a kind of lower-dimensional effective solutions.
Genuinely lower-dimensional solutions would of course have vanishing metric determinant g41/2 and vanishing
Kähler action.
String world sheets can be regarded as 2-D integral surfaces. Charged (possibly all) weak
boson gauge fields vanish at them, since otherwise the electromagnetic charge for spinors would
not be well-defined. These conditions force string world sheets to be 2-D in the generic case. In
special cases a 4-D space-time region as a whole can satisfy these conditions. Well-definedness of the
Kähler-Dirac equation demands that the isometry currents of the Kähler action flow along these
string world sheets, so that one has an integral manifold. The integrability conditions would allow
2 < m ≤ 4 integrable flows outside the string world sheets, and at the string world sheets one or two
isometry currents would vanish, so that the flows would give rise to 2-D independent sub-flows.
8. The method of characteristics is used to solve hyperbolic partial differential equations by
reducing them to ordinary differential equations. The (say 4-D) surface representing the solution
in field space has a foliation by 1-D characteristics. The method is especially simple for
linear equations but can work also in the non-linear case. For instance, the expansion of a wave
front can be described in terms of characteristics representing light rays. It can happen that two
characteristics intersect and a singularity results. This gives rise to physical phenomena like
caustics and shock waves.
In the TGD framework, the flow lines of a given isometry current in the case of an integrable
flow would be analogous to characteristics, and one could also have purely geometric
counterparts of shock waves and caustics. The light-like orbits of partonic 2-surfaces, at which the
signature of the induced metric changes from Minkowskian to Euclidian, might be seen as an
example of the analog of a wave front in the induced geometry. These surfaces serve as carriers of
fermion lines in generalized Feynman diagrams. Could one see the particle vertices, at which the
4-D space-time surfaces intersect along their ends, as analogs of intersections of characteristics, a
kind of caustics? At these 3-surfaces the isometry currents should be continuous although the
space-time surface has an "edge".
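The 2-D special case mentioned in point 6 can be illustrated numerically: for an analytic function w(z) = u + iv the Cauchy-Riemann conditions hold and the gradients of u and v define two divergence-free ("conserved") currents, since u and v satisfy Laplace's equation. The sketch below uses only standard complex analysis (the choice w = exp(z) and the test point are arbitrary) and involves no TGD-specific assumptions:

```python
# Numerical check of the 2-D special case of Frobenius integrability:
# for analytic w(z) = u + i v, the Cauchy-Riemann conditions u_x = v_y,
# u_y = -v_x hold, and grad(u), grad(v) are divergence-free currents
# (u and v are harmonic). Example function: w(z) = exp(z).
import cmath

H = 1e-4  # finite-difference step

def u(x, y): return cmath.exp(complex(x, y)).real
def v(x, y): return cmath.exp(complex(x, y)).imag

def grad(f, x, y):
    """Central-difference gradient (f_x, f_y)."""
    return ((f(x + H, y) - f(x - H, y)) / (2 * H),
            (f(x, y + H) - f(x, y - H)) / (2 * H))

def divergence(f, x, y):
    """div grad f, i.e. the Laplacian of f, via second differences."""
    return ((f(x + H, y) - 2 * f(x, y) + f(x - H, y)) / H**2 +
            (f(x, y + H) - 2 * f(x, y) + f(x, y - H)) / H**2)

x0, y0 = 0.3, -0.7
ux, uy = grad(u, x0, y0)
vx, vy = grad(v, x0, y0)

assert abs(ux - vy) < 1e-6 and abs(uy + vx) < 1e-6  # Cauchy-Riemann
assert abs(divergence(u, x0, y0)) < 1e-4            # current grad(u) is conserved
assert abs(divergence(v, x0, y0)) < 1e-4            # current grad(v) is conserved
print("Cauchy-Riemann and conservation checks passed")
```

The quaternionic (CRF) generalization replaces the single complex structure by a quaternionic one, which is why several non-equivalent variants exist.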
For details see the chapter Recent View about Kähler Geometry and Spin Structure of "World of
Classical Worlds" of "Quantum physics as infinite-dimensional geometry" or the article Could One
Define Dynamical Homotopy Groups in WCW?. For a summary of earlier postings see Links to the
latest progress in TGD.
posted by Matti Pitkanen @ 1:26 AM
08/22/2015 - http://matpitka.blogspot.com/2015/08/does-color-deconfinement-reallyoccur.html#comments
Does color deconfinement really occur?
Bee had a nice blog posting related to the origin of hadron masses and the phase transition from
color confinement to quark-gluon plasma, which involves also the restoration of chiral symmetry in the sigma
model description. In the ideal situation the outcome should be a blackbody spectrum with no
correlations between the radiated particles.
This is however not what is observed. Some kind of transition occurs and produces a phase which has
a much lower viscosity than expected for quark-gluon plasma. The transition also occurs much more
smoothly than expected. And there are strong correlations between oppositely charged particles: charge
separation occurs. The simplest characterization of these events would be in terms of decaying strings
emitting particles of opposite charge from their ends. Conventional models do not predict anything like
this.
Some background
The masses of current quarks are very small - something like 5-20 MeV for u and d. These masses
explain only a minor fraction of the mass of the proton. The old-fashioned quark model assumed that quark
masses are much bigger: the mass scale was roughly one third of the nucleon mass. These quarks were
called constituent quarks, and - if they are real - one can wonder how they relate to current quarks.
The sigma model provides a phenomenological description of the massivation of hadrons in the confined
phase. The model is highly analogous to the Higgs model. The fields are meson fields and baryon fields.
Now the neutral pion and the sigma meson develop vacuum expectation values, and this implies breaking of
chiral symmetry so that nucleons become massive. The existence of the sigma meson is still questionable.
In a transition to quark-gluon plasma one expects that mesons and protons disappear totally. The sigma
model however suggests that the pion and proton do not disappear but become massless. Hence the two
descriptions might be inconsistent.
The authors of the article assume that the pion continues to exist as a massless particle in the transition
to quark-gluon plasma. The presence of massless pions would yield a small effect at the low energies at
which massless pions have a stronger interaction with the magnetic field than massive ones. The existence of a
magnetic wave coherent over a rather large length scale is an additional assumption of the model: it
corresponds to the assumption of large heff in the TGD framework, where the color magnetic fields
associated with M89 meson flux tubes replace the magnetic wave.
In the TGD framework, the sigma model description is at best phenomenological, as is the Higgs
mechanism. p-Adic thermodynamics replaces the Higgs mechanism, and the massivation of hadrons
involves color magnetic flux tubes connecting valence quarks to color singlets. The flux tubes have a quark
and an antiquark at their ends and are meson-like in this sense. Color magnetic energy contributes most of
the mass of the hadron. A constituent quark would correspond to a valence quark, identified as a current quark
plus the associated flux tube, and its mass would be in good approximation the mass of the color magnetic
flux tube.
There is also an analogy with the sigma model provided by twistorialization in the TGD sense. One can
assign to a hadron (actually to any particle) a light-like 8-momentum vector in the tangent space M8=M4×E4 of
M4×CP2 defining 8-momentum space. Masslessness in this 8-D sense implies that the ordinary mass squared corresponds to a
constant E4 mass, which translates to a localization to a 3-sphere in E4. This localization is analogous to the
symmetry breaking generating a constant value of the π0 field proportional to its mass in the sigma model.
An attempt to understand charge asymmetries in terms of the chiral magnetic wave and charge
separation
One of the models trying to explain the charge asymmetries is in terms of what is called the chiral
magnetic wave effect and the charge separation effect related to it. The experiment discussed by Bee
attempts to test this model.
1. The so-called chiral magnetic wave effect and charge separation effects are proposed as an
explanation for the linear dependence of the asymmetry of the so-called elliptic flow on charge
asymmetry. Conventional models explain neither the charge separation nor this dependence.
The chiral magnetic wave would be a coherent magnetic field generated by the colliding nuclei over a
relatively long scale, even the length scale of the nuclei.
2. Charged pions interact with this magnetic field. The interaction energy is roughly h×eB/E,
where E is the energy of the pion. In the phase with broken chiral symmetry the pion mass is non-vanishing, and at low energies one has E=m in good approximation. In the chirally symmetric phase the
pion is massless, and the magnetic interaction energy becomes large at low energies. This could serve
as a signature distinguishing between the chirally symmetric and asymmetric phases.
3. The experimenters try to detect this difference and report slight evidence for it. This is a change in
the charge asymmetry of the so-called elliptic flow for positively and negatively charged pions,
interpreted in terms of a charge separation fluctuation caused by the presence of a strong magnetic
field assumed to lead to a separation of chiral charges (left/right handedness). The average
velocities of the pions are different, and the average velocity depends on the azimuthal angle in the collision
plane: a second harmonic is in question (say sin(2φ)).
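The energy dependence claimed in point 2 can be made concrete with a toy computation. The field strength value below is purely illustrative (the text quotes none), so this is a sketch of the E-dependence only:

```python
# Toy illustration: the magnetic coupling scale ~ e*B/E stays bounded for a
# massive pion (E >= m) but grows without limit at small momenta if the pion
# is massless. The value of e*B is an assumption chosen for illustration.
import math

EB = 0.01       # assumed value of e*B in GeV^2 (illustrative only)
M_PION = 0.135  # GeV, ordinary pion mass scale

def interaction_energy(p, m):
    """Magnetic interaction scale e*B/E with E = sqrt(p^2 + m^2), natural units."""
    return EB / math.sqrt(p ** 2 + m ** 2)

for p in (0.5, 0.1, 0.01):
    print(f"p = {p:5.2f} GeV: massive {interaction_energy(p, M_PION):.3f} GeV, "
          f"massless {interaction_energy(p, 0.0):.3f} GeV")
```

At p = 0.01 GeV the massless coupling already exceeds the massive one by an order of magnitude, which is the low-energy signature the experimenters look for.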
In the TGD framework, the explanation of the unexpected behavior of the would-be quark-gluon plasma is in
terms of M89 hadron physics.
1. A phase transition indeed occurs, but it transforms the quarks of the
ordinary M107 hadron physics into those of M89 hadron physics. These are not free quarks but are
confined to form M89 mesons. The M89 pion would have a mass of about 135 GeV. A naive scaling gives
half of this mass, but it seems unfeasible that a pion-like state with this mass could have escaped
attention - unless of course the unexpected behavior of the quark-gluon plasma demonstrates its
existence! This should be easy for a professional to check. Thus the phase transition would yield a
scaled-up hadron physics with a mass scale a factor 512 higher than for ordinary hadron
physics.
2. A stringy description applies to the decay of the flux tubes assignable to the M89 mesons into ordinary
hadrons. This explains the charge separation effect and the deviation from the thermal spectrum.
3. In the experiments discussed in the article, the cm energy of the nucleon-nucleon system associated
with the colliding nuclei varied between 27-200 GeV, so that the creation of even an on-mass-shell
M89 pion in a single collision of this kind is possible at the highest energies. If several nucleons
participate simultaneously, even many-pion states are possible at the upper end of the interval.
4. These hadrons must have large heff=n×h, since the collision time, roughly 5 femtoseconds, is by a
factor of about 500 (not far from 512!) longer than the time scale associated with their masses, if the
M89 pion mass is obtained from the ordinary pion mass of 135 MeV with the scaling factor 2×512
instead of 512, in principle allowed by the p-adic length scale hypothesis. There are some
indications for a meson with this mass. The hierarchy of Planck constants allows at quantum
criticality to zoom up the size of the much more massive M89 hadrons to nuclear size! The phase
transition to dark M89 hadron physics could take place in the scale of the nucleus, producing several
M89 pions decaying into ordinary hadrons.
5. The large value of heff would mean quantum coherence in the scale of the nucleus, explaining why
the viscosity was much smaller than expected for quark-gluon plasma. The expected
phase transition was also much smoother than expected. Since nuclei are many-nucleon systems
and the Compton wavelength of the M89 pion would be of the order of the nucleus size, one expects that the
phase transition can take place in a wide collision energy range. At lower energies several
nucleon pairs could provide the energy to generate an M89 pion. At higher energies even a single nucleon
pair could provide the energy. The number of M89 pions should therefore increase with the nucleon-nucleon collision energy and induce the increase of the charge asymmetry and of the strength of the
charge asymmetry of the elliptic flow.
6. Hydrodynamical behavior is essential in order to have low viscosity classically. Even more, the
hydrodynamics had better be that of an ideal liquid. In the TGD framework the field equations
have a hydrodynamic character as conservation laws for the currents associated with the various
isometries of the imbedding space. The isometry currents define flow lines. Without further
conditions the flow lines do not however integrate to a coherent flow: one has something
analogous to a gas phase rather than a liquid, so that the mixing induced by the flow cannot be
described by a smooth map.
To achieve this, a given isometry flow must make sense globally - that is, define the coordinate
lines of a globally defined coordinate ("time" along the flow lines). In this case one can assign to the
flow a continuous phase factor as an order parameter varying along the flow lines. Superconductivity is an example of this. The so-called Frobenius conditions guarantee this; at least the
preferred extremals could have this complete integrability property, making TGD an integrable
theory (see the appendix of the article at my homepage). In the present case, the dark flux tubes
with the size scale of a nucleus would carry an ideal hydrodynamical flow with very low viscosity.
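The mass scalings quoted in points 1 and 4 of the list above can be checked with a few lines of arithmetic, assuming the p-adic mass scale ratio between M107 and M89 hadron physics is 2^((107-89)/2) = 512:

```python
# Arithmetic check of the quoted scalings, assuming the p-adic mass scale
# ratio between M_107 (ordinary hadrons) and M_89 hadron physics is
# 2^((107-89)/2) = 512, applied to the ordinary neutral pion mass.
M_PI = 0.135  # GeV, ordinary pi0 mass

scale = 2 ** ((107 - 89) // 2)   # = 512
naive = M_PI * scale             # the "naive scaling" giving about half of 135 GeV
doubled = M_PI * 2 * scale       # the factor 2*512 allowed by the hypothesis

print(scale, round(naive, 2), round(doubled, 2))  # 512 69.12 138.24
```

The doubled factor reproduces the quoted M89 pion mass of about 135 GeV, while the naive factor gives roughly half of it, consistent with the text.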
See the chapter New Particle Physics Predicted by TGD: Part I or the article Does color deconfinement
really occur? For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 2:55 AM
08/19/2015 - http://matpitka.blogspot.com/2015/08/could-one-define-dynamicalhomotopy.html#comments
Could one define dynamical homotopy groups in WCW?
I learned that Agostino Prastaro has done highly interesting work with partial differential equations,
including those assignable to geometric variational principles such as the Kähler action in TGD. I do not
understand the mathematical details, but the key idea is a simple and elegant generalization of Thom's
cobordism theory, and it is difficult to avoid the idea that the application of Prastaro's ideas might provide
insights about the preferred extremals, whose identification is now on a rather firm basis.
One could also consider a definition of what one might call dynamical homotopy groups as genuine
characteristics of WCW topology. The first prediction is that the values of conserved classical
Noether charges correspond to disjoint components of WCW. Could the natural topology in the
parameter space of Noether charges (zero modes of the WCW metric) be p-adic and realize adelic physics at
the level of WCW? An analogous conjecture was made on the basis of the spin glass analogy a long time ago.
A second surprise is that only the 6 lowest dynamical homotopy/homology groups of WCW would be
non-trivial. The Kähler structure of WCW suggests that only Π0, Π2, and Π4 are non-trivial.
The interpretation of the analog of Π1 in terms of deformations of generalized Feynman diagrams, with an
elementary cobordism snipping away a loop as a move leaving the scattering amplitude invariant, conforms
with the number-theoretic vision of a scattering amplitude as a representation of a sequence of
algebraic operations, which can always be reduced to a tree diagram. TGD would indeed be a topological QFT:
only the dynamical topology would matter.
For details see the chapter Recent View about Kähler Geometry and Spin Structure of "World of
Classical Worlds" of "Quantum physics as infinite-dimensional geometry" or the article Could One
Define Dynamical Homotopy Groups in WCW?.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 5:59 AM 2 comments
2 Comments:
At 3:22 PM, Anonymous said...
I meant to post that comment on the other blog post to this one.. oops, here it is. I read
some of the paper.. this is incredible...
http://arxiv.org/abs/1503.07851
this seems related to 3.8#SpectralFunctions of @Book{bochner1955harmonic, Title =
{Harmonic Analysis and the Theory of Probability}, Author = {Bochner, Solomon},
Publisher = {University of California Press}, Year = {1955}, Series = {California
Monographs in mathematical sciences}, }
or am I getting that confused with something else? I remember thinking it was the
appearance of the polynomials and the anti-symmetrization involing the binomial
processes and theorem, etc.
$\lim_{n \rightarrow \infty} \sum_{j = 1}^n \sum_{k = 1}^n \frac{f_{\gamma}\left(
\frac{j}{n} \right) f_{\gamma}^{\ast} \left( \frac{k}{n} \right) Q\left( \frac{j - k}{n}
\right)}{n^2} = \lim_{n \rightarrow \infty} E \left|\sum_{j = 1}^n \frac{f_{\gamma} \left(
\frac{j}{n} \right) g \left(\frac{j}{n} \right)}{n} \right|^2$ see a (much more readable
image) at http://i.imgur.com/FhLxAO4.png ;-)--crοw
At 4:13 PM, Anonymous said...
oops, wrong book reference... here is book with the formula I transliterated
https://books.google.com/books?id=xM3PStPHYsC&pg=PA74&lpg=PA74&dq=%22it+is+unique+aside+from+an+additive+constant+and
+a+harmless+ambiguity+at+its+jumps%22&source=bl&ots=kfl99ZVaKa&sig=f0AwhCjh
tRDXpxkiXJ2V85oEXQ&hl=en&sa=X&ved=0CCEQ6AEwAGoVChMI3tD9hui9xwIVzJkeCh1
IJg7q#v=onepage&q=%22it%20is%20unique%20aside%20from%20an%20additive%20c
onstant%20and%20a%20harmless%20ambiguity%20at%20its%20jumps%22&f=false
08/18/2015 - http://matpitka.blogspot.com/2015/08/hydrogen-sulfide-superconducts-at70.html#comments
Hydrogen sulfide superconducts at -70 degrees Celsius!
The newest news is that hydrogen sulfide (the compound responsible for the smell of rotten eggs)
conducts electricity with zero resistance at a record high temperature of 203 Kelvin (–70 degrees C),
reports a paper published in Nature. This superconductor, however, suffers from a serious existential crisis: it behaves very much like an old-fashioned superconductor, for which superconductivity is believed to be caused by lattice vibrations, and is therefore not allowed to exist in the world of standard physics!
To be or not to be!
The TGD Universe however allows all flowers to bloom. The interpretation is that the mechanism involves a large enough value of heff=n×h, implying that the critical temperature scales up. Perhaps it is not a total accident that hydrogen sulfide H2S - chemically analogous to water - results from the bacterial breakdown of organic matter, which according to TGD is a high-temperature superconductor at room temperature. Organic matter is also mostly water, which is absolutely essential for the properties of living matter in the TGD Universe.
See the earlier posting about pairs of magnetic flux tubes carrying the dark electrons of a Cooper pair as an explanation of high-Tc (and maybe also of low-Tc) superconductivity. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 8:34 PM
08/18/2015 - http://matpitka.blogspot.com/2015/08/about-negentropic-entanglementas.html#comments
About negentropic entanglement as analog of an error correction code
In classical computation, the simplest manner to control errors is to take several copies of the bit sequences. In the quantum case the no-cloning theorem prevents this. Error correcting codes (https://en.wikipedia.org/wiki/Quantum_error_correction) encode n information qubits into the entanglement of N>n physical qubits. Additional constraints represent the space of n qubits as a lower-dimensional subspace of the space of N qubits. This redundant representation is analogous to the use of parity bits. The failure of a constraint to be satisfied tells that an error is present and also reveals the character of the error. This makes possible the automatic correction of the error if it is simple enough - such as a change of the phase of a spin state or a spin flip.
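As a minimal sketch of the parity-bit analogy (the standard textbook three-qubit bit-flip code, nothing TGD-specific), the following simulates encoding one logical qubit into three physical qubits and correcting a single bit flip from the two parity-check syndromes:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # bit flip
Z = np.diag([1.0, -1.0])

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def encode(a, b):
    """Logical a|0> + b|1>  ->  a|000> + b|111>."""
    state = np.zeros(8)
    state[0b000] = a
    state[0b111] = b
    return state

# Parity checks Z1*Z2 and Z2*Z3: expectation value -1 flags a broken parity.
S1, S2 = kron(Z, Z, I2), kron(I2, Z, Z)

def syndrome(state):
    return (state @ S1 @ state, state @ S2 @ state)

psi = encode(0.6, 0.8)

# Flip the middle qubit.
corrupted = kron(I2, X, I2) @ psi

# Syndrome (-1, -1) uniquely identifies the middle qubit...
assert np.allclose(syndrome(corrupted), (-1.0, -1.0))
# ...so applying X there (X is its own inverse) restores the state.
recovered = kron(I2, X, I2) @ corrupted
assert np.allclose(recovered, psi)
```

Note that the syndrome is read without measuring the encoded amplitudes themselves; this is the sense in which the constraint subspace reveals the character of the error without destroying the information.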
Negentropic entanglement (NE) obviously gives rise to a strong reduction in the number of states of the tensor product. Consider a system consisting of two entangled systems consisting of N1 and N2 spins. Without any constraints the number of states in the state basis is 2^(N1) × 2^(N2) and one has N1+N2 qubits. The elements of the entanglement matrix can be written as E_{A,B}, A = ⊗_{i=1}^{N1} (m_i,s_i), B = ⊗_{k=1}^{N2} (m_k,s_k), in order to make manifest the tensor product structure. For simplicity one can consider the situation N1=N2=N. The un-normalized general entanglement matrix is parametrized by 2 × 2^(2N) independent real numbers, with each spin contributing two degrees of freedom. A unitary entanglement matrix is characterized by 2^(2N) real numbers. One might perhaps say that one has 2^(2N) real bits instead of almost 2^(2N+1) real qubits. If the time evolution according to ZEO respects the negentropic character of entanglement, the sources of errors are reduced dramatically.
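The parameter counting above can be checked mechanically for small N (the counting itself is standard linear algebra, not specific to NE):

```python
import numpy as np

def real_params_general(d):
    """A general complex d x d matrix: 2 real numbers per entry."""
    return 2 * d * d

def real_params_unitary(d):
    """U(d) is a d^2-dimensional real manifold: its tangent space consists
    of anti-Hermitian matrices (d imaginary diagonal entries plus
    d(d-1)/2 complex entries above the diagonal)."""
    return d + 2 * (d * (d - 1) // 2)

for N in (1, 2, 3):
    d = 2 ** N                                          # N spins per side
    assert real_params_general(d) == 2 * 2 ** (2 * N)   # 2 x 2^(2N)
    assert real_params_unitary(d) == 2 ** (2 * N)       # 2^(2N)

# Sanity check that unitaries realizing the constrained count exist:
# QR-orthogonalizing a random complex matrix yields U with U^dagger U = 1.
rng = np.random.default_rng(0)
d = 8
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
assert np.allclose(U.conj().T @ U, np.eye(d))
```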
The challenge is to understand what kind of errors NE eliminates and how the information bits are
coded by it. NE is respected if the errors act as unitary transformations E→ UEU† of the unitary
entanglement matrix. One can consider two interpretations.
1. The unitary automorphisms leave the information content unaffected only if they commute with E. In this case unitary automorphisms acting non-trivially would give rise to genuine errors, and an error correction mechanism would be needed and would be coded into the quantum computer program.
2. One can also consider the possibility that the unitary automorphisms do not affect the information content, so that the diagonal form of the entanglement matrix, coded by N phases, would carry the information. Clearly, the unitary automorphisms would act like gauge transformations. Nature would take care that no errors emerge. Of course, more dramatic things are in principle allowed by NMP: for instance, the unitary entanglement matrix could reduce to a tensor product of several unitary matrices. Negentropy could be transferred from the system and is indeed transferred as the computation halts.
By number theoretic universality, the diagonalized entanglement matrix would be parametrized by N roots of unity, each having n possible values, so that n^N different NEs would be obtained and the information storage capacity would be I = log(n)/log(2) × N bits; for n=2^k one would have k×N bits. Powers of two for n are favored. Clearly, the option for which only the eigenvalues of E matter looks like the more attractive realization of entanglement matrices. If the overall phase of E does not matter, as one expects, the number of full bits is k×N-1.
In fact, Fermat polygons - for which the cosine and sine of the angle defining the polygon are expressible by iterating square roots besides basic arithmetic operations on rationals (geometrically, a ruler and compass construction) - correspond to integers which are products of a power of two and of different Fermat primes F_n = 2^(2^n)+1.
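The counting of the previous paragraphs, and the Gauss-Wantzel characterization of constructible polygons invoked for the favored values of n, can be sketched as a plain enumeration (assuming the N eigenvalue phases are exact n-th roots of unity):

```python
import itertools
import math

# n^N diagonal NEs: each of the N eigenvalue phases is one of n roots of unity.
N, k = 4, 3
n = 2 ** k
count = sum(1 for _ in itertools.product(range(n), repeat=N))
assert count == n ** N
assert N * math.log2(n) == k * N   # I = log(n)/log(2) * N = k*N bits for n = 2^k

# Gauss-Wantzel: a regular m-gon is ruler-and-compass constructible iff
# m = 2^a * (product of distinct Fermat primes F_j = 2^(2^j) + 1).
FERMAT_PRIMES = (3, 5, 17, 257, 65537)   # the five known Fermat primes

def is_constructible(m):
    if m < 3:
        return False
    while m % 2 == 0:
        m //= 2
    for p in FERMAT_PRIMES:
        if m % p == 0:
            m //= p
            if m % p == 0:   # repeated Fermat prime factors are excluded
                return False
    return m == 1

assert [m for m in range(3, 21) if is_constructible(m)] == \
       [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
```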
This picture can be related to a much bigger picture.
1. In the TGD framework, number theoretical universality requires discretization in terms of an algebraic extension of rationals. This is not performed at the space-time level but for the parameters characterizing space-time surfaces at the level of WCW. Strong form of holography is also essential and allows one to consider partonic 2-surfaces and string world sheets as basic objects. Number theoretical universality (adelic physics) forces a discretization of phases, and the number theoretically allowed phases are roots of unity defined by some algebraic extension of rationals. Discretization can also be interpreted in terms of finite measurement resolution. Notice that the condition that roots of unity are in question realizes finite measurement resolution in the sense that errors have a minimum size and are thus detectable.
2. The hierarchy of quantum criticalities corresponds to a fractal inclusion hierarchy of isomorphic subalgebras of the super-symplectic algebra acting as conformal gauge symmetries. The generators in the complement of this algebra can act as dynamical symmetries affecting the physical states. An infinite hierarchy of gauge symmetry breakings is the outcome, and the weakening of measurement resolution would correspond to the reduction in the size of the broken gauge group. The hierarchy of quantum criticalities is accompanied by a hierarchy of measurement resolutions and a hierarchy of effective Planck constants heff=n×h.
3. These hierarchies are argued to correspond to the hierarchy of inclusions for hyperfinite factors of type II1 labelled by quantum phases and quantum groups. Inclusion defines finite measurement resolution since the included sub-algebra does not induce observable effects on the state. By McKay correspondence the hierarchy of inclusions is accompanied by a hierarchy of simply laced Lie groups which get bigger as one climbs up in the hierarchy. Their interpretation as genuine gauge groups does not make sense since their sizes should be reduced. An attractive possibility is that these groups are factor groups G/H such that the normal subgroup H (necessarily so) is the gauge group and indeed gets smaller, and G/H is the dynamical group identifiable as a simply laced group which gets bigger. This would require that both G and H are infinite-dimensional groups. An interesting question is how they relate to the super-symplectic group assignable to the "light-cone boundary" δM^4_{+/-} × CP_2. I have proposed this interpretation in the context of WCW geometry earlier.
4. Here I have spoken only about dynamical symmetries defined by discrete subgroups of simply laced groups. I have earlier considered the possibility that discrete symmetries provide a description of finite resolution, which would be equivalent with the quantum group description.
Summarizing, these arguments boil down to the conjecture that discrete subgroups of these groups
act as effective symmetry groups of entanglement matrices and realize finite quantum measurement
resolution. A very deep connection between quantum information theory and these hierarchies would
exist.
Gauge invariance has turned out to be a fundamental symmetry principle, and one can ask whether unitary entanglement matrices - assuming that only the eigenvalues matter - could give rise to a simulation of discrete gauge theories. Could the reduction of the information to that provided by the diagonal form be interpreted as an analog of gauge invariance?
1. The hierarchy of inclusions of hyper-finite factors of type II1 strongly suggests a hierarchy of effective gauge invariances characterizing measurement resolution, realized in terms of a hierarchy of normal subgroups and dynamical symmetries realized as coset groups G/H. Could these effective gauge symmetries allow one to realize unitary entanglement matrices invariant under these symmetries?
2. A natural parametrization for single qubit errors is as rotations of the qubit. If the error acts as a rotation on all qubits, the rotational invariance of the entanglement matrix defining the analog of the S-matrix is enough to eliminate the effect on information processing.
Quaternionic unitary transformations act on qubits as unitary rotations. Could one assume that complex numbers as the coefficient field of QM are effectively replaced with quaternions? If so, the multiplication of states by a unit quaternion would leave the physics and information content invariant, just like multiplication by a complex phase leaves it invariant in standard quantum theory.
One could consider the possibility that quaternions act as a discretized version of a local gauge invariance affecting the information qubits and thus reducing further their number and thus also errors. This requires the introduction of the analog of a gauge potential and the coding of quantum information in terms of SU(2) gauge invariants. In the discrete situation the gauge potential would be replaced with non-integrable phase factors along the links of a lattice, as in lattice gauge theory. In the TGD framework the links would correspond to the fermionic strings connecting partonic 2-surfaces carrying the fundamental fermions at string ends as point-like particles. Fermionic entanglement is indeed between the ends of these strings.
3. Since entanglement is multilocal and quantum groups accompany the inclusion, one cannot avoid the question whether the Yangian symmetry crucial for the formulation of quantum TGD could be involved.
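The idea of coding information into discrete SU(2) gauge invariants can be illustrated with the standard lattice-gauge-theory construction alluded to above: link variables around a plaquette, whose traced ordered product is invariant under local gauge rotations. This is a generic sketch, not a TGD-specific construction; the site and link labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    """Random SU(2) matrix built from a random unit quaternion."""
    q = rng.normal(size=4)
    a, b, c, d = q / np.sqrt(q @ q)
    return np.array([[a + 1j * b,  c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# Non-integrable phase factors: one SU(2) link variable per oriented link
# of a square plaquette with sites 0-1-2-3.
links = {(0, 1): random_su2(), (1, 2): random_su2(),
         (2, 3): random_su2(), (3, 0): random_su2()}

def plaquette(links):
    """Re tr of the ordered product around the plaquette - the basic
    gauge-invariant observable of lattice gauge theory."""
    W = links[(0, 1)] @ links[(1, 2)] @ links[(2, 3)] @ links[(3, 0)]
    return float(np.real(np.trace(W)))

w0 = plaquette(links)

# Local gauge transformation: U_ij -> g_i U_ij g_j^dagger with independent
# SU(2) rotations g_i at the four sites. The plaquette trace is unchanged.
g = {i: random_su2() for i in range(4)}
gauged = {(i, j): g[i] @ U @ g[j].conj().T for (i, j), U in links.items()}
assert np.isclose(plaquette(gauged), w0)
```

In the TGD picture the links would be the fermionic strings between partonic 2-surfaces, but the invariance mechanism is the same.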
For details see the chapter Negentropy Maximization Principle or the article Quantum Measurement and Quantum Computation in TGD Universe. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 1:17 AM
2 Comments:
At 5:40 PM, Anonymous said...
See also:
https://en.wikipedia.org/wiki/Byzantine_fault_tolerance
https://en.wikipedia.org/wiki/Quantum_Byzantine_agreement
At 7:19 PM, [email protected] said...
Thank you for the link.
08/16/2015 - http://matpitka.blogspot.com/2015/08/sleeping-beauty-problem.html#comments
Sleeping Beauty Problem
Lubos wrote polemically about the Sleeping Beauty Problem. The procedure is as follows.
Sleeping Beauty is put to sleep and a coin is tossed. If the coin comes up heads, Beauty will be awakened and interviewed only on Monday. If the coin comes up tails, she will be awakened and interviewed on both Monday and Tuesday; on Monday she will be put back to sleep with an amnesia-inducing drug. In either case, she will be awakened on Wednesday without an interview and the experiment ends. Any time Sleeping Beauty is awakened and interviewed, she is asked: "What is your belief now for the proposition that the coin landed heads?" No other communications are allowed, so Beauty does not know whether it is Monday or Tuesday.
The question is about the belief of the Sleeping Beauty on the basis of the information she has, not about the actual probability that the coin landed heads. If one wants to debate, one must imagine oneself in the position of the Sleeping Beauty. There are 2 basic debating camps -- halfers and thirders.
1. Halfers argue that the outcome of the coin toss cannot in any manner depend on future events, and one has P(Heads) = P(Tails) = 1/2 just from the fact that the coin is fair. To me this view is obvious. Lubos has also this view. I however vaguely remember that years ago, when first encountering this problem, I was ready to take the thirder view seriously.
2. Thirders argue in the following manner using conditional probabilities. One has the conditional probability P(Tails|Monday) = P(Heads|Monday) (P(X|Y) denotes the probability of X given Y), and from the basic formula for conditional probabilities, P(X|Y) = P(X and Y)/P(Y), together with P(Monday) = P(Tuesday) = 1/2 (this actually follows from P(Heads) = P(Tails) = 1/2 in the experiment considered!), one obtains P(Tails and Tuesday) = P(Tails and Monday). Furthermore, one also has P(Tails and Monday) = P(Heads and Monday) (again from P(Heads) = P(Tails) = 1/2!), giving P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday). Since these events are independent for one trial and one of them must occur, each probability must equal 1/3. Since "Heads" implies that the day is Monday, one has P(Heads and Monday) = P(Heads) = 1/3, in conflict with the P(Heads) = 1/2 used in the argument. To me this looks like a paradox telling that some implicit assumption about probabilities in relation to time is wrong.
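The two camps can at least be made to agree on what each number counts. A short simulation (a sketch only; it does not by itself settle which count answers the interview question) tallies both the fraction of coin tosses that land heads and the fraction of awakenings at which the coin shows heads:

```python
import random

rng = random.Random(42)
trials = 100_000

heads_coins = 0        # coin tosses that landed heads
heads_awakenings = 0   # awakenings at which the coin shows heads
total_awakenings = 0

for _ in range(trials):
    heads = rng.random() < 0.5
    if heads:
        heads_coins += 1
        heads_awakenings += 1   # awakened on Monday only
        total_awakenings += 1
    else:
        total_awakenings += 2   # awakened on both Monday and Tuesday

# The halfer's number: fraction of tosses that are heads, ~1/2.
print(heads_coins / trials)
# The thirder's number: fraction of awakenings with heads, ~1/3 -
# meaningful only if the awakenings of one trial may be counted as
# independent events, which is exactly the assumption disputed below.
print(heads_awakenings / total_awakenings)
```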
In my opinion, the basic problem in the argument of the thirders is their assumption that events occurring at different times can form a set of independent events. Also the difference between experienced and geometric time is involved in an essential manner when one speaks about amnesia.
When one speaks about independent events and their probabilities in physics, they must be causally independent and occur at the same moment of time. This is crucial in the application of probability theory in quantum theory and also in classical theory. If time would not matter, one should be able to replace the time-line with a space-like line - say the x-axis. The counterparts of Monday, Tuesday, and Wednesday can be located on the x-axis with a mutual distance of, say, one meter. One cannot however realize the experimental situation since the notion of space-like amnesia does not make sense! Or crystallizing it: independent events must have space-like separation. The arrow of time is also essential. For the conditional probabilities P(X|Y) used above, X occurs before Y, and this breaks the standard arrow of time.
This clearly demonstrates that philosophy and mathematics cannot be separated from physics and that the notion of time should be a fundamental issue in philosophy, mathematics, and physics alike!
posted by Matti Pitkanen @ 10:18 PM 9 comments
9 Comments:
At 6:35 AM, Anonymous said...
What's with the quantum theorists and their sadistic thought experiments? As if Schrödinger's cat was not enough, now they are putting a damsel in distress and torturing also Sleeping Beauty!!!
Philosophy indeed. How much energy, time and effort is used for trying to cage quantum holonomy outside the physicist's box? Where a genuine cat with nine lives claws or quantum jumps outside the box, and Sleeping Beauty gives the researcher the finger on Monday and a double finger on Tuesday until Prince Valiant rides to the rescue and gives a kiss. :)
But in terms of physics, the argument that causally independent events have to occur at
the same moment of time - or that causally dependent events have to occur before and
after - is not valid, at least in relativity. For a relativistic-time-space observer observing
events and their relations all options are observable. Einstein wanted to save causality, and
in modern physics linear causality is not logically derived from anything, just defined
tautologically as basic axiom. Hume's scepticism of linear causality is fundamental issue
of philosophy, as is everyday psychological acting AS-IF there was linear causality with
single arrow of time.
That said, there are also many kinds of psychological states - and narratives - where a
teleological Purpose from future affects now and past.
At 7:31 AM, [email protected] said...
By causality I meant relativistic causality. Proper time distance is the time in question in the framework of special relativity. Events with space-like separation are independent - not only those in a time=constant hyper-surface.
Logical causality, the causality of conscious experience, and the causation of classical physics must be distinguished from each other. For the causality of free will, the causality corresponds to the experienced time order.
Causality in Einstein's sense is precisely defined and follows from Lorentz invariance when one speaks of classical fields. In quantum theory the formulation states the absence of tachyons and that energies are positive.
I did not say anything about the possibility that the arrow of geometric causality can vary, as it indeed does in zero energy ontology. It is an additional finesse.
My point was that one can no longer do mathematics and philosophy without taking into account the world view of quantum physics.
At 8:35 AM, Anonymous said...
Does Einstein causality follow from Lorentz invariance or the other way around?
Which is cause and which is effect and in what time?
Stenger lends additional light on the matter at hand: "When you read, "Einstein proved
that particles cannot go faster than the speed of light" you have to understand that this was
not a consequence of the basic axioms of the theory of special relativity. To prove this he
introduced an additional assumption now called the "principle of Einstein causality": cause
must always precede effect. In that case, it then follows that we can't have superluminal
motion."
http://www.huffingtonpost.com/victor-stenger/no-cause-to-dispute-einst_b_982429.html
And even a reversible notion of time alone is not enough if you take the delayed choice experiment seriously, and I believe you do. If I'm not mistaken, one of the key notions of TGD is (was?) the part-whole principle of various scales of 'now'.
- 81 -
Against that presumably deeper and more fundamental background, is there logical and
mathematical necessity for Einstein-causality (and Lorentz invariance), or are we speaking
about metaphysical belief and wishful thinking?
PS: Lubos' post and discussion was purely about a mathematical problem that, as said, is not clearly defined and formulated; hence the only intelligent position mentioned was that there is no logical reason to lend support to either position.
PPS: I'm not at all certain that quantum physics has a definite "the" world view. ;)
At 9:30 PM, [email protected] said...
Einstein causality is Lorentz invariance + a fixed arrow of time. In ZEO, one weakens Einstein causality since selves can have both arrows of time: in the reincarnation of a self, the arrow of geometric time changes. Each self is irreversible by NMP - I do not assume reversibility at the quantum level.
This allows one to consider effective superluminality since signals can be reflected in the time direction - say from my brain in the geometric future or past: during sleep we remember future events. I have somewhere on my bookshelf a book documenting memories of the future! I do not remember the author.
Signals can be reflected also from the brain of some alien in some distant galaxy: this I proposed as a possible interpretation for the experiences of meeting ETs induced by psychedelics. These events would always involve the re-incarnation of a subself representing a signal to the geometric past of the self (the geometric past is a relative notion - the future for a self which is sleeping!).
The most interesting application in neuroscience is a new view about memory. Also the sensory-motor cycle could be understood as a sequence in which a sensory mental image dies and re-incarnates as a motor mental image.
I am not sure what you mean by the part-whole principle - Wikipedia did not help. There is a hierarchy of selves; maybe you mean that.
I take Lorentz invariance as a fact. In the TGD framework it has deep roots: the geometry of the infinite-D WCW exists if it has an infinite-D group of isometries. This is the case under very restricted conditions. 4+4 dimensionality follows from general number theoretical reasons. The light-cone boundary of M^4 has gigantic conformal symmetries, and 4-D spacetime surfaces have light-like partonic orbits with similar huge symmetries. In 8-D M^4xCP_2 one has huge super-symplectic symmetries for WCW.
The existence of a twistor structure relating to Yangian invariance is one further prerequisite. M^4xCP_2 is completely unique in the sense that its factors are the only 4-geometries for which the twistor space has a Kaehler structure. This was discovered at the same time I discovered M^4xCP_2, but no one told me about this! It would be interesting to know whether some colleagues knew about this but did not bother to tell for some reason;-). In any case, once you have M^4 you have Lorentz invariance.
The Arrow of Time follows from quantum measurement theory generalised to ZEO and from accepting NMP (already in its ordinary form). Einstein causality follows both as an experimental fact (even in its weakened form) and as a mathematical necessity.
I restate my view and also that of Lubos, although we might have different justifications. Thirders are wrong because they bring in conditional probability although it is not needed at all. Even worse, they also apply it in the wrong manner. It might be that probability theorists outside physics circles do not realize that causal independence in the physical sense is a highly relevant factor when one talks about probabilities.
At 7:59 AM, Anonymous said...
I saw the term "part-whole principle" together with "numerosities" as an attempt to save the set theory of infinite sets, and a search gave e.g. this link:
http://www.math.uni-hamburg.de/home/loewe/HiPhI/Slides/parker.pdf
The connection to the hierarchy of selves is obvious, but the notion of the part-whole principle is at least implied already at the ordinal foundational level of the number theory of ordered fields and their arithmetics, i.e. e.g. 1<2 is a part-whole relation given that 1+1=2. Hence, it can be said that p-adic numbers are by definition part-whole relations, but the same does not seem to apply with equal clarity on the atomistic-reductionistic real side. When three dots refer to "larger than", the part-whole relation works, but when three dots refer to "smaller than", what is the whole and what is the part?
At 4:57 AM, [email protected] said...
I do not see any reason for saving set theory from infinite sets: if something is saved, it is the naive belief that the world is what our sensory limitations tell it must be. Both geometric and number theoretic aspects are important, and one ends up with difficulties when one accepts only one of them. Modifying mercilessly Einstein's well-known statement about religion and science: Mathematics without geometry is blind and Mathematics without number theory is lame;).
The part-whole principle is geometry based and states that if a set is a subset of another set, it is "smaller". This is OK, but one should not confuse the size of a set as the number of its elements - used by Cantor and defined in terms of 1-1 correspondences - with the metric size used in the part-whole principle.
Reals from 0/1 to infinity are a good example. The size of a set as the number of elements is something different from the size of the set defined by a metric, since the metric brings in structure.
The notion of infinite prime involves power sets or their subsets. One considers a second quantisation of a supersymmetric arithmetic QFT. Should one allow states with a literally infinite number of fermions and bosons, or only states with a finite but unbounded number? Depending on the choice, each step in the hierarchy of quantisations gives either the power set, with larger cardinality, or a subset of the power set, with the same cardinality. If one demands finite energy/particle numbers for the states, each quantisation gives the same enumerable number of states.
At 2:02 PM, Anonymous said...
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Matti, have you considered continuing your blog over at wordpress.com? It has the ability to generate pictures from LaTeX. For instance, see https://almostsure.wordpress.com/2009/11/08/filtrations-and-adapted-processes/
--Stephen

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJV2jS4AAoJEJYyVAps4b5dGgIH/AxBZwF4FzbiXFZk/3GLG3mW
F+sSjqMeA8VT3sjtOLjOxZgD8+7PKCdV/csijo8sNk5xRigSrRM0y+Qy+wQicDB90mL
FrQbRj9V69Fqk3GBrrup+xFOahKUL73K1HHQFSxvuQzi4o2kjwBXlEHPSBDaliO1SM
lnKS2+DTyadMd4M3g2dTfbr6L08NQMYdyHsC0z9Crvw5DgZWbbIdJID9PLVpvE2Zi
EoECQgMojzCIeu963R8GFUMrPhxZ/wbmKwJtYtCGk2A4zZsOoswVE0vAIlKjhgf29Q
9orN4pN9+XM3cz61K/7m7MuagrMWDUk4M4Z6+t9Wt/78ym10JTLmSrQ==
=p+D8
-----END PGP SIGNATURE-----
At 7:55 PM, [email protected] said...
Thank you, could you elaborate a little bit. Is the link below all that is needed for this continuation?
At 10:56 AM, Anonymous said...
https://en.support.wordpress.com/latex/
08/14/2015 - http://matpitka.blogspot.com/2015/08/about-quantum-measurement-andquantum.html#comments
About quantum measurement and quantum computation in the TGD Universe
During the years I have been thinking about how quantum computation could be carried out in the TGD Universe (see this). There are considerable deviations from the standard view. Zero Energy Ontology (ZEO), the weak form of NMP dictating the dynamics of state function reduction, negentropic entanglement (NE), and the hierarchy of Planck constants define the basic differences between TGD-based and standard quantum measurement theory. TGD suggests also the importance of topological quantum computation (TQC)-like processes, with braids represented as magnetic flux tubes/strings along them.
The natural question that popped up in my mind was how NMP and Zero Energy Ontology (ZEO) could affect the existing view about TQC. The outcome was a more precise view about TQC. The basic observation is that the phase transition to the dark matter phase reduces dramatically the noise affecting quantum qubits. This, together with the robustness of braiding as a TQC program, raises excellent hopes about TQC in the TGD Universe. The restriction to negentropic space-like entanglement (NE) defined by a unitary matrix is something new but does not seem to have any fatal consequences, as the study of Shor's algorithm shows.
NMP strongly suggests that when a pair of systems - the ends of a braid - suffers state function reduction, the NE must be transferred somehow from the system. How? The model for quantum teleportation allows one to identify a possible mechanism for achieving this. This mechanism could be a fundamental mechanism of information transfer also in living matter, and phosphorylation could represent the transfer of NE according to this mechanism: the transfer of metabolic energy would at a deeper level be a transfer of negentropy. Quantum measurements could actually be seen as a transfer of negentropy at a deeper level.
For details see the chapter Negentropy Maximization Principle or the article Quantum Measurement and Quantum Computation in TGD Universe. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 8:49 PM
08/13/2015 - http://matpitka.blogspot.com/2015/08/flux-tube-description-seems-toapply.html#comments
Flux tube description seems to apply also to low Tc superconductivity
Discussions with Hans Geesink have inspired a sharpening of the TGD view about bio-superconductivity (bio-SC) and high-Tc superconductivity (SC), and relating the picture to standard descriptions in a more detailed manner. In fact, also standard low temperature superconductivity modelled using BCS theory could be based on the same universal mechanism involving pairs of magnetic flux tubes - possibly forming flattened square-like closed flux tubes - with the members of Cooper pairs residing at them.
A brief summary about strengths and weakness of BCS theory
First I try to summarise what I remember about BCS theory.
1. BCS theory is successful in 3-D superconductors and explains a lot: supracurrent, diamagnetism,
and thermodynamics of the superconducting state, and it has correlated many experimental data
in terms of a few basic parameters.
2. BCS theory has also failures.
1. The dependence on crystal structure and chemistry is not well-understood: it is not
possible to predict, which materials are super-conducting and which are not.
2. High-Tc SC is not understood. Antiferromagnetism is known to be important. A quite recent experiment demonstrates conductivity - maybe even superconductivity - in a topological insulator in the presence of a magnetic field (see this). This is a complete paradox and suggests in the TGD framework that the flux tubes of the external magnetic field serve as the wires (see the previous posting).
3. The BCS model is based on crystalline long range order and k-space (the Fermi sphere). BCS-difficult materials have short range structural order: amorphous alloys, SC metal particles down to 50 Angstroms (the thickness scale of the lipid layer of the cell membrane), transition metals, alloys, compounds. A real space description rather than a k-space description based on crystalline order seems to be more natural. Could it be that the description of the electrons of the Cooper pair is not correct? If so, k-space and the Fermi sphere would be an appropriate description only for the ordinary electrons needed to model the transition to superconductivity. Superconducting electrons could require a different description.
4. Local chemical bonding/real molecular description has been proposed. This is of course very
natural in standard physics framework since the standard view about magnetic fields does not
provide any ideas about Cooper pairing and magnetic fields are only a nuisance rather than
something making SC possible. In TGD framework the situation is different.
TGD-based view about SC
TGD proposal for high Tc SC and bio-SC relies on many-sheeted space-time and TGD based view
about dark matter as heff=n× h phase of ordinary matter emerging at quantum criticality (see this).
Pairs of dark magnetic flux tubes would be the wires carrying dark Cooper pairs, with the members of the pair at the tubes of the pair. If the members of the flux tube pair carry opposite magnetic fields B, the Cooper pairs have spin 0. The magnetic interaction energy with the flux tube is what determines the critical temperature. High-Tc superconductivity - in particular the presence of two critical temperatures - can be understood. The role of antiferromagnetism can also be understood.
The TGD model is clearly an x-space model: dark flux tubes are an x-space concept. Momentum space and the notion of the Fermi sphere are certainly useful in understanding the transformation of ordinary lattice electrons to dark electrons at flux tubes, but the superconducting electron pairs at flux tubes would have a different description.
Now come the heretic questions.
1. Do the crystal structure and chemistry define the (only) fundamental parameters in SC? Could the notion of the magnetic body - which of course can correlate with crystal structure and chemistry - be an equally important or even more important notion?
2. Could also ordinary BCS SC be based on magnetic flux tubes? Is the value of heff=n×h only considerably smaller, so that low temperatures are required since the energy scale is the cyclotron energy scale given by E = heff × fc, fc = eB/(2π me)? High-Tc SC would only have a larger heff, and bio-superconductivity an even larger heff!
3. Could it be that also in low Tc SC there are dark flux tube pairs carrying dark magnetic fields in
opposite directions and Cooper pairs flow along these pairs? The pairs could actually form
closed loops: kind of flattened O:s or flattened squares.
One must be able to understand the Meissner effect. Why would dark SC prevent the penetration of the ordinary magnetic field inside the superconductor?
1. Could Bext actually penetrate the SC at its own space-time sheet? Could an opposite field Bind at its own space-time sheet effectively interfere it to zero? In TGD, this would mean the generation of a space-time sheet with Bind = -Bext so that a test particle experiences a vanishing B. This is obviously new: fields do not superpose; only the effects caused by them superpose.
Could dark or ordinary flux tube pairs carrying Bind be created such that the flux tube portion carrying Bind in the interior cancels the effect of Bext on the charge carriers? The return flux of the closed flux tube of Bind would run outside the SC and amplify the detected field Bext outside the SC. Just as observed.
2. What happens when Bext penetrates the SC? The transition heff → h must take place for the dark flux tubes, whose cross-sectional area and perhaps also length scale down by a factor 1/heff while the field strength increases by a factor heff. If also the flux tubes of Bind are dark, they would shrink in size by a factor 1/heff in the transition heff → h and would remain inside the SC! Bext would no longer be screened inside the superconductor and amplified outside it! The critical value of Bext would correspond to criticality for this heff → h phase transition.
3. Why and how does the phase transition destroying superconductivity take place? Is it energetically impossible to build too strong a Bind, so that the effective field Beff = Bdark + Bind + Bext experienced by the electrons is reduced, the binding energy of the Cooper pair is reduced with it, and the pair becomes thermally unstable? This in turn would mean that the Cooper pairs generating the dark field Bdark disappear, Bdark disappears with them, and superconductivity is lost.
Addition: The newest news is that hydrogen sulfide (the compound responsible for the smell of rotten eggs) conducts electricity with zero resistance at a record high temperature of 203 Kelvin (-70 degrees C), reports a paper published in Nature. This superconductor however suffers from a serious existential crisis: it behaves very much like an old-fashioned superconductor, for which superconductivity is believed to be caused by lattice vibrations, and it is therefore not allowed to exist in the world of standard physics! To be or not to be!
The TGD Universe however allows all flowers to bloom: the interpretation is that the mechanism is a large enough value of heff = n×h, implying that the critical temperature scales up. Perhaps it is not a total accident that hydrogen sulfide H2S - chemically analogous to water - results from the bacterial breakdown of organic matter, which according to TGD is a high temperature superconductor at room temperature and which is mostly water, absolutely essential for the properties of living matter in the TGD Universe.
See the chapter Quantum model for bio-superconductivity: II. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:52 PM
07/31/2015 - http://matpitka.blogspot.com/2015/07/5-tev-bump-at-cms.html#comments
5 TeV bump at CMS?
Lubos Motl tells about indications for a bump in the dijet spectrum at CMS. The decay to dijets suggests an interpretation as a meson-like state consisting of exotic quarks. I already told about evidence for a bump at 2 TeV whose decay signatures suggest an interpretation as a similar exotic meson state.
I talked in an earlier posting about the possible interpretation of the 2 TeV bump as pions of MG,79 hadron physics with masses obtained by scaling with a factor 2^14. Both charged and neutral pions are predicted, and the splitting of the bump into two with a splitting of order 2.5 GeV serves as a signature. A possible problem is that the naive scaling of the mass of the ordinary pion gives a mass of 2.2 TeV, which is 10 per cent too high. If the mass is scaled up from the mass of the M89 pion, which could have made itself visible through gamma pairs coming from the galactic center, one would obtain 4.4 TeV.
What about the interpretation of the 5.15 TeV bump? It is easy to scale up the masses of ordinary mesons by multiplying them by a factor 2^9 = 512 for M89 and by 2^14 for MG,79. This gives only a rough estimate, but one can try it first in order to get at least order-of-magnitude estimates. The list of mesons can be found here.
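The naive scaling procedure just described can be sketched in a few lines. The meson masses below are approximate PDG values that I have inserted for illustration; the scaling factors follow from the p-adic length scale hypothesis as 2^((107-89)/2) = 512 and 2^((107-79)/2) = 2^14.

```python
# Naive p-adic scaling of ordinary meson masses to the conjectured
# M89 and MG,79 hadron physics. Approximate PDG masses in GeV.

MESONS_GEV = {
    "pi0": 0.135,
    "K": 0.494,
    "rho(770)": 0.775,
    "J/psi": 3.097,
    "B": 5.28,
    "Upsilon": 9.46,
}

SCALE_M89 = 2 ** 9     # = 512
SCALE_MG79 = 2 ** 14   # = 16384

def scaled_mass_tev(m_gev: float, factor: int) -> float:
    """Scale an ordinary meson mass (GeV) and return the result in TeV."""
    return m_gev * factor / 1000.0

for name, m in MESONS_GEV.items():
    print(f"{name:10s} M89: {scaled_mass_tev(m, SCALE_M89):6.2f} TeV   "
          f"MG,79: {scaled_mass_tev(m, SCALE_MG79):7.2f} TeV")
```

The MG,79 pion then lands at about 2.2 TeV and the MG,79 kaon at about 8 TeV, in line with the numbers discussed in the text.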
Consider first MG,79 hadron physics. m(K(79)) = 8 TeV is considerably higher than 5 TeV so that the
interpretation in terms of MG,79 hadron physics is not favored.
Consider next the interpretation as a meson of M89 hadron physics.
1. m(π(89)) = 67 GeV by naive scaling. There are indications for a meson-like state with mass of about 135 GeV coming from gamma rays arriving from the galactic center. The p-adic length scale hypothesis however allows one to consider octaves of the masses, and there are even slight indications for their occurrence. One can of course play with the possibility that a pion-like state with mass of about 67 GeV could have escaped detection. ρ(89) would have mass m(ρ(770), 89) = 385 GeV by naive scaling, and if an additional scaling by two is indeed present one would obtain 770 GeV.
2. The kaon mass would scale up to m(K(89)) = 250 GeV. A Google search for "bump at 250 GeV" indeed produces hits in which a 250 GeV bump appears. It might well be that I have also commented on this. The counterpart of the spin 1 kaon K*(892) would have mass of about 445 GeV.
3. The counterpart of the D meson with mass 2 GeV would have mass around 1.00 TeV. The strange D meson would have mass 1.05 TeV. The counterpart of J/Psi would have mass around 1.5 TeV. This is considerably lower than 2 TeV.
4. There are ccbar resonances Ψ and X around 4 GeV: for instance, Ψ(4160) would scale up to about 2.13 TeV and could serve as a candidate for the 2 TeV bump. It would however be more natural to have a ground state meson, and the J/Psi candidate has only 1.5 TeV mass. Maybe the interpretation as an MG,79 pion is more appropriate.
5. B meson with mass 5.3 GeV would scale up to 2.65 TeV. Charmed B meson would have mass
3.14 TeV.
6. The naive scaling of the mass of the 9.5 GeV bbar (Upsilon) meson would give 4.75 TeV - ten per cent smaller than 5.15 TeV. If one scales up the t quark mass to M89 one obtains 3.6 TeV. If constituent quark mass squared is additive, as p-adic mass calculations suggest, one obtains for the ttbar meson the mass sqrt(2)×3.6 TeV ≈ 5.1 TeV! Note that the ordinary top quark does not form hadrons since its lifetime against weak decays is shorter than the timescale of strong interactions. For M89 physics the situation might be different.
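The mass-squared additivity assumed in point 6 can be made explicit in one line; the 3.6 TeV value for the scaled-up M89 top quark is taken from the text, and the rest is just arithmetic.

```python
# Under mass-squared additivity for constituent quarks, a
# quark-antiquark meson built from two quarks of mass m has
# mass sqrt(m^2 + m^2) = sqrt(2)*m.

import math

def meson_mass(m1: float, m2: float) -> float:
    """Meson mass under constituent quark mass-squared additivity."""
    return math.sqrt(m1**2 + m2**2)

print(round(meson_mass(3.6, 3.6), 2))   # 5.09 (TeV), close to the 5.15 TeV bump
```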
The ultra-cautious conclusion is that if the bumps are real (they need not be), the 2 TeV bump provides support for MG,79 hadron physics and the 5 TeV bump support for M89 hadron physics.
posted by Matti Pitkanen @ 3:59 AM
07/25/2015 - http://matpitka.blogspot.com/2015/07/are-bacteria-able-to-inducedphase.html#comments
Are bacteria able to induce a phase transition to a superfluid phase?
Claims about strange experimental findings providing support for TGD have started to accumulate at an accelerating pace. During about a week I have learned about four anomalies! The identification of dark matter as heff phases of ordinary matter is the common denominator of the explanations of these findings.
1. First I learned about the 2 TeV bump at LHC providing evidence for MG,79 hadron physics (I had realized that it might show itself at LHC only a few weeks earlier!).
2. Then emerged the finding that the knock-out of genes need not affect gene expression, providing support for the vision that dark analogs of the basic biomolecules, identifiable in terms of dark proton states, are behind biochemistry, which serves only as a shadow of the deeper quantum biology.
3. Two days ago I learned about the discoveries about Pluto made by the New Horizons space probe, which have an explanation in terms of the same model that justifies the Expanding Earth hypothesis in the TGD framework, explaining among other things the mysteries of the Cambrian explosion in biology.
4. Today I learned from Nature News that a team led by H. Auradou reports, in the article "Turning Bacteria Suspensions into Superfluids" published in Physical Review Letters, that bacteria swimming in a fluid not only reduce its viscosity associated with shear stress (the viscous force parallel to the surface) but make it behave in a superfluid-like manner above a critical concentration of bacteria.
As the number of bacteria (E. coli) was increased, the viscosity associated with shear stress (the viscous force parallel to the surface) dropped: this is in accordance with theoretical expectations. After about 6 billion cells had been added (the fluid volume is not mentioned, but it seems that the effect occurs above a critical density of bacteria), the apparent viscosity dropped to zero (or, more precisely, below the experimental resolution). The superfluid-like behavior was preserved above the critical concentration. What is important is that this did not happen for dead bacteria: the bacteria play an active role in the reduction of viscosity.
Researchers are not able to identify the mechanism leading to the superfluid-like behavior, but some kind of collective effect is believed to be in question. The findings suggest that the flagella - a kind of spinning hair used by the bacteria to propel themselves - should play an essential part in the phenomenon. As bacteria swim, they fight against the current, decreasing the local forces between molecules that determine the fluid's viscosity. Above the critical density the local effects would somehow become global.
Cates et al have proposed this kind of phenomenon: see the article "Shearing Active Gels Close to
the Isotropic-Nematic Transition". The authors speak in the abstract about zero apparent viscosity.
1. The title of the article of Cates et al tells that the phenomenon occurs near the isotropic-nematic transition. A nematic is defined as a liquid crystal for which the molecules are thread-like and parallel. I dare to guess that in the present case the approximately parallel flagella would be modelled as a liquid-crystal-like 2-D phase at the surface of the bacterium. In the isotropic phase the orientations of the flagella would be uncorrelated, and long range orientational correlations would emerge in the phase transition to the nematic phase.
2. Also the notions of contractile and extensile gels are introduced. Contraction and extension of gels are thought to occur through molecular motors. The transformation of the fluid to an apparent superfluid would then require metabolic energy to run the molecular motors, so that ordinary superfluidity would not be in question.
3. The model predicts a divergence of viscosity for contractile gels. For extensile gels a zero of the apparent viscosity is predicted. There is a hydrodynamical argument for how this would occur, but I did not understand it. The active behavior of the bacteria would mean that the gel-like surface phase (nematic liquid crystal) formed by the flagella extends to reduce viscosity. If I have understood correctly, this applies only to the behavior of a single bacterium and concerns the reduction of viscosity in the immediate vicinity of the cell.
My deep ignorance about rheology allows me the freedom to speculate freely about the situation in the TGD framework.
1. In TGD-inspired biology, a gel phase corresponds to a phase which involves flux tube connections between basic units. The flux tubes contain dark matter with a non-standard value heff = n×h. The heff-changing phase transitions, scaling the lengths of the flux tubes in proportion to heff, are responsible for the contractions and extensions of the gel.
The extension of the gel should lead to a reduction of viscosity, since one expects that dissipative effects are reduced as heff increases and quantum coherence is established in longer scales. Large heff phases are associated with criticality. Now the criticality would be associated with the isotropic-nematic phase transition. The parallelization of the flagella would be due to the quantum coherence assignable with them.
Note that the mechanism used by bacteria to control the liquid flow would be different, since molecular motors are now replaced by heff-changing phase transitions, which play a key role in the TGD-inspired view of biochemistry. For instance, reacting biomolecules find each other via a heff-reducing phase transition contracting the flux tubes connecting them.
2. This model does not yet explain the reduction of the apparent viscosity to zero in the entire fluid occurring above a critical density of bacteria. What happens could be analogous to the emergence of high Tc superconductivity according to TGD. Below the pseudogap temperature the emergence of magnetic flux tube pairs makes superconductivity possible in short scales. At the critical temperature a phase transition occurs in which the flux tubes reconnect to form larger thermodynamically stable networks. One can speak of quantum percolation.
The reduction of viscosity for a single bacterium could be based on the phase transition of liquid molecules to dark molecules flowing along the resulting flux tubes with very small friction (large heff), but only below a certain scale smaller than the typical distance between bacteria. This would be the analog of what happens below the pseudogap. Above the critical density the magnetic flux tubes associated with the bacteria would reconnect, forming a network of connected flux tube paths at scales longer than the inter-bacterial distance. This would be the counterpart of the emergence of superconductivity by percolation in long scales.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:56 PM
07/25/2015 - http://matpitka.blogspot.com/2015/07/expanding-horizons-aboutpluto.html#comments
New Horizons about Pluto
New Horizons is a space probe that has just been passing by Pluto and has taken pictures about the
surface of Pluto and its Moon Kharon. The accuracy of the pictures is at best measured in tens of meters.
Pluto has lost its status as a genuine planet and is now regarded as dwarf planet in the Kuiper belt - a
ring of bodies beyond Neptune. Using Earthly unis its radius, mass (from New Horizons data), and
distance from Sun are R=.18RE, M= .0022× ME and d= 40 dE.
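As a quick consistency check of these ratios (a sketch using only the numbers quoted above), the surface gravity implied by them follows from g = GM/R^2 and lands close to the roughly 6 per cent of Earth's gravity mentioned below in connection with the leaking atmosphere.

```python
# Pluto's surface gravity relative to Earth from the quoted ratios:
# g_Pluto/g_Earth = (M/ME) / (R/RE)^2.

M_RATIO = 0.0022   # M_Pluto / M_Earth (quoted above)
R_RATIO = 0.18     # R_Pluto / R_Earth (quoted above)

def surface_gravity_ratio(m_ratio: float, r_ratio: float) -> float:
    """Surface gravity relative to Earth from mass and radius ratios."""
    return m_ratio / r_ratio**2

print(f"g_Pluto/g_Earth = {surface_gravity_ratio(M_RATIO, R_RATIO):.3f}")
```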
The pictures have yielded a lot of surprises. Pluto is not the geologically dead planet it was thought to be. The following summarizes what I learned by reading a nice popular article by Markku Hotakainen in the Finnish weekly journal "Suomen Kuvalehti", and also represents a TGD-based interpretation of the findings.
1. Surprisingly, the surface of Pluto is geologically young: the youngest surface shapes have an age of about 10^8 years, that is .1 billion years. This is strange since the temperature is about -240 °C on the cold side and Pluto receives from the Sun only 1/1000 of the energy received by Earth. Textbook wisdom tells that everything should have been geologically totally frozen for billions of years.
2. There is a large plain on Pluto - one guess is that it was born when an asteroid or comet collided with the surface of Pluto. The region is now officially called Tombaugh Regio. The reader can Google the reason for this. The flat region does not seem to have any craters, so it should be rather young. The boundary of this lowland area is surrounded by high (up to 3.5 km) mountains. Also these formations seem to be young. Nitrogen, methane, and CO ice cannot form such high formations.
Several explanations have been imagined for the absence of craters: maybe there are active processes destroying the craters very effectively. Maybe there is tectonic activity. This however requires an energy source. Radioactivity inside Pluto? Underground oceans liberating heat? Or maybe tidal forces: the motions of Pluto and its moon Kharon are locked so that they always turn the same side towards each other. There is a small variation in the distance of Kharon causing tidal forces. Could this libration deform Pluto and force the liberation of heat produced by frictional forces?
3. The flat region decomposes to large polygons with diameter of 20-30 km. The mechanism
producing the polygons is a mystery. Also their presence tells that the surface is geologically
young: at some places only .1 billion years old.
4. The atmosphere of Pluto has also yielded a surprise. About 90 per cent of the atmosphere (78 per cent on Earth) is nitrogen, but it is estimated to leak at a rate of 500 tons per hour, since the small gravitational acceleration (6 per cent of that on Earth) cannot prevent the gas molecules from leaking out. How does Pluto manage to keep so much nitrogen in its atmosphere?
5. Kharon - the largest moon of Pluto - has a radius which is half of that of Pluto. Also the surface texture of Kharon exhibits signs of upheavals and has similarities to that of Pluto. Craters seem to be lacking. The North Pole has a great dark region - maybe a crater. The equator is surrounded by precipices with depths of hundreds of meters, maybe up to kilometers. If the craters have been torn away, so should the precipices have been.
Can one understand the surface texture of Pluto and Kharon? Years ago I proposed a model for the finding that the continents of Earth seem to fit together nicely to form a single supercontinent if the radius of Earth is taken to be one half of its recent value. This led to a TGD variant of the Expanding Earth theory.
1. It is known that cosmic expansion does not occur locally. In the many-sheeted space-time of TGD this could mean that the space-time sheets of astrophysical objects comove at the large space-time sheet representing the expanding background but do not themselves expand. Another possibility is that they expand in rapid jerks by phase transitions increasing the radius. The p-adic length scale hypothesis suggests that scaling of the radius by two is the simplest possibility.
2. If this kind of quantum phase transition occurred for the space-time sheet of Earth about .54 billion years ago, it could explain the weird things associated with the Cambrian explosion. Suddenly totally new life forms appeared as if from nowhere, only to disappear soon in the fight for survival. Could highly evolved life in underground seas, shielded from UV radiation and meteoric bombardment, have burst to the surface? The process would also have reduced the value of the gravitational acceleration by a factor 1/4 and increased the length of the day by a factor 4. The reduction of the surface gravity might have led to the emergence of various gigantic lifeforms such as dinosaurs, which later lost the evolutionary battle because of their small brains. The climate would also have changed dramatically, and the Snowball Earth model is replaced by a new view.
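The arithmetic behind the factors 1/4 and 4 can be made explicit. If the radius doubles at constant mass, the surface gravity g = GM/R^2 drops by a factor 4, and conservation of angular momentum L = Iω with I ∝ MR^2 slows the rotation by the same factor. The pre-expansion values below (a 6-hour day and 4 times the present surface gravity) are just the present Earth values run backwards through the transition, for illustration.

```python
# Effect of the hypothesized jerk-wise transition R -> 2R at constant
# mass: g = GM/R^2 falls by 4; L = I*omega with I ~ M*R^2 means
# omega falls by 4, so the day length grows by 4.

def after_radius_doubling(g: float, day_hours: float) -> tuple:
    """Return (surface gravity, day length) after the transition R -> 2R."""
    factor = 2 ** 2   # R^2 grows by a factor 4
    return g / factor, day_hours * factor

# Illustrative: present-day Earth values from a pre-expansion state.
print(after_radius_doubling(4 * 9.8, 6.0))   # (9.8, 24.0)
```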
If these sudden quantum phase transitions at the level of dark matter (heff = n×h phases of ordinary matter) are the manner in which cosmic expansion universally happens, then also Pluto might show the signs of this mechanism.
1. The surface of Pluto is indeed geologically young: the age is measured in hundreds of millions of years. Could the sudden jerk-wise expansion have occurred not only for Earth but for objects in some region surrounding Earth, containing also Pluto?
2. The polygonal structure could be understood as a ripping of the surface of Pluto in the sudden expansion, involving also the cooling of magma and its compression (the analogy is what happens to wet clay as it dries and becomes solid). The lowland region could correspond to magma burst out from the interior of Pluto, analogous to the magma at the bottom of the oceans on Earth. The young geological age of this region would explain the absence of craters. Also the surface texture of Kharon could be understood in a similar manner.
Could one understand the presence of nitrogen?
1. If the gravitational acceleration was 4 times larger (24 per cent of that on Earth) before the expansion, the leakage would have been slower before it. Could this make it easier to understand why Pluto has so much nitrogen? Could the burst of material from the interior have increased the amount of nitrogen in the atmosphere? A geochemist could probably answer these questions.
2. A more radical explanation is that primitive life forms have prevented the leakage by binding the nitrogen to organic compounds like methane. If underground oceans indeed existed (and maybe still exist) on Pluto, as they seem to exist on Mars, one can wonder whether life has been evolving as an underground phenomenon also on Pluto - as so many nice things in this Universe must do ;-). Could these lifeforms have erupted to the surface of Pluto from the underground seas in the sudden expansion, and could some of them - maybe primitive bacteria - have survived? Nitrogen is essential for life, and life binds nitrogen to heavier chemical compounds so that its leakage slows down. Could there exist an analog of the nitrogen cycle, meaning that underground life binds nitrogen from the atmosphere of Pluto and slows down its leakage?
For background see the chapter Expanding Earth Model and Pre-Cambrian Evolution of Continents,
Climate, and Life of "Genes and Memes". For a summary of earlier postings see Links to the latest
progress in TGD.
posted by Matti Pitkanen @ 12:45 AM
3 Comments:
At 2:59 AM,
Leo Vuyk said...
Matti, I surely would support your interesting view about expanding planets. However
something tells me that we need comets crashing into planets and moons which violate of
the second law of thermo, by some sort of a new micro dark matter black hole.
At 3:39 AM,
[email protected] said...
Dark matter could have served as template for the formation of planets and also comets.
My point here was that if this jerk wise step in cosmic expansion occurred for only .1
billion years ago, the number of craters is small as observed. I do not understand why the
Second Law must be violated.
At 10:06 AM,
Leo Vuyk said...
"I do not understand why the Second Law must be violated." it is my assumptional
perception indeed
07/22/2015 - http://matpitka.blogspot.com/2015/07/direct-evidence-for-dark-dna.html#comments
Direct evidence for dark DNA?!
This morning I learned from ScienceDaily about an extremely interesting finding related to DNA. The finding is just what a breakthrough discovery should be: something impossible in the existing world view.
What has been found is that knock-out of a gene (removing parts of the gene to prevent transcription to mRNA) and knock-down of a gene (preventing protein translation) seem to have different consequences. Removing parts of a gene need not have the expected effect at the level of proteins! Does this mean that somehow DNA as a whole can compensate for the effects caused by knock-out but not those caused by knock-down? Could this be explained by assuming that the genome is a hologram, as Gariaev et al first suggested?
Also TGD leads to a vision about living systems as conscious holograms. Small local changes of genes could be compensated. Somehow the entire genome would react like the brain to local brain damage: other regions of the brain take over the duties of the damaged region.
Could the idea of the DNA double strand as a nano-brain, having left and right strands instead of hemispheres, help here? Does DNA indeed act as a macroscopic quantum unit? The problem is that transcription is a local rather than holistic process. Something very simple should lurk behind the compensation mechanism.
Could transcription transform dark DNA to dark mRNA?
Also the TGD-based notion of dark DNA comes to mind (see this and this). Dark DNA consists of dark proton sequences for which the states of a single dark proton correspond to those of DNA, mRNA, amino-acids, and tRNA. Dark DNA is one of the speculative ideas of TGD-inspired quantum biology getting support from Pollack's findings. Ordinary biomolecules would only make their dark counterparts visible: dark biomolecules would serve as templates around which ordinary biomolecules such as DNA strands are formed in the TGD Universe.
Although ordinary DNA is knocked out of the ordinary gene, the dark gene would still exist! If dark DNA actually serves as the template for the transcription to mRNA, everything is still OK after the knock-out! Could it be that we do not understand even transcription correctly? Could it actually occur at the level of dark DNA and dark mRNA?! Dark mRNA would attach to dark DNA, after which ordinary mRNA would attach to the dark mRNA. One step more!
Damaged DNA could still do its job! DNA transcription would have very little to do with biochemistry! If this view about DNA transcription is correct, it suggests a totally new manner of fixing DNA damage. The damage could actually be at the level of dark DNA, and the challenge of dark genetic engineering would be to modify dark DNA to achieve proper functioning.
Could dark genetics help to understand the non-uniqueness of the genetic code?
Also translation could be based on the pairing of dark mRNA and dark tRNA. This suggests a fresh perspective on some strange and even ugly-looking features of the genetic code. Are DNA and mRNA always paired with their dark variants? Do also the amino-acids and anticodons of tRNA pair in this manner with their dark variants? Could the pairings at the dark matter level be universal and determined by the pairing of dark amino-acids with the anticodons of dark RNA? Could the anomalies of the code be reduced to the non-uniqueness of the pairing of dark and ordinary variants of the basic biomolecules (the pairings RNA--dark RNA, amino-acid--dark amino-acid, and amino-acid--ordinary amino-acid in tRNA)?
1. There are several variants of the genetic code differing slightly from each other: the correspondence between DNA/mRNA codons and amino-acids is not always the same. Could the dark-dark pairings be universal? Could the variations in the dark anticodon--anticodon pairing and the dark amino-acid--amino-acid pairing in tRNA molecules explain the variations of the genetic code?
2. For some variants of the genetic code a stop codon can code for an amino-acid. The explanation at the level of tRNA seems to be the same as in the standard framework. For the standard code the stop codons do not have tRNA representatives. If a stop codon codes for an amino-acid, the stop codon has a tRNA representation. But how does the mRNA know that the stop codon is indeed a stop codon if the tRNA associated with it is present in the same cell?
Could it be that the stop codon property is determined already at the level of DNA and mRNA? If the dark variant of a genuine stop codon is missing in DNA and therefore also in mRNA, the translation stops if it is induced from that at the level of dark mRNA. Could also the splicing of mRNA be due to the splitting of dark DNA and dark mRNA? If so, genes would be separated from the intronic portions of DNA in that only they would pair with dark DNA. Could it be that the intronic regions do not pair with their dark counterparts? They would be specialized to topological quantum computations in the TGD-inspired proposal.
The start codon (usually AUG, coding for Met) defines the reading frame (there are 3 possible reading frames). Dark DNA would naturally begin from this codon.
3. Also two additional amino-acids, Pyl and Sec, appear in nature. Gariaev et al have proposed that the genetic code is context dependent, so that the meaning of a DNA codon is not always the same. This non-universality could be reduced to the non-uniqueness of the dark amino-acid--amino-acid pairing in tRNA if the genetic code itself is universal.
Could dark genetics help to understand wobble base pairing?
Wobble base pairing is a second not-so-well-understood phenomenon. In the standard variant of the code there are 61 mRNA codons translated to amino-acids. The number of tRNA anticodons (formed by pairs of amino-acid and RNA molecules) should also be 61 in order to have a 1-1 pairing between tRNA and mRNA. The number of ordinary tRNAs is however smaller than 61, in the sense that the number of RNAs associated with them is smaller than 45. tRNA anticodons must be able to pair with several mRNA codons coding for a given amino-acid. This is possible since the tRNA anticodons can be chosen as representatives of the mRNA codons coding for a given amino-acid in such a way that every mRNA codon coding for the same amino-acid pairs with at least one tRNA anticodon.
1. This looks somewhat confusing but is actually very simple: the genetic code can be seen as a composite of two codes: first the 64 DNA/mRNA codons are coded to N < 45 anticodons in tRNA, and then these N anticodons are coded to 20 amino-acids. One must select N anticodon representatives for the mRNAs in the 20 sets of mRNA codons coding for a given amino-acid such that each amino-acid has at least one anticodon representative. A large number of choices is possible, and the wobble hypothesis of Crick poses constraints reducing the number of options.
2. The wobble hypothesis of Crick states that the nucleotide in the third codon position of the RNA codon of tRNA has the needed non-unique base pairing: this is clear from the high symmetries with respect to the third base. There is an exact U-C symmetry and an approximate A-G symmetry with respect to the third base of the RNA codon (note that the conjugates of RNA codons are obtained by the A↔U and C↔G permutations).
3. The first two bases of the codon pair in a 1-1 manner with the second and third bases of the anticodon. The third base of the anticodon corresponds to the third letter of the mRNA codon. If it is A or C the correspondence is assumed to be 1-to-1: this gives 32 tRNAs. If the first base of the anticodon is G or U, two mRNA bases can pair with it: these would naturally be A for G and C for U by symmetry. One would select A from the A-G doublet and C from the U-C doublet. This would give 16 anticodons: 48 anticodons altogether, which is however larger than 45. Furthermore, this would not give quite the correct code, since the A-G symmetry is not exact.
A smaller number of tRNAs is however enough, since the code also has a not-yet-utilized almost-symmetry with respect to the A and C exchange. The trick is to replace in some cases the first base of the anticodon with inosine I, which pairs with 3 mRNA bases. This replacement is possible only for those amino-acids for which the number of RNAs coding the amino-acid is 3 or larger (the amino-acids coded by 4 or 6 codons).
4. It can be shown that at least 32 different tRNAs are needed to realize the genetic code by using wobble base pairing. Full A-C and G-U symmetry for the third base of the codon would give 16+16 = 32 codons. Could one think that tRNA somehow realizes this full symmetry?
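The counting in point 4 can be sketched explicitly: with full U-C and A-G symmetry in the third position, an anticodon needs to distinguish only the 4×4 = 16 "boxes" fixed by the first two bases, each split into a pyrimidine class {U, C} and a purine class {A, G}.

```python
# Counting sketch for the full-wobble-symmetry argument:
# 16 first-two-base boxes x 2 third-base classes = 32 anticodon classes,
# versus 4^3 = 64 codons in total.

from itertools import product

BASES = "UCAG"
THIRD_BASE_CLASSES = (frozenset("UC"), frozenset("AG"))  # full wobble symmetry

anticodon_classes = {
    (b1, b2, cls)
    for b1, b2 in product(BASES, repeat=2)
    for cls in THIRD_BASE_CLASSES
}

print(len(list(product(BASES, repeat=3))))   # 64 codons in total
print(len(anticodon_classes))                # 32 anticodon classes
```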
How could dark variants help to understand wobble base pairing? Suppose for a moment that visible genetics is a shadow of the dark one and fails to represent it completely. Suppose the pairing of ordinary and dark variants of tRNA anticodons resp. amino-acids, and suppose that translation proceeds at the level of dark mRNA, dark anticodons, and dark amino-acids, being made visible by its biochemical shadow. Could this allow one to gain insights into wobble base pairing? Could the peculiarities of tRNA serve some other - essentially biochemical - purposes?
The basic idea would be simple: chemistry does not determine the pairing, which occurs at the level of the dark mRNA codons and dark tRNA anticodons. There would be no need to reduce the wobble phenomenon to biochemistry, and the only assumption needed would be that chemistry does not prevent the natural dark pairing producing the standard genetic code, apart from the modifications implied by the non-standard dark amino-acid--amino-acid pairing explaining the different codes and the possibility that a stop codon can in some situations pair with dark mRNA.
One can consider two options.
1. The number of dark tRNAs is 64, and the pairings between dark mRNA and dark anticodons and between dark anticodons and dark amino-acids are 1-to-1; only the pairing between dark RNA codons and the anticodons in tRNA is many-to-1.
2. The model of the dark genetic code suggests that there are 40 dark proton states, which could serve as dark analogs of tRNA. This number is larger than the 32 needed to realize the genetic code as a composite code. I have cautiously suggested that the proposed universal code could map dark mRNA states with the same total spin (there is a breaking of rotational symmetry to rotations around the axis of the dark proton sequence) to dark tRNA/dark amino-acid states with the same total spin. The geometric realization would be in terms of color flux tubes connecting the dark protons of the corresponding dark proton sequences. Also in ordinary nuclei the nucleons are proposed to be connected by color flux tubes so that they form nuclear strings, and dark proton sequences would be essentially dark variants of nuclei.
One should understand the details of the dark mRNA--tRNA anticodon correspondence. One can
also ask whether the dark genetic code and the code deduced from the geometric model of music
harmony in terms of Platonic solids are mutually consistent. This model implies the decomposition of
the 60+4 DNA codons into 20+20+20+4 codons, where each "20" corresponds to one particular icosahedral
Hamilton's cycle with characteristic icosahedral symmetries. "4" can be assigned to a tetrahedron regarded
either as disjoint from the icosahedron or as glued to it along one of its faces. This makes it possible to understand both the
standard code and the code with two stop codons in which the exotic amino acids Pyl and Sec appear. One
should understand the compositeness 64 → 40 → 20 of the dark genetic code and whether it relates
to the icosatetrahedral realization of the code.
I have proposed that dark variants of transcription, translation, etc. can occur and make possible a
kind of R&D laboratory in which organisms can test the consequences of variations of DNA. If ordinary
translation and transcription are induced from their dark variants and if dark biomolecules can also
appear as unpaired variants, these processes could occur as purely dark variants. Organisms could
indeed experiment in a virtual world model of biology, and pairing with ordinary biomolecules would make things real.
For background see the chapter Quantum Gravity, Dark Matter, and Prebiotic Evolution. For a
summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 9:33 PM
4 Comments:
At 9:29 AM,
L. Edgar Otto said...
Matti, only in the last decade have ideas of quantum effects say on the scale of a brain
been taken seriously by more and more thinkers. But do we not have the idea of jumping
genes? (How some confluence of gene material may 'know at a distance' that another has been
deleted?)
This post is a qualitative speculation, not to say a profound general breakthrough by a
few brave physics and mathematical thinkers, and this poet, independently converge to
similar or ultimate considerations... Such as the genome and dark matter, black holes or
worm holes, qm effects on a hierarchy of various levels or of a hierarchy of fundamental
constants. Vague or indefinite things intuitively based on vague things.
But what about quantitative measure? For example, if I asserted qualitatively we can
have an explicit calculation for something like string or (brane) theory weights - Pitkanen
particle states based on prime number theory (p-adic, Gaussian etc...) concepts could as
well be implied or in actuality applied to the background of the DNA-RNA systems
qualitatively in the role of organic or inorganic expression, although we have not explicitly
shown the simple arithmetic or topology save in theory, including the generalization as
you point out for Nature's various forms of error correction to complete gaps for healing
both in the body and the brain.
This is what my quasics is all about in the sense explicit middle ground between the
qualitative and quantitative, where these can be physically realized as a universe of
quasifinite modeling. In this we tend to make surprising simulations such as the generation
problem applied to Nova explosions or particle and biological symmetries.
At 12:16 AM,
[email protected] said...
My view is that precise quantitative predictions are the icing on the cake when a theory
has been really understood. First the principles, then qualitative models, then general
expressions for, say, scattering amplitudes, and finally calculational algorithms.
M-theory, and actually the entire TOE approach, tried to go directly to the third stage
and failed miserably: the superstringy explanation of the 2 TeV bump mentioned in
another posting is a comic manifestation of the wrong order of things.
One must first predict the significant bits correctly. Dark DNA might represent one of
these significant bits forcing a change of the entire world view: just a single bit! See for instance
the possible clarification it brings to the understanding of the peculiar, ugly features of the
genetic code such as non-uniqueness and the wobble phenomenon.
At 1:06 PM,
L. Edgar Otto said...
Matti, concerning the wobble hypothesis, you obviously have not seen my 'quasic' chart
of my week's moment of 1974. We are old warriors on the next new frontiers. But do any
of us have time just to duplicate the same old ideas? With respect & little time left, and a
struggle with detractors, your take on new physics and wisdom should better be free to
focus on and discover new things at the frontier. Yours is an example that should inspire
generations of young inquirers.
At 8:56 PM, [email protected] said...
I must confess that I feel myself frustrated now and then. I have few years left and I
work like a horse to articulate TGD views so that it would be easy for those to come learn
it and the official academic science refuses to even admit even my existence. I know - a
stupidity of even this caliber should not be reason for me to lose my temper. Human
stupidity is simply a fundamental renewing resource - I cannot do anything for it.
I have said what follows many times but is so important that I do not bother to
represent excuses for repeating myself instead of doing my best to be just entertaining.
For the last decade I have been developing a radically new vision having its roots in
Quantum TGD: dark matter as quintessence of life. Genetic code would be realised in
terms of dark proton sequences and biochemistry would be only a shadow of what happens
at the level of dark matter. This is a revolutionary idea. I wish so much that a typical
academic reader could somehow get this point if he ever reads the above lines.
This vision of course remains a mere useless speculation unless I manage to bind it
with empiria. I have worked for a decade to identify anomalies having explanation in
terms of dark matter.
This finding about the strange behavior of DNA looks very much like an anomaly and gives
direct support for one key aspect of this view: the realisation of the genetic code already at the
level of dark nuclear physics - large heff etc. The belief that biochemistry is behind the
genetic code would be a huge mistake created by the reductionistic dogma.
Even better - this hypothesis also gives excellent hopes for understanding why the
attempt to explain the genetic code in terms of biochemistry yields something as ugly as the
wobble hypothesis. At a deeper level everything would be elegant.
Here I could try to talk about the huge importance of a real understanding of biology for
medicine, to stir the interest of the reader interested in future applications.
Problems do not get old, solutions do. What I love in science is that old dead dogmas
are eventually replaced by better theories. This process has started now. I expected that the
revolution would begin from theoretical elementary particle physics but condensed matter
physicists, biologists and neuroscientists seem to be doing it while particle physicists are
concentrating on desperate attempts to keep SUSY and superstrings alive.
Sorry, I got emotional;-). Sometimes it cannot be avoided.
07/21/2015 - http://matpitka.blogspot.com/2015/07/further-evidence-that-2-tev-bumpcould.html#comments
Further evidence that 2 TeV bump could be MG,79 pion
A few days ago I told about 3.5 sigma local evidence for a bump at about 2 TeV observed both
at ATLAS and CMS. The title of the posting was "Pion of MG,79 hadron physics at LHC?". I learned
about the bump from a posting of Lubos. Lubos was enthusiastic about a left-right symmetric variant of
the standard model as an explanation of the bump.
Five days later Lubos is more enthusiastic about a superstring-inspired explanation of the bump. The title
of his posting is The 2 TeV LHC excess could prove string theory. The superstringy model
involves as many as six superstring phenomenologists as chefs, and the soup contains intersecting branes
and anomalies as ingredients.
The article gives further valuable information about the bump also for those who are not terribly
interested in intersecting branes and the addition of new anomalous factors to the standard model gauge
group. The following arguments show that the information is qualitatively consistent with the TGD-based model.
1. The bump is consistent with ZZ, WZ, and according to Lubos also Zγ final states, and lies in the
range 1.8-2.1 TeV. Therefore the bump could involve both charged and neutral states. If the bump
corresponds to a neutral elementary particle such as a new spin-1 boson Z' as proposed by the
superstring sextet, the challenge is to explain the ZZ and Zγ bumps. WZ pairs cannot result from
primary decays.
2. There is a dijet excess, which is roughly a factor of 20 larger than the weak boson excesses. This
would suggest that some state decays to quarks or their excitations, and the large value of the QCD
coupling strength gives rise to the larger excess. This also explains why no lepton excess is
observed.
For the superstring-inspired model the large branching fraction to hadronic dijets, suggesting the
presence of strong interactions, is a challenge: Lubos does not comment on this problem. Also the
absence of leptonic pairs is problematic, and the model builders deduce that Z' suffers from a syndrome
known as lepto-phobia.
3. Neutral and charged MG,79 pions can decay to a virtual MG,79 or M89 quark pair annihilating further
to a pair of weak bosons (also a γγ pair is predicted) or, by the exchange of a gluon, to an MG,79, M89 (or
M107) quark pair producing eventually the dijet. This would explain the observations
qualitatively. If the order of magnitude for the relative mass splitting between the neutral and
charged MG,79 pion is the same as for the ordinary pion, the relative splitting is of order ΔM/M ≈
1/14 - less than 10 per cent, meaning ΔM < 0.2 TeV. The range for the position of the bump is about
0.3 TeV.
4. The predictions of the TGD model are in principle calculable. The only free parameter is the MG,79
color coupling strength, so the model is easy to test.
For more details see the chapter New Particle Physics Predicted by TGD: part I of "p-Adic Length Scale
Hypothesis" or the article What is the role of Gaussian Mersennes in TGD Universe? For a summary of
earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 3:47 AM
07/20/2015 - http://matpitka.blogspot.com/2015/07/topological-order-and-quantumtgd.html#comments
Topological order and quantum TGD
Topological order is a rather advanced concept of condensed matter physics. There are several
motivations for the notion of topological order in TGD.
1. TGD can be seen as an almost topological QFT. 3-D surfaces are by holography equivalent with 4-D
space-time surfaces and by the strong form of holography equivalent with string world sheets and
partonic 2-surfaces. What makes this duality possible is the super-symplectic symmetry realizing
the strong form of holography and quantum criticality realized in terms of the hierarchy of Planck
constants characterizing a hierarchy of phases of ordinary matter identified as dark matter. This
hierarchy is accompanied by a fractal hierarchy of sub-algebras of the super-symplectic algebra
isomorphic to the entire algebra: Wheeler would talk about symmetry breaking without
symmetry breaking.
2. The heff = n×h hierarchy corresponds to an n-fold singular covering of the space-time surface for which the
sheets of the covering coincide at the boundaries of the causal diamond (CD), and the n sheets
together with superconformal invariance give rise to n additional discrete topological degrees of
freedom - one has particles in a space with n points. Kähler action for preferred extremals reduces
to Abelian Chern-Simons terms characterizing a topological QFT. Furthermore, the simplest
example of topological order - point-like particles, which can be connected by links - translates
immediately to collections of partonic 2-surfaces and strings connecting them.
3. There is also the braiding of fermion lines/magnetic flux tubes, the Yangian product and co-product
defining the fundamental vertices, and quantum groups associated with finite measurement resolution and
described in terms of inclusions of hyper-finite factors.
In the article Topological order and Quantum TGD, topological order and its category theoretical
description are considered from the TGD point of view - category theoretical notions are indeed very natural
in the TGD framework. The basic finding is that the concepts developed in condensed matter physics
(topological order, rough description of states as tangles (graphs imbedded in 3-D space), ground state
degeneracy, surface states protected by symmetry or topology) fit very nicely into the TGD framework and
have an interpretation in terms of the new space-time concept. This promises applications also in the
conventional areas of condensed matter physics such as a more precise description of the solid, liquid, and gas
phases.
See the chapter Criticality and dark matter of "Hyper-finite Factors, p-Adic Length Scale Hypothesis,
and Dark Matter Hierarchy". For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 8:11 PM
8 Comments:
At 8:24 AM,
Anonymous said...
Hi!
During my spare time, I do research on probability and statistics with basic knowledge of
physics (up to QFT). I believe TGD is the next step once I master QFT, which still may
take a while. Recently I have been trying to understand the functioning of the brain from a
statistical point of view and how it relates to quantum mechanics, which should enter the
picture at some point as consciousness is hard to explain otherwise.
My question relates to the functioning of the Hippocampus. This far I found the most
plausible idea is the Hopfield neural network, which is now rather well understood and
shows how memories could be encoded in the synapses. I thought about this for a while
and found out that it seems a rather robust mechanism.
On the other hand, you propose that quantum phenomena enter the picture already
when considering human memory. May I ask, if there is specific evidence that a classical
model such as the Hopfield network could not explain the formation of human memories?
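For concreteness, the Hopfield network under discussion can be sketched in a few lines (Hebbian storage of ±1 patterns and asynchronous sign updates; the pattern sizes and corrupted bits below are arbitrary illustrative choices of mine, not anything from the discussion):

```python
# Minimal Hopfield associative memory: store +-1 patterns with the Hebbian
# rule, then recall a stored pattern from a corrupted cue by asynchronous
# sign updates until a fixed point is reached.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, sweeps=10):
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:          # fixed point reached
            break
    return s

p1 = [1] * 8 + [-1] * 8          # two orthogonal 16-bit patterns
p2 = [1, -1] * 8
w = train([p1, p2])
cue = list(p1); cue[0] = -cue[0]; cue[5] = -cue[5]   # corrupt two bits
print(recall(w, cue) == p1)      # the corrupted cue is cleaned up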
Some years ago I posed a few questions on the open problems in the mathematics of
TGD. Since then I moved to the private sector and my progress has slowed down. But I
think I will eventually take on these issues as well.
At 10:00 AM,
[email protected] said...
Thank you for the question. I think that one must make a distinction between memories
as learned behaviours - I would not call them memories - and episodic
memories, which are literally re-experiences, for instance sensory ones.
I believe that the Hopfield model and changes in synapses explain quite well memories as
behaviours but fail in the case of episodic memories.
I recently read Musicophilia by Sacks: it is a marvelous book and presents a lot of
examples of sensory memories. People who are idiot savants or have suffered damage
to the left brain can often listen to music: one victim of this kind of damage became a composer. Phantom
leg could be one example of sensory memories. Wittgenstein's brother, who lost his hand in
World War I, played with his phantom hand and developed fingerings for piano pieces!
Also the visual and auditory sensations just when one falls asleep or wakes up would be
sensory memories.
Here the new view about time would make itself concrete in a rather surreal looking
manner: Wittgenstein's brother's right hand fingers would be literally in the geometric past, located
in the space-time before World War I! He played together with his past self!
At 11:14 AM,
Anonymous said...
Thanks for the reference. That book sounds interesting. Also, as for me personally, I
realized I probably should not neglect the role of music. It is interesting that another aspect
of the mind, imagination, relates to the episodic memories and savant skills. I have studied
a lot the works of Dominic O'Brien and others in the memory sports and realized the
human mind can memorize images quite efficiently.
Originally I wanted to disprove the validity of the Hopfield Network for bounded
synaptic weights, but later on realized you could in principle allow this case as well
without losing too much capacity.
It is true that it is not that easy to think about a process to store episodic memories in
the Hopfield Network. You would probably need some sort of encoding mechanism and I
am not sure if this makes sense. Do you think there are strong arguments for quantum
phenomena related to episodic memory? As for the physiology, I am only at the beginning
stages in learning neuroscience, as a hobby again :-)
By the way, did you work on short-term (working) memory as well? I studied this
at the level of Baddeley's model. It seems interesting that the visual and auditory working
memory capacities seem to be limited.
As a matter of fact, I am also from Finland but have emigrated to Belgium. I found the
atmosphere in Finland getting too isolated and negative. But I am happy to see some
people keep up the good work like yourself!
At 9:52 PM, [email protected] said...
In TGD framework, the general mechanism does not distinguish between short term
and long term memories.
Limitations for auditory and sensory working memory could derive simply from the
finite information storage capacity of the brain areas in question. Also from finite
metabolic resources: 7+/-2 rule.
You ask what problems the Hopfield model has. The following is a poorly articulated view.
*Time is crucial for memory and we do not understand time in the standard physics
framework. The Hopfield model accepts the identification of subjective time and geometric
time. I have mentioned Libet's findings many times: they are taken as proof that free
will is an illusion.
*The identification of all memories as behavioural patterns is wrong - episodic memories
are the genuine memories and probably have very little to do with the formation of
associations.
*Synaptic contacts develop all the time. How is change avoided for the longest-term
memories? It is difficult to understand the fact that the episodic memories of youth
seem to be the most stable ones. My Grandma literally lived in her youth for several
years!
*The neurons in the hippocampus (at least), and therefore also their synaptic contacts, are
regenerated. How can memories survive this process? I also remember a
documentary program about a person who had lost almost all of his brain and was able to
do mathematics!
To me, memory recall looks different from memory storage: in the Hopfield model this is
not the case, since what is in question is essentially learned behaviours stimulated by inputs as
association sequences.
At 10:00 PM, [email protected] said...
Something which I probably already more or less told. You can skip if you wish;-)!
In the TGD framework the brain is essentially 4-D (I have talked a lot about ZEO, CDs,
and their hierarchy). Memory storage in the brain of the geometric past, where the event
occurred, is the most elegant option since it gives maximal storage capacity. Memory
recalls also automatically create new copies of the memory.
Memory recall could be seen as communication with the brain of the geometric past by
negative energy signals reflected back - seeing in the time direction. This is possible only in
ZEO-based quantum theory.
Episodic memories could actually be genuine experiences if one accepts the TGD view
about self. Mental images are subselves, and time-reversed subselves give rise to episodic
memories with sensory input from the geometric past, even from childhood! I already told about
Wittgenstein's brother. A person divided into two parts living in different geometric times is
a good idea for a conscifi story!
One challenge is to gain a more concrete understanding of the role of the hippocampus in
generating signals to the geometric past/building time-reversed mental images. What is the
relation to neuroscience? Ultra-low frequency dark photons below the EEG range, which yield
bio-photons in their decays, would be the communication tool.
At 10:05 PM, [email protected] said...
I have lived my whole life in Finland. I like ordinary people although I cannot share
the attitudes towards, say, Greeks, which have become rather irrational thanks to the
propaganda in the Finnish media. We are living through a crisis of leadership: a kind of ethical and
moral decline.
Extreme negativity is the problem: the law of Jante expresses what this negativity is. I
have worked almost 40 years on TGD without a single coin of funding and am still labelled
as a madman in academic circles. Often the attitudes of academic people are openly
hostile. These people refuse communication and even to admit my existence.
"Ordinary" people behave differently in this respect.
At 12:21 AM,
Anonymous said...
Thank you for your answers! Libet's findings are interesting and I understood they
seem to support backward causation. Though the community has also attacked the validity
of these results.
I think it is indeed hard to store long-term memories in the Hopfield network. In the
bounded synaptic weights case, memories tend to be forgotten. I will probably analyze this
in detail later on.
I think the problems with the academic world relate to the social dynamics, indeed
Jante's law. Success in the society, even in the supposedly enlightened academic world,
does not seem to go hand in hand with actual competence as ability and will to control
group dynamics is at least as important. Personally I was a researcher before, but dropped
out at the post-doc phase and moved to companies.
At 2:00 AM,
[email protected] said...
I think that no-one denies that readiness potentials are real. One can of course imagine
all kinds of explanations. For instance, one can argue that the person is subconsciously
thinking about initiating the action before the conscious decision.
Sticking to determinism has simple motivations: intentional free will does not allow a
description in a Newtonian world, and standard quantum theory allows only randomness
at the ensemble level. A phenomenon is accepted only after a language and concepts to describe
it exist.
07/20/2015 - http://matpitka.blogspot.com/2015/07/stephen-crowley-made-veryinteresting.html#comments
Congruence subgroups of SL(2,R), Monster Moonshine, Gaussian Mersennes, and p-adic physics
Stephen Crowley made a very interesting observation about Gaussian Mersennes in the comment
section of the posting Pion of MG,79 hadron physics at LHC?. I glue the comment below.
Matti, why Low Gaussian primes? Your list of primes is a subset of the factors of the dimension of
the friendly giant group.
The monster group was investigated in the 1970s by mathematicians Jean-Pierre Serre, Andrew Ogg
and John G. Thompson; they studied the quotient of the hyperbolic plane by subgroups of SL2(R),
particularly, the normalizer Γ0(p)+ of Γ0(p) in SL(2,R). They found that the Riemann surface resulting
from taking the quotient of the hyperbolic plane by Γ0(p)+ has genus zero if and only if p is 2, 3, 5, 7, 11,
13, 17, 19, 23, 29, 31, 41, 47, 59 or 71. When Ogg heard about the monster group later on, and noticed
that these were precisely the prime factors of the size of Monster, he published a paper offering a bottle
of Jack Daniel's whiskey to anyone who could explain this fact (Ogg (1974)).
I must first try to clarify to myself some definitions so that I have some idea about what I am talking
about.
1. The principal congruence group Γ(p) is the kernel of the reduction homomorphism mapping SL(2,Z) to
SL(2,Z/pZ) and thus consists of the SL(2,Z) matrices which are unit matrices modulo p. The
congruence group Γ0(p) is the larger subgroup of SL(2,Z) matrices whose lower left entry vanishes
modulo p; more generally, Γ0(n) for composite n is contained in Γ0(p) for every prime p dividing n.
A congruence group can be regarded as a subgroup of the p-adic variant of SL(2,Z) with elements
restricted to be finite as real integers. One can give up finiteness in the real sense by introducing
the p-adic topology so that one has SL(2,Zp). The points of the hyperbolic plane on the orbits of the
normalizer Γ0(p)+ are identified.
2. The normalizer Γ0(p)+ of Γ0(p) is the subgroup of SL(2,R) mapping Γ0(p) to itself under conjugation
without necessarily commuting with its individual elements. The quotient of the hyperbolic plane by the
normalizer is a sphere for the primes k associated with Gaussian Mersennes up to k=47. The normalizer
in SL(2,Zp) would also make sense, and an interesting question is whether the result can be translated
to the p-adic context. Also the possible generalization to SL(2,C) is interesting.
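These definitions can be checked numerically. A minimal sketch: conjugation by the Fricke element W_p = [[0,-1],[p,0]] (which, rescaled by 1/sqrt(p), represents the extra coset of the normalizer Γ0(p)+) maps Γ0(p) into itself. The one-parameter family of Γ0(p) elements below is my own illustrative choice:

```python
from fractions import Fraction

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def in_gamma0(M, p):
    """M lies in Gamma_0(p): determinant 1 and lower-left entry divisible by p."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return det == 1 and M[1][0] % p == 0

p = 7
W = [[Fraction(0), Fraction(-1)], [Fraction(p), Fraction(0)]]        # Fricke element
Winv = [[Fraction(0), Fraction(1, p)], [Fraction(-1), Fraction(0)]]  # its inverse

# a two-parameter family [[1, b], [p*c, 1 + p*b*c]] of Gamma_0(p) elements:
# the determinant is 1 by construction and the lower-left entry is p*c
for b in range(-3, 4):
    for c in range(-3, 4):
        M = [[1, b], [p * c, 1 + p * b * c]]
        assert in_gamma0(M, p)
        conj = mat_mul(mat_mul(W, M), Winv)
        assert all(x.denominator == 1 for row in conj for x in row)  # stays integral
        assert in_gamma0(conj, p)
print("conjugation by the Fricke element preserves Gamma_0(7)")
```

Working out the conjugation by hand gives W M W^{-1} = [[d, -c/p], [-p b, a]] for M = [[a,b],[c,d]], which makes the closure evident since p divides c.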
First some comments inspired by the observation about Gaussian Mersennes by Stephen.
1. Gaussian Mersenne primes are really big but the primes k defining them are logarithmically smaller. k=379
defines a scale slightly larger than that defined by the age of the Universe. Larger ones exist but
will not be terribly interesting for human physicists for a long time.
Some primes k define a Gaussian Mersenne as MG,k = (1+i)^k - 1, and the associated real prime
defined by its norm is rather large - rather near to 2^k, and for k=79 this is already quite big.
k=113 characterises muon and nuclear physics; k=151, 157, 163, 167 define a number theoretical
miracle in the range from cell membrane thickness to the size of the cell nucleus. Besides these there are astrophysically and cosmologically important Gaussian Mersennes (see the earlier posting).
2. The Gaussian Mersennes below M89 correspond to k = 2, 3, 5, 7, 11, 19, 29, 47, 73. Apart from
k=73 this list is indeed contained in the list of the lowest Monster primes k = 2, 3, 5, 7, 11, 13,
17, 19, 23, 29, 31, 41, 47, 59, 71. The order d of the Monster is a product of powers of these primes:
d = 2^46 × 3^20 × 5^9 × 7^6 × 11^2 × 13^3 × 17 × 19 × 23 × 29 × 31 × 41 × 47 × 59 × 71 .
Amusingly, the Monster contains a subgroup whose order is a product of exactly those primes
k associated with Gaussian Mersennes which are definitely outside the reach of LHC! Should
one call this subgroup the Particle Physics Monster? Number theory and particle physics would meet
each other! Or actually they would not!
Speaking seriously, could this mean that high energy physics above the MG,79 energy is somehow
different from that below it in the TGD Universe? Is k=47 somehow special: it corresponds to the energy
scale 17.6×10^3 TeV = 17.6 PeV (P for Peta). A pessimist would argue that this scale is the
Monster energy scale never reached by human particle physicists.
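Stephen's observation is easy to reproduce. The sketch below recovers the Gaussian Mersenne exponents k (those for which the norm of (1+i)^k - 1 is prime) with exact Gaussian-integer arithmetic and a Miller-Rabin test (probabilistic for the larger norms, but reliable in practice), and compares the low exponents with the Monster's prime factors:

```python
import random

def is_prime(n, rounds=32):
    """Miller-Rabin; probabilistic but dependable in practice at these sizes."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gaussian_mersenne_norm(k):
    """Norm |(1+i)^k - 1|^2 computed with exact integer arithmetic."""
    a, b = 1, 0                  # start from 1 = (a + b*i)
    for _ in range(k):
        a, b = a - b, a + b      # multiply by (1 + i)
    a -= 1
    return a * a + b * b

gm = [k for k in range(2, 171) if is_prime(gaussian_mersenne_norm(k))]
print(gm)   # [2, 3, 5, 7, 11, 19, 29, 47, 73, 79, 113, 151, 157, 163, 167]

monster_primes = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59, 71}
print(set(k for k in gm if k <= 73) - monster_primes)   # only k = 73 is missing
```

The output reproduces both claims in the text: the Gaussian Mersenne exponents k = 2, 3, 5, 7, 11, 19, 29, 47 all divide the Monster's order, and k = 73 is the lone exception.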
The continuations of the congruence groups and their normalizers to the p-adic variants SL(2,Zp) of
SL(2,Z+iZ) (SL(2,C)) are very interesting in the TGD framework and are expected to appear in the
adelization. Now the hyperbolic plane is replaced with the 3-D hyperbolic space H3 (the mass shell for a particle
physicist and a cosmic-time-constant section for a cosmologist).
1. One can construct hyperbolic manifolds as spaces of the orbits of discrete subgroups of the 3-D
hyperbolic space H3 if the discrete subgroup defines a tessellation/lattice of H3. These lattices are of
special interest as discretizations of the H3 parametrizing the position of the second tip of the
causal diamond (CD) in zero energy ontology (ZEO), when the other tip is fixed. By number
theoretic arguments this moduli space should indeed be discrete.
2. In TGD-inspired cosmology the positions of dark astrophysical objects could tend to be localized
at a hyperbolic lattice, and visible matter could condense around dark matter. There is an infinite
number of different lattices assignable to the discrete subgroups of SL(2,C). Congruence
subgroups and/or their normalizers might define p-adically natural tessellations. In ZEO this kind
of lattice could also be associated with the light-like boundaries of CDs obtained as a limit of the
hyperbolic space defined by a cosmic-time-constant hyperboloid as cosmic time approaches zero
(the moment of the big bang). In biology there is evidence for coordinate-grid-like structures, and I have
proposed that they might consist of magnetic flux tubes carrying dark matter.
Only a finite portion of the light-cone boundary would be included, and modulo-p arithmetic
refined by using congruence subgroups Γ0(p) and their normalizers, with the size scale of the CD
identified as the secondary p-adic time scale, could allow this limitation to be described mathematically.
Γ(n) would correspond to a situation in which the CD has a size scale given by n instead of a prime:
in this case one would have multi-p p-adicity.
3. In the TGD framework one introduces an entire hierarchy of algebraic extensions of rationals. Preferred
p-adic primes correspond to so-called ramified primes of the extension, and also the p-adic length
scale hypothesis can be understood and generalized if one accepts Negentropy Maximization
Principle (NMP) and the notion of negentropic entanglement. A given extension of rationals
induces an extension of p-adic numbers for each p, and one obtains an extension of the ordinary
adeles. An algebraic extension of rationals also leads to an extension of SL(2,Z): Z can be replaced
with any extension of rationals and has p-adic counterparts associated with the p-adic integers of
extensions of p-adic numbers. The notion of primeness generalizes, and the congruence
subgroups Γ0(p) generalize by replacing p with a prime of the extension.
Above I have talked only about algebraic extensions of rationals. p-Adic numbers however also have
finite-dimensional algebraic extensions which are not induced by those of rational numbers.
1. The basic observation is that e^p exists p-adically as a power series and is a p-adic integer of norm 1 -
e^p cannot be regarded as a rational number. One can also introduce roots of e^p and in this
manner define algebraic extensions of p-adic numbers. For rational numbers the extension would be
algebraically infinite-dimensional.
In real number-based Lie group theory, e is in a special role more or less by convention. In the
p-adic context the situation changes. The p-adic variant of a given Lie group is obtained by
exponentiating the elements of the Lie algebra which are proportional to p (one obtains a hierarchy of
sub-Lie groups in powers of p) so that the Taylor series converges p-adically.
These subgroups and algebraic groups generate the more interesting p-adic variants of Lie
groups: they would decompose into unions labelled by the elements of algebraic groups, which
are multiplied by the p-adic variant of the Lie group. The roots of e are mathematically extremely
natural, serving as hyperbolic counterparts of the roots of unity assignable to ordinary angles,
necessary if one wants to talk about the notion of angle and perform Fourier analysis in the p-adic
context: actually, one can p-adically speak only about trigonometric functions of angles, not
about the angles themselves. The same is true in the hyperbolic sector.
2. The extension of p-adics containing roots of e could even have applications to cosmology! If the
dark astrophysical objects tend to form hyperbolic lattices and visible matter tends to condense
around the lattice points, cosmic redshifts tend to have quantized values. This tendency is observed.
Also roots of e^p could appear. The recently observed evidence for the oscillation of the cosmic
scale parameter could be understood if one assumes this kind of dark matter lattice, which can
oscillate. Roots of e^2 appear in the model! (see the posting Does the rate of cosmic expansion
oscillate?). An analogous explanation in terms of dark matter oscillations applies to the recently
observed anomalous periodic variations of Newton's constant measured at the surface of the Earth
and of the length of the day (Variation of Newton's constant and of length of day).
4. Things can get even more complex! e^Π converges Π-adically for any generalized p-adic number
field defined by a prime Π of an algebraic extension, and one can introduce genuinely p-adic
algebraic extensions by introducing roots e^(Π/n)! This raises interesting questions. How many real
transcendentals can be represented in this manner? How well is the hierarchy of adeles associated
with extensions of rationals, allowing also genuinely p-adic finite-dimensional extensions of p-adics,
able to approximate the real number system? For instance, can one represent π in this
manner?
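The p-adic convergence of exp(p), on which these extensions rest, can be checked concretely. Below is a minimal Python sketch (function names are my own): the n:th term p^n/n! has p-adic valuation growing linearly in n, so the partial sums of the exponential series stabilize modulo any fixed power p^K, which is exactly what p-adic convergence means.

```python
# Sketch: verify that exp(p) = sum p^n / n! converges p-adically,
# i.e. the partial sums stabilize modulo p^K once enough terms are
# included.  Illustrative only; variable names are mine.

def padic_valuation(m, p):
    """Largest v with p^v dividing m."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def exp_p_mod(p, K, terms):
    """Partial sum of sum_{n < terms} p^n / n!  reduced mod p^K.

    Each term p^n/n! is a p-adic integer because
    v_p(n!) = (n - digit_sum_p(n)) / (p-1) < n for p > 2."""
    mod = p ** K
    total = 0
    fact = 1                               # running n!
    for n in range(terms):
        if n > 0:
            fact *= n
        v = padic_valuation(fact, p)
        unit = fact // (p ** v)            # p-free part of n!
        if n - v >= K:                     # term vanishes mod p^K
            continue
        term = pow(p, n - v, mod) * pow(unit, -1, mod) % mod
        total = (total + term) % mod
    return total

p, K = 5, 10
a = exp_p_mod(p, K, 30)
b = exp_p_mod(p, K, 60)   # adding more terms changes nothing mod 5^10
print(a == b)             # True: the series has converged 5-adically
```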
See the chapter More about TGD inspired cosmology of "Physics in Many-Sheeted Space-time". For
a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 5:37 AM
3 Comments:
At 2:46 PM,
L. Edgar Otto said...
Matti, very good post with many deeper issues in the mathematics that can lead to
variations on our emphases for speculative models. He seems to be suggesting your
emphasis in particular may fit into the idea that primes are subsets of the monsters. But
like string theory, it is not clear in the details that we can say just what measures can be
made. I thought your idea was a better generalization extending the idea of particle high
numbers to low ones as well. So without a deeper understanding of the dark stuff (and the
transcendental's as he mentions) we can only intuit vague questions of things like cosmic
expansion oscillation or Newton's constant varying.
There is also Surreal Calculus principles (Conway). Pi as a 10-fold symmetry base
looks rather stringy with structural gaps such as the primes show rather than issues of
binary dualities. Consciousness adds to the complexity of the problem. What does it take
to convey such ideas?
We are at the beginning of a whole new physics and evidently mathematics. I see it as
a very small part of what is to come. But it was nice to see articles with thoughts like our
own that may find a wider truth of things in such detail narrow as it was as a looking back
at our great theoreticians modest achievements. A good summary of our ongoing
bottleneck to wider theories of everything.
At 6:34 PM, [email protected] said...
The point concerning the Monster was that the number N of its elements has the primes
associated with the definition of the Gaussian Mersennes (1+i)^k-1, up to k=47, as prime
factors. Why this is the case is a mystery.

What makes this so funny is that these primes happen to correspond to possible
copies of hadron physics definitely outside the reach of LHC;-). k=73 might have been
already discovered: there is 3.5 sigma evidence for a 2.2 TeV bump - something completely
unexpected; it could be the M_{G,73} pion. Even more amusingly, I realised the possibility of
k=73 hadron physics besides k=89 only recently.
Deeper understanding of dark matter is the key problem of physics now. I am
convinced that it requires going beyond the reductionistic view, and the number theoretical
vision/quantum criticality - having as its satellite concepts fractality, the h_eff hierarchy, the p-adic
length scale hierarchy, adelic physics, ZEO, etc. - is in my opinion a sufficiently general
approach.
This is a deeply conceptual problem, not a matter of hypothesising some new exotic particle.
It will not be solved by a master symbol manipulator but by a real thinker.

A varying Newton's constant, an oscillating acceleration of cosmic expansion, as well as the
recent progress in condensed matter physics suggesting that condensed matter physicists
are already "seeing" many-sheeted physics without knowing that this is the case, are
extremely welcome empirical inputs helping to bind this picture to empirical reality.
This is like solving a huge modern crossword with some overall hidden theme and full
of jokes: once you discover it, everything becomes suddenly very easy.
At 9:09 AM,
L. Edgar Otto said...
Matti,
I agree that this view can be a quantitative or calculable measure. The idea of Branes
as such, as embedded or as multiple sheets is part of a wider interpretation (which I think
Lubos for example is coming around to a more general view which does resemble string
theory where it soberly does not claim to be the total picture.) In the simulations and
disconnections of implied or deeper space structures of the say vacuum or so called dark
issues I do appreciate your qualitative intuitions including what happens at the brain stem.
My concern is at the physics of higher brain regions where we seem born with a sense
of space yet our personal orientation comes from something within. I would say off hand
that what is happening here and in connection to the brain stem, and the right and left brain
parity issues supports the standard theory in the string or loopy landscape... but it is in a
more general level than say Lie groups which is to say that it is a topological dynamic
view thus even in the superimposition of branes by group theory and issues of scale we
have to reach a little higher than interpretations of Gaussian Prime ideas or any simply
limited complex space structures (such particles may have been seen already as you said.)
So it seems we must know the more general theory to show explicit calculations or
explain why we can imagine some things or sense some things by which we do understand
diverse representations of what is consciousness. I think a lot of this could be calculated
more as arithmetic rather than abstract algebra. Keep up the good thinking.
07/17/2015 - http://matpitka.blogspot.com/2015/07/pion-of-m-hadron-physics-atlhc.html#comments
Pion of M_G,79 hadron physics at LHC?
Some time ago I wrote about Gaussian Mersennes M_G,n = (1+i)^n - 1 in cosmology, biology, and nuclear
and particle physics. In the particle physics paragraph appears the following line about new ultra high
energy physics - perhaps scaled up copies of hadron physics.

n ∈ {2, 3, 5, 7, 11, 19, 29, 47, 73} correspond to energies not accessible at LHC. n=79 might define a
new copy of hadron physics above the TeV range - something which I have not considered seriously before.
The scaled variants of the pion and proton masses (of M_107 hadron physics) are about 2.2 TeV and 16 TeV.
Whether it is visible at LHC is an open question to me.
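The quoted exponents can be reproduced directly from the definition. The sketch below (plain Python; "Gaussian Mersenne prime" is taken here to mean that the norm of (1+i)^k - 1 is an ordinary prime) uses a deterministic Miller-Rabin test that is exact for the sizes involved:

```python
# Sketch: find the exponents k for which the norm of the Gaussian
# Mersenne (1+i)^k - 1 is an ordinary prime.  Deterministic
# Miller-Rabin with the bases below is exact for n < 3.3e24, which
# covers all norms up to k = 79.

def gaussian_power(k):
    """(1+i)^k as an exact integer pair (a, b) = a + b*i."""
    a, b = 1, 0
    for _ in range(k):
        a, b = a - b, a + b      # multiply by (1 + i)
    return a, b

def is_prime(n):
    if n < 2:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for base in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if base % n == 0:
            continue
        x = pow(base, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gaussian_mersenne_exponents(kmax):
    result = []
    for k in range(2, kmax + 1):
        a, b = gaussian_power(k)
        norm = (a - 1) ** 2 + b ** 2      # N((1+i)^k - 1)
        if is_prime(norm):
            result.append(k)
    return result

print(gaussian_mersenne_exponents(79))
# [2, 3, 5, 7, 11, 19, 29, 47, 73, 79] - the list quoted in the text
```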
Today I saw a posting of Lubos suggesting that the M_G,79 pion might have been already seen!! Lubos
tells about a bump around 2(!) TeV observed already earlier at ATLAS and now also at CMS. See
the article Something goes bump in Symmetry Magazine. The local significance is about 3.5 sigma and
the global significance about 2.5 sigma. The bump decays to weak bosons.
Many interpretations are possible. An interpretation as a new Higgs-like particle has been suggested.
A second interpretation - favored by Lubos - is as the right-handed W boson predicted by left-right
symmetric variants of the standard model. If this is the correct interpretation, one can forget about TGD
since the main victory of TGD is that the very strange looking symmetries of the standard model have an
elegant explanation in terms of CP2 geometry, which is also twistorially completely unique and
geometrizes both electroweak and color quantum numbers.
Note that the masses of M_G,79 weak physics would be obtained by scaling the masses of the
ordinary M_89 weak bosons by the factor 2^((89-79)/2) = 32. This would give masses of about 2.6 TeV and 2.9
TeV.
There is however an objection. If one applies the p-adic scaling 2^((107-89)/2) = 512 to the pion mass in the case of
M_89 hadron physics, the M_89 pion should have a mass of about 69 GeV (this brings to mind the old and forgotten
anomaly known as the Aleph anomaly at 55 GeV). I proposed that the mass is actually an octave higher and
thus around 140 GeV: the p-adic length scale hypothesis allows one to consider octaves. Could it really be that a
pion-like state with this mass could have slipped through the sieve of particle physicists? Note that the
proton of M_89 hadron physics would have a mass of about .5 TeV.
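The scaling arithmetic of the last two paragraphs can be checked in a few lines (the mass values are rough experimental inputs; the factor between p-adic scales k1 and k2 is 2^((k1-k2)/2)):

```python
# Sketch of the p-adic mass-scaling arithmetic used above.
# Masses are in GeV and are rough experimental values.

def scale_factor(k_from, k_to):
    """Scaling factor 2^((k_from - k_to)/2) between p-adic scales."""
    return 2 ** ((k_from - k_to) // 2)

# M_107 (ordinary hadron physics) -> M_89:
pion_107 = 0.135                       # neutral pion mass, GeV
pion_89 = pion_107 * scale_factor(107, 89)
print(round(pion_89, 1))               # ~69 GeV, as quoted in the text

# M_89 weak scale -> M_G,79:
mW, mZ = 80.4, 91.2                    # W and Z masses, GeV
f = scale_factor(89, 79)               # 2^5 = 32
print(f, round(mW * f / 1000, 1), round(mZ * f / 1000, 1))
# the factor 32 reproduces the quoted ~2.6 TeV and ~2.9 TeV
```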
I have proposed that M_89 hadron physics has made itself visible already in heavy ion collisions at
RHIC and in proton-heavy ion collisions at LHC as a strong deviation from QCD plasma behavior,
meaning that charged particles tended to be accompanied by particles of opposite charge in the opposite
direction as if they were the outcome of a decay of string like objects, perhaps M_89 pions. There have
been attempts - not very successful - to explain this non-QCD type behavior in terms of AdS/CFT, where
the strings would be in D=10. A scaled up variant of QCD would explain the findings elegantly. The findings from LHC
during this year will probably clarify this issue.
See the chapter New particle physics predicted by TGD: part I of "p-Adic Length Scale Hypothesis" or
the article What is the role of Gaussian Mersennes in TGD Universe? For a summary of earlier postings
see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 5:04 AM
5 Comments:
At 12:14 PM, Anonymous said...
Matti, why low Gaussian primes? Your list of primes is a subset of the factors of the
dimension of the friendly giant group.
https://en.m.wikipedia.org/wiki/Monstrous_moonshine
The monster group was investigated in the 1970s by mathematicians Jean-Pierre Serre,
Andrew Ogg and John G. Thompson; they studied the quotient of the hyperbolic plane by
subgroups of SL2(R), particularly, the normalizer Γ0(p)+ of Γ0(p) in SL(2,R). They found
that the Riemann surface resulting from taking the quotient of the hyperbolic plane by
Γ0(p)+ has genus zero if and only if p is 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59 or
71. When Ogg heard about the monster group later on, and noticed that these were
precisely the prime factors of the size of M, he published a paper offering a bottle of Jack
Daniel's whiskey to anyone who could explain this fact (Ogg (1974)).
--crow
At 8:52 PM, [email protected] said...
Gaussian primes are really big but the primes defining them are logarithmically
smaller. k=379 defines scale slightly large than that defined by the age of the Universe.
Larger ones exist but are not terribely interesting for human physicists for a long time;-)
The prime k above defines Gaussian Mersenne as (1+i)^k-1 and the associated real
prime defined by its norm is rather large - rather near to 2^k and for k= 79 this is already
quite big. k=113 characterises muon and nuclear physics, k=151,157,163,167 define a
number theoretical miracle in the range cell membrane thickness- size of cell nucleus.
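The biological window mentioned for k = 151, ..., 167 follows from the p-adic length scale hypothesis L(k) = 2^((k-151)/2)·L(151); the normalization L(151) ≈ 10 nm (cell membrane thickness) is an assumption taken from the TGD literature and is used below purely as an illustration:

```python
# Sketch: p-adic length scales for the "biological" Gaussian Mersenne
# exponents.  The normalization L(151) ~ 10 nm is an assumed reference
# scale (cell membrane thickness), not derived here.

L151_NM = 10.0   # nm, assumed reference scale

def length_scale_nm(k):
    """L(k) = 2^((k-151)/2) * L(151) in nanometers."""
    return L151_NM * 2 ** ((k - 151) / 2)

for k in (151, 157, 163, 167):
    print(k, round(length_scale_nm(k)), "nm")
# 151 -> 10 nm (cell membrane thickness) up to
# 167 -> 2560 nm, about the size of a cell nucleus
```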
Then there are astro- and cosmophysically important Gaussian Mersennes:
http://matpitka.blogspot.fi/2015/06/gaussian-mersennes-in-cosmology-biology.html .
The lowest Gaussian Mersennes correspond to k=2, 3, 5, 7, 11, 19, 29, 47, 73. Up to 47 this list
is indeed contained in the list of the lowest Monster primes p=2, 3, 5, 7, 11, 13, 17, 19, 23,
29, 31, 41, 47, 59, 71. 73 does not appear as a factor of the Monster anymore.

Gamma_0(p) consists of the SL(2,Z) matrices which are upper triangular modulo p; the kernel of
the mod p homomorphism mapping SL(2,Z) to SL(2,Z/pZ) is the principal congruence subgroup Gamma(p).
The points of the hyperbolic plane at the orbits of this group are identified. The
outcome is a sphere for p=k associated with Gaussian Mersennes.
If I have understood correctly, the order of the Monster is a product of powers of these primes, and
therefore the Monster contains a subgroup whose order is the product of the primes
associated with Gaussian Mersennes, which are definitely outside the reach of LHC. This
group could be christened the Gaussian Monster or Particle Physics Monster;) Number
theory and particle physics meet each other!;-)
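The containment claim is easy to verify against the standard list of primes dividing the order of the Monster:

```python
# Sketch: compare the prime factors of the order of the Monster group
# (a standard fact) with the Gaussian Mersenne exponents listed above.

monster_primes = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59, 71}
gaussian_mersenne_k = [2, 3, 5, 7, 11, 19, 29, 47, 73]

below_47 = [k for k in gaussian_mersenne_k if k <= 47]
print(set(below_47) <= monster_primes)    # True: all divide |M|
print(73 in monster_primes)               # False: 73 does not
```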
Speaking seriously, could this mean that the high energy physics above the M_G,79
energy is somehow different from that below it in the TGD Universe? Is k=47 somehow special:
it corresponds to the energy scale 17.6*10^3 TeV = 17.6 PeV (P = Peta). A pessimist would
argue that this scale is the Monster energy scale, never to be reached by human particle
physicists;-).
At 6:29 AM,
[email protected] said...
Additional comment about discrete subgroups of SL(2,C). The modding operation modulo
p or p^n for SL(2,C), p prime, or even modulo a Gaussian prime, gives a subgroup allowing one
to construct a tessellation (lattice) of 3-D hyperbolic space by hyperbolic manifolds.
Modding is also possible with respect to an integer or a Gaussian integer. What could be the
physical interpretation?
This could relate directly to p-adicity! Also in cosmological context:
http://matpitka.blogspot.fi/2015/06/transition-from-flat-to-hyperbolic.html
http://matpitka.blogspot.fi/2015/06/gaussian-mersennes-in-cosmology-biology.html
At 7:06 AM,
[email protected] said...
Sorry for a typo. SL(2,C) should read SL(2,Z) or SL(2,Z+iZ). Z could be extended to
the integers of some algebraic extension of rationals, with p replaced by a corresponding prime. In the paper at
http://eclass.uoa.gr/modules/document/file.php/MATH154/Σεμινάριο%202011/kulkarni1991.pdf
it is mentioned that the congruence subgroups associated with primes are very special.
At 8:07 PM, [email protected] said...
Still a correction. I used the term modding loosely. Congruence subgroups are subgroups
of SL(2,Z) which consist of matrices equal to the identity matrix mod n. The notion generalises to
the integers of any algebraic extension of rationals. The congruence subgroup associated with n is a subgroup
of that associated with any of its factors.

The special role of primes is due to the fact that these groups can be extended to the p-adic
counterparts of SL(2,Z) by replacing Z with p-adic integers. These subgroups would be
especially interesting for the construction of the hyperbolic manifolds of 3-D hyperbolic
space defining the lattice cell of its tessellation.
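A minimal sanity check of the congruence subgroup property (illustrative Python, my own naming): sample matrices congruent to the identity mod n have products again congruent to the identity mod n, with determinant 1.

```python
# Sketch: the principal congruence subgroup Gamma(n) consists of the
# SL(2,Z) matrices congruent to the identity mod n.  Verify on sample
# elements that the set is closed under products.

def mat_mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def det(A):
    (a, b), (c, d) = A
    return a * d - b * c

def in_gamma(A, n):
    """True iff A is in SL(2,Z) and A = identity mod n."""
    (a, b), (c, d) = A
    return (det(A) == 1 and a % n == 1 and d % n == 1
            and b % n == 0 and c % n == 0)

n = 5
T = ((1, n), (0, 1))       # two sample elements of Gamma(n)
S = ((1, 0), (n, 1))
assert in_gamma(T, n) and in_gamma(S, n)

# products of elements of Gamma(n) stay in Gamma(n):
P = mat_mul(mat_mul(T, S), mat_mul(S, T))
print(in_gamma(P, n), det(P))   # True 1
```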
e^p is a p-adic number, so that roots of e define finite-dimensional algebraic extensions of p-adics.
The special role of e in the real context can be seen as a convention, but in the p-adic context it is
not: the p-adic variant of a Lie group is obtained by exponentiation of elements of the Lie algebra
which are proportional to p (one obtains a hierarchy of sub-Lie groups in powers of p).
These subgroups and algebraic groups generate p-adic variants of Lie groups.
What is nice is that in an algebraic extension of p-adics, e^P for a prime P of the extension is
also a number in the extension. Also now finite roots e^(P/n) exist, and one can ask how large the
number of real transcendentals having this kind of representation is.
07/16/2015 - http://matpitka.blogspot.com/2015/07/what-music-can-teach-aboutconsciousness.html#comments
What Music can teach about Consciousness
Recently I have been reading Oliver Sacks' book Musicophilia, dealing with various
aspects of the experience of music. Humans as a species indeed have a very special relation to music. But is it
really a genuine characteristic of human consciousness? One can even ask whether consciousness emerges
only in higher species or whether it could be in some form a characteristic of any living or even inanimate
system. I am not the only quantum consciousness theorist forced to consider panpsychism in some
form. In this framework one can ask whether music-like aspects of conscious experience could be
universal and only especially highly developed in humans.
I restrict the consideration to those stories of Musicophilia which I find of special interest from the
point of view of the TGD-inspired theory of consciousness. The outcome is a more precise formulation of
the general TGD-inspired vision about the brain based on the basic ideas of Quantum-TGD.

Zero Energy Ontology (ZEO) implies a new view about the relation between geometric and
experienced time, allowing one to generalize quantum measurement theory to a theory of consciousness.
Strong form of holography implies the analog of AdS/CFT duality between the 2-D representation of
physics based on string world sheets and partonic 2-surfaces and the 4-D space-time representation. This
duality is not a tautology, and this inspires the idea that the two representations correspond to two modes
of consciousness, motivating the "Left brain talks, right brain sings" metaphor.
1. Language and music could relate to two dual representations of conscious information - local and
holistic, cognitive and sensory. Discretization of a function/of its Fourier transform as the collection of
its values at a discrete set of times/frequencies would correspond to the local/holistic
approximation of the function. In principle any conscious entity - self - could utilize these two
representational modes at appropriate quantum criticality.
2. The holistic "musical consciousness" is assignable to the right brain hemisphere and, according to the
stories of Sacks, seems to be characterized by episodic sensory memories. The TGD based view about
memories relies on ZEO: memories would be mental images with sensory input from the
geometric past, genuine sensory experiences of time-reversed sub-selves! This picture simplifies
considerably, and one can see all memories - sensory, cognitive, or emotional - as analogs of
phantom pain, which would also be a sensory memory and, even more, a genuine sensory
experience. It is even possible that our biological bodies are used by two selves: the right brain
hemisphere sleeps when we are awake and vice versa. Even the experiences of epileptics about
having double consciousness could be understood.
3. A more concrete realization of the "Left brain talks, right brain sings" metaphor relies on the
assumption that the "magneto-anatomy" is universal. Only the "magneto-physiology" - characterized
by the values of heff characterizing quantum criticality and defining a kind of intelligence
quotient dictating the span of long term memory and planned action - varies.
heff would differ for the magnetic bodies of various brain areas, and the spectrum of heff for the right
and left brain would differ and characterize their specializations. For instance, the value of heff
would be large (small) for the cognitive areas of the left (right) brain and small (large) for some
higher sensory areas of the right (left) brain. Magnetic bodies form a fractal hierarchy, and one can
characterize even individual cells and neurons by the value of heff associated with them. The
spectrum of heff also allows one to distinguish between members of the same species, since it defines
the skill profile. This obviously goes far beyond genetic determinism.
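The local/holistic duality of the first item can be illustrated with a discrete Fourier transform: the same finite signal is represented either by its samples in time or by its frequency coefficients, and either representation reconstructs the other exactly. A plain-Python sketch (my own illustration, no external libraries):

```python
# Sketch: a signal given "locally" by time samples and "holistically"
# by discrete Fourier coefficients; the two representations carry the
# same information, as the exact reconstruction below shows.

import cmath

def dft(x):
    N = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / N)
                for t in range(N)) for f in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / N)
                for f in range(N)) / N for t in range(N)]

samples = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]   # "local" data
coeffs = dft(samples)                                   # "holistic" data
back = idft(coeffs)

print(all(abs(b - s) < 1e-9 for b, s in zip(back, samples)))  # True
```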
See the chapter What music can teach about consciousness? of "TGD and EEG" or the article What
music can teach about consciousness? For a summary of earlier postings see Links to the latest progress
in TGD.
posted by Matti Pitkanen @ 5:58 AM
3 Comments:
At 12:39 PM, Wes Hansen said...
http://www.livescience.com/27802-plants-trees-talk-with-sound.html
This is really interesting, Matti. Long ago I read about an experiment conducted by
DARPA to see if plants could sense the presence of other biological organisms; they
wanted to see if they could use plants as advance sentries around remote military
establishments.
20 + years later I met an electrical engineer in Dallas, Texas who had actually built all
of the Faraday cages for the experiment as an undergrad. He told me that plants could in
fact sense other organisms and that yuccas were the best at it. He also told me that it
appeared as though plants experienced empathy for the other plants.
The head researcher withheld water from all of the plants until their bio-rhythms
demonstrated they were under stress; he then watered some of the plants but not others.
The curious thing was that the bio-rhythms of the watered plants continued to demonstrate
stress and did not return to normal until after all of the plants had received sufficient water.
I've tried to find that research paper but haven't had any luck thus far. I return to the
project periodically . . .
With regards,
Wes Hansen
At 9:14 PM, [email protected] said...
Very interesting findings. It seems that we have a lot to learn from plants about empathy and
compassion. Cognition is a fantastic tool for survival but replaces living beings with symbols
representing them. This tends to isolate us from other levels in the hierarchy of
consciousness.
At 9:23 PM, [email protected] said...
The above comment gives me an excuse to write about my own feelings during the last weeks
while following in detail the negotiations with Greece about its debts. To put it briefly: I
have been feeling deep co-shame for the behaviour of the Finnish negotiators. They could
learn a lot from plants about empathy, compassion, and emotional intelligence.
Greece is at the verge of breakdown. There is gigantic unemployment and health
care is breaking down. The reason is that the EU has been punishing Greeks for five years.
The media has been telling us on a daily basis how lazy and dishonest the Greeks are and how they
deserve their fate. The decision makers have been humiliating the Greek negotiators, and
Finland has been especially eager.
There was even a special session in which the idea was to give up polite manners and
openly insult the participants from Greece. Journalists in Finland have been proudly
reporting how our minister of finance Alexander Stubb was raging at the Greek
negotiators and humiliating them. I cannot feel but deep co-shame, both for Finland and
Stubb.
I believe that rather few ordinary people in Finland are as inhumane as the
representatives of our extreme rightist government, which is also making the life of the
poorest people in Finland very difficult.
07/08/2015 - http://matpitka.blogspot.com/2015/07/strange-behavior-of-smb6-andnew.html#comments
Strange behavior of SmB6 and new mechanism of quantum bio-control
The idea that the TGD Universe is quantum critical is the cornerstone of Quantum-TGD and fixes
the theory more or less uniquely, since the only coupling constant parameter of the theory (Kähler
coupling strength) is analogous to a critical temperature. Also more than one basic parameter is in
principle possible - maximal quantum criticality fixes the values of all of them - but it seems that only the
Kähler coupling strength is needed.
The TGD Universe is a quantum critical fractal: like a ball at the top of a hill at the top of a hill at...
Quantum criticality allows one to avoid the fine tuning problems plaguing as a rule the various unified theories.
Quantum criticality
The meaning of quantum criticality at the level of dynamics has become clear only gradually. The
development of several apparently independent ideas generated about a decade ago has led to the
realization that quantum criticality is behind all of them. Behind quantum criticality are in turn the number
theoretic vision and the strong forms of general coordinate invariance and holography.
1. The hierarchy of Planck constants defining the hierarchy of dark phases of ordinary matter
corresponds to a hierarchy of quantum criticalities assignable to a fractal hierarchy of sub-algebras
of the super-symplectic algebra, for which the conformal weights are n-multiples of those for the
entire algebra; n corresponds to the value of the effective Planck constant heff/h=n. These algebras
are isomorphic to the full algebra and act as gauge conformal algebras, so that a broken
super-conformal invariance is in question.
2. Quantum criticality in turn reduces to the number theoretic vision about the strong form of
holography. String world sheets carrying fermions and partonic 2-surfaces are the basic objects
as far as the pure quantum description is considered. Also the space-time picture is needed in order to
test the theory, since quantum measurements always involve also classical physics, which in
TGD is an exact part of quantum theory.

Space-time surfaces are continuations of collections of string world sheets and partonic
2-surfaces to preferred extremals of Kähler action for which the Noether charges in the sub-algebra of
the super-symplectic algebra vanish. This condition is the counterpart for the reduction of 2-D
criticality to conformal invariance. This eliminates a huge number of degrees of freedom and
makes the strong form of holography possible.
3. The hierarchy of algebraic extensions of rationals defines the values of the parameters
characterizing the 2-surfaces, and one obtains a number theoretical realization of an evolutionary
hierarchy. One can also algebraically continue the space-time surfaces to various number fields -
the reals and the algebraic extensions of the p-adic number fields. Physics becomes adelic. p-Adic
sectors serve as correlates for cognition and imagination. One can indeed have string world
sheets and partonic 2-surfaces which can be algebraically continued to preferred extremals in the
p-adic sectors by utilizing p-adic pseudo-constants, which give huge flexibility. If this is not possible in
the real sector, a figment of imagination is in question! It can also happen that only a part of the real
space-time surface can be generated: this might relate to the fact that imaginations can be seen as
partially realized motor actions and sensory perceptions.
Quantum criticality and TGD inspired quantum biology
In TGD-inspired quantum biology quantum criticality is in a crucial role. First some background.

1. Quantum measurement theory as a theory of consciousness is formulated in zero energy
ontology (ZEO) and defines an important aspect of quantum criticality. The strong form of NMP
states that the negentropy gain in the state function reduction at either boundary of the causal
diamond (CD) is maximal. The weak form of NMP allows also quantum jumps for which
negentropic entanglement is not generated: this makes possible ethics (good and evil) and
morally responsible free will: good means basically an increase of the negentropy resources.
2. Self corresponds to a sequence of state function reductions to the same boundary of CD, and heff
does not change during that period. The increase of heff (and thus evolution!) tends to occur
spontaneously, and can be assigned to the state function reduction to the opposite boundary of
CD in zero energy ontology (ZEO). The reduction to the opposite boundary means the death of self,
and living matter is fighting in order to avoid this event. To me the only manner to make sense
of the basic myth of Christianity is that the death of self generates negentropy.
3. Metabolism provides negentropy resources for self and hopefully prevents NMP from forcing the
fatal reduction to the opposite boundary of CD. Also homeostasis does the same. In this process
self makes possible the evolution of sub-selves (mental images dying and re-incarnating) state
function reduction by state function reduction, so that the negentropic resources of the Universe increase.
A new mechanism of quantum criticality
Consider now the mechanisms of quantum criticality. The TGD based model (see this) for the recent
paradoxical looking finding (see this) that topological insulators can behave like conductors in an external
magnetic field led to the discovery of a highly interesting mechanism of criticality, which could play a key
role in living matter.
1. The key observation is that a magnetic field is present. In the TGD framework the obvious guess is
that its flux tubes carry dark electrons giving rise to anomalous currents running in about a million
times longer time scales and with a velocity which is about a million times higher than expected.
Also supra-currents can be considered.

The currents can be formed if the cyclotron energies of the electrons are such that they
correspond to energies near the surface of the Fermi sphere: recall that the Fermi energy for
electrons is determined by the density of conduction electrons and is about 1 eV. Interestingly,
this energy is at the lower end of the bio-photon energy spectrum. In a field of 10 Tesla the
cyclotron energy of the electron is .1 meV, so that the integer characterizing the cyclotron orbit must be
n ≈ 10^5 if a conduction electron is to be transferred to the cyclotron orbit.
2. The assumption is that the external magnetic field is realized as flux tubes of fixed radius, which
correspond to space-time quanta in the TGD framework. As the intensity of the magnetic field is varied,
one observes the so-called de Haas-van Alphen effect, used to deduce the shape of the Fermi sphere:
magnetization and some other observables vary periodically as functions of 1/B.

This can be understood in the following manner. As B increases, the cyclotron orbits contract.
For certain increments of 1/B the n+1:th orbit is contracted to the n:th orbit, so that the sets of orbits
are identical for values of 1/B which appear periodically. This causes the periodic oscillation
of, say, the magnetization.
3. For some critical values of the magnetic field strength a new orbit emerges at the boundary of the
flux tube. If the energy of this orbit is in the vicinity of the Fermi surface, an electron can be
transferred to the new orbit. This situation is clearly quantum critical.

If the quantum criticality hypothesis holds true, a heff/h=n dark electron phase can be generated
for the critical values of the magnetic field. This would give rise to the anomalous conductivity,
perhaps involving a spin current due to the spontaneous magnetization of the dark electrons at the
flux tube. Even superconductivity, based on the formation of parallel flux tube pairs with either
opposite or parallel directions of the magnetic flux such that the members of the pair are at
parallel flux tubes, can be considered, and I have proposed this as a mechanism of
bio-superconductivity and also of high Tc superconductivity.
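The periodicity in 1/B described in the second item can be checked with a small exact computation. The sketch below uses dimensionless units (w stands for the combination of constants in the cyclotron energy; the values of E_F and w are arbitrary illustrations): increasing 1/B by the fixed period w/E_F always adds exactly one orbit below the Fermi energy.

```python
# Sketch: de Haas-van Alphen periodicity in 1/B.  With orbit energies
# E_n = (n + 1/2) * w * B and x = 1/B, orbit n lies below the Fermi
# energy EF iff n + 1/2 < EF*x/w, so shifting x by w/EF adds exactly
# one orbit.  Exact rational arithmetic avoids boundary effects.

from fractions import Fraction

def level_count(x, EF=Fraction(100), w=Fraction(1)):
    """Number of orbits with (n + 1/2) * w / x < EF, x = 1/B."""
    u = EF * x / w              # orbit n occupied iff n + 1/2 < u
    n = 0
    while Fraction(2 * n + 1, 2) < u:
        n += 1
    return n

period = Fraction(1, 100)       # = w / EF, predicted period in 1/B
for i in range(1, 50):
    x = Fraction(i, 937)        # arbitrary sample points for 1/B
    assert level_count(x + period) == level_count(x) + 1
print("shifting 1/B by", period, "always adds exactly one orbit")
```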
A new mechanism of quantum criticality and bio-control
The quantum criticality of the process in which a new electron orbit emerges near the Fermi surface
suggests a new mechanism of quantum bio-control by the generation of supra currents or its reversal.

1. In TGD-inspired quantum biology the magnetic body uses the biological body as a motor instrument and
sensory receptor, and EEG and its fractal variants - with dark photons having frequencies in the EEG
range but energies E = heff·f in the range of bio-photon energies - make the necessary signalling
possible.
2. Flux tubes can become braided, and this makes possible quantum computation like processes.
Also so-called 2-braids - defined by knotted 2-surfaces imbedded in the 4-D space-time surface - are
possible for the string world sheets defined by flux tubes identified as infinitely thin. As a matter
of fact, also genuine string world sheets accompany the flux tubes. 2-braids
and knots are a purely TGD based phenomenon and are not possible in superstring theory or
M-theory.
3. It is natural to speak about motor actions of the magnetic body. It is assumed that the flux tubes
of the magnetic body connect biomolecules to form a kind of Indra's web, explaining the gel-like
character of living matter. heff-reducing phase transitions contract the flux tubes connecting
biomolecules so that they can find each other by this process, and bio-catalysis becomes possible.
This explains the mysterious looking ability of bio-molecules to find each other in the dense
molecular soup. In fact, the dark matter part is far from being a soup! The hierarchy of Planck
constants and the heff=hgr hypothesis imply that the dark variants of various particles with magnetic
moment are neatly at their own flux tubes like books on a shelf.

Reconnection of the U-shaped flux tubes emanating from two subsystems generates a flux tube
pair between them and gives rise to supra-currents flowing between them. Also cyclotron
radiation propagating along the flux tubes and inducing resonant transitions is present. This would
be the fundamental mechanism of attention.
4. I have proposed that the variation of the thickness of the flux tubes could serve as a control
mechanism, since it induces a variation of the cyclotron frequencies allowing the system to get into
resonance or out of it. For instance, two molecules could get into flux tube contact when their cyclotron
frequencies are identical, and this can be achieved if they are able to vary their flux tube
thickness. The molecules of the immune system are masters at identifying alien molecules, and the
underlying mechanism could be based on the cyclotron frequency spectrum and molecular attention.
This would also be the mechanism behind water memory and homeopathy (see this), which is still
regarded as a taboo by mainstreamers.
5. Finally comes the promised new mechanism of bio-control! The variation of the magnetic field
induced by that of the flux tube thickness also allows one to control whether there is quantum criticality
for the generation of dark supra currents of electrons. The Fermi energy of the
conduction electrons at the top of the Fermi sphere is the key quantity and is dictated by the density of
these electrons. This allows one to estimate the order of magnitude of the integer N characterizing the
cyclotron energy for the ordinary Planck constant; the maximal value of heff/h=n cannot be larger
than N.
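A toy version of the thickness-based control knob of items 4 and 5: if the flux through a tube is conserved, the field scales as B ∝ 1/r², and with it the cyclotron frequency f = qB/(2πm). The flux value below is purely illustrative.

```python
# Sketch: varying flux tube thickness at constant flux changes the
# cyclotron frequency as 1/r^2, which is the proposed control knob.

import math

E_CHARGE = 1.602176634e-19      # C
M_ELECTRON = 9.1093837015e-31   # kg

def B_from_radius(flux, r):
    """Field strength for total flux 'flux' through a tube of radius r."""
    return flux / (math.pi * r ** 2)

def cyclotron_freq(q, m, B):
    return q * B / (2 * math.pi * m)

flux = 1e-15                    # Wb, illustrative value only
r1, r2 = 1e-8, 2e-8             # m: doubling the radius divides B by 4
B1, B2 = B_from_radius(flux, r1), B_from_radius(flux, r2)
f1 = cyclotron_freq(E_CHARGE, M_ELECTRON, B1)
f2 = cyclotron_freq(E_CHARGE, M_ELECTRON, B2)
print(round(B1 / B2, 6), round(f1 / f2, 6))   # both 4: f scales as 1/r^2
```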
See the chapter Criticality and Dark Matter of "Hyper-finite Factors and Hierarchy of Planck Constants"
or the article A new control mechanism of TGD inspired quantum biology. For a summary of earlier
postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 2:40 AM
2 Comments:
At 9:36 AM,
L. Edgar Otto said...
So, Matti, what then is magnetism other than anyone who has played with magnets and
can see and feel what is really sort of invisible? We can see the magnetic lines form say
from powdered iron filings. But is this the magnetism itself no matter how we make
hierarchies or expand our flux tubes in the imagined dark and negative spaces? I could
agree that such tubes are real and material in effect and yes something limits this process if
seen as quantum or relativistic in the space.
The representation of a series of sheets is one form of a few models but the deep
calculation still escapes us. Some things seem as much fixed as critical. With a more
general theory it seems reasonable to say light itself may work rather well in redesigning
or healing our biological systems, as well as where there can be an observation and unique
diagnosis. These are important thoughts, mainstream or not, and have a role in the Bigger
Picture...it is not even sufficient to say we cannot exceed N unless we understand better
the prime calculation and environs from one of our perspectives we assume a necessity.
At 7:01 PM, [email protected] said...
Our sensory percepts about magnetism give only an extremely narrow glimpse of
what is involved. They give only consistency conditions on theories. Maxwell's theory is
a simple linear theory but not enough - even its generalisation to the standard model is not
enough.
TGD is the next step, in which Maxwell's fields and also other fields are geometrized:
an extremely non-linear structure is the outcome but it simplifies physics enormously: four
imbedding space coordinates become the fundamental field degrees of freedom. At the
level of a single space-time sheet things are ridiculously simple. Many-sheetedness, having
an approximate description using the standard model, is what masks this simplicity. This is
the next step of reduction and it has already begun - mainstreamers have of course not yet
realised that this has happened.
Quantitative testing of theories means expanding our sensory perception. At this
moment we are on the verge of expanding it so that we can "see" also dark matter. This
SmB6 business is just about that. Condensed matter physicists are taking the lead from
particle physicists, who have lost their way by sticking to wrong theories.
If things proceeded as in the times of Einstein and others, I would have proposed in some
conference the detection of conductivity in a topological insulator made possible by dark
magnetic flux tubes - a very dramatic and paradoxical prediction: an insulator behaving like
a conductor! Some group would have worked for a few years and announced in some
conference that dark currents have been found!
Imagination is necessary when we do not know or cannot yet measure. Imaginations
must be testable and be tested, and this evolutionary pressure gradually selects the
imaginations which we begin to call reality. One of the most charming applications of the
adelic vision about strong holography is the understanding of what imagination is. Physics is
now reaching the same level as mathematics achieved thanks to Gödel: it can also say
something about the physicist.
A more familiar example of a similar expansion of perception is provided by hadron
physics. Now we "see" quarks. This is a complex process involving all aspects of human
consciousness (and perhaps even that above human consciousness;-).
07/05/2015 - http://matpitka.blogspot.com/2015/07/does-physics-of-smb-6-makefundamental.html#comments
Does the physics of SmB6 make the fundamental dynamics of TGD directly visible?
The group of Suchitra Sebastian has discovered a very unconventional condensed matter system which
seems to be simultaneously both an insulator and a conductor of electricity in the presence of a magnetic
field. The Science article is entitled "Unconventional Fermi surface in an insulating state". There is also
a popular article "Paradoxical Crystal Baffles Physicists" in Quanta Magazine summarizing the findings.
I first learned about the finding from the blog posting of Lubos (I want to make absolutely clear that I do
not share the racist attitudes of Lubos towards Greeks. I find the discussions between Lubos and
like-minded blog visitors about the situation in Greece disgusting).
Observations
The crystal studied at superlow temperatures was samarium hexaboride - briefly SmB6. The high
resistance implies that an electron cannot move more than one atom's width in any direction. Sebastian
et al however observed electrons traversing distances of millions of atoms - a distance of order 10^-4 m,
the size of a large neuron. Such high mobility is expected only in conductors. SmB6 is neither a metal
nor an insulator, or is both of them!
The finding is described as a "big shock" by Sebastian and as a "magnificent paradox" by condensed
matter theorist Jan Zaanen. Theoreticians have started to make guesses about what might be involved,
but according to Zaanen not even a remotely credible hypothesis has appeared yet.
On the basis of its electronic structure, SmB6 should be a conductor of electricity, and it indeed is at
room temperature: the average number of conduction electrons per SmB6 is one-half. At low
temperatures the situation however changes: electrons behave collectively. In superconductors
resistance drops to zero as a consequence. In SmB6 just the opposite happens. Each Sm nucleus has on
average 5.5 electrons bound to it in tight orbits. Below -223 degrees Celsius the conduction electrons
of SmB6 are thought to "hybridize" around the samarium nuclei so that the system becomes an insulator.
Various signatures demonstrate that SmB6 indeed behaves like an insulator.
During the last 5 years it has been learned that SmB6 is not only an insulator but also a so-called
topological insulator: the interior of SmB6 is an insulator but the surface acts as a conductor. In their
experiments Sebastian et al hoped to find additional evidence for the topological insulator property and
attempted to measure quantum oscillations in the electrical resistance of their crystal sample. The
variation of the quantum oscillations as the sample is rotated can be used to map out the Fermi surface
of the crystal. No quantum oscillations were seen.
The next step was to add a magnetic field and just see whether something interesting happens and
could save the project. Suddenly the expected signal was there! It was possible to detect quantum
oscillations deep in the interior of the sample and map the Fermi surface! The electrons in the interior
travelled a million times faster than the electrical resistance would suggest. The Fermi surface was like
that of copper, silver or gold. A further surprise was that the growth of the amplitude of the quantum
oscillations as the temperature was decreased was very different from the predictions of the universal
Lifshitz-Kosevich formula for conventional metals.
Could TGD help to understand the strange behavior of SmB6?
There are several indications that the paradoxical effect might reveal the underlying dynamics of
quantum TGD. The mechanism of conduction must represent new physics, and the magnetic field must
play a key role by making conductivity possible by somehow providing the "current wires". How? The
TGD based answer is completely obvious: magnetic flux tubes.
One should also understand the topological insulator property at a deeper level, that is, the conduction
along the boundaries of a topological insulator. One should understand why the current runs along 2-D
surfaces.
In fact, many exotic condensed matter systems are 2-dimensional in a good approximation. In the models
of the integer and fractional quantum Hall effects electrons form a 2-D system with braid statistics,
possible only in a 2-D system. High temperature superconductivity is also an effectively 2-D
phenomenon.
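Braid statistics can be illustrated with a few lines of Python. This is a generic textbook fact, not TGD-specific: in 2-D the exchange of two identical particles can produce an arbitrary phase, not just the ±1 of bosons and fermions.

```python
import cmath

def exchange_phase(theta, swaps=1):
    """Phase acquired when two identical particles are exchanged `swaps`
    times. In 3-D only theta = 0 (bosons) or theta = pi (fermions) occur;
    in a 2-D system any theta is allowed - these are anyons, and their
    exchanges realize braid statistics."""
    return cmath.exp(1j * theta * swaps)

print(exchange_phase(0.0))                # boson: +1
print(exchange_phase(cmath.pi))           # fermion: -1
print(exchange_phase(cmath.pi / 3))       # anyon: a phase that is neither +1 nor -1
print(exchange_phase(cmath.pi, swaps=2))  # double fermion exchange: back to +1
```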
1. Many-sheeted space-time is a second fundamental prediction of TGD. The dynamics of a single
sheet of many-sheeted space-time should be very simple by the strong form of holography
implying effective 2-dimensionality. The standard model description of this dynamics masks this
simplicity, since the sheets of many-sheeted space-time are replaced with a single region of
slightly curved Minkowski space, with the gauge potentials being sums of the induced gauge
potentials for the sheets, and the deviation of the metric from the Minkowski metric the sum of
the corresponding deviations for the space-time sheets. Could the dynamics of exotic condensed
matter systems give a glimpse of the dynamics of a single sheet? Could topological insulators
and anyonic systems provide examples of this kind of system?
2. A second basic prediction of TGD is the strong form of holography: string world sheets and
partonic 2-surfaces serve as a kind of "space-time genes", and the dynamics of fermions is 2-D
at the fundamental level. It must however be made clear that at the QFT limit the spinor fields
of the imbedding space replace these fundamental spinor fields localized at 2-surfaces. One
might argue that the fundamental spinor fields do not make themselves directly visible in
condensed matter physics. Nothing however prevents one from asking whether in some
circumstances the fundamental level could make itself visible.
In particular, for large heff dark matter systems (whose existence can be deduced from the
quantum criticality of quantum TGD) the partonic 2-surfaces with CP2 size could be scaled up to
nanoscopic and even longer length scales. I have proposed this kind of surfaces as carriers of
electrons with a non-standard value of heff in QHE and FQHE.
The long range fluctuations associated with the large heff=n×h phase would be
quantum fluctuations rather than thermal ones. In the case of ordinary conductivity thermal
energy makes it possible for electrons to jump between atoms, and conductivity becomes very
small at low temperatures. In the case of large scale quantum coherence just the opposite
happens, as observed. One therefore expects that the Lifshitz-Kosevich formula for the
temperature dependence of the amplitude does not hold true.
The generalization of the Lifshitz-Kosevich formula to the quantum critical case, deduced
from the quantum holographic correspondence by Hartnoll and Hofman, might hold true
qualitatively also for quantum criticality in the TGD sense, but one must be very cautious.
The first guess is that by the underlying superconformal invariance the scaling laws typical
of critical systems hold true, so that the dependence on temperature is via a power of the
dimensionless parameter x=T/μ, where μ is the chemical potential of the electron system. As a
matter of fact, an exponent of a power of x appears, and it reduces to the first power in the
Lifshitz-Kosevich formula. Since the magnetic field is important, one also expects that the ratio
of the cyclotron energy scale Ec ∝ ℏeff eB/me to the Fermi energy appears in the formula. One
can even make an order of magnitude guess heff/h ≅ 10^6 from the fact that the length scale of
conduction and the conduction velocity were millions of times higher than expected.
Strings are 1-D systems, and the strong form of holography implies that fermionic strings
connecting partonic 2-surfaces and accompanied by magnetic flux tubes are fundamental. At
light-like 3-surfaces fermion lines can give rise to braids. In the TGD framework the AdS/CFT
correspondence generalizes, since the conformal symmetries are extended. This is possible only
in 4-D space-time and for the imbedding space H=M4×CP2 making it possible to generalize the
twistor approach.
3. The topological insulator property means from the perspective of modelling that the action
reduces to a non-Abelian Chern-Simons term. The quantum dynamics of TGD at the space-time
level is dictated by Kähler action. Space-time surfaces are preferred extremals of Kähler action,
and for them Kähler action reduces to Chern-Simons terms associated with the ends of the
space-time surface at the opposite boundaries of the causal diamond and possibly with the 3-D
light-like orbits of partonic 2-surfaces. Now the Chern-Simons term is Abelian but the induced
gauge fields are non-Abelian. One might say that single sheeted physics resembles that of a
topological insulator.
4. The effect appears only in a magnetic field. I have been talking a lot about magnetic flux tubes
carrying dark matter identified as large heff phases: topological quantization distinguishes TGD
from Maxwell's theory: any system can be said to possess a "magnetic body", whose flux tubes
can serve as current wires. I have predicted the possibility of high temperature superconductivity
based on pairs of parallel magnetic flux tubes, with the members of the Cooper pairs at the
neighboring flux tubes forming a spin singlet or triplet depending on whether the fluxes have the
same or opposite directions.
Also spin and electric currents assignable to the analogs of spontaneously magnetized states
at a single flux tube are possible. The obvious guess is that the conductivity in question is along
the flux tubes of the external magnetic field. Could this kind of conductivity explain the strange
behavior of SmB6? The critical temperature would be that below which the parallel flux tubes
are stable. The interaction energy of spin with the magnetic field serves as a possible criterion
for the stability if the presence of dark electrons stabilizes the flux tubes.
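The standard Lifshitz-Kosevich temperature factor mentioned in point 2 is easy to evaluate numerically; the deviation of the SmB6 amplitude data from this curve is what the text refers to. The formula R_T = X/sinh(X) with X = 2π²k_BT/(ℏω_c) is the conventional textbook one; the field and temperature values below are illustrative.

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19
M_E = 9.1093837015e-31   # electron mass, kg

def lk_temperature_factor(T, B, m=M_E):
    """Standard Lifshitz-Kosevich damping of the quantum-oscillation
    amplitude: R_T = X / sinh(X), X = 2*pi^2 * k_B * T / (hbar * omega_c)."""
    omega_c = E_CHARGE * B / m
    X = 2 * math.pi**2 * K_B * T / (HBAR * omega_c)
    return X / math.sinh(X)

# For a conventional metal the amplitude grows smoothly toward 1 as T -> 0;
# the reported SmB6 amplitudes deviate from this curve at low temperature.
for T in (4.0, 1.0, 0.1):
    print(T, lk_temperature_factor(T, B=10.0))
```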
The following represents an extremely childish attempt of a non-specialist to understand how the
conductivity might be understood. The electrons at flux tubes near the top of the Fermi surface are the
current carriers. heff=n×h and magnetic flux tubes as current wires bring in the new elements.
Also in the standard situation one considers cylindrically symmetric solutions of the Schrödinger
equation in an external magnetic field and introduces a maximal radius for the orbits, so that formally
the two situations seem to be rather near to each other. Physically the large heff and the associated
many-sheeted covering of the space-time surface providing the current wire make the situation different,
since the collisions of electrons could be absent in a good approximation, so that the velocity of the
charge carriers could be much higher than expected, as the experiments indeed demonstrate.
Quantum criticality is the crucial aspect and corresponds to the situation in which the magnetic field
attains a value for which a new orbit emerges/disappears at the surface of the flux tube: in this situation
a dark electron phase with a non-standard value of heff can be generated. This mechanism is expected
to apply also in bio-superconductivity and to provide a general control tool for the magnetic body.
1. Let us assume that the flux tubes cover the whole transversal area of the crystal and there is no
overlap. Assume also that the total number of conduction electrons is fixed and, depending on
the value of heff, is shared differently between transversal and longitudinal degrees of freedom.
A large value of heff squeezes the electrons from transversal to longitudinal flux tube degrees of
freedom and gives rise to conductivity.
2. Consider first the Schrödinger equation. In the radial direction one has a harmonic oscillator and
the orbits are Landau orbits. The cross sectional area behaves like πR^2 = nT heff/(2mωc),
giving nT ∝ 1/heff. An increase of the Planck constant scales up the radii of the orbits, so that
the number of states in a cylinder of given radius is reduced.
Angular momentum degeneracy implies that the number of transversal states is NT = nT^2 ∝
1/heff^2. In the longitudinal direction one has free motion in a box of length L with states
labelled by an integer nL. The number of states is given by the maximum value NL of nL.
3. If the total number of states N = NL NT is fixed and thus does not depend on heff, one has
NL ∝ heff^2. Quanta from transversal degrees of freedom are squeezed into longitudinal degrees
of freedom, which makes conductivity possible.
4. The conducting electrons are at the surface of the 1-D "Fermi sphere", and the number of
conduction electrons is Ncond ≅ dN/dε × δε ≅ dN/dε × T = NT/(2εF) ∝ 1/heff^4. The dependence
on heff does not favor too large values of heff. On the other hand, the scattering of electrons at
flux tubes could be absent. The assumption L ∝ heff increases the range over which the current
can flow.
5. To get a non-vanishing net current one must assume that only the electrons at one end of
the 1-D Fermi sphere are current carriers. The situation would resemble that in a semiconductor.
The direction of the electric field would induce a symmetry breaking at the level of quantum
states. The situation would be like that for a mass in the Earth's gravitational field treated
quantally, and the electrons would accelerate freely. The Schrödinger equation would give rise
to Airy functions as its solutions.
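The Airy-function spectrum mentioned in point 5 is the textbook problem of a particle in a uniform force field with a hard wall at the origin. The sketch below uses a well-known asymptotic approximation for the Airy zeros rather than a special-function library, and the field strength is purely illustrative.

```python
import math

HBAR = 1.054571817e-34
M_E = 9.1093837015e-31
E_CHARGE = 1.602176634e-19

def airy_zero_approx(n):
    """Asymptotic approximation for the n-th zero of Ai(x):
    a_n ~ -(3*pi*(4n - 1)/8)**(2/3); already good to ~1% for n = 1."""
    return -((3 * math.pi * (4 * n - 1) / 8) ** (2.0 / 3.0))

def linear_potential_levels(F, n_levels=3, m=M_E):
    """Energy levels for a particle of mass m in a uniform force F with a
    hard wall at x = 0 (the textbook Airy-function problem):
    E_n = -a_n * (hbar^2 * F^2 / (2*m))**(1/3), a_n the zeros of Ai."""
    scale = (HBAR**2 * F**2 / (2 * m)) ** (1.0 / 3.0)
    return [-airy_zero_approx(n) * scale for n in range(1, n_levels + 1)]

# Electron in a 1 V/m electric field; energies printed in eV (illustrative)
levels = linear_potential_levels(F=E_CHARGE * 1.0)
print([E / E_CHARGE for E in levels])
```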
What about quantum oscillations in the TGD framework?
1. Quantum oscillation refers to the de Haas-van Alphen effect - an oscillation of the induced
magnetic moment as a function of 1/B with period τ = 2πe/(ℏS), where S is the momentum space
area of the extremal orbit of the Fermi surface in the direction of the applied field. The effect is
explained to be due to the Landau quantization of the electron energy. I failed to really
understand the explanation of this source, and in my humble opinion the following arguments
provide a clearer view of what happens.
2. If the external magnetic field corresponds to flux tubes, the Fermi surface decomposes into
cylinders parallel to the magnetic field, since the motion in transversal degrees of freedom is
along circles. In the above thought experiment a quantization occurs also in the longitudinal
direction if the flux tube has finite length, so that the Fermi surface has finite length in the
longitudinal direction. One expects on the basis of the Uncertainty Principle that the area of the
cross section in momentum space is given by S ∝ heff^2/(πR^2), where πR^2 is the cross
sectional area of the flux tube. This follows also from the equation of motion of an electron in a
magnetic field. As the external magnetic field B is increased, the radii of the orbits inside the
flux tube decrease, and in momentum space the radii increase.
3. Why do the induced magnetic moment (magnetization) and other observables oscillate?
1. The simplest manner to understand this is to look at the situation at the space-time level.
Classical orbits are harmonic oscillator orbits in the radial degree of freedom. Suppose
that the area of the flux tube is fixed and B is increased. The orbits have radii rn^2 =
(n+1/2)×ℏ/(eB) and shrink. For certain field values the flux eBA = n×ℏ corresponds to an
integer multiple of the elementary flux quantum - a new orbit emerges at the boundary of
the flux tube if the new orbit is near the boundary of the Fermi sphere providing the
electrons. This is clearly a critical situation.
2. In the de Haas-van Alphen effect the orbit n+1 for B has the same radius as the orbit n for
1/B+Δ(1/B): rn+1(1/B) = rn(1/B+Δ(1/B)). This gives an approximate differential equation
with respect to n, and one obtains (1/B)(n) = (n+1/2)×Δ(1/B). Δ(1/B) is fixed from the
flux quantization condition. When the largest orbit is at the surface of the flux tube, the
orbits are the same for B(n) and B(n+1), and this gives rise to the de Haas-van Alphen
effect.
3. It is not necessary to assume a finite radius for the flux tube, and the exact value of the
radius of the flux tube does not play an important role. The value of the flux tube radius
can be estimated from the ratio of the Fermi energy of the electron to the cyclotron
energy. The Fermi energy is about .1 eV, depending in the lowest approximation only on
the density of electrons and only very weakly on temperature. For a magnetic field of
1 Tesla the cyclotron energy is .1 meV. The number of cylinders defined by the orbits is
about n = 10^3.
4. What happens in the TGD Universe, in which the areas of flux tubes identifiable as space-time
quanta are finite? Could the quantum criticality of the transition in which a new orbit emerges at
the boundary of the flux tube lead to a large heff dark electron phase at the flux tubes giving rise
to conduction?
1. The above argument makes sense also in the TGD Universe for the ordinary value of Planck
constant. What about non-standard values of Planck constant? For heff/h = n the value of
the flux quantum is n-fold, so that the period of the oscillation in the de Haas-van Alphen
effect becomes n times shorter. The values of the magnetic field for which the orbit is at
the surface of the flux tube are however critical, since a new orbit emerges, assuming that
the cyclotron energy is near the Fermi energy. This quantum criticality could give rise to
a phase transition generating a non-standard value of Planck constant.
What about the period Δ(1/B) for heff/h = n? The modified flux quantization for
extremal orbits implies that the area of the flux quantum is scaled up by n. The flux
changes by n units for the same increment of Δ(1/B) as for the ordinary Planck constant,
so that the de Haas-van Alphen effect does not detect the phase transition.
2. If the size scale of the orbits is scaled up by n^(1/2), as the semiclassical formula suggests,
the number of classical orbits is reduced by a factor 1/n if the radius of the flux tube is
not changed in the transition h → heff to the dark phase. The n-sheetedness of the
covering however compensates this reduction.
3. What about the possible values of heff/h? The total value of the flux seems to give the
upper bound heff/h = nmax, where nmax is the value of the magnetic flux for the ordinary
value of Planck constant. For an electron and a magnetic field of B = 10 Tesla one has
n ≤ 10^5. This value is of the same order as the rough estimate from the length scale over
which anomalous conduction occurs.
Clearly, the mechanism leading to the anomalously high conductivity might be the
transformation of the flux tubes to dark ones so that they carry dark electron currents. The
observed effect would be a dark, quantum critical variant of the de Haas-van Alphen effect!
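The two quantitative ingredients above - the radius-matching condition behind the de Haas-van Alphen period and the estimate of the number of cylinders from the ratio of Fermi to cyclotron energy - can be checked with a few lines of Python. Ordinary Planck constant throughout; the field values and the 0.1 eV Fermi energy are those quoted in the text, the orbit number is illustrative.

```python
import math

HBAR = 1.054571817e-34       # J*s
E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg

def orbit_radius(n, inv_B):
    """Landau orbit radius from r_n^2 = (n + 1/2) * hbar / (e * B)."""
    return math.sqrt((n + 0.5) * HBAR * inv_B / E_CHARGE)

# Matching condition r_{n+1}(1/B) = r_n(1/B + Delta(1/B)), with the period
# fixed by 1/B = (n + 1/2) * Delta(1/B) as derived in the text
n, inv_B = 50, 0.1                      # B = 10 T; orbit number illustrative
delta = inv_B / (n + 0.5)
r_next = orbit_radius(n + 1, inv_B)
r_shifted = orbit_radius(n, inv_B + delta)
print(math.isclose(r_next, r_shifted))  # -> True: the pattern repeats in 1/B

# Number of cylinders ~ Fermi energy / cyclotron energy at B = 1 T
E_F = 0.1                               # eV, the value used in the text
E_c = HBAR * 1.0 / M_E                  # hbar*e*B/m_e expressed in eV (e cancels)
print(E_c * 1e3)                        # ~0.12 meV
print(E_F / E_c)                        # ~10^3 cylinders
```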
Also bio-superconductivity is a quantum critical phenomenon, and this observation suggests a
sharpening of the existing TGD based model of bio-superconductivity. Superconductivity would
occur for critical magnetic fields for which the largest cyclotron orbit is at the surface of the flux
tube, so that the system is quantum critical. Quantization of magnetic fluxes would quantify the
quantum criticality. The variation of the magnetic field strength would serve as a control tool
generating or eliminating supracurrents. This conforms with the general vision about the role of
dark magnetic fields in living matter.
To sum up, a breakthrough of TGD is taking place. I have written about thirty articles during this year
- more than one article per week. There is a huge garden there and the trees are full of low-hanging
fruit! It is very easy to pick it: just shake the tree and let the fruit drop into the basket! New
experimental anomalies having a nice explanation in terms of TGD-based concepts appear on a weekly
basis, and the mathematical and physical understanding of TGD is advancing in great leaps. It is a pity
that I must do it all alone. I would like to share. I can only hope that colleagues could take the difficult
step: admit what has happened and make a fresh start.
See the article Does the physics of SmB6 make the fundamental dynamics of TGD directly visible? For
a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 9:11 AM
2 Comments:
At 3:43 PM,
L. Edgar Otto said...
Matti, there is a Yeats poem in which the roebuck would describe God as a roebuck. If there is
such a bigger picture then many will see their own take on general theory in this... Lubos
seems to understand this is an important breakthrough but tries to incorporate it into the
methods of string theory. Of course you predicted some of this as having connections to
other representations of a more general physics. You see, it is not clear to me at all, in
our exchanging of models, that you are alone, and that confirms your long speculations.
You are just a little topologically insulated is all :-)
At 10:04 PM, [email protected] said...
Edgar, I am not a man of one theory. I am not a fanatic. 37 years of hard work and what
is happening on the experimental side (and what is not happening on the theoretical side)
force me to talk straight. It is my duty.
In every breakthrough in physics, some theory has been selected by its internal virtues and
others have dropped away from consideration. This is cruel but unavoidable. In the recent
situation, when the AMS is warning about the disastrous effects of the only-game-in-town
mentality of superstringers and about forgetting the experimental reality, it is important that
theories which really work would get the attention they deserve. Unfortunately, this is not
the case.
I of course understand the motivations of Lubos, and it is nice that he talks about the
experimental side. And strings are an important part of physics, but definitely not in the
manner that superstringers want to think. Strings are the key element of TGD too. But
they live in 4-D space-time, and one avoids the nonsensical attempt to reduce physics to
that of 10-D blackholes. By extending the superconformal symmetries one gets many nice
things as a bonus: one understands the 4-dimensionality of space-time and many other
things. If the only competitor of TGD trying to explain SmB6 is a theory describing it in
terms of 10-D blackholes, there is very little doubt about the winner in the long run.
I want to make clear that I am not saying this to underline my own contributions. It just
happened to be me who first became conscious of TGD. It could have been Witten or
some other "name" or "non-name".
07/04/2015 - http://matpitka.blogspot.com/2015/07/deconstruction-and-reconstructionin.html#comments
Deconstruction and reconstruction in quantum physics and conscious experience
Deconstruction means roughly putting something into pieces. Often deconstruction is thought to
involve also reconstruction. This process is applied in deconstructivist architecture, as one can learn
by going to Wikipedia, and also cubism brings to mind this kind of approach. Reconstruction organizes
typical features of existing styles in a new - one might even say "crazy" - manner. There can even be a
kind of "social interaction" between buildings: as if they were communicating by exchanging features.
A similar recombination of elements from various styles has appeared also in music - neoclassicism
comes to mind immediately.
Postmodernism is a closely related movement and claims that truths are social constructs: the great
narratives are dead. Nothing could irritate more a physicist who has learned how many mistakes,
wrong tracks, and how much hard work are needed to distill the truth! Not everything goes! On the
other hand, one can argue that the recent state of stagnation at the frontier of theoretical physics
suggests that the postmodernists are right. Superstrings and the multiverse are definitely highly social
constructs: superstrings were the only game in town for decades, but now the American Mathematical
Society is worried that superstring theoreticians are spoiling the public image of science. The multiverse
was in fashion only a few years. Certainly one great narrative - the story of reductionism and
materialism, thought to find its final culmination in M-theory - is dead. It is however nonsense to claim
that all great narratives are dead. That telling alternative great narratives in respected journals is
impossible does not mean that they are dead!
But the association of deconstruction with postmodernism does not justify throwing away the ideas
of deconstruction and reconstruction. Rather, one can ask whether they could be made part of a new
great narrative about the physical world and consciousness.
1. Deconstruction and reconstruction in perception, condensed matter physics and in TGD
inspired theory of consciousness
Deconstruction and reconstruction appear in the construction of percepts, in condensed matter
physics, and are also part of TGD inspired theory of consciousness.
1.1 Perception
The very idea of deconstruction in architectural sense is highly interesting from the perspective of both
quantum physics and consciousness.
I was astonished when I learned about 35 years ago that the buildup of our perception involves very
concretely what I would now call deconstruction and reconstruction, and I could not understand why
this is so. First the sensory input is decomposed into features: edges, corners, positions, motions
analyzed into direction and velocity, colors,... Objects are replaced with collections of attributes:
position, motion, shape, surface texture, color,... Deconstruction occurs at the lower cortical layers.
After this, reconstruction takes place: various kinds of features are combined together through a
mysterious looking process of binding - and the outcome is a percept.
Reconstruction can also occur in a "wrong" manner. This happens in hallucinations, delusions, and
dreams. Humour is based on associating "wrong" things together, making intentional category errors.
Synesthesia involves association between different sensory modalities: a note with a given pitch has a
characteristic color, or numbers correspond to colors or shapes. I remember an article telling how
subject persons under hypnosis can experience what a circle with four corners looks like. Some attribute
can be lacking from the reconstruction: a person can perceive the car as an object but not its motion.
The car is there now and a moment later it is here. Nothing in between.
Also non-standard reconstructions are possible. Could these non-standard reconstructions define a
key aspect of creativity? Could reconstruction in some lucky situations create a new idea rather than a
hallucination or delusion?
A few years ago I listened to a radio documentary about a professional who builds soundscapes for
movies and learned that the construction of a soundscape is deconstruction followed by reconstruction.
One starts from natural sounds, but as such they are not very impressive: driving a car over someone
does not create any dramatic sound effect - just "splat" - nothing else. This is so undramatic that it has
been used to create black humour. To cure the situation the real sounds are analyzed into features and
then reconstructed by amplifying some features and throwing away the unessential ones. The fictive
output sounds much more real than the real input. Actors are masters of this technique, and this is why
watching videos of ordinary people doing something funny is like watching autistic ghosts. And if you
look at the collection of sound modules of a video game you see modules with names like "Aargh",
"Ouch", "Bangggg", etc.
Association is the neuroscientist's key notion allowing one to get an idea of what happens in
reconstruction. Reconstruction involves the association of various features to form percepts. First this
process occurs for the various sensory modalities. These intermediate sensory percepts are then
combined into a full percept in the association regions.
But what are associations at a deeper level? What are features? A heretic could ask whether they
could correspond to conscious experiences - not conscious to us, but conscious at a lower, sub-conscious
level. The reader perhaps noticed that deconstruction and reconstruction took place here: the student is
not supposed to ask this question, since most theories of consciousness for some funny reason - maybe a
pure accident - make the assumption that consciousness has no structure - no selves with sub-selves
with sub-selves with... For a physicist this kind of deconstruction of consciousness is very natural. How
do these features bind to our conscious percepts? Neuroscience alone cannot tell much about this since
it is based on physicalism: the "hard problem" articulates this dead end.
The following considerations represent deconstructions and reconstructions, and I will not explicitly
mention when this happens - just a warning.
1.2 Condensed matter physics
One must bring in some basic notions of quantum theory if one wants to reduce de- and
reconstruction to quantum physics. The key mathematical fact is that in quantum theory each particle in a
many-particle state corresponds to a tensor factor of the state space of the entire system. This notion is very
difficult to explain without actually giving a lecture series on quantum theory, and in the following I prove
that this is indeed the case.
1. The space of quantum states of a system is the basic notion: technically it is known as a Hilbert
space, which can have finite or infinite dimension (and infinite in many senses!).
The basic idea is that one can build bigger Hilbert spaces as tensor products of smaller ones. If you have
Hilbert spaces of dimensions n1 and n2, the tensor product has dimension n1 × n2. This is
algebraically like multiplying numbers, and one can indeed identify prime Hilbert spaces as those
with prime dimension. Also direct sums of Hilbert spaces are possible.
Hilbert spaces represent physical systems: say an electron and a proton. To describe a world
consisting of a proton and an electron, one forms the tensor product of the electron and proton Hilbert
spaces. This is somewhat like playing with legos.
I was cheating a little bit. Life is not quite so simple. One can also form bound states of two
systems - say a hydrogen atom from a proton and an electron - and the bound states of the hydrogen atom
represent only a sub-space of the tensor product. The Connes tensor product is a more exotic example:
only certain kinds of entangled states in the tensor product, for which the composites are strongly
correlated, are allowed. As a matter of fact, also gluing the legos together creates strong
correlations between them, so that it serves as a good analogy for the Connes tensor product and the
tensor product assignable to bound states.
2. Even elementary particles have several degrees of freedom - say spin and charge - to which one
can assign Hilbert spaces decomposing formally into a tensor product of Hilbert spaces associated
with these degrees of freedom. A sub-space of the full tensor product is allowed, and one can
purely formally say that an elementary particle is a bound state of even more elementary particles.
This is somewhat like a written word, which has meaning to us, consisting of letters, which as such represent
nothing to us (but could represent something to lower-level conscious entities). Could it be
possible to apply deconstruction to elementary particles?
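The tensor-product bookkeeping above can be sketched in a few lines; the Kronecker product implements the tensor product of state vectors. The 2-level spin and 3-level "orbital" systems below are my own illustrative choices, not anything specific to the text:

```python
import numpy as np

# Tensor product of Hilbert spaces: a dimension-n1 space and a dimension-n2
# space combine into a space of dimension n1*n2. For state vectors the tensor
# product is the Kronecker product.

def tensor(*states):
    """Tensor product of state vectors, one factor per subsystem."""
    out = np.array([1.0 + 0j])
    for s in states:
        out = np.kron(out, s)
    return out

# A spin-1/2 degree of freedom (dimension 2) and a toy 3-level "orbital" one.
spin_up = np.array([1, 0], dtype=complex)
orbital = np.array([0, 1, 0], dtype=complex)

psi = tensor(spin_up, orbital)
print(psi.shape)  # (6,) - dimension 2*3 = 6
```

A prime-dimensional Hilbert space cannot arise this way from smaller factors, which is the sense in which prime dimensions are "prime Hilbert spaces".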
Now comes the surprise: condensed matter physicists discovered deconstruction a long time
ago! A condensed matter electron can be deconstructed under some circumstances.
1. An electron in the valence band of a conductor has three kinds of degrees of freedom labelled by spin,
charge, and orbital state - the state of the electron in the atom - characterizing the valence band. The state of the
electron decomposes in a purely formal sense into a bound state of spinon, chargon, and holon
carrying the spin, charge, and phase of the electron wave function. Could one deconstruct this bound
state into its composites? If so, one would have effectively three particles - three quantum waves
moving with different velocities. For free electrons obeying the Dirac equation this is certainly
impossible. But this might be (and is!) possible in condensed matter.
Instead of a single wave motion there can be three free wave motions occurring with different
velocities (wave vectors) corresponding to spinon, chargon, and holon. In popular articles this
process is called "splitting" of the electron. The term is an optimal choice if the purpose is to create
profound misunderstandings in the lay reader, who naturally associates splitting with a
geometric process of breaking a tiny ball into pieces. As already explained, it is the Hilbert space which
is split into tensor factors, not the tiny ball. The correlations between factors forced by the bound
state property are broken in this divorce of degrees of freedom.
2. What condensed matter theorists propose is roughly the following. The consideration is restricted to
effectively one-dimensional systems - wires. The electron has spin, charge, and orbital degrees of
freedom if it is in the conduction band and delocalized, and thus shared by the atoms. Usually these
degrees of freedom are bound into a single entity.
The holy trinity of charge, spin, and orbital degrees of freedom can however be split under
some circumstances prevailing in condensed matter. The phase of the spinor representing the
electron can vary along the wire and defines a wave motion with some velocity/wave vector
assignable to the ordinary electric current. The spin of the electron can rotate at each point, and the
phase of this rotation can vary along the wire, so that one obtains a quantum wave moving along the wire with a
velocity different from that for charge: this is a spin wave having as a classical analog the rotation of
bicycle pedals. If the wire is a linear lattice of atoms, the orbital excitation can also vary along
the wire, and a third quantum wave moving with its own velocity is possible. One has three
particle-like entities moving with different velocities! Such waves are certainly not
possible for the solutions of the Dirac equation representing freely moving fermions, and particle
physicists do not encounter them.
3. These wave motions are different from the wave motions associated with phonons and magnons.
For sound it is a periodic oscillation of the position of an atom which propagates in the sound wave. For a
magnon it is a change of spin value which propagates and defines a spin-1 collective excitation.
A spinon as a quasiparticle has spin 1/2, so that spinon and magnon are different things. A spinon is a
formal constituent of the electron made visible by the condensed matter environment. A magnon is a
collective excitation of the condensed matter system.
Spin currents provide an example of a situation in which spin and charge currents can flow at
different speeds; they are becoming important in a new technology known as spintronics. Spin
currents have very low resistance, and the speculation is that they might relate to high-Tc
superconductivity.
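The picture of three independent wave motions can be made concrete with a toy factorized wavefunction; the mode names, wave vectors, and frequencies below are arbitrary illustrative assumptions, not condensed matter data:

```python
import cmath

# Sketch of "splitting": a formally factorized wavefunction whose three phase
# factors - labelled spinon, chargon, orbital - carry independent wave vectors
# k and frequencies w, hence independent phase velocities v = w/k.
modes = {
    "spinon":  (2.0, 1.0),   # (k, w) -> v = 0.5
    "chargon": (1.0, 2.0),   # v = 2.0
    "orbital": (4.0, 1.0),   # v = 0.25
}

def factor(k, w, x, t):
    return cmath.exp(1j * (k * x - w * t))

def psi(x, t):
    """Product of the three phase factors: no single common velocity exists."""
    out = 1 + 0j
    for k, w in modes.values():
        out *= factor(k, w, x, t)
    return out

# Each factor translates rigidly at its own velocity: f(x, t) = f(x - v*t, 0).
for name, (k, w) in modes.items():
    v = w / k
    assert cmath.isclose(factor(k, w, 1.3, 0.7), factor(k, w, 1.3 - v * 0.7, 0.0))

print([w / k for k, w in modes.values()])  # [0.5, 2.0, 0.25]
```

The point is only that a product state supports several rigidly moving phase patterns at once, which a single free-particle plane wave cannot.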
From the articles that I have seen, one might conclude that deconstruction is in practice possible only
for effectively 1-dimensional systems. I do not see any obvious mathematical reason why
deconstruction could not occur also in higher-dimensional systems.
It is however true that 1-dimensional systems are physically very special. Braid statistics replaces
ordinary statistics, bringing in a lot of new effects. Furthermore, 2-D integrable gauge theories allow one to
model interactions as permutations of quantum numbers and lead to elegant models describing the
deconstructed degrees of freedom as quantum fields in 2-D Minkowski space, with interactions reducing to
2-particle interactions describable in terms of an R-matrix satisfying the Yang-Baxter equations. It is
difficult to say how much the association of deconstruction with 1-D systems is due to the fact that they are
mathematically easier to handle than higher-D ones and that the machinery already exists.
The rise of superstring models certainly was to a high degree due to this technical easiness. As I tried
to tell about 3-surfaces replacing strings as fundamental dynamical objects, the almost reflex-like
debunking of this idea was to say that the super-conformal invariance of superstring models is lost and the
theory is not calculable and does not even exist - period. It indeed took a long time to realize that super-conformal
symmetry allows a huge generalization when space-time is 4-D and the imbedding space has
Minkowski space as its Cartesian factor. Twistorial considerations fix the imbedding space uniquely
to M4 × CP2. The lesson is clear: a theoretician should be patient and realize that theory building is much
more than going to the math library and digging out the needed mathematics. Maybe colleagues are mature enough to
learn this lesson some day.
1.3 TGD-inspired theory of Consciousness
The believer in quantum consciousness of course wonders what the quantum counterparts
of de- and reconstruction as mechanisms of perception could be. It would seem that the analysis and synthesis of the
sensory input deconstructs the mental image associated with it into features - perhaps simpler fundamental
mental images - and reconstructs from these the percept as a mental image. What does this correspond to at
the level of physics?
Before one can really answer, one must understand what the quantum physical correlates of a mental
image are. How do mental images die and get born? What are features as mental images? What does their
binding to sensory percepts mean physically?
Here I can answer only on my own behalf, and to do so I must introduce the basic notions and ideas of
TGD-inspired theory of consciousness. I will not go into details here, because I have done this so many
times, and just suggest reading some basic material about TGD-inspired theory of Consciousness.
Suffice it to list just the basic ideas and notions.
1. Zero energy ontology (ZEO), the closely related causal diamonds (CDs), and the hierarchy of Planck
constants assignable to quantum criticality are basic notions. The number theoretic vision is also
central. In particular, adelic physics, fusing real physics and various p-adic physics as correlates
for cognition, is also a basic building brick.
2. TGD-inspired theory of Consciousness can be seen as a generalization of quantum
measurement theory constructed to solve the basic problems of ordinary quantum measurement
theory: the observer becomes a "self" - a conscious entity - described by physics and a part of the physical
system rather than being an outsider. Consciousness does not cause state function reduction:
consciousness is state function reduction. Consciousness is therefore not in the world but
between two worlds. This resolves the basic paradox of quantum measurement theory, since there
are two causalities: the causality of consciousness and the causality of field equations.
Negentropy Maximization Principle (NMP) defines the basic variational principle. The strong
form of NMP states that the negentropy gain in state function reduction is maximal. The weak form
of NMP leaves room for the free will of self in that self can also choose a non-maximal negentropy gain. This
makes possible a universe with ethics and morals, with good defined as something which increases the
negentropy resources of the Universe.
Self hierarchy is the basic notion of TGD-inspired theory of consciousness. Self experiences its
sub-selves as mental images. Self corresponds to a sequence of state function reductions to the same
boundary of a causal diamond (CD). In standard quantum measurement theory this sequence does
not change the state, but in the TGD framework the state at the opposite boundary of the CD - and even the
opposite boundary itself - changes. This gives rise to the experienced flow of time, having the increase of
the temporal distance between the tips of the CD as a geometric correlate. Self dies as the first
reduction to the opposite boundary takes place, and it re-incarnates at the opposite boundary as its
time reversal. Negentropy Maximization Principle forces this to occur sooner or later. The
continual birth and death of mental images supports this view if one accepts the idea about the
hierarchy. One can also consider a concrete identification of what the change of the arrow of
time means for a mental image (see this).
3. Magnetic bodies carrying dark matter identified as heff = n × h phases of ordinary matter define
quantum correlates for selves. The magnetic body has a hierarchical onion-like structure, and it
communicates with the biological body using dark photons propagating along magnetic flux tubes.
EEG and its fractal generalizations make possible both communication from the biological body to the
magnetic body and control of the former by the latter. The dark matter hierarchy can be reduced to quantum
criticality, and this in turn has deep roots in adelic physics. The magnetic body means an
extension of the usual organism-environment pair to a triple involving the magnetic body as an
intentional agent using the biological body for its purposes.
What could deconstruction mean in TGD-inspired theory of consciousness?
1. The restriction of deconstruction to the degrees of freedom of an elementary particle is unnecessarily
restrictive. One can consider also larger units such as ..., molecules, cells, ..., and the
corresponding magnetic bodies and their representations using tensor products.
2. Besides bound state formation, also negentropic entanglement (NE) allows states which are
almost stable with respect to NMP. One can imagine two kinds of NE which can be metastable
with respect to NMP. In the first case the density matrix is a projector with n identical eigenvalues.
This state can be an outcome of a state function reduction, since it is an eigenstate
of the universal observable defined by the density matrix.
The density matrix has matrix elements in an algebraic extension of rationals
characterizing the system's place in the evolutionary hierarchy. It can also happen that the eigenvalues
of the density matrix (probabilities) do not belong to this extension. One can argue that, since
diagonalization is not possible inside the extension, also state function reduction is impossible
without a phase transition extending the extension and identifiable as a kind of evolutionary step.
This is assumed - at least tentatively.
Both kinds of NE would have a natural place in the world order. The first kind of NE would
correspond to a kind of enlightened consciousness, since any orthonormal state basis would
define an eigenstate basis of the density matrix. Schrödinger's cat would be exactly half alive and half
dead, or exactly half X and half Y, where X and Y are any orthonormal superpositions of
alive and dead. For the second kind of NE there would be a unique state basis. For instance, the
cat could be 1/2^(1/2) alive and 1 - 1/2^(1/2) dead. The words dead and alive have meaning. This would
correspond to a state of rational mind discriminating between things. If a phase transition
bringing 2^(1/2) into daylight takes place, state function reduction makes the cat fully alive or fully
dead.
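The difference between the two kinds of NE can be illustrated with a minimal linear-algebra sketch; the 2-dimensional matrices are my own toy choices:

```python
import numpy as np

# Case 1: density matrix proportional to a projector (identical eigenvalues).
# Every orthonormal basis of its range is then an eigenbasis: no preferred basis.
rho1 = np.eye(2) / 2.0

# Rotate the basis by an arbitrary angle; rho1 is unchanged.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(U @ rho1 @ U.T, rho1))  # True: any basis diagonalizes rho1

# Case 2: distinct eigenvalues (here 1/2**0.5 and 1 - 1/2**0.5).
# The eigenbasis - "alive"/"dead" - is unique up to phases.
p = 1 / np.sqrt(2)
rho2 = np.diag([p, 1 - p])
print(np.allclose(U @ rho2 @ U.T, rho2))  # False: the rotated basis fails
```

This is just the standard degenerate vs non-degenerate eigenvalue dichotomy; the text's interpretation of the two cases is specific to TGD.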
3. In the condensed matter example the velocity of the quantal wave motion serves as a criterion allowing one
to tell whether the degrees of freedom bind or not. Velocity/wave vector is obviously too limited a
criterion for binding or its absence. In neuroscience the coherence of EEG is seen as a signature
of binding: maybe oscillation with the same EEG frequency could serve as the signature of the fusion of
mental images into a larger one. In TGD-inspired theory of consciousness EEG frequencies
correspond to differences of generalized Josephson frequencies, that is sums of the Josephson
frequency for the resting potential and of the difference of cyclotron frequencies for ions at
different sides of the cell membrane (see this, this, and this).
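Since cyclotron frequencies recur below, a minimal sketch of the standard cyclotron formula f_c = qB/(2*pi*m) may help. The field value 0.2 gauss is an assumption borrowed from TGD writings on the "endogenous" magnetic field, not part of this text, and the ion choice is illustrative:

```python
import math

e = 1.602176634e-19    # elementary charge, C
u = 1.66053906660e-27  # atomic mass unit, kg
B = 0.2e-4             # T; 0.2 gauss, an assumed "endogenous" field value

def cyclotron_frequency(charge_in_e, mass_in_u, B_tesla=B):
    """Classical cyclotron frequency f_c = qB / (2*pi*m)."""
    return charge_in_e * e * B_tesla / (2 * math.pi * mass_in_u * u)

# Ca2+ (charge 2e, mass ~40 u) lands near 15 Hz, i.e. in the EEG range.
print(round(cyclotron_frequency(2, 40.078), 1))  # about 15.3
```

The point is only that for physiological ions and weak fields these frequencies fall in the EEG band, which is why the identification above is even conceivable.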
4. At the level of magnetic flux tubes, binding would correspond to a reconnection of the magnetic flux
tubes of a synchronously firing region to form a larger structure for which the magnetic field
strength is the same for the composites and therefore also the cyclotron frequencies are identical.
Reconstruction would have a concrete geometric correlate at the level of magnetic flux tubes as
reconnection. Different parts of the brain containing quantum states serving as mental images
defining features would be connected by flux tubes of the magnetic body, and binding of mental
images would take place.
5. In TGD-inspired quantum biology, dark matter identified as large heff = n × h phases gives rise to a
deconstruction if one accepts the hypothesis heff = hgr = GMm/v0, where M represents the mass of dark
matter and m the particle mass (see this and this). Here hgr is assigned to a flux tube connecting the
masses M and m, and v0 is a velocity parameter characterizing the system. This hypothesis
implies that the dark cyclotron energy Ec = hgr × fc, where fc is the cyclotron frequency, is
independent of the particle mass: a universal cyclotron energy spectrum is the outcome. The dark
cyclotron photons can transform to ordinary photons identified as biophotons, having an energy
spectrum in the visible and UV range where also the energy spectrum of molecules lies. The magnetic
body could use dark photons to control bio-chemistry.
What makes this so remarkable is that particles with magnetic dipole moment possessing
different masses correspond to different values of heff and reside at different magnetic flux tubes.
This is mass spectroscopy - or deconstruction of matter by separating charged particles with
different masses into their own dark worlds! Dark living matter would not be a random soup of
particles: each charged particle (also neutral particles with a magnetic dipole moment) would sit
neatly on its own shelf labelled by hgr!
In TGD-inspired theory of Consciousness, magnetic flux tubes can be associated with
magnetic bodies serving as correlates of selves, so that deconstruction of mental images would
reduce to this process, with each charged particle representing one particular combination and
perhaps also a quale (see this).
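The mass-independence of the dark cyclotron energy follows from the two formulas by direct cancellation, which a few lines of code confirm. All numerical values below are placeholder assumptions chosen only to exhibit the cancellation:

```python
import math

# With h_gr = G*M*m/v0 and f_c = Z*e*B/(2*pi*m), the cyclotron energy
# E = h_gr * f_c = G*M*Z*e*B/(2*pi*v0): the particle mass m cancels.
G  = 6.674e-11   # gravitational constant
M  = 5.97e24     # stand-in for the dark mass M (Earth mass, arbitrary)
v0 = 1.0e5       # velocity parameter, arbitrary
e  = 1.602e-19   # elementary charge
B  = 2.0e-5      # 0.2 gauss, an assumed field value

def cyclotron_energy(m, Z=1):
    h_gr = G * M * m / v0                 # gravitational Planck constant
    f_c  = Z * e * B / (2 * math.pi * m)  # cyclotron frequency
    return h_gr * f_c

m_proton, m_electron = 1.67e-27, 9.11e-31
print(math.isclose(cyclotron_energy(m_proton),
                   cyclotron_energy(m_electron)))  # True: m has cancelled
```

This is the algebra behind the "universal cyclotron energy spectrum" claim; whether the hypothesis itself holds is of course a separate physical question.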
What about re-construction in this framework?
1. In reconstruction, flux tube connections between two subsystems representing sub-selves
(experienced by self as mental images) would be formed so that they fuse to a single system
characterized by the same cyclotron frequency. The flux tube connection would be formed by the
reconnection of U-shaped flux tubes to form a single pair of flux tubes connecting the
systems.
Resonant exchange of dark cyclotron photons and also dark super-conductivity would
accompany this process. This process would represent a correlate for directed attention and
would take place already at the bio-molecular level. I have proposed that bio-molecules with
aromatic rings, in which circulating electron pair currents generate magnetic bodies, are especially
important and represent in some sense the fundamental level of the self hierarchy at the molecular level
(see this). In the brain, different regions could connect to a single coherently firing region in
this manner.
2. The magnetic bodies associated with brain regions representing features could be connected in
this manner into larger sub-selves. Negentropic quantum entanglement - a purely TGD-based
notion - could define a further correlate for the binding. This entanglement could take place in
discrete degrees of freedom related to the hierarchy heff = n × h of Planck constants, having no
correlate in standard physics. The discrete degree of freedom would correspond to the n sheets of
singular coverings representing space-time surfaces. The sheets would coincide at the ends of
causal diamonds (CDs): one possible interpretation (holography allows many of them) could be
that entire closed 3-surfaces, formed by the unions of space-like 3-surfaces at the boundaries of
CD and light-like 3-surfaces connecting them, serve as basic objects.
3. Reconstruction by negentropic quantum entanglement and flux tube connections inducing
resonance could also lead to non-standard composites. Synesthesia could be understood in this
manner, and even the sensory experience of a circle with four corners could be understood. The
binding of left- and right-brain visual experiences into a single one could take place through
negentropic entanglement and effectively generate the experience of the third dimension. The
dimensions would not however simply add: one has a 3-D experience instead of a 4-D one. The dream of a
mathematician is to perceive directly higher-dimensional objects. Could sensory perception of
higher than 3-D objects be possible by a reconstruction fusing several visual percepts - maybe
even from different brains - together? Could higher levels of the self hierarchy carry out this kind of
reconstruction? Could Mother Gaia fuse our experiences into a single experience of what it is to
be humankind, a species, or the biosphere?
2. Could condensed matter physics and consciousness theory have something to share?
Magnetic bodies are present in all scales, and one can ask whether consciousness theory and
condensed matter physics might have something in common. Could the proposed picture of matter as
consisting of selves with sub-selves with ... defining analogs of quasiparticles and collective excitations
make sense even at the level of condensed matter? Could deconstruction and reconstruction of mental
images identifiable as sub-selves take place already at this level and have an interpretation in terms of
primitive information processing building standardized primitive mental images?
Deconstruction need not be restricted to the electron, and velocity could be replaced by oscillation
frequency for various fields: at the quantum level there is actually no real distinction, since in quantum
theory velocity defines a wave vector. Also more complex objects - atoms, molecules, etc. - could be
deconstructed, and the process could occur at the level of magnetic bodies and involve in an essential
manner reconnection and other "motor actions" of flux tubes. The notions of quasi-particle and
collective excitation would generalize dramatically, and the general vision about the basic mechanism
might help to understand this zoo of exotics.
Future condensed matter theorists might also consider the possibility of reconstruction in a new
manner giving rise to analogs of synesthesia. Could features from different objects be recombined to
form exotic quasi-objects having parts all around? Could dark matter in the TGD sense be involved in an
essential manner? Could cyclotron resonance or its absence serve as a correlate for the binding? Note
that the disjoint regions of space would be in a well-defined sense near to each other in the reconstructed
state. Topology would be different: effective p-adic topology could provide a natural description for the
situation: in p-adic topology, systems at infinite distance in the real sense can be infinitesimally close to each
other p-adically.
See the article Deconstruction and reconstruction in quantum physics and conscious experience. For a
summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 1:23 AM
2 Comments:
At 12:44 AM,
Leo Vuyk said...
You wrote:
The lesson is clear: theoretician should be patient and realize that theory building is much
more than going to math library and digging the needed mathematics. Maybe colleagues
are mature to learn this lesson some day.
I would advise writing a LuLu book on it - it's free!
At 1:45 AM,
[email protected] said...
Maybe sometime. Now I am busily gathering the fruits. Still some years left;-).
It is really amazing how fast things go. A few years ago I had a science-fictive vision that
the reductionistic dogma might some day be given up. Now it has happened! This is not
due to theoretical advances but to simple experimental facts discovered in condensed matter
physics.
06/30/2015 - http://matpitka.blogspot.com/2015/06/gaussian-mersennes-in-cosmologybiology.html#comments
Gaussian Mersennes in cosmology, biology, nuclear, and particle physics
p-Adic length scale hypothesis states that primes slightly below powers of two are physically
preferred ones. Mersenne primes Mn = 2^n - 1 obviously satisfy this condition optimally. The proposal
generalizes to Gaussian Mersenne primes MG,n = (1+i)^n - 1. It is now possible to understand preferred
p-adic primes as so-called ramified primes of an algebraic extension of rationals to which the parameters
characterizing string world sheets and partonic 2-surfaces belong. Strong form of holography is crucial:
space-time surfaces are constructible from these 2-surfaces: for p-adic variants the construction should
be easy thanks to the presence of pseudo-constants. In the real sector continuation is very probably possible only in
special cases. In the framework of consciousness theory the interpretation is that in this case
imaginations (p-adic space-time surfaces) are realizable. Also the p-adic length scale hypothesis can be
understood and generalizes: primes near powers of any prime are preferred.
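The Gaussian Mersenne condition - that MG,n = (1+i)^n - 1 is a prime of the Gaussian integers Z[i] - can be checked directly. The sketch below is my own illustration, using the standard norm criterion for Gaussian primes and a probabilistic Miller-Rabin test for the (large) norms:

```python
import random

def is_probable_prime(n, rounds=30):
    """Miller-Rabin primality test (probabilistic for large n)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def is_gaussian_mersenne(n):
    """Is (1+i)**n - 1 prime in Z[i]? A Gaussian integer with prime norm
    a*a + b*b is prime; a purely real/imaginary one is prime iff the
    nonzero part is (up to sign) a rational prime congruent to 3 mod 4."""
    a, b = 1, 0
    for _ in range(n):
        a, b = a - b, a + b   # multiply (a + b*i) by (1 + i)
    a -= 1                    # now (a, b) represents (1+i)**n - 1
    if a == 0 or b == 0:
        q = abs(a + b)
        return q % 4 == 3 and is_probable_prime(q)
    return is_probable_prime(a * a + b * b)

found = [n for n in range(2, 120) if is_gaussian_mersenne(n)]
print(found)  # [2, 3, 5, 7, 11, 19, 29, 47, 73, 79, 113]
```

The exponents recovered up to n = 113 match the ones quoted in the lists below, including the nuclear physics scale n = 113 and the conjectured new hadron physics at n = 79.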
The definition of the p-adic length scale is a convention to some degree.
1. One possible definition for Lp is as the Compton length for the smallest mass possible in p-adic
thermodynamics for a given prime if the first order contribution is non-vanishing.
2. A second definition is the Compton length Lp,e for the electron if it would correspond to the prime in
question: in good approximation one has Lp = 5^(1/2) × Lp,e from p-adic mass calculations. If the p-adic
length scale hypothesis is assumed (p ≈ 2^k), one has Lp,e = L(k,e) = 2^((k-127)/2) × Le, where Le is the electron
Compton length (the electron mass is .5 MeV). If one is interested in the Compton time T(k,e), one
obtains it easily from the electron's Compton time .1 seconds (defining the fundamental biorhythm) as
T(k,e) = 2^((k-2×127)/2) × 0.1 seconds. In the following I will mean by p-adic length scale T(k,e) ≈ 5^(1/2)
× T(k).
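The two scaling formulas can be written out in code. The electron Compton wavelength value is inserted by me as an assumption, and only structural relations of the formulas are checked, not the absolute scales quoted later:

```python
# p-adic scaling formulas from the text:
#   L(k,e) = 2**((k-127)/2) * Le  (length, anchored to the electron)
#   T(k,e) = 2**((k-2*127)/2) * 0.1 s  (time, anchored to 0.1 s at k = 254)
Le = 2.426e-12   # m; electron Compton wavelength h/(m_e c), my insertion

def L(k):
    return 2 ** ((k - 127) / 2) * Le

def T(k):
    return 2 ** ((k - 254) / 2) * 0.1   # seconds

print(T(254))           # 0.1 - the "fundamental biorhythm"
print(L(241) / L(239))  # 2.0 - the ratio of the two Earth-core scales below
```

Each step of two in k scales the length by 2^(1/2), so scales come in an exponential ladder; this is the pattern behind all the numerical coincidences listed in this posting.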
Mersenne primes Mn = 2^n - 1 are as near as possible to powers of two and are therefore of special interest.
1. Mersenne primes corresponding to n ∈ {2, 3, 5, 7, 13, 17, 19, 31, 61} are out of reach of recent
accelerators.
2. n=89 characterizes weak bosons and suggests a scaled-up version of hadron physics which
should be seen at LHC. There are already several indications for its existence.
3. n=107 corresponds to hadron physics and the tau lepton.
4. n=127 corresponds to the electron. Mersenne primes are clearly very rare and characterize much of
elementary particle physics as well as hadrons and weak bosons. The largest Mersenne prime
which does not define a completely super-astrophysical p-adic length scale is M127, associated with the
electron.
Gaussian Mersennes (complex primes for complex integers) are much more abundant, and in the
following I demonstrate that the corresponding p-adic time scales might define fundamental length
scales of cosmology, astrophysics, biology, nuclear physics, and elementary particle physics. I have not
previously checked the possible relevance of Gaussian Mersennes for cosmology and for the physics
beyond the standard model above LHC energies: there are as many as 10 Gaussian Mersennes besides 9
Mersennes above the LHC energy scale, suggesting a lot of new physics - perhaps copies of hadron physics or
weak interaction physics - in sharp contrast with the GUT dogma that nothing interesting happens above the
weak boson scale. The list of Gaussian Mersennes is the following.
1. n ∈ {2, 3, 5, 7, 11, 19, 29, 47, 73} correspond to energies not accessible at LHC. n=79 might
define a new copy of hadron physics above the TeV range - something which I have not considered
seriously before. The scaled variants of the pion and proton masses (of M107 hadron physics) are about
2.2 TeV and 16 TeV. Whether it is visible at LHC is a question mark to me.
2. n=113 corresponds to nuclear physics. The Gaussian Mersenne property and the fact that Gaussian
Mersennes seem to be highly relevant for life at cell nucleus length scales inspire the question
whether n=113 could give rise to something analogous to life and genetic code. I have indeed
proposed a realization of genetic code and analogs of DNA, RNA, amino-acids, and tRNA in terms
of dark nucleon states.
3. n = 151, 157, 163, 167 define 4 biologically important scales between the cell membrane thickness
and the cell nucleus size of 2.5 μm. This range contains the length scales relevant for DNA and its
coiling.
4. n = 239, 241 define two scales L(e,239) = 1.96×10^3 km and L(e,241) = 3.93×10^3 km differing by a
factor 2. The Earth's radius is 6.3×10^3 km; the outer core has radius 3494 km, rather near to L(e,241), and the
inner core radius 1220 km, which is smaller than 1960 km but of the same order of magnitude.
What is important is that Earth reveals the two-core structure suggested by the Gaussian Mersennes.
5. n=283: L(283,e) = .8×10^10 km defines the size scale of a typical star system. The diameter of the
solar system is about d = .9×10^10 km.
6. n=353: L(353,e) = 2.1 Mly, which is the size scale of galaxies. The Milky Way has a diameter of about 0.9
Mly.
7. n=367 defines the size scale L(367,e) = 2.8×10^8 ly, which is the scale of big voids.
8. n=379: The time scale T(379,e) = 1.79×10^10 years is slightly longer than the recently accepted
age of the Universe, about T = 1.38×10^10 years, and the nominal value of the Hubble time 1/H = 1.4×10^10
years. The age of the Universe measured using the cosmological scale parameter a(t) - equal to
the light-cone proper time for the light-cone assignable to the causal diamond - is shorter than t.
For me, these observations are without any exaggeration shocking, and they suggest that number theory is
visible in the structure of the entire cosmos. The standard skeptic of course saves his peace of mind by
labelling all this as numerology: human stupidity beats down even the brightest thinker. Putting it more
diplomatically: only an understood fact is a fact. TGD indeed allows one to understand these facts.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 12:23 AM
06/24/2015 - http://matpitka.blogspot.com/2015/06/transition-from-flat-tohyperbolic.html#comments
Transition from flat to hyperbolic geometry and q-deformation
In Thinking Allowed Original there was a link to a very interesting article with the title "Designing
Curved Blocks of Quantum Space-Time... Or how to build quantum geometry from curved tetrahedra in
loop quantum gravity" telling about the work of Etera Livine working at LPENSL (I let the reader
find out what this means;-).
The idea of the article
The popular article mentions a highly interesting mathematical result relevant for TGD. The idea is to
build 3-geometry - not by putting together flat tetrahedra or more general polyhedra along their
boundaries - but by using curved hyperbolic tetrahedra (or more generally polyhedra) defined in 3-D
hyperbolic space: the negative constant curvature space with the Lorentz group acting as isometries - the cosmic
time = constant section of standard cosmology.
As a special case one obtains a tesselation of 3-D hyperbolic space H3. This is a somewhat trivial outcome,
so one performs a "twisting". Some words about tesselations/lattices/crystals are in order first.
1. In the 2-D case you would glue triangles (say) together to get a curved surface. For instance, at the
surface of a sphere you would get a finite number of lattice-like structures: the five Platonic solids -
tetrahedron, cube, octahedron, icosahedron, and dodecahedron - which are finite geometries
assignable to finite fields corresponding to p = 2, 3, and 5 and defining the lowest approximation of
p-adic numbers for these primes.
2. In 2-D hyperbolic plane H2 one obtains hyperbolic tilings used by Escher (see this).
3. One can also consider a decomposition of hyperbolic 3-space H3 into a lattice-like structure -
essentially a generalization of ordinary crystallography from flat 3-space E3 to H3. There are
indications for a quantization of cosmic redshifts completely analogous to the quantization of the
positions of lattice cells, and my proposal is that they reflect the existence of a hyperbolic crystal
lattice in which astrophysical objects replace atoms. Macroscopic gravitational quantum
coherence due to the huge value of the gravitational Planck constant could make them possible.
Back to the article and its message. The condition for the tetrahedron property, stating in the flat case that the
sum of the 4 normal vectors vanishes, generalizes and is formulated in the group SU(2) rather than in
E3 (Euclidean 3-space). The popular article states that the deformation of the sum into a product of SU(2) elements
is equivalent with a condition defining classical q-deformation of the gauge group. If this is true, a
connection between "quantum quantum mechanics" and hyperbolic geometries might therefore exist and
would correspond to a transition from flat E3 to hyperbolic H3.
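The flat limit of the product condition can be illustrated numerically: for small rotation angles, four SU(2) elements multiply to the identity precisely when the corresponding rotation vectors sum to zero. This is a toy sketch under my own conventions, not the article's actual construction:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def su2(vec):
    """Return exp(-i * theta * n.sigma / 2) for vec = theta * n."""
    theta = np.linalg.norm(vec)
    if theta == 0:
        return np.eye(2, dtype=complex)
    n = vec / theta
    H = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * H

rng = np.random.default_rng(0)
eps = 1e-4
v = eps * rng.standard_normal((3, 3))  # three small random "normal vectors"
v4 = -v.sum(axis=0)                    # flat closure: the four vectors sum to 0

# To leading order in eps the group product then closes to the identity;
# the deviation is O(eps**2), coming from the non-commutativity of SU(2).
g = su2(v[0]) @ su2(v[1]) @ su2(v[2]) @ su2(v4)
print(np.allclose(g, np.eye(2), atol=1e-7))  # True
```

At finite angles the product condition is genuinely non-Abelian, which is where the q-deformation claimed in the article is supposed to enter.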
Let loop gravity skeptic talk first
This looks amazing, but it is better to remain skeptical since the work relates to loop quantum gravity
and involves specific assumptions and different motivations.
1. For instance, the hyperbolic geometry is motivated by attempts at a quantum geometry
producing a non-vanishing and negative cosmological constant by introducing it through
fundamental quantization rules rather than as a physical prediction, and by using only algebraic
conditions which allow representation as a tetrahedron of hyperbolic space. This is alarming to
me.
2. In loop quantum gravity, one tries to quantize discrete geometry. Braids are essential for
quantum groups unless one wants to introduce them independently. In loop gravity one considers
strings defining 1-D structures, and the ordering of points representing particles at a string like
entity might be imagined in this framework. I do not know enough loop gravity to decide
whether this condition is realized in the framework motivating the article.
3. In zero energy ontology, hyperbolic geometry emerges in a totally different manner. One wants
only a discretization of geometry to represent classically finite measurement resolution, and
Lorentz invariance fixes it at the level of the moduli space of CDs. At the space-time level the
discretization would occur for the parameters characterizing string world sheets and partonic
2-surfaces defining "space-time genes" in the strong form of holography.
4. One possible reason to worry is that H3 allows an infinite number of different lattice like structures
(tessellations) with the analog of a lattice cell defining a hyperbolic manifold. Thus the
decomposition would be highly non-unique, and this poses practical problems if one wants to
construct 3-geometries using polyhedron like objects as building bricks. The authors mention
twisting: probably this is what would allow one to obtain also other 3-geometries than 3-D hyperbolic
space. Could this resolve the non-uniqueness problem?
I understand (on the basis of this) that a hyperbolic tetrahedron can be regarded as a hyperbolic 3-manifold and gives rise to a tessellation of hyperbolic space. Note that in the flat case a tetrahedral
crystal is not possible. In any case, there is an infinite number of this kind of decompositions
defined by discrete subgroups G of the Lorentz group, completely analogous to the
decompositions of flat 3-space into lattice cells: now G replaces the discrete group of translations
leaving the lattice unaffected. An additional complication from the point of view of loop quantum
gravity in the hyperbolic case is that the topology of the hyperbolic manifold defining the lattice cell
varies rather than being that of a ball as in the flat case (all Platonic solids are topologically balls).
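The picture of a discrete group replacing lattice translations is easiest to visualize in the 2-D analog H2 of point 2 above. The sketch below uses the modular group SL(2,Z) acting on the upper half-plane purely as an illustration (the article and the text concern discrete subgroups of SL(2,C) acting on H3): it moves a point into the standard fundamental domain and records which group element - the hyperbolic analog of a lattice translation - was needed.

```python
# Reduce a point z of the upper half-plane into the standard fundamental
# domain |Re z| <= 1/2, |z| >= 1 of the modular group PSL(2,Z), using the
# generators T: z -> z+1 and S: z -> -1/z. Each tile of the tessellation is
# a copy g(D) of the fundamental domain D with g in the discrete group --
# the hyperbolic counterpart of translating a flat lattice cell.

def reduce_to_fundamental_domain(z):
    """Return the reduced point and the SL(2,Z) matrix mapping z to it."""
    M = [[1, 0], [0, 1]]                    # accumulated group element
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    while True:
        n = round(z.real)
        z -= n                              # apply T^(-n): z -> z - n
        M = mul([[1, -n], [0, 1]], M)
        if abs(z) < 1 - 1e-12:
            z = -1 / z                      # apply S: z -> -1/z
            M = mul([[0, -1], [1, 0]], M)
        else:
            return z, M

z, g = reduce_to_fundamental_domain(0.7 + 0.05j)
print(z, g)
```

The returned matrix tells which copy of the fundamental domain the original point sat in; in H3 a discrete subgroup of SL(2,C) would play the same role.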
The notion of finite measurement resolution
The notion of finite measurement resolution emerged first in TGD through the realization that von
Neumann algebras known as hyperfinite factors of type II1 (perhaps also of type III1) emerge naturally in
the TGD framework. The spinors of the "world of classical worlds" (WCW), identifiable in terms of fermionic
Fock space, provide a canonical realization for them.
The inclusions of hyperfinite factors provide a natural description of finite measurement resolution,
with the included factor defining the sub-algebra whose action generates states not distinguishable from the
original ones. The inclusions are labelled by quantum phases coming as roots of unity and labelling also
quantum groups. Hence the idea that quantum groups could allow one to describe the quantal aspects of
finite measurement resolution whereas discretization would define its classical aspects.
p-Adic sectors of TGD define a correlate for cognition in the TGD Universe, and a cognitive resolution is
forced by number theory. Indeed, one cannot define the notion of angle in the p-adic context but one can
define phases in terms of algebraic extensions of p-adic numbers defined by roots of unity: hence a finite
cognitive resolution is unavoidable and might have a correlate also at the level of real physics.
The discrete algebraic extensions of rationals forming a cognitive and evolutionary hierarchy induce
extensions of the p-adic numbers appearing in the corresponding adeles, and for them quantum groups should be
a necessary ingredient of the description. The following arguments support this view and make it more
concrete.
Quantum groups and discretization as two manners to describe finite measurement resolution in
TGD framework
What about quantum groups in the TGD framework? I have also proposed that q-deformations could
represent finite measurement resolution. There might be a connection between discretization and quantum
groups as different aspects of finite measurement resolution. For instance, the quantum group SU(2)q allows
only a finite number of representations (maximum value for angular momentum): this conforms with a
finite angular resolution implying a discretization in the angle variable. At the level of p-adic number fields
the discretization of phases exp(iφ) as roots Un = exp(i2π/n) of unity is unavoidable for number
theoretical reasons and makes possible discrete Fourier analysis for the algebraic extension.
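The discrete Fourier analysis made possible by the roots of unity can be spelled out concretely. The following sketch (ordinary complex arithmetic standing in for the algebraic extension) uses only the finite set of phases U_n^k and the exact orthogonality of these characters:

```python
# Discrete Fourier analysis built solely from the n:th roots of unity
# U_n^k = exp(i*2*pi*k/n): the finite set of phases that survives in an
# algebraic extension of the p-adic numbers. Orthogonality of the characters
# k -> U_n^(jk) gives an exact inversion formula on n sample points.

import cmath

def roots_of_unity(n):
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def dft(samples):
    n, U = len(samples), roots_of_unity(len(samples))
    return [sum(samples[k] * U[(-j * k) % n] for k in range(n)) for j in range(n)]

def idft(coeffs):
    n, U = len(coeffs), roots_of_unity(len(coeffs))
    return [sum(coeffs[j] * U[(j * k) % n] for j in range(n)) / n for k in range(n)]

signal = [1.0, 2.0, 0.0, -1.0, 0.5]
recovered = idft(dft(signal))
assert all(abs(a - b) < 1e-12 for a, b in zip(recovered, signal))
print("roundtrip with 5th roots of unity is exact")
```

Only the powers U_n^k ever appear, so the same formulas make sense whenever those roots exist in the number field at hand.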
There actually exists a much stronger hint that discretization and quantum groups are related to each
other. This hint leads to a concrete proposal for how discretization is described in terms of the quantum
group concept.
1. In TGD the discretization for a space-time surface is not by a discrete set of points but by a complex
of 2-D surfaces consisting of string world sheets and partonic 2-surfaces. By their 2-dimensionality these 2-surfaces make braid statistics possible. This leads to what I have called
"quantum quantum physics" as the permutation group defining the statistics is replaced with the
braid group defining its infinite covering. Already fermion statistics replaces this group with its
double covering. If braids are present there is no need for "quantum quantum". If one forgets the
humble braidy origins of the notion and begins to talk about quantum groups as an independent concept,
the attribute "quantum quantum" becomes natural. Personally I am skeptical about this approach: it
has not yielded anything hitherto.
2. Braiding means that the R-matrix characterizing what happens in the permutation of nearby
particles is no longer multiplication by +1 or -1 but a more complex operation realized as a
gauge group action (no real change due to gauge invariance). The gauge group action
could be in the electroweak gauge group, for instance.
What is so nice is that something very closely resembling the action of a quantum variant of a
gauge group (say the electroweak gauge group) emerges. If the discretization is by the orbit of a
discrete subgroup H of SL(2,C) defining the hyperbolic manifold SL(2,C)/H as the analog of a lattice
cell, the action of the discrete subgroup H leaves the "lattice cell" invariant but could induce a gauge
action on the state. The R-matrix defining the quantum group representation would define the action of
braiding as a discrete group element in H. Yang-Baxter equations would give a constraint on the
representation.
This description looks especially natural in the p-adic sectors of TGD. Discretization of both
ordinary and hyperbolic angles is unavoidable in the p-adic sectors since only the phases which are
roots of unity exist (p-adically angle is a non-existing notion): there is always a cutoff involved:
only phases Um = exp(i2π/m), m<r exist, and r should be a factor of the integer n defining the value
of Planck constant heff/h = n and the dimension of the algebraic extension of rational numbers
used. In the same manner only the hyperbolic "phases" defined by roots e^(1/mp) of e exist (the very deep number
theoretical fact is that e is an algebraic number (p:th root) p-adically since e^p is an ordinary p-adic
number!). The test for this conjecture is easy: check whether the reduction of representations of
groups yields direct sums of representations of the corresponding quantum groups.
3. In the TGD framework H3 is identified as a light-cone proper time = constant surface, which is a 3-D
hyperboloid in 4-D Minkowski space (necessary in zero energy ontology). Under some
additional conditions a discrete subgroup G of SL(2,C) defines the tessellation of H3 representing
finite measurement resolution. The tessellation consists of a discrete set of cosets Gg of G. The right
action of SL(2,C) on the cosets would define the analog of a gauge action and appear in the definition
of the R-matrix.
The original belief was that the discretization would have a continuous representation and a
powerful quantum analog of a Lie algebra would become available. It is not however clear
whether this is really possible or whether it is needed, since the R-matrix would be defined by a
map of the braid group to the subgroup of the Lorentz group or gauge group. The parameters defining
the q-deformation are determined by the algebraic extension and it is quite possible that there is
more than one parameter.
4. The relation to integrable quantum field theories in M2 is interesting. Particles are characterized
by Lorentz boosts in SO(1,1) defining their 2-momenta besides discrete quantum numbers. The
scattering reduces to a permutation of quantum numbers plus phase shifts. By the 2-particle
irreducibility defining the integrability, the scattering matrix reduces to a 2-particle S-matrix
depending on the boost parameters of the particles, and clearly generalizes the R-matrix as a
physical permutation of particles having no momentum. Could this generalize to the 4-D context?
Could one speak of the analog of this 2-particle S-matrix as having discrete Lorentz boosts h1, h2 in the
subgroup H as arguments and being representable as an element h(h1, h2) of H: is the ad hoc guess
h = h1h2^(-1) trivial?
5. The popular article says that one has q>1 in loop gravity. As found, in TGD at least two quantum
deformation parameters are needed in the case of SL(2,C). The first corresponds
to the n:th root of unity (Un = exp(i2π/n)) and the second one to the n×p:th root of e^p. One could do
without quantum groups but they would provide an elegant representation of discrete coset spaces. They
could also be a powerful tool as one considers algebraic extensions of rationals and the extensions
of p-adic numbers induced by them.

A concrete prediction even follows for the unit of quantized cosmic redshifts
if astrophysical objects form tessellations of H3 in cosmic scales. The basic unit appearing in the
exponent defining the Lorentz boost would depend on the algebraic extension involved and on the p-adic prime defining effective p-adicity, and would be e^(1/np).
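The number theoretical fact invoked above - that e^p is an ordinary p-adic number, making e a p:th root of a p-adic number - can be checked directly: the p-adic valuation of the terms p^n/n! of the exponential series grows without bound, so the partial sums are p-adically Cauchy. A small sketch (here for p = 3):

```python
# Check p-adically that the exponential series of e^p converges for odd p:
# the p-adic valuation of the n:th term p^n/n! equals n - v_p(n!)
# = (n + s_p(n))/(p-1), where s_p(n) is the digit sum of n in base p, and
# this grows without bound. Hence the partial sums form a p-adic Cauchy
# sequence and e^p is an ordinary p-adic number, so e itself is a p:th root
# of a p-adic number. (For p = 2 the terms of the series of e^2 have
# valuation s_2(n), which does not grow; there one must use e^4 instead.)

from fractions import Fraction
from math import factorial

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

p = 3
vals = [vp(Fraction(p ** n, factorial(n)), p) for n in range(1, 40)]
print(vals[:8])   # valuations of the first terms: growing, though not monotonically
```

The valuations climb roughly like n/2 for p = 3, which is exactly the statement that successive partial sums agree 3-adically to ever higher accuracy.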
For details see the chapter Was von Neumann right after all? or the article Discretization and quantum
group description as different aspects of finite measurement resolution. For a summary of earlier
postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 11:25 PM
2 Comments:
At 6:03 PM,
L. Edgar Otto said...
Matti,
In the case of the 4D polytope called the 24-cell, it is more than a spherical top (that is, a
topological ball) and is an exception in four space to all dimensions above and below it.
This trend of thinking is too narrow to reach new understanding if we at least know the
n-dimensional Euclidean geometry. This is nothing else than the generation problem where
iterative looping (not sure it applies to the deepest idea of gravity) is an isolation. But even
if this building of a real space is the case, and the case only as a hierarchy fractal, it oscillates in
duality, tripling the partial fractal.
Have we not understood this from Riemann when we cross an Euclidean boundary - or from
Feynman, as you also mentioned, when we rotate his diagrams 90 degrees to describe
particles? I would say that in a sense the hyperbolic and the elliptical are primordially
equivalent in the description and that to build such an elaborate theory as they put forth is
equally only half the picture. In the bigger picture such infinite lattices would then be
remote and not hold up, even with the known consideration of complex hypernumbers.
The age of such physics is over and in my opinion your general take on things still stands
among a handful of other future mainstream theories.
At 9:26 PM, [email protected] said...
The main new observation of the posting became clear only one day after writing it, as
often happens. The TGD inspired conjecture is that finite measurement resolution as a discretization
and its description using quantum groups are aspects of one and the same thing. The connection is
very concrete and testable: the reduction of representations of groups to representations of
discrete subgroups should give representations of the corresponding quantum groups.
* Discretization defined by a coset space of a Lie group with a discrete subgroup can be
described in terms of a gauge group action defined by the right action of the group
leaving the coset invariant.
* Braid statistics makes sense by the strong form of holography, implying that 2-surfaces
(string world sheets and partonic 2-surfaces) rather than space-time sheets are the basic
objects as far as scattering amplitudes are considered.
* The R-matrix defining braiding and the quantum group is representable as this gauge action.
This is a very beautiful result, and in the p-adic context quantum groups are unavoidable
since phases and exponents of hyperbolic angles must be discretized.
* Also the observation that e^p is a p-adic number and e thus an algebraic number (a p:th
root) is essential: it makes possible the p-adic discretization of hyperbolic "phases"
and also implies a discretization of Lorentz boosts and thus of cosmic redshifts if
astrophysical objects form hyperbolic tessellations. A testable prediction.
06/24/2015 - http://matpitka.blogspot.com/2015/06/criticality-of-higgs-is-plancklength.html#comments
Criticality of Higgs: is Planck length dogmatics physically feasible?
While studying the materials related to the Convergence conference running during this week at
Perimeter Institute, I ended up with a problem related to the fact that the mass Mh = 125.5 +/- 0.24 GeV
implies that Higgs as described by the standard model (no new physics at higher energies) is at the border
of metastability and stability - one might say near criticality (see this and this) - and I decided to look
from the TGD perspective at what is really involved.
Absolute stability would mean that the Higgs potential becomes zero at the Planck length scale, assumed
to be the scale at which the QFT description fails: this would require Mh > 129.4 GeV, somewhat larger than
the experimentally determined Higgs mass in the standard model framework. Metastability means that a
new deep minimum develops at large energies so that the standard model Higgs vacuum no longer
corresponds to a minimum energy configuration and is near to a phase transition to the vacuum
with lower vacuum energy. Strangely enough, Higgs is indeed in the metastable region in the absence of any
new physics.
Since the vacuum expectation of Higgs is large at high energies, the potential is in a reasonable
approximation of the form V = λh^4, where h is the vacuum expectation in the high energy scale considered
and λ is a dimensionless running coupling parameter. Absolute stability would mean λ = 0 at the Planck scale.
This condition cannot however hold true, as follows from the input provided by the top quark mass and the
Higgs mass, to which λ at LHC energies is highly sensitive. Rather, the value of λ at the Planck scale is
small and negative: λ(M_Pl) = -0.0129 is the estimate, to be compared with λ(Mt) = 0.12577 at the top quark
mass. This implies that the potential defining the energy density associated with the vacuum expectation
value of Higgs becomes negative at high enough energies. The energy at which λ becomes negative is in
the range 10^10-10^12 GeV, which is considerably lower than the Planck mass of about 10^19 GeV. This estimate
of course assumes that there is no new physics involved.
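The instability-scale estimates quoted above (10^10-10^12 GeV, λ(M_Pl) = -0.0129) come from two- and three-loop analyses; a one-loop sketch already shows the mechanism - the large top Yukawa term driving λ downward - although it undershoots the scale. The boundary values below are approximate MS-bar inputs at the top mass, and the whole block is an illustration, not the calculation behind the quoted numbers:

```python
# One-loop running of the Higgs quartic coupling lambda in the Standard
# Model, integrated upward from the top mass by a simple Euler scheme. The
# -6*y_t^4 term from the top quark drives lambda negative far below the
# Planck scale. Inputs are rough; the precise instability scale requires
# higher-loop running and matching.

from math import pi, log, exp

def run_sm_couplings(mu0=173.0, mu1=1e19, steps=200000):
    lam, yt = 0.1258, 0.94               # approximate values at mu0 = Mt
    g1, g2, g3 = 0.36, 0.65, 1.16        # U(1)_Y (SM norm.), SU(2)_L, SU(3)_c
    dt = log(mu1 / mu0) / steps
    k = 1.0 / (16 * pi ** 2)
    t, mu_zero = 0.0, None
    for _ in range(steps):
        b_lam = k * (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4
                     - lam * (9 * g2**2 + 3 * g1**2)
                     + 0.375 * (2 * g2**4 + (g2**2 + g1**2) ** 2))
        b_yt = k * yt * (4.5 * yt**2 - 8 * g3**2 - 2.25 * g2**2
                         - (17 / 12) * g1**2)
        b_g1, b_g2, b_g3 = (k * (41 / 6) * g1**3, -k * (19 / 6) * g2**3,
                            -k * 7 * g3**3)
        lam, yt = lam + b_lam * dt, yt + b_yt * dt
        g1, g2, g3 = g1 + b_g1 * dt, g2 + b_g2 * dt, g3 + b_g3 * dt
        t += dt
        if mu_zero is None and lam < 0:
            mu_zero = mu0 * exp(t)       # scale where lambda turns negative
    return lam, mu_zero

lam_uv, mu_zero = run_sm_couplings()
print(f"lambda(1e19 GeV) ~ {lam_uv:.3f}; lambda < 0 above ~{mu_zero:.1e} GeV")
```

At one loop the zero of λ comes out well below the Planck mass, reproducing qualitatively the near-criticality discussed in the text.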
The plane defined by the top and Higgs masses can be decomposed into regions (see figure 5 of this),
where the perturbative approach fails (λ too large), there is only a single minimum of the Higgs potential
(stability), there is no minimum of the Higgs potential (λ<0, instability), or a new minimum with smaller
energy is present (metastability). This metastability can lead to a transition to a lower energy state and
could be relevant in early cosmology and also in future cosmology.
The value of λ turns out to be rather small at the Planck mass. λ however vanishes and changes sign in a
much lower energy range 10^10-10^12 GeV. Is this a signal that something interesting takes place
considerably below the Planck scale? Could Planck length dogmatics be wrong? Is the criticality only an artefact
of standard model physics and as such a signal for new physics?
How could this relate to TGD? Planck length is one of the unchallenged notions of modern physics,
but in TGD p-adic mass calculations force one to challenge this dogma. Planck length is replaced with the CP2
length scale, which is roughly 10^4 times longer than the Planck length and determined by the condition that
the electron corresponds to the largest Mersenne prime (M127) which does not define a completely super-astrophysical p-adic length scale, and by the condition that the electron mass comes out correctly. Also
many other elementary particles correspond to Mersenne primes. In biologically relevant scales there are
several (4) Gaussian Mersennes.
In the CP2 length scale the QFT approximation to quantum TGD must fail since the replacement of
the many-sheeted space-time with GRT space-time with Minkowskian signature of the metric fails, and
space-time regions with Euclidian signature of the induced metric defining the lines of generalized
Feynman diagrams cannot anymore be approximated as lines of ordinary Feynman diagrams or twistor
diagrams. From the electron mass formula and the electron mass of .5 MeV one deduces that the CP2 mass scale is
2.53×10^15 GeV - roughly three orders of magnitude above the 10^12 GeV obtained if no new
physics emerges above the TeV scale.
TGD "almost-predicts" several copies of hadron physics corresponding to Mersenne primes Mn,
n = 89, 61, 31, ..., and these copies of hadron physics are expected to affect the evolution of λ and maybe
raise the energy 10^12 GeV to about 10^15 GeV. For M31 the electronic p-adic mass scale happens to be
2.2×10^10 GeV. The decoupling of Higgs by the vanishing of λ could be natural at the CP2 scale since the
very notion of a Higgs vacuum expectation makes sense only at the QFT limit, becoming non-sensical in the CP2
scale. In fact, the description of physics in terms of elementary particles belonging to three generations
might fail above this scale. Standard Model quantum numbers still make sense but the notion of family
replication becomes questionable, since in the TGD framework the families correspond to different boundary
topologies of wormhole throats and the relevant physics above this mass scale is inside the wormhole
contacts: there would be only a single fermion generation below the CP2 scale.
This raises questions. Could one interpret the strange criticality of the Higgs as a signal of the fact
that the CP2 mass scale is the fundamental mass scale and Newton's constant might be only a macroscopic
parameter? This would add one more nail to the coffin of superstring theory and of all theories relying on
Planck length scale dogmatics. One can also wonder whether the criticality might somehow relate to the
quantum criticality of the TGD Universe. My highly non-educated guess is that it is only an artefact of the
standard model description. Note however that below the CP2 scale the transition from the phase dominated
by cosmic strings to a phase in which space-time sheets emerge, leading to the radiation dominated
cosmology, would take place: this period would be the TGD counterpart of the inflationary period and
would also involve a rapid expansion.
For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 12:49 AM
06/23/2015 - http://matpitka.blogspot.com/2015/06/strings-2015-and-convergence-twovery.html#comments
Strings 2015 and Convergence: two very different conferences
There are two very different conferences going on. The first one is Strings 2015, commented on by Peter
Woit. Superstrings live nowadays mostly in grant applications: 10 per cent of the titles of lectures mention
superstrings and the rest are about quantum field theories in various dimensions, about applications of
AdS/CFT, and about the latest fashion of taking seriously wormhole contacts connecting blackholes - a
GRT adaptation, introduced by Maldacena and Susskind, of magnetic flux tubes accompanied by strings
connecting partonic 2-surfaces. Susskind also introduced p-adic numbers a couple of years ago.
Also holography is due to Susskind but was introduced in TGD much earlier as an implication of general
coordinate invariance in sub-manifold gravity.
Ashoke Sen is talking about how to save humankind from a catastrophe caused by an inflationary phase
transition destroying ordinary matter: anthropically estimated to occur about once per lifetime of the
Universe from the fact that it has not occurred. This attempt to save humankind has mostly been taken
as a joke but Sen is right in that the worry is real if the inflationary multiverse scenario and GRT are right.
The logic is correct but the premises might be wrong;-): this can be said quite generally about superstring
theories.
This is definitely an application of superstring theory. But I would be delighted by more concrete
applications: say deducing the standard model from superstring theory and maybe saying something
about quantum biology and even consciousness, as one might expect a theory of everything to do. Unfortunately
the only game in town cannot do this. The spirit in Strings 2015 does not seem to be very high. Even
Lubos did not bother to comment on the talks, which he said are "interesting", and asked whether someone
in the conference might do this. It is clear that Strings 2015 is where big old guys meet and refresh
memories. It is not for young revolutionaries.
Another conference - Convergence - is held in Perimeter Institute at the same time - perhaps not an
accident: see the comments of Peter Woit. Lubos has not commented on this conference (I expected the
usual rant about intellectually inferior imbeciles). The spirit is totally different. It is admitted that
theoretical physics has been on the wrong track for 4 decades and people are now actively searching for the way
out of the dead end. People are talking about a revolution taking place in the near future! Someone mentioned
even consciousness. There are a lot of young participants present. I am highly optimistic. Things might begin
to move again.
posted by Matti Pitkanen @ 8:44 PM
06/18/2015 - http://matpitka.blogspot.com/2015/06/about-quantum-cognition.html#comments
About quantum cognition
The talks in the conference Towards a Science of Consciousness 2015 held in Helsinki produced
several pleasant surprises, which stimulated more precise views about the TGD-inspired theory of
consciousness. Some of the pleasant surprises were related to quantum cognition. It is a pity that I lost
most of the opening talk of Harald Atmanspacher (see this).
The general idea is to take the formalism of quantum theory and look at whether it
might allow one to construct testable formal models of cognition. Quantum superposition, entanglement, and
non-commutativity are the most obvious notions to be considered. The problems related to quantum
measurement are however present also now and relate to the basic questions about consciousness.
1. For instance, non-commutativity of observables could relate to order effects in cognitive
measurements. Also the failure of classical probability, to which Bell inequalities relate, could
have a testable quantum cognitive counterpart. This requires that one should be able to speak about
the analogs of quantization axes for spin in cognition. Representation of Boolean logic statements
as tensor products of qubits would resolve the problem, and in the TGD framework the fermionic Fock
state basis defines a Boolean algebra: fermions would be interpreted as quantum correlates of
Boolean cognition.
2. The idea about cognitive entanglement described by a density matrix was considered, and the
change of the state basis was suggested to have an interpretation as a change of perspective. Here I
was a little bit puzzled since the speakers seemed to assume that the density matrix rather than only
its eigenvalues has an independent meaning. This probably reflects my own assumption that a
density matrix is always assignable to a system and its complement regarded as subsystems of a
larger system in a pure state. The states are purifiable - as one says. This holds true in TGD but not
in the general case.
3. The possibility was considered that the quantum approach might allow one to describe this breaking of
the uniqueness of meaning in terms of entanglement - or more precisely in terms of a density matrix, which
in the TGD framework can be diagonalized and in cognitive state function reduction reduces in the generic
case to a 1-D density matrix for one of the meanings. The situation would resemble that in hemispheric rivalry
or for illusions in which two percepts appear as alternatives. One must of course be very cautious
with this kind of models: spoken and written language do not obey strict rules. I must
however admit that I failed to get the gist of the arguments completely.
One particular application discussed in the conference was to a problem of linguistics.
1. One builds composite words from simpler ones. The proposed rule in classical linguistics is that
the composites are describable as unique functions of the building bricks. The building brick
words can however have several meanings, and the meaning is fixed only after one tells to which
category the concept to which the word refers belongs. Therefore also the composite word can
have several meanings.
2. If the words have several meanings, they belong to at least n=2 categories. The category
associated with the word is like a spin with n=2 values and one can formally treat the words as spins, kind of
cognitive qubits. The category-word pairs - cognitive spins - serve as building bricks for
composite words analogous to two-spin systems.
3. A possible connection with Bell's inequalities emerges from the idea that if a word can belong to
two categories it can be regarded as analogous to a spin with two values. If superpositions of the same
word with different meanings make sense, the analogs for the choice of spin quantization axis
and the measurement of spin in a particular quantization direction make sense. A weaker condition is
that the superpositions make sense only for the representations of the words. In the TGD framework
the representations would be in terms of fermionic Fock states defining a quantum Boolean
algebra.
1. Consider first a situation in which one has two spin measurement apparatuses A and B with a
given spin quantization axis, and A' and B' with a different spin quantization axis. One can
construct correlation functions for the products of spins s1 and s2 defined as outcomes of
measurements A and A', and s3 and s4 defined as outcomes of B and B'. One obtains the pairs
13, 14, 23, 24.
2. Bell inequalities give a criterion for the possibility to model the system classically. One
begins from 4 CHSH inequalities, which follow as averages of inequalities always holding for
individual measurement outcomes (example: -2 ≤ s1s3 + s1s4 + s2s3 - s2s4 ≤ 2) by assuming
the classical probability concept, implying that the probability distributions for sisj are simply
marginal distributions of a probability distribution P(s1,s2,s3,s4). CHSH inequalities are
necessary conditions for classical behavior. Fine's theorem states that these conditions
are also sufficient. Bell inequalities follow from these and can be broken for quantum
probabilities.
3. Does this make sense in the case of cognitive spins? Are superpositions of meanings
really possible? Are conscious meanings really analogous to Schrödinger cats? Or should
one distinguish between a meaning and a cognitive representation? Experienced meanings
are conscious experiences, and consciousness identified as state function reduction makes
the world look classical in standard quantum measurement theory. I allow the reader to
decide but represent the TGD view below.
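The CHSH construction of point 2 and its quantum violation can be made concrete with ordinary spins; whether "cognitive spins" admit the same treatment is exactly the question left open above. A minimal sketch of the quantum side, using the spin singlet for which the correlation is E(a,b) = -cos(a-b):

```python
# CHSH correlations for a spin singlet: classical (local hidden variable)
# models obey |S| <= 2, while quantum mechanics reaches 2*sqrt(2) (Tsirelson's
# bound) for suitable measurement axes. Pure-Python 2-qubit linear algebra;
# spin measured along angle theta in the x-z plane is
# A(theta) = cos(theta)*sigma_z + sin(theta)*sigma_x.

from math import cos, sin, pi, sqrt

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def spin_op(theta):
    return [[cos(theta), sin(theta)], [sin(theta), -cos(theta)]]

def correlation(ta, tb):
    """<psi| A(ta) x B(tb) |psi> for the singlet (|01> - |10>)/sqrt(2)."""
    psi = [0.0, 1 / sqrt(2), -1 / sqrt(2), 0.0]
    M = kron(spin_op(ta), spin_op(tb))
    return sum(psi[i] * M[i][j] * psi[j] for i in range(4) for j in range(4))

a, a2, b, b2 = 0.0, pi / 2, pi / 4, -pi / 4
S = (correlation(a, b) + correlation(a, b2)
     + correlation(a2, b) - correlation(a2, b2))
print(abs(S))        # exceeds the classical bound 2
```

With these angles |S| equals 2*sqrt(2), so the classical bound is violated; a quantum cognitive model of word meanings would have to produce an analogous violation to be distinguishable from a classical one.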
What about quantum cognition in the TGD framework? Does the notion of cognitive spin make sense?
Do the notions of cognitive entanglement and cognitive measurement have a sensible meaning? Does the
superposition of meanings of words make sense, or does it make sense only for representations?
1. In TGD, quantum measurement is a measurement of the density matrix defining the universal
observable, leading to its eigenstate (or eigenspace when NE is present in the final state, meaning
that degenerate eigenvalues of the density matrix are allowed). In the generic case the state basis
is unique as the eigenstate basis of the density matrix, and cognitive measurement leads to a classical
state.
If the density matrix has degenerate eigenvalues, the situation changes since state function
reduction can take place to a sub-space instead of a ray of the state space. In this sub-space there is no
preferred basis. Maybe "enlightened" states of consciousness could be identified as this kind of
states carrying negentropy (the number theoretic Shannon entropy is negative for them, and these
states are fundamental for the TGD inspired theory of consciousness). Note that p-adic negentropy is
well-defined also for rational (or even algebraic) entanglement probabilities, but the condition
that quantum measurement leads to an eigenstate of the density matrix allows only a projector as a
density matrix for the outcome of the state function reduction. In any case, in the TGD Universe the
outcome of a quantum measurement could be an enlightened Schrödinger cat which is as much dead
as alive.

Entangled states could represent concepts or rules as superpositions of their instances
consisting of pairs of states. For NE generated in state function reduction the density matrix would
be a projector so that these pairs would appear with identical probabilities. The entanglement
matrix would be unitary. This is interesting since unitary entanglement appears also in quantum
computation. One can also consider the representation of associations in terms of entanglement - possibly a negentropic one.
2. The mathematician inside me is impatiently raising his hand: he clearly wants to add something. The
restriction to a particular extension of rationals - a central piece of the number theoretical vision
about quantum TGD - implies that the density matrix need not allow diagonalization. In the eigenstate
basis one would have an algebraic extension defined by the characteristic polynomial of the
density matrix, and its roots define the needed extension, which could quite well be larger than the
original extension. This would make the state stable against state function reduction.

If this entanglement is algebraic, one can assign to it a negative number theoretic entropy.
This negentropic entanglement is stable under NMP unless the algebraic extension associated
with the parameters characterizing the string world sheets and partonic 2-surfaces
defining the space-time genes is allowed to become larger in a state function reduction to the
opposite boundary of CD generating a re-incarnated self and producing eigenstates involving
algebraic numbers in a larger algebraic extension of rationals. Could this kind of extension be a
Eureka! experience meaning a step forward in cognitive evolution?

If this picture makes sense, one would have both the unitary NE with a density matrix which
is a projector, and the algebraic NE for which the eigenvalues and eigenstates of the density
matrix lie outside the algebraic extension associated with the space-time genes. Note that the unitary
entanglement is "meditative" in the sense that any state basis is possible and therefore in this
state of consciousness it is not possible to make distinctions. This strongly brings to mind the koans
of Zen Buddhism. The more general algebraic entanglement could represent abstractions as rules
in which the state pairs in the superposition represent the various instances of the rule.
3. Can one really have superposition of meanings in TGD framework where Boolean cognitive spin
is represented as fermion number (1,0), spin, or weak isospin in TGD, and fermion Fock state
basis defines quantum Boolean algebra.
In the case of fermion number, the superselection rule demanding that state is eigenstate of
fermion number implies that cognitive spin has unique quantization axis.
For weak isospin, symmetry breaking occurs, and superpositions of states with different em charges (weak isospins) are not possible. Remarkably, the condition that spinor modes have a well-defined em charge implies in the generic case their localization to string world sheets at which the classical W fields carrying em charge vanish. This is essential also for the strong form of holography, and one can say that cognitive representations are 2-dimensional and cognition resides at string world sheets and their intersections with partonic 2-surfaces. Would electroweak quantum cognitive spin thus have a unique quantization axis?
But what about ordinary spin? Does the presence of the Kähler magnetic field at flux tubes select a unique quantization direction for cognitive spin as ordinary spin, so that it is not possible to experience a superposition of meanings? Or could the rotational invariance of meaning mean SU(2) gauge invariance, allowing one to rotate a given spin to a fixed direction by performing an SU(2) gauge transformation affecting the gauge potential?
4. A rather concrete linguistic analogy from TGD inspired biology relates to the representation of DNA, mRNA, amino-acids, and even tRNA in terms of dark proton triplets. One can decompose ordinary genetic codons to letters, but dark genetic codons are represented by entangled states of 3 linearly ordered quarks and do not allow reduction to a sequence of letters. It is interesting that some Eastern written languages have words as basic symbols, whereas Western written languages tend to have as basic units letters having no meaning as such. Could Eastern cognition and languages be more holistic in this rather concrete sense?
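The identification of a quantum Boolean algebra with the fermionic Fock state basis (point 3 above) can be made concrete. The sketch below is my own minimal illustration, not code from the text: occupation-number tuples of n fermionic modes are in bijection with the 2^n elements of an n-bit Boolean algebra, with Boolean operations acting mode-wise.

```python
from itertools import product

# Fock basis of n fermionic modes: occupation numbers in {0, 1} per mode.
n = 3
fock_basis = list(product((0, 1), repeat=n))
assert len(fock_basis) == 2 ** n  # 8 basis states for 3 modes

# Boolean operations act mode-wise on the occupation numbers.
def AND(a, b):
    return tuple(x & y for x, y in zip(a, b))

def OR(a, b):
    return tuple(x | y for x, y in zip(a, b))

def NOT(a):
    return tuple(1 - x for x in a)

a, b = (1, 0, 1), (1, 1, 0)
print(AND(a, b), OR(a, b), NOT(a))  # (1, 0, 0) (1, 1, 1) (0, 1, 0)
```

The superselection rule on fermion number would then single out which superpositions of these basis states are physically allowed.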
For details see the chapter p-Adic physics as physics of cognition and intention of "TGD-Inspired
Theory of Consciousness" or the article Impressions created by TSC 2015 conference. For a summary
of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:09 PM
06/18/2015 - http://matpitka.blogspot.com/2015/06/aromatic-rings-as-lowest-levelin.html#comments
Aromatic rings as the lowest level in the molecular self hierarchy?
I had the opportunity to participate in the conference Towards a Science of Consciousness 2015 held in Helsinki June 8-13. Of special interest from the TGD point of view were the talks of Hameroff and Bandyopadhyay, who talked about aromatic rings (ARs) (see this).
I have also wondered whether ARs might play a key role, with motivations coming from several observations.
1. In photosynthesis, ARs are a central element in the energy harvesting system, and it is now known that quantum effects in longer length and time scales than expected are involved. This suggests that the ARs involved fuse to form a larger quantum system connected by flux tubes, and that electron pair currents flow along the flux tubes as supra currents.
DNA codons involve ARs with delocalized pi electrons; neurotransmitters and psychoactive drugs involve them; and the 4 amino-acids phe, trp, tyr, and his involve them. These amino-acids are all hydrophobic and tend to be associated with hydrophobic pockets. Phe and trp appear in the hydrophobic pockets of microtubules.
2. The notion of self hierarchy suggests that at the molecular level ARs represent the basic selves. ARs would integrate to larger conscious entities by a reconnection of the flux tubes of their magnetic bodies (directing attention to each other!). One would also obtain linear structures, such as DNA sequences, in this manner. In proteins, the four aromatic amino-acids would represent subselves, possibly connected by flux tubes. In this manner one would obtain a concrete molecular realization of the self hierarchy, allowing a precise identification of the basic conscious entities as aromatic rings lurking in hydrophobic pockets.
3. A given AR would be accompanied by a magnetic flux tube, and the current around it would generate a magnetic field. The direction of the current would represent a bit (or perhaps even a qubit). In the case of microtubules, the phe-trp dichotomy and the direction of the current would give rise to 4 states identifiable as a representation for the four genetic letters A, T, C, G. The current pathways proposed by Hameroff et al, consisting of sequences of current rings (see this), could define the counterparts of DNA sequences at the microtubule level.
For type B microtubules, the 13 tubulins corresponding to a single 2π rotation would represent the basic unit, followed by a gap. This unit could represent a pair of helical strands formed by flux tubes and ARs along them, completely analogous to a DNA double strand. This longitudinal strand would be formed by a reconnection of the magnetic flux tubes of the magnetic fields of ARs, and the reconnection occurring in two different manners at each step could give rise to braiding.
4. The magnetic flux tubes associated with the magnetic fields of nearby aromatic rings could suffer reconnection, and in this manner a longitudinal flux tube pair carrying supra currents could be generated by the mechanism of bio-superconductivity discussed earlier (see this), working also for ordinary high Tc superconductivity. The interaction of the microtubule with frequencies in the kHz, GHz, and THz scales would induce longitudinal superconductivity as a transition from phase B to phase A, meaning the generation of long superconducting wires.
This view suggests that DNA is also a superconductor in the longitudinal direction and that an oscillating AC voltage induces the superconductivity in this case as well. Bandyopadhyay indeed observed the 8 AC resonance frequencies first for DNA, with frequency scales of GHz, THz, and PHz, which suggests that dark photon signals or AC voltages at these frequencies induce DNA superconductivity. According to the model of DNA as a topological quantum computer, DNA is a superconductor also in the transversal degrees of freedom, meaning that there are flux tubes connecting DNA to a lipid layer of the nuclear or cell membrane (see this and this).
5. Interestingly, the model of Hameroff et al for the helical pathway (see this) assumes that there are three aromatic rings per d=1 nm length along the microtubule. This number is the same as the number of DNA codons per unit length. It is however mentioned that the distance between the aromatic rings trp and phe in MT is about d=2 nm. Does this refer to an average distance, or is d=1 nm just an assumption? In the TGD framework, the distance would scale as heff, so that also a scaling of the DNA pathway by a factor 6 could be considered. In this case a single tubulin could correspond to a genetic codon.
If d=1 nm is correct, these helical pathways might give rise to a representation of memetic codons representable as sequences of 21 genetic codons, meaning that there are 64^21 = 2^126 different memetic codons (see this). DNA would represent the lowest level of the hierarchy of consciousness and microtubules the next level. Note that each analog of a DNA sequence corresponds to a different current pathway.
6. What is especially interesting is that a codon and its conjugate always have altogether 3 aromatic cycles. Also phe and trp appearing in MTs have this property, as do tyr and his. Could these 3 cycles give rise to a 3-braid? The braid group B3 is a covering of the permutation group of 3 objects. Since B2 is the Abelian group of integers, the 3-braid is the smallest braid that can give rise to interesting topological quantum computation.
B3 is also the knot group of the trefoil knot and the universal central extension of the modular group PSL(2,Z), a discrete subgroup of the Lorentz group playing a key role in TGD since it defines part of the discrete moduli space for the CDs with the other boundary fixed (see this). Quite generally, B(n) is the mapping class group of a disk with n punctures, fundamental both in string models and in TGD, where the disk is replaced with the partonic 2-surface.
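The group-theoretic facts in point 6 can be checked concretely. The following sketch uses the standard 2x2 integer matrices representing the B3 generators in SL(2,Z) (a textbook choice, not taken from the text): it verifies the defining braid relation and that (σ1σ2)^3 = -I generates the center killed in the quotient PSL(2,Z).

```python
# 2x2 integer matrix product, kept exact with tuples.
def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# Standard images of the B3 generators sigma_1, sigma_2 in SL(2,Z).
s1 = ((1, 1), (0, 1))
s2 = ((1, 0), (-1, 1))

# The braid relation defining B3: s1 s2 s1 = s2 s1 s2.
assert mul(mul(s1, s2), s1) == mul(mul(s2, s1), s2)

# (s1 s2)^3 = -I: a central element, killed in the quotient PSL(2,Z),
# exhibiting B3 as a central extension of the modular group.
c = mul(mul(s1, s2), mul(mul(s1, s2), mul(s1, s2)))
print(c)  # ((-1, 0), (0, -1))
```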
For details see the chapter Quantum model for nerve pulse of "TGD and EEG" or the article
Impressions created by TSC 2015 conference. For a summary of earlier postings see Links to the latest
progress in TGD.
posted by Matti Pitkanen @ 10:06 PM
06/18/2015 - http://matpitka.blogspot.com/2015/06/two-kinds-of-negentropicentanglements.html#comments
Two kinds of negentropic entanglements
The most general view is that negentropic entanglement (NE) corresponds to algebraic entanglement with entanglement coefficients in some algebraic extension of rationals. The condition that the outcome of state function reduction is an eigenspace of the density matrix fixes the density matrix of the final state to be a projector with identical eigenvalues defining the probabilities of the various states.
But what if the eigenvalues, and thus also the eigenvectors, of the density matrix, which are algebraic numbers, do not belong to the algebraic extensions involved? Can state function reduction occur at all, so that this kind of NE would be stable?
The following argument suggests that also a more general algebraic entanglement could be reasonably stable against NMP, namely the entanglement for which the eigenvalues and eigenvectors of the density matrix are outside the algebraic extension associated with the parameters characterizing string world sheets and partonic 2-surfaces as space-time genes.
The restriction to a particular extension of rationals - a central piece of the number theoretical vision
about quantum TGD - implies that the density matrix need not allow diagonalization. In the eigenstate basis, one would have an algebraic extension defined by the characteristic polynomial of the density matrix, and its roots define the needed extension, which could well be larger than the original extension. This would make the state stable against state function reduction.
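A minimal numerical illustration of eigenvalues escaping the original extension (the matrix entries below are my own choice, not from the text): a density matrix with rational entries whose characteristic polynomial has irrational roots, so that diagonalization requires extending Q by sqrt(5).

```python
from fractions import Fraction
from math import isqrt

# A 2x2 symmetric density matrix with entries in Q.
a, b, d = Fraction(2, 3), Fraction(1, 3), Fraction(1, 3)
assert a + d == 1  # unit trace

# Characteristic polynomial x^2 - (a+d) x + (a d - b^2); the eigenvalues
# are rational iff the discriminant (a-d)^2 + 4 b^2 is a square in Q.
disc = (a - d) ** 2 + 4 * b ** 2
print(disc)  # 5/9

def is_rational_square(q):
    n, m = q.numerator, q.denominator
    return isqrt(n) ** 2 == n and isqrt(m) ** 2 == m

# False: the eigenvalues (1 +- sqrt(5)/3)/2 lie in Q(sqrt(5)), not in Q.
print(is_rational_square(disc))
```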
If this entanglement is algebraic, one can assign to it a negative number theoretic entropy. This negentropic entanglement is stable against NMP unless the algebraic extension associated with the parameters characterizing the string world sheets and partonic surfaces defining the space-time genes is allowed to become larger in a state function reduction to the opposite boundary of the CD, generating a re-incarnated self and producing eigenstates involving algebraic numbers in a larger algebraic extension of rationals. Could this kind of extension be a Eureka! experience meaning a step forward in cognitive evolution?
If this picture makes sense, one would have both the unitary NE, with a density matrix which is a projector, and the more general algebraic NE, for which the eigenvalues and eigenstates of the density matrix lie outside the algebraic extension associated with the space-time genes. Note that unitary entanglement is "meditative" in the sense that any state basis is possible, and therefore in this state of consciousness it is not possible to make distinctions. This strongly brings to mind the koans of Zen Buddhism and the enlightenment experience. The more general irreducible algebraic entanglement could represent abstractions as rules in which the state pairs in the superposition represent the various instances of the rule.
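The sign flip that makes entanglement "negentropic" can be made concrete with the number-theoretic entropy S_p = -Σ P_k log |P_k|_p, where |.|_p is the p-adic norm. The formula follows my reading of the TGD literature, and the example probabilities are my own; treat the sketch as illustrative.

```python
from fractions import Fraction
from math import log

def p_adic_norm(q, p):
    """|q|_p = p^(-k), where k is the power of the prime p dividing q."""
    n, m, k = q.numerator, q.denominator, 0
    while n % p == 0:
        n //= p; k += 1
    while m % p == 0:
        m //= p; k -= 1
    return float(p) ** (-k)

def S_p(probs, p):
    """Number-theoretic entropy: Shannon formula with |P_k|_p in the log."""
    return -sum(float(q) * log(p_adic_norm(q, p)) for q in probs)

probs = [Fraction(1, 4)] * 4  # maximally entangled pair of qubits
print(S_p(probs, 2))          # -log 4 < 0: negative entropy, i.e. negentropy
print(-sum(float(q) * log(float(q)) for q in probs))  # Shannon entropy: +log 4
```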
For details see the chapter Negentropy Maximization Principle of "TGD-Inspired Theory of
Consciousness" or the article Impressions created by TSC2015 conference. For a summary of earlier
postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:04 PM
7 Comments:
At 8:45 AM,
Ulla said...
Negentropy is formed from a decrease in energy, and as energy is a graded scale, formed
of 'bits' also its counterpart, or negentropy, should express the same, but opposite to
entropy. Hence negentropy is describing a degraded energetic state, what is a quantum
state, also it is fractal, so a comparison to FQHE is in order.
Entanglement can also be a result of superposition, added energy, and also this state can be
graded, fractal, but upwards to higher energy level.
The electron story of this (excitation - jump) is maybe just that - a story, a fancy tale?
What is the real electron?
At 7:57 PM, [email protected] said...
In the case of thermodynamics, one can express the differential of energy in terms of the differentials of entropy, pressure, etc. Entropy flow typically involves a flow of energy. In the case of entanglement entropy, which becomes negative for negentropic entanglement (the number theoretic definition is essential), the situation is not so clear. One can certainly imagine a generalisation of the thermodynamic formulas.
Entanglement entropy and thermodynamical entropy are closely related but not identical in TGD. There have been rumours that gravity has killed Schrödinger's cat (decoherence would happen in an isolated system and be due to gravity). Bee defended gravity in her last posting and clearly stated that gravity is innocent;-).
In her defense she discussed how she sees the generation of decoherence: http://backreaction.blogspot.fi/2015/06/no-gravity-hasnt-killed-schrodingers-cat.html. Bee identified decoherence as an increase of entanglement entropy reflecting the generation of entanglement.
I would in turn identify decoherence as an increase of thermodynamical entropy for an ensemble, and it would be a result of state function reduction. I tend to forget that the TGD view about state function reduction is much more refined than the ordinary one. For instance, quantum measurement as a measurement of the density matrix is assumed to be a completely universal process and occurs for that subsystem-complement pair of an isolated system for which the negentropy gain is largest (it can of course happen that nothing occurs, as during the sequence of repeated state function reductions defining self). This is definitely new as compared to standard quantum measurement theory. Also the contribution of zero energy ontology is a totally new element.
The question about the real electron looks to me unnecessary. If one accepts the identification of a quantum state - such as an electron - with its mathematical description, the question disappears. This makes sense if conscious experiences are assigned with state function reductions and thus reside in the replacement of the mathematical object representing the quantum state with a new one. Between two worlds rather than in a world. I have found that this view about the ontology of consciousness is practically non-communicable;-).
At 10:35 AM,
Ulla said...
"(it can of course happen that nothing occurs, as during the sequence of repeated state function reductions defining self)". This must in my opinion be a part of a protected state, having very few state function reductions. This means that the symmetry breakings are not happening at random, but are purely directed and also in a way hierarchical.
Regarding the electron, say the Tesla magnetic induction, how does that electron form? Is it not that the electron is a stable particle? I know magnetic and electric flow interact, forming 'magnetic' and 'electric' particles, temporal ones. Is this interaction the basis for the time-scale? This interaction basically comes from state function reduction too.
At 7:52 PM,
[email protected] said...
In ZEO, state function reductions by definition leave the state at the second "passive" boundary invariant. This is the TGD counterpart of the Zeno effect. The original hypothesis was that during its lifetime self manages to avoid entanglement with the environment, but I could not justify it in positive energy ontology. This is also the basic problem of quantum computer scientists who work in positive energy ontology. In the TGD framework, the running of a quantum computer program is the life cycle of a conscious entity.
At 8:06 PM, [email protected] said...
The electron is a stable particle observationally in the sense that we do not observe decays of electrons! This is true also in ZEO. The CD represents the space-time region about which the self has experiences, and the electron cannot say anything about what happens outside its CD. Does the space-time surface representing the electron continue outside the CD or not? By the way, to the electron one can assign a CD with a minimum size of the order of Earth size - 0.1 light seconds.
A self with a bigger CD containing the electron's CD would experience the electron sub-CD as a subself mental image representing an observation of the electron. The bigger self cannot say whether the electron's space-time surface continues beyond the boundaries of its CD.
The basic question still not answered conclusively is whether the CD represents the 4-D "perceptive field" of the self - say the electron - or whether it represents the entire 4-D existence, for which the boundaries of the CD mean the beginning and the end of the world. Or, saying the same differently: do only the events defined by CDs exist, or are the interiors of CDs parts of a bigger structure, so that space-time surfaces continue above/below the upper/lower boundaries of CDs? The latter option would conform with the Western view, the first option with the Eastern view. I must admit that I have not been able to decide;-).
At 11:43 AM,
Ulla said...
Read here about protected states, one among many, many papers:
http://arxiv.org/pdf/1110.4469.pdf
this is much more rational?
At 8:24 PM, [email protected] said...
I am talking here about a theory of quantum measurement generalised to a theory of consciousness, not about some specific model suggesting a mechanism of high temperature superconductivity. Negentropic entanglement is a completely general notion and key to the understanding of basic aspects of consciousness.
The only association to TGD is via the words "room-temperature superconductivity". For this, TGD provides a general new-physics mechanism applying in living matter and also explaining high Tc superconductivity. This will be taken seriously only after the notion of the magnetic flux tube is accepted: the empirical evidence for it is emerging all the time, but it will take a few years before physicists are mature enough to realize how elegantly flux tubes explain high Tc superconductivity, for which antiferromagnetism is an essential ingredient.
The so-called protected states discussed in the article relate to an attempt to obtain room-temperature superconductivity without introducing new physics. Quite generally, these attempts have failed (and very probably continue to fail), and high Tc superconductivity still remains a mystery. The protected states are assigned with a very specific condensed matter system. Also pseudo-Majorana fermions are introduced: in Nature there are no Majorana fermions, although media hypists make different claims;-).
06/16/2015 - http://matpitka.blogspot.com/2015/06/towards-theory-ofconsciousness.html#comments
Impressions created by TSC 2015 conference
"Towards a Science of Consciousness" conference (TSC 2015) was held in Helsinki June 8-13. The conference was extremely interesting with a lot of excellent lectures, and it is clear that a breakthrough is taking place and quantum theories of consciousness are becoming a respected field of science.
In the article below, my impressions about the conference are described. They reflect only my limited scope of interest, and not even all of this, since the number of presentations was so large that it was possible to listen to only a minor fraction of them.
From my point of view, the most interesting presentations were related to the experimental findings about microtubules and also DNA. These findings allow a much more detailed view about the biomolecular level of the self hierarchy, which has at its lowest level molecules with aromatic cycles carrying supra currents of pi electron pairs that create the magnetic bodies of these basic selves. DNA, microtubules, and chlorophyll represent the basic biomolecules containing these aromatic cycles. Also neuro-active compounds (neurotransmitters, hallucinogens, …) involve them. The amino-acids phe, trp, tyr, and his would represent subselves (mental images) of proteins in this picture, so that the picture about the molecular self hierarchy is becoming very concrete.
In an earlier posting, I already considered a TGD-based model for the action of anesthetics. My impressions about TSC 2015 are described in the article Impressions created by TSC 2015 conference.
posted by Matti Pitkanen @ 3:17 AM
10 Comments:
At 4:39 AM,
Plato Hagel said...
I was wondering if there were any abstracts or considerations on Quantum Cognition?
At 4:47 AM,
[email protected] said...
There were. I commented on the talks about quantum cognition in the article: there you can also find a link to the abstracts about quantum cognition. The program with links to abstracts is at https://tsc2015.sched.org .
At 9:28 AM,
Plato Hagel said...
Thanks Matti,
Quantum Cognition and Integrative Models.pdf
At 2:19 PM, streamfortyseven said...
you might find this to be of interest - http://williambrownscienceoflife.com/wpcontent/uploads/2014/07/Fractal-geometry-enables-information-transmission-throughresonance1.pdf It's a paper by the Bandyopadhyay group.
At 2:29 PM,
streamfortyseven said...
This is where Bandyopadhyay got his scaling factors, which might map onto TGD - there might be a homeomorphism here: http://hiup.org/wpcontent/uploads/2013/05/scalinglaw_paper.pdf
At 8:44 PM, [email protected] said...
The abstract in the link strongly brings to mind my proposal that magnetic flux tubes form a kind of 3-D coordinate grid serving as a kind of sensory backbone for living systems, making it possible to have consciousness about the existence of the body. There exists evidence for this kind of coordinate grids. The flux tubes would involve, at the molecular level, aromatic cycles as the basic molecular conscious entities.
DNA would have this kind of "coordinate line" along it as a superconducting flux tube pair (always pairs, to obtain high temperature superconductivity by the TGD-inspired mechanism). The superconductivity would be induced by dark photon irradiation or AC fields at resonance frequencies, as Bandyopadhyay found for DNA already before the similar discovery for microtubules.
The DNA as topological quantum computer model predicted flux tubes connecting DNA to lipid layers: these would define the other two orthogonal coordinate axes.
A further interesting finding is that conjugated DNA nucleotides have 1 and 2 cycles respectively. In the double strand one has 3 cycles, and the mechanism for superconductivity suggests strongly that a 3-braid results from the reconnection of the flux tubes. A 3-braid defines the minimal system able to topologically quantum compute, and the corresponding braid group is the minimal interesting one.
Also in the case of MTs, trp and phe define a similar pair of cycles, and the analog of the double strand could define a 3-braid. Also tyr and his define a conjugate pair in a similar manner. It would be nice to look at the structure of proteins and folded proteins to see whether one can imagine braids formed by phe, trp, tyr, and his.
At 8:46 PM, [email protected] said...
To streamfortyseven:
Thank you. I will look!
At 2:03 PM, Wes Hansen said...
http://www.dreamyoga.com/tibetan-dream-yoga/lucid-dreaming-in-tibetan-dream-yoga
http://www.rinpoche.com/teachings/dreamyoga.htm
https://www.kagyu.org/kagyulineage/buddhism/cul/cul02.php
At 7:42 PM, [email protected] said...
Thank you for the links.
I remember the Gödel, Escher, Bach of Hofstadter, where he described the essence of enlightenment according to Zen Buddhism. The answer to the question "Yes or No?" is "mu". That is, there is no answer. This is the essence of negentropic entanglement for a qubit when the probabilities are identical, since any basis for qubits is possible: there is no notion of opposite.
The conference helped me to take seriously the possibility that if the algebraic-valued probabilities defined by the eigenvalues of the density matrix are not identical but lie outside the algebraic extension used, then also this entanglement could be negentropic and stable against state function reduction, unless a phase transition extending the extension occurs.
This would correspond to NE representing a rule or concept as a superposition of pairs of instances. This would represent the usual cognition, when "Yes or No?" has an answer.
At 11:36 AM,
Stephen said...
http://m.iopscience.iop.org/1367-2630/15/2/023002/article
also, this discussion brings to mind the double entendre
https://en.m.wikipedia.org/wiki/Double_entendre
06/06/2015 - http://matpitka.blogspot.com/2015/06/a-model-for-anesthetic-action.html#comments
A model for anesthetic action
The mechanism of anesthetic action has remained a mystery although a lot of data exist. The Meyer-Overton correlation suggests that changes occurring at the lipid layers of the cell membrane are responsible for anesthesia, but this model fails. Another model assumes that the binding of anesthetics to membrane proteins is responsible for anesthetic effects, but this model also has problems. The hypothesis that anesthetics bind to the hydrophobic pockets of microtubules looks more promising.
The model should also explain the hyperpolarization of neuronal membranes, which also takes place when consciousness is lost. The old finding of Becker is that the reduction or reversal of the voltage between the frontal brain and occipital regions correlates with the loss of consciousness. Microtubules and DNA are negatively charged, and the discovery of Pollack that the so-called fourth phase of water involves the generation of negatively charged regions could play a role in the model. Combining these inputs with TGD-inspired theory of consciousness and quantum biology, one ends up with a microtubule-based model explaining the basic aspects of anesthesia.
For details see the article TGD based model for anesthetic action or the chapter Quantum model for
nerve pulse of "TGD and EEG". For a summary of earlier postings see Links to the latest progress in
TGD.
posted by Matti Pitkanen @ 5:58 AM
06/03/2015 - http://matpitka.blogspot.com/2015/06/quantitative-model-of-high-t-csuper.html#comments
Quantitative model of high Tc super-conductivity and bio-super-conductivity
I have already earlier developed a rough model for high Tc superconductivity. The members of Cooper pairs are assigned with parallel flux tubes carrying fluxes which have either the same or opposite directions. The essential element of the model is the hierarchy of Planck constants defining a hierarchy of dark matters.
1. In the case of ordinary high Tc superconductivity, bound states of charge carriers at parallel short
flux tubes become stable as spin-spin interaction energy becomes higher than thermal energy.
The transition to superconductivity is known to occur in two steps, as if two competing mechanisms were at work. A possible interpretation is that at the higher critical temperature Cooper pairs become stable, but the flux tubes are stable only below a rather short scale: perhaps because the spin-flux interaction energy for the current carriers is below the thermal energy. At the lower critical temperature, stability would be achieved, and supra-currents can flow over long length scales.
2. The phase transition to superconductivity is analogous to a percolation process in which flux tube pairs fuse by reconnection to form longer superconducting pairs at the lower critical temperature. This requires that the flux tubes carry anti-parallel fluxes: this is in accordance with the antiferromagnetic character of high Tc superconductivity. The stability of the flux tubes very probably correlates with the stability of the Cooper pairs: the coherence length could dictate the typical length of the flux tube.
3. A non-standard value of heff for the current carrying magnetic flux tubes is necessary since
otherwise the interaction energy of spin with the magnetic field associated with the flux tube is
much below the thermal energy.
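Point 3 can be backed by a back-of-envelope estimate. The sketch below is my own; the field strength B = 1 T and temperature T = 300 K are assumed values, not taken from the text.

```python
# Compare the ordinary spin-magnetic interaction energy mu_B * B with the
# thermal energy k_B * T, to see by how much ordinary h falls short.
mu_B = 5.788e-5   # Bohr magneton, eV/T
k_B = 8.617e-5    # Boltzmann constant, eV/K

B, T = 1.0, 300.0          # assumed field and temperature
E_spin = mu_B * B          # ~5.8e-5 eV
E_thermal = k_B * T        # ~2.6e-2 eV

# Since mu_B = e*hbar/(2m), scaling hbar -> n*hbar scales E_spin by n;
# thermal stability E_spin > E_thermal then needs roughly this n:
n_min = E_thermal / E_spin
print(n_min)  # ~450: ordinary h misses by over two orders of magnitude
```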
There are two energies involved.
1. The spin-spin interaction energy should give rise to the formation of Cooper pairs with members at parallel flux tubes at the higher critical temperature. Both spin triplet and spin singlet pairs are possible, and also their mixture.
2. The interaction energy of the spins with the magnetic fluxes, which can be parallel or antiparallel, also contributes to the gap energy of the Cooper pair and gives rise to a mixing of spin singlet and spin triplet. In the TGD based model of quantum biology, antiparallel fluxes are of special importance, since U-shaped flux tubes serve as a kind of tentacles allowing magnetic bodies to form pairs of antiparallel flux tubes connecting them and carrying supra-currents. The possibility of parallel fluxes suggests that also ferromagnetic systems could allow superconductivity.
One can wonder whether the interaction of the spins with the magnetic field of the flux tube could give rise to a dark magnetization and generate analogs of spin currents, known to be coherent over long length scales and used for this reason in spintronics (see this). One can also ask whether the spin current carrying flux tubes could become stable at the lower critical temperature and make superconductivity possible via the formation of Cooper pairs. This option does not seem to be realistic.
In the article Quantitative model of high Tc super-conductivity and bio-super-conductivity, the earlier flux tube model for high Tc superconductivity and bio-superconductivity is formulated in a more precise manner. The model leads to highly non-trivial and testable predictions.
1. Also in the case of ordinary high Tc superconductivity, a large value of heff = n×h is required.
2. In the case of high Tc super-conductivity two kinds of Cooper pairs, which belong to spin triplet
representation in good approximation, are predicted. The average spin of the states vanishes for
antiparallel flux tubes. Also super-conductivity associated with parallel flux tubes is predicted
and could mean that ferromagnetic systems could become super-conducting.
3. One ends up with the prediction that there should be a third critical temperature not lower than T** = 2T*/3, where T* is the higher critical temperature at which Cooper pairs identifiable as mixtures of Sz = +/- 1 pairs emerge. At the lower temperature, Sz = 0 states, which are mixtures of spin triplet and spin singlet states, emerge. At the temperature Tc the flux tubes carrying the two kinds of pairs become thermally stable by a percolation-type process involving the reconnection of U-shaped flux tubes to longer flux tube pairs, and supra-currents can run over long length scales.
4. The model applies also in the TGD inspired model of living matter. Now, however, the critical temperature for the phase transition in which long flux tubes stabilize is roughly by a factor 1/50 lower than that at which stable Cooper pairs emerge, and corresponds to the thermal energy at physiological temperatures, which also corresponds to the cell membrane potential. The higher energy corresponds to the scale of bio-photon energies (visible and UV range).
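The factor ~50 between the two energy scales in point 4 can be sanity-checked numerically. The constants and the assumed physiological temperature of 310 K below are mine, not from the text.

```python
# Thermal energy at physiological temperature, scaled up by 50, and the
# corresponding photon wavelength.
k_B = 8.617e-5            # Boltzmann constant, eV/K
E_thermal = k_B * 310     # ~0.027 eV, same order as the ~0.05-0.07 eV
                          # cell membrane potential energy scale
E_high = 50 * E_thermal   # ~1.3 eV
wavelength_nm = 1239.8 / E_high  # photon wavelength via hc = 1239.8 eV*nm
print(E_thermal, E_high, wavelength_nm)
# ~0.027 eV, ~1.3 eV, ~930 nm: the red/near-IR edge of the visible range,
# within a factor ~2 of visible-to-UV bio-photon energies
```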
For details see the article Quantitative model of high Tc super-conductivity and bio-super-conductivity. For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 1:55 AM
05/25/2015 - http://matpitka.blogspot.com/2015/05/the-relation-between-u-and-mmatrices.html#comments
The relation between U- and M-matrices
U- and M-matrices are key objects in zero energy ontology (ZEO). The M-matrix for large causal diamonds (CDs) is the counterpart of a thermal S-matrix and proportional to a scale dependent S-matrix: the dependence on the scale of the CD, characterized by an integer n, is S(n) = S^n, in accordance with the idea that S corresponds to the counterpart of the ordinary unitary time evolution operator. The U-matrix characterizes the time evolution as a dispersion in the moduli space of CDs, tending to increase the size of the CD and giving rise to the experienced arrow of geometric time and also to the notion of self in the TGD-inspired theory of consciousness.
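The scaling law S(n) = S^n implies the composition property S(m)S(n) = S(m+n), exactly as for an ordinary unitary time evolution indexed by an integer. A toy numerical check (the generator below is a random Hermitian matrix of my own choosing, not anything specific to TGD):

```python
import numpy as np

# Build a unitary S = exp(iH) from a random Hermitian generator H.
rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (H + H.conj().T) / 2                      # make H Hermitian
w, V = np.linalg.eigh(H)
S = V @ np.diag(np.exp(1j * w)) @ V.conj().T  # S = exp(iH), unitary

def S_of(n):
    """Scale-n S-matrix: S(n) = S^n."""
    return np.linalg.matrix_power(S, n)

assert np.allclose(S.conj().T @ S, np.eye(2))  # unitarity of S
assert np.allclose(S_of(3) @ S_of(4), S_of(7)) # S(m) S(n) = S(m+n)
```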
The original view about the relationship between U- and M-matrices was a purely formal guess: M-matrices would define the orthonormal rows of the U-matrix. This guess is not physically correct, and one must consider in detail what the U-matrix really means.
1. First, about the geometry of the CD. The boundaries of the CD will be called passive and active: the passive boundary corresponds to the boundary at which repeated state function reductions take place and give rise to a sequence of unitary time evolutions U, each followed by a localization in the moduli of the CD. The active boundary corresponds to the boundary for which U induces delocalization and modifies the states at it.
The moduli space for the CDs consists of a discrete subgroup of scalings for the size of the CD, characterized by the proper time distance between the tips, and the subgroup of Lorentz boosts leaving the passive boundary and its tip invariant and acting on the active boundary only. This group is assumed to be represented unitarily by matrices Λ forming the same group for all values of n.
The proper time distance between the tips of the CD is quantized as an integer multiple of the minimal distance defined by the CP2 time: T = nT0. Also in a quantum jump in which the size scale n of the CD increases, the increase corresponds to an integer multiple of T0. Using the logarithm of proper time, one can interpret this in terms of a scaling parametrized by an integer. The possibility to interpret a proper time translation as a scaling is essential for having manifest Lorentz invariance: the ordinary definition of the S-matrix introduces a preferred rest system.
2. The physical interpretation would be roughly as follows. The M-matrix for a given CD codes for the physics as we usually understand it. The M-matrix is a product of a square root of a density matrix and an S-matrix depending on the size scale of the CD, and is the analog of a thermal S-matrix. State function reduction at the opposite boundary of the CD corresponds to what happens in state function reductions in particle physics experiments. The repeated state function reductions at the same boundary of the CD correspond to the TGD version of the Zeno effect, crucial for understanding consciousness. The unitary U-matrix describes the time evolution of zero energy states due to the increase of the size scale of the CD (at least in a statistical sense). This process is dispersion in the moduli space of CDs: all possible scalings are allowed, and localization in the space of moduli of the CD localizes the active boundary of the CD after each unitary evolution.
In the following I will proceed by making questions. One ends up with formulas allowing one to understand the architecture of the U-matrix and to reduce its construction to that for the S-matrix, which has an interpretation as an exponential of the generator L-1 of the Virasoro algebra associated with the super-symplectic algebra.
What can one say about M-matrices?
1. The first thing to be kept in mind is that M-matrices act in the space of zero energy states rather than in the space of positive or negative energy states. For a given CD, the M-matrices are products of hermitian square roots of hermitian density matrices acting in the space of zero energy states and a universal unitary S-matrix S(CD) acting on states at the active end of the CD (this is also very important to notice) and depending on the scale of the CD: Mi = Hi S(CD).
Hi is a hermitian square root of a density matrix, and for a given CD the matrices Hi must be orthonormal by the orthonormality of the zero energy states associated with the same CD. The zero energy states associated with different CDs are not orthogonal, and this makes the unitary time evolution operator U non-trivial.
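The algebraic skeleton here - hermitian Hi orthonormal under Tr(HiHj) = δi,j, with Mi = Hi S for a common unitary S - can be illustrated in a finite-dimensional toy model. The 2x2 matrices and the particular S below are illustrative choices, not the actual infinite-dimensional WCW construction:

```python
# Toy finite-dimensional illustration: hermitian matrices H_i with
# Tr(H_i H_j) = delta_ij and M_i = H_i S for a common unitary S.
# Matrices are 2x2 nested lists of complex numbers.
import cmath

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

def dagger(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

s2 = 2 ** -0.5  # normalization giving Tr(H_i H_j) = delta_ij
H = [
    [[s2, 0], [0, s2]],             # identity / sqrt(2)
    [[0, s2], [s2, 0]],             # sigma_x / sqrt(2)
    [[0, -1j * s2], [1j * s2, 0]],  # sigma_y / sqrt(2)
    [[s2, 0], [0, -s2]],            # sigma_z / sqrt(2)
]

# A diagonal unitary S: the toy analog of the scale-dependent S-matrix.
S = [[cmath.exp(0.3j), 0], [0, cmath.exp(1.1j)]]

# Orthonormality Tr(H_i H_j) = delta_ij.
for i in range(4):
    for j in range(4):
        expected = 1.0 if i == j else 0.0
        assert abs(trace(matmul(H[i], H[j])) - expected) < 1e-12

# M_i = H_i S inherit the same orthonormality under <A, B> = Tr(A^dagger B),
# since the unitary S is common to all M_i and cancels inside the trace.
M = [matmul(h, S) for h in H]
for i in range(4):
    for j in range(4):
        expected = 1.0 if i == j else 0.0
        assert abs(trace(matmul(dagger(M[i]), M[j])) - expected) < 1e-12
print("orthonormality of H_i and of M_i = H_i S verified")
```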
2. Could quantum measurement be seen as a measurement of the observables defined by the Hermitian generators Hi? This is not quite clear since their action is on zero energy states. One might actually argue that the action of this kind of observable on zero energy states does not affect their vanishing net quantum numbers. This suggests that the Hi carry no net quantum numbers and belong to the Cartan algebra. The action of S is restricted to the active boundary of the CD, and therefore it does not commute with the Hi unless the action is in a separate tensor factor. Therefore the idea that S would be an exponential of the generators Hi and would thus commute with them, so that the Hi would correspond to sub-spaces remaining invariant under S acting unitarily inside them, does not make sense.
3. In the TGD framework the symplectic algebra acting as isometries of WCW is analogous to a Kac-Moody algebra, with the finite-dimensional Lie algebra replaced by the infinite-dimensional symplectic algebra with elements characterized by conformal weights. There is a temptation to think that the Hi could be seen as a representation of this algebra or its sub-algebra. The super-symplectic algebra allows an infinite fractal hierarchy of sub-algebras isomorphic to the full algebra and with conformal weights coming as n-multiples of those of the full algebra. In the proposed realization of quantum criticality, the elements of the sub-algebra characterized by n act as a gauge algebra. An interesting question is whether this sub-algebra is involved with the realization of M-matrices for a CD with size scale n. The natural expectation is that n defines a cutoff for conformal weights related to the finite measurement resolution.
How does the size scale of CD affect M-matrices?
1. In standard quantum field theory, the S-matrix represents time translation. The obvious generalization is that now the scaling characterized by the integer n is represented by a unitary S-matrix that is the n:th power of some unitary matrix S assignable to a CD with minimal size: S(CD) = S^n.
S(CD) is a discrete analog of the ordinary unitary time evolution operator, with n replacing the continuous time parameter.
2. One can also see the M-matrices as a generalization of a Kac-Moody type algebra. Also this suggests S(CD) = S^n, where S is the S-matrix associated with the minimal CD. S becomes the representative of the phase exp(iφ). The inner product between CDs of different size scales n1 and n2 can be defined as
⟨Mi(m), Mj(n)⟩ = Tr(S^(-m)• HiHj •S^n) × θ(n-m), θ(n) = 1 for n ≥ 0, θ(n) = 0 for n < 0.
Here I have denoted the action of the S-matrix at the active end of the CD by "•" in order to distinguish it from the action of matrices on zero energy states, which could be seen as belonging to the tensor product of states at the active and passive boundaries.
It turns out that the unitarity conditions for the U-matrix are invariant under translations of n if one assumes that the transitions obey a strict arrow of time expressed by nj - ni ≥ 0. This simplifies the unitarity conditions dramatically. This gives orthonormality for M-matrices associated with identical CDs. This inner product could be used to identify the U-matrix.
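A minimal numerical sketch of this scale-dependent inner product, using diagonal 2x2 matrices so that every trace reduces to a sum of phases. The eigenvalue phases s1, s2 of S are arbitrary illustrative choices:

```python
# Sketch of <M_i(m), M_j(n)> = Tr(S^-m H_i H_j S^n) * theta(n-m)
# for diagonal matrices: S = diag(e^{i s1}, e^{i s2}),
# H_1 = diag(1, 0), H_2 = diag(0, 1) with Tr(H_i H_j) = delta_ij.
import cmath

s = [0.7, 2.1]                 # illustrative eigenvalue phases of S
H = [(1.0, 0.0), (0.0, 1.0)]   # diagonal entries of H_1, H_2

def theta(k):
    return 1.0 if k >= 0 else 0.0

def inner(i, j, m, n):
    """Tr(S^-m H_i H_j S^n) * theta(n - m), all matrices diagonal."""
    tr = sum(H[i][a] * H[j][a] * cmath.exp(1j * s[a] * (n - m)) for a in range(2))
    return tr * theta(n - m)

# For equal scales (m = n) the inner product is delta_ij ...
assert abs(inner(0, 0, 3, 3) - 1.0) < 1e-12
assert abs(inner(0, 1, 3, 3)) < 1e-12
# ... for n > m a scale-dependent phase exp(i s_i (n-m)) appears,
# and for n < m the strict arrow of time gives zero.
assert abs(inner(0, 0, 2, 5) - cmath.exp(1j * s[0] * 3)) < 1e-12
assert inner(0, 0, 5, 2) == 0.0
print("delta_ij at m = n; phase factor for n > m; zero for n < m")
```

The surviving phase exp(i si (n-m)) for n > m is exactly the obstruction to unitarity discussed below, resolved there by an angle integration.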
3. How do the discrete Lorentz boosts affecting the moduli of a CD with a fixed passive boundary affect the M-matrices? The natural assumption is that the discrete Lorentz group is represented by unitary matrices λ: the matrices Mi are transformed to Mi•λ for a given Lorentz boost acting on states at the active boundary only.
One cannot completely exclude the possibility that S acts unitarily on zero energy states. In this case the scaling would be interpreted as acting on zero energy states rather than on those at the active boundary only. The zero energy state basis defined by the Mi would depend on the size scale of the CD in a more complex manner. This would not affect the above formulas except by dropping the "•".
The unitary U must characterize the transitions in which the moduli of the active boundary of the causal diamond (CD) change and also the states at the active boundary (paired with unchanging states at the passive boundary) change. The arrow of the experienced flow of time emerges during the period in which state function reductions take place at the fixed ("passive") boundary of the CD and do not affect the states at it. Note that these states form correlated pairs with the changing states at the active boundary. The physically motivated question is whether the arrow of time emerges statistically from the fact that the size of the CD tends to increase in the average sense in repeated state function reductions, or whether the arrow of geometric time is strict. It turns out that the unitarity conditions simplify dramatically if the arrow of time is strict.
What can one say about U-matrix?
1. Just from the basic definition of a unitary matrix, the elements of U are between zero energy states (M-matrices) associated with two CDs with possibly different moduli of the active boundary. A given matrix element of U should be proportional to an inner product of two M-matrices associated with these CDs. The obvious guess is the inner product between M-matrices:
Uij^(m,n) = ⟨Mi(m,λ1), Mj(n,λ2)⟩ = Tr(λ1† S^(-m)• HiHj •S^n λ2) = Tr(S^(-m)• HiHj •S^n λ2λ1^(-1)) θ(n-m).
Here the usual properties of the trace are assumed. The justification is that the operators acting at the active boundary of the CD are a special case of operators acting non-trivially at both boundaries.
2. Unitarity conditions must be satisfied. These conditions relate S and the hermitian generators Hi serving as square roots of density matrices. The unitarity conditions UU† = U†U = 1 are defined in the space of zero energy states and read as
∑j1,n1 Uij1^(m,n1) (U†)j1j^(n1,n) = δi,j δm,n δλ1,λ2.
To simplify the situation, let us make the plausible hypothesis that the contribution of the Lorentz boosts to the unitarity conditions is trivial by the unitarity of the representation of the discrete boosts and its independence of n.
3. In the remaining degrees of freedom, one would have
∑j1, k≥Max(0,n-m) Tr(S^k• HiHj1) Tr(Hj1Hj •S^(n-m-k)) = δi,j δm,n.
The condition k ≥ Max(0, n-m) reflects the assumption of a strict arrow of time and implies that the unitarity conditions are invariant under the proper time translation (n,m) → (n+r, m+r). Without this condition, backwards translations (or rather scalings) in the direction of the geometric past would be possible for CDs of size scale n; this would break the translational invariance, and it would be very difficult to see how unitarity could be achieved. Stated in a general manner: time translations act as a semigroup rather than a group.
4. Irreversibility reduces the number of conditions dramatically. Despite this, their number is infinite, and they correlate the Hermitian basis with the unitary matrix S. There is an obvious analogy with a Kac-Moody algebra on a circle, with S replacing the phase factor exp(inφ) and the Hi replacing the finite-dimensional Lie algebra. The conditions could be seen as analogs of the orthogonality conditions for the inner product. The unitarity condition in the analogous situation would involve the phases exp(ikφ1) ↔ S^k and exp(i(n-m-k)φ2) ↔ S^(n-m-k), and the trace would correspond to an integration ∫ dφ1 over φ1, in accordance with the basic idea of non-commutative geometry that trace corresponds to integral. The integration over the φi would give δk,0 and δm,n. Hence there are hopes that the conditions might be satisfied. There is however a clear distinction from the Kac-Moody case, since S^n does not in general act in the orthogonal complement of the space spanned by the Hi.
5. The idea about the reduction of the action of S to a phase multiplication is highly attractive, and one could consider the possibility that the basis of Hi can be chosen in such a manner that the Hi are eigenstates of S. This would reduce the unitarity constraint to a form in which the summation over k can be separated from the summation over j1:
∑k≥Max(0,n-m) exp(i(k si + (n-m-k) sj)) ∑j1 Tr(HiHj1) Tr(Hj1Hj) = δi,j δm,n.
The summation over k should give a factor proportional to δsi,sj. If the correspondence between the Hi and the eigenvalues si is one-to-one, one obtains something proportional to δi,j apart from a normalization factor. Using the orthonormality Tr(HiHj) = δi,j, one obtains for the left-hand side of the unitarity condition exp(isi(n-m)) ∑j1 Tr(HiHj1) Tr(Hj1Hj) = exp(isi(n-m)) δi,j.
Clearly, the phase factor exp(isi(n-m)) is the problem: one should have a Kronecker delta δm,n instead. One should obtain behavior resembling Kac-Moody generators. The Hi should be analogs of Kac-Moody generators and include the analog of a phase factor made visible by the action of S.
It seems that this simple picture is not quite correct yet. One should somehow obtain an integration over an angle in order to obtain the Kronecker delta.
1. A generalization based on the replacement of real numbers with the function field on a circle suggests itself. The idea is to identify the eigenvalues of generalized Hermitian/unitary operators as Hermitian/unitary operators with a spectrum of eigenvalues, which can be continuous. In the recent case S would have as eigenvalues the functions λi(φ) = exp(isiφ). For a discretized version, φ would have a discrete spectrum φ(k) = 2πk/n. The spectrum of the λi would have n as a cutoff. The trace operation would include an integration over φ, and one would have analogs of Kac-Moody generators on a circle.
2. One possible interpretation for φ is as an angle parameter associated with a fermionic string connecting partonic 2-surfaces. For the super-symplectic generators, a suitably normalized radial light-like coordinate rM of the light-cone boundary (containing the boundary of the CD) would be the counterpart of the angle variable if periodic boundary conditions are assumed.
The eigenvalues could have an interpretation as analogs of conformal weights. Usually conformal weights are real and integer valued, and in this case it is necessary to have a generalization of the notion of eigenvalue since otherwise the exponentials exp(isi) would be trivial. In the case of the super-symplectic algebra I have proposed that the generating elements of the algebra have conformal weights given by the zeros of the Riemann zeta. The spectrum of conformal weights for the generators would consist of linear combinations of the zeros of zeta with integer coefficients. The imaginary parts of the conformal weights could appear as eigenvalues of S.
3. It is best to return to the definition of the U-matrix element to check whether the trace operation appearing in it can already contain the angle integration. If one includes in the trace operation the integration over φ, it gives the δm,n factor, and the U-matrix has elements only between states assignable to the same causal diamond. Hence one must interpret the U-matrix elements as functions of φ realized as factors exp(i(sn-sm)φ). This brings strongly to mind operators defined as distributions of operators on a line, encountered in the theory of representations of non-compact groups such as the Lorentz group. In fact, the unitary representations of discrete Lorentz groups are involved now.
4. The unitarity condition contains besides the trace also the integrations over the two angle parameters φi associated with the two U-matrix elements involved. The left-hand side of the unitarity condition reads as
∑k≥Max(0,n-m) I(k si) I((n-m-k) sj) × ∑j1 Tr(HiHj1) Tr(Hj1Hj) = δi,j δm,n,
I(s) = (1/2π) × ∫ dφ exp(isφ) = δs,0.
The integrations give the factor δk,0, eliminating the infinite sum obtained otherwise, plus the factor δn,m. The traces give Kronecker deltas since the projectors are orthonormal. The left-hand side equals the right-hand side, and one achieves unitarity. It seems that the proposed ansatz works and that the U-matrix can be reduced by a general ansatz to the S-matrix.
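The key computational step here, I(s) = (1/2π)∫ dφ exp(isφ) = δs,0 for integer s, is easy to check numerically. A uniform Riemann sum over the circle reproduces the delta exactly for integer s smaller than the grid size (a finite geometric series of roots of unity):

```python
# Numerical check of I(s) = (1/(2*pi)) * integral over [0, 2*pi) of exp(i*s*phi):
# equals 1 for s = 0 and 0 for any nonzero integer s with |s| < N.
import cmath

def I(s, N=1000):
    return sum(cmath.exp(1j * s * 2 * cmath.pi * k / N) for k in range(N)) / N

assert abs(I(0) - 1.0) < 1e-9
for s in (1, -1, 2, 7, -13):
    assert abs(I(s)) < 1e-9
print("I(s) = delta_{s,0} verified for integer s")
```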
What about the identification of S?
1. S should be the exponential of the scaling operator whose action reduces to a time translation along the time axis connecting the tips of the CD, realized as a scaling. In other words, the shift t/T0 = m → m+n corresponds to a scaling t/T0 = m → km giving m+n = km, in turn giving k = 1 + n/m. At the limit of large shifts one obtains k ≈ n/m → ∞, which corresponds to the QFT limit. nS corresponds to (nT0) × (S/T0) = TH, and one can ask whether the QFT Hamiltonian could correspond to H = S/T0.
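The arithmetic of representing the shift m → m+n as the scaling k = 1 + n/m, and the role of the logarithm in turning scalings into translations, can be checked directly (the numbers below are arbitrary illustrative choices):

```python
# The shift t/T0 = m -> m + n represented as a scaling by k = 1 + n/m,
# and log turning that scaling into a translation by log(k).
import math

m, n = 4, 12
k = 1 + n / m                 # scaling factor
assert k * m == m + n         # the scaling reproduces the shift

# In the logarithmic coordinate the scaling acts as a translation by log(k):
assert abs(math.log(k * m) - (math.log(m) + math.log(k))) < 1e-12

# For large shifts n >> m the factor approaches n/m, the claimed QFT limit:
m2, n2 = 4, 10**6
assert abs((1 + n2 / m2) / (n2 / m2) - 1.0) < 1e-5
print("shift-as-scaling arithmetic checks out")
```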
2. It is natural to assume that the operators Hi are eigenstates of the radial scaling generator L0 = rM d/drM at both boundaries of the CD and thus have well-defined conformal weights. As noticed, the spectrum for the super-symplectic algebra could also be given in terms of the zeros of the Riemann zeta.
3. The boundaries of the CD are given by the equations rM = m0 and rM = T - m0, where m0 is the Minkowski time coordinate along the line between the tips of the CD and T is the distance between the tips. From the relationship between rM and m0, the action of the infinitesimal translation H = i∂/∂m0 can be expressed in terms of the conformal generator L-1 = i∂/∂rM = rM^(-1) L0. Hence the action is non-diagonal in the eigenbasis of L0: it multiplies by the conformal weight and reduces the conformal weight by one unit. Hence the action of U can change the projection operator. For large values of the conformal weight, the action is classically close to that of L0: multiplication by the conformal weight plus a small relative change of the conformal weight.
4. Could the spectrum of H be identified as an energy spectrum expressible in terms of the zeros of zeta, which define a good candidate for the super-symplectic radial conformal weights? This certainly means maximal complexity, since the number of generators of the conformal algebra would be infinite. This identification might make sense in chaotic or critical systems. The functions (rM/r0)^(1/2+iy) and (rM/r0)^(-2n), n > 0, are eigenmodes of rM d/drM with eigenvalues (1/2+iy) and -2n, corresponding to the non-trivial and trivial zeros of zeta.
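The eigenmode property is easy to verify by finite differences: f(r) = r^s satisfies r df/dr = s f for any complex exponent s. Below, s = 1/2 + iy uses the imaginary part of the first non-trivial zeta zero (y ≈ 14.1347) and s = -2 the first trivial zero; the radius r = 1.7 is an arbitrary test point:

```python
# Finite-difference check that f(r) = r**s is an eigenmode of the scaling
# operator r d/dr with eigenvalue s.
import cmath

def scaling_op(s, r, h=1e-6):
    """Apply r d/dr to f(r) = r**s by central differences, divided by f(r)."""
    f = lambda x: cmath.exp(s * cmath.log(x))   # complex power r**s for r > 0
    df = (f(r + h) - f(r - h)) / (2 * h)
    return r * df / f(r)

for s in (0.5 + 14.134725j, -2.0 + 0j):
    val = scaling_op(s, r=1.7)
    assert abs(val - s) < 1e-4, (s, val)
print("r d/dr eigenvalue check passed for s = 1/2 + iy and s = -2")
```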
There are two options to consider: either L0 or iL0 could be realized as a hermitian operator. These options would correspond to the identification of the mass squared operator as L0 and to the approximate identification of the Hamiltonian as iL-1 ≈ iL0, making sense for large conformal weights.
1. Suppose that L0 = rM d/drM, realized as a hermitian operator, would give a harmonic oscillator spectrum for conformal confinement. In p-adic mass calculations the string model mass formula implies that L0 acts essentially as a mass squared operator with integer spectrum. I have proposed conformal confinement: for the physical states the net conformal weight is real and integer valued, corresponding to a sum over negative integer valued conformal weights (the trivial zeros) and a sum over the real parts of the non-trivial zeros, each equal to 1/2. The imaginary parts of the zeros would sum up to zero.
2. The counterpart of the Hamiltonian as a time translation is represented by H = iL0 = irM d/drM. Conformal confinement is now realized as the vanishing of the sum of the real parts of the zeros of zeta: this can be achieved. As a matter of fact, the integration measure drM/rM implies that the net conformal weight must be 1/2. This is achieved if the number of non-trivial zeros is odd, with a judicious choice of trivial zeros. The eigenvalues of the Hamiltonian acting as a time translation operator could correspond to linear combinations of the imaginary parts of the zeros of zeta with integer coefficients. This is an attractive hypothesis in critical systems, and the TGD Universe is indeed quantum critical.
What about quantum classical correspondence and zero modes?
The one-to-one correspondence between the basis of quantum states and zero modes realizes Quantum-Classical correspondence.
1. M-matrices would act in the tensor product of quantum fluctuating degrees of freedom and zero
modes. The assumption that zero energy states form an orthogonal basis implies that the
hermitian square roots of the density matrices form an orthonormal basis. This condition
generalizes the usual orthonormality condition.
2. The dependence on zero modes at a given boundary of the CD would be trivial and induced by the 1-1 correspondence |m⟩ → z(m) between states and zero modes assignable to the state basis |m±⟩ at the boundaries of the CD, and would mean the presence of factors δz+,f(m+) × δz-,f(n-) multiplying the M-matrix Mi^(m,n).
To sum up, it seems that the architecture of the U-matrix and its relationship to the S-matrix are now understood. In accordance with intuitive expectations, the construction of the U-matrix reduces to that of the S-matrix, and one can see the S-matrix as a discretized counterpart of the ordinary unitary time evolution operator, with time translation represented as a scaling. This allows one to circumvent the problems with the loss of manifest Poincare symmetry encountered in quantum field theories, and it allows Lorentz invariance although the CD has a finite size. What came as a surprise was the connection with the stringy picture: strings are necessary in order to satisfy the unitarity conditions for the U-matrix.
A second outcome was that the connection with the super-symplectic algebra suggests itself strongly. The identification of the hermitian square roots of density matrices with a Hermitian symmetry algebra is a very elegant aspect discovered already earlier. A further unexpected result was that the U-matrix is unitary only for a strict arrow of time (which changes in the state function reduction to the opposite boundary of the CD).
See the article The relation between U-matrix and M-matrices. For a summary of earlier postings see
Links to the latest progress in TGD.
posted by Matti Pitkanen @ 8:00 AM
3 Comments:
At 7:09 PM, Stephen said...
It's almost obvious now that you put it like this, Matti. Is it possible for this more generalized view to be accepted without having to immediately accept the whole of TGD? (I can imagine newcomers being incredulous)
At 8:43 PM, [email protected] said...
To Stephen;
I throw identical copy of your message. TGD is strongly present.
* Zero energy ontology could be separated from TGD. ZEO based quantum measurement theory, demanding also NMP, could in principle be formulated separately and would give a TGD inspired theory of consciousness without the TGD background. Clearly the TGD inspired theory of consciousness is essential for the U-matrix - M-matrix relation.
But the word "consciousness" probably represents that part of TGD which is not wanted by an average young newcomer who has just been brainwashed to believe in consciousness as an epiphenomenon.
* The super-symplectic symmetries with conformal structure are fundamental for TGD, and they would naturally define the algebra of hermitian square roots of density matrices. One could of course imagine other choices as well. But it might be that after this the theory looks very much like TGD.
My feeling is that the academic elite decides whether TGD continues to be a cursed word. It sees me as an old crackpot who refuses to believe that he is wrong despite all the humiliations which only academic intelligence can imagine. During my lifetime there is not much hope.
TGD represents world-view-changing ideas, and this creates strong denial since they are experienced as a threat to the beliefs building the standard ego. I am happy that I live in an era in which thinkers of new thoughts are not burned at the stake anymore!
05/21/2015 - http://matpitka.blogspot.com/2015/05/p-adic-physics-as-physics-ofcognition.html#comments
p-Adic physics as physics of cognition and imagination
The vision of p-adic physics as the physics of cognition and imagination has gradually established itself as one of the key ideas of the TGD inspired theory of consciousness. There are several motivations for this idea.
The vision has developed from the vision of living matter as something residing in the intersection of the real and p-adic worlds. One of the earliest motivations was p-adic non-determinism, identified tentatively as a space-time correlate for the non-determinism of imagination. p-Adic non-determinism follows from the fact that functions with vanishing derivatives are piecewise constant functions in the p-adic context.
More precisely, p-adic pseudo constants depend on the pinary cutoff of their arguments and replace integration constants in p-adic differential equations. In the case of field equations this means roughly that the initial data are replaced with initial data given for a discrete set of time values, chosen in such a manner that a unique solution of the field equations results. The solution can also be fixed in a discrete subset of rational points of the imbedding space. Presumably the uniqueness requirement implies some unique pinary cutoff. Thus the space-time surfaces representing solutions of p-adic field equations are analogous to space-time surfaces consisting of pieces of solutions of the real field equations. p-Adic reality is much like a dream reality consisting of rational fragments glued together in an illogical manner, or like a child's drawing of a body containing body parts in a more or less chaotic order.
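The pseudo-constant idea can be illustrated with a toy model: a function of an integer that depends only on its lowest K base-p digits is unchanged by increments that are p-adically small (multiples of p^K), so its p-adic difference quotient vanishes even though the function is not globally constant. The prime p, the cutoff K, and the particular function below are all illustrative choices:

```python
# Toy p-adic pseudo-constant: depends only on the K lowest base-p digits
# of its argument, so increments of p-adic norm <= p**-K leave it fixed.
P, K = 3, 4          # illustrative prime p and pinary cutoff K

def pinary_cutoff(x):
    """Keep only the K lowest base-p digits of a non-negative integer."""
    return x % P**K

def pseudo_constant(x):
    # Any function of the cutoff works; here some arbitrary mixing.
    return (7 * pinary_cutoff(x) + 5) % 11

# Shifts by multiples of p**K are p-adically small: the value is unchanged,
# i.e. the p-adic difference quotient vanishes.
for x in range(0, 200, 17):
    for t in (1, 2, 5):
        assert pseudo_constant(x + t * P**K) == pseudo_constant(x)

# Yet the function is not constant: it varies below the cutoff scale.
assert pseudo_constant(0) != pseudo_constant(1)
print("invariant under p-adically small shifts, yet not globally constant")
```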
The obvious-looking interpretation for the solutions of the p-adic field equations would be as a geometric correlate of imagination. Plans, intentions, expectations, dreams, and cognition in general could have p-adic space-time sheets as their geometric correlates. A deep principle could be involved: incompleteness is a characteristic feature of p-adic physics, but the flexibility made possible by this incompleteness is absolutely essential for imagination and cognitive consciousness in general.
The original idea - that p-adic space-time regions can suffer topological phase transitions to the real topology and vice versa in quantum jumps replacing the space-time surface with a new one - is given up as mathematically awkward: quantum jumps between different number fields do not make sense. The new adelic view states that both real and p-adic space-time sheets are obtained by continuation of string world sheets and partonic 2-surfaces to various number fields by the strong form of holography.
The idea of p-adic pseudo constants as correlates of imagination is however too nice to be thrown away without trying to find an alternative interpretation consistent with the strong form of holography. Could the following argument allow one to save the p-adic view of imagination in a mathematically respectable manner?
1. The construction of preferred extremals from data at 2-surfaces is like a boundary value problem. Integration constants are replaced with pseudo-constants depending on a finite number of pinary digits of the coordinates normal to the string world sheets and partonic 2-surfaces.
2. The preferred extremal property in the real context implies strong correlations between string world sheets and partonic 2-surfaces by the boundary conditions at them. One cannot choose these 2-surfaces completely independently. Pseudo-constants could allow a large number of p-adic configurations involving string world sheets and partonic 2-surfaces not allowed in the real context, realizing imagination.
3. Could imagination be realized as a larger size of the p-adic sectors of WCW? Could the realizable intentional actions belong to the intersection of the real and p-adic WCWs? Could the modes of WCW spinor fields for which the 2-surfaces are extendable to space-time surfaces only in some p-adic sectors make sense? The real space-time surface for them would be somehow degenerate, for instance consisting of string world sheets only.
Could imagination be a search for those collections of string world sheets and partonic 2-surfaces which allow extension to (realization as) real preferred extremals? p-Adic physics would be there as an independent aspect of existence, and this is just the original idea. Imagination could be realized in state function reduction, which always selects only those 2-surfaces which allow continuation to real space-time surfaces. The distinction between the only imaginable and the also realizable would be extendability using the strong form of holography.
I have the feeling that this view allows a respectable mathematical realization of imagination in terms of adelic quantum physics. It is remarkable that the strong form of holography, derivable from - you can guess - the strong form of General Coordinate Invariance (the Big E again!), plays an absolutely central role in it.
See the article How Imagination Could Be Realized p-Adically? For a summary of earlier postings see
Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:59 PM
7 Comments:
At 1:41 PM, Ulla said...
The strongest motivation is the vision about living matter as something residing in the
intersection of real and p-adic worlds.
How do you know that not all matter follows this rule? I think it must be so, so we still don't know what differentiates ordinary and living matter.
At 7:42 PM, [email protected] said...
Sorry for the illogical statement, and thank you for noticing it. My purpose was not to state it in this manner. The original vision - from a couple of years ago or so - was that living matter can be said to reside in this intersection of realities and p-adicities - sensory and cognitive. The questions are: what are living matter, sensory, cognitive, and intersection?
I have now mathematically sound answers to these questions in the conceptual framework provided by TGD (only in this framework, of course!).
A more precise statement is the core of this posting: both matter and cognition can be said to reside in this intersection. The "genes of space-time" are in the intersection as 2-D string world sheets and partonic 2-surfaces, one might say.
Imagined and real differ in that a pure figment of imagination cannot be continued by holography to a real space-time surface, but can be so continued to p-adic 4-surfaces because of the pseudo-constant phenomenon. This is a mathematically sound definition: mathematics is full of this kind of continuation problem, of the type "given boundary conditions, can one find a solution of the field equations?".
A dream reality consisting of pieces glued together illogically could be seen as an example of imagination. And of course, this posting too;-).
At 6:41 AM, Ulla said...
Maybe (the tree of) primes are not bent or curved in the same way as spacetime?
At 7:45 AM, [email protected] said...
The primes appear at a deeper level: at the level of the number field (extension of rationals) for the parameters characterising the space-time surfaces. Space-time and space-time curvature are higher level concepts: they relate to primes like sociology relates to elementary particle physics;-).
At 6:54 PM, Stephen said...
http://www.jasoncolavito.com/blog/review-of-ancient-aliens-s05e05-the-einstein-factor
At 8:34 AM, Ulla said...
http://www.theory.caltech.edu/~preskill/talks/Preskill-KITP-holographic-2015
05/21/2015 - http://matpitka.blogspot.com/2015/05/how-time-reversed-mental-imagesdiffer.html#comments
How do time-reversed mental images differ from mental images?
The basic prediction of ZEO based quantum measurement theory is that self corresponds to a sequence of state function reductions to a fixed boundary of the CD (the passive boundary) and that the first reduction to the opposite boundary means the death of self and re-incarnation at the opposite boundary. The re-incarnated self has a reversed arrow of geometric time. This applies also to the sub-selves of self, which give rise to mental images. One can raise several questions.
Do we indeed have both mental images and time-reversed mental images? How does a time-reversed mental image differ from the original one? Does time flow in the opposite direction for it? The roles of the boundaries of the CD have changed: the passive boundary of the CD defines the static background of the observed, whereas the non-static boundary defines a kind of dynamic figure. Does the change of the arrow of time change the roles of figure and background?
I have also proposed that motor action and sensory perception are time reversals of each other. Could one interpret this by saying that sensory perception is motor action affecting the body of self (say, emotional expression) and motor action is sensory perception of the environment around self?
In the sequel, reverse speech and the figure-background illusion are presented as examples of what time reversal for mental images could mean.
Time reversed cognition
Time reflection yields time-reversed and spatially reflected sensory-cognitive representations. When a mental image dies, it is replaced with its time reversal at the opposite boundary of its CD. The observation of these representations could serve as a test of the theory.
There is indeed some evidence for this rather weird looking time and spatially reversed cognition.
1. I have a personal experience supporting the idea of time-reversed cognition. During the last psychotic episodes of my "great experience" I was fighting to establish the normal direction of the experienced time flow. Could this mean that for some sub-CDs the standard arrow of time had reversed as some very high level mental images representing the bodily me died and were re-incarnated?
2. The passive boundary of the CD corresponds to the static observing self - a kind of background - and the active boundary to the dynamical - a kind of figure. The figure-background division of a mental image in this sense would change as a sub-self dies and re-incarnates, since figure and background change their roles. The figure-background illusion could be understood in this manner.
3. The occurrence of mirror writing is a well-known phenomenon (my younger daughter was a reverse writer when she was young). Spatial reflections of MEs are also possible and might be involved in mirror writing. Time reversal would change the direction of writing from right to left.
4. Reverse speech would also be a possible form of reversed cognition. Time-reversed speech has the same power spectrum as ordinary speech, and the fact that it usually sounds like gibberish means that phase information is crucial for carrying the meaning of speech. Therefore the hypothesis is testable.
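The power-spectrum claim in the last point is a general Fourier fact and easy to check numerically. The sketch below (not specific to TGD) uses a random waveform as a stand-in for speech and verifies that time reversal preserves the magnitude spectrum while scrambling the phases:

```python
import numpy as np

# Stand-in for a speech signal: any real-valued waveform works, since the
# argument is a general property of the Fourier transform of real signals.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)
reversed_signal = signal[::-1]

spectrum = np.fft.rfft(signal)
spectrum_rev = np.fft.rfft(reversed_signal)

# Time reversal leaves the power spectrum |F|^2 unchanged ...
same_power = bool(np.allclose(np.abs(spectrum), np.abs(spectrum_rev)))
# ... but scrambles the phases, which is where intelligibility lives.
phases_differ = not np.allclose(np.angle(spectrum), np.angle(spectrum_rev))
```

This is why reversed speech sounds like gibberish despite an identical power spectrum: the magnitudes survive, the phase structure does not.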
Reverse speech
Interestingly, the Australian David Oates claims that so-called reverse speech is a real phenomenon, and he has developed an entire technology and therapy (and business) around it. What is frustrating is that it seems impossible to find comments by professional linguists or neuroscientists about the claims of Oates. I managed to find only comments by a person calling himself a skeptical believer, but it became clear that the comments of this highly rhetorical and highly arrogant commentator did not contain any information. This skeptic even lectured poor Mr. Oates in an aggressive tone that serious scientists are not so naive as to even consider taking seriously what some Mr. Oates is saying.
The development of Science can often depend on ridiculously small things: in this case one should find a shielded place (no ridiculing skeptics around), wind a tape recorder backwards, and spend a few weeks or months learning to recognize reverse speech if it really is there! Computerized pattern recognition could also be used to make the recognition attempts objective, since it is a well-known fact that the brain does feature recognition by completing the data into something familiar.
The basic claims of Oates are the following.
1. Reverse speech contains temporal mirror images of ordinary words and even metaphorical statements; these words can also be identified from the Fourier spectrum; the brain responds in an unconscious manner to these words; and this response can be detected in EEG. Oates classifies these words into several categories. These claims could be tested, and it is a pity that (as suggested by a web search) no professional linguist or neuroscientist has taken the trouble to find out whether the basic claims of Oates are correct or not.
2. Reverse speech is a communication mode complementary to ordinary speech and gives rise to an unconscious (to us) communication mechanism making lying very difficult. If a person consciously lies, the honest alter ego can tell the truth to a sub-self understanding the reverse speech. Reverse speech relies on metaphors, and Oates claims that there is a general vocabulary. Could this be taken to suggest that reverse speech is communication of the right brain whereas the left brain uses ordinary speech? The notion of semitrance used to model the bicameral mind suggests that reverse speech could be communication from higher levels of the self hierarchy dispersed inside ordinary speech. There are also other claims relating to the therapy using reverse speech, which sound rather far-fetched, but one should not confuse these claims with those which are directly testable.
Physically, reverse speech could correspond to phase conjugate sound waves, which together with their electromagnetic counterparts can be produced in the laboratory. Phase conjugate waves have rather weird properties due to the fact that the second law applies in the reversed direction of geometric time. For this reason phase conjugate waves are applied in error correction. ZEO predicts this phenomenon.
Negative energy topological light rays play a fundamental role in the TGD-based model for living matter and the brain. The basic mechanism of intentional action would rely on the time mirror mechanism utilizing the TGD counterparts of phase conjugate waves, producing also the nerve pulse patterns generating ordinary speech. If the language areas of the brain contain regions in which the arrow of psychological time is not always the standard one, they would induce phase conjugates of the sound wave patterns associated with ordinary speech and thus reverse speech.
ZEO-based quantum measurement theory, which is behind the recent form of the TGD-inspired theory of Consciousness, provides a rigorous basis for this picture. Negative energy signals can be assigned with sub-CDs representing selves with a non-standard direction of geometric time, and every time a mental image dies, a mental image with the opposite arrow of time is generated. It would not be surprising if reverse speech were associated with these time-reversed mental images.
Figure-background rivalry and time reversed mental images
The classical demonstration of figure-background rivalry is a pattern experienced either as a vase or as two opposite faces. This phenomenon is not the same thing as binocular rivalry, in which the percepts associated with the left and right eyes, produced by different sensory inputs, are rivalling. There is also an illusion in which one perceives a dancer making a pirouette in either the counter-clockwise or the clockwise direction although the figure is static. The direction of the pirouette can change. In this case time reversal would naturally change the direction of rotation.
Figure-background rivalry gives direct support for the TGD-based model of self relying on ZEO if the following argument is accepted.
1. In ZEO the state function reduction to the opposite boundary of CD means the death of the sensory mental image and the birth of a new one, possibly the rivalling mental image. During the sequence of state function reductions to the passive boundary of CD defining the mental image, a quantum superposition of rivalling mental images associated with the active boundary of CD is generated.
In the state function reduction to the opposite boundary the previous mental image dies and is replaced with a new one. In the case of binocular rivalry this might be either of the sensory mental images generated by the sensory inputs to the eyes. This might happen also now, but a different interpretation is also possible.
2. The basic questions concern the time-reversed mental image. Does the subject person as a higher-level self also experience the time-reversed sensory mental image as a sensory mental image, as one might expect? If so, how does the time-reversed mental image differ from the original one? The passive boundary of CD defines quite generally the background - the static observer - and the active boundary the figure, so that their roles should change in the reduction to the opposite boundary. In the sensory rivalry situation this happens at least in the example considered (vase and two faces).
I have also identified motor action as the time reversal of sensory perception. What could this identification mean in the case of sensory percepts? Could sensory and motor be interpreted as an exchange of the experiencer (or sub-self) and the environment as figure and background?
If this interpretation is correct, figure-background rivalry would tell something very important about consciousness and would also support ZEO. Time reversal would permute figure and background. This might happen at a very abstract level. Even subjective-objective duality and the first- and third-person aspects of conscious experience might relate to the time reversal of mental images. In near-death experiences the person sees himself as an outsider: could this be interpreted as a change of the roles of figure and background identified as the first- and third-person perspectives? Could the first moments of the next life be seeing the world from the third-person perspective?
An interesting question is whether the right and left hemispheres tend to have opposite directions of geometric time. This would make possible metabolic energy transfer between them, making possible a kind of flip-flop mechanism. The time-reversed hemisphere would receive negative energy serving as a metabolic energy resource, and the hemisphere sending the negative energy would in this manner gain positive metabolic energy. A deeper interpretation would be in terms of a periodic transfer of negentropic entanglement. This would also mean that the hemispheres would provide two views of the world in which figure and background are permuted.
See the article Time Reversed Self. For a summary of earlier postings see Links to the latest progress in
TGD.
posted by Matti Pitkanen @ 12:25 AM
2 Comments:
At 6:02 PM,
L. Edgar Otto said...
I have thought a long time about this division of the human mind, and think that at times one side can teach the other... potentially, along these lines. I think in general the sides process differently, as if one side applies ideas of quantum physics and the other side leans more toward the relativistic regime. But this is simplistic for the complexity of the question you raise. What for example is folded inward or outward? Anyway, I wish you were more into the discussion lately, as on Facebook with especially the debate on Sabine's blog. And it is clear that new things have come from the microtubule ideas.
At 7:31 PM, [email protected] said...
I am surprised that this division might be there. I only recently realised that this is one of the predictions of ZEO. Interpretation is of course difficult. There are many dualities which might relate to this division: for instance, the first- and third-person aspects of consciousness are analogous to figure and background. And the right brain might be the time reverse of the left. This would allow two different views with figure and background exchanged. Very useful.
I like the discussion in Sabine's blog, especially Sabine's critical view and the short comment of Arun, which crystallised what I would have said.
What went wrong with superstrings and theoretical physics is a question which should
be discussed thoroughly although it is certainly not a pleasant topic of discussion for fans
of superstrings.
The four decades 1975-2015 are very interesting historically since the normal evolution of theoretical physics stopped. What caused the stagnation? Was the emergence of media technology, making possible manipulation and hyping in an unforeseen manner, one of the key factors? I also have the feeling that theoretical physics, done by a relatively small top elite, transformed into the analog of popular music, in which mediocrities dominate and success is determined by marketing efforts rather than musical content. Nowadays marketing is an essential part of visible science. Those who are not working on fashionable topics remain in the shadow.
05/19/2015 - http://matpitka.blogspot.com/2015/05/voevoedskis-univalent-foundationsof.html#comments
Voevodsky's univalent foundations of mathematics
I found a very nice article about the work of the mathematician Vladimir Voevodsky: he talks about univalent foundations of mathematics. To put it in a nutshell: the work deals with mathematical proofs, and the deep idea is that proofs are like paths in some abstract space. One deals with paths also in homotopy theory. What is remarkable is that Voevodsky's work leads to computer programs allowing one to check that a proof does not contain an error - something very badly needed as proofs are getting increasingly complex. I dare to guess that the article is understandable by laymen too.
Russell's problem
The article tells about type theory, originated already by Russell, who was led to his paradox by the set consisting of all sets which do not contain themselves as an element. The fatal question was: "Does this set contain itself?"
Russell proposed a solution of the problem by introducing a hierarchy of types. Sets are at the bottom and are defined so that a set does not contain collections of sets as elements. In the usual applications of set theory - say in manifold theory - this is always true. The type hierarchy guarantees that you do not put apples and oranges in the same basket.
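Russell's paradoxical set can be mimicked in code by modelling a "set" as a membership predicate. This is only a toy illustration (my own, not from the article) of why unrestricted comprehension fails:

```python
# Toy model of Russell's paradox: a "set" is a membership predicate, and
# russell(s) says "s does not contain itself".
russell = lambda s: not s(s)

# Asking whether the Russell set contains itself unfolds forever:
# russell(russell) = not russell(russell) = not not russell(russell) = ...
try:
    russell(russell)
    answer = "terminated"          # unreachable
except RecursionError:
    answer = "no consistent answer"
```

The non-terminating unfolding is the computational shadow of the contradiction; a type discipline rules the question out before it can be asked.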
Voevodsky's idea
Voevodsky's work relates to proof theory and to formalizing what a mathematical proof means.
Consider a demonstration that two sets A and B are equivalent. This means a simple thing: construct a one-to-one map between them. Usually one is interested only in the existence of this map, but one can also get interested in all the ways to realize it. All the ways to realize this map define, in a rather abstract sense, a collection of paths between A and B regarded as objects. A single path consists of a collection of arrows connecting an element of A with an element of B.
More generally, in mathematical proof theory all proofs of a theorem define this kind of collection. In topology all paths connecting two points define this kind of collection. In this framework Gödel's theorem becomes obvious: given axioms define rules for constructing paths and cannot give paths connecting two arbitrarily chosen truths.
One can again abstract this process. Just as one can make statements about statements about..., one can build paths between paths, and paths between paths between paths.... Voevodsky studied this problem in his attempts to formalise mathematics, with the practical goal of eventually developing tools for checking by computer that mathematical proofs are correct.
A more rigorous manner to define paths is in terms of a groupoid. This is a more general notion than that of a group, since the composition of two groupoid elements need not be defined. For instance, open paths between two points form only a groupoid, but closed paths based at a given point can be given the structure of a group.
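The partial composition that distinguishes a groupoid from a group can be sketched in a few lines. The `Path` class below is a hypothetical illustration of my own, not anyone's actual formalisation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Path:
    """A formal path from src to dst; steps records the arrows taken."""
    src: str
    dst: str
    steps: tuple

    def compose(self, other):
        # Composition is only partially defined - the defining feature
        # of a groupoid as opposed to a group.
        if self.dst != other.src:
            raise ValueError("endpoints do not match")
        return Path(self.src, other.dst, self.steps + other.steps)

    def inverse(self):
        # Every path is invertible by traversing it backwards.
        return Path(self.dst, self.src, tuple(reversed(self.steps)))

p = Path("A", "B", ("a1",))
q = Path("B", "C", ("a2",))
pq = p.compose(q)               # defined, since the endpoints match
loop = p.compose(p.inverse())   # closed path at A; loops at A form a group
```

Composing `q` with `p` in the other order raises an error, exactly because the endpoints fail to match: composition is partial.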
Voevodsky introduced the notion of an infinity-groupoid containing paths, paths between paths, ..., ad infinitum, and talks about univalent foundations. The great idea is that homotopy theory becomes a foundation of mathematics: proofs are paths in some abstract structure. This suggests to my non-professional mind that one can talk about continuous deformations of proofs and that one can classify them into homotopy types, with proofs of the same homotopy type deformable into each other.
How this relates to Quantum-TGD
What made me fascinated was how closely the basic hierarchies of Quantum-TGD relate to the objects studied in type theory and in Voevodsky's approach.
Let us start with set theory and type theory. TGD provides a non-trivial example of types, which by the way distinguishes TGD from superstring models. The imbedding space is a set and space-time surfaces are its subsets. The "World of Classical Worlds" (WCW) is the key abstraction distinguishing TGD from superstring models, where one still tries to get along by working at the space-time level.
What has surprised me again and again is that superstring modellers have spent decades in the landscape instead of making superstring models a real theory by introducing loop space as a key notion, although it has very nice mathematics: just the existence of the Kähler geometry fixes it uniquely. This observation actually led to the realisation that quantum TGD might be unique simply by virtue of its mathematical existence.
The points of WCW are 3-surfaces and its sets are collections of 3-surfaces. They are of a higher type than the sets of the imbedding space. There would be no sense in putting points of WCW and of the imbedding space in the same basket. But in set theory before Russell you could in principle do this. We have got as a birth gift the ability not to put cows and toothbrushes into the same set. But the ability to take seriously the existence of more abstract types does not seem to be a birth gift.
Voevodsky and others deal with statements about statements about statements.... What is amusing is that this vision has direct counterparts in TGD-based quantum physics, where various hierarchies have taken a key role. Some deep ideas seem to burst out simultaneously in totally different contexts! Voevodsky noticed the same thing in his work, but within the realm of mathematics.
Just a list of examples should be enough. Consider first type theory.
1. The hierarchy of infinite primes (integers, rationals) was the first mathematical discovery inspired by the TGD-inspired theory of consciousness. Infinite primes are constructed by a process analogous to a repeated second quantisation of an arithmetic quantum field theory, having an interpretation as making statements about statements about... up to arbitrarily high order. The hierarchy of space-time sheets of many-sheeted space-time is the classical counterpart. The physics prediction is that the higher levels of quantisation are part of generalised quantum physics and allow a quantum description of macroscopic and even astrophysical objects. The map of the sheets of many-sheeted space-time to a single region of Minkowski space defines the contraction of TGD to GRT and is an approximate operation: it maps a hierarchy of types to a single type and is a violent procedure meaning a loss of information.
Infinite integers could provide a generalisation of Gödel numbering in a quantum mathematics based on the replacement of axiomatics with anti-axiomatics: specify what you cannot do instead of what you can do! I wrote about this in an earlier posting.
Infinite rationals of unit real norm also lead to a generalisation of the real number. Each real point becomes an infinite-dimensional space consisting of all infinite rationals which have unit norm but well-defined number-theoretic anatomy.
2. There are several very closely related infinite hierarchies: the fractal hierarchy of quantum criticalities (a ball at the top of a hill at the top of...) and the isomorphic super-symplectic sub-algebras with conformal structure. There is an infinite fractal hierarchy of conformal gauge symmetry breakings. This defines an infinite hierarchy of dark matters with Planck constant heff = n × h. The algebraic extensions of rationals, giving rise to an evolutionary hierarchy for physics and perhaps explaining biological evolution, define a hierarchy. The inclusions of hyper-finite factors realizing finite measurement resolution define a hierarchy. The hierarchy of infinite integers and rationals also relates closely to these hierarchies.
3. In the TGD-inspired theory of Consciousness there is the hierarchy of selves having sub-selves (experienced as mental images) having sub-selves.... This hierarchy also relates very closely to the above hierarchies.
The notion of sequences of mathematical operations as paths is the second key idea in Voevodsky's work. The idea of paths representing mathematical computations, proofs, etc. is realised quite concretely in TGD quantum physics. Scattering amplitudes are identified as representations of sequences of algebraic operations of the Yangian leading from an initial collection of elements of the super-symplectic Yangian (physical states) to a final one. The duality symmetry of old-fashioned string models generalises to the statement that any two sequences connecting the same collections are equivalent and correspond to the same amplitude. This means extremely powerful predictions, and it seems that analogous results are obtained in the twistor program too: very many twistor Grassmann diagrams represent the same scattering amplitude.
posted by Matti Pitkanen @ 8:49 PM
05/19/2015 - http://matpitka.blogspot.com/2015/05/more-about-physical-interpretationof.html#comments
More about physical interpretation of algebraic extensions of rationals
The number-theoretic vision has begun to show its power. The basic hierarchies of Quantum-TGD would reduce to a hierarchy of algebraic extensions of rationals, and parameters such as the degrees of the irreducible polynomials characterizing the extension and the set of ramified primes would characterize quantum criticality and the physics of dark matter as large heff phases. The identification of preferred p-adic primes as ramified primes of the extension and the generalization of the p-adic length scale hypothesis as a prediction of NMP are basic victories of this vision (see this and this).
By the strong form of holography, the parameters characterizing string world sheets and partonic 2-surfaces serve as WCW coordinates. By various conformal invariances, one expects that the parameters correspond to conformal moduli, which means a huge simplification of Quantum-TGD since the mathematical apparatus of superstring theories becomes available and the number-theoretical vision can be realized. Scattering amplitudes can be constructed for a given algebraic extension and continued to various number fields by continuing the parameters, which are conformal moduli and group invariants characterizing the incoming particles.
There are many unanswered and even unasked questions.
1. How do the new degrees of freedom assigned to the n-fold covering defined by the space-time surface pop up in the number-theoretic picture? How does the connection with preferred primes emerge?
2. What are the precise physical correlates of the parameters characterizing the algebraic extension of rationals? Note that the most important extension parameters are the degree of the defining polynomial and the ramified primes.
1. Some basic notions
Some basic facts about extensions are in order. I emphasize that I am not a specialist.
1.1. Basic facts
The algebraic extensions of rationals are determined by roots of polynomials. Polynomials can be decomposed into products of irreducible polynomials, which by definition cannot be decomposed further into products of lower-degree polynomials with rational coefficients. These polynomials are characterized by their degree n, which is the most important parameter characterizing the algebraic extension.
One can assign to the extension primes and integers - or more precisely, prime and integer ideals. Integer ideals correspond to roots of monic polynomials P_n(x) = x^n + ... + a_0 in the extension with integer coefficients. Clearly, for n = 1 (trivial extension) one obtains the ordinary integers. Primes as such are not a useful concept since roots of unity are possible, and primes which differ by multiplication by a root of unity are equivalent. It is better to speak about prime ideals rather than primes.
A rational prime p can be decomposed into a product of powers of primes of the extension, and if some power is higher than one, the prime is said to be ramified and the exponent is called the ramification index. Eisenstein's criterion states that any polynomial P_n(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0, for which the leading coefficient a_n is not divisible by p, the coefficients a_i, i < n, are divisible by p, and a_0 is not divisible by p^2, is irreducible and allows p as a maximally ramified prime. The corresponding prime ideal is the n:th power of a prime ideal of the extension (roughly, the n:th root of p). This allows one to construct an endless variety of algebraic extensions having given primes as ramified primes.
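Eisenstein's criterion is mechanical enough to check in code. The helper below is a simple illustration (it assumes integer coefficients listed from the leading one, and includes the standard condition that p must not divide the leading coefficient):

```python
def eisenstein(coeffs, p):
    """Eisenstein's criterion for coeffs = [a_n, ..., a_1, a_0] at prime p."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    return (a_n % p != 0                                # p does not divide a_n
            and all(a % p == 0 for a in coeffs[1:])     # p divides a_i, i < n
            and a_0 % (p * p) != 0)                     # p^2 does not divide a_0

# x^3 - 5 is Eisenstein at p = 5, so 5 is maximally ramified in Q(5^(1/3)).
ramified_at_5 = eisenstein([1, 0, 0, -5], 5)
# x^2 + 1 is not Eisenstein at p = 3.
ramified_at_3 = eisenstein([1, 0, 1], 3)
```

Feeding in x^n - p for any n shows concretely how one manufactures the "endless variety" of extensions with a prescribed ramified prime.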
Ramification is analogous to criticality. When the gradient of a potential function V(x) depending on parameters has a multiple root, the potential function becomes proportional to a higher power of (x - x0). The appearance of this power is analogous to the appearance of a higher power of a prime of the extension in ramification. This gives rise to the cusp catastrophe. In fact, ramification is expected to be the number-theoretical correlate of quantum criticality in the TGD framework. What this precisely means at the level of space-time surfaces is the question.
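The criticality analogy can be made concrete with the cubic x^3 + px + q, whose discriminant -4p^3 - 27q^2 vanishes exactly when two roots coincide; the vanishing locus is the cusp curve of catastrophe theory. A minimal numerical check:

```python
def cubic_discriminant(p, q):
    # Discriminant of x^3 + p*x + q: zero exactly when two roots coincide.
    return -4 * p**3 - 27 * q**2

# Generic parameters: three distinct roots, non-zero discriminant.
generic = cubic_discriminant(-3, 1)
# On the cusp curve 4p^3 + 27q^2 = 0: x^3 - 3x + 2 = (x - 1)^2 (x + 2)
# has the double root x = 1, i.e. the higher power of (x - x0) in the text.
critical = cubic_discriminant(-3, 2)
```

The degenerating root at discriminant zero is the structural analogue of a prime ideal appearing with exponent greater than one in a ramified factorization.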
1.2 Galois group as symmetry group of algebraic physics
I proposed a long time ago that the Galois group acts as a fundamental symmetry group of quantum TGD and even made a clumsy attempt to make this idea more precise in terms of the notion of a number-theoretic braid. It seems that this notion is too primitive: the action of the Galois group must be realized at a more abstract level, and WCW provides this level.
First some facts (I am not a number theory professional, as the professional reader might have already noticed!).
1. The Galois group, acting as automorphisms of the field extension (mapping products to products and sums to sums and preserving the norm), characterizes the extension, and its elements have maximal order equal to n by algebraic n-dimensionality. For instance, for the complex numbers the Galois group acts by complex conjugation. The Galois group has a natural action on the prime ideals of the extension, mapping them to each other and preserving the norm determined by the determinant of the linear map defined by multiplication with the prime of the extension. For instance, for the quadratic extension Q(5^(1/2)) the norm is N(x + 5^(1/2)y) = x^2 - 5y^2: note that number theory leads naturally to Minkowskian metric signatures. Prime ideals combine to form orbits of the Galois group.
2. Since the Galois group leaves the rational prime p invariant, the action must permute the primes of the extension in the product representation of p. For ramified primes the points of the orbit of the ideal degenerate to a single ideal. This means that primes - and quite generally, the numbers of the extension - define orbits of the Galois group.
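The orbit degeneration for ramified primes can be illustrated with the quadratic extension Q(5^(1/2)) mentioned above. The sketch below uses Euler's criterion to classify odd primes as split, inert, or ramified; it deliberately ignores p = 2 and the subtlety that the ring of integers of Q(5^(1/2)) is actually Z[(1 + 5^(1/2))/2]:

```python
def splitting_type(p, d=5):
    """Splitting of an odd prime p in Q(sqrt(d)), d squarefree (rough version)."""
    if d % p == 0:
        # The Galois orbit degenerates to a single prime ideal: ramification.
        return "ramified"
    # Euler's criterion: d is a square mod p iff d^((p-1)/2) = 1 mod p.
    return "split" if pow(d, (p - 1) // 2, p) == 1 else "inert"

types = {p: splitting_type(p) for p in (3, 5, 11, 19)}
# 5 ramifies (5 = (sqrt 5)^2 up to units); 11 and 19 split into two
# conjugate ideals swapped by the Galois group; 3 stays inert.
```

For split primes the Galois group swaps the two ideal factors; for the ramified prime there is nothing left to swap, which is the degeneration described in item 2.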
The Galois group acts in the space of integer or prime ideals of the algebraic extension of rationals, and it is also physically attractive to consider the orbits defined by ideals as preferred geometric structures. If the numbers of the extension serve as parameters characterizing string world sheets and partonic 2-surfaces, then the ideals would naturally define subsets of the parameter space in which the Galois group would act.
The action of the Galois group would leave the space-time surface invariant if the sheets coincide at the ends, but would permute the sheets. Of course, the space-time sheets permuted by the Galois group need not coincide at the ends. In this case the action need not be a gauge action, and one could have non-trivial representations of the Galois group. In the Langlands correspondence these representations relate to the representations of a Lie group, and something similar might take place in TGD, as I have indeed proposed.
Remark: The strong form of holography also supports the vision of a quaternionic generalization of conformal invariance implying that the adelic space-time surface can be constructed from the data associated with functions of two complex variables, which in turn reduce to functions of a single variable.
If this picture is correct, it is possible to talk about quantum amplitudes in the space defined by the numbers of the extension and to restrict the consideration to prime ideals or more general integer ideals.
1. These number-theoretical wave functions are physical if the parameters characterizing the 2-surface belong to this space. One could have purely number-theoretical quantal degrees of freedom assignable to the hierarchy of algebraic extensions, and these discrete degrees of freedom could be fundamental for living matter and the understanding of Consciousness.
2. The simplest assumption - that the Galois group acts as a gauge group when the ends of the sheets coincide at the boundaries of CD - seems however to destroy hopes of non-trivial number-theoretical physics, but this need not be the case. Physical intuition suggests that ramification somehow saves the situation and that the non-trivial number-theoretic physics could be associated with ramified primes assumed to define the preferred p-adic primes.
2. How do new degrees of freedom emerge for ramified primes?
How do the new discrete degrees of freedom appear for ramified primes?
1. The space-time surfaces defining singular coverings are n-sheeted in the interior. At the ends of the space-time surface at the boundaries of CD, however, the sheets coincide. This looks very much like a critical phenomenon.
Hence the idea would be that the end collapse can occur only for the ramified prime ideals of the parameter space - ramification is also a critical phenomenon - and means that some of the sheets or all of them coincide. Thus the sheets would coincide at the ends only for the preferred p-adic primes and give rise to the singular covering and large heff. End-collapse would be the essence of criticality! This would occur when the parameters defining the 2-surfaces are in a ramified prime ideal.
2. Even for the ramified primes there would be n distinct space-time sheets, which are regarded as physically distinct. This would support the view that besides the space-like 3-surfaces at the ends, the full 3-surface must also include the light-like portions connecting them, so that one obtains a closed 3-surface. The conformal gauge equivalence classes of the light-like portions would give rise to additional degrees of freedom. In the space-time interior and for string world sheets they would become visible.
For ramified primes n distinct 3-surfaces would collapse to a single one, but the n discrete degrees of freedom would be present and the particle would obtain them. I have indeed proposed a number-theoretical second quantization assigning a fermionic Clifford algebra to the sheets with n oscillator operators. Note that this option does not require the Galois group to act as a gauge group in the general case. This number-theoretical second quantization might relate to the realization of Boolean algebra suggested by the weak form of NMP (see this).
3. About the physical interpretation of the parameters characterizing the algebraic extension of rationals in the TGD framework
It seems that the Galois group is naturally associated with the hierarchy heff/h = n of effective Planck constants defined by the hierarchy of quantum criticalities. n would naturally define the maximal order for an element of the Galois group. The analogy of the singular covering with that of z^(1/n) suggests that the Galois group is very closely related to the conformal symmetries and that its action induces permutations of the sheets of the covering of the space-time surface.
Without any additional assumptions the values of n and the ramified primes are completely independent, so that the magnetic flux tube connecting the wormhole contacts associated with an elementary particle need not correspond to a very large n having the p-adic prime p characterizing the particle as a factor (p = M127 = 2^127 - 1 for the electron). This would not induce any catastrophic changes. TGD-based physics could however change the situation and reduce the number-theoretical degrees of freedom: the intuitive hypothesis that p divides n might hold true after all.
1. The strong form of GCI implies the strong form of holography. One implication is that the WCW Kähler metric can be expressed either in terms of the Kähler function or as anti-commutators of super-symplectic Noether super-charges defining WCW gamma matrices. This realizes what can be seen as an analog of the AdS/CFT correspondence. This duality is much more general. The following argument supports this view.
1. Since fermions are localized at string world sheets having their ends at partonic 2-surfaces, one expects that also the Kähler action can be expressed as an effective stringy action. It is natural to assume that the string area action is replaced with the area defined by the effective metric of the string world sheet, expressible in terms of the anti-commutators of the Kähler-Dirac gamma matrices defined by contractions of the canonical momentum currents with the imbedding space gamma matrices. If the string tension is proportional to 1/heff^2, the string length scales as heff.
2. The AdS/CFT analogy inspires the view that strings connecting partonic 2-surfaces serve as correlates for the formation of - at least gravitational - bound states. The distances between string ends would be of the order of the Planck length in string models, and one can argue that gravitational bound states are not possible in string models; this is the basic reason why one has ended up with the landscape and multiverse non-sense.
2. For heff = hgr = GMm/v0 one obtains reasonable sizes for astrophysical objects (that is, sizes larger than the Schwarzschild radius rs = 2GM, with c = 1). Gravitation would mean quantum coherence in astrophysical length scales.
3. In elementary particle length scales the value of heff must be such that the geometric size of the elementary particle, identified as the Minkowski distance between the wormhole contacts defining the length of the magnetic flux tube, is of the order of the Compton length - that is, the p-adic length scale proportional to p^(1/2). Note that dark physics would be an essential element already at the elementary particle level if one accepts this picture also in elementary particle mass scales. This requires a more precise specification of what darkness in the TGD sense really means.
One must however distinguish between two options.
1. If one assumes n ≈ p^(1/2), one obtains a large contribution to the classical string energy:
Δm ∼ mCP2^2 Lp/hbar_eff^2 ∼ mCP2/p^(1/2), which is of the order of the particle mass. A dark
mass of this size looks unfeasible, since p-adic mass calculations assign the mass to the
wormhole contacts at the ends. One must however be very cautious, since the interpretations can change.
2. The second option allows one to understand why the minimal size scale of the CD
characterizing a particle corresponds to the secondary p-adic length scale. The idea is that the
string can be thought of as being obtained by a random walk, so that the distance between
its ends is proportional to the square root of the actual length of the string in the induced
metric. The actual length of the string would then be proportional to p, and n - also
proportional to p - defines the minimal size scale of the CD associated with the particle.
The dark contribution to the particle mass would be Δm ∼ mCP2^2 Lp/hbar_eff^2 ∼ mCP2/p,
which is completely negligible, suggesting that it is not easy to make the dark side of
elementary particles visible.
4. If the latter interpretation is correct, elementary particles would have a huge number of hidden
degrees of freedom assignable to their CDs. For instance, the electron would have p = n = 2^127 − 1 ≈ 10^38
hidden discrete degrees of freedom - 127 bits is the estimate - and would be a rather intelligent
system, far from the point-like idiot of standard physics. Is it a mere accident that the
secondary p-adic time scale of the electron is 0.1 seconds - the fundamental biorhythm - and the size
scale of the minimal CD is slightly larger than the circumference of the Earth?
Note, however, that one can still consider the conservative option in which the magnetic flux
tubes connecting the wormhole contacts representing an elementary particle are in the
heff/h = 1 phase.
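The random-walk argument in option 2 above can be checked numerically: the end-to-end distance of a walk of N unit steps grows like N^(1/2), so a string of length proportional to p gives an end-to-end distance proportional to p^(1/2). A minimal sketch (pure illustration; the trial count and seed are my choices):

```python
import random

def end_to_end(n_steps, trials=2000, rng=random.Random(42)):
    """Mean end-to-end distance of a 1-D random walk of n_steps unit steps."""
    total = 0.0
    for _ in range(trials):
        pos = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += abs(pos)
    return total / trials

# If the distance between the string ends grows like sqrt(L) with the actual
# string length L, then L ~ p forces the end-to-end distance ~ p^(1/2),
# i.e. the primary p-adic length scale used in the text.
d1, d2 = end_to_end(100), end_to_end(400)
ratio = d2 / d1   # quadrupling the length should roughly double the distance
```

Quadrupling the walk length indeed roughly doubles the mean end-to-end distance, which is the sqrt scaling assumed above.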
See the article More about physical interpretation of algebraic extensions of rationals. For a
summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 1:53 AM
05/14/2015 - http://matpitka.blogspot.com/2015/05/quantum-mathematics-in-tgduniverse.html#comments
Quantum Mathematics in the TGD Universe
Some comments about quantum mathematics, quantum Boolean thinking and computation as they
might happen at fundamental level.
1. One should understand how Boolean statements A→B are represented. Or more generally: how does
a computation-like procedure leading from a collection A of math objects to a collection B of math
objects take place? Recall that in computations the objects coming in and going out are bit sequences.
Now one has a computation-like process. → is expected to correspond to the arrow of time.
If fermionic oscillator operators generate the Boolean basis, zero energy ontology is necessary to
realize rules as connections between statements realized as bit sequences. Positive energy ontology
would allow only the statements A, B but not statements A→B about them. ZEO also makes it possible
to avoid the restrictions due to fermion number conservation and its well-definedness.
Collection A is at the passive boundary of the CD and is not changed in the state function reduction
sequence defining self; B is at the active one. As a matter of fact, it is not a single statement but a
quantum superposition of statements B that resides there! In the quantum jump selecting a
single B at the active boundary, A is replaced with a superposition of A:s: self dies and reincarnates as a more negentropic entity. The quantum computation halts.
2. That both a and b cannot be known precisely is a quantal limitation on what can be known: a
philosopher would talk about epistemology here. The different pairs (a,b) in the superposition over
b:s are analogous to different implications of a. The thinker is doomed to always live in a quantum
cognitive dust and never be quite sure.
3. What is the computer-like structure now? A Turing computer is a discretized 1-D time-like line. This
quantum computer is a superposition of 4-D space-time surfaces with the basic computational
operations located along it as partonic 2-surfaces defining the algebraic operations and connected
by fermion lines representing signals. Also string world sheets are involved. In some key aspects
this is very similar to an ordinary computer. By the strong form of holography, computations use only
data at string world sheets and partonic 2-surfaces.
4. What is the computation? It is a sequence of repeated state function reductions leaving the passive
boundary of the CD intact but affecting the position (moduli) of the upper boundary of the CD and also the
parts of zero energy states there. It is a sequence of unitary processes delocalizing the active
boundary of the CD followed by localization but no reduction. This is the counterpart of a sequence of
reductions leaving the quantum state invariant in ordinary measurement theory (Zeno etc.).
The computation halts as the first reduction to the opposite boundary occurs. Self dies and reincarnates at the opposite boundary. A negentropy gain results in general and can be seen as the
information gained in the computation. One might hope that the new self (maybe something at a
higher level of the dark matter hierarchy) is a little bit wiser - at least statistically speaking this
seems to be true by the weak form of NMP!
5. One should understand the quantum counterparts of the basic rules of manipulation. ×, /, +, and − are the most familiar examples.
1. The basic rules correspond physically to generalized Feynman/twistor diagrams
representing sequences of algebraic manipulations in the Yangian of super-symplectic
algebra. Sequences correspond now to collections of partonic 2-surfaces defining vertices
of generalized twistor diagrams.
2. 3-vertices correspond to the product and co-product for quantal stringy Noether charges.
Geometrically the vertex - the analog of an algebraic operation - is a partonic 2-surface at which
incoming and outgoing light-like 3-surfaces meet, like the vertex of a Feynman diagram. The
co-product vertex is not encountered in simple algebraic systems; it is the time-reversed
variant of the product vertex: fusion instead of annihilation.
3. This diagrammatics has a huge symmetry, just like ordinary computations have. All
computation sequences (note that the corresponding space-time surfaces are different!)
connecting the same collections A and B of objects produce the same scattering amplitude.
This generalises the duality symmetry of hadronic string models. This is a really gigantic
simplification, and the results of the twistor Grassmann approach suggest that something
similar is obtained there. This implication seemed so gigantic that I gave up the idea for
years.
One should understand the analogs for the mathematical axioms. What are the fundamental rules
of manipulation?
1. The classical computation/deduction would obey deterministic rules at vertices. The
quantal formulation cannot be deterministic for the simple reason that one has quantum
non-determinism (the weak form of NMP allowing also good and evil). The quantum rules
obey the format that God used when communicating with Adam and Eve: do anything
else but do not break the conservation laws. Classical rules would list all the allowed
possibilities, and this leads to difficulties, as Gödel demonstrated. I think that chess
players follow this kind of "anti-axiomatics".
2. I have the feeling that anti-axiomatics (not any well-established idea; it occurred to me as
I wrote this) could provide a more natural approach to quantum computation and even
allow a new manner of approaching the problematics of axiomatisation. It is also
interesting to notice that a second TGD inspired notion - the infinite hierarchy of mostly
infinite integers (generated from infinite primes obtained by a repeated second
quantization of an arithmetic QFT) - could make possible a generalisation of Gödel
numbering for statements/computations. This view has at least one virtue: it makes clear
how extremely primitive conscious entities we are in a bigger picture!
6. The laws of physics take care that the anti-axioms are obeyed. Quite concretely:
1. The preferred extremal property of Kähler action and Kähler-Dirac action plus conservation
laws for charges associated with super-symplectic and other generalised conformal
symmetries would define the rules not broken in vertices.
2. At the fermion lines connecting the vertices the propagator would be determined by the
boundary part of the Kähler-Dirac action. The Kähler-Dirac equation for spinors and the
consistency conditions from Kähler action (strong form of holography) would dictate
what happens to the fermionic oscillator operators defining the analog of a quantum Boolean
algebra as a super-symplectic algebra.
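The fermionic oscillator operators said to define the analog of a quantum Boolean algebra can be made concrete in a toy model. The sketch below is standard second-quantization bookkeeping (Jordan-Wigner signs), not anything TGD-specific: occupation numbers play the role of truth values, and the anticommutators {a_i, a_j†} = δ_ij are checked explicitly on a 2-mode Fock basis:

```python
from itertools import product

N = 2  # number of fermionic modes; basis states are occupation bit tuples

def annihilate(i, state):
    """Apply a_i to a basis state (bit tuple); return (sign, new_state) or None."""
    if state[i] == 0:
        return None
    sign = (-1) ** sum(state[:i])  # Jordan-Wigner sign from modes before i
    return sign, state[:i] + (0,) + state[i + 1:]

def create(i, state):
    """Apply a_i^dagger to a basis state; return (sign, new_state) or None."""
    if state[i] == 1:
        return None
    sign = (-1) ** sum(state[:i])
    return sign, state[:i] + (1,) + state[i + 1:]

def apply(op, i, vec):
    """Apply op(i, .) to a vector given as {basis_state: amplitude}."""
    out = {}
    for s, amp in vec.items():
        r = op(i, s)
        if r is not None:
            sign, s2 = r
            out[s2] = out.get(s2, 0) + sign * amp
    return out

def add(u, v):
    w = dict(u)
    for s, a in v.items():
        w[s] = w.get(s, 0) + a
    return {s: a for s, a in w.items() if a != 0}

# Check {a_i, a_j^dagger} = delta_ij on every basis state:
for i, j in product(range(N), repeat=2):
    for s in product((0, 1), repeat=N):
        vec = {s: 1}
        anti = add(apply(annihilate, i, apply(create, j, vec)),
                   apply(create, j, apply(annihilate, i, vec)))
        assert anti == ({s: 1} if i == j else {})
anticommutators_ok = True
```

Note that only integer signs appear: nothing in these relations requires the imaginary unit, which is the point made later in the comments about their number theoretical universality.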
posted by Matti Pitkanen @ 7:50 PM
10 Comments:
At 4:22 AM,
Ulla said...
I must show my stupidity once again :P . One of the questions I have thought of lately is
how a quantum number as an arithmetic concept would look, for instance inside a BH or as
holography. If the 'computation' is made as something between boundaries, this would be
part of one boundary, but this is labelled by fractional aspects, that is a quantum phase.
This describes the uncertainty too, like a 'spaghettified' number.
The complex numbers are a bit alike, but they are still ordered along a line. In a BH the
'numbers' are like fractional islands, but their displacement must still obey a law, as seen in
the holography principle. This law must be a symmetry in my mind. It is like a quantum
phase.
"Preferred extremal property of Kähler action and Käler-Dirac action plus conservation
laws for charges associated with super-symplectic and other generalised conformal
symmetries would define the rules not broken in vertices."
This statement talks of protected symmetries, like skyrmions, locally protected in
hexagons. At least so, maybe even non-locally? In my mind it comes as a 'tunnelling'
effect. I am still unsure about what that 'preferred' means, if it is a property of the
symmetry itself?
If we look at symmetries, they are also in a hierarchical order, with superradiance as the most
non-local type, linear radiation, a dual and quasic structure. God herself? She is
computing continuously about every single detail that happens in the whole universe, in a
holographic way, so her brain is a mirror of our own and also functions in the same way? This
means our cortex and that symmetry are mirrors? Both have excellent supermemory :)
through that symmetry.
If you can get a gist of what I say?
Look at this about quantum phase transitions.
http://www.nature.com/ncomms/2015/150514/ncomms8111/full/ncomms8111.html
At 5:10 AM,
Ulla said...
"By strong form of holography computations use only data at string world sheets and
partonic 2-surfaces."
This can be compared to wavefunction and nodes in form of quasiparticles or Dirac
fermions?
At 8:08 AM,
[email protected] said...
I answer the latter question first. The fermions in question are Dirac fermions. They are
what I call fundamental fermions, much as in string models.
But fermions as a notion used in condensed matter physics are conceptually at a distance of
light years. They emerge as an approximate notion after the replacement of many-sheeted
space-time with that of GRT and even that of special relativity - an approximation as in the
Standard Model.
One forgets everything about string world sheets and partonic 2-surfaces as basic structures
and about many-sheeted space-time, and speaks about spinors in Minkowski space!
Quite a dramatic coarse graining, but one crucial thing is common: symmetries and
conservation laws, which are the fundamental truths respected in the physical axiomatics -
or should one say anti-axiomatics.
At 8:13 AM,
[email protected] said...
To Ulla:
I should perhaps not use the word 'God'. I try to use it with an implicit ";-)". This is because
people associate it with all kinds of properties making him/her less abstract. By the
way, the non-personal god could be NMP: a principle rather than a conscious entity ;-).
Perhaps the best manner to say this is that an endless computation-like process of
recreation is going on.
At 8:28 AM,
[email protected] said...
To Ulla:
Non-locality is present from the beginning. The starting point of TGD was that particles
correspond to 3-dimensional surfaces rather than points.
The non-locality is expressed in Bell inequalities stating that particles in quantum
mechanics have more correlations than they could have in any *purely local* theory
relying on the classical concept of probability (no interference of wave functions).
In TGD this fact has space-time correlates: strings connecting partonic 2-surfaces
accompanied by flux tubes. TGD is a non-local theory but does not try to get rid of state
function reduction like most non-local theories such as Bohm's theory. TGD only
removes unnecessary mysticism - one might call it black magic - from quantum physics.
Einstein would be happy about TGD (maybe he is ;-)). Also Bohr would be relaxed, since
he could reconsider whether it is really sensible to eliminate "ontology" from the
vocabulary altogether and replace it with the attribute "crazy" as he did.
There is no unique reality, but this does not mean that there would be no reality. There
are superpositions of quantum realities replaced by new ones all the time, as this universe
is studying itself by recreating itself again and again and storing its findings as negentropic
entanglement - the Akashic records ;-).
At 1:07 AM,
Ulla said...
Ye, the Akashic records... the prime string...
Condensed matter does not tell things that coarsely, in my mind.
"They emerge as approximate notion after the replacement of many-sheeted space-time
with that of GRT and even that of special relativity- approximation as in Standard Model."
Exactly, and this is the beautiful part of it: you get the curvature.
The 3-surfaces come from quarks?
http://www.sciencedirect.com/science/article/pii/037594749190738R
Like this?
At 8:55 AM,
[email protected] said...
GRT is an approximation to TGD: it relates to TGD like classical thermodynamics to quantum
thermodynamics.
Quarks correspond to 3-surfaces as all particles do. The 3-surfaces define classical
geometric and topological correlates for their quantal properties. In string models strings
should provide this description but, to be honest, they do not!
The link talks about something different: 3-quark systems, not 3-surfaces.
At 10:02 AM,
Ulla said...
This comment has been removed by the author.
At 10:39 AM,
Ulla said...
I must formulate again.
Ye, but mesons as diquarks then? The Higgs particle is a dipole. Why would the foundation be
a 3-surface and then go back to a dipole?
Color confinement is what you think of?
http://ptp.oxfordjournals.org/content/65/5/1684.full.pdf
The hadronic quark bag system?
How does a 3-surface define a topological correlate (should be a dipole in this 2D world)?
Is it the stability, the simple fact that they are seen? Odd numbers, asymmetry? How are
partons as 2-surfaces related?
http://www.physics.gla.ac.uk/ppt/lqcd.htm
How is this a scalar field? Geometry and topology are made of vectors?
A link would be fine, I have searched your many papers for this. It is now important for
me to know the detailed structure.
At 7:05 PM, [email protected] said...
To Ulla: I answer the questions that I understand.
Topology is the science of shape. A coffee cup and a "munkkirinkilä" (a ring doughnut) are
topologically the same thing. Geometry takes also distances into account, not only shape. A
coffee cup and a coffee cup upside down are geometrically one and the same thing.
A scalar field could be seen as a notion of differential geometry. Differential geometry is
purely local geometry: angles between vectors, for instance, but not shapes.
About 3-surfaces as topological correlates of particles: consider first 2-D surfaces. Take a
plane and add small handles. You can move these handles around. They are like particles -
particles as topological inhomogeneities in a 2-D world. Generalize this to the 3-D case and
you get what I mean: the reduction of particles to space-time topology.
Saying that geometry is made of vectors is putting apples and oranges in the same basket.
05/07/2015 - http://matpitka.blogspot.com/2015/05/breakthroughs-in-numbertheoretic.html#comments
Breakthroughs in the number theoretic vision about TGD
Number theoretic universality states that besides reals and complex numbers also p-adic number
fields are involved (they would provide the physical correlates of cognition). Furthermore, scattering
amplitudes should be well-defined in all number fields and be obtained by a kind of algebraic continuation. I
have introduced the notion of the intersection of realities and p-adicities, which corresponds to some
algebraic extension of rationals inducing an extension of p-adic numbers for any prime p. Adelic physics
is a strong candidate for the realization of the fusion of real and p-adic physics and would mean the
replacement of real numbers with adeles. Field equations would hold true for all number fields, and the
space-time surfaces would relate very closely to each other: one could say that p-adic space-time
surfaces are cognitive representations of the real ones.
I have had also a stronger vision which is now dead. This sad event however led to a discovery of
several important results.
1. The idea has been that p-adic space-time sheets would be not only "thought bubbles"
representing real ones but also correlates for intentions, and the transformation of intention to
action would correspond to a quantum jump in which a p-adic space-time sheet is
transformed to a real one. Alternatively, there would be a kind of leakage between the p-adic and
real sectors, and the cognitive act would be the reversal of this process. It did not require much critical
thought to realize that taking this idea seriously leads to horrible mathematical challenges. The
leakage makes sense only in the intersection, which is number theoretically universal, so that there
is no point in talking about leakage. The safest assumption is that the scattering amplitudes are
defined separately for each sector of the adelic space-time. This means an enormous relief, since
there exists mathematics for defining adelic space-time.
2. This realization allows one to clarify thoughts about what the intersection must be. The intersection
corresponds by the strong form of holography to string world sheets and partonic 2-surfaces at which
spinor modes are localized, for several reasons: the most important are that em charge
must be well-defined for the modes and that octonionic and real spinor structures can be equivalent at
them, making possible twistorialization both at the level of imbedding space and its tangent
space.
The parameters characterizing the objects of WCW are discretized - that is, they belong to an
appropriate algebraic extension of rationals - so that the surfaces are continuous and make sense in
the real number field and in p-adic number fields. By conformal invariance they might be just
conformal moduli: Teichmüller parameters and positions of punctures for partonic 2-surfaces, and
corners and angles at them for string world sheets. These can be continued to the real and p-adic
sectors.
3. Fermions are correlates for Boolean cognition, and the anti-commutation relations for them are
number theoretically universal - even their quantum variants, when the algebraic extension allows a
quantum phase. Fermions and Boolean cognition would reside in the number theoretically
universal intersection. Of course they must do so, since Boolean thought and cognition in general
are behind all mathematics!
4. I proposed this already two decades ago in connection with p-adic mass calculations. This would be a wonderful
simplification of the theory: by conformal invariance WCW would reduce to a finite-dimensional
moduli space as far as calculations of scattering amplitudes are considered. The testing of the
theory requires classical theory and 4-D space-time. This holography would not mean that one
gives up space-time: it is necessary. Only cognitive and, as it seems, also fundamental sensory
representations are 2-dimensional. All that one can mathematically say about reality uses
data at these 2-surfaces. The rest is needed too, but it requires mathematical thinking and
transcendence! This view is totally different from the sloppy and primitive philosophical idea
that continuous space-time could somehow emerge from something discrete.
This has led also to modify the ideas about the relation of real and p-adic physics.
1. The notion of p-adic manifolds was hoped to provide a possible realization of the
correspondence between real and p-adic numbers at the space-time level. It relies on the notion of
canonical identification, mapping p-adic numbers to real numbers in a continuous manner, and realizes
finite measurement resolution at the space-time level. The p-adic length scale hypothesis emerges from the
application of p-adic thermodynamics to the calculation of particle masses but generalizes to all
scales.
2. The problem with p-adic manifolds is that the canonical identification map is not a general
coordinate invariant notion. The hope was that one could overcome the problem by finding
preferred coordinates for the imbedding space. Linear Minkowski coordinates or Robertson-Walker
coordinates could be the choice for M4. For CP2, coordinates transforming linearly under U(2)
suggest themselves. The non-uniqueness however persists, but one could argue that there is no
problem if the breaking of symmetries is below measurement resolution. The discretization is
however also non-unique and makes the approach look ugly to me, although the idea of a p-adic
manifold as a cognitive chart still looks nice.
3. The solution of the problems came with the discovery of an entirely different approach. First,
one realizes the discretization at the level of WCW, which is more abstract: the parameters
characterizing the objects of WCW are discretized - that is, assumed to belong to an appropriate
algebraic extension of rationals - so that the surfaces are continuous and make sense in the real number
field and in p-adic number fields.
Secondly, one can use the strong form of holography stating that string world sheets and partonic
2-surfaces define the "genes of space-time". The only thing needed is to extend these 2-surfaces by
algebraic continuation to 4-surfaces defining preferred extremals of Kähler
action - real or p-adic. Space-time surfaces have vanishing Noether charges for a sub-algebra of the
super-symplectic algebra with conformal weights coming as n-ples of those for the full algebra - the
hierarchy of quantum criticalities, Planck constants, and dark matters!
One does not try to map real space-time surfaces to p-adic ones to get cognitive charts, but maps the
2-surfaces defining the space-time genes to both real and p-adic sectors to get adelic space-time!
The problem with general coordinate invariance at the space-time level disappears totally, since one
can assume that these 2-surfaces have rational parameters. One has discretization in WCW
rather than at the space-time level. As a matter of fact, this discretization selects the punctures of partonic
2-surfaces (and the corners of string world sheets) to be algebraic points in some coordinatization but in a
general coordinate invariant manner.
4. The vision about the evolutionary hierarchy as a hierarchy of algebraic extensions of rationals
inducing those of p-adic number fields becomes clear. The algebraic extension associated with the
2-surfaces in the intersection is in question, and the extensions associated with them
become more and more complex in evolution. Of course, NMP, negentropic entanglement (NE)
and the hierarchy of Planck constants are involved in an essential manner too. Also the measurement
resolution, characterized by the number of space-time sheets connecting an average partonic
2-surface to others, is a measure of "social" evolution.
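The canonical identification invoked in point 1 above has a simple closed form: it maps a p-adic number written as sum a_k p^k to the real number sum a_k p^(-k). A minimal sketch for non-negative integers (the digit conventions are mine):

```python
from fractions import Fraction

def canonical_identification(n, p):
    """Map a non-negative integer, read as the p-adic number sum a_k p^k
    given by its base-p digits a_k, to the real number sum a_k p^(-k)."""
    x, k, out = n, 0, Fraction(0)
    while x:
        x, a = divmod(x, p)           # peel off the digit a_k
        out += Fraction(a, p**k)      # send p^k to p^(-k)
        k += 1
    return out

# Continuity in the p-adic sense: numbers differing only in a high power
# of p map to reals that are close in the ordinary sense.
print(canonical_identification(3, 2))          # 3 = 1 + 2        -> 1 + 1/2 = 3/2
print(canonical_identification(3 + 2**10, 2))  # differs by 2^10  -> 3/2 + 2^(-10)
```

A p-adically small perturbation (adding 2^10) moves the image by only 2^(-10), which is the continuity property the text relies on.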
There are two questions which I have tried to answer during these two decades.
1. What makes some p-adic primes preferred, so that one can say that they characterize elementary
particles and presumably any system?
2. What is behind the p-adic length scale hypothesis emerging from p-adic mass calculations and
stating that primes near but slightly below powers of two are favored physically, Mersenne primes in
particular? There is support for a generalization of this hypothesis: also primes near powers of 3
might be favored as length and time scales, which suggests that powers of primes
quite generally are favored.
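The primes the second question refers to can be enumerated directly. A sketch (the Miller-Rabin routine is my choice of tool, not from the text; it never rejects a true prime, and this base set is deterministic well beyond the small Mersenne candidates):

```python
def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test with a fixed base set (deterministic
    for n < 3.3e24 and reliable in practice far beyond)."""
    if n < 2:
        return False
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for b in bases:
        x = pow(b, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# Mersenne exponents up to 127: k such that 2^k - 1 is prime.
mersenne_k = [k for k in range(2, 128) if is_prime(2**k - 1)]

# The electron's p-adic prime in the text: M127 = 2^127 - 1 ~ 1.7e38,
# i.e. roughly 10^38 discrete degrees of freedom, 127 bits.
m127 = 2**127 - 1
```

This reproduces the classical list 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127 and confirms the order of magnitude 10^38 quoted for the electron's Mersenne prime M127.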
The adelic view led to answers to these questions. The answer to the first question had been staring
me in the face for more than a decade.
1. Algebraic extensions of rationals allow so-called ramified primes. A rational prime decomposes
into a product of primes of the extension, but it can happen that some prime of the extension appears
with a power higher than the first. In this case one talks about ramification. The product of the ramified
rational primes defines an integer characterizing the ramification; the extension allows a similar
characteristic. Ramified primes are an extremely natural candidate for the preferred primes of an
extension (I know that I should talk about prime ideals - sorry for the sloppy language). That the
preferred primes could follow from number theory itself I had not thought of earlier: I had tried to
deduce them from physics. One can assign the characterizing integers to the string world sheets
to characterize their evolutionary level. Note that the earlier heuristic idea that a space-time surface
represents a decomposition of an integer is indeed realized in terms of holography!
2. Also infinite primes seem to finally find their place in the big picture. Infinite primes are
constructed as an infinite hierarchy of second quantizations of an arithmetic quantum field theory.
The infinite primes of the previous level label the single fermion and boson states of the new
level, but also bound states appear. Bound states can be mapped to irreducible polynomials of n
variables at the nth level of the hierarchy, obeying some restrictions. It seems that they are polynomials of a
new variable with coefficients which are infinite integers of the previous level.
At the first level, bound state infinite primes correspond to irreducible polynomials: these
define irreducible extensions of rationals, and as a special case one obtains those satisfying the so-called
Eisenstein criterion: in this case the ramified primes can be read directly from the form of
the polynomial. Therefore the hierarchy of infinite primes seems to define algebraic extensions of
rationals, those of polynomials of one variable, etc. What this means from the point of view of physics is
a fascinating question. Maybe the physicist must eventually start to iterate second quantization to
describe systems in many-sheeted space-time! The marvellous thing would be the reduction of
the construction of bound states - the really problematic part of quantum field theories - to
number theory!
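The Eisenstein criterion mentioned above is mechanical to check: for a_0 + a_1 x + ... + a_n x^n one needs a prime p dividing every a_i with i < n, not dividing a_n, and with p^2 not dividing a_0; such a p is then ramified in the extension the polynomial defines. A small sketch (helper names are mine):

```python
from math import gcd

def prime_factors(n):
    """Set of prime factors of |n| (trial division; fine for small inputs)."""
    n = abs(n)
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def eisenstein_primes(coeffs):
    """Primes p at which the polynomial with coefficients coeffs
    (low degree to high) satisfies the Eisenstein criterion."""
    lead, a0, lower = coeffs[-1], coeffs[0], coeffs[:-1]
    g = 0
    for a in lower:
        g = gcd(g, abs(a))  # p must divide every coefficient below the leading one
    return {p for p in prime_factors(g)
            if lead % p != 0 and a0 % (p * p) != 0}

# x^2 - 2 is Eisenstein at p = 2; 2 is (totally) ramified in Q(sqrt(2)).
print(eisenstein_primes([-2, 0, 1]))     # {2}
# x^3 - 9x + 3 is Eisenstein at p = 3.
print(eisenstein_primes([3, -9, 0, 1]))  # {3}
```

For x^2 + x + 1 the routine correctly returns the empty set: the criterion applies to no prime, even though the polynomial is irreducible.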
The answer to the second question requires what I call weak form of NMP.
1. The strong form of NMP states that the negentropy gain in a quantum jump is maximal: the density
matrix decomposes into a sum of terms proportional to projection operators, and one chooses the
sub-space for which the number theoretic negentropy is maximal. The projection operator containing the
largest power of prime is selected. The problem is that this does not allow free will in the sense in
which we tend to use the term: to make wrong choices!
2. Weak NMP allows one to choose any projection operator and any sub-space of the
sub-space defined by the projection operator - even a 1-dimensional one, in which case the standard
state function reduction occurs and the system is isolated from the environment as a price for the sin!
The weak form of NMP is not at all as weak as one might think. Suppose that the maximal projection
operator has dimension nmax which is a product of a large number of different but rather small
primes. The negentropy gain is then small. If it is possible to choose n = nmax − k which is a power of
prime, the negentropy gain is much larger!
It is largest for powers of prime defining n-ary p-adic length scales. Even more, large primes
correspond to a more refined p-adic topology: p = 1 (one could call it a prime) defines the discrete
topology, p = 2 defines the roughest p-adic topology, and the limit p→∞ is identified by many
mathematicians in terms of reals. Hence large primes p < nmax are favored. In particular, primes
near but below powers of prime are favored: this is nothing but a generalization of the p-adic length
scale hypothesis from p = 2 to any prime p.
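The claim that a prime-power dimension gives a much larger negentropy gain can be illustrated numerically. The sketch below uses my reading of the number-theoretic entropy, which should be treated as an assumption: for uniform probabilities 1/n the p-adic entropy is −k_p·log p, where p^k_p exactly divides n, and the negentropy is its maximum over the primes dividing n.

```python
from math import log

def prime_power_factors(n):
    """Factor n into {prime: exponent} by trial division."""
    out, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            out[d] = out.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def negentropy(n):
    """Number-theoretic negentropy of an n-dimensional maximally entangled
    state under the assumed definition: max of k_p * log p over p | n."""
    return max(k * log(p) for p, k in prime_power_factors(n).items())

# nmax a product of many small distinct primes -> small negentropy gain;
# a nearby prime power or prime -> much larger gain, as claimed in the text.
n_max = 2 * 3 * 5 * 7          # 210
print(negentropy(n_max))       # log 7    ~ 1.95
print(negentropy(128))         # 7 log 2  ~ 4.85
print(negentropy(199))         # log 199  ~ 5.29 (prime below nmax)
```

Under this definition a prime power like 128, or a large prime like 199, beats the composite 210 even though its dimension is smaller, which is the point of the argument above.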
See the article What Could Be the Origin of Preferred p-Adic Primes and p-Adic Length Scale
Hypothesis? For a summary of earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 9:08 PM
40 Comments:
At 3:34 AM,
Anonymous said...
Hmm. So what about the phenomenal obviousness of ALL mathematical thinking and
believing (in various number theories, axioms and axiomatics etc.) happening as
"cognitive acts"?
At 6:37 AM,
[email protected] said...
Looks sensible. The Universe evolves to become more and more algebraically complex and becomes
conscious of mathematical truths. There must however be something which cannot be
reduced by the strong holography to the 2-D basic objects (string world sheets and partonic
2-surfaces in the intersection of realities and p-adicities). We are aware of space-time,
we are aware of the idea of continuum, we are aware of completions of rationals and
their extensions. This is something that goes beyond formulas.
At 8:03 PM,
Stephen said...
What about these?
http://www.encyclopediaofmath.org/index.php?title=Wilson_polynomials
They have an interpretation as Racah coefficients for tensor products of irreducible
representations of the group SU(2).
At 5:37 AM,
Anonymous said...
http://plato.stanford.edu/entries/fictionalism-mathematics/
At 6:47 AM,
Anonymous said...
Number theoretical universality of Boolean logic is highly questionable not only from
fictionalist view but even inside the Platonist credo. In 'Sophist' Plato shows the
codependent dialectics of fundamental categories ("sameness", "difference", "movement",
"stillness", etc.), not unlike what Buddha and Buddhist logic says about codependent
origins. So also in the Platonist approach, according to Plato himself, also geometry and
number theory, not to mention classical bivalent Aristotelean/boolean logic, are not
"outside" or "independent" of codependent origins.
All claims of all kinds of "universalities" can also be questioned on the basis of cultural
anthropology and the study of human world views. In this approach, math based on
bivalent boolean _functions_ is not a cultural universal, but just one narrative that is not
universally accepted even among mathematicians and logicians of Western culture.
Especially if they accept Gödel's proof that the number theory of PM and those similar to
it are incomplete, and cannot be consistently structured according to boolean or any other
bivalent true OR false logic.
Bohr's complementary philosophy is certainly not boolean, and allows both "true" and
"false" as complementary "contradictions" and even as codependent views.
According to Bohr's interpretation, this non-universal Gödel-incomplete number theory
(that schools and universities teach and indoctrinate in) is just part of the "classical world"
that as a whole "measures" superposition. And as such, also number theory is ultimately a
matter of choice...
At 6:51 AM,
Anonymous said...
"As the Buddha may or may not have said (or both, or neither): ‘There are only two
mistakes one can make along the road to truth: not going all the way, and not starting.’"
The whole article is worth a read:
http://aeon.co/magazine/philosophy/logic-of-buddhist-philosophy/
At 5:36 AM,
[email protected] said...
To the anonymous about number theoretic universality etc.: I cannot but disagree. Gödel's
proof tells that complete axiom systems are not possible: the pool of truth is infinitely deep.
Something very easy to accept. Gödel does say that Boolean logic is somehow wrong. I
see this as a mathematician, not as an anthropologist.
For me, number theoretical universality means just that it is the superstructure behind all
mathematics. This is a simple fact. What is interesting is that it has a concrete and highly
nontrivial physical meaning when fermions are identified as physical correlates for Boolean
thinking. Anticommutation relations for fermions are number theoretically universal and
make sense in any number field. Bosonic commutation relations are not, since they involve
the imaginary unit.
One can of course extend Boolean logic to a quantum Boolean version, and the fermionic
representation does this.
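The contrast drawn here can be written out explicitly. Below is a sketch of the standard oscillator relations; the reading of fermions as Boolean bits is TGD-specific, and the remark about the imaginary unit refers to the canonical commutator:

```latex
% Fermionic anticommutation relations have purely rational structure
% constants, so they make sense in any number field, real or p-adic:
\{ b_m, b_n^\dagger \} = \delta_{mn}, \qquad
\{ b_m, b_n \} = \{ b_m^\dagger, b_n^\dagger \} = 0 .
% Bosonic quantization, by contrast, involves the imaginary unit via
% the canonical commutator, so its number-theoretic meaning depends
% on whether the field contains i:
[\, q, p \,] = i\hbar .
```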
At 8:44 PM, [email protected] said...
Sorry for typo: Godel does NOT say that Boolean logic is somehow wrong.
Sorry for other typos too: these texting programs have their own vision about what I
want to say. In any case: the essence of what I am saying is that fermionic
anticommutation relations make sense in any number field - real or p-adic.
This is one facet of number theoretical universality and has a nice interpretation: one
can construct endless variants of Boolean logic and call them logics but these
constructions always rely on Boolean logic. You simply cannot think mathematically
without Boolean logic. Fermions serving as correlates of Boolean logic define also a
square root of geometry. This is a deep connection between logic and geometry: they
really are the two basic pillars of existence.
Buddhist philosophy is nice but my goal is not to transform buddhist writings into
mathematical statements. I regard myself as a scientist and for a scientist authorities of any
kind are poison. Kill the Buddha if you meet him on the road;-).
At 9:03 PM, Stephen said...
https://statistics.stanford.edu/sites/default/files/2000-05.pdf
see theorem 4.1, there's the 4 outcomes thing
0, -0.5, +0.5, 1
At 11:34 PM, Ulla said...
Gödel's problem is an outcome of the 3-body problem if I have understood it right. This is
also an effect of the 'collapse'?
2D quantum systems are always wider, bigger with more uncertainty, like some 'fuzzy
logic'. Maybe the density differences this brings along can be something to work on? At
least now this is how I want to see the hierarchy of Planck constants. Note how that
density shifts when the Ouroboros bites its own tail :P Also the uncertainty shifts.
This is the essence of topology?
At 2:00 AM,
[email protected] said...
To Ulla:
Godel's theorem belongs to meta-mathematics. It is difficult to imagine any physics
application for it.
Effective 2-dimensionality means holography. It might be seen as an information theoretic
notion: 4-D space-time is a redundant description if only scattering amplitudes are
considered: 2-D basic objects are enough for this purpose. Classical correlates for quantum
physics require 4-D space-time and one cannot throw it away.
At 8:11 AM,
Anonymous said...
In Boolean terms the proposition "Boolean thought and cognition in general is behind all
mathematics!" is not just false, it is false!
At 7:20 PM, [email protected] said...
Before making this kind of statements, we must define what "behind" means in the
above statement. "Behind" of course does not mean that all mathematics reduces to
Boolean algebra.This would be absolutely idiotic.
The natural meaning of "behind" is that all mathematics relies on deductions, which
obey the rules of Boolean logic. p-Adic logic is not different from real one. The logic of
differential geometry is not different from the logic of probability calculus.
If you can give example about a mathematical theory where deductions are carried out
by using some other more romantic logic, please do so. I would be happy to get a concrete
illustration about how my statement is false: I would prefer using Boolean logic. Just a
concrete mathematical statement instead of something about Buddhism, egos, or
anthropology. After that I am ready to consider seriously what you are claiming.
At 4:01 AM,
Anonymous said...
On the contrary, I believe the beauty of pure mathematics is very much in strict
adherence to rigorous deductive logic based on clear definitions.
The freedom of choice in rigorously beautiful math is not in ad hoc axioms (to
postulate set theory, real numbers etc. inherently illogical "completions of infinite
processes"). The freedom of choice is at the foundational level, how we create and
organize and deductively prove our number theory from a blank "zero state".
And you see that there are many areas of modern math, such as calculus etc. that throw
away rigorous deductive logic and justify themselves by black magic alone, which also
"works".
If you really prefer using rigorous deductive logic, you take side with Berkeley and
Beauty and renounce the heretics Newton and Leibniz and their followers in the shadowy
art of self-deception, without trying to escape into any kind of authoritarian argumentation
(cf. "authoritarians rule academia", "black magic works", etc.).
Only after we agree to do also this bivalent deductive logic properly instead of by
deception and black magic axiomatics, we can proceed to thinking about the foundational
level and "middle-way-math" that would avoid all the following extremes:
a, b, both a and b, neither a nor b.
At least in the areas of math that involve measuring, boolean valuing does not work,
as things-events appear to be inherently approximate and fuzzy, and a number theory based
on middle-path-logic already at the foundational level might be worth a thought.
At 4:23 AM,
[email protected] said...
If you believe in rigorous deductive logic, then the first thing to do is to start to apply
it. You can begin by justifying your claims by arguments. Saying "plainly wrong" without
any justification is only an emotional rhetorical burst.
When you say inherently illogical "completions of infinite processes" you must tell
what "inherently illogical" means. I find it very difficult to see what this phrase means when
you talk about completions.
Mathematics without axioms is like Munchausen lifting himself into the air. You simply
cannot prove anything without axioms. They are a bookkeeping device for what we
believe to be truths. From a blank zero state you cannot deduce anything.
I like argumentation with contents. When someone starts to talk about "self-deception",
"authoritarian argumentation", "black magic"… I smell the presence of bad rhetoric.
Our observations about things are fuzzy, not the things themselves. I would be happy to see
a number theory based on middle-valued logic but I am afraid that this is only a nice-sounding
verbal construct like mathematics without axioms.
Quantum logic based on qubits is beautiful but logical statements make sense only after the
state function reduction yielding well-defined bits has happened. Our conscious
experience makes the world classical.
At 5:27 AM,
Anonymous said...
Then please, do your best to discuss the content. Even if the only content is your
confusion.
:)
"Infinite process" by definition means that the process (e.g. algorithm) continues ad
infinitum, does not get finitely completed. I hope this clarifies what is illogical about
"completions of infinite processes" in terms of binary logic. Based on this we can
conclude that Boolean bivalent either-or valuing applies strictly only to finite phenomena,
not to infinite processes and approximations, e.g. by some processes of limitation.
Hence, any area of mathematics that deals with infinite processes in any way is not
'boolean' in the strict sense. Boolean thought and cognition can apply only to finite,
bivalent processes that involve a Boolean identity.
"Axioms" are today used with a variety of meanings. What was criticized was "ad hoc"
axioms used to postulate what a mathematical physicist _wants_ to please himself with
when rigorous deductive logic otherwise fails to produce the object of desire, not the
foundational level axioms e.g. in Euclidean sense. Mathematics is a field of applied magic,
and such use of ad hoc axioms is black magic.
"Our observations about things are fuzzy, not things." This is purely a statement of
personal metaphysical belief, faith in "things" having "inherent ontology". That statement
has nothing to do with logic, math and science.
Notions of 'length', 'area', 'volume', 'position', 'momentum' etc. are strictly speaking neither
'observations' nor 'things', but in this context just abstract mathematical notions, which in
the mathematics we are used to do not behave in boolean way.
At 6:22 AM,
[email protected] said...
It is a pity that your comments are getting increasingly emotional and rhetorical.
Mathematics probably looks like black magic to anyone who does not understand it. There is
a lot of mathematics which looks like black magic to me, but by working hard I can get rid of
this impression. I do not want to blame mathematics for my own limitations.
I will comment those parts of your comment which have some content.
*As mathematical structures, Boolean algebras extend without difficulty to the continuous
case: consider the set theoretic realisation. p-Adics are a completely well-defined notion and a
2-adic number can be seen as an infinite sequence of binary digits.
What is important is that the pinary digits are ordered: the higher the digit, the lower its
significance. This is the basic idea of metric topology and makes it possible to work with
the continuum. Mathematics without the continuum reduces to mere combinatorics.
*Completions of rationals to various number fields are a standard mathematical concept
and extremely successful: if someone wants to believe that this notion is mathematically
flawed, he can do so but certainly remains a lonely believer.
In any case, mathematicians have discovered over the centuries that the notions of
nearness, limit, continuity, Cauchy sequence and smoothness are extremely useful
and allow one to conclude the outcome of infinite processes. Very useful notions also for a
philosopher of mathematics and highly recommended;-)
*Conscious logical thought - at least that part of it which is representable physically - is
discrete. Discreteness is one of the basic aspects of cognition - I formulate this in terms of
cognitive resolution implying in turn finite measurement resolution.
*We should be careful not to project the limitations of our cognition onto the physical and
mathematical realities. Materialists do this: they try to identify consciousness and physical
reality and end up in a dead end.
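The first point above, that higher pinary digits carry lower significance, can be made concrete with the 2-adic valuation and norm. This is a minimal sketch assuming only the standard definitions; the helper names are mine:

```python
from fractions import Fraction

def two_adic_valuation(q: Fraction) -> int:
    """Exponent of 2 in q: largest k with 2**k dividing the numerator,
    negative when 2 divides the denominator."""
    if q == 0:
        raise ValueError("the valuation of 0 is +infinity")
    n, d = q.numerator, q.denominator
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    while d % 2 == 0:
        d //= 2
        v -= 1
    return v

def two_adic_norm(q: Fraction) -> Fraction:
    """|q|_2 = 2**(-v(q)): higher powers of 2 are *smaller* 2-adically."""
    return Fraction(0) if q == 0 else Fraction(1, 2) ** two_adic_valuation(q)

# 1024 = 2**10 is 2-adically tiny although large as a real number:
print(two_adic_norm(Fraction(1024)))     # 1/1024
print(two_adic_norm(Fraction(1, 1024)))  # 1024
```

A 2-adic expansion converges because adding ever higher binary digits changes the number by ever smaller 2-adic amounts, which is exactly the sense in which "the higher the digit, the lower its significance".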
At 7:48 AM,
Anonymous said...
The only real argument presented above is "useful", which is a purely emotional and
rhetorical argument. "Useful" is exactly what is meant by "black magic", in contrast to the
magic and beauty of rigorous mathematical deduction. Argument from authority is to refer
to what others think and do, instead of establishing a well defined theory of real numbers
based on foundational axioms and a Boolean chain of deductions and proofs. If you want to
play the game of Boolean mathematics, play it honestly and don't cheat at every corner.
In the Boolean context, it seemingly takes a complex self-deception to lose sight of the
simple fact that there is indeed a significant arithmetic symmetry break. As the carry
rules of basic arithmetic state, Cauchy numbers of the form ...nnn,p + ...nnn,p make
arithmetic sense in the boolean context (i.e., they can be added), but irrational Cauchy
numbers n,nnn... + n,nnn... do not add up but remain vague and non-Boolean.
Unless, of course, you can prove that irrational Cauchy sequences do add up in a finite
lifetime calculation in a discrete non-vague manner and can be given a Boolean value. Go
ahead, give it a try.
At 9:17 AM,
[email protected] said...
I think it is time to stop the discussion since you seem to be in rebellion against
mathematics and theoretical physics: windmills would be a less dangerous enemy. People
who argue that entire branches of science are totally wrong are usually called crackpots: I
do not like that word because it is so much misused.
I have met many people who have declared war against some branch of well-established
science: one was a logician who had the fixed idea that special relativity contains logical errors:
he had ended up with this conclusion by interpreting special relativity in a Newtonian
framework. I tried to explain but in vain.
You seem to misunderstand the idea of Cauchy sequence in a manner which remains
for me black magic.
*Cauchy sequences provide a formulation of continuity: I fail to see how you manage to
assign to them a Boolean interpretation.
*You talk also about Cauchy numbers: by looking at Wikipedia you see that it is a
dimensionless parameter used in hydrodynamics. I honestly admit that I fail to see the
connection to the notion of continuity.
*Also the addition of Cauchy sequences is to the best of my knowledge completely irrelevant
for the notion of limit.
At 9:28 AM,
[email protected] said...
Some comments about axiomatics. This is a technical tool for mathematicians. The
unavoidable bureaucracy, one might say.
A theoretical physicist who is really constructing a theory is working hard to find a
minimal number of basic assumptions, which might be true. Trying to find a set of
assumptions which forms a coherent whole, is internally consistent, and predicts as much
as possible.
This is a process of trial and error and there is no point in declaring wars against
mathematics or any other branch of science. This activity could not be farther from
mechanical deduction from a fixed set of axioms, which requires just algorithms and in
principle can be carried out by computer.
Theoretical physics has indeed led to powerful insights about mathematics: consider
only Witten's work. Recently Nima Arkani-Hamed and colleagues (mathematicians) have
done a similar job. This happens by working as a visionary: mathematicians take care of the details
when the dust has settled: this can take centuries.
The theoretician of course hopes that some day this structure can be axiomatized so that
even an average professor can use the rules to calculate.
At 9:53 AM,
Anonymous said...
No, it's not about politics, the question is very simple. Are "real numbers" numbers in the
boolean sense or combinatorial noise?
Present real numbers in the form of hindu-arabic cauchy sequences in base 2. Pick a
pair of such real numbers and add them up. Do you get a discrete result that starts with
either one or zero?
AFAIK, no, and if not otherwise proven, hence "real numbers" cannot be said to be
numbers in the boolean sense.
And as real numbers cannot be said to be definable in the boolean sense, that goes also for
the real complex plane, complex manifolds etc.
I don't know what the hell those things are, but they are certainly not "Boolean thought
and cognition", presumably meaning numbers that can be expressed as either 1 or 0.
You can twist and dance around and play your politics and war games as much as you
want - and it is sad to see you try so hard not to admit what is so obvious - but it's just
math. This is just math, and if we choose to play the Boolean game, we play it by Boolean
rules, otherwise we would cheat.
Also in math you need to learn to walk before you can run. When you try to run before
you have learned to take baby steps, you make a glorious short dash and then end up with
your face in mud. And IMHO that summarizes the state of contemporary academic
mathematics.
At 10:47 AM,
Anonymous said...
PS: IFF mathematics and theoretical physics claim to rule and conquer and control either
openly or by implication, of course I rebel, as any honest self-loving man would. :)
Thus, experiencing does not reduce to, nor is it limited by, mathematics and theoretical
physics, not even TGD. A 2D representation of music is not the same as picking up a guitar in
your lap and playing music that never was and never will be.
At 11:20 AM,
Stephen said...
Anonymous, you are talking out of your ass. Stop wasting Matti's time
At 11:21 AM,
Anonymous said...
Now that we have hopefully left the dogmatic trenches of warfare and politics and are
taking our baby steps, the structure of the "real line" (-field) is as such an interesting object of
study. In binary the sum of two points on the real line (almost all of which are non-algebraic)
is at least in most cases not:
0
1
both 0 and 1
neither 0 nor 1
but avoids all these extreme positions. ;)
So the notion of qubit seems now inherently related with adding elements of the non-Boolean
"completion" of the rational line.
Also, it would seem that the bigger the number base, the smaller the room for vagueness.
What would be the situation with a base of the Biggest Known Mersenne? Is there some kind of
structural relation with the "modern interpretation" of consistent histories and the questions it
claims to allow and exclude?
At 1:50 PM, Stephen said...
see http://www.encyclopediaofmath.org/index.php/Algebra_of_sets for instance, it's also
called a σ-field or σ-algebra; they can be unconditional, or conditional upon all sorts of other
spaces
go back and read the link I posted
Theorem 4.1
and quit babbling your wordy nonsense, Anonymous
random matrix theory alone cannot do it, the primes must enter in some way, and this
spectral signature is universal for any and all unitary processes, apparently, if "U" know
how to look, amirite M8? :)
At 4:46 PM, Anonymous said...
Stephen, I have checked the link and now rechecked and found this gem under Proposition
5.1:
"Remark. From the equality, the infinite sum of squares converges to an _almost surely_
finite limit."
I humbly suggest that the linguistic expression "almost surely" is a pretty sure tell that the
math in question has moved from the confines of Boolean values or "Boolean cognition"
to somewhere else. ;)
Or maybe you can show that randomly picked real numbers in base 2, let's say 0,000...
and 0,000..., without assuming that they are rationals, do really sum up in Boolean terms,
i.e. the sum is either a number beginning with 0 or 1, but not both or neither or something
even more weird like a "qubit"?
If you can't, we can't honestly say that "Boolean cognition" is behind claimed
mathematical structures such as "real number line", and that the proof theory which is used
to postulate such structures is a Boolean proof theory. And same goes for set theory.
At 5:15 PM, Stephen said...
https://statistics.stanford.edu/sites/default/files/2001-01.pdf is
"Unitary correlations and the Feijer kernel" is very interesting.
also
very
interesting
Your statement of "randomly picked real numbers in base 2" is ill-posed, are you just
trying to reinvent some "floating-point" representation?
I'm saying its something more "weird" like a qubit.
nowhere in the thing u described do I see any sort of room for time, much less
deterministic or nondeterministic notions of system states.
Fuzzy you say? nonsense, classical mechanics chaos, u take a Poincaré section of the
flow and each orbit punctures that section at a particular point, repeat this many times (by
dynamical system evaluation of integrals etc with whatever conditions) and you end up
with a process whose "output" can take on a discrete number of values relative to the
reference measure, well, basically, one can easily prove things such as Cantor, Levy 'dust'
etc and go into fractal dimension which takes on any real value, so your argument is really
just wildly stabbing in some direction or another, trying to talk big it seems...
this Boolean aspect of any set whatsoever requires the concept of indicator function,
I(A)=1 if A in omega, 0 if its not in omega. integral. do you speak it? apparently not.
you sound like an I/T guy... are you an I/T guy? ;-)
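The indicator-function point can be made concrete: under indicators, Boolean set operations become ordinary arithmetic on {0,1}-valued functions. This is a standard fact; the code below is my own minimal sketch:

```python
def indicator(A):
    """Return the indicator function I_A with I_A(x) = 1 if x is in A, else 0."""
    return lambda x: 1 if x in A else 0

omega = set(range(6))          # the ambient set
A, B = {1, 2, 3}, {3, 4}
I_A, I_B = indicator(A), indicator(B)

for x in omega:
    # Boolean set algebra becomes arithmetic on indicator values:
    assert I_A(x) * I_B(x) == indicator(A & B)(x)      # intersection = product
    assert max(I_A(x), I_B(x)) == indicator(A | B)(x)  # union = max
    assert 1 - I_A(x) == indicator(omega - A)(x)       # complement = 1 - I
print("indicator algebra verified on omega")
```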
At 5:57 PM, Stephen said...
See https://app.box.com/files/0/f/0/1/f_30073473869 for some interesting ways that
numbers 7 and 24 just pop out of some rather elementary integrals
At 7:09 PM, [email protected] said...
I do not want to use my time to ponder whether there is some conspiracy of power
greedy mathematicians and physicists against civilized world. I just want to make clear
some elementary things about Cauchy sequences in hope that they remove the feeling of
black magic.
a) Cauchy sequences are used by everyone who can sum, subtract, multiply and divide
decimal numbers. These are a special kind of Cauchy sequence in which the nth term is the
number in the approximation using n decimal digits. One can also use binary and
much more general sequences.
These particular sequences are however convenient since all arithmetic operations are
carried out for rational numbers.
b) In numerics one introduces a decimal/pinary/... cutoff. This makes sense if the
functions are continuous and the operations for them respect continuity.
c) If one wants to formulate this axiomatically one can say that one works in the
category of continuous functions. Absolutely no crime against mankind is involved.
Everything is finite and the numbers of operations are finite but approximate with an error that
can be estimated. Computers routinely use binary Cauchy sequences with success.
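Point (a) can be sketched in code: the nth term of the decimal Cauchy sequence for sqrt(2) is its n-digit truncation, every term is an exact rational, and consecutive terms differ by less than 10^-n, which is precisely the Cauchy property. A minimal illustration (the function name is mine):

```python
from fractions import Fraction

def sqrt2_truncated(n_digits: int) -> Fraction:
    """nth term of the decimal Cauchy sequence for sqrt(2): the largest
    n-digit decimal d with d*d <= 2, found with exact integer arithmetic."""
    scale = 10 ** n_digits
    target = 2 * scale * scale
    k = int(target ** 0.5)          # float guess, then exact correction:
    while k * k > target:
        k -= 1
    while (k + 1) * (k + 1) <= target:
        k += 1
    return Fraction(k, scale)

terms = [sqrt2_truncated(n) for n in range(1, 8)]   # 1.4, 1.41, 1.414, ...
# Consecutive terms differ by less than 10**(-n): the Cauchy property.
assert all(0 <= terms[i + 1] - terms[i] < Fraction(1, 10 ** (i + 1))
           for i in range(len(terms) - 1))
```

Each term is a rational number, so all arithmetic along the way is exact; the real number sqrt(2) is the limit, never any single term.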
d) One could of course throw away real numbers as a conspiracy against mankind and
decide to use only rationals (I do not know whether algebraic numbers are also doomed
to be part of the conspiracy). This leads to difficulties.
One must define differential equations etc. as difference equations by specifying the
size of the difference: a single equation would be replaced by an infinite number of them - one for
each accuracy. Calculus and most of what has been achieved since Newton would be lost
since no-one wants to write Newton's mechanics or Maxwell's theory or quantum field
theory using difference equations: it would be incredibly clumsy.
There would be no exponential function, no Gaussian, no pi, no special functions. Things
become in practice impossible. Most of number theory is lost: forget Riemann Zeta, forget
p-adic numbers, ... Analytic calculations absolutely central in all science would become
impossible.
Reals represent the transcendent, spirituality, going beyond what we can represent by
counting with fingers. Recent science is deeply spiritual in a very concrete manner but
the materialistic dogma prevents us from seeing this.
At 7:35 PM, [email protected] said...
I hope that the importance of the notion of finite accuracy became clear. It certainly
does not look like a beautiful notion in its recent formulation.
Finite accuracy is the counterpart of finite measurement resolution/cognitive
resolution and is a notion which is often not considered explicitly in math text books. It is
fundamental in physics but the problem is how to formulate it elegantly.
It is also encountered in the adelic vision based on strong form of holography. One
can in principle deduce scattering amplitudes in an algebraic extension of rationals (this
for the parameters such as momenta appearing in them). One can algebraically continue
this expression to all number fields.
But what if one cannot calculate the amplitudes exactly in the algebraic extension?
There is no problem in the real topology using ordinary continuity. But continuation to p-adic
topologies is difficult since even the smallest change in a rational number in the real sense can
mean a very big change in the p-adic sense. It seems that one cannot avoid canonical
identification or some of its variants if one wants to assign a p-adic amplitude to a real
amplitude in a continuous manner.
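Canonical identification, which in TGD maps x = Σ xₙpⁿ to Σ xₙp⁻ⁿ, can be sketched for non-negative integers; the implementation details below are my own illustration of why the map is continuous from the p-adic side to the real side:

```python
from fractions import Fraction

def canonical_identification(x: int, p: int) -> Fraction:
    """Map x = sum x_n * p**n (x_n the base-p digits of x) to the
    real-side number sum x_n * p**(-n), as an exact rational."""
    result = Fraction(0)
    n = 0
    while x > 0:
        x, digit = divmod(x, p)
        result += Fraction(digit, p ** n)
        n += 1
    return result

# 1 and 1 + 2**20 are 2-adically close (they share all low digits),
# and their images are close in the ordinary real sense:
a = canonical_identification(1, 2)
b = canonical_identification(1 + 2 ** 20, 2)
print(float(abs(a - b)))  # 9.5367431640625e-07, i.e. 2**-20
```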
Finite accuracy is also a deep physical concept: fermions at string world sheets are a
Boolean cognitive representation of space-time geometry. But finite accuracy means
representing a 4-D object using data assignable to a collection of 2-D objects rather than
0-dimensional objects (points) as in the usual naive discretization, which is not consistent
with symmetries. A discrete set of points is replaced with a discrete collection of 2-surfaces
labelled by parameters in an algebraic extension of rationals. The larger the density of strings,
the better the representation. This strong form of holography is implied by strong form
of general coordinate invariance: a completely unexpected connection to Einstein's great
principle.
This leads also to an elegant realization of number theoretical universality and
hierarchy of inclusions of hyper-finite factors as a realization of finite measurement
resolution. Also evolution as an increase of the complexity of the algebraic extension of rationals
pops up, labelled by integers n = h_eff/h, which are products of ramified primes
characterizing the sub-algebra of the super-symplectic algebra acting as conformal
gauge symmetries. The effective Planck constant has a purely number theoretic meaning as a
measure for the complexity of algebraic extension!
Ramification is also number theoretic correlate of quantum criticality! Rational prime
decomposes to product of prime powers such that some of them are higher than first
powers: analog for multiple root in polynomial - criticality! For me this looks amazingly
beautiful.
At 7:43 PM, [email protected] said...
Correction to the last paragraph: "prime powers such that" should read "powers of primes
of extension such that"
At 5:08 AM,
Anonymous said...
I don't know what all will be lost if we honestly admit that "real numbers" do not
behave arithmetically, at least in the boolean sense, and though many say that "real
numbers satisfy the usual rules of arithmetic", obviously they don't. Any child can see that
the emperor has no clothes in that respect.
Even though reals don't, AFAIK the p-adic side does satisfy the usual rules of
arithmetic, at least in some areas. Worth a more careful look. Cauchy intervals within
intervals are a perfectly OK and very rich and interesting structure, and the repeating patterns
of rationals are an amazing and beautiful thing worth deeper study, e.g. how do the lengths of
repeating patterns behave in various bases, on both sides of rationals? When repeating
patterns are plotted inside Cauchy intervals, I see wave forms at a very basic level of
number theory.
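The question about period lengths has a classical answer: for 1/d in base b with gcd(b, d) = 1, the length of the repeating block equals the multiplicative order of b modulo d. A minimal sketch (the function name is mine):

```python
from math import gcd

def period_length(d: int, base: int) -> int:
    """Length of the repeating block of 1/d in the given base, i.e. the
    multiplicative order of `base` modulo d (requires gcd(base, d) == 1)."""
    if d <= 1 or gcd(base, d) != 1:
        raise ValueError("need d > 1 with gcd(base, d) == 1")
    k, r = 1, base % d
    while r != 1:
        r = (r * base) % d
        k += 1
    return k

print(period_length(7, 10))   # 6: 1/7 = 0.(142857) in decimal
print(period_length(7, 2))    # 3: 1/7 = 0.(001) in binary
print(period_length(13, 10))  # 6
```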
In the OP Matti does see the light, saying that mathematical structures follow from number
theory itself; trying to deduce them from "physics" does not work.
So here is a relatively simple question: what is the _minimum_ of number theory you
need to observe quantum observables? I'm very much in doubt that e.g. "canonical
identification" is needed (but rather, it confuses and messes things up).
I'm not an I/T guy, but even I know that computers don't do real numbers or any other infinite
processes. Floating points, lazy algorithms, etc. get the job done. Finite accuracy works in a
boolean way, but no, we can't say that "finite accuracy" strings are "real numbers".
If we insist that the problem of the mathematical continuum "has been solved with 'least
upper bound' completion of algebraic (e.g. roots) and algorithmic (pi, e) real-side infinite
processes", there is a cost: the solution is not boolean, the rules of arithmetic don't work
regardless of how much some people pretend that they work and push the problems under
the mattress and out of text books. It's not about politics, it's just math.
There are other options: we can admit that the problem of the mathematical continuum
remains unsolved, or poorly understood and formulated, and keep on thinking and
questioning, instead of blindly believing the academic authorities that say that real
numbers follow the basic rules of arithmetic. Eppur si muove.
At 5:29 AM,
[email protected] said...
This is my last comment in this fruitless discussion. I have made pedagogical efforts in
order to clarify the basics but in vain. I however strongly encourage you to continue serious
studies of the basics before declaring a war against modern mathematics and physics.
I have tried to explain that finite calculational accuracy is the point: it is not possible to
calculate a real number exactly in finite time and no-one has been claiming anything like
that. The idea of giving up all the mathematics since Newton just because we cannot
calculate with infinite precision is complete idiocy.
And I am still unable to see what is wrong with Cauchy sequences: here I tried to
concretise them in terms of the decimal representation in order to give an idea of what they are
about but it seems that it did not help.
The generalisation of real numbers, rather than refusing to admit their existence, is the
correct direction to proceed, and I have been working on this problem with strong
physical motivations. Fusion of reals and p-adics to adelic structures also at the space-time
level, hierarchy of infinite primes defining infinite hierarchy of second quantisation for an
arithmetic quantum field theory, even construction of arithmetics of Hilbert spaces,
replacement of real point with infinitely structured point realizing number theoretic
Brahman = Atman/algebraic holography. These are interesting lines to proceed rather than
a return to cave.
Strange that someone blames me for blindly believing academic authorities;-).
At 5:44 AM,
Anonymous said...
As for the relation of the Boolean operator V and its vertical inverse to human cognition,
propositional logic is far from universal; some natural languages behave closer to
propositional logic, some not in the slightest.
Leveled horizontal operators of ordinality "<" and ">" (less-more) are much more
naturally universal in human cognition. I'm not aware of a natural language without a
more-less relation, which is also naturally a hyperfinite process closely related to the whole-part
relation. The arrows giving directions are also more-less relations: go more in the direction
the arrow is pointing, less in the opposite direction. These operators predate all other
written language and, needless to say, propositional logic.
At 6:19 AM,
Anonymous said...
Matti, read again. Your latest comment has very little to do with what has been said
and meant.
Again: The authorities (e.g. wiki) keep on saying that real numbers follow the basic
rules of arithmetic. Obviously that claim is not true.
The definition of 'real number' refers to an infinite process ("least upper bound"), not to a
finite computable segment. Finite segments by definition are NOT "real numbers", they
are something else. Some say "approximations", but also an approximation is NOT a real
number. It is an approximation.
If we want to keep math communicable, we must respect definitions and do our best to
define as clearly as we can. The notion of "real number", as it is usually used, is horribly
vague and poorly defined.
That is of course a big IF and communication is not necessarily priority. The word
"sin" has been mentioned in context of incommunicado.
At 2:17 PM, Stephen said...
Anonymous, who is this we you speak of? get some books and stop watching videos. do
some analysis. read up on Baire spaces if u are so caught up on notions of continuity
At 2:24 PM, Stephen said...
for example, ur little toy problem: The irrational numbers, with the metric defined by
d(a, b) = 1/(n+1), where n is the first index for which the continued fraction expansions of a
and b differ (this is a complete metric space).
the iterated map that gives rise to the continued fraction expansion of a real number .. well,
it's related to the riemann zeta function, see the Wikipedia page
Continued fractions also play a role in the study of dynamical systems, where they tie
together the Farey fractions which are seen in the Mandelbrot set with Minkowski's
question mark function and the modular group Gamma.
The backwards shift operator for continued fractions is the map h(x) = 1/x − ⌊1/x⌋ called
the Gauss map, which lops off digits of a continued fraction expansion: h([0; a1, a2, a3,
…]) = [0; a2, a3, …]. The transfer operator of this map is called the Gauss–Kuzmin–
Wirsing operator. The distribution of the digits in continued fractions is given by the
zero'th eigenvector of this operator, and is called the Gauss–Kuzmin distribution.
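The Gauss map just described can be run directly; iterating h(x) = 1/x − ⌊1/x⌋ on an exact rational peels off one continued-fraction digit per step (the function name is mine):

```python
from fractions import Fraction
from math import floor

def gauss_map_digits(x: Fraction, count: int) -> list:
    """First `count` continued-fraction digits of x in (0, 1), read off by
    iterating the Gauss map h(x) = 1/x - floor(1/x), the backwards shift
    [0; a1, a2, a3, ...] -> [0; a2, a3, ...]."""
    digits = []
    for _ in range(count):
        if x == 0:          # rational input: the expansion terminates
            break
        inv = 1 / x         # exact Fraction arithmetic, no rounding
        a = floor(inv)
        digits.append(a)
        x = inv - a         # apply the shift
    return digits

# Fractional part of 355/113, the classical pi approximant: [0; 7, 16]
print(gauss_map_digits(Fraction(16, 113), 5))  # [7, 16]
```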
At 6:19 PM, [email protected] said...
To Anonymous: You should calm down and stop talking total nonsense.
You are unable to realize that things can exist although we cannot know perfectly what
they are. What we can know is that a real number is in some segment; we can narrow down
this segment endlessly but never know it exactly.
But we have also something else than mere numerical computation: we have the
conscious intelligence. It cannot be computerised or axiomatised but and most
importantly, it is able to discover new truths.
In mathematics, communication also requires learning: just watching some videos and
becoming a fan of some character making strong nonsense claims is not enough. Also
mathematical intuition is something very difficult to teach: some of us have it, others do
not.
Just as some people are able to compose marvellous music. It seems that we must just
accept this. I am not musically especially talented but I enjoy the music of the great
composers and experience the miracle again and again: I do not declare war against this
music.
At 7:17 PM, [email protected] said...
Some comments about quantum Boolean thinking and computation as I see it happening
at the fundamental level.
a) One should understand how Boolean statements A-->B are represented. Or more
generally, how a computation-like procedure leading from a collection A of math objects
to a collection B of math objects takes place. Recall that in computations the objects coming in
and out are bit sequences. Now one has a computation-like process, and --> is expected to
correspond to the arrow of time.
If fermionic oscillator operators generate the Boolean basis, zero energy ontology is a
necessity in order to realize rules as connections between statements realized as bit sequences. Positive
energy ontology would allow only statements. Collection A is at the lower passive
boundary of the CD and B at the upper active one. As a matter of fact, it is a quantum
superposition of Bs which is there! In the quantum jump selecting a single B at the active
boundary, A is replaced with a superposition of A:s: self dies and re-incarnates and
generates negentropy. The quantum computation halts.
That both a and b cannot be known precisely is a quantal limitation on what can be
known: a philosopher would talk about epistemology here. The different pairs (a,b) in the
superposition over b:s are analogous to different implications of a. The thinker is doomed to
live forever in quantum cognitive dust, never quite sure of anything.
At 7:19 PM, [email protected] said...
Continuing….
b) What is the computer-like structure now? A Turing computer is a 1-D time-like line. This
quantum computer is a superposition of 4-D space-time surfaces, with the basic
computational operations located along it as partonic 2-surfaces defining the algebraic
operations and connected by fermion lines representing signals. Very similar to an ordinary
computer.
c) One should understand the quantum counterparts for the basic rules of manipulation.
×, /, +, and − are the most familiar examples.
*The basic rules correspond physically to generalized Feynman/twistor diagrams
representing sequences of algebraic manipulations in the Yangian of super-symplectic
algebra. Sequences correspond now to collections of partonic 2-surfaces defining vertices
of generalized twistor diagrams.
*3-vertices correspond to product and co-product represented as stringy Noether charges.
Geometrically the vertex - the analog of an algebraic operation - is a partonic 2-surface at which
incoming and outgoing light-like 3-surfaces meet - like the vertex of a Feynman diagram. There
is also a co-product vertex not encountered in simple algebraic systems; it is the time-reversed
variant of the vertex. Fusion instead of annihilation.
*There is a huge symmetry, as in ordinary computations too. All computation sequences
connecting the same collections A and B of objects produce the same scattering amplitude.
This generalises the duality symmetry of hadronic string models. This is a really gigantic
simplification, and the results in the twistor program suggest that something similar is obtained
there. This implication was so gigantic that I gave up the idea for years.
d) One should understand the analogs for the mathematical axioms. What are the
fundamental rules of manipulation?
*The classical computation/deduction would obey deterministic rules at vertices. The
quantal formulation cannot be deterministic for the simple reason that one has quantum
non-determinism (the weak form of NMP allowing also good and evil). The quantum rules
obey the format that God used when communicating with Adam and Eve: do anything else
but do not break the conservation laws. Classical rules would list all the allowed
possibilities, and this leads to difficulties, as Goedel demonstrated. I think that chess players
follow the anti-axiomatics.
I have the feeling that anti-axiomatics could give a more powerful approach to
computation and deduction and allow a new manner to approach the problematics of
axiomatisations. Note however that the infinite hierarchy of mostly infinite integers could
make possible a generalisation of Goedel numbering for statements/computations.
e) The laws of physics take care that the anti-axioms are obeyed. Quite concretely:
*Preferred extremal property of Kähler action and Kähler-Dirac action plus conservation
laws for charges associated with super-symplectic and other generalised conformal
symmetries would define the rules not broken in vertices.
*At the fermion lines connecting the vertices the propagator would be determined by the
boundary part of Kähler-Dirac action. The K-D equation for spinors and consistency
conditions from Kähler action (strong form of holography) would dictate what
happens to the fermionic oscillator operators defining the analog of quantum Boolean algebra
as super-symplectic algebra.
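The Gödel numbering mentioned in (d) admits a one-screen illustration: encode a finite sequence of integers as a product of prime powers, invertible by unique factorization. This is only a sketch of the classical construction; the generalisation to infinite integers alluded to above is not attempted here, and the function names are my own:

```python
def first_primes(n):
    """First n primes by trial division (adequate for short statements)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(seq):
    """Encode a sequence of positive integers as prod_i p_i^seq[i]."""
    g = 1
    for p, s in zip(first_primes(len(seq)), seq):
        g *= p ** s
    return g

def godel_decode(g, length):
    """Invert godel_number by reading off the prime exponents."""
    seq = []
    for p in first_primes(length):
        k = 0
        while g % p == 0:
            g //= p
            k += 1
        seq.append(k)
    return seq

print(godel_number([1, 2, 1]))   # 2 * 3^2 * 5 = 90
print(godel_decode(90, 3))       # [1, 2, 1]
```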
05/05/2015 - http://matpitka.blogspot.com/2015/05/updated-negentropymaximization.html#comments
Updated Negentropy Maximization Principle
Quantum-TGD involves a "holy trinity" of time developments. There is the geometric time
development dictated by the preferred extremal of Kähler action, crucial for the realization of General
Coordinate Invariance and analogous to a Bohr orbit. There is what I originally called the unitary "time
development" U: Ψi → UΨi → Ψf, associated with each quantum jump. This would be the counterpart of
the Schrödinger time evolution U(−t, t), t → ∞. The quantum jump sequence itself defines what might be
called subjective time development.
Concerning U, there is certainly no actual Schrödinger equation involved: the situation is in practice
the same in quantum field theories. It is now clear that in Zero Energy Ontology (ZEO) U can
actually be identified as a sequence of basic steps such that a single step involves a unitary evolution
inducing delocalization in the moduli space of causal diamonds (CDs) followed by a localization in this
moduli space selecting a single CD from a superposition of CDs. This sequence replaces the sequence of
repeated state function reductions leaving the state invariant in ordinary QM. Now it leaves invariant the
second boundary of the CD (to be called the passive boundary) and also the parts of zero energy states at this
boundary. There is now a very attractive vision about the construction of transition amplitudes for a
given CD, and it remains to be seen whether it allows an extension covering also transitions that
change the CD moduli characterizing the non-fixed boundary of the CD.
A dynamical principle governing subjective time evolution should exist and explain state function
reduction, with its characteristic one-one correlation between macroscopic measurement variables and
quantum degrees of freedom, and the state preparation process. Negentropy Maximization Principle is the
candidate for this principle. In its recent form it brings in only a single small but important
modification: state function reduction occurs also now to an eigen-space of a projector, but the projector
can now have dimension larger than one. Self has the free will to choose, besides the maximal
possible dimension for this sub-space, also a lower dimension, so that one can speak of a weak form of NMP:
the negentropy gain can also be below the maximal possible, and we do not live in the best possible
world. A second important ingredient is the notion of negentropic entanglement relying on the p-adic norm.
The evolution of ideas related to NMP has been a slow and tortuous process characterized by
misinterpretations, over-generalizations, and unnecessarily strong assumptions, and has basically
followed the evolution of ideas related to the anatomy of the quantum jump and of Quantum-TGD itself.
Quantum measurement theory is generalized to a theory of Consciousness in the TGD framework by
replacing the notion of observer as an outsider of the physical world with the notion of self. Hence it is not
surprising that several new key notions are involved.
1. ZEO is in a central role and brings in a completely new element: the arrow of time changes in the
counterpart of the standard quantum jump, which involves the change of the passive boundary of the CD to
an active one and vice versa. In living matter the changes of the arrow of time are in a central role: for instance,
motor action as volitional action involves them at some level of the self hierarchy.
2. The fusion of real physics and various p-adic physics, identified as physics of cognition, to a single
adelic physics is the second key element. The notion of the intersection of real and p-adic worlds
(intersection of sensory and cognitive worlds) is central and corresponds in the recent view about
TGD to string world sheets and partonic 2-surfaces whose parameters are in an algebraic
extension of rationals. By the strong form of holography it is possible to continue the string world
sheets and partonic 2-surfaces to various real and p-adic surfaces, so that what can be said about
quantum physics is coded by them. The physics in the algebraic extension can be continued to the real
and various p-adic sectors by algebraic continuation, meaning continuation of the various parameters
appearing in the amplitudes to reals and various p-adics.
An entire hierarchy of physics labeled by the extensions of rationals, inducing also those of p-adic
numbers, is predicted, and evolution corresponds to the increase of the complexity of these
extensions. Fermions defining correlates of Boolean cognition can be said to reside at these 2-dimensional
surfaces emerging from the strong form of holography implied by the strong form of
general coordinate invariance (GCI).
An important outcome of adelic physics is the notion of number-theoretic entanglement
entropy: in the defining formula for Shannon entropy, the logarithm of probability is replaced with
that of the p-adic norm of the probability, and one assumes that the p-adic prime is the one which produces
minimum entropy. What is new is that the minimum entropy is negative, and one can speak of
negentropic entanglement (NE). Consistency with standard measurement theory allows only NE
for which the density matrix is an n-dimensional projector.
3. The strong form of NMP states that state function reduction corresponds to maximal negentropy
gain. NE is stable under strong NMP, which even favors its generation. The strong form of NMP
would mean that we live in the best possible world, which does not seem to be the case. The
weak form of NMP allows self to choose whether it performs the state function reduction yielding
the maximum possible negentropy gain. If an n-dimensional projector corresponds to the maximal
negentropy gain, also reductions to sub-spaces with (n-k)-dimensional projectors, down to a
1-dimensional projector, are possible. The weak form has powerful implications: for instance, one can
understand how primes near powers of prime are selected in evolution, identified at the basic level as an
increase of the complexity of the algebraic extension of rationals defining the intersection of realities
and p-adicities.
4. NMP gives rise to evolution. NE defines information resources, which I have called Akashic
records (a kind of Universal library). The simplest possibility is that under the repeated sequence
of state function reductions at the fixed boundary of the CD, the NE at that boundary becomes conscious and
gives rise to experiences with positive emotional coloring: experiences of love, compassion,
understanding, etc. One cannot exclude the possibility that NE generates a conscious experience
only via the analog of interaction-free measurement, but this option looks unnecessary in the
recent formulation.
5. The dark matter hierarchy, labelled by the values of Planck constant heff = n×h, is also in a central role
and is interpreted as a hierarchy of criticalities in which the super-symplectic algebra,
having the structure of a conformal algebra, allows a sub-algebra acting as gauge conformal algebra and
having conformal weights coming as n-multiples of those for the entire algebra. The phase transition
increasing heff reduces criticality and takes place spontaneously. This implies a spontaneous
generation of macroscopic quantum phases interpreted in terms of dark matter. The hierarchies
of conformal symmetry breakings with n(i) dividing n(i+1) define sequences of inclusions of
HFFs, and the conformal sub-algebra acting as gauge algebra could be interpreted in terms of
measurement resolution.
n-dimensional NE is assigned with heff = n×h and is interpreted in terms of the n-fold
degeneracy of the conformal gauge equivalence classes of space-time surfaces connecting two
fixed 3-surfaces at the opposite boundaries of the CD: this reflects the non-determinism
accompanying quantum criticality. NE would be between two dark matter systems with the same heff
and could be assigned to the pairs formed by the n sheets. This identification is important but not
well enough understood yet. The assumption that p-adic primes p divide n gives deep
connections between the notion of preferred p-adic prime, negentropic entanglement, the hierarchy
of Planck constants, and hyper-finite factors of type II₁.
6. Quantum-Classical correspondence (QCC) is an important constraint in ordinary measurement
theory. In TGD, QCC is coded by the strong form of holography, which assigns to the quantum states
at the string world sheets and partonic 2-surfaces, represented in terms of the super-symplectic
Yangian algebra, space-time surfaces as preferred extremals of Kähler action, which
by quantum criticality have vanishing super-symplectic Noether charges in the sub-algebra
characterized by integer n. Zero modes, which by definition do not contribute to the metric of the
"world of classical worlds" (WCW), code for non-fluctuating classical degrees of freedom
correlating with the quantal ones. One can speak about entanglement between quantum and
classical degrees of freedom, since the quantum numbers of fermions make themselves visible in
the boundary conditions for string world sheets and also in the structure of space-time
surfaces.
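The number-theoretic entanglement entropy of point 2 can be sketched numerically. For an n-dimensional projector all probabilities are 1/n, and replacing log(1/n) by the log of the p-adic norm |1/n|_p = p^v_p(n) gives an entropy of −v_p(n)·log(p), negative (i.e. negentropic) whenever p divides n. The helper names below are my own:

```python
import math

def v_p(n, p):
    """p-adic valuation: the largest k with p^k dividing n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def nt_entropy(n, p):
    """Number-theoretic entropy of an n-dim projector: the Shannon formula
    with log(1/n) replaced by log of the p-adic norm |1/n|_p = p^v_p(n).
    Equals -v_p(n) * log(p), negative exactly when p divides n."""
    return -v_p(n, p) * math.log(p)

def minimizing_prime(n):
    """Prime divisor of n with minimum entropy (= maximum negentropy):
    the one whose power is the largest power-of-prime factor of n."""
    divisors = [p for p in range(2, n + 1)
                if n % p == 0 and all(p % q for q in range(2, p))]
    return min(divisors, key=lambda p: nt_entropy(n, p))

print(minimizing_prime(12))   # 2^2 beats 3^1 since 2*log(2) > log(3)
print(minimizing_prime(18))   # 3^2 beats 2^1
```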
NMP has a wide range of important implications.
1. In particular, one must give up the standard view about the second law and replace it with NMP,
taking into account the hierarchy of CDs assigned with ZEO and the dark matter hierarchy labelled
by the values of Planck constant, as well as the effects due to NE. The breaking of the second law
in the standard sense is expected to take place and to be crucial for the understanding of evolution.
2. The self hierarchy, having the hierarchy of CDs as its imbedding space correlate, leads naturally to a
description of the contents of consciousness analogous to thermodynamics, except that
entropy is replaced with negentropy.
3. In the case of living matter, NMP allows to understand the origin of metabolism. NMP demands
that self somehow generates negentropy: otherwise a state function reduction to the opposite
boundary of the CD takes place and means death and re-incarnation of self. Metabolism as gathering
of nutrients, which by definition carry NE, is the manner to avoid this fate. This leads to a vision
about the role of NE in the generation of sensory qualia and a connection with metabolism.
Metabolites would carry NE, and each metabolite would correspond to particular qualia (not
only energy but also other quantum numbers would correspond to metabolites). That primary
qualia would be associated with nutrient flow is not actually surprising!
4. NE leads to a vision about cognition. A negentropically entangled state consisting of a
superposition of pairs can be interpreted as a conscious abstraction or rule: a negentropically
entangled Schrödinger cat knows that it is better to keep the bottle closed.
5. NMP implies continual generation of NE. One might refer to this ever expanding universal
library as "Akashic records". NE could be experienced directly during the repeated state
function reductions to the passive boundary of the CD - that is, during the life cycle of the sub-self
defining the mental image. Another, less feasible option is that interaction-free measurement is
required to assign a conscious experience to NE. As mentioned, the qualia characterizing the
metabolite carrying the NE could characterize this conscious experience.
6. A connection of fuzzy qubits and quantum groups with NE is highly suggestive. The
implications are highly non-trivial also for quantum computation, which the weak form of NMP
allows, since NE is by definition stable and lasts the lifetime of the self in question.
For details see the chapter Negentropy Maximization Principle of "TGD Inspired Theory of
Consciousness". For a summary of the earlier postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 5:09 AM
6 Comments:
At 9:53 AM, Anonymous said...
Matti, I know this is a "narrow specialization" but (stochastic) point processes are actually
related to the Ising 'problem' and Riemann zeta hypothesis and fermions/bosons.
See
https://books.google.com/books?id=3Iw2NTPsS6QC&pg=PA9&lpg=PA9&dq=point+processes+1950-2000+verejones&source=bl&ots=DNgQ0dIbGO&sig=w285gr6bA6jeNhD7-XlqcXeJ067s&hl=en&sa=X&ei=kXBKVYn6IIXfsATlh4HQBA&ved=0CCQQ6AEwAA#v=onepage&q=point%20processes%201950-2000%20vere-jones&f=false
see page 14, Diaconis and Evans, 2001 and Coram and Diaconis 2002
can you please comment?
--crow
At 10:13 AM, Anonymous said...
the referenced article is available in pdf format:
http://www.sciencedirect.com/science/article/pii/S0097316500930978
the other article is at http://statweb.stanford.edu/~cgates/PERSI/papers/coram03.pdf
--crow
At 6:53 PM, [email protected] said...
To Crow:
I would be happy to say something interesting about these articles. Unfortunately I cannot.
I have not been working with this kind of statistical approach.
At 10:05 PM, Stephen said...
Brahman is full of all perfections. And to say that Brahman has some purpose in creating
the world will mean that it wants to attain through the process of creation something which
it has not. And that is impossible. Hence, there can be no purpose of Brahman in creating
the world. The world is a mere spontaneous creation of Brahman. It is a Lila, or sport, of
Brahman. It is created out of Bliss, by Bliss and for Bliss. Lila indicates a spontaneous
sportive activity of Brahman as distinguished from a self-conscious volitional effort. The
concept of Lila signifies freedom as distinguished from necessity.
—Ram Shanker Misra, The Integral Advaitism of Sri Aurobindo
http://en.m.wikipedia.org/wiki/Lila_(Hinduism)
Surely this has some TGD cognate, and also ties to the axiom of choice in math somehow
At 1:29 PM, Stephen said...
Matti, I just realized my links that I posted in the comments on your other post are
probably more appropriate for this post, in regards to "unitary time development"
https://statistics.stanford.edu/sites/default/files/2001-01.pdf
see the title "unitary correlations and the Fejer kernel" irreducible group characters and
whatnot
At 6:05 PM, [email protected] said...
The recent view is that unitary evolution would correspond in TGD to a dispersion in the
space of moduli characterising the position of the upper boundary of the CD and would also
affect the states at the upper boundary. Unfortunately, I am not able to say much about
this. A sequence of steps: U followed by localisation in the moduli space.
Concerning the zeros of zeta, I again take seriously the hypothesis that they might correspond
to conformal weights of the super-symplectic algebra characterising exponents of powers of the
radial light-like coordinate. Also this algebra has a fractal hierarchy of sub-algebras for
which the conformal weights are n-multiples of those for the full algebra. The number of generators
is infinite: the zeros of zeta! In the case of ordinary Kac-Moody type algebras the generators have only a
finite number of different weights. This would be something incredibly complex.
Conformal confinement would be an implication: physical states would have real conformal
weights: the sum over multiples of the imaginary parts of the Riemann zeros would vanish for
them.
04/29/2015 - http://matpitka.blogspot.com/2015/04/what-could-be-origin-of-p-adiclength.html#comments
What could be the origin of p-adic length scale hypothesis?
The argument would explain the existence of preferred p-adic primes. It does not yet explain the p-adic
length scale hypothesis stating that p-adic primes near powers of 2 are favored. A possible
generalization of this hypothesis is that primes near powers of prime are favored. There indeed exists
evidence for the realization of 3-adic time scale hierarchies in living matter (see this), and in music both
2-adicity and 3-adicity could be present; this is discussed in the TGD inspired theory of music harmony and
genetic code (see this).
The weak form of NMP might come to the rescue here.
1. The entanglement negentropy for a negentropic entanglement characterized by an n-dimensional
projection operator is log(Np(n)) for some prime p whose power divides n. The maximum
negentropy is obtained if the power of p is the largest power-of-prime divisor of n, and this can
be taken as the definition of number-theoretic entanglement negentropy. If the largest divisor is p^k,
one has N = k×log(p). The entanglement negentropy per entangled state is N/n = k×log(p)/n and is
maximal for n = p^k. Hence powers of prime are favoured, which means that p-adic length scale
hierarchies with scales coming as powers of p are negentropically favored and should be
generated by NMP. Note that n = p^k would define a hierarchy of heff/h = p^k. During the first years of the
heff hypothesis I believed that the preferred values obey heff = r^k, r an integer not far from r = 2^11. It
seems that this belief was not totally wrong.
2. If one accepts this argument, the remaining challenge is to explain why primes near powers of
two (or more generally of prime p) are favoured. n = 2^k gives a large entanglement negentropy for the final
state. Why would primes p = 2^k − r be favored? The reason could be the following. n = 2^k
corresponds to p = 2, which corresponds to the lowest level in p-adic evolution, since it is the
simplest p-adic topology and farthest from the real topology and therefore gives the poorest
cognitive representation of the real preferred extremal as a p-adic preferred extremal (note that p = 1
makes formally sense but for it the topology is discrete).
3. The weak form of NMP suggests a more convincing explanation. The density matrix of the state to
be reduced is a direct sum over contributions proportional to projection operators. Suppose that
the projection operator with the largest dimension has dimension n. The strong form of NMP would say
that the final state is characterized by an n-dimensional projection operator. The weak form of NMP allows
free will, so that all dimensions n−k, k = 0, 1, ..., n−1, for the final state projection operator are possible. The
1-dimensional case corresponds to vanishing entanglement negentropy and ordinary state function
reduction isolating the measured system from the external world.
4. The negentropy of the final state per state depends on the value of k. It is maximal if n−k is a
power of prime. For n = 2^k = M_k + 1, where M_k = 2^k − 1 is a Mersenne prime, n−1 gives the maximum
negentropy and also the maximal p-adic prime available, so that this reduction is favoured by NMP.
Mersenne primes would indeed be special. Also the primes p = 2^k − r near 2^k produce large
entanglement negentropy and would be favored by NMP.
5. This argument suggests a generalization of the p-adic length scale hypothesis so that p = 2 can be
replaced by any prime.
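Point 1's negentropy per state N/n = k×log(p)/n and point 4's Mersenne comparison can be checked directly. A sketch under the definitions above, with helper names of my own choosing:

```python
import math

def max_negentropy(n):
    """Largest k * log(p) over the prime-power factors p^k of n."""
    best, m, p = 0.0, n, 2
    while p * p <= m:
        if m % p == 0:
            k = 0
            while m % p == 0:
                m //= p
                k += 1
            best = max(best, k * math.log(p))
        p += 1
    if m > 1:                          # leftover large prime factor
        best = max(best, math.log(m))
    return best

def negentropy_per_state(n):
    """N/n: maximal among nearby n when n is a prime power p^k."""
    return max_negentropy(n) / n

# Prime powers beat nearby generic composites:
print(negentropy_per_state(16) > negentropy_per_state(12))   # True
# The Mersenne prime 31 = 2^5 - 1 even beats 32 = 2^5 per state:
print(negentropy_per_state(31) > negentropy_per_state(32))   # True
```

Numerically, log(31)/31 ≈ 0.1108 exceeds 5·log(2)/32 ≈ 0.1083, in line with the claim that the reduction to n−1 = M_k is favoured.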
This argument, together with the hypothesis that the preferred prime is ramified, would correlate the
character of the irreducible extension and the character of super-conformal symmetry breaking. The integer
n characterizing the super-symplectic conformal sub-algebra acting as gauge algebra would depend on the
irreducible algebraic extension of rationals involved, so that the hierarchy of quantum criticalities would
have a number-theoretical characterization. Ramified primes could appear as divisors of n, and n would be
essentially a characteristic of ramification known as the discriminant.
An interesting question is whether only the ramified primes allow the continuation of the string world sheet
and partonic 2-surface to a 4-D space-time surface. If this is the case, the assumptions behind the p-adic
mass calculations would have a full first-principle justification.
For details see the article The Origin of Preferred p-Adic Primes? For a summary of earlier
postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:04 PM
04/28/2015 - http://matpitka.blogspot.com/2015/04/how-preferred-p-adic-primes-couldbe.html#comments
How preferred p-adic primes could be determined?
p-Adic mass calculations allow to conclude that elementary particles correspond to one or possibly
several preferred primes assigning p-adic effective topology to the real space-time sheets in
discretization in some length scale range. TGD inspired theory of consciousness leads to the
identification of p-adic physics as physics of cognition. The recent progress leads to the proposal that
Quantum-TGD is adelic: all p-adic number fields are involved, and each gives one particular view about
physics.
The adelic approach plus the view about evolution as emergence of increasingly complex extensions of
rationals leads to a possible answer to the question of the title. The algebraic extensions of rationals are
characterized by preferred rational primes, namely those which are ramified when expressed in terms of
the primes of the extension. These primes would be natural candidates for preferred p-adic primes.
1. Earlier attempts
How do the preferred primes emerge in this framework? I have made several attempts to answer this
question.
1. Classical non-determinism at the space-time level for real space-time sheets could, in some length
scale range involving rational discretization for the space-time surface itself or for the parameters
characterizing it as a preferred extremal, correspond to the non-determinism of p-adic differential
equations due to the presence of pseudo-constants, which have vanishing p-adic derivative.
Pseudo-constants are functions depending on a finite number of pinary digits of their arguments.
2. The quantum criticality of TGD is suggested to be realized in terms of infinite hierarchies of
super-symplectic symmetry breakings in the sense that only a sub-algebra with conformal
weights which are n-multiples of those for the entire algebra acts as conformal gauge symmetries.
This might be true for all conformal algebras involved. One has a fractal hierarchy since the
sub-algebras in question are isomorphic: only the scale of conformal gauge symmetry increases in the
phase transition increasing n. The hierarchies correspond to sequences of integers n(i) such that
n(i) divides n(i+1). These hierarchies would very naturally correspond to hierarchies of
inclusions of hyper-finite factors, and m(i) = n(i+1)/n(i) could correspond to the integer n
characterizing the index of inclusion, which has value n ≥ 3. A possible problem is that m(i) = 2
would not correspond to a Jones inclusion. Why would the scaling by a power of two be different?
The natural question is whether the primes dividing n(i) or m(i) could define the preferred
primes.
3. Negentropic entanglement corresponds to entanglement for which the density matrix is a projector.
For an n-dimensional projector any prime p dividing n gives rise to negentropic entanglement in the
sense that the number-theoretic entanglement entropy, defined by the Shannon formula by replacing
p_i in log(p_i) = log(1/n) by its p-adic norm Np(1/n), is negative if p divides n and maximal for the
prime for which the dividing power of prime is the largest power-of-prime factor of n. The
identification of p-adic primes as factors of n is a highly attractive idea. The obvious question is
whether n corresponds to the integer characterizing a level in the hierarchy of conformal
symmetry breakings.
4. The adelic picture about TGD led to the question whether the notion of unitarity could be
generalized. The S-matrix would be unitary in the adelic sense: P_m = (SS†)_mm = 1 would
generalize to the adelic context so that one would have a product of the real norm and the p-adic norms of P_m.
In the intersection of the realities and p-adicities P_m for reals would be rational, and if the real and
p-adic P_m correspond to the same rational, the condition would be satisfied. The condition P_m ≤
1 seems however natural and forces separate unitarity in each sector, so this option seems too
tricky.
These are the basic ideas that I have discussed hitherto.
2. Could preferred primes characterize algebraic extensions of rationals?
The intuitive feeling is that the notion of preferred prime is something extremely deep, and the
deepest thing I know is number theory. Does one end up with preferred primes in number theory? This
question brought to my mind the notion of ramification of primes (see this) (more precisely, of prime
ideals of a number field in its extension), which happens only for special primes in a given extension of a
number field, say rationals. Could this be the mechanism assigning preferred prime(s) to a given
elementary system, such as an elementary particle? I have not considered their role earlier, although their
hierarchy is highly relevant in the number-theoretical vision about TGD.
1. Stating it very roughly (I hope that mathematicians tolerate this language): as one goes from a
number field K, say rationals Q, to its algebraic extension L, the original prime ideals in the so-called
integral closure (see this) over the integers of K decompose to products of prime ideals of L
(prime ideal is the more rigorous manner to express primeness).
The integral closure for integers of a number field K is defined as the set of elements of K which
are roots of some monic polynomial with coefficients which are integers of K, that is, of the
form x^n + a_(n-1)x^(n-1) + ... + a_0. The integral closures of both K and L are considered. For instance, the
integral closure of an algebraic extension of K over K is the extension itself. The integral closure of
complex numbers over ordinary integers is the set of algebraic numbers.
2. There are two further basic notions related to ramification and characterizing it. The relative
discriminant is the ideal of K divisible by all ramified ideals in K, and the relative different is the ideal of L
divisible by all ramified P_i:s. Note that the general ideal is the analog of an integer, and these ideals
represent the analogs of products of preferred primes P of K and of the primes P_i of L dividing them.
3. A physical analogy is provided by the decomposition of hadrons to valence quarks. Elementary
particles become composites of more elementary particles in the extension. The decomposition
to these more elementary primes is of the form P = ∏ P_i^e(i), where e(i) is the ramification index - the
physical analog would be the number of elementary particles of type i in the state (see this).
Could the ramified rational primes define the physically preferred primes for a given
elementary system?
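A concrete toy case of item 3: in a quadratic extension Q(√d), d squarefree, exactly the rational primes dividing the field discriminant ramify, i.e. decompose as P = P1^2. This minimal sketch relies on that standard fact; the function names are mine:

```python
def quad_discriminant(d):
    """Field discriminant of Q(sqrt(d)) for squarefree d:
    d if d = 1 (mod 4), otherwise 4d. Python's % handles negative d correctly."""
    return d if d % 4 == 1 else 4 * d

def prime_factors(n):
    """Distinct prime factors of |n| by trial division."""
    n, out, p = abs(n), [], 2
    while p * p <= n:
        if n % p == 0:
            out.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

def ramified_primes(d):
    """Rational primes ramifying in Q(sqrt(d)): those with e(i) = 2 in P = prod P_i^e(i)."""
    return prime_factors(quad_discriminant(d))

print(ramified_primes(-1))   # Q(i): discriminant -4, only 2 ramifies -> [2]
print(ramified_primes(-5))   # discriminant -20 -> [2, 5]
```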
In the TGD framework, the extensions of rationals (see this) and of p-adic number fields (see this) are
unavoidable and are interpreted as an evolutionary hierarchy: physically, cosmological evolution would
have gradually proceeded to more and more complex extensions. One can say that string world sheets
and partonic 2-surfaces with the parameters of their defining functions in increasingly complex extensions of
rationals emerge during evolution. Therefore ramifications and the preferred primes defined by them are
unavoidable. For p-adic number fields the number of extensions is much smaller: for instance, for p > 2
there are only 3 quadratic extensions.
2. In the p-adic context a proper definition of the counterparts of angle variables as phases, allowing
the definition of the analogs of trigonometric functions, requires the introduction of an algebraic
extension giving rise to some roots of unity. Their number depends on the angular resolution.
These roots allow to define the counterparts of ordinary trigonometric functions (the naive
generalization based on Taylor series is not periodic) and also allow to define the counterpart
of a definite integral in these degrees of freedom as discrete Fourier analysis. The simplest
algebraic extensions, defined by x^n-1 and having abelian Galois group, are however unramified so that
something else is needed. One has the decomposition P = ∏ P_i^e(i) with e(i)=1, analogous to an n-fermion
state, so that the simplest cyclic extension does not give rise to a ramification and there are no
preferred primes.
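For a finite field the roots of unity and the discrete Fourier analysis they support can be written down explicitly. The following sketch works in F_p with n dividing p-1, so the roots already exist without an extension; the numbers p = 13, n = 4 are arbitrary illustrative choices, not taken from the text:

```python
# Sketch: phases as roots of unity in a finite field, and the discrete
# Fourier transform they define. Illustrates the algebra only.

p, n = 13, 4          # n = 4 divides p - 1 = 12

def primitive_root_of_unity(n, p):
    """Smallest g with g^n = 1 mod p and exactly n distinct powers."""
    for g in range(2, p):
        if pow(g, n, p) == 1 and len({pow(g, k, p) for k in range(n)}) == n:
            return g
    raise ValueError("no primitive n:th root of unity mod p")

zeta = primitive_root_of_unity(n, p)

def dft(f, zeta, p):
    """Discrete Fourier transform over F_p: F(k) = sum_j f(j) zeta^(jk)."""
    n = len(f)
    return [sum(f[j] * pow(zeta, j * k, p) for j in range(n)) % p
            for k in range(n)]

f = [1, 0, 0, 0]              # "delta function"
print(dft(f, zeta, p))        # flat spectrum [1, 1, 1, 1]
```

The delta function transforming to a flat spectrum is the finite-field analog of the usual Fourier duality, realized entirely through the roots of unity.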
3. What kind of polynomials could define preferred algebraic extensions of rationals? Irreducible
polynomials are certainly an attractive candidate since any polynomial reduces to a product of
them. One can say that they define the elementary particles of number theory. Irreducible
polynomials have integer coefficients and do not decompose to products of polynomials with
rational coefficients. It would be wrong to say that only these algebraic extensions can appear,
but there is a temptation to say that one can reduce the study of extensions to their study. One
can even consider the possibility that string world sheets associated with products of irreducible
polynomials are unstable against decay to those characterized by irreducible polynomials.
4. What can one say about irreducible polynomials? The Eisenstein criterion states the following. If
Q(x) = ∑_(k=0,...,n) a_k x^k is an n:th order polynomial with integer coefficients such that there
exists at least one prime p dividing all coefficients a_i except a_n, and p^2 does not divide a_0, then
Q is irreducible. Thus one can assign one or more preferred primes to the algebraic extension
defined by an irreducible polynomial Q - in fact to any polynomial allowing ramification. There are
also other kinds of irreducible polynomials since Eisenstein's condition is only sufficient but not
necessary.
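The criterion is mechanical enough to check in a few lines of code. The sketch below tests Eisenstein's criterion for a polynomial given by its integer coefficients; the example polynomials are illustrative choices, not taken from the text:

```python
# Sketch: find the primes p witnessing Eisenstein's criterion for a
# polynomial with coefficients coeffs = [a0, a1, ..., an] (an leading).
from math import gcd
from functools import reduce

def eisenstein_primes(coeffs):
    """Primes p with: p | a_i for all i < n, p does not divide a_n,
    and p^2 does not divide a_0."""
    *lower, an = coeffs
    g = abs(reduce(gcd, lower))      # any witness p must divide this gcd
    if g <= 1:
        return []
    # prime factors of g by trial division
    prime_divs, d, m = set(), 2, g
    while d * d <= m:
        while m % d == 0:
            prime_divs.add(d); m //= d
        d += 1
    if m > 1:
        prime_divs.add(m)
    return [p for p in sorted(prime_divs)
            if an % p != 0 and coeffs[0] % (p * p) != 0]

# Q(x) = x^3 + 6x + 3: p = 3 divides 6 and 3, while 9 does not divide 3
print(eisenstein_primes([3, 6, 0, 1]))   # [3]
# x^2 + 1 is irreducible but has no Eisenstein witness
print(eisenstein_primes([1, 0, 1]))      # []
```

The second example makes concrete the remark that the condition is sufficient but not necessary: x^2 + 1 is irreducible even though no prime witnesses this via Eisenstein.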
5. Furthermore, in the algebraic extension defined by Q, the primes P having the above mentioned
characteristic property decompose to an n:th power of a single prime P_i: P = P_i^n. These primes are
maximally/completely ramified. The physical analog P = P_0^n is a Bose-Einstein condensate of n
bosons. There is a strong temptation to identify the preferred primes of irreducible polynomials
as preferred p-adic primes.
A good illustration is provided by the equation x^2+1=0 allowing the roots x_± = ± i and the equation
x^2+2px+p=0 allowing the roots x_± = -p ± p^(1/2)(p-1)^(1/2). In the first case the ideals associated with ± i
are different. In the second case these ideals are one and the same since x_+ = -x_- - 2p: hence one
indeed has ramification. Note that the first example also represents an example of an irreducible
polynomial which does not satisfy the Eisenstein criterion. In the more general case the n conditions
defined by the symmetric functions of the roots imply that the ideals are one and the same when the
Eisenstein conditions are satisfied.
6. What does this mean in the p-adic context? The identity of the ideals can be stated by saying P = P_0^n
for the ideals defined by the primes satisfying the Eisenstein condition. Very loosely one can say
that the algebraic extension defined by the root involves an n:th root of the p-adic prime p. This does
not work! The extension would have a number whose n:th power is zero modulo p. On the other hand,
the p-adic numbers of the extension modulo p should form a finite field, but this would not be a field
anymore since there would exist a number whose n:th power vanishes. The algebraic extension
simply does not exist for the preferred primes. The physical meaning of this will be considered later.
7. What is so nice is that one could readily construct polynomials giving rise to given preferred
primes. The complex roots of these polynomials could correspond to the points of partonic 2-surfaces
carrying fermions and defining the ends of boundaries of string world sheets. It must be
however emphasized that the form of the polynomial depends on the choice of the complex
coordinate. For instance, the shift x → x+1 transforms (x^n-1)/(x-1) to a polynomial satisfying the
Eisenstein criterion. One should be able to fix the allowed coordinate changes in such a manner that
the extension remains irreducible for all allowed coordinate changes.
Already an integral shift of the complex coordinate affects the situation. It would seem that
the action of the allowed coordinate changes must reduce to the action of the Galois group
permuting the roots of the polynomials. A natural assumption is that the complex coordinate
corresponds to a complex coordinate transforming linearly under a subgroup of isometries of the
imbedding space.
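The shift example is easy to verify explicitly for n = p prime: the coefficients of ((x+1)^p - 1)/x are the binomial coefficients C(p, k+1), which satisfy the Eisenstein conditions at p. A short check (p = 7 is an arbitrary choice):

```python
# Sketch: after the shift x -> x+1, the polynomial (x^p - 1)/(x - 1)
# becomes Eisenstein at p. Its coefficients are binomials C(p, k+1).
from math import comb

def shifted_cyclotomic_coeffs(p):
    """Coefficients [a0, ..., a_(p-1)] of ((x+1)^p - 1)/x."""
    return [comb(p, k + 1) for k in range(p)]

p = 7
coeffs = shifted_cyclotomic_coeffs(p)
print(coeffs)                                 # [7, 21, 35, 35, 21, 7, 1]
assert all(a % p == 0 for a in coeffs[:-1])   # p divides every a_i, i < n
assert coeffs[-1] % p != 0                    # but not the leading one
assert coeffs[0] % (p * p) != 0               # and p^2 does not divide a_0
```

So the same extension looks unramified in one coordinate and Eisenstein-like in another, which is exactly why the choice of allowed coordinate changes matters here.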
In the general situation one has P = ∏ P_i^e(i), e(i) ≥ 1, so that also now there are preferred primes:
the appearance of preferred primes is a completely general phenomenon.
3. A connection with Langlands program?
In the Langlands program (see this) the great vision is that the n-dimensional representations of Galois
groups G characterizing algebraic extensions of rationals or more general number fields define
n-dimensional adelic representations of adelic Lie groups, in particular of the adelic linear group Gl(n,A).
This would mean that it is possible to reduce these representations to number theory for adeles. This
would be highly relevant for the vision about TGD as a generalized number theory. I have speculated
about this possibility earlier (see this) but the mathematics is so horribly abstract that it takes a decade
before one can have even a hope of building a rough vision.
One can wonder whether the irreducible polynomials could define the preferred extensions K of
rationals such that the maximal abelian extensions of the fields K would in turn define the adeles utilized
in the Langlands program. At least one might hope that everything reduces to the maximally ramified
extensions.
At the level of TGD string world sheets with parameters in an extension defined by an irreducible
polynomial would define an adele containing various p-adic number fields defined by the primes of the
extension. This would define a hierarchy in which the prime ideals of previous level would decompose
to those of the higher level. Each irreducible extension of rationals would correspond to some physically
preferred p-adic primes.
It should be possible to tell what the preferred character means in terms of the adelic representations.
What happens to these representations of the Galois group in this case? This is known.
1. For Galois extensions the ramification indices are constant: e(i)=e, and the Galois group acts
transitively on the ideals P_i dividing P. One obtains an n-dimensional representation of the Galois
group. The same applies to the quotient G/I, where I is the subgroup of G leaving P_i invariant; I
is called the inertia group. For the maximally ramified case G maps the ideal P_0 in P = P_0^n to
itself so that G=I, and the action of the Galois group is trivial, taking P_0 to itself, and one obtains
singlet representations.
2. The trivial action of the Galois group looks like a technical problem for the Langlands program and
also for TGD, unless the singletness of P_i under G has some physical interpretation. One possibility is
that the Galois group acts like a gauge group, and here the hierarchy of sub-algebras of the
super-symplectic algebra labelled by integers n is highly suggestive. This raises obvious questions.
Could the integer n, characterizing the sub-algebra of the super-symplectic algebra acting as
conformal gauge transformations, be the integer defined by the product of ramified primes?
P_0^n brings in mind the n conformal equivalence classes which remain invariant under the
conformal transformations acting as gauge transformations. Recalling that the relative discriminant
is an ideal of K divisible by the ramified prime ideals of K, this means that n would correspond to
the relative discriminant for K=Q.
Are the preferred primes those which are "physical" in the sense that one can assign to them
states satisfying the conformal gauge conditions?
4. A connection with infinite primes?
Infinite primes are one of the mathematical outcomes of TGD. There are two kinds of infinite
primes. There are the analogs of free many-particle states consisting of fermions and bosons labelled by
primes of the previous level in the hierarchy. They correspond to states of a supersymmetric arithmetic
quantum field theory, or actually a hierarchy of them obtained by a repeated second quantization of this
theory. A connection between infinite primes representing bound states and irreducible polynomials
is highly suggestive.
1. The infinite prime representing a free many-particle state decomposes to a sum of an infinite part and
a finite part having no common finite prime divisors, so that a prime is obtained. The infinite part is
obtained from the "fermionic vacuum" X = ∏_k p_k by dividing away some fermionic primes p_i and
adding their product, so that one has X → X/m + m, where m is a square-free integer. Also m=1 is
allowed and is analogous to the fermionic vacuum interpreted as a Dirac sea without holes. X is an
infinite prime and physically a pure many-fermion state. One can add bosons by multiplying X
with any integer having no common divisors with m; its prime decomposition defines
the bosonic contents of the state. One can also multiply m by any integer whose prime factors
are prime factors of m.
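A finite truncation makes the construction concrete. The sketch below cuts the "fermionic vacuum" X = ∏_k p_k off at five primes and forms X/m + m for a square-free m; the coprimality of the two summands is the defining condition. This is only an illustration at a finite cutoff - the genuine construction uses the product over all primes:

```python
# Sketch: truncated analog of the infinite prime X/m + m.
from math import gcd, prod

primes = [2, 3, 5, 7, 11]
X = prod(primes)           # "fermionic vacuum" truncated to 5 primes

m = 2 * 5                  # square-free: holes at p = 2 and p = 5
candidate = X // m + m     # analog of X/m + m

assert gcd(X // m, m) == 1         # finite and infinite parts are coprime
print(candidate)                   # 231 + 10 = 241, which happens to be prime
```

At the truncated level the coprimality of X/m and m guarantees that no finite prime divides the sum; for the genuine infinite X this is exactly the condition making X/m + m an infinite prime.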
2. There are also infinite primes which are analogs of bound states, and at the lowest level of the
hierarchy they correspond to irreducible polynomials P(x) with integer coefficients. At the
second level the bound states would naturally correspond to irreducible polynomials P_n(x) with
coefficients Q_k(y), which are infinite integers at the previous level of the hierarchy.
3. What is remarkable is that the bound-state infinite primes at a given level of the hierarchy would
define maximally ramified algebraic extensions at the previous level. One indeed has an infinite
hierarchy of infinite primes, since the infinite primes at a given level are infinite primes in the sense
that they are not divisible by the primes of the previous level. The formal construction works as such.
Infinite primes correspond to polynomials of a single variable at the first level, polynomials of two
variables at the second level, and so on. Could the Langlands program be generalized from the
extensions of rationals to polynomials of complex argument, so that one would obtain an infinite
hierarchy?
4. Infinite integers in turn could correspond to products of irreducible polynomials defining more
general extensions. This raises the conjecture that infinite primes for an extension K of rationals
could code for the algebraic extensions of K quite generally. If infinite primes correspond to real
quantum states, they would thus correspond to the extensions of rationals to which the parameters
appearing in the functions defining partonic 2-surfaces and string world sheets belong.
This would support the view that partonic 2-surfaces associated with algebraic extensions
defined by infinite integers, and thus not irreducible, are unstable against decay to partonic
2-surfaces which correspond to extensions assignable to infinite primes. An infinite composite integer
defining an intermediate unstable state would decay to its composites. Basic particle physics
phenomenology would have a number theoretic analog and even more.
5. According to Wikipedia, Eisenstein's criterion (see this) allows generalization, and what comes to
mind is that it applies in exactly the same form also at the higher levels of the hierarchy. Primes
would only be replaced with prime polynomials, and there would be at least one prime
polynomial Q(y) dividing all the coefficients of P_n(x) except the highest one, such that its square
would not divide P_0. Infinite primes would give rise to an infinite hierarchy of functions of many
complex variables. At the first level zeros of the function would give discrete points at the partonic
2-surface. At the second level one would obtain 2-D surfaces: partonic 2-surfaces or string world
sheets. At the next level one would obtain 4-D surfaces. What about higher levels? Does one obtain
higher-dimensional objects or something else? The union of n 2-surfaces can be interpreted also
as a 2n-dimensional surface, and one could think that the hierarchy describes a hierarchy of unions
of correlated partonic 2-surfaces. The correlation would be due to the preferred extremal
property of Kähler action.
One can ask whether this hierarchy could allow to generalize the number theoretical Langlands
program to the case of function fields using the notion of a prime function assignable to an infinite prime.
What this hierarchy of polynomials of arbitrarily many complex arguments means physically is
unclear. Do these polynomials describe many-particle states consisting of partonic 2-surfaces such
that there is a correlation between them as sub-manifolds of the same space-time sheet
representing a preferred extremal of Kähler action?
This would strongly suggest a generalization of the notion of p-adicity so that it applies to infinite
primes.
1. This looks sensible and maybe even practical! Infinite primes can be mapped to prime
polynomials, so that the generalized p-adic numbers would be power series in a prime polynomial -
a Taylor expansion in the coordinate variable defined by the infinite prime. Note that infinite
primes (irreducible polynomials) would give rise to a hierarchy of preferred coordinate variables.
In terms of infinite primes this expansion would require that the coefficients are smaller than the
infinite prime P used. Are the coefficients lower level primes? Or also infinite integers at the
same level smaller than the infinite prime in question? This criterion makes sense since one can
calculate the ratios of infinite primes as real numbers.
2. I would guess that the definition of infinite-P p-adicity is not a problem, since mathematicians
have generalized the number theoretical notions to a level of abstraction far above that of a
layman like me. The basic question is how to define the p-adic norm for the infinite primes (infinite
only in the real sense; p-adically they have unit norm for all lower level primes) so that it is finite.
3. There exists an extremely general definition of generalized p-adic number fields (see this). One
considers a Dedekind domain D, which is a generalization of the integers of an ordinary number field
having the property that ideals factorize uniquely to prime ideals. Now D would contain infinite
integers. One introduces the field E of fractions consisting of infinite rationals.
Consider an element e of E and the general fractional ideal eD as the counterpart of an ordinary
rational, and decompose it to a ratio of products of powers of prime ideals, now those
defined by infinite primes. The general expression for the P-adic norm of e is c^(-ord(P)), where
ord(P) is the power in which the prime ideal P appears in the factorization of the fractional ideal eD:
this number can also be negative for rationals. When the residue field is finite (the finite field G(p,1) for
p-adic numbers), one can take c to be the number of its elements (c=p for p-adic numbers).
Now it seems that this number is not finite, since the number of ordinary primes smaller than
P is infinite! But this is not a problem, since the topology for the completion does not depend on the
value of c. The simple infinite primes at the first level (free many-particle states) can be mapped
to ordinary rationals, and a q-adic norm suggests itself: could it be that infinite-P p-adicity
corresponds to the q-adicity discussed by Khrennikov in his work on p-adic analysis? Note however that
q-adic numbers are not a field.
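For ordinary rationals (D = Z, prime ideal P = (p), residue field with c = p elements) the norm |x|_P = c^(-ord_P(x)) can be computed directly. A sketch, with Fraction standing in for the field of fractions E; the example rational is an arbitrary choice:

```python
# Sketch: the norm |x|_P = c^(-ord_P(x)) in the ordinary case D = Z, P = (p).
from fractions import Fraction

def ord_p(x, p):
    """Exponent of p in the factorization of a nonzero rational x."""
    n, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; n += 1
    while den % p == 0:
        den //= p; n -= 1
    return n

def padic_norm(x, p, c=None):
    c = p if c is None else c     # any c > 1 gives the same topology
    return Fraction(1, c) ** ord_p(x, p)

x = Fraction(50, 27)
print(ord_p(x, 5), padic_norm(x, 5))   # 2, 1/25
print(ord_p(x, 3), padic_norm(x, 3))   # -3, 27
```

The parameter c appears only as the base of the norm, which is why the completion's topology is insensitive to its value - the point used above to argue that an infinite residue field is not fatal.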
Finally, a loosely related question. Could the transition from infinite primes of K to those of L take
place just by replacing the finite primes appearing in the infinite prime with their decompositions? The
resulting entity is an infinite prime if the finite and infinite parts contain no common prime divisors in L.
This is not the case generally, since one can have primes P_1 and P_2 of K having common divisors as
primes of L: in this case one can include P_1 in the infinite part of the infinite prime and P_2 in the finite
part.
For details see the article The Origin of Preferred p-Adic Primes? For a summary of earlier postings
see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 2:34 AM
04/27/2015
http://matpitka.blogspot.com/2015/04/could-adelic-approach-allowto.html#comments
Could adelic approach allow to understand the origin of preferred p-adic primes?
The comment of Crow to the posting Intentions, cognitions, and time stimulated rather interesting
ideas about the adelization of Quantum-TGD.
First two questions.
1. What is Adelic Quantum-TGD? The basic vision is that scattering amplitudes are obtained by
algebraic continuation to various number fields from the intersection of realities and p-adicities
(briefly, intersection in what follows) represented at the space-time level by string world sheets
and partonic 2-surfaces for which the defining parameters (WCW coordinates) are rational or in
some algebraic extension of p-adic numbers. This principle is a combination of strong form of
holography and algebraic continuation as a manner to achieve number theoretic universality.
2. Why Adelic Quantum-TGD? The adelic approach is free of the earlier assumptions, which require
mathematics which need not exist: the transformation of p-adic space-time surfaces to real ones as a
realization of intentional actions was the questionable assumption, which is unnecessary if
cognition and matter are two different aspects of existence, as already the success of p-adic mass
calculations strongly suggests. It always takes years to develop the ability to see things from a bigger
perspective and distill discoveries from clever inventions. Now adelicity is totally obvious. Being
a conservative radical - not a radical radical or a radical conservative - is the correct strategy, which I
have been gradually learning. This particular lesson was excellent!
Some years ago, Crow sent me the book of Lapidus about adelic strings. Witten wrote long ago an
article in which he demonstrated that the product of the real stringy vacuum amplitude and
its p-adic variants equals 1. This is a generalisation of the adelic identity for a rational number, stating
that the product of the norm of a rational number with its p-adic norms equals one.
The real amplitude in the intersection of realities and p-adicities for all values of parameters is a
rational number or in an appropriate algebraic extension of rationals. If a given p-adic amplitude is just
the p-adic norm of the real amplitude, one would have the adelic identity. This would however require
that the p-adic variant of the amplitude is real number-valued: I want p-adic valued amplitudes. A further
restriction is that Witten's adelic identity holds for the vacuum amplitude. I live in Zero Energy Ontology
(ZEO) and want it for the entire S-matrix, M-matrix, and/or U-matrix and for all states of the basis in
some sense.
Consider first the vacuum amplitude. A weaker form of the identity would be that the p-adic norm of
a given p-adic valued amplitude is the same as the p-adic norm of the rational-valued real amplitude (this
generalizes to algebraic extensions, I dare to guess) in the intersection. This would make sense and give
a non-trivial constraint: algebraic continuation would guarantee this constraint. In particular, the p-adic
norm of the real amplitude would be the inverse of the product of the p-adic norms of the p-adic
amplitudes. Most of these amplitudes should have p-adic norm equal to one; in other words, the real
amplitude is a product of a finite number of powers of primes. This is because the p-adic norms must
approach unity rapidly as the p-adic prime increases, and for large p-adic primes this means that the
norm is exactly unity. Hence the p-adic norm of the p-adic amplitude equals 1 for most primes.
In ZEO one must consider S-, M-, or U-matrix elements. U and S are unitary. M is a product of a
hermitian square root of a density matrix times a unitary S-matrix. Consider next the S-matrix.
1. For S-matrix elements one should have p_m = (SS†)_mm = 1. This states the unitarity of the S-matrix:
probability is conserved. Could it make sense to generalize this condition and demand that it
holds true only adelically, that is, only the product of the real and p-adic norms of p_m equals one:
N_R(p_m(R)) ∏_p N_p(p_m(p)) = 1. This could actually be true identically in the intersection if the
algebraic continuation principle holds true. Despite the triviality of the adelicity condition, one need
not have unitarity separately for reals and p-adic number fields anymore. Notice that the numbers p_m
would be arbitrary rationals in the most general case.
2. Could one even replace N_p with canonical identification or some form of it with cutoffs
reflecting the length scale cutoffs? Canonical identification behaves for powers of p like the p-adic
norm and means only a more precise map of p-adics to reals.
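Canonical identification is simply I(∑ a_n p^n) = ∑ a_n p^(-n). A sketch with a finite digit cutoff (the only computable case); the digit sequence is an arbitrary example:

```python
# Sketch: canonical identification mapping a p-adic integer, given by its
# digit sequence, to a real number. Finite cutoff for computability.

def canonical_identification(digits, p):
    """digits[n] in {0, ..., p-1} is the coefficient of p^n;
    the image is sum of a_n * p^(-n)."""
    return sum(a * p ** (-n) for n, a in enumerate(digits))

# the 3-adic number 1 + 2*3 + 1*3^2 (digits 1, 2, 1 in base 3)
value = canonical_identification([1, 2, 1], 3)
print(value)        # 1 + 2/3 + 1/9 = 16/9 = 1.777...
```

For a pure power p^n (digit 1 at position n) the image is p^(-n), which is exactly the p-adic norm - the sense in which canonical identification refines the norm N_p mentioned above.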
3. For a given diagonal element of the unit matrix characterizing a particular state m one would have a
product of the real norm and the p-adic norms. The number of norms differing from unity would
be finite. This condition would give a finite number of exceptional p-adic primes, that is, assign to
a given quantum state m a finite number of preferred p-adic primes! I have been searching for a
long time for the underlying deep reason for this assignment forced by the p-adic mass calculations,
and here it might be.
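For a rational p_m the finiteness is automatic: |p_m|_p differs from 1 only at the prime factors of its numerator and denominator, and the product of the real norm with all p-adic norms equals 1. A sketch for an arbitrary example rational:

```python
# Sketch: the adelic product formula |r| * prod_p |r|_p = 1, with only
# finitely many primes contributing a factor different from 1.
from fractions import Fraction

def nontrivial_primes(r):
    """Primes p with |r|_p != 1, mapped to their orders ord_p(r)."""
    result = {}
    for sign, n in ((+1, r.numerator), (-1, r.denominator)):
        n, d = abs(n), 2
        while d * d <= n:
            while n % d == 0:
                result[d] = result.get(d, 0) + sign
                n //= d
            d += 1
        if n > 1:
            result[n] = result.get(n, 0) + sign
    return result

r = Fraction(50, 27)
ords = nontrivial_primes(r)          # {2: 1, 5: 2, 3: -3}
product = abs(r)
for p, k in ords.items():
    product *= Fraction(p) ** (-k)   # multiply by |r|_p = p^(-ord_p(r))
assert product == 1                  # the adelic product formula
print(sorted(ords))                  # [2, 3, 5]: the finite exceptional set
```

The finite exceptional set {2, 3, 5} plays the role of the preferred primes assigned to a state whose diagonal element happens to equal this rational.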
4. Unitarity might thus fail in the real sector and in a finite number of p-adic sectors (otherwise the
product of p-adic norms would be infinite or zero). In some sense the failures would compensate
each other in the adelic picture. The failure of course brings in mind p-adic thermodynamics,
which indeed means that the adelic SS†, or should it be called MM†, is not unitary but defines the
density matrix defining the p-adic thermal state! Recall that the M-matrix is defined as a hermitian
square root of a density matrix times a unitary S-matrix.
5. The weakness of these arguments is that states are assumed to be labelled by discrete indices.
Finite measurement resolution implies discretization and could justify this.
The p-adic norms of p_m, or the images of p_m under canonical identification, in a given number field
would define analogs of probabilities. Could one indeed have ∑_m p_m = 1 so that SS† would define a
density matrix?
1. For the ordinary S-matrix this cannot be the case, since the sum of the probabilities p_m equals the
dimension N of the state space: ∑ p_m = N. In this case one could accept p_m > 1 both in the real and
p-adic sectors. For this option adelic unitarity would make sense and would be a highly non-trivial
condition, allowing perhaps to understand how preferred p-adic primes emerge at the
fundamental level.
2. If the S-matrix is multiplied by a hermitian square root of a density matrix to get the M-matrix, the
situation changes and one indeed obtains ∑ p_m = 1. MM† = 1 does not make sense anymore and
must be replaced with MM† = ρ, in a special case a projector to an N-dimensional subspace,
proportional to 1/N. In this case the numbers p(m) would have p-adic norm larger than one for
the divisors of N and would define preferred p-adic primes. For these primes the sum ∑_m N_p(p(m))
would not be equal to 1 but to N·N_p(1/N).
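The norm N_p(1/N) is easy to check: it exceeds 1 exactly for the prime divisors of N. A quick sketch for the arbitrary choice N = 12:

```python
# Sketch: the p-adic norm of 1/N singles out the prime divisors of N.
from fractions import Fraction

def padic_norm(x, p):
    """p-adic norm p^(-ord_p(x)) of a nonzero rational x."""
    n, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; n += 1
    while den % p == 0:
        den //= p; n -= 1
    return Fraction(p) ** (-n)

N = 12
for p in (2, 3, 5, 7):
    print(p, padic_norm(Fraction(1, N), p))
# 2 -> 4, 3 -> 3, 5 -> 1, 7 -> 1: only the divisors of N stand out
```

So the projector MM† = ρ proportional to 1/N would single out precisely the prime divisors of N as the primes with non-unit norm.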
3. The situation is different for hyperfinite factors of type II_1, for which the trace of the unit matrix
equals one by definition, so that MM† = 1 and ∑ p_m = 1, with the sum defined appropriately, could
make sense. MM† could also be a projector to an infinite-D subspace. Could the M-matrix using the
ordinary definition of the dimension of the Hilbert space be equivalent with the S-matrix for the state
space using the definition of dimension assignable to HFFs? Could these notions be duals of each
other? Could the adelic S-matrix define the counterpart of the M-matrix for HFFs?
This looks like a nice idea but usually good looking ideas do not live long in the crossfire of counter
arguments. The following is my own. The reader is encouraged to invent his or her own objections.
1. The most obvious objection against the very attractive direct algebraic continuation from the real to
the p-adic sector is that if the real norm of the real amplitude is small, then the p-adic norm of its p-adic
counterpart is large, so that the p-adic variants of p_m(p) can become larger than 1 and the probability
interpretation fails. As noticed, there is actually no need to pose the probability interpretation. The
only way to overcome the "problem" is to assume that unitarity holds separately in each sector, so
that one would have p(m) = 1 in all number fields, but this would lead to the loss of preferred
primes.
2. Should the p-adic variants of the real amplitude be defined by canonical identification or its variant
with cutoffs? This is mildly suggested by p-adic thermodynamics. In this case it might be
possible to satisfy the condition p_m(R) ∏_p N_p(p_m(p)) = 1. One can however argue that the adelic
condition is an ad hoc condition in this case.
To sum up, if the above idea survives all the objections, it could give rise to considerable progress:
a first principle understanding of how preferred p-adic primes are assigned to quantum states, and thus a
first principle justification for p-adic thermodynamics. For the ordinary definition of the S-matrix this
picture makes sense, and also for the M-matrix. One would still need a justification for the canonical
identification map playing a key role in p-adic thermodynamics, allowing to map the p-adic mass squared
to its real counterpart.
posted by Matti Pitkanen @ 1:16 AM
04/26/2015 - http://matpitka.blogspot.com/2015/04/hierarchies-of-conformalsymmetry.html#comments
Hierarchies of conformal symmetry breakings, quantum criticalities, Planck constants, and hyperfinite factors
TGD is characterized by various hierarchies. There are fractal hierarchies of quantum criticalities,
Planck constants, and hyper-finite factors, and these hierarchies relate to hierarchies of space-time sheets
and selves. These hierarchies are closely related, and this article describes the recent view about the
connections between them.
For details, see the article Hierarchies of conformal symmetry breakings, quantum criticalities, Planck
constants, and hyper-finite factors. For a summary of earlier postings see Links to the latest progress in
TGD.
posted by Matti Pitkanen @ 2:41 AM
04/26/2015 - http://matpitka.blogspot.com/2015/04/updated-view-about-k-geometry-ofwcw.html#comments
Updated View about Kähler geometry of WCW
Quantum-TGD reduces to a construction of Kähler geometry for what I call the "World of Classical
Worlds" (WCW). It has been clear from the beginning that the gigantic super-conformal symmetries
generalizing ordinary super-conformal symmetries are crucial for the existence of the WCW Kähler
metric. The detailed identification of the Kähler function and WCW Kähler metric has however turned
out to be a difficult problem.
It is now clear that WCW geometry can be understood in terms of the analog of AdS/CFT duality
between fermionic and space-time degrees of freedom (or between Minkowskian and Euclidian
space-time regions), allowing to express the Kähler metric either in terms of the Kähler function or in
terms of anti-commutators of WCW gamma matrices identifiable as super-conformal Noether
super-charges for the symplectic algebra assignable to δM^4_± × CP_2. The string model type description
of gravitation emerges, and also the TGD based view about dark matter becomes more precise. String
tension is however dynamical rather than pregiven, and the hierarchy of Planck constants is necessary in
order to understand the formation of gravitationally bound states. Also the proposal that sparticles
correspond to dark matter becomes much stronger: sparticles actually are dark variants of particles.
A crucial element of the construction is the assumption that super-symplectic and other
super-conformal symmetries having the same structure as 2-D super-conformal groups can be seen as
broken gauge symmetries such that the sub-algebra with conformal weights coming as n-ples of those for
the full algebra acts as gauge symmetries. In particular, the Noether charges of this algebra vanish for
preferred extremals - this would realize the strong form of holography implied by the strong form of
General Coordinate Invariance.
This gives rise to an infinite number of hierarchies of conformal gauge symmetry breakings with
levels labelled by integers n(i) such that n(i) divides n(i+1), interpreted as hierarchies of dark matter with
levels labelled by the value of Planck constant h_eff = n × h. These hierarchies define also hierarchies of
quantum criticalities and are proposed to give rise to inclusion hierarchies of hyperfinite factors of type
II_1 having an interpretation in terms of finite cognitive resolution. These hierarchies would be
fundamental for the understanding of living matter.
For details see the article Updated view about Kähler geometry of WCW. For a summary of earlier
postings see Links to the latest progress in TGD.
posted by Matti Pitkanen @ 1:51 AM
04/26/2015 - http://matpitka.blogspot.com/2015/04/intentions-cognition-and-time.html#comments
Intentions, Cognition, and Time
Intentions involve time in an essential manner, and this led to the idea that p-adic-to-real quantum
jumps could correspond to a realization of intentions as actions. It however seems that this hypothesis,
posing strong additional mathematical challenges, is not needed if one accepts the adelic approach in
which real space-time and its p-adic variants are all present and quantum physics is adelic. I have already
earlier developed the first formulation of p-adic space-time surfaces as cognitive charts of real
space-time surfaces and also the ideas related to the adelic vision.
The recent view involving strong form of holography would provide a dramatically simplified view
about how these representations are formed as continuations of representations of string world sheets
and partonic 2-surfaces in the intersection of the real and p-adic variants of WCW ("World of Classical
Worlds"), in the sense that the parameters characterizing these representations are algebraic
numbers in the algebraic extension of p-adic numbers involved.
For details see the article Intentions, Cognition, and Time. For a summary of earlier postings see Links
to the latest progress in TGD.
posted by Matti Pitkanen @ 12:14 AM 4 comments
4 Comments:
At 12:29 PM, Anonymous said...
Matti, I really think you are on to something (as if you weren't already) with the adelic
approach to things. I was just looking at a paper I wrote on fractal strings and membranes
a few years ago and realized that the adelic product defined by Lapidus and others assigns
to each element of the product a (square integrable) Hilbert space.
Also I think there must be something special about modular arithmetic, the special
status of the integral numbers 12 and 24 hours there, and the approximately 24-hour period of
the standard day. Maybe life requires these almost-ratios? If other planets had cycles not
commensurate then perhaps life would not be favored? --crow
At 10:09 PM, [email protected] said...
To Anonymous:
Thank you for a very stimulating comment.
Adelic approach is free of assumptions which require mathematics which need not
exist: the transformation of p-adic space-time surfaces to real ones was the questionable
assumption. It always takes years to develop the ability to see things from a bigger perspective. Now
adelicity is totally obvious. Being a conservative radical - not a radical radical or a radical
conservative - is the correct strategy which I have been gradually learning. An excellent
lesson!
I think you sent me the book of Lapidus about adelic strings. Witten wrote long
ago an article in which he demonstrated that the product of the real stringy vacuum
amplitude and its p-adic variants equals 1. This is a generalisation of the adelic identity
for a rational number.
One can think that the real amplitude in the intersection of realities and p-adicities is a
rational number for all values of the parameters. If a given p-adic amplitude is just the p-adic
norm of the real amplitude, one would have the adelic identity. But this would require that the
p-adic variant of the amplitude is real number-valued: I have p-adic valued amplitudes. And
Witten's identity holds for the vacuum amplitude. I want it for the entire S-matrix, M-matrix,
and/or U-matrix and for all states of the basis in some sense.
a) Consider first the vacuum amplitude. A weaker form of the identity would be that the
*p-adic norm* of a given p-adic valued amplitude is the same as the p-adic norm of the
rational-valued real amplitude (this generalizes to algebraic extensions, I dare to guess).
This would make sense and give a non-trivial constraint. In particular, the p-adic norm of
the real amplitude would be the inverse of the product of the p-adic norms of the p-adic
amplitudes. Most of these amplitudes should have p-adic norm equal to one. This
condition can make sense only if the p-adic norm of the p-adic amplitude equals 1 for most
primes.
b) In ZEO one must consider S-, M-, or U-matrix elements. U and S are unitary. M is
a product of a hermitian square root of a density matrix times a unitary S-matrix. Consider the S-matrix.
*For S-matrix elements one should have SS^dagger=1. This states the unitarity of the
S-matrix: probability is conserved. Could it make sense to generalize this condition
and demand that it holds true only adelically, that is, for the product of the real and
p-adic norms of SS^dagger in the various number fields? For each state m of the basis: the
product of the real norm of X_mm = (SS^dagger)_mm and the p-adic norms of its p-adic
counterparts would be equal to 1 in the intersection of reality and p-adicities.
A stronger condition would be that this holds for the X_mm themselves. It could, if
the p-adic norm of X_mm is X_mm itself, that is a power of p.
*For a given diagonal element of the unit matrix defining a particular state m one would
have a product of the real norm and the p-adic norms. The number of norms which
differ from unity would be finite. This condition would give a finite number of
exceptional p-adic primes, that is, assign to a given quantum state labelling the
diagonal matrix element of SS^dagger a *finite number of preferred p-adic
primes*!! This is the underlying deep reason for this assignment that I have been looking
for!!
*Unitarity might thus fail in the real sector and in a finite number of p-adic sectors
(otherwise the product of p-adic norms would be infinite or zero). In some sense
the failures would compensate each other in the adelic picture. The failure of
course brings to mind p-adic thermodynamics, which indeed means that SS^dagger is a
density matrix defining the p-adic thermal state!
*The diagonal elements (SS^dagger)_mm in a given number field would define
analogs of probabilities. Could these be interpreted as probabilities whose sum
equals 1? Probability conservation in a given number field. The adelic S-matrix
would be a more sophisticated counterpart of the M-matrix!
One can consider a variant of this. One could also consider the possibility that the p-adic
norm of (SS^dagger)_mm is replaced with its image under canonical identification. The
information loss would not be so huge. This might be required by p-adic thermodynamics.
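The adelic identity for rationals invoked above - the real norm times the product of all p-adic norms equals 1 - is easy to check directly. A minimal sketch in Python (the function names are my own illustration, not from any existing code):

```python
from fractions import Fraction

def padic_norm(x: Fraction, p: int) -> Fraction:
    """p-adic norm |x|_p = p^(-k), where k is the power of p in x."""
    if x == 0:
        return Fraction(0)
    k = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return Fraction(1, p**k) if k >= 0 else Fraction(p**(-k), 1)

def adelic_product(x: Fraction, primes) -> Fraction:
    """Real norm |x| times the p-adic norms over the given primes.
    Equals 1 once the primes include all divisors of numerator and denominator."""
    result = abs(x)
    for p in primes:
        result *= padic_norm(x, p)
    return result

# For x = 12/5: |x| = 12/5, |x|_2 = 1/4, |x|_3 = 1/3, |x|_5 = 5.
print(adelic_product(Fraction(12, 5), [2, 3, 5]))  # -> 1
```

For all primes not dividing numerator or denominator the p-adic norm is 1, which is exactly the finiteness condition discussed in point a) above.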
At 10:14 PM, [email protected] said...
To Anonymous:
I forgot the summary. The outcome could be two breakthroughs: a first-principle
understanding of how preferred p-adic primes are assigned to quantum states, and a first-principle
justification for p-adic thermodynamics.
04/25/2015 - http://matpitka.blogspot.com/2015/04/good-and-evil-life-and-death.html#comments
Good and Evil, Life and Death
In principle the proposed conceptual framework already allows a consideration of the basic
questions relating to concepts like Good and Evil and Life and Death. Of course, too many uncertainties
are involved to allow any definite conclusions, and one could also regard the speculations as outputs of
the babbling period necessarily accompanying the development of the linguistic and conceptual
apparatus making it ultimately possible to discuss these questions more seriously.
Even the most hard-boiled materialistic sceptic mentions ethics and morality when suffering personal
injustice. Is there an actual justification for moral laws? Are they only social conventions, or is there some
hard core involved? Is there some basic ethical principle telling which deeds are good and which deeds are
bad?
A second group of questions relates to life and biological death. How should one define life? What
happens in biological death? Is the self preserved in biological death in some form? Is there
something deserving to be called a soul? Are reincarnations possible? Are we perhaps responsible for our
deeds even after our biological death? Could the law of Karma be consistent with physics? Is liberation
from the cycle of Karma possible?
In the sequel, these questions are discussed from the point of view of the TGD inspired theory of
consciousness. It must be emphasized that the discussion represents various points of view rather than
being a final summary. Also mutually conflicting points of view are considered. The cosmology of
consciousness, the concept of self having a space-time sheet and a causal diamond as its correlates, the
vision about the fundamental role of negentropic entanglement, and the hierarchy of Planck constants
identified as a hierarchy of dark matters and of quantum critical systems provide the building blocks
needed to make guesses about what biological death could mean from the subjective point of view.
For details see the article Good and Evil, Life and Death. For a summary of earlier postings see
Links to the latest progress in TGD.
posted by Matti Pitkanen @ 10:23 PM
2 Comments:
At 4:47 AM, Anonymous said...
The Zen koan about 500 years as a fox is about karma and the denial of it. Liberation from
karma is not denial of karma, nor the rule of karma - the opposite of liberation. I'm tempted to
formulate in this context that even in the liberated state karma does not cease being
observable, even though karma is not being fed causal metabolic energy.
And strangely, "saint" does not become saint by excluding but by including every "sinner".
"Saint" that morally-judgmentally excludes sinners from "higher-self" is the worst
"sinner", the worst Jungian shadow-projector. Those who do most horrible, most hurting
deeds are not the self-loving and self-pleasing "sinners" but the White Knights of Light
and Truth, who build highway to hell paved with "good intentions".
At 5:50 AM,
[email protected] said...
Exactly. Good deeds generate negentropic entanglement with other sinners and are a gift to
them too.
04/24/2015 - http://matpitka.blogspot.com/2015/04/variation-of-newstons-constant-andof.html#comments
Variation of Newton's constant and of length of day
J. D. Anderson et al have published an article discussing observations suggesting a periodic
variation of the measured value of Newton's constant and a variation of the length of day (LOD) (see also this).
This article represents a TGD based explanation of the observations in terms of a variation of the Earth's radius.
The variation would be due to pulsations of the Earth coupling via gravitational interaction to a dark
matter shell with mass about 1.3×10^-4 M_E introduced to explain the Flyby anomaly: the model would predict
ΔG/G = 2ΔR/R and ΔLOD/LOD = 2ΔR_E/R_E with the variations of G and length of day in opposite
phases. The experimental finding ΔR_E/R_E = M_D/M_E is natural in this framework but should be deduced
from first principles.
The gravitational coupling would be in the radial scaling degree of freedom and in rigid body rotational
degrees of freedom. In rotational degrees of freedom the model is in the lowest order approximation
mathematically equivalent with the Kepler model. The model for the formation of planets around the Sun
suggests that the dark matter shell has a radius equal to that of the Moon's orbit. This leads to a prediction for
the oscillation period of the Earth's radius: the prediction is consistent with the observed 5.9 year period. The
dark matter shell would correspond to the n=1 Bohr orbit in the earlier model for quantum gravitational
bound states based on a large value of Planck constant. Also n>1 orbits are suggestive, and their existence
would provide additional support for the TGD view about quantum gravitation.
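The relation ΔLOD/LOD = 2ΔR_E/R_E quoted above follows from angular momentum conservation if the Earth's moment of inertia scales as R^2; a small numerical sanity check of the first-order relation (my own illustration, not code from the article):

```python
def lod_change(delta_R_over_R: float) -> float:
    """Angular momentum conservation: L = I*omega with I proportional to R^2
    gives omega proportional to R^-2, hence LOD = 2*pi/omega scales as R^2.
    Returns the exact fractional change of LOD for a given fractional
    change of the radius."""
    return (1.0 + delta_R_over_R) ** 2 - 1.0

# To first order the fractional LOD change is twice the radius change:
eps = 1e-6
print(lod_change(eps) / eps)  # close to 2
```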
For details see the chapter Cosmology and Astrophysics in Many-Sheeted Space-Time or the article
Variation of Newton's constant and of length of day. For a summary of earlier postings see Links to the
latest progress in TGD.
posted by Matti Pitkanen @ 6:05 AM
04/21/2015 - http://matpitka.blogspot.com/2015/04/connection-between-booleancognition.html#comments
Connection between Boolean cognition and emotions
The weak form of NMP allows the state function reduction to occur in 2^n - 1 manners corresponding to
sub-spaces of the sub-space defined by the n-dimensional projector if the density matrix is an n-dimensional
projector (the outcome corresponding to the 0-dimensional subspace is excluded). If the probability for
the outcome of the state function reduction is the same for all values of the dimension 1 ≤ m ≤ n, the probability
distribution for the outcome is given by the binomial distribution B(n,p) for p=1/2 (head and tail are equally
probable), that is p(m) = b(n,m)×2^-n = (n!/(m!(n-m)!))×2^-n. This gives for the average dimension
E(m) = n/2, so that the negentropy would increase on the average. The world would become gradually
better.
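The binomial bookkeeping above is easy to verify numerically; a minimal sketch (my own illustration):

```python
from math import comb

def outcome_probability(n: int, m: int) -> float:
    """p(m) = C(n,m) * 2^-n: probability that the reduction picks an
    m-dimensional sub-space of the n-dimensional projector, with each
    bit equally likely to be 0 or 1."""
    return comb(n, m) / 2**n

def expected_dimension(n: int) -> float:
    """E(m) = sum_m m * p(m) = n/2 for the binomial distribution B(n, 1/2)."""
    return sum(m * outcome_probability(n, m) for m in range(n + 1))

print(expected_dimension(10))  # -> 5.0, i.e. n/2
```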
One cannot avoid the idea that these different degrees of negentropic entanglement could actually
give a realization of Boolean algebra in terms of conscious experiences.
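Negentropic entanglement rests on the number-theoretic variant of Shannon entropy, in which the logarithm of a probability is replaced by the logarithm of its p-adic norm; for rational probabilities this entropy can be negative. A minimal sketch under that definition (function names are my own illustration):

```python
from fractions import Fraction
from math import log

def log_padic_norm(x: Fraction, p: int) -> float:
    """log|x|_p with |x|_p = p^(-v), v the power of p in x."""
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return -v * log(p)

def number_theoretic_entropy(probs, p: int) -> float:
    """S_p = -sum_n p_n * log|p_n|_p.  A negative value means the
    entanglement carries information (negentropy) for this prime."""
    return -sum(float(q) * log_padic_norm(q, p) for q in probs)

# n-dimensional projector: probabilities 1/n each.  For n = 4 and p = 2
# each |1/4|_2 = 4, so S_2 = -log 4 < 0: negentropic entanglement.
probs = [Fraction(1, 4)] * 4
print(number_theoretic_entropy(probs, 2))  # negative, about -1.386
```

The ordinary Shannon entropy of the same distribution is +log 4, so the sign flip is exactly what distinguishes the number-theoretic notion.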
1. Could one speak about a hierarchy of codes of cognition based on the assignment of different
degrees of "feeling good" to the Boolean statements? If one assumes that the nth bit is always 1,
all independent statements except one correspond to at least two non-vanishing bits and
correspond to negentropic entanglement. Only one statement (only the last bit equal to 1) would
correspond to 1 bit and to a state function reduction reducing the entanglement completely (this brings to
mind the fruit of the tree of Good and Bad Knowledge!).
2. A given hierarchy of breakings of super-symplectic symmetry corresponds to a hierarchy of
integers n_(i+1) = ∏_(k≤i) m_k. The codons of the first code would consist of sequences of m_1 bits. The
codons of the second code consist of m_2 codons of the first code, and so on. One would have a
hierarchy in which the codons of the previous level become the letters of the code words at the next
level of the hierarchy.
In fact, I ended up with an almost-Boolean algebra decades ago when considering the hierarchy of
genetic codes suggested by the hierarchy of Mersenne primes M(n+1) = M_M(n), M_n = 2^n - 1.
1. The hierarchy starting from M_2=3 contains the Mersenne primes 3, 7, 127, 2^127 - 1, and Hilbert
conjectured that all these integers are primes. These numbers are almost dimensions of Boolean
algebras with n = 2, 3, 7, 127 bits. The maximal Boolean sub-algebras have m = n-1 = 1, 2, 6, 126 bits.
2. The observation that m=6 gives 64 elements led to the proposal that it corresponds to a Boolean
algebra assignable to the genetic code and that the sub-algebra represents the maximal number of
independent statements defining analogs of axioms. The remaining elements would correspond
to negations of these statements. I also proposed that the Boolean algebra with m = 126 = 6×21
bits (21 pieces consisting of 6 bits each) corresponds to what I called the memetic code, obviously
realizable as sequences of 21 DNA codons with stop codons included. Emotions and information
are closely related, and peptides are regarded as both information molecules and molecules of
emotion.
3. This hierarchy of codes would have the additional property that the Boolean algebra at the n+1:th
level can be regarded as the set of statements about statements of the previous level. One would
have a hierarchy representing thoughts about thoughts about... It should be emphasized that
there is no need to assume that Hilbert's conjecture is true.
One can obtain this kind of hierarchy as a hierarchy with dimensions m, 2^m, 2^2^m, ..., that is
n(i+1) = 2^n(i). The condition that n(i) divides n(i+1) is non-trivial only at the lowest step and
implies that m is a power of 2, so that the hierarchies start from m = 2^k. This is natural since
Boolean algebras are involved. If n corresponds to the size scale of a CD, it would come as a
power of 2.
The p-adic length scale hypothesis has also led to this conjecture. A related conjecture is that the
sizes of CDs correspond to secondary p-adic length scales, which indeed come as powers of two
by the p-adic length scale hypothesis. In the case of the electron this predicts that the minimal size of the CD
associated with the electron corresponds to the time scale T = .1 seconds, the fundamental time scale in
living matter (10 Hz is the fundamental bio-rhythm). It seems that the basic hypotheses of TGD,
inspired partly by the study of the elementary particle mass spectrum and basic bio-scales (there are
4 p-adic length scales defined by Gaussian Mersenne primes in the range between the cell membrane
thickness 10 nm and the size 2.5 μm of the cell nucleus!), follow from the proposed connection
between emotions and Boolean cognition.
4. NMP would be in the role of God. Strong NMP as God would always force the optimal choice
maximizing the negentropy gain and increasing the negentropy resources of the Universe. Weak NMP
as God allows free choice, so that the negentropy gain need not be maximal and sinners populate the world.
Why would the omnipotent God allow this? The reason is now obvious. The weak form of NMP
makes possible the realization of Boolean algebras in terms of degrees of "feels good"! Without
God allowing the possibility to sin there would be no emotional intelligence!
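The Mersenne hierarchy 3, 7, 127, 2^127 - 1 discussed above can be checked with the Lucas-Lehmer test, which decides the primality of 2^p - 1 without factoring; a sketch (my own illustration):

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, 2^p - 1 is prime iff
    s_(p-2) = 0 mod (2^p - 1), where s_0 = 4 and s_(i+1) = s_i^2 - 2."""
    if p == 2:
        return True  # M_2 = 3 is prime; the test proper needs odd p
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def mersenne_chain(exponent: int = 2, levels: int = 4):
    """Iterate M(n+1) = 2^M(n) - 1 starting from exponent 2:
    returns the chain of exponents 2, 3, 7, 127 whose Mersenne
    numbers are 3, 7, 127, 2^127 - 1."""
    exps = [exponent]
    for _ in range(levels - 1):
        exps.append((1 << exps[-1]) - 1)
    return exps

exps = mersenne_chain()                    # [2, 3, 7, 127]
print(all(lucas_lehmer(p) for p in exps))  # -> True: every level is prime
# Boolean algebras with n = 2, 3, 7, 127 bits have 2^n = M_n + 1 elements,
# which is the "almost dimension" statement in the posting.
```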
Hilbert's conjecture relates in an interesting manner to space-time dimension. Suppose that Hilbert's
conjecture fails and only the four lowest Mersenne integers in the hierarchy are Mersenne primes, that is
3, 7, 127, 2^127 - 1. In TGD one has a hierarchy of dimensions associated with the space-time surface coming as
0, 1, 2, 4 plus the imbedding space dimension 8. The abstraction hierarchy associated with the space-time
dimensions would correspond to the discretization of partonic 2-surfaces as point sets, the discretization of
3-surfaces as sets of strings connecting partonic 2-surfaces characterized by discrete parameters, the
discretization of space-time surfaces as collections of string world sheets with discretized parameters,
and - maybe - the discretization of the imbedding space by a collection of space-time surfaces. Discretization
means that the parameters in question are algebraic numbers in an extension of rationals associated with
p-adic numbers.
In the TGD framework it is clear why the imbedding space cannot be higher-dimensional and why the
hierarchy does not continue. Could there be a deeper connection between these two hierarchies? For
instance, could it be that higher-dimensional manifolds of dimension 2×n can be represented physically
only as unions of n 2-D partonic 2-surfaces (just like a 3×N-dimensional space can be represented as the
configuration space of N point-like particles)? Also infinite primes define a hierarchy of abstractions.
Could it be that one has also now a similar restriction, so that the hierarchy would have only a finite number
of levels, say four? Note that the notions of n-group and n-algebra involve an analogous abstraction
hierarchy.
For details see the article Good and Evil, Life and Death. For a summary of earlier postings see Links to
the latest progress in TGD.
posted by Matti Pitkanen @ 10:33 PM
4 Comments:
At 5:18 AM, Anonymous said...
Also polygonal numbers (feel good! :)) have a similar structure of statements about
statements on the earlier level, e.g. triangular numbers, tetragonal etc. higher dimensional:
0    0    0    0    0
1    1    1    1    1
1    2    3    4    5
1    3    6    10   15
1    4    10   19   31
etc. ...  ...  ...  ...
The pattern can be extended also to the negative side, and there's an amazing finding
connecting Euler's finding about the relation of the "full" set of pentagonal numbers
(http://en.wikipedia.org/wiki/Pentagonal_number_theorem) and the sigma
function (http://en.wikipedia.org/wiki/Divisor_function) together with the sum of all non-cloned
Egyptian fractions aka harmonic numbers and the logarithmic function, resulting in an
elementary problem that is equivalent with the Riemann hypothesis:
http://www.math.lsa.umich.edu/~lagarias/doc/elementaryrh.pdf
A nice introduction to the subject here:
https://www.youtube.com/watch?v=1mSk3J3GlA8
At 5:22 AM, Anonymous said...
Correction: "triangular, tetragonal..." -> triangular, tetrahedral...
At 6:13 AM, Anonymous said...
Scalar feel-good factor allows one to feel better and better, which is good. :)
The fundamental problem with leveled empathy is that leveled negentropy between
e.g. human-form emotional states lets in both feel-good and feel-bad, and raising barriers
(e.g. us-against-them) against the feel-bad aspect blocks the whole holographic empathy
of the multi-observer negentropic field.
Empathy can thus be stated as a strong form of emotional holography, and by not
actively filtering out the emotions of other tribes, other species, spirits and gods, we can
trust that the pain that comes in is nothing compared to the whole of love (=God), while
also fully sympathizing with the fear of opening the heart more and more fully also to the
suffering of others, and non-judgementally allowing that basic fear the space and time and
life-experience that it needs.
The Boolean aspect of what is called 'monogamy' of entangled observables seems to be
the core mathematical issue relating to empathy, understood as strong holography of
emotional negentropic entanglement, and the monogamy of entangled observables (cf.
representation theory and epistemology) is related to the weak form of NMP. However, we do
not need to treat strong NMP and weak NMP as an either-or question when we remember
that according to Spinoza's definition the Absolute contains all attributes, and that Spinoza's
Ethics is a happily smiling feel-better philosophy. :)
With this in mind, can we see a way to combine and relate strong NMP and weak NMP in
terms of both monogamous and polyamorous negentropic entanglements of observables?
At 6:59 AM, [email protected] said...
To Anonymous:
At least now I am happy with weak NMP. It allows the realisation of emotional Boolean
intelligence rather than only the usual cold and academic one;-).
An interesting question: can one map this emotionally represented Boolean algebra to the
fermionic representation of Boolean algebra: an m-D subspace to an m-fermion state? One
should pair many-fermion states representing the n-bit Boolean algebra with the sub-spaces of the
space defined by the projection operator. One should entangle 2^n - 1 many-fermion states with
these 2^n - 1 state function reductions? Sounds crazy!
04/20/2015 - http://matpitka.blogspot.com/2015/04/can-one-identify-quantumphysical.html#comments
Can one identify quantum physical correlates of ethics and moral?
The TGD-inspired theory of Consciousness involves a bundle of new concepts making it possible to
seriously discuss quantum physical correlates of ethics and moral, assuming that we live in the TGD
Universe. In the following I summarize the recent understanding. I do not guarantee that I will agree
with myself tomorrow, since I am just going through this stuff in the updating of the TGD-inspired theory of
Consciousness and Quantum Biology.
Quantum ethics very briefly
Could physics generalized to a theory of consciousness allow one to understand the physical correlates of
ethics and moral? The proposal is that this is the case. The basic ethical principle would be that good
deeds help evolution to occur. Evolution should correspond to the increase of negentropic entanglement
resources, defining negentropy sources, which I have called Akashic records.
This idea can be criticized.
1. If the strong form of NMP prevails, one can worry that the TGD Universe does not allow Evil at all,
perhaps not even genuine free will! No-one wants Evil, but Evil seems to be present in this world.
2. Could one weaken NMP so that it does not force but only allows a reduction to a final
state characterized by a density matrix which is a projection operator? Self could choose whether to
perform a projection to some sub-space of this subspace, say a 1-D ray as in the ordinary state
function reduction. NMP would be like the Christian God allowing the sinner to choose between
Good and Evil. The final entanglement negentropy would be a measure for the goodness of the
deed. This is so if entanglement negentropy is a correlate for love. Deeds which are done with
love would be good. Reduction of entanglement would in turn mean loneliness and separation.
3. Or one could think that the definition of a good deed is a selection between deeds which
correspond to the same maximal increase of negentropy, so that NMP cannot tell what happens.
For instance, the density matrix operator is a direct sum of projection operators of the same dimension
but varying coefficients, and there is a selection between these. It is difficult to imagine what the
criterion for a good deed could be in this case, and how the self can know what is the good deed and
what is the bad deed.
Good deeds would support evolution. There are many manners to interpret evolution in the TGD
Universe.
1. p-Adic evolution would mean a gradual increase of the p-adic primes characterizing individual
partonic 2-surfaces and therefore of their size. The identification of p-adic space-time sheets as
representations for cognitions gives additional concreteness to this vision. The earlier proposal
that p-adic-to-real phase transitions correspond to the realization of intentions and the formation of
cognitions seems however to be wrong. Instead, the adelic view that both real and p-adic sectors are
present simultaneously and that fermions at string world sheets correspond to the intersection of
realities and p-adicities seems more realistic.
The inclusion of the phases q=exp(i2π/n) in the algebraic extension of p-adics allows one to define
the notion of angle in the p-adic context, but only with a finite resolution, since only a finite number of
angles are represented as phases for a given value of n. The increase of the integer n could be
interpreted as the emergence of higher algebraic extensions of p-adic numbers in the intersection
of the real and p-adic worlds. These observations suggest that all three views about evolution are
closely related.
2. The hierarchy of Planck constants suggests evolution as the gradual increase of the Planck
constant characterizing a p-adic space-time sheet (or partonic 2-surface for the minimal option).
The original vision about this evolution was as a migration to the pages of the book-like structure
defined by the generalized imbedding space, and it has therefore a quite concrete geometric meaning.
It implies longer time scales of long term memory and planned action and macroscopic quantum
coherence in longer scales.
The new view is in terms of the first quantum jump to the opposite boundary of the CD leading to
the death of the self and its re-incarnation at the opposite boundary.
3. The vision about Life as something in the intersection of real and p-adic worlds allows one to see
evolution information-theoretically as the increase of number-theoretic entanglement negentropy implying
entanglement in increasing length scales. This option is equivalent with the second view and
consistent with the first one if the effective p-adic topology characterizes the real partonic 2-surfaces in the intersection of p-adic and real worlds.
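The finite angle resolution provided by the phases q = exp(i2π/n) mentioned in point 1 can be made concrete: only the n phases q^k exist in the extension, so a real angle is representable only modulo 2π/n. A small numeric sketch (my own illustration):

```python
import cmath
import math

def phases(n: int):
    """The n phases q^k, q = exp(i*2*pi/n), available in the extension."""
    return [cmath.exp(2j * math.pi * k / n) for k in range(n)]

def nearest_phase_angle(angle: float, n: int) -> float:
    """Best representable approximation of a real angle at resolution n:
    angles are known only up to multiples of 2*pi/n."""
    k = round(angle * n / (2 * math.pi)) % n
    return 2 * math.pi * k / n

# n = 12 gives a 30-degree resolution: 36 degrees is rounded to 30 degrees.
print(math.degrees(nearest_phase_angle(math.pi / 5, 12)))
```

Increasing n refines the resolution, which matches the interpretation of growing n as the emergence of higher algebraic extensions.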
The third kind of evolution would mean also the evolution of spiritual consciousness, if the proposed
interpretation is correct. In each quantum jump the U-process generates a superposition of states in which
any sub-system can have both real and algebraic entanglement with the external world. If the state function
reduction process involves also the choice of the type of entanglement, it could be interpreted as a choice
between good and evil: on one hand the hedonistic complete freedom resulting as the entanglement entropy is
reduced to zero, and on the other hand the negentropic entanglement implying correlations with the external
world and meaning giving up the maximal freedom. The selfish option means
separation and loneliness. The second option means expansion of consciousness - a fusion to the ocean
of consciousness as described by spiritual practices.
In this framework one could understand the physical correlates of ethics and moral. The ethics is
simple: evolution of consciousness to higher levels is a good thing. Anything which tends to reduce
consciousness represents violence and is a bad thing. Moral rules are related to the relationship between
individual and society and presumably develop via a self-organization process and are by no means
unique. Moral rules however tend to optimize evolution. As blind normative rules they can however
become a source of violence, identified as any action which reduces the level of consciousness.
There is an entire hierarchy of selves, and every self has the selfish desire to survive; moral rules
develop as a kind of compromise and evolve all the time. ZEO leads to the notion that I have christened the
cosmology of consciousness. It forces one to extend the concept of society to a four-dimensional society. The
decisions of "me now" affect both my past and future, and time-like quantum entanglement makes
possible conscious communication in the time direction by sharing conscious experiences. One can
therefore speak of a genuinely four-dimensional society. Besides my next-door neighbors I had better
take into account also my nearest neighbors in the past and future (the nearest ones being perhaps copies of
me!). If I make wrong decisions, those copies of me in the future and past will suffer the most. Perhaps my
personal hell and paradise are here and are created mostly by me.
What could the quantum correlates of moral be?
We make moral choices all the time. Some deeds are good, some deeds are bad. In the world of the
materialist there are no moral choices: the deeds are not good or bad, there are just physical events. I am
not a materialist, so I cannot avoid questions such as how the moral rules emerge and how some
deeds become good and some deeds bad. Negentropic entanglement is the obvious first guess if one
wants to understand the emergence of moral.
1. One can start from ordinary quantum entanglement. It corresponds to a superposition of pairs of
states. The first state corresponds to the internal state of the self, the second state to a state of the
external world or the biological body of the self. In negentropic quantum entanglement each state is replaced
with a pair of sub-spaces of the state spaces of the self and the external world. The dimension of the
sub-space depends on which pair is in question. In the state function reduction one of these pairs is
selected and the deed is done. How do some of these deeds become good and some bad?
2. Obviously the value of heff/h=n gives the criterion in the case that the weak form of NMP holds true.
Recall that the weak form of NMP allows only the possibility to generate negentropic entanglement
but does not force it. NMP is like God allowing the possibility to do good but not forcing good
deeds.
The self can choose any sub-space of the subspace defined by the n-dimensional projector, and a 1-D
subspace corresponds to the standard quantum measurement. For n=1 the state function
reduction leads to vanishing negentropy and to a separation of the self and the target of the action.
Negentropy does not increase in this action and the self is isolated from the target: a kind of price for
sin.
For the maximal dimension of this sub-space the negentropy gain is maximal. This deed is
good, and by the proposed criterion the negentropic entanglement corresponds to love or, more
generally, positively colored conscious experience. Interestingly, there are 2^n possible choices,
which is the dimension of the Boolean algebra consisting of n independent bits. This could relate
directly to fermionic oscillator operators defining a basis of Boolean algebra. The deed in this
sense would be a choice of how loving the attention directed towards a system of the external world is.
3. Could the moral rules of society be represented as this kind of entanglement patterns between its
members? Here one of course has an entire fractal hierarchy of societies corresponding to different
length scales. Attention, with magnetic flux tubes serving as its correlates, is the basic element also
in TGD inspired quantum biology, already at the level of bio-molecules and even elementary
particles. The value of heff/h=n associated with the magnetic flux tube connecting the members of the
pair would serve as a measure for the ethical value of the maximally good deed. Dark phases of
matter would correspond to good: usually darkness is associated with bad!
4. These moral rules seem to be universal. There are however also moral rules - or should one talk
about rules of survival - which are based on negative emotions such as fear. Moral rules as rules
of desired behavior are often tailored for the purposes of the power holder. How could this kind of moral
rules develop? Maybe they cannot be realized in terms of negentropic entanglement.
Maybe the superposition of the allowed alternatives for the deed contains only the alternatives
allowed by the power holder, and the superposition in question corresponds to ordinary
entanglement, for which the signature is simple: the probabilities of the various options are different.
This forces the self to choose just one option from the options that the power holder accepts. These
rules do not allow the generation of a loving relationship.
Moral rules seem to be generated by society, upbringing, culture, civilization. How do the moral rules
develop? One can try to formulate an answer in terms of quantum physical correlates.
1. Basically the rules should be generated in the state function reductions corresponding to
volitional action, that is, in the first state function reduction to the earlier active
boundary of the CD. The old self dies and a new self is born at the opposite boundary of the CD, and the arrow
of time associated with the CD changes.
2. The repeated sequences of state function reductions can generate negentropic entanglement
during the quantum evolutions between them. This time evolution would be the analog of the
time evolution defined by the Hamiltonian - that is, energy - associated with the ordinary time translation,
whereas the first state function reduction at the opposite boundary, inducing a scaling of heff and
CD, would be accompanied by a time evolution defined by the conformal scaling generator L_0.
Note that the state at the passive boundary does not change during the sequence of repeated state
function reductions. These repeated reductions however change the parts of the zero energy states
associated with the new active boundary and generate also negentropic entanglement. As the self
dies, the moral choices can be made if the weak form of NMP is true.
3. Who makes the moral choices? It looks of course very weird that the self would apply free will only
at the moment of its death or birth! The situation is saved by the fact that the self also has sub-selves,
which correspond to sub-CDs and represent mental images of the self. We know that mental images
die (as also we do some day) and are born again (as also we do some day), and these mental images
can generate negentropic resources within the CD of the self.
One can argue that these mental images do not decide whether to make the maximally ethical
choice at the moment of death. The decision must be made by a self at a higher level. It is me who
decides about the fate of my mental images - to some degree also after their death! I can choose
how negentropic the quantum entanglement characterizing the relationship of my mental
image and the world outside it is. I realize that the misused idea of positive thinking seems to
unavoidably creep in! I have however no intention to make money with it!
There are still many questions waiting for a more detailed answer. These questions are also a good way to detect logical inconsistencies.
1. What is the size of the CD characterizing a self? For electron it would be at least of the order of Earth size. During the lifetime of a CD its size increases, and for us the order of magnitude is measured in light-lifetime. This would allow us to understand how our everyday deeds affect the environment in terms of our sub-selves and their entanglement with the external world, which is actually our internal world, at least if magnetic bodies are considered.
2. Can one assume that the dynamics inside the CD is independent of what happens outside it? Can one say that the boundaries of the CD define the ends of space-time, or does space-time continue outside them? Do the boundaries of the CD define the boundaries of a 4-D spotlight of attention, or of one particular reality? Does the answer to this question have any relevance if everything physically testable is formulated in terms of the physics of string world sheets associated with space-time surfaces inside the CD?
Note that the (average) size of CDs (which could be in superposition, but need not be if every repeated state function reduction is followed by a localization in the moduli space of CDs) increases during the life cycle of the self. This makes possible the generation of negentropic entanglement between more and more distant systems. I have written about the possibility that ZEO could make possible interaction with distant civilizations (see this). The possibility of communications in both time directions would allow one to circumvent the barrier due to the finite light-velocity, and gravitational quantum coherence in cosmic scales would make negentropic entanglement possible.
3. How do selves interact? CDs as spotlights of attention should overlap in order for interaction to be possible. The formation of flux tubes makes quantum entanglement possible. The string world sheets carrying fermions are also essential correlates of entanglement, and the possible entanglement is between fermions associated with partonic 2-surfaces. The string world sheets define the intersection of the real and p-adic worlds, where cognition and life reside.
For details see the article Good and Evil, Life and Death.
posted by Matti Pitkanen @ 10:34 PM
04/20/2015 - http://matpitka.blogspot.com/2015/04/intentions-cognitions-time-and-padic.html#comments
Intentions, cognitions, time, and p-adic physics
Intentions involve time in an essential manner, and this led to the idea that p-adic-to-real quantum jumps could correspond to a realization of intentions as actions. It however seems that this hypothesis, posing strong additional mathematical challenges, is not needed if one accepts the adelic approach in which real space-time and its p-adic variants are all present and quantum physics is adelic. I have developed the first formulation of p-adic space-time surfaces and the ideas related to the adelic vision (see this, this, and this).
1. What are intentions?
One of the earlier ideas about the flow of subjective time was that it corresponds to a phase transition front representing a transformation of intentions to actions, propagating towards the geometric future quantum jump by quantum jump. The assumption about this front is unnecessary in the recent view inspired by ZEO.
Intentions should relate to active aspects of conscious experience. The question is what the quantum
physical correlates of intentions are and what happens in the transformation of intention to action.
1. The old proposal is that a p-adic-to-real transition could correspond to the realization of intention as action. One can even consider the possibility that the sequence of state function reductions decomposes into pairs of real-to-p-adic and p-adic-to-real transitions. This picture does not explain why and how an intention gradually grows stronger and is finally realized. The identification of p-adic space-time sheets as correlates of cognition is however natural.
2. The newer proposal, which might be called adelic, is that real and p-adic space-time sheets form a larger sensory-cognitive structure: cognitive and sensory aspects would be simultaneously present. Real and p-adic space-time surfaces would form a single coherent whole, which could be called adelic space-time. All p-adic manifolds could be present and define a kind of chart map of the real preferred extremals, so that they would not be independent entities as for the first option. The first objection is that the assignment of fermions separately to every factor of the adelic space-time does not make sense. This objection is circumvented if fermions belong to the intersection of realities and p-adicities.
This makes sense if string world sheets carrying the induced spinor fields define seats of cognitive representations in the intersection of reality and p-adicities. Cognition would still be associated with the p-adic space-time sheets and sensory experience with the real ones. What can be sensed and cognized would reside in the intersection.
Intention would however be something different for the adelic option. The intention to perform a quantum jump at the opposite boundary would develop during the sequence of state function reductions at the fixed boundary, and eventually NMP would force the transformation of intention to action as the first state function reduction at the opposite boundary. NMP would guarantee that the urge to do something grows so strong that eventually something is done.
Intention involves two aspects: the plan for achieving something, which corresponds to cognition, and the will to achieve something, which corresponds to an emotional state. These aspects could correspond to the p-adic and real aspects of intentionality.
2. p-Adic physics as physics of only cognition?
There are two views about the p-adic-real correspondence, corresponding to two views about p-adic physics. According to the first view, p-adic physics defines correlates for both cognition and intentionality, whereas the second view states that it provides correlates for cognition only.
1. Option A: The older view is that p-adic-to-real transitions realize intentions as actions and the opposite transitions generate cognitive representations. A quantum state would be either real or p-adic. This option raises hard mathematical challenges, since scattering amplitudes between different number fields are needed and the required mathematics might not exist at all.
2. Option B: The second view is that cognitive and sensory aspects of experience are simultaneously present at all levels. This means that real space-time surfaces and their p-adic counterparts form a larger structure in the spirit of what might be called adelic TGD. p-Adic space-time charts could be present for all primes. It is of course necessary to understand why it is possible to assign a definite prime to a given elementary particle.
This option could be developed by generalizing the existing mathematics of adeles by replacing a number in a given number field with a space-time surface in the imbedding space corresponding to that number field. Therefore this option looks more promising. For this option also the development of intention can be understood. The condition that the scattering amplitudes are in the intersection of reality and p-adicities is a very powerful condition on the scattering amplitudes and would reduce the realization of number theoretical universality and p-adicization to that for string world sheets and partonic 2-surfaces.
For instance, the difficult problem of defining p-adic analogs of topological invariants would trivialize, since these invariants (say genus) have an algebraic representation for 2-D geometries. The 2-dimensionality of cognitive representations would perhaps be basically due to the close correspondence between algebra and topology in dimension D=2.
Most of the following considerations apply in both cases.
3. Some questions to ponder
The following questions are part of the list of questions that one must ponder.
a) Do cognitive representations reside in the intersection of reality and p-adicities?
The idea that cognitive representations reside in the intersection of reality and various p-adicities is one of the key ideas of the TGD-inspired theory of consciousness.
1. All quantum states have vanishing total quantum numbers in ZEO, which now forms the basis of quantum TGD (see this). In principle conservation laws do not pose any constraints on possibly occurring real--p-adic transitions (Option A) if they occur between zero energy states.
On the other hand, there are good hopes for defining the p-adic variants of conserved quantities by algebraic continuation, since the stringy quantal Noether charges make sense in all number fields if the string world sheets are in the real--p-adic intersection. This continuation is indeed needed if quantum states have adelic structure (Option B). In accordance with this, quantum classical correspondence (QCC) demands that the classical conserved quantities in the Cartan algebra of symmetries are equal to the eigenvalues of the quantal charges.
2. The starting point is the interpretation of fermions as correlates for Boolean cognition and of p-adic space-time sheets as space-time correlates for cognition (see this). Induced spinor fields are localized at string world sheets, which suggests that string world sheets and partonic 2-surfaces define cognitive representations in the intersection of realities and p-adicities. The space-time adele would have a book-like structure, with the back of the book defined by the string world sheets.
3. At the level of partonic 2-surfaces, common rational points (or more generally, common points in an algebraic extension of rationals) correspond to the real--p-adic intersection. It is natural to identify this set of points as the intersection of string world sheets and partonic 2-surfaces at the boundaries of CDs. These points would also correspond to the ends of strings connecting partonic 2-surfaces and to the ends of fermion lines at the orbits of partonic 2-surfaces (at these surfaces the signature of the induced 4-metric changes). This would give a direct connection with fermions and Boolean cognition.
1. For option A the interpretation is simple. The larger the number of points, the higher the probability for the transitions to occur. This is because the transition amplitude must involve the sum of amplitudes determined by data from the common points.
2. For option B the number of common points measures the goodness of the particular cognitive representation but does not tell anything about the probability of any quantum transition. It however allows one to discriminate between different p-adic primes using the precision of the cognitive representation as a criterion. For instance, the non-determinism of Kähler action could resemble p-adic non-determinism for some algebraic extension of a p-adic number field for some value of p. Also the entanglement assignable to a density matrix which is an n-dimensional projector would be negentropic only if the p-adic prime defining the number theoretic entropy is a divisor of n. Therefore an entangled quantum state would also give a strong suggestion about the optimal p-adic cognitive representation as that associated with the largest power of p appearing in n.
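The divisibility criterion above can be illustrated with a small numerical sketch (my own illustration, not from the original text; one common convention for the number theoretic entropy is used). For an n-dimensional projector the entanglement probabilities are P_k = 1/n, and S_p = -Σ_k P_k log(|P_k|_p) reduces to -v_p(n) log p, which is negative (negentropic) exactly when p divides n, and most negative for the prime whose largest power divides n:

```python
import math

def v_p(n, p):
    """p-adic valuation: the largest k with p**k dividing the integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def number_theoretic_entropy(n, p):
    """S_p = -sum_k P_k * log(|P_k|_p) for the uniform probabilities
    P_k = 1/n of an n-dimensional projector.  Since |1/n|_p = p**v_p(n),
    this reduces to -v_p(n) * log(p)."""
    norm = p ** v_p(n, p)  # the p-adic norm |1/n|_p
    return -sum((1.0 / n) * math.log(norm) for _ in range(n))

n = 12  # = 2**2 * 3
for p in (2, 3, 5):
    print(p, number_theoretic_entropy(n, p))
# Negative (negentropic) exactly for p dividing n, and most negative
# for p = 2, the prime whose largest power (2**2) appears in n.
```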
b) Could cognitive resolution fix the measurement resolution?
For p-adic numbers the algebraic extension used (roots of unity) fixes the resolution in angle degrees of freedom, and pinary cutoffs fix the resolution in "radial" variables, which are naturally positive. Could the character of a quantum state, or perhaps a quantum transition, fix the measurement resolution uniquely?
1. If transitions (state function reductions) can occur only between different number fields (Option A), discretization is unavoidable, and unique if maximal. For real-to-real transitions the discretization would be motivated only by finite measurement resolution and would be neither necessary nor unique. Discretization is required and unique also if one requires an adelic structure for the state space (Option B). Therefore both options A and B are allowed by this criterion.
2. For both options, cognition and intention (if p-adic) would be one half of existence, and sensory perception and motor action would be the second half of existence at the fundamental level. Sensory experience and motor action would be time reversals of each other. This would be true even at the level of elementary particles, which would explain the amazing success of p-adic mass calculations.
3. For option A the state function reduction sequence would correspond to the formation of p-adic maps of real maps and real maps of p-adic maps: real → p-adic → real → ... For option B it would correspond to the sequence adelic → adelic → adelic → ...
4. For both options p-adic and real physics would be unified into a single coherent whole at the fundamental level, but the adelic option would be much simpler. This kind of unification is highly suggestive (consider only the success of p-adic mass calculations), but I have not yet really seriously considered what it could mean.
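As a toy illustration of the two kinds of cutoffs mentioned above (the function names are mine, purely for illustration): angle degrees of freedom are discretized by n-th roots of unity, and positive "radial" variables by a pinary cutoff keeping k base-p digits.

```python
import cmath
import math

def angle_resolution(phi, n):
    """Snap a phase angle phi to the nearest n-th root of unity:
    the angular resolution is 2*pi/n."""
    k = round(phi * n / (2 * math.pi)) % n
    return cmath.exp(2j * math.pi * k / n)

def pinary_cutoff(x, p, k):
    """Keep only k base-p (pinary) digits of a non-negative x below
    the pinary point: the 'radial' resolution is p**-k."""
    scale = p ** k
    return int(x * scale) / scale

# n = 12 roots of unity give angular steps of 30 degrees;
# a 2-adic cutoff with k = 3 gives radial resolution 1/8.
print(angle_resolution(math.pi / 2, 12))  # numerically close to 1j
print(pinary_cutoff(0.8, 2, 3))           # 0.75, i.e. 6/8
```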
c) What selects the preferred p-adic prime?
What determines the p-adic prime or preferred p-adic prime assignable to the system considered? Is
it unique? Can it change?
1. An attractive hypothesis is that the most favorable p-adic prime is a factor of the integer n defining the dimension of the n×n density matrix associated with the flux tubes/fermionic strings connecting partonic 2-surfaces: the presence of fermionic strings already implies at least two partonic 2-surfaces. During the sequence of reductions at the same boundary of CD, n receives additional factors so that p cannot change. If wormhole contacts behave as magnetic monopoles, there must be at least two of them connected by monopole flux tubes. This would give a connection with negentropic entanglement and, via heff/h = n, with quantum criticality, dark matter, and the hierarchy of inclusions of HFFs.
2. A second possibility is that the classical non-determinism, making itself visible via super-symplectic invariance acting as broken conformal gauge invariance, has the same character as p-adic non-determinism for some value of the p-adic prime. This would mean that p-adic space-time surfaces would be especially good representations of real space-time sheets. At the lowest level of the hierarchy this would mean a large number of common points; at higher levels, a large number of common parameter values in the algebraic extension of rationals in question.
d) How does finite measurement resolution relate to hyper-finite factors?
The connection with hyper-finite factors suggests itself.
1. Negentropic entanglement can be said to be stabilized by finite cognitive resolution if hyper-finite factors are associated with the hierarchy of Planck constants and cognitive resolutions. For HFFs the projection to a single ray of state space in state function reduction is replaced with a projection to an infinite-dimensional sub-space whose von Neumann dimension is not larger than one.
2. This raises an interesting question. Could infinite integers constructible from infinite primes correspond to these infinite dimensions, so that the prime p would appear as a factor of this kind of infinite integer? One can say that for inclusions of hyper-finite factors the ratio of dimensions for the including and included factors is the quantum dimension, an algebraic number expressible in terms of the quantum phase q = exp(i2π/n). Could n correspond to the integer ratio n = nf/ni for the integers characterizing the sub-algebras of the super-symplectic algebra acting as gauge transformations?
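A small numerical sketch of the quantum dimensions mentioned above (offered only as an illustration; the formula 4cos²(π/n) for the index of a Jones inclusion is a standard fact, not a claim from the original text). For q = exp(i2π/n) the quantum integer [m]_q equals sin(mπ/n)/sin(π/n), and the Jones index [2]_q² = 4cos²(π/n) increases towards the limiting value 4 as n grows:

```python
import math

def quantum_integer(m, n):
    """Quantum integer [m]_q = sin(m*pi/n) / sin(pi/n) for q = exp(i*2*pi/n)."""
    return math.sin(m * math.pi / n) / math.sin(math.pi / n)

def jones_index(n):
    """Index of a Jones inclusion for n >= 3: [2]_q**2 = 4*cos(pi/n)**2."""
    return quantum_integer(2, n) ** 2

for n in (3, 4, 6, 12, 1000):
    print(n, jones_index(n))
# The sequence 1, 2, 3, ... approaches the limiting dimension 4 from below.
```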
4. Generalizing the notion of p-adic space-time surface
The notion of a p-adic manifold (see this) is an attempt to formulate p-adic space-time surfaces, identified as preferred extremals of the p-adic variants of the field equations, as cognitive charts of real space-time sheets. The essential point here is that the p-adic variants of the field equations make sense: this is due to the fact that the induced metric and induced gauge fields make sense (differential geometry exists p-adically, whereas global geometry involving notions of length, area, etc. does not; in particular, the notion of angle and conformal invariance still make sense).
The second key element is finite resolution, so that the p-adic chart map is not unique. The same applies to the real counterpart of a p-adic extremal having a representation as a space-time correlate for an intention realized as action.
The discretization of the entire space-time surface proposed in the formulation of the p-adic manifold concept (see this) looks like too naive an approach. It is plausible that one has an abstraction hierarchy for discretizations at various abstraction levels.
1. The simplest discretization would occur at the space-time level only at partonic 2-surfaces, in terms of string ends identified as algebraic points in the extension of p-adics used. For the boundaries of string world sheets at the orbits of partonic 2-surfaces one would have a discretization for the parameters defining the boundary curve. By field equations this curve is actually a segment of a light-like geodesic line, characterized by an initial light-like 8-velocity, which should therefore be a number in an algebraic extension of rationals. The string world sheets should have a similar parameterization in terms of algebraic numbers.
By conformal invariance, the finite-dimensional conformal moduli spaces and topological invariants would characterize string world sheets and partonic 2-surfaces. The p-adic variant of
Teichmueller parameters was indeed introduced in p-adic mass calculations and corresponds to the dominating contribution to the particle mass (see this and this).
2. What might be called a co-dimension 2 rule for discretization suggests itself. A partonic 2-surface would be replaced with the ends of the fermion lines at it, or equivalently with the ends of the space-like strings connecting partonic 2-surfaces at it. The 3-D partonic orbit would be replaced with the fermion lines at it. The 4-D space-time surface would be replaced with 2-D string world sheets. Number theoretically this would mean that one always has a commutative tangent space. Physically, the condition that em charge is well-defined for the spinor modes would demand the co-dimension 2 rule.
3. This rule would reduce the real--p-adic correspondence at the space-time level, that is, the construction of real and p-adic space-time surfaces as pairs, to the corresponding construction for string world sheets and partonic 2-surfaces, which determine the space-time surfaces algebraically as preferred extremals of Kähler action. Strong form of holography indeed leads to the vision that these geometric objects can be extended to 4-D space-time surfaces representing preferred extremals.
4. In accordance with the generalization of the AdS/CFT correspondence to the TGD framework, cognitive representations of physics would involve only partonic 2-surfaces and string world sheets. This would tell more about cognition than about the Universe. The 2-D objects in question would be in the intersection of reality and p-adicities and would define cognitive representations of 4-D physics. Both classical and quantum physics would be adelic.
5. Space-time surfaces would not be unique but would possess a degeneracy corresponding to a sub-algebra of the super-symplectic algebra isomorphic to it and acting as conformal gauge symmetries, giving rise to n conformal gauge invariance classes. The conformal weights for the sub-algebra would be n-multiples of those for the entire algebra, and n would correspond to the effective Planck constant heff/h = n. The hierarchy of quantum criticalities labelled by n would correspond to a hierarchy of cognitive resolutions defining measurement resolutions.
Clearly, very many of the big ideas behind TGD and the TGD-inspired theory of consciousness would have this picture as a Boolean intersection.
5. Number theoretic universality for cognitive representations
1. By number theoretic universality, p-adic zero energy states should be formally similar to their real counterparts for option B. For option A, the states between which real--p-adic transitions are highly probable would be similar. The states would have as basic building bricks the elements of the Yangian of the super-symplectic algebra associated with these strings, which one can hope to be algebraically universal.
2. Finite measurement resolution demands that all scattering amplitudes representing zero energy states involve discretization. In a purely p-adic context this is unavoidable because the notion of integral is highly problematic. The residue integral is p-adically well-defined if one can deal with π.
A p-adic integral can be defined as the algebraic continuation of a real integral made possible by the notion of a p-adic manifold, and this works at least in the real--p-adic intersection. String world sheets would belong to the intersection if they are cognitive representations, as the interpretation of fermions as correlates of Boolean cognition suggests. In this case there are excellent hopes that all real integrals can be continued to the various p-adic sectors (which can involve algebraic extensions of p-adic number fields). Quantum TGD would be adelic. There are of course potential problems with transcendentals like powers of π.
3. Discrete Fourier analysis allows one to define integration in angle degrees of freedom represented in terms of an algebraic extension involving roots of unity. In a purely p-adic context the notion of angle does not make sense, but trigonometric functions do: the reason is that only the local aspects of geometry, characterized by the metric, generalize. The global aspects, such as line length involving an integral, do not. One can however introduce algebraic extensions of p-adic numbers containing roots of unity, and this gives rise to a realistic notion of trigonometric function. One can also define the counterpart of integration as discrete Fourier analysis in discretized angle degrees of freedom.
4. Maybe the 2-dimensionality of cognition has something to do with the fact that quaternions and octonions do not have p-adic counterparts (the p-adic norm squared of a quaternion/octonion can vanish). I have earlier proposed that life and cognitive representations reside in the real--p-adic intersection. The stringy description of TGD could be seen as a number theoretically universal cognitive representation of 4-D physics: the best that the limitations of cognition allow one to obtain. This hypothesis would also guarantee that the various conserved quantal charges make sense both in the real and the p-adic sense, as the p-adic mass calculations demand.
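The parenthetical claim in item 4, that the p-adic norm squared of a quaternion can vanish, can be checked with a short numerical sketch (my own, requiring Python 3.8+ for the modular inverse via pow). The standard fact used is that -1 is a square in Q_p for p ≡ 1 (mod 4), so the quaternion 1 + x·i with x² = -1 has norm squared 1 + x² = 0:

```python
def hensel_sqrt_minus_one(p, k):
    """Newton/Hensel-lift a solution of x**2 = -1 (mod p) to mod p**k.
    Assumes p = 1 (mod 4), so a starting solution exists (e.g. 2 for p = 5)."""
    x = next(a for a in range(1, p) if (a * a + 1) % p == 0)
    mod = p
    while mod < p ** k:
        mod = min(mod * mod, p ** k)
        # Newton step for f(x) = x**2 + 1; 2*x is invertible since p is odd.
        x = (x - (x * x + 1) * pow(2 * x, -1, mod)) % mod
    return x

p, k = 5, 8
x = hensel_sqrt_minus_one(p, k)
# Norm squared of the quaternion 1 + x*i is 1 + x**2; it vanishes
# mod p**k for every k, so its p-adic norm tends to zero in the limit.
print((1 + x * x) % p ** k)  # prints 0
```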
posted by Matti Pitkanen @ 6:53 AM
8 Comments:
At 5:22 AM,
Anonymous said...
It is a joy to see how TGD keeps on evolving and boldly asking genuine questions.
For further evolution, as an informant of a bundle of conscious experiences I can't identify with the 2-dimensionality of cognition, and I suggest that that idea might simply arise from an entangled state with a primarily visual sentient-cognitive state (as also "theory" literally and etymologically means, from the Greek verb theaomai, to watch, from which theater etc. are derived), i.e. from cognitive entanglement and filtering with a space-time sheet of sentience with visual 2D-character (binary mapping into lighted (and colored) pointillistic (a la certain impressionist painters) points vs. unlighted/dark points).
A conscious body-sense feel of empathy (cf. negentropic entanglement) has a very different geometric connotation from visual sensing. I, as an informant of this 'thusly' conscious experience, can best describe it as an infinitely(?) dense point at the heart chakra, where conscious sensations of "good feel" and "bad feel" locally alternate in various mixtures and intensities, surrounded by field-like extensions, for which the best mathematical analogy so far has been some kind of Dehn-ball. The 'ball' aspect sensing happens in 3-D and/or 4-D, and the ball-like field consists of n-dimensional dimension lines (cf. "wormholes") allowing heart-local "non-local" empathy of good feel / bad feel entanglements also with other sentient beings and conscious informants.
According to the most functional ethical theory and practice I'm aware of, the 'good feel' is the norm (universe=god=love) and the 'bad feels' are exceptions to the rule arising from the breaking of strong entanglements aka "attachments", such as the actual loss of a love, or fear and worry of loss projected to some other space-time sheet than 'now'. Informant expressions such as "heart-broken" and "without him/her/it, there's an empty space within me", etc. gain very literal spatio-temporal interpretations. On the other hand, according to the critical-ethical theory of global-local, the Universal Source of 'good feel' is already and always inside each heart and each and every point in space-time, and can also be consciously sensed-experienced when the filters that cover it are removed.
One further comment: I associate intending with 'focus', and as the word in-tention suggests, the movement has a more pointed character than that of a planar mapping. If needs that give rise to intentions arise adelically, the most self-suggestive geometry of intentional targeting to fulfill the adelically metabolic need would be ideles. Or just 'id-gesture', as the Latin version of Freud goes. Note that when walking, the intention to not stumble and fall, and when stumbling, step-dancing faster than conscious thought, is very carefully focused on the point of balance. :)
At 6:35 AM,
[email protected] said...
To Anonymous:
I have been very critical about string models. But I must admit that they have caught
something very profound about the structure of existence. String world sheets and partonic 2-surfaces could be seen as the maximal cognitive representations that one can have. One can of course have a lot of them.
One extremely nice feature is that conformal invariance implies the reduction of WCW for string world sheets and partonic 2-surfaces to the conformal moduli space for them. These spaces are finite-dimensional, and I have already applied them in p-adic mass calculations.
Too many independent arguments lead to these objects, so that I can only accept them: strong form of holography, well-definedness of em charge, octonionic spinor structure making possible 8-D twistorialization, commutativity of the tangent spaces of objects in the intersection of reality and p-adicities, non-existence of p-adic variants of quaternions and octonions, fermions as correlates of Boolean cognition and the arguments forcing their localization to string world sheets, and so on…
The Book analogy is at work again. String world sheets define the back of the book
with space-time surfaces in various p-adic number fields as pages.
It is surprising that the notion of negentropic entanglement also allows one to seriously discuss the quantum correlates of ethics and moral rules. In standard physics one has only events. NMP makes it possible to talk about deeds.
The question about the precise form of NMP leads directly to the key question which spiritual people have discussed for millennia: why does God allow Evil? I guess that the weak form of NMP, allowing genuine free will in good and bad, is the only mathematically consistent option.
At 7:14 PM, Anonymous said...
One feels and thinks that it would be a nicer deed if one's mode of experiencing were received as empirical phenomenology that an explanatory function such as theory formulation aims to explain, instead of excluding and limiting theoretically allowed possible experience on the grounds of theoretical presuppositions.
Neural networks have a filtering function, and yes, they have stringy character. We do not, however, presume or insist that full-body sentience (cf. e.g. magnetic bodies) and consciousness in themselves reduce to neural networks alone; rather, neural networks and their filtering and representing function in relation to conscious experience also deserve explanation and description as part of the larger picture, where the view of "inner" and "outer" extensions of rationals is becoming less dualistic, and neural networks can't be said to be an absolute requirement for sentience and conscience (cf. plants, mushrooms, headless chickens, etc.).
A theory of "mental representations" has a more natural place at the level of neural networks than at the level of space-time sheets and the undivided whole of sentience-consciousness, which has more of a "map=landscape" character than that of a representation of externalized objects.
At 8:15 PM,
[email protected] said...
To Anonymous:
I see this somewhat differently.
When one thinks seriously about physics and consciousness, one sooner or later starts to talk about ontology, even if one is a physicist;-). In my case the new ontology emerged gradually: space-time as a 4-surface of imbedding space, WCW, subjective existence as quantum jumps between objective existences, zero energy ontology, hierarchy of Planck constants, and p-adic physics as the physics of cognition (note that I have now dropped "and intention";-)).
Even worse;-), if one is seriously building what one might call unification, eventually also the word "epistemology" creeps in, and one must seriously ponder what one can know. The Uncertainty Principle belongs to the discoveries of physical epistemology.
These two words are practically all that I know about philosophy as an academic discipline. Therefore I do not know the name of the branch of philosophy studying what can be not only known but also expressed using language, that is, mathematically. Let us temporarily call this field of philosophy X-ology.
In the TGD framework the first X-logical discovery is the realisation that although the space-time surface is 4-D, it seems that all that we can say mathematically about quantum physics is expressible in terms of string world sheets and partonic 2-surfaces. Strong form of holography is another manner of saying this, or strong form of general coordinate invariance, which explains it. Note that this is much more than string models can say: there one would have only string world sheets.
One can have space-time surfaces full of string world sheets and get an arbitrarily precise description of space-time, but it is of course not the same space-time anymore.
What one can know and say defines what I call cognitive representations. An X-ologist would say that these string world sheets carry fermions serving as correlates of Boolean cognition. They reside in the intersection of realities and p-adicities, where also life and love as negentropic entanglement reside. These cognitive representations would define all that the universe can say about itself using language.
This is a very practical statement. In principle one can build quantum physics predictions by using only the data associated with partonic 2-surfaces and string world sheets. Even better, conformal equivalence classes are enough. The infinite-D WCW is effectively reduced to finite-D moduli spaces. In the case of string world sheets, just the positions of the edges and the angles at them! This is a gigantic simplification and gives realistic hopes of a calculable theory (that is, one allowing us to express mathematically what is known).
At 8:20 PM,
[email protected] said...
To Anonymous:
Conscious experience can of course contain more than the basically 2-D representations. For instance, one can wonder whether sensory experience really reduces to string world sheets. Maybe only the sensory representations defining percepts do so. This means the division of a percept into objects with names. This is what the brain is doing.
At 8:31 PM, [email protected] said...
One can of course postulate theories in which arbitrarily high dimensions appear. One can look at whether they provide a better model of the world: string models are one such approach but were not a success. String models produce all kinds of nice things (bringing to my mind the mouse as a tailor) but not the physics we know is there.
Some people argue that they have experiences about higher dimensions, but they are
not mathematicians, and can mean with dimensions something totally different than
mathematician.
Emulation of higher-dimensional structures is of course possible in the TGD Universe. For instance, the union of n 2-surfaces would be purely formally a representation of a region of 2n-dimensional space. I wrote years ago about the idea that the TGD Universe - or any Universe - must be able to emulate higher-dimensional manifolds.
I see too many purely mathematical arguments demonstrating that TGD is unique. And best of all, TGD is consistent with the standard model and explains its symmetries. That sensory and cognitive representations are 2-D is a very powerful testable prediction of TGD as I understand it. I have no intention to cut anything out of our experience.
At 3:36 AM, Anonymous said...
Very good. Most obviously a written linguistic description of an experience is as such a 2D representation, as are also pages of mathematical language. So as you said, the X-ology of what is/happens and what can be known (e.g. observables) is called representation theory (http://en.wikipedia.org/wiki/Representation_theory). And as theoretical representations nowadays are usually done by writing linear vectors on a page forming letters, numbers, words etc., they can be defined as intentional filtering of ontology and epistemology into 2D language. By definition a re-presentation is not identical with the experience, but representation theory as a self-aware theory is a big step forward.
Now, perhaps we can approach e.g. theory - and practice - of empathy with clearer
minds, being more fully aware that we are limited to discussing experiencing empathy by
filtering into 2D-representations.
There is a very serious joint research program into empathy, involving the Dalai Lama, other Buddhist monks and numerous scientists, and youtube offers plenty of discussions (Mind and Life dialogues, e.g. here with Zeilinger: https://www.youtube.com/watch?v=ALGKIcfXxcM).
Naturally, representations are not only passive detached observations; they also have their active participatory aspect in the whole of Creation - cf. observation events. One of the key findings of the study group is that empathy can be taught. Or, if we give negentropic empathy a foundational omnipresent ontological status, education - also in the form of participating 2D representations - can help to remove emotional and conscious filters that cover and hinder active and conscious empathy, such as us-against-them "tribal" entanglements with divisive border lines (cf. closed borders between integers of normative integer theory vs. open border-zones like rational numbers; btw. I just noticed that Spinozan Number Theory has open borders already at the level of natural numbers, and maybe there's a link to the packing problem of tetrahedrons: http://www.ams.org/notices/201211/rtx121101540p.pdf).
Can we do a good deed in the form of a mathematical representation theory showing what kinds of entropies, entanglements and observables we are talking about when we talk about us-against-them filters?
At 3:51 AM, Anonymous said...
PS: Representations can also have a Quantum Zeno effect, when impatient intention keeps mapping the representation again and again over the wished-for process-event (no-cloning theorem?), as well as their better functioning aspects.
04/13/2015 - http://matpitka.blogspot.com/2015/04/manifest-unitarity-and-informationloss.html#comments
Manifest unitarity and information loss in gravitational collapse
There was a guest posting in the blog of Lubos by Prof. Dejan Stojkovic from Buffalo University.
The title of the post was Manifest unitarity and information loss in gravitational collapse. It explained
the contents of the article Radiation from a collapsing object is manifestly unitary by Stojkovic and
Saini.
The posting
The posting describes calculations carried out for a collapsing spherical mass shell, whose radius approaches its own Schwarzschild radius. The metric outside the shell, with radius larger than rS, is assumed to be the Schwarzschild metric. In the interior of the shell the metric would be the Minkowski metric. The system considered is a second quantized massless scalar field. One can calculate the Hamiltonian of the radiation field in terms of eigenmodes of the kinetic and potential parts, and by canonical quantization the Schrödinger equation for the eigenmodes reduces to that for a harmonic oscillator with time dependent frequency. Solutions can be developed in terms of solutions of the time-independent harmonic oscillator. The average value of the photon number turns out to approach that associated with a thermal distribution, irrespective of initial values, in the limit when the radius of the shell approaches its blackhole radius. The temperature is the Hawking temperature. This is of course a highly interesting result and should reflect the fact that the Minkowski vacuum looks, from the point of view of an accelerated system, to be in thermal equilibrium. Manifest unitarity is just what one expects.
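As a quick sanity check on the scales involved, one can evaluate the standard textbook Hawking temperature T_H = hbar c^3/(8 pi G M k_B) and the Bose-Einstein mean occupation of a mode at that temperature. This is my own back-of-the-envelope illustration, not part of the calculation by Stojkovic and Saini:

```python
import math

hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2
kB = 1.380649e-23       # J/K
M_sun = 1.989e30        # kg

def hawking_temperature(M):
    """Hawking temperature (kelvin) of a blackhole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def thermal_photon_number(omega, T):
    """Bose-Einstein mean occupation of a mode with angular frequency omega (rad/s) at temperature T (K)."""
    return 1.0 / math.expm1(hbar * omega / (kB * T))

T_H = hawking_temperature(M_sun)
print(f"T_H(solar mass) ~ {T_H:.2e} K")  # of order 1e-7 K: astrophysical blackholes are very cold
```

The tiny value for a solar mass makes concrete why Hawking radiation from astrophysical blackholes is unobservable against the cosmic microwave background.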
The authors assign a density matrix to the state in the harmonic oscillator basis. Since the state is pure, the density matrix is just a projector to the quantum state, since the components of the density matrix are products of the coefficients characterizing the state in the oscillator basis (there are a couple of typos in the formulas; the reader will certainly notice them). In Hawking's original argument the non-diagonal cross terms are neglected and one obtains a non-pure density matrix. The authors' approach is of course correct since they consider only the situation before the formation of the horizon. Hawking considers the situation after the formation of the horizon and assumes some unspecified process taking the non-diagonal components of the density matrix to zero. This decoherence hypothesis is one of the strange figments of insane theoretical imagination which plague present-day theoretical physics.
The authors mention as a criterion for the purity of the state the condition that the square of the density matrix has trace equal to one. This states that the density matrix is an N-dimensional projector. The criterion alone does not however guarantee the purity of the state for N > 1. This is clear from the fact that the entropy is in this case non-vanishing and equal to log(N). I notice this because negentropic entanglement in the TGD framework corresponds to the situation in which the entanglement matrix is proportional to a unit matrix (that is, a projector). For this kind of state the number theoretic counterpart of Shannon entropy makes sense and gives negative entropy, meaning that the entanglement carries information. Note that unitary 2-body entanglement gives rise to negentropic entanglement.
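The distinction between purity and a projector-proportional density matrix can be made concrete numerically (my illustration, not from the article): a pure state has Tr(rho^2) = 1 and vanishing von Neumann entropy, while the maximally mixed state rho = I/N in N dimensions has Tr(rho^2) = 1/N and entropy log N:

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): equals 1 exactly for a pure state."""
    return float(np.trace(rho @ rho).real)

def von_neumann_entropy(rho):
    """S(rho) = -sum_i p_i log p_i over the nonzero eigenvalues (natural log)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Pure state: rho = |psi><psi|
psi = np.array([3.0, 4.0]) / 5.0
rho_pure = np.outer(psi, psi)

# Density matrix proportional to an N-dimensional projector (maximally mixed in N dims)
N = 4
rho_proj = np.eye(N) / N

print(purity(rho_pure), von_neumann_entropy(rho_pure))  # 1.0 and 0.0
print(purity(rho_proj), von_neumann_entropy(rho_proj))  # 1/N = 0.25 and log 4
```

The example shows why Tr(rho^2) alone distinguishes the two cases only if one checks that its value is exactly one.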
The authors inform that Hawking used Bogoliubov transformations between the initial Minkowski vacuum and the final Schwarzschild vacuum at the end of the collapse, which looks like a thermal distribution with Hawking temperature from the Minkowski space point of view. I think that here comes an essential physical point. The question is about the relationship between two observers - one might call them the observer falling into the blackhole and the observer far away approximating space-time with Minkowski space. If the latter observer traces out the degrees of freedom associated with the region below the horizon, the outcome is a genuine density matrix and information loss. This point is not discussed in the article, and the authors inform that their next project is to look at the situation after the spherical shell has reached the Schwarzschild radius and the horizon is born. One might say that all that is done concerns the system before the formation of the blackhole (if it is formed at all!).
Several poorly defined notions arise when one tries to interpret the results of the calculation.
1. What do we mean by observer? What do we mean by information? For instance, the authors define information as the difference between maximum entropy and real entropy. Is this definition just an ad hoc manner to get some well-defined number christened as information? Can we really reduce the notion of information to thermodynamics? Shouldn't we be very careful in distinguishing between thermodynamical entropy and entanglement entropy? A sub-system possessing entanglement entropy with its complement can be purified by seeing it as a part of the entire system. This entropy relates to a pair of systems. Thermal entropy can be naturally assigned to an average representative of an ensemble and is a single particle observable.
2. The second list of questions relates to quantum gravitation. Is blackhole really a relevant notion, or just a singular outcome of a theory exceeding its limits? Does something deserving to be called blackhole collapse really occur? Is quantum theory in its recent form enough to describe what happens in this process or its analog? Do we really understand the quantal description of gravitational binding?
What TGD can say about blackholes
The usual objection to the string theory hegemony is that there are no competing scenarios, so that superstrings are the only "known" interesting approach to quantum gravitation (knowing in the academic sense is not at all the same thing as knowing in the naive layman sense and involves a lot of sociological factors transforming actual knowing into sociological unknowing: in some situations these sociological factors can make a scientist practically blind, deaf, and - as it looks - brainless!). I dare however claim that TGD represents an approach which leads to a new vision challenging a long list of cherished notions assigned to blackholes.
To my view, blackhole science crystallizes a huge amount of conceptual sloppiness. People can calculate but are not so good at conceptualizing. Therefore one must start the conceptual cleaning from fundamental notions such as information, the notions of time (experienced and geometric), observer, etc. In the attempt to develop TGD from a bundle of ideas to a real theory I have been forced to carry out this kind of distillation, and the following tries to summarize the outcome.
1. TGD provides a fundamental description of the notions of observer and information. Observer is replaced with "self", identified in ZEO as a sequence of quantum jumps occurring at the same boundary of CD and leaving it and the part of the zero energy state at it fixed, whereas the second boundary of CD is delocalized and the average distance between the tips of the CDs in the superposition increases: this gives rise to the experienced flow of time and its correlation with the flow of geometric time. The average size of the CDs simply increases and this means that the experienced geometric time increases. Self "dies" as the first state function reduction to the opposite boundary takes place and a new self assignable to it is born.
2. Negentropy Maximization Principle favors the generation of entanglement negentropy. For states with a projection operator as density matrix, the number theoretic negentropy is possible for primes dividing the dimension N of the projection and is maximal for the largest power of a prime factor of N. The second law is replaced with its opposite, but for negentropy, which is a two-particle observable rather than a single particle observable like thermodynamical entropy. The second law follows at the ensemble level from the non-determinism of the state function reduction alone.
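A minimal sketch of the number theoretic negentropy just described, under my reading of the prescription: for a projector density matrix with N equal probabilities 1/N, replacing |p_i| by the p-adic norm |1/N|_p = p^k (where p^k is the exact power of p dividing N) in the Shannon formula gives S_p = -sum_i (1/N) log|1/N|_p = -k log p, which is negative; the negentropy k log p is maximal for the largest prime power dividing N:

```python
import math

def padic_valuation(N, p):
    """Largest k such that p^k divides N."""
    k = 0
    while N % p == 0:
        N //= p
        k += 1
    return k

def padic_negentropy(N, p):
    """Negentropy -S_p = k log p for a projector density matrix
    with N equal probabilities 1/N, using the p-adic norm |1/N|_p = p^k."""
    return padic_valuation(N, p) * math.log(p)

def best_prime(N):
    """Prime divisor p of N maximizing the negentropy, i.e. maximizing p^k || N."""
    primes = [p for p in range(2, N + 1)
              if N % p == 0 and all(p % q for q in range(2, p))]
    return max(primes, key=lambda p: padic_negentropy(N, p))

N = 12  # = 2^2 * 3
for p in (2, 3):
    print(p, padic_negentropy(N, p))  # 2 -> 2 log 2, 3 -> log 3
print(best_prime(N))                  # 2, since 2^2 = 4 beats 3^1 = 3
```

For N = 12 the prime 2 wins because 2^2 divides 12, even though 3 is the larger prime; this is the "largest power of a prime factor" criterion in action.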
The notions related to blackholes are also in need of profound reconsideration.
1. Blackhole disappears in the TGD framework as a fundamental object and is replaced by a space-time region having Euclidian signature of the induced metric, identifiable as a wormhole contact and defining a line of a generalized Feynman diagram (here "Feynman" could be replaced with "twistor" or "Yangian", something even more appropriate). The blackhole horizon is replaced by the 3-D light-like region defining the orbit of the wormhole throat, having degenerate metric in the 4-D sense with signature (0,-1,-1,-1). The orbits of wormhole throats are carriers of various quantum numbers, and the sizes of their M4 projections are of order CP2 size in elementary particle scales. This is why I refer to these regions also as light-like parton orbits. The wormhole contacts involved connect two space-time sheets with Minkowskian signature, and stability requires that the wormhole contacts carry monopole magnetic flux. This demands at least two wormhole contacts to get closed flux lines. Elementary particles are this kind of pairs, but also multiples are possible and valence quarks in baryons could be one example.
2. The connection with the GRT picture could emerge as follows. The radial component of the Reissner-Nordström metric associated with electric charge can be deformed slightly at the horizon to transform the horizon to a light-like surface. In the deep interior, CP2 would provide a gravitational instanton solution to the Maxwell-Einstein system with cosmological constant, thus having Euclidian metric. This is the nearest to the TGD description that one can get within the GRT framework, obtained from TGD in asymptotic regions by replacing the many-sheeted space-time with a slightly deformed region of Minkowski space and summing the gravitational fields of the sheets to get the gravitational field of the M4 region.
All physical systems have space-time sheets with Euclidian signature analogous to a blackhole. The analog of the blackhole horizon provides a very general definition of "elementary particle".
3. Strong form of general coordinate invariance is a central piece of TGD and implies the strong form of holography, stating that partonic 2-surfaces and their 4-D tangent space data should be enough to code for quantum physics. The magnetic flux tubes and the fermionic strings assignable to them are however essential. The localization of the induced spinor fields to string world sheets follows from the well-definedness of em charge, and also from number theoretical arguments as well as the generalization of twistorialization from D=4 to D=8.
One also ends up with the analog of AdS/CFT duality applying to the generalization of conformal invariance in the TGD framework. This duality states that one can describe the physics either in terms of Kähler action and related bosonic data, or in terms of Kähler-Dirac action and related data. In particular, Kähler action is expressible as string world sheet area in the effective metric defined by the Kähler-Dirac gamma matrices. Furthermore, gravitational binding is describable by strings connecting partonic 2-surfaces. The hierarchy of Planck constants is absolutely essential for the description of gravitationally bound states in terms of gravitational quantum coherence in macroscopic scales. The proportionality of the string area in the effective metric to 1/heff2, with heff = n×h = hgr = GMm/v0, is absolutely essential for achieving this.
If the stringy action were the ordinary area of the string world sheet as in string models, only gravitational bound states with size of order Planck length would be possible. Hence TGD forces one to say that superstring models are on a completely wrong track concerning the quantum description of gravitation. Even standard quantum theory lacks something fundamental required by this goal. This something fundamental relates directly to the mathematics of extended superconformal invariance: these algebras allow an infinite number of fractal inclusion hierarchies in which the algebras are isomorphic with each other. This allows one to realize infinite hierarchies of quantum criticalities. As heff increases, some degrees of freedom are reduced from critical gauge degrees of freedom to genuine dynamical degrees of freedom, but the system is still critical, albeit in a longer scale.
4. A naive model for the TGD analog of a blackhole is as a macroscopic wormhole contact surrounded by particle wormhole contacts, with throats connected to the large wormhole contact by flux tubes and strings. The macroscopic wormhole contact would carry magnetic charge equal to the sum of those associated with the elementary particle wormhole throats.
5. What about blackhole collapse and blackhole evaporation if blackholes are replaced with wormhole contacts with Euclidian signature of metric? Do they have any counterparts in TGD? Maybe! Any phase transition increasing heff=hgr would occur spontaneously as a transition to lower criticality and could be interpreted as the analog of blackhole evaporation. The gravitationally bound object would just increase in size. I have proposed that this phase transition has happened for Earth (Cambrian explosion) and increased its radius by a factor 2. This would explain the strange finding that the continents seem to fit nicely together if the radius of Earth is one half of its recent value. These phase transitions would be the quantum counterpart of smooth classical cosmic expansion.
The phase transition reducing heff would not occur spontaneously, and in living systems metabolic energy would be needed to drive it. Indeed, from the condition that heff=hgr= GMm/v0 increases as M and v0 change, also the gravitational Compton length Lgr=hgr/m= GM/v0, defining the size scale of the gravitational object, increases, so that a spontaneous increase of hgr means an increase of size.
Does TGD predict any process resembling blackhole collapse? In Zero Energy Ontology (ZEO), state function reductions occurring at the same boundary of the causal diamond (CD) define the notion of self possessing an arrow of time. The first quantum state function reduction at the opposite boundary is eventually forced by Negentropy Maximization Principle (NMP) and induces a reversal of geometric time. The expansion of an object with a reversed arrow of geometric time with respect to the observer looks like collapse. This is indeed what the geometry of the causal diamond suggests.
6. The role of strings (and the magnetic flux tubes with which they are associated) in the description of gravitational binding (and possibly also other kinds of binding) is crucial in the TGD framework. They are present in arbitrarily long length scales, since the value of the gravitational Planck constant heff = hgr = GMm/v0, where v0 (v0/c<1) has dimensions of velocity, can have huge values as compared with those of the ordinary Planck constant. This implies macroscopic quantum gravitational coherence, and the fountain effect of superfluidity could be seen as an example of this.
That the presence of flux tubes and strings serves as a correlate for quantum entanglement present in all scales is highly suggestive. This entanglement could be negentropic and by NMP could be transferred but not destroyed. The information would be coded into the relationship between two gravitationally bound systems, and instead of entropy one would have enormous negentropy resources. Whether this information can be made conscious is a fascinating problem. Could one generalize the interaction-free quantum measurement so that it would give information about this entanglement? Or could the transfer of this information make it conscious?
The superstring camp has also become aware of the possibility of geometric and topological correlates of entanglement. The GRT-based proposal relies on wormhole connections. The much older TGD-based proposal, applied systematically in quantum biology and the TGD-inspired theory of consciousness, identifies magnetic flux tubes and the associated fermionic string world sheets as correlates of negentropic entanglement.
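The gravitational Compton length Lgr = hgr/m = GM/v0 appearing above depends only on the large mass M and the velocity parameter v0, not on the small mass m. A small numerical illustration (mine, not from the post): restoring c, which the text sets to 1, the length is Lgr = GM/(v0 c) in SI units; the value v0 used below is purely illustrative (values of order 10^-4 c are often quoted for the solar system):

```python
G = 6.67430e-11   # m^3 kg^-1 s^-2
c = 2.99792458e8  # m/s

def gravitational_compton_length(M, v0):
    """L_gr = GM/(v0 c) in SI units; the text writes L_gr = GM/v0 with c = 1.
    Independent of the small mass m, since h_gr = GMm/v0 is proportional to m."""
    return G * M / (v0 * c)

M_sun = 1.989e30   # kg
v0 = 1e-4 * c      # illustrative value of the velocity parameter
L = gravitational_compton_length(M_sun, v0)
print(f"L_gr ~ {L:.3e} m")  # of order 1e7 m, i.e. a macroscopic, astrophysical scale
```

The point of the exercise: unlike the ordinary Compton length hbar/(mc), this scale is macroscopic, which is what the claim about macroscopic quantum gravitational coherence rests on.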
posted by Matti Pitkanen @ 4:30 AM
13 Comments:
At 1:23 AM, Ulla said...
https://www.youtube.com/watch?v=yMRYZMv0jRE&feature=youtu.be
See how he describes the hierarchy between subworlds as a 'jump' between BH horizons... and not a word about other dimensions, nor strings...
At 8:32 PM, [email protected] said...
Thank you. I hope that I find time to listen.
At 3:37 PM, Anonymous said...
Nice link, Ulla. Lenny seems to be looking for quantum gravity in terms of entanglement:
“When the black holes were entangled, then pulled apart, the theorists found that what
emerged was a wormhole — a tunnel through space-time that is thought to be held
together by gravity. The idea seemed to suggest that, in the case of wormholes, gravity
emerges from the more fundamental phenomenon of entangled black holes,”
http://www.universetoday.com/106968/could-particle-spooky-action-define-the-nature-of-gravity/
What does TGD predict about entropic and/or negentropic entanglement as(?) quantum gravity, and is it possible to derive GRT, or what is relevant and worth keeping in GRT, from such an idea?
Should we start from the idea that "nonlocal" (different hbar values) universal entangled unity = universal gravity field, and local non-entangled phenomena are context dependent measurement exceptions to this state? What about the "monogamy" coupling of entangled states that was derived? How is that rule derived, and does it hold only for entropic entanglement?
At 8:16 PM, [email protected] said...
To Anonymous,
Maldacena, Susskind, and others are following TGD in that they suggest that wormholes as long tube-like structures connecting distant space-time regions (blackholes) are correlates for entanglement.
In TGD the situation is however in many respects different, although the gurus have caught the basic idea of TGD correctly: entanglement has topological space-time correlates. They have still to discover that also particles have these correlates and that Feynman diagrammatics has space-time correlates. The traffic in ResearchGate suggests that these discoveries will be made soon;-)
Colleagues of course cannot suddenly start to talk about many-sheeted space-time and must transform the ideas to the language of GRT. This is only mimicry, but the best they can achieve.
There are many new elements:
* Instead of tube-like wormholes, magnetic flux tubes and associated fermionic strings serve as correlates for entanglement and are responsible for the formation of gravitational bound states.
* Another new element is the two-sheetedness of the basic structures: always two sheets connected by wormhole contacts are involved, and the flux tubes carry the monopole charges needed to stabilise the wormhole contacts. And always pairs of wormhole contacts.
* A third new element is large h_eff = n*h = h_gr, making possible quantum gravitational binding and astroscopic quantum coherence.
* A fourth new element is the huge generalised conformal symmetries and the breaking hierarchy for them (hierarchy of quantum criticalities).
* A fifth new element is negentropic entanglement: state function reduction can take place to an n-D subspace such that the density matrix is an n-D projector. And of course Negentropy Maximization Principle is there too.
* A sixth new element: in TGD, wormholes are associated with wormhole contacts, which have short dimension in CP_2 directions (CP_2 size, about 10^4 Planck lengths). They can have arbitrarily large M^4 projection. What is new is that they have Euclidian induced metric and serve as space-time sheets assignable to physical objects of their size. They replace blackholes. These wormhole contacts are not the wormhole tubes of GRT along which the spaceships travel in science fiction. One can forget space-time travel now.
* A seventh new element: in TGD, gravitational binding is mediated by fermion-carrying strings and flux tubes connecting partonic 2-surfaces. It would be an exaggeration to say that gravitation emerges from entanglement. Also here the gurus are transforming TGD based ideas to the GRT framework.
The problem is that the hegemony does not have any theory and is used to expressing these ideas - even the idea about emergent gravity - using notions like blackhole, which are an outcome of a theory of gravity. Things would become simple by accepting TGD as a starting point, but this is not possible until the proponents….. as Bohr said;-).
At 11:26 PM, Anonymous said...
The Social Construction of Reality
"The Social Construction of Reality" on @Wikipedia: https://en.wikipedia.org/wiki/The_Social_Construction_of_Reality?source=app *grabs the popcorn*
At 5:53 PM, Anonymous said...
Re the Social Construction of Reality, also this:
Surprising Properties of Non-Archimedean Field Extensions of the Real Numbers
http://arxiv.org/pdf/0911.4824v2.pdf
At 6:20 PM, Anonymous said...
And this:
http://earthweareone.com/alien-message-to-mankind-do-you-wish-that-we-show-up/
https://www.youtube.com/watch?v=Uj-kHy5b5mo
At 8:00 PM, [email protected] said...
I have written a TGD-inspired commentary on Elemer Rosinger's idea about extension of reals compared to the hierarchy of infinite primes. See http://tgdtheory.fi/pubic_html/tgdnumber/tgdnumber.html#infsur .
At 6:20 AM, Anonymous said...
Link didn't work.
Anyhow, the basic philosophy is becoming clearer: a trinity of two entangled complementary pairs: finite-infinite and infinitesimal-infinitely large.
That trinity brings to mind the theory of games, which is more general than surreal numbers. Go is based on very simple rules, but when you bring in the intention of winning the game (controlling more area than the opponent in the Archimedean sense), the psychological aspect generates the theory of non-Archimedean surreal numbers: http://scienceblogs.com/goodmath/2007/04/01/post-3/ .
Even more generally, win-lose games are just a subset of the more general anatomy of social games: win-win, lose-lose and lose-win. And whether we speak about representative dictatorship of majority, nation state competition for finite resources, nuclear MAD, theories competing for social acceptance, etc. win-lose games, they are much more like lose-lose games than win-win games.
So, what if we apply win-win games as the conscious ethical constraint on playing games of number theories, definitions of (neg)entropy and other theory formulation?
At 6:12 AM, [email protected] said...
To Anonymous:
Amazing that a model for games, something which is very finite, leads to surreal numbers! Games look like a non-Archimedean variant of surreal numbers.
It is a pity that I do not have the rules for constructing surreals in mind.
I however wonder whether there could be a connection between infinite primes (integers/rationals), adeles, and these mysterious games. Infinite primes are finite in any p-adic norm and the norm is one.
At 7:32 PM, Anonymous said...
Yes, Go has very simple rules, and includes also a version of the no-cloning theorem, but as said, surreal numbers did not arise from the rules of Go, but from the calculations for a winning end-game strategy in terms of a win-lose game. Metabolic creatures like to have their tummy full, and the human brain is a mighty hunting weapon.
At 8:36 PM, [email protected] said...
The basic objects consist of objects, now (L,R) pairs. This sounds very fractal. Probably this aspect has been considered.
At 4:03 AM, Anonymous said...
Yeah, physics and theory formulation is a game that you play without knowing the rules, and the purpose of the game is to find the rules of the game you play. In that sense it can be a win-win game. Of course, the Rule-Maker / Game-Master can be a Heyoka-like joker, who gives the player a new and even more complex game to play if and when the rules have been found (cf. Newton->Einstein->Quantum->Unified->etc.). The human brain is a hunting weapon that likes challenges...
03/31/2015 - http://matpitka.blogspot.com/2015/03/links-to-latest-progress-in-tgd.html#comments
Links to the latest progress in TGD
During the last years, the understanding of the mathematical aspects of TGD and of its connection with the experimental world has developed rapidly. The material is scattered over 17 books about TGD and its applications, and therefore it seems appropriate to give an overall view about the developments as links (mostly) to blog postings containing links to the homepage.
In the article The latest progress in TGD, I list blog links and also some homepage links to Quantum TGD and its applications to physics, to biology and to consciousness theory, with the intention to give an overall view about the development of the ideas (I did not receive the final form of TGD from heaven and have been forced to work hard for almost 4 decades!).
posted by Matti Pitkanen @ 12:19 AM
16 Comments:
At 11:16 PM, Leo Vuyk said...
Hi Matti,
Congratulations on this incredibly intelligent work. Could you also make an extract for quick scanning?
I did it for my own ideas like this. The result I try to describe:
1: Black holes are the same as Dark Matter, they all consume photons, even gravitons and
the Higgs field, but REPEL Fermions due to their propeller shape. They produce electric
charged plasma.
2: Dark Energy is the oscillating (Casimir) energy of the Higgs Field equipped with a tetrahedron lattice structure with variable Planck length.
3: Quantum Gravity = Dual Push gravity= Attraction (Higgs-Casimir opposing Graviton
push).
4: The Big Bang is a Splitting dark matter Big Bang Black Hole (BBBH), splitting into
smaller Primordial Big Bang Spinters (PBBS) forming the Fractalic Lyman Alpha forest
and evaporating partly into a zero mass energetic oscillating Higgs particle based Higgs
field.
5: Dual PBBSs hotspots, produce central plasma concentration in electric Herbig Haro
systems as a base for star formation in open star clusters as a start for Spiral Galaxies.
6: Spiral Galaxies will keep both Primordial Dark Matter Black Holes as Galaxy Anchor
Black Holes (GABHs) at long distance.
7: After galaxy merging, these GABHs are the origin of galaxy- and magnetic-field complexity and distant dwarf galaxies.
8: Black Holes produce Plasma direct out of the Higgs field because two Higgs particles
are convertible into symmetric electron and positron (or even dual quark-) propellers (by
BH horizon fluctuations).
9: The chirality of the (spiralling) vacuum lattice is the origin of our material universe (propeller shaped positrons merge preferentially first with gluons to form (u) quarks to form Hydrogen).
10: The first Supernovas produce medium sized Black Holes as the base for secondary
Herbig Haro systems and open star clusters.
11: ALL Dark Matter Black Holes are supposed to be CHARGE SEPARATORS with
internal positive charge and an external globular shell of negative charged Quark electron
plasma.
12: The lightspeed is related to gravity fields like the earth with long extinction distances
to adapt with the solar gravity field.
13. Quantum FFF Theory states that the raspberry shaped multiverse is symmetric and
instant entangled down to the smallest quantum level. Also down to living and dying
CATS in BOXES.
14 Large Primordial Big Bang Spinters (PBBS) are responsible for the creation of the
Lyman Alpha forest structure and first spiral galaxy forming of the universe, but they
seem to be also responsible for the implosion of the universe at the end in the form of
Galaxy Anchor Black Holes (GABHs) located mainly outside galaxies. see: (Quasisoft
Chandra sources)
If our material universes has a chiral oscillating Higgs field, then our material Right
Handed DNA helix molecule could be explained.
However, it also suggests that in our opposing ANTI-MATERIAL multiverse neighbour universe the DNA helix should have a LEFT HANDED spiral.
According to Max Tegmark: in an entangled multiverse we may ask: is there a COPY PERSON over there, who is reading the same lines as I do?
If this COPY person is indeed living over there, then even our consciousness should be shared in a sort of DEMOCRATIC form. Then we are not alone with our thoughts and doubts; see: Democratic Free Will in the instant Entangled Multiverse.
At 1:44 AM, [email protected] said...
One can imagine that there is a pool of standard mental images shared by many conscious entities. Selves could share mental images by entanglement of the subselves defining the mental images.
At 2:05 AM, Leo Vuyk said...
For us humans it is hard to imagine that we are entangled over the edge of our own
universe (behind the horizon) and as such could have to deal with our opposite self at very
long distance. http://vixra.org/pdf/1401.0071v2.pdf
At 7:16 AM,
[email protected] said...
In zero energy ontology, one ends up with a strange problem. State function reductions
occur as sequences at either boundary of the causal diamond (CD). When the first reduction
at the opposite boundary happens, the self dies and a new one is born, with an opposite
arrow of time. The simplest guess is that our lifetime defines the increase of the size scale of
CD during the life cycle: something like 50-100 light years. Do I have a shadow self in my
geometric past at this distance? Some solar system? Better to add at least one ;-)!
At 8:15 AM,
Leo Vuyk said...
In a CPT symmetric bubble multiverse, each wave function collapse - and thus each human
choice - should suffer instant entanglement over much more than 100 light years. (Time
symmetry means only that the clock is running in the opposite way.)
At 4:16 PM, Anonymous said...
Ancient yogic texts advise meditation near waterfalls, rivers, and lakes. Carl Jung spoke
for many in his description of lake scenery. "The lake stretched away and away in the
distance. This expanse of water was an inconceivable pleasure to me, an incomparable
splendour. At that time the idea became fixed in my mind that I must live near a lake;
without water, I thought, nobody could live at all." The pleasure we derive from showers,
saunas, swimming pools, ocean views and swimming in the sea testifies to the deep
affinity that we feel for water. Perhaps an echo of our amniotic state in our mother's
womb, and possibly related also to the image of the unconscious itself as an unfathomable
ocean. --George, from Learn to Relax --Stephen
At 5:43 PM, Anonymous said...
https://en.wikipedia.org/wiki/Parton_%28particle_physics%29
this wikipedia article seems pretty decent to me, what do y'all think?
A parton distribution function within so-called collinear factorization is defined as the
probability density for finding a particle with a certain longitudinal momentum fraction x
at resolution scale Q^2. Because of the inherent non-perturbative nature of partons, which
cannot be observed as free particles, parton densities cannot be fully obtained by
perturbative QCD. Within QCD one can, however, study the variation of the parton density
with the resolution scale provided by an external probe. Such a scale is for instance
provided by a virtual photon with virtuality Q^2 or by a jet. Due to the limitations in present
lattice QCD calculations, the known parton distribution functions are instead obtained by
fitting observables to experimental data.
Experimentally determined parton distribution functions are available from various groups
worldwide. The major unpolarized data sets are ...
--Stephen
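The fitting idea described in the quote can be illustrated with a toy example (my own sketch; the functional form and parameter values are illustrative assumptions, not any fitting group's actual parametrization). One posits a shape such as f(x) = N x^a (1-x)^b and constrains it, e.g. through the momentum fraction ∫ x f(x) dx over (0,1), which for this form equals N·B(a+2, b+1):

```python
import math

# Toy parton distribution f(x) = N * x**a * (1 - x)**b (illustrative only).
# The momentum fraction carried is the integral of x*f(x) over (0, 1),
# which equals N * B(a+2, b+1) in terms of the Euler Beta function.

def momentum_fraction(N, a, b, steps=200_000):
    """Midpoint-rule estimate of the integral of x*f(x) on (0, 1)."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x * N * x**a * (1.0 - x)**b
    return total * h

def beta(p, q):
    """Exact Euler Beta function, for cross-checking the numerics."""
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)
```

For N=1, a=-0.5, b=3 the numerical estimate agrees with B(1.5, 4); a real global fit would instead adjust (N, a, b) against measured observables and evolve the result in Q^2.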
At 10:52 PM,
Leo Vuyk said...
My ultimate conclusion is: the Big Bang did not instantly produce all the Fermions in the
universes. Even now there is reason to assume that lots of Fermions are produced - in the
form of charged plasma globules - by all BHs (see ball lightning, fig. B), even by the largest
primordial Big Bang splinters located outside large galaxies.
https://www.academia.edu/11865245/New_Anode_Black_holes_are_Plasma_Producing_Charge_Splitters_by_a_Fermion_Repelling_Horizon
At 9:07 PM, [email protected] said...
This would be quite a different view from the standard one, in which inflaton fields decayed
to elementary particles including fermions. My own view replaces the inflaton field with
cosmic strings, whose magnetic energy - replacing the energy of the inflaton fields - decays.
The basic objection against your scenario is that horrible temperatures are required: a
temperature considerably higher than the rest mass (0.5 MeV for electrons). How could
these black holes be created if they are ball lightnings?
I understand that by fermion number conservation, equal amounts of antifermions
would be produced at the same time. Annihilations to photons with huge energies? Or
does the fermion or antifermion remain inside the black hole?
In the case of ball lightnings, electrons and gammas in the MeV range have been observed.
They should not be there, because the electrons should dissipate their energy in the
atmosphere. The TGD explanation is that they come as dark electrons along magnetic flux
tubes and accelerate freely in the voltage involved. I expect the production of
fermion-antifermion pairs from gamma rays to be rather low.
At 4:13 PM,
Leo Vuyk said...
Thanks Matti for your reaction. Perhaps you may find that information on my Flickr site:
https://www.flickr.com/photos/93308747@N05/?details=1
My approach is not math based but observation based, also for ball lightning. I am an
architect focusing on observation.
At 4:46 PM, Stephen said...
Speaking of fermions, http://arxiv.org/abs/math-ph/0505041
At 11:27 PM, [email protected] said...
Thank you. Interesting article. The determinant of an operator K(x,y) with continuous
indices, acting in the function space L^2 - now functions on the real line - is a subtle
notion. One manner to discretize it is to go to a discrete basis of eigenfunctions of some
observables.
The ordinary determinant is defined in terms of the N:th exterior power of the cotangent
space of an N-dimensional space. Now one would have an infinite exterior power of the
Hilbert space, call it Ext (in the case of WCW, of its cotangent bundle defined in terms of
WCW gamma matrices).
The single-fermion state space can be identified as L^2, and the natural identification
for Ext would be as the space of second quantized fermions. In the TGD framework the
WCW gamma matrices are indeed identified as Noether super charges of the
super-symplectic algebra, expressible in terms of second quantized fermions at string world
sheets. The WCW Kähler metric is fixed by the fermionic anticommutation relations at
string world sheets.
One cannot overemphasise the importance of the Noether super charge interpretation: this
is what leads to an explicit expression for the WCW metric, which is almost hopeless to
deduce from the defining formula in terms of the Kähler function. The Kähler function
itself is however easily expressed, and the Dirac determinant - very difficult to actually
calculate - would correspond to the exponent of the Kähler action! The analog of AdS/CFT
duality, forced by a huge generalization of conformal symmetries, would make things
calculable!
The article indeed demonstrates that one can express the norm of a local scaling
operator (selected as an example) in three manners: as a norm in the space of
configurations, as a determinant in Hilbert space in the usual manner, and as a
determinant of an operator acting in fermionic Fock space.
The first expression of the determinant is in terms of configurations of discrete points, to
which one can assign an ordinary finite-D determinant of the restriction of the projector
K(x,y). In the fermionic picture this would correspond to many-fermion states localized at
these points.
The localization of induced spinors to string world sheets indeed implies discretisation
at partonic 2-surfaces.
A very delicate point relates to the ordering of the discrete points, natural in the 1-D case.
What about ordering in the higher-D case? No natural ordering exists. In TGD one expects
a product of determinants assignable to strings connecting partonic 2-surfaces, so that
effective 1-dimensionality is satisfied for each determinant in the finite product of them
(this being basically due to the finite measurement resolution realized by the structure of
quantum states).
What about p-adicization? p-Adic numbers are not well-ordered: how to define
determinants? Is the exponent of Kähler action well-defined? Now the integral is the
problem. Even if the generalisation of AdS/CFT applies, one has a 2-D integral - could
some kind of residue integral make it well-defined? Can one define it by algebraic
continuation?
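The lack of well-ordering can be made concrete with the standard p-adic norm |n|_p = p^(-v_p(n)) (textbook definitions, nothing TGD-specific): integers highly divisible by p become small, which clashes with the ordering of the real line.

```python
# p-adic valuation v_p(n): the number of times p divides n; the p-adic
# norm of an integer is then |n|_p = p**(-v_p(n)).  Standard definitions.

def p_adic_valuation(n, p):
    if n == 0:
        raise ValueError("v_p(0) is +infinity by convention")
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_norm(n, p):
    return p ** (-p_adic_valuation(n, p))
```

In the 3-adic norm, |9|_3 = 1/9 while |2|_3 = 1, so 9 is "smaller" than 2: the real-line order does not survive the change of norm.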
At 7:07 PM, Anonymous said...
http://people.sc.fsu.edu/~jburkardt/presentations/continuation_2014_fsu.pdf
I promise I didn't stumble upon that article just because it invokes the Lambert W function
:) I only noticed that after thinking about the method for some unrelated reasons. The
p-adic stuff is baffling to me
--croω
At 3:45 PM, Anonymous said...
p-adic stuff is indeed baffling.
Norman Wildberger makes a very strong pure math case against modern "axiomatics"
here:
https://www.youtube.com/watch?v=rCDRCGjmaO8
... emphasizing that the correct place to postulate axioms is at the level of generating
natural numbers. And there are many ways to do that, not just the Peano axioms -
something very few actually know and think about, myself included.
Spinoza's Ethics is perhaps the most consistently logical treatise on the Absolute, and what
are today called 'triangular numbers' are also a very natural and Pythagorean way to
generate natural numbers. Combining these two ideas, we can start from the Absolute (A)
at the top of the triangle, and _share_ (not _divide_, in respect of Spinoza's argument
against the divisibility of the Absolute) the 'metric' attributes of A into the infinitely small
(o) and the infinitely large (ω) in the next row of the triangle. The finite attribute of A (1)
emerges on the third row, bounded by o and ω.
A
o ω
o 1 ω
o 1 1 ω ("2")
o 1 1 1 ω ("3")
etc.
In this approach we start number theory from a 2D triangle instead of a 1D line, and can
immediately see the columns (c) and rows (r) of our number theory while also maintaining
o and ω as attributes of any finite shape.
Also, it is immediately visible that if we bend the number triangle and project the
finite shapes of natural numbers starting from o, we get a spiral line with border value ω
(infinitely large completeness allowing any size), but if we project from ω, the infinitely
small (0-dimensional or dimensionless?) o's by their inherent character can't draw a
complete spiral and just "scatter" in any resolution.
This relation shows that the triangle approach is more general than the standard
atomistic and reductionistic way of generating number theory from the smallest common
denominator (cf. o = Planck scale norm) and giving o the value or symbol "1"; and we can
see that the atomistic-reductionistic number theory is just a subspace of the more
wholesome triangular number theory.
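The generation rule sketched above (apex A, then the row o ω, then one more finite unit 1 per row) is mechanical enough to write down; the function below is my own rendering of that rule, with the symbols as proposed in the comment:

```python
# Rows of the proposed number triangle: the Absolute A at the apex, the
# attribute row "o ω", and then for each natural n >= 1 a row holding n
# finite units '1' bounded by o (infinitely small) and ω (infinitely large).

def triangle_rows(n_max):
    rows = ["A", "o ω"]
    for n in range(1, n_max + 1):
        rows.append("o " + " ".join("1" * n) + " ω")
    return rows
```

triangle_rows(3) reproduces the five rows drawn above, the last row ("3") reading "o 1 1 1 ω".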
At 6:30 PM, Anonymous said...
It's quite strange feeling to address an anonymous as having a 'me' (implied self), but do
'you' or 'you all' think any of the nonsense at https://oeis.org/A095861 is in any sense
relevant to this triangular proposition? I wonder wtf 'I' was thinking! Just exploring
--crow
At 10:17 PM, Anonymous said...
Can't say yet definitively, but let's remain hopeful and trusting that there is a relevant
connection. :)
This Gödelian contemplation came up:
https://www.youtube.com/watch?v=KXTJdryRueQ
We can at this stage of development of Spinozan Number Theory (aka, between pals, croω
;)) state the basic Axiom:
Finite shapes and sizes can be neither infinitely large nor infinitely small; therefore any
finite shape or form is by definition bounded by o and ω, or respectively ω and o. We give
a finite shape or form the symbol 1 and define that o < 1 < ω and ω > 1 > o.
We can interpret no-thing or "zero/0" as the blank space where we are formulating SNT,
and at least tentatively also give the following definition: 0 < o < 1 < ω < A.
Given the above context, we could consistently, within the defined confines, substitute ω
with a larger triangle than the row where ω manifests, and o with a smaller triangle than
the row where o manifests. For example we could write the row labeled "2" in the
following way:
1st column:
A
oω
o1ω
2nd column:
1
3rd column:
1
4th column:
A
oω
o1ω
o11ω
o111ω
= "2"
Substituting the idea of 'infinitely small' with a structure containing also the symbol for
'infinitely large' may sound strange and unacceptable, but as we have so far been formally
using only the "Archimedean" operators < and >, we can continue believing in good hope
that our number theory is consistent. If someone with better thinking skills can show that
our number theory has at this stage or even earlier become inconsistent in the Gödelian
sense, as argued in the linked lecture, we might need to rethink the whole - or not, if we set
our goals lower.
03/28/2015 - http://matpitka.blogspot.com/2015/03/about-huygens-principle-andtgd.html#comments
About Huygens Principle and TGD
Stephen asked an interesting question about the relationship of Huygens principle to TGD.
The answer became too long to serve as a comment, so I decided to add it as a blog
posting.
1. Huygens Principle
Huygens principle can be assigned most naturally to classical linear wave equations with a
source term. It applies also in perturbation theory involving small non-linearities.
One can solve the d'Alembert equation Box Φ = J with a source term J by inverting the
d'Alembertian operator to get a bilocal function G(x,y), called the Green function. The
Green function is the solution generated by an infinitely strong point source J localised at a
single space-time point y - a delta function is the technical term. This description allows
one to think that every space-time point acts as a source for a spherical wave described by
the Green function. The Green function is Lorentz invariant and satisfies causality: one
selects the boundary conditions so that the signal is within the future light cone.
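For reference, the retarded Green function of the d'Alembertian in 4-D Minkowski space can be written explicitly (a textbook formula, with c = 1 and signature (+,-,-,-); not specific to TGD):

```latex
\Box \Phi = J, \qquad
\Phi(x) = \int d^4y \, G_{\mathrm{ret}}(x-y)\, J(y), \qquad
\Box_x\, G_{\mathrm{ret}}(x-y) = \delta^4(x-y),
\qquad
G_{\mathrm{ret}}(x-y) = \frac{\delta\!\left(x^0 - y^0 - |\vec{x}-\vec{y}|\right)}{4\pi\,|\vec{x}-\vec{y}|}.
```

Its support on the future light cone of the source point y is exactly the statement that every point hit by the wave acts as a source of an outgoing spherical wavelet.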
There are many kinds of Green functions, and also the Feynman propagator satisfies the
same equation. Now, however, causality in the naive sense is not exact for fermions: the
distance between points x and y can also be space-like, but the breaking of causality is
small. Feynman propagators form the basis of the QFT description, but now the situation is
changing after what Nima et al. have done to theoretical physics ;-). Twistors are a tool in
TGD too, but generalised to the 8-D case, and this generalisation - one of the big steps of
progress in TGD - shows that M^4×CP_2 is twistorially completely unique.
2. What about Huygens principle and Green functions at the level of TGD space-time?
In TGD, classical field equations are extremely non-linear. Hence perturbation theory based
on the Green function around the solution defined by canonically imbedded Minkowski
space M^4 in M^4×CP_2 fails. Even worse: the Green function would vanish identically,
because the Kähler action is non-vanishing only in fourth order for perturbations of
canonically imbedded M^4! This total breakdown of perturbation theory forces one to
forget the standard ways to quantise TGD, and I ended up with the world of classical
worlds: the geometrization of the space of 3-surfaces. Later zero energy ontology emerged,
and 3-surfaces were replaced by pairs of 3-surfaces at opposite boundaries of the causal
diamond CD, defining the portion of imbedding space which can be perceived by a
conscious entity in a given scale. The scale hierarchy is explicitly present.
Preferred extremals in space-time regions with Minkowskian signature of the induced
metric decompose into topological light rays which behave like quanta of a massless
radiation field. Massless extremals, for instance, are space-time tubes carrying a
superposition of waves proceeding in the same light-like direction. Restricted superposition
replaces superposition for a single space-time sheet, whereas unlimited superposition holds
only for the effects caused by space-time sheets on a test particle touching them
simultaneously.
The shape of the radiation pulse is preserved, which means soliton-like behaviour: the form
of the pulse is preserved, the velocity of propagation is maximal, and the pulse is precisely
targeted. The classical wave equation is "already quantized". This has very strong
implications for communications and control in living matter. The GRT approximation of
many-sheetedness of course masks all this beauty, as it masked also dark matter, and we
see only some anomalies such as several light velocities for signals from SN1987A.
In geometric optics, rays are a key notion. In TGD they correspond to light-like orbits of
partonic 2-surfaces. The light-like orbit of a partonic 2-surface is an analog of the
light-cone boundary: the signature of the induced metric changes at it from Minkowskian to
Euclidian. A partonic 2-surface need not expand like the sphere of the ordinary light-cone.
Strong gravitational effects make the signature of the induced 3-metric (0,-1,-1) at partonic
2-surfaces. There is a strong analogy with the Schwarzschild horizon but also differences:
for the Schwarzschild black hole the interior has Minkowskian signature.
3. What about the fermionic variant of Huygens principle?
In the fermionic sector, spinors are localised at string world sheets and obey the
Kähler-Dirac equation, which by conformal invariance is just what spinors obey in
superstring models. Holomorphy in a hypercomplex coordinate gives the solutions in a
universal form, which depends on the conformal equivalence class of the effective metric
defined by the anti-commutators of the Kähler-Dirac gamma matrices at the string world
sheet. Strings are associated with magnetic flux tubes carrying monopole flux, and it would
seem that the cosmic web of these flux tubes defines the wiring along which fermions
propagate.
The behavior of spinors at the 1-D light-like boundaries of string world sheets carrying
fermion number has been a long-lasting headache. Should one introduce a Dirac-type
action at these lines? The twistor approach and Feynman diagrammatics suggest that the
fundamental fermionic propagator should emerge from this action.
It finally turned out that one must assign a 1-D massless Dirac action in the induced metric
and also its 1-D super counterpart, the line length, which however vanishes for the
solutions. The solutions of the Dirac equation have an 8-D light-like momentum assignable
to the 1-D curves, which are 8-D light-like geodesics of M^4×CP_2. The 4-momentum of a
fermion line is time-like or light-like, so that the propagation is inside the future light-cone
rather than only along the future light-cone as in Huygens principle.
The propagation of fundamental fermions, and of the elementary particles obtained as
their composites, is inside the future light-cone, not only along the light-cone boundary
with light velocity. This reflects the presence of CP_2 degrees of freedom directly and
leads to massivation.
To sum up, a quantized form of Huygens principle - formulated statistically for fermionic
lines at partonic 2-surfaces, for partonic 2-surfaces themselves, or for massive quanta as
space-time regions - could hold true. The transition from TGD to the GRT limit, by
approximating many-sheeted space-time with a region of M^4, should give Huygens
principle. Especially interesting is the 8-D generalisation of Huygens principle, implying
that the boundary of the 4-D future light-cone is replaced by its interior. The 8-D notion of
twistor should be relevant here.
posted by Matti Pitkanen @ 10:54 PM
62 Comments:
At 5:01 AM,
Anonymous said...
Oh my God this Ramanujan is beautiful:
"Universal quadratic form
An integral quadratic form whose image consists of all the positive integers is sometimes
called universal. Lagrange's four-square theorem shows that w^2+x^2+y^2+z^2 is
universal. Ramanujan generalized this to aw^2+bx^2+cy^2+dz^2 and found 54 multisets
{a,b,c,d} that can each generate all positive integers, namely,
{1,1,1,d}, 1 ≤ d ≤ 7
{1,1,2,d}, 2 ≤ d ≤ 14
{1,1,3,d}, 3 ≤ d ≤ 6
{1,2,2,d}, 2 ≤ d ≤ 7
{1,2,3,d}, 3 ≤ d ≤ 10
{1,2,4,d}, 4 ≤ d ≤ 14
{1,2,5,d}, 6 ≤ d ≤ 10
There are also forms whose image consists of all but one of the positive integers. For
example, {1,2,5,5} has 15 as the exception. Recently, the 15 and 290 theorems have
completely characterized universal integral quadratic forms: if all coefficients are integers,
then it represents all positive integers if and only if it represents all integers up through
290; if it has an integral matrix, it represents all positive integers if and only if it represents
all integers up through 15." http://en.wikipedia.org/wiki/Quadratic_form
There are 7 universal intervals for d for the limits a=1, b=(1-2) and c=(1-5), and their
lengths are 7, 13, 4, 6, 8, 11 and 5: three multiples of 2 and the two first prime pairs.
One possible physical interpretation of these intervals of positive integers is as quantum
numbers, and hence it could be said that there are 7 "quantum numbers" giving the
"interval" (or field) of the principal quantum number. Also, each interval of the universal
quadratic form can be considered a delta function with discrete boundaries for the
coefficient d, and on the rational line there are 5 center points of these seven intervals for
d: 4 (for abc: 111), 4½ (113, 122), 6½ (123), 8 (112, 125) and 9 (124); note also the
symmetry 8=2x4 and 9=2x4½ (cf. Dirac "sky and sea"). Or, if you like, five complete
pages in the "Good Book", glued together by the coefficients of Ramanujan's formula.
Could it be shown that these 7 (pre)quantum numbers/intervals that generate the principal
quantum number (1, 2, 3,...), also contain the other quantum numbers and their relations?
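The quoted claims are easy to spot-check by brute force (my own sketch; the search limits are kept deliberately small). For diagonal forms the integer-matrix version of the 15-theorem applies, so checking representability up to 15 is the decisive test:

```python
import math

# Brute-force the values of the diagonal form a*w^2 + b*x^2 + c*y^2 + d*z^2
# falling in 1..limit, to spot-check universality claims for small limits.

def represented(a, b, c, d, limit):
    """Set of integers in 1..limit represented by the form."""
    r = math.isqrt(limit) + 1
    hit = set()
    for w in range(r):
        for x in range(r):
            for y in range(r):
                for z in range(r):
                    n = a*w*w + b*x*x + c*y*y + d*z*z
                    if 1 <= n <= limit:
                        hit.add(n)
    return hit
```

Lagrange's form {1,1,1,1} and Ramanujan's {1,1,1,7} cover everything up to 15, while {1,2,5,5} misses exactly 15, as stated in the quote.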
At 6:21 AM,
[email protected] said...
Amazing results. They might even have some exotic application to physics some day.
At 9:09 AM,
Anonymous said...
http://www.fen.bilkent.edu.tr/~franz/mat/15.pdf
At 4:31 PM, Anonymous said...
BTW, what do you mean by '3-surface', exactly? Does your definition include or allow
also triangles on a plane?
The 15-theorem's limit 15 for the universal quadratic form (cf. 8D) is a triangular number,
the first such with similar internal structure, fully containing a genuine triangular number
(3).
At 8:54 PM, [email protected] said...
By the way, universality is possible only in D=4 or higher. One cannot avoid association
with space-time dimension.
a) The Dirac equation on a fermion line (a light-like geodesic characterised by 8-D
light-like momentum, representing a string boundary at the light-like parton orbit) leads to
an 8-D mass squared, which vanishes:
p_0^2 - p_1^2 - ... - p_7^2 = 0, or
p_0^2 - p_1^2 - p_2^2 - p_3^2 = m^2 = p_4^2 + p_5^2 + p_6^2 + p_7^2.
Suppose that the number theoretical universality of TGD forces the four-momenta to
have integer components, or rational components, in which case one can take out the
common denominator and get a similar situation again.
Mass squared is a constant times an integer by super-conformal invariance: it is an
eigenvalue of the super-conformal scaling generator L_0. This condition requires that m^2
is an integer.
By four-dimensionality, p_4^2 + ... + p_7^2 would indeed take all non-negative integer
values.
What about the Minkowskian variant p_0^2 - ... - p_3^2: is it also universal? My hunch
is that in this signature universality is easier to satisfy. Probably this problem has also been
discussed. Any information about this?
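The remark that universality needs D = 4 or higher can be illustrated numerically (a small sketch of the classical squares theorems, not a TGD computation): three squares miss the numbers of the form 4^a(8b+7), while four squares always suffice by Lagrange's theorem.

```python
import math

# Smallest number of squares summing to n, found by breadth-first widening;
# by Lagrange's four-square theorem the answer is always at most 4.

def min_squares_needed(n):
    sums = {0}
    squares = [i * i for i in range(math.isqrt(n) + 1)]
    for k in range(1, 5):
        sums = {s + q for s in sums for q in squares}
        if n in sums:
            return k
    return None  # unreachable for n >= 1, by Lagrange's theorem
```

min_squares_needed(7) and min_squares_needed(15) both return 4 (7 and 15 are of the form 8b+7), which is why a fourth, Euclidian contribution is needed to reach all non-negative integer values.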
At 10:47 PM, Anonymous said...
Again, the number 15 showing up in the present epoch is a convergence of "criticality"
http://arxiv.org/abs/1403.5227
criticality is such a funny term.. is it like on Star Trek when the captain is saying "faster"
and Scotty from engineering replies they are already going at warp 9.99999 whatever...
captain doesn't understand logarithms max out at 10?
--Stephen
At 6:46 AM,
Anonymous said...
The Grassmannian of the quantum Pascal triangle (diamonds are the yang girls' best friend
... ;) ) might be helpful here (i.e., how does Ramanujan cognition work?):
http://math.ucr.edu/home/baez/qg-fall2007/pascal.html
A somewhat related question: how do you know where your hand is, especially/at least if
you don't quantum Zeno it by looking at/thinking of it?
At 7:05 AM,
Anonymous said...
By the way, the funniest and most revealing thing about Ramanujan here was his only
"mistake", leaving out the form with gap 15, which then became the 15-theorem and the
290-theorem (290 = 1 + 17^2). (Cf. self-avoiding and self-including walks.)
At 8:53 AM,
Ulla said...
Can you explain this phrase a little more, thx.
"This total breakdown of perturbation theory forces to forget standard ways to quantise
TGD
..."
Still you talk of partition theory.
At 8:57 AM,
Ulla said...
http://inspirehep.net/record/1297532/plots - Look at this. Are the CD diamonds forming
Dirac fermions in TGD? Can it be said so?
At 3:52 PM, Santeri Satama said...
Stephen, love your comment. With a very simple gardener's understanding, criticality = let
it grow, and don't push the growth factors to the negative side when mulching. Or if and
when you do, learn from overdoing it. On the other hand, who's to deny willing and
accepting participation in being a growth factor, up to the state of harmony and balance, so
dynamics...
At 4:51 PM, Anonymous said...
Matti my friend, as our days are getting shorter and flying past faster, your question
about the universality of the Minkowskian signature becomes more and more acute. I don't
have the formal answer (you are better in that field than you think), but this I know: we
can pose our questions and the universe will answer, or allow us to find the answer by
ourselves. The only hunch I have now is that the three colors of algebraic quadrance in
Norman's hyperbolic rational geometry are highly relevant and go deeper than Minkowski.
But the main point stays: this is all heuristic, and the heureka will come when you are
ready for it and the answer satisfies your deepest questions, which only you can ask for
yourself. Also according to TGD the whole of math reinvents itself from one universal QJ
to another, so with all the empirical evidence you can Trust that the universe answers your
questions, as you are also a hologram containing all the Information aka Creation. In the
way that gives you the deepest emotional satisfaction also in this life.
At 7:23 PM, [email protected] said...
To Ulla:
you probably meant to say "perturbation theory". Perturbation theory is an extremely
general concept.
You make a guess for a solution. It is not quite correct, but you can calculate
corrections order by order in some small parameter. Cows are not spherical - not even for
theoretical physicists - but a physicist can start from a spherical cow as an approximation
and do centuries of perturbation theory to reproduce the correct shape from the standard
model ;-).
In that sentence I referred to the path integral form of perturbation theory originally
developed by Feynman. The scattering amplitude is a sum over the amplitudes for all paths
that the particle can travel from A to B. Huygens principle actually says much the same
classically, and quantum theory brings only corrections to this as radiative corrections.
In TGD, the path would become a four-surface connecting 3-surfaces A and B, since the
particle is now a 3-surface. This approach fails if one takes empty Minkowski space M^4
in M^4xCP_2 as the first approximation - the "spherical cow". One cannot sum over all
"paths" (space-time surfaces).
The king is dead and we must find a new king. The new king is WCW geometry. The path
integral is replaced with a functional integral over pairs of 3-surfaces A and B at opposite
boundaries of CD. The additional bonus is that the functional integral is mathematically
well-defined, unlike the path integral: this is the main victory of the new king. Of course,
this also extends Einstein's geometrization program so that it applies to the entire quantum
theory, and this is something really big.
Perturbation theory is a universal approach to treating corrections to the sphericality of
cows, and one can of course develop perturbation theory for the functional integral in
powers of the Kähler coupling strength alpha_K = g_K^2/(4*pi*h_eff). When h_eff is
large, alpha_K is small, and a large value of h_eff - the phase transition to the dark matter
phase - would save perturbation theory when it would not be possible otherwise.
What is however new is that in the topological sense there is only one diagram. One can
call it a generalised Feynman diagram, or replace "Feynman" with "twistor" or "knot" or
"braid". If I replaced it simply with "TGD", I would be regarded as a crackpot (CP) of the
second kind. If I replaced it with "Pitkänen", I would be regarded as a CP of the first kind,
the worst variety of crackpots. Now I am seen only as a CP of the third kind, and my
friends and relatives do not have so much to be ashamed of ;-).
At 7:39 PM, [email protected] said...
To Anonymous:
When I have a mental image about my hand there, quantum Zenoing occurs as long as
the mental image - one self in the hierarchy - exists.
The mental image about the hand exists as a self, identified as the period of state
function reductions to a fixed boundary of CD, leaving the part of the zero energy state at
that boundary invariant.
In ordinary quantum theory, these state function reductions would do nothing to the
state. Now they change the state at the second boundary of CD and even change the
position of the second boundary. This gives rise universally to the experience of the flow
of time, since on the average the distance between the tips of CD increases. Funny to think
that my hand would experience the flow of time!
Standard neuroscience of course says that mental images are associated with
*representations* of the external object in my brain, rather than with the external objects
as such, say my hand. To what extent this is true is an interesting question. In the TGD
Universe, flux tubes connect things together in all scales; in particular, gravitational
interaction involves them plus the associated strings connecting partonic 2-surfaces.
Attention creates flux tubes and presumably also the associated strings.
Question: are the flux tubes only between me and the representation of the object, or
perhaps between me and the object? What happens when I look at a distant star through a
telescope?
At 7:55 PM, [email protected] said...
Thanks to Anonymous for the quantum Pascal triangle. It would be interesting to look at
this more closely.
Inclusions of HFFs and quantum groups characterised by quantum phases q =
exp(i2pi/n) are very interesting, and the quantum Pascal triangle is characterised by such a
phase and gives a quantum variant of the binomial coefficient as a result. One can imagine
quantum variants of the integers characterising all kinds of combinatorial objects: do
quantum variants of these objects make sense? Probably some mathematician has
pondered this question too.
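The quantum binomial coefficients mentioned here can be computed from the q-deformed Pascal recurrence [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q (the Gaussian binomials of the linked Baez notes); substituting a root of unity q = exp(i2pi/n) then gives the numerical quantum values. A sketch, representing each coefficient as a polynomial in q:

```python
# Gaussian (quantum) binomial [n choose k]_q as a list of integer
# coefficients in q (lowest power first), built from the q-Pascal rule
#   [n,k]_q = [n-1,k-1]_q + q^k * [n-1,k]_q.

def q_binomial(n, k):
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    left = q_binomial(n - 1, k - 1)
    right = q_binomial(n - 1, k)      # this branch picks up a factor q^k
    out = [0] * max(len(left), len(right) + k)
    for i, c in enumerate(left):
        out[i] += c
    for i, c in enumerate(right):
        out[i + k] += c
    return out
```

q_binomial(4, 2) gives [1, 1, 2, 1, 1], i.e. 1 + q + 2q^2 + q^3 + q^4; setting q = 1 (summing the coefficients) recovers the ordinary binomial coefficient 6.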
Only a few days ago I realised that in order to have "quantum quantum theory" as a tool
to describe finite measurement resolution, it is better to have quantum variants of the
fermionic anticommutation relations for the induced spinors. They have been formulated,
as I learned in five minutes from the web.
These anticommutation relations however demand a 2-D space/space-time! But just the
well-definedness of em charge almost forces 2-D string world sheets! And number
theoretic arguments remove the "almost". In 4-D Minkowski space-time you do not get
them!
In an over-optimistic mood - officially allowed at morning hours - I can therefore
conclude that the observation of anyons in condensed matter systems (assigned with 2-D
boundaries) serves as direct evidence for the localisation of induced spinors at 2-D
surfaces and for large h_eff. I must however assume that also partonic 2-surfaces carry
them - whether this is so has been an open question for a long time.
At 8:12 PM,
[email protected] said...
To Ulla about CD diamonds and Dirac fermions: I would not say it in that manner.
Spinors, and thus fermions, are there from the beginning. Space-time surfaces are
correlates for what we call the sensory; second quantised spinor fields are correlates for
what we call Boolean cognition. Both are needed. I am because I sense and think.
Descartes got half of it.
CDs are the imbedding space correlate of ZEO: zero energy states are associated with
them, and space-time surfaces are within CDs, as are also the induced spinor fields at
string world sheets within CDs.
There are many levels in the geometric complex, and this does not make things easy for a
layman.
In the lowest resolution you have space-time surfaces, the imbedding space, and WCW.
As you look at a space-time surface in better resolution, you begin to discern Euclidian and
Minkowskian regions, light-like orbits of partonic 2-surfaces between them, string world
sheets, and their 1-D boundaries at the orbits of partonic 2-surfaces. You discover what the
space-time correlates of Feynman/twistor/whatever diagrams are. This is one part of the
geometrization and topologization of physics initiated by Einstein.
At the imbedding space level you find the hierarchy of CDs required by ZEO, by the
non-determinism of Kähler action, by cosmological facts, and by consciousness theory.
WCW decomposes to sub-WCWs associated with CDs, etc. This is very nice.
Restriction to a finite volume is not only an approximation; it is a basic property of
conscious experience: conscious experience does this for the physics. CD is the spotlight
of consciousness.
At 6:54 AM,
Anonymous said...
There's a story behind the question "how do I know where my hand is":
An anthropologist was living with a Siberian shamanistic tribe, and one day the hunters of
the tribe came to the shaman, telling that the deer were not where they used to be (maybe
troubled by an oil company), and needless to say, if they would not find the deer, the tribe
would be very hungry next winter. The shaman told the hunters to go and find the deer in
another valley; they did and found them, and the tribe had food for the next winter. The
anthropologist who had been following these events asked the shaman: how did you know
where the deer were? The shaman replied with a question: "how do you know where your
hand is?"
The simplest level of math and dimensional/exponential logic tells us that this basic
spatiotemporal "proprioceptive" knowing needs a +1 dimensional field/space-time in
relation to the object/intentional target (e.g. food) located. But in further analysis and
number theoretic etc. geometric analysis, the deductive mathematical logic is not
independent from the mathematical language used. This is the most basic linguistic and
relativistic truth, the measurement tools chosen also need to be holistically comprehended,
and that is not an easy task for beings born and raised in this or that limiting paradigm (e.g.
Cantorian "paradise", "real" number line etc.). Challenges are not meant to be easy, but
challenging. Both layman and expert have their mutual handicaps and strengths, and
expert skills can greatly benefit from layman intuitions and questions outside the
axiomatic box in which the expert has been dogmatized. And laymen do not want to or
need to do everything again ab ovo, but need to share and use the accumulated expert
experience. In the best situation the childlike layman's open and anarchic curiosity and the
age-old expert's skills and wisdom are combined in one person.
So, in the spirit of Socratic dialogue, can we establish and agree that by definition any and
all space-time surfaces are 2D objects, the observation of which requires a
3D space? When I look at the ground and trees, I see 2D surfaces in 3D space, and this
(external) observation event entails some kind of trigonometry to measure limits or
boundaries. And when I observe these 3D and 4D observation fields (my "magnetic field
body"), I'm thinking in a higher level of dimensional quadrance or quadrea.
All language, including number theories, is symbolic, i.e. a relation of parts and whole. Parts
have meaning only in relation to whole. Ramanujan is a symbolic-holographic part of
whole or God, thinking Godly thoughts with rare sensory directness. On Ramanujan level,
the math is in the gut, not just learned rules in head. And Ramanujan cognition is not just
"out there", but also inside each of us, like the hand and deer and the tribe are inside
shamans cognitive n-dimensional field, where if question arises, answer arises...
At 7:42 AM,
Anonymous said...
Hmm. Cats. When a cat has the mental image of cat-self on the floor and cat-self on the
table (in order to lay down on your keyboard), it does a rapid head movement, the vertical
line of its two eyes going up and down. Question: do we know and can we know what kind of
trigonometry the cat is using, in order to jump with cat-like grace?
PS: and for balance, a dog: https://www.youtube.com/watch?v=GhsNLvxYSNM
At 5:39 PM, Anonymous said...
Orwin, is that you?
At 4:19 PM, Anonymous said...
Matti, is there a number theoretical link between the 4D quadrance universal
coefficients and the inversion transformation?
The invariants for this symmetry in 4 dimensions are unknown: they can't be points, lines
(strings) seem unknowable(?), so what about 'universal' areas/surfaces?
http://en.wikipedia.org/wiki/Inversion_transformation
At 7:51 PM, [email protected] said...
To Anonymous about 4-D inversion:
Inversions in 4-D are conformal transformations. They preserve
angles and scale the metric by a local factor. The inner product given by the Lorentz invariant
metric transforms by this factor. The ratio of two inner products of tangent space vectors at a point
P, given by I = A.B/C.D, is invariant under these transformations, but these are tangent space
invariants.
One obtains M^4 conformal transformations by starting from Poincare transformations,
which are linear, and by combining them with the inversion x^mu --> x^mu/x.x. Infinitesimal
transformations are characterised by an extension of the Poincare Lie algebra in which the new Lie
algebra generators are vectors (inversions at points differing infinitesimally from the origin)
and a scalar (scaling).
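The conformality of the inversion is easy to check numerically. Below is a small sketch (Python with NumPy, not part of the original comment; Euclidean signature is used for simplicity, while the Lorentzian case would only change the inner product): the Jacobian J of x --> x/x.x satisfies J^T J = lambda(x) I, i.e. angles are preserved and the metric is scaled by a local factor.

```python
import numpy as np

def inversion(x):
    # Euclidean inversion x --> x / (x.x); the Minkowskian case would
    # replace the dot product by the Lorentz inner product.
    return x / np.dot(x, x)

def jacobian(f, x, h=1e-6):
    # Central-difference numerical Jacobian of f at x.
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x = np.array([0.7, -1.2, 0.4, 2.0])
J = jacobian(inversion, x)
G = J.T @ J
# Conformality: J^T J is a multiple of the identity, so angles are
# preserved and the metric is scaled by the local factor G[0, 0].
assert np.allclose(G, G[0, 0] * np.eye(4), atol=1e-6)
```

Analytically the scaling factor is 1/(x.x)^2, since the Jacobian is 1/(x.x) times a reflection matrix, which is orthogonal.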
I looked at the article: the transformations called conformal transformations there look
very different from genuine conformal transformations, and the authors do not actually show
that these transformations are conformal: they are not. The author just
generalises the condition guaranteeing that Poincare transformations are isometries.
Therefore there is no reason to expect that a four-point invariant would exist.
There are no references in the article to the inversion transformation, which makes me
really skeptical.
At 7:55 PM, [email protected] said...
One can look for the conformal transformations by starting from the complex case and
then look for a possible quaternionic generalisation.
In the complex plane one can write z = u/v, and Mobius transformations act linearly as
u --> a*u + b*v, v --> c*u + d*v. From linearity at the level of C^2 follows the existence of the
four-point invariant for Mobius transformations. The analogs of four-point invariants for
conformal transformations of M^4 do not exist, but one can construct them in terms of 8-D
twistors: in Minkowski space itself this kind of invariants do not exist.
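The four-point invariant in question is the classical cross ratio. A minimal sketch (Python with exact rational arithmetic; illustrative, not part of the original comment) checking its invariance under a Mobius transformation:

```python
from fractions import Fraction

def mobius(a, b, c, d, z):
    # z --> (a z + b) / (c z + d), with a d - b c != 0.
    return (a * z + b) / (c * z + d)

def cross_ratio(z1, z2, z3, z4):
    # The four-point invariant of Mobius transformations.
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

zs = [Fraction(1), Fraction(2), Fraction(5), Fraction(4)]
a, b, c, d = 2, 1, 1, 3          # a d - b c = 5 != 0
ws = [mobius(a, b, c, d, z) for z in zs]
assert cross_ratio(*ws) == cross_ratio(*zs)   # both equal 8/9 here
```

The invariance follows precisely from the linearity at the level of C^2 mentioned above, which is why the quaternionic non-commutativity discussed next is the sticking point.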
Could quaternion analogs of Mobius transformations allow one to generalize the four-point
invariants from the complex plane to 4-D quaternionic space? One would identify
Q = Q_1/Q_2 as in the complex case and act by linear transformations on Q_i. Here
non-commutativity becomes the problem, since one would like to have quaternion analyticity,
which would mean Taylor or Laurent series with coefficients multiplying powers of Q
from left or right but not both.
One must specify whether the matrices for linear transformations act from the right or
left, say right. Also in the inversion (Q*a+b)/(Q*c+d) one must specify whether one
divides from the right or left. It seems that here one loses the idea of quaternion analyticity,
since one necessarily obtains terms which can be written as Q^n*k. The coefficients
a, b, c, d should be real to avoid the problems. Projectivization Q = Q_1/Q_2 does not work
by non-commutativity: one obtains terms of the form Q*a*Q. One should use biquaternions rather
than their ratios to keep everything right-linear. They would be rather analogous to the 8-D
linear representation of M^4 twistors.
At 4:54 AM,
Anonymous said...
I have a layman problem of comprehending Minkowski space, and the expression
"Euclidean and Minkowski regions" does not make sense to me. What kind of "space" would be
divided into such different regions?
AFAIK Minkowski space is not fully algebraic, as it is based on transcendental angles and
lengths, and it preserves and creates unnecessary complexities. Special Relativity follows
naturally from the quadratic form identity/symmetry of chromogeometry:
Euclidean:
blue dot product (Qb)
[x1, y1] ·b [x2, y2] ≡ x1x2 + y1y2
Relativistic:
red dot product (Qr)
[x1, y1] ·r [x2, y2] ≡ x1x2 − y1y2
and
green dot product (Qg)
[x1, y1] ·g [x2, y2] ≡ x1y2 + x2y1
The identity of the threefold symmetry (for the quadrances Qb, Qr, Qg of a single vector):
Qb^2 = Qr^2 + Qg^2
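The identity is elementary to verify when each dot product is taken of a vector with itself, i.e. for the quadrances. A small check (a Python sketch, not part of the original comment):

```python
import random

def Qb(x, y): return x * x + y * y   # blue (Euclidean) quadrance
def Qr(x, y): return x * x - y * y   # red (relativistic) quadrance
def Qg(x, y): return 2 * x * y       # green quadrance: x1y2 + x2y1 with both vectors equal

random.seed(0)
for _ in range(1000):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    # Threefold Pythagorean identity of chromogeometry:
    assert Qb(x, y) ** 2 == Qr(x, y) ** 2 + Qg(x, y) ** 2
```

Expanding (x^2+y^2)^2 = (x^2-y^2)^2 + (2xy)^2 shows the identity is an algebraic tautology; it is the same identity Matti identifies below with the Pythagorean-triple construction.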
Relativistic observers can agree on observing shared dot products/2D-surfaces. Continuing
linear algebra fully algebraically from this basic theorem of three colors of
perpendicularity and their identity has a strong flavor and promise of "quantum quantum
theory" where you don't run into the complexities of transcendental Minkowski. Euclidean
space contains/is given by both inverses of relativistic spacetime.
In other words, following your notion, we're all in the Euclidean/quantum "black (w)hole",
and relativistic red and green dot products/quadrances (cf red shift and green shift) just
look like the "outside"...
At 6:13 AM,
[email protected] said...
Sorry, I failed to understand how the threefold symmetry would give the Minkowskian inner
product or a shared inner product. What I get from the threefold symmetry is
x_1y_2 = -x_2y_1.
At 6:16 AM,
Anonymous said...
Link to chromogeometry:
http://arxiv.org/pdf/0806.3617v1.pdf
At 6:31 AM,
Anonymous said...
The point was that threefold symmetry goes much deeper, on the level of unified theory. If
I understood correctly, Minkowskian is just the 4D generalization of Qr.
Here's Norman's approach to linear algebra, looks very interesting:
https://www.youtube.com/playlist?list=PL01A21B9E302D50C1
At 7:13 AM,
Anonymous said...
Can it be shown, purely algebraically in (rational) linear algebra, that 8D Euclidean (Qb) is
the sum of 4D relativistic and its inverse (Qr and Qg)? And could the two relativistic dot
products be interpreted as a CD? If so, the threefold symmetry would be a very beautiful
simplification of the 8D imbedding space as the sum of the two sides of a CD.
At 6:42 PM,
[email protected] said...
To anonymous.
I looked at the article about rgb and found that what looks like a product is not an
ordinary product but something else which I do not understand.
From the article I learn that the rgb identity is actually just the standard identity
(x^2+y^2)^2 = (x^2-y^2)^2 + (2xy)^2 appearing in the construction of Pythagorean
triangles whose sides have rational/integer lengths: x^2+y^2, x^2-y^2, 2xy. This is number
theoretically very interesting.
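The parametrisation x^2-y^2, 2xy, x^2+y^2 generates integer-sided right triangles directly; a brief sketch (Python, illustrative only):

```python
def pythagorean_triple(x, y):
    # Euclid's parametrisation: legs x^2 - y^2 and 2xy, hypotenuse x^2 + y^2.
    return (x * x - y * y, 2 * x * y, x * x + y * y)

# Every pair x > y > 0 gives an integer-sided right triangle.
for x in range(2, 30):
    for y in range(1, x):
        a, b, c = pythagorean_triple(x, y)
        assert a * a + b * b == c * c
```

For example (x, y) = (2, 1) gives the familiar (3, 4, 5) triangle, whose angles are the "preferred angles" mentioned in the next comment.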
A natural idea would be that p-adically the angles associated with Pythagorean
triangles would be preferred. It however seems that preferred angles correspond to 2pi/n:
roots of unity are p-adically well-defined, though one cannot speak about angles but only
about their sines and cosines. The notion of angle (actually phase) leads to the introduction of
algebraic extensions of p-adic numbers: this also makes possible discrete Fourier analysis,
without which physics is not possible.
The three quadratic forms are 2-D length squared in Euclidian coordinates, Minkowski
coordinates, and Minkowski coordinates rotated by pi/4. I fail to see any connection with
4-D Minkowski space or with unification of interactions.
At 6:48 PM, [email protected] said...
To Anonymous: About the lecture of Norman. I am going to be polemic now!;-). Nothing
personal.
The geometry in the lecture looks to me like elementary descriptive geometry, nothing bad
in that as such. Linear algebra is also described. This is standard stuff which students of
theoretical physics should learn during the first autumn; usually they do not.
I find it difficult to see how Norman's approach could lead to a unified theory. These
concepts belong to the time before modern physics, maybe even before Newton: there is no
mention of even differential calculus!
Huge developments in mathematical consciousness have occurred after Newton.
Modern physics involves partial differential equations, differential geometry, Hilbert
spaces, advanced algebraic notions such as von Neumann algebras, group theory,
topology, advanced number theory, ... It is difficult for me to see how one could
formulate it in terms of descriptive geometry while believing that rationals form a continuum.
Sorry for being polemic, but it is important to see the big picture about the evolution of
physics and mathematics, which has led to the greatest revolution in consciousness that has
happened on this planet. It is a pity that most scientists building their curriculum vitae so often
fail to realize this. There is no return to the times before Einstein, and even less to the times
before Newton.
This sounds frustrating, but professionals experience the same frustration: the
understanding of an individual remains nowadays infinitesimal. The miracle of science
convinces me about the existence of higher levels in the hierarchy of consciousness.
At 12:25 AM,
Anonymous said...
OK then, let's look at the big picture :). The underlying philosophy and basic
assumptions of science (mechanistic determinism) have been wrong since Descartes, and
even today, a hundred years after QM, the ruling paradigm is the cult of materialism and
alienated objectivism, the consciousness of a wannabe king-of-the-hill chimp with a big
stick. Sure, even a blind chicken finds a kernel of corn now and then, but as the whole of
civilization revealed itself to be just this self-destructive collective psychosis we live
entangled with, all the talk about evolution is just words, no actual evolving involved.
Polemical enough big picture? ;-)
Looney crackpots like you are the most fun and interesting part of this field of magic.
And the same goes for Norman: to do math properly from the beginning and care for a big audience,
aka fellow people, in this day and age of ad hoc make-believe "axioms" and scholastics
about the number of angels on a needle point where irrationals converge, is a sure sign of a
crackpot... ;)
Yes, if we give the quadrance of general measurement theory (aka thermodynamics) any
power, there is no going back to the good old days of just shamanistic tribes, or just to Euclid
and Euler; we have all these kernels too, they just don't fit together with all the acorns. The
name of the game is to question basic assumptions, especially when in a dead end.
Everybody can pick a stick and bullshit all they want, it's an entertaining game. But
enough is enough, now back to business.
Threefold symmetry is a "higher level" Pythagoras theorem, and as it has already been
shown without transcendental Minkowski that special relativity is rooted in Qr, that
deserves a good look, and I believe my hypothesis that this level of holographic
Pythagorean identity can help solve the riddle of unified theory is natural and justified. We
can't say whether it can or cannot help without checking. As for the holographic levels or scales
of Pythagoras' theorem, it's good to remember Bohm's philosophical notion of generative
order.
The basic philosophy behind Norman's approach is finite measurement resolution
("does not believe in infinite sets" of Cantorian paradise), so it's best suited for that
purpose, but does not alone lead to e.g. math of cognition and the fields and spacetimes in
which mathematical observables take place.
At 3:41 AM,
[email protected] said...
The materialistic view is wrong for an obvious reason: it neglects consciousness as acts of
re-creation of the Universe. The mathematics behind the materialistic view is however a
wonderful creation and remains intact also when the world view is expanded.
I believe that finite measurement resolution is realised as properties of quantum states
themselves. Rationals and their algebraic extensions are the tool to express the finite
resolution classically. The fundamental geometric existence involves however both real
and p-adic continua. Restriction to mere rationals is analogous to restricting to
materialistic world view: both deny the transcendental.
Transcendentals are not mere mathematical spiritual luxury but an extremely powerful tool.
Without them physical theories would degenerate to computer programs.
At 9:05 AM,
Anonymous said...
Notions of continuity and discontinuity are at the heart of the matter, and they deserve
careful philosophical thought. Ad hoc axiomatics serve the needs of applied engineering,
but when we say that geometry and physics reduce to number theory, we are talking on
different level. There are various continuities and discontinuities at various dimensional
etc. exponential contexts, and their interrelations. Completeness is again different but
related notion.
The notion of a "real number line" is IMHO a misnomer and category error, as
phenomenally it refers to 2D-fields of natural numbers with a certain kind of exponential
structure, not a 1D-line like the rational continuum. According to their exponential structure
these natural number fields are _internal_ structures of integers and integer ratios. The p-adic
notion of exponentiality, its fields - or strings or braids?! - starts from the divine Whole One,
the platonic hen kai agathon (cf. infinite primes in your language), divided into primes that
multiplied together generate the natural numbers.
This is the big picture that I just figured out. Platonic anamnesis aka 'mathematical
holography' is IMHO the most basic axiomatic condition for reducing physics and the theory of
cognition to number theory. I haven't seen this number theoretic relation of whole and
parts explained anywhere this simply and intuitively. And as math is these days
usually taught and done in a very misguiding language, this beautiful simplicity does not
often get fully comprehended. The psychological mechanisms behind this obfuscation of
Platonia are so obvious they need not be pointed out.
Now we can see and say that the "p-adic" unity (One divided into primes is more intuitive
language) is complete in the sense that it generates the natural numbers. And parts
_really_ make sense only in relation to the holographic whole. 2D triangles and their algebraic
relations ("irrationals") now make perfect sense p-adically, as line segments and their
quadrances from hen kai agathon to the rational number line. Their "real" and "complex" etc.
approximations are quite literally the shadows on the walls of Plato's cave, all
converging to the infinitesimal that good bishop Berkeley ridiculed for good reasons, not
the Ideas (Pythagoras' quadratic theorem, Ramanujan cognition etc.) themselves.
Seen in this context, the dimensional "limit" of the universal quadratic form and the 15 and
290 theorems bring an interesting aspect of inherent discontinuity or economy to the
picture. Which kind, exactly, and where?
And like the original Pythagoras' theorem, the Idea/Identity of threefold symmetry
resides also in p-adic Platonia, outside the cave.
Of course also the Plato's cave is part of the whole, but to keep on evolving, we need to
step once in a while outside the cave, clear out the spider webs and let new light create
new shadows.
At 7:14 PM, [email protected] said...
I agree with the philosophical picture but my big picture is different. I see different
number fields as faces of a kaleidoscope.
p-Adics labelled by primes p=2,3,5,... (I call p the "p-adic prime") plus their extensions
are also completions of rationals, containing their own kind of transcendentals as infinite
series of powers of p which are not periodic after a finite number of digits (as they are for
rationals) and which are infinite as real numbers. Number fields and their algebraic and
possibly even non-algebraic extensions are to me a Big Book.
I see rationals as islands of order in a sea of chaos. You want to throw away the sea and
keep only the islands. To me this is unrealistic. In physical systems you have not only the
periodic orbits but also the non-periodic ones, and they dominate.
Rationals are for me not the whole Big Book but only its back cover. It is this Book in which
I want to formulate physics - or at least TGD;-). Number theoretical universality - equations and
formulas are expressed so that they make sense in any number field - is an extremely
powerful constraint, as the mass calculations based on p-adic thermodynamics demonstrated:
p-adic temperature is quantized, the primes characterizing p-adic number fields (briefly
and somewhat confusingly "p-adic primes") characterise mass scales, etc...
[Here I must clarify: p-adic integers in R_p involve just one prime, p, unlike real integers,
and one does not have a prime decomposition of a p-adic integer in the p-adic sense; also,
most p-adic integers can be said to be infinite as reals].
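The bracketed clarification can be made concrete. Below is a sketch (Python, illustrative, with a hypothetical helper padic_digits; requires Python 3.8+ for the modular inverse form of pow) computing the first digits of p-adic expansions: a rational has an eventually periodic expansion, while e.g. -1 has all digits equal to p-1 and is in this sense "infinite as a real number".

```python
def padic_digits(num, den, p, k):
    # First k digits a_0, a_1, ... of the p-adic expansion of num/den,
    # assuming p does not divide den (hypothetical helper for illustration).
    assert den % p != 0
    digits = []
    for _ in range(k):
        a = (num * pow(den, -1, p)) % p   # digit = (num/den) mod p
        digits.append(a)
        num = (num - a * den) // p
    return digits

# A rational is eventually periodic: 1/3 = 2 + 3*5 + 1*5^2 + 3*5^3 + ... in the 5-adics.
assert padic_digits(1, 3, 5, 6) == [2, 3, 1, 3, 1, 3]
# -1 = (p-1) + (p-1)p + (p-1)p^2 + ... : infinitely many non-zero digits.
assert padic_digits(-1, 1, 5, 6) == [4, 4, 4, 4, 4, 4]
```

A genuinely transcendental p-adic number would have a digit sequence that never becomes periodic, which is the distinction drawn in the comment above.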
The needed universal language would be very much analogous to tensor analysis,
which is the manner to realize general coordinate invariance (which, by the way, is even
more powerful in TGD than in GRT).
One concrete application would be the interpretation of field equations and their solutions
for preferred extremals in a manner making sense in any number field. p-Adic manifold as
cognitive representation, a cognitive chart, would be a second related notion. A third application
would be the construction of scattering amplitudes in a manner not depending on number
field: here vertices as product and co-product of super-symplectic Yangian, and amplitudes
as computations leading from a collection of algebra elements to another one, might be the
universal construction. Note the Great Principle: Physics represents mathematics and not
vice versa!!
What might be the tensor analysis of number theoretically universal physics? Maybe
someone is already writing a thesis about this;-).
At 1:03 AM,
Anonymous said...
Too bad Norman didn't include tensors in his lectures on linear algebra for beginners.
:)
Internalizing the p-adic unity was a big step for me towards comprehending TGD, and
breaking the barrier of the real line was necessary for that; what was a big step for me might
seem a small step for your infinite primes... :)
Anyhow, I remembered this poem written some time ago in a book with some p-adic page
numbers:
[In Finnish, roughly:]
The little one listens to space
where a satellite hunts the stars,
now loosened star-points
click onto willow leaves,
a distant tower fastens a ray
like a rope through the deep dark
where the strange creatures
of the deep sea
conjure their own light
loihtivat omaa valoa
***
Have you ever tried imagining the kaleidoscope (Greek for 'beautiful/good form
viewer') sea "1-adically"? Purely affine geometry comes formally closest to "1-adic".
To continue on the big picture, the internal Cauchy structures of rationals and roots
also converge in one and the same "point" or "number", called a 'fluxion' or 'infinitesimal', and
historically "the two wrongs that can make a right", as Berkeley quipped on Newton and
Leibniz, can be taken as an illogical or non-deductive heuristic that showed the path to the
discovery of the p-adic unity of the all-inclusive One. The whole narrative is a nice
representation of a "causal diamond" on the idea level. :)
The notion of "real numbers" in the form of Cauchy sequences is archimedean (and
they say the p-adic field is non-archimedean), but the root sequences converging to the
infinitesimal are "incomplete" only if by "axiom of choice" it is _decided_ that the geometry is
"Euclidean" in the sense of the fifth, affine/parallel axiom. One could just as well and even
better not make that choice, accept that the internal Cauchy sequences for rationals and
algebraic roots converging at the "infinitesimal" point at infinity are non-euclidean, and at least
look if they are 'complete' in some definable non-euclidean sense. It is one thing to
generate every possible combination of natural numbers and call that generative function
complete (or universal quadrance ;)), but it is quite another thing to call all the random noise
created by simple combinatorics "transcendentals" and "unknowables" even when there are no
algebraic functions or other generative algorithms involved.
I came up with a number theoretic metaphor for the n-slit quantum experiment: p-adic unity
as the source of light, and rationals as the cave entrances for cave man measurements of
"real" Cauchy particles on the cave wall... how do you like that picture? ;)
At 8:54 AM,
Anonymous said...
Heh. So what kind of trigonometry does a cat use to jump on the table? None; every cat knows
there are only isosceles triangles, they just heff their heads up and down to bend the cave
walls. :D
At 12:46 PM, Anonymous said...
So, Berkeley ("to be is to be perceived") was at the root of the Quantum Cat Jump in his
theory of sensing, stressing the p-adic ultrametric of touch and body-sense, and the "real" cave
mechanic metrics of seeing: http://www.iep.utm.edu/berkeley/. As in the ultrametric adele all
triangles are isosceles (again, cf. triangular numbers), by allowing the "real" Cauchy field aka
cave wall to bend e.g. by scaling Planck, also Castaneda's assemblage point
(http://www.prismagems.com/castaneda/donjuan8.html) can be given a mathematical
interpretation.
At 2:52 PM,
Anonymous said...
Posted to John Baez, cool to share also here:
A long time ago I started for some reason (platonic anamnesis?) meditating on 3n+1 in a
concentric way, doing Hilbert space meditation that way, and later I heard about the Collatz
conjecture aka the 3n+1 problem, and now I found that there is also a name for my meditation:
http://en.wikipedia.org/wiki/Dehn_plane
Imagining an open-ball-like space where perpendicular dimension lines (defined by
three points, "left" and "right" and "middle", where left and right cancel each other) all
meet in the same middle point suggests that the Collatz conjecture would also be easily
provable on the non-archimedean Dehn level.
I can't do formal proofs as I'm just an amateur philosopher of math, and google didn't
bring up anything on a Dehn approach to Collatz, so if this interests you, take a look.
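For reference, the Collatz 3n+1 iteration itself is only a few lines; a Python sketch (an empirical check for small starting values, of course not a proof, and not part of the original comment):

```python
def collatz_steps(n):
    # Number of 3n+1 / halving steps needed to reach 1,
    # assuming the conjecture holds for this starting value.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Empirical check only: every starting value up to 10^4 reaches 1.
for n in range(1, 10001):
    collatz_steps(n)
```

The step counts behave erratically, e.g. n = 27 famously takes over a hundred steps, which is part of what makes the conjecture hard.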
My own approach to math is basically hippy "spiritual", or more exactly body-sense
holography (gnothi seauton...;) ), and I'm very happy that we can say "One Heart" also in
the language of math: Dehn balls of Hilbert spaces in the adelic ultrametric space of all
p-adics combined, where every point of the ball is the center point. <3
At 3:17 PM, Anonymous said...
https://www.youtube.com/watch?v=vdB-8eLEW8g
At 6:00 PM, Anonymous said...
Matti, on a "practical" level: in the POSIX standard, "UNIX time" (also
known as POSIX time or Epoch time) is a system for describing instants in time, defined
as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time
(UTC), Thursday, 1 January 1970.
Let that simply be a point of reference relative to the time since the big bang
14.1413........... billion years...
Then the triangle is always two points of time with the "fixed point at the origin" being
the "big stretch" back at t=0, those being the 3 vertices of a 'triangle'. Could that be the
point at infinity added to the complex plane, leading to topological considerations?
--Stephen
At 8:48 PM, [email protected] said...
I would not like to bring in the complex plane. This triangle, almost degenerating to a long
line (the ratio of short and long sides is about 10^-9), is hyperbolic rather than an ordinary
one. [The Mobius transformations by the way generalise also to the hyperbolic plane: the
imaginary unit now satisfies e^2=1].
At 4:22 AM,
Anonymous said...
"The solutions of Dirac equation have 8-D light-like momentum assignable to the 1-D
curves..."
If physics reduces to number theory, and there is an 8-D limit of universal quadratic forms,
and rational number lines can curve...?
At 4:42 AM,
[email protected] said...
Quadratic forms make sense in any dimension. D=4 is exceptional in that with integer
coefficients the standard Euclidian length squared takes all integer values. I dare guess that
the Minkowskian variant also takes all values. This is the case.
From 8-D masslessness one would have
n_0^2 - n_1^2 - n_2^2 - n_3^2 = m_0^2 + m_1^2 + m_2^2 + m_3^2.
The right hand side has integer values, and I guess that so does the left hand side. When
one writes this in the form
n_0^2 = n_1^2 + n_2^2 + n_3^2 + m_0^2 + m_1^2 + m_2^2 + m_3^2
one realises that the condition is certainly satisfied: the LHS is a square of an integer, and
the 7-D length squared on the RHS takes all integer values.
The identification of mass squared as conformal weight requires it to have integer
spectrum, and this is true if both M^4 and E^4 length squared have integer valued
spectrum.
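The claim that the 7-D length squared takes every non-negative integer value already follows from Lagrange's four-square theorem: decompose the integer into four squares and pad with three zeros. A brute-force check (a Python sketch, illustrative only):

```python
from math import isqrt

def four_squares(n):
    # Lagrange's theorem: every n >= 0 is a sum of four squares.
    # Brute force over sorted representations a <= b <= c <= d.
    for a in range(isqrt(n) + 1):
        for b in range(a, isqrt(n - a * a) + 1):
            for c in range(b, isqrt(n - a * a - b * b) + 1):
                d2 = n - a * a - b * b - c * c
                d = isqrt(d2)
                if d * d == d2 and d >= c:
                    return (a, b, c, d)
    return None

# Hence the 7-D length squared takes every non-negative integer value:
# pad a four-square representation with three zeros.
for n in range(1000):
    q = four_squares(n)
    assert q is not None and sum(x * x for x in q) == n
```

This is why the masslessness condition above poses no number theoretic obstruction: any integer value of the M^4 mass squared can be matched on the E^4 side.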
At 2:39 AM,
Anonymous said...
Sorry, can't follow your argument. How does mass enter the picture, and what definition of
mass is used above - Newton, SR, GR, some quantum or TGD specific ("p-adic length"?)?
In the last case, could you please explain the TGD notion of mass in greater detail, or give a link
to the appropriate document?
General wisdom says that Minkowski is flat and does not do gravity, therefore GR
"curved spacetime" is needed. Here's a crazy idea. The threefold symmetry suggests that
to build the complex plane and Riemannian manifolds from a number system, e.g. natural
numbers, you need BOTH the negated quadratic form (cf. Minkowski) AND the doubled
quadratic form; from those "partonic" quadratic forms you get the positive euclidean quadratic
form and dimensions.
Very simply, e.g. in one dimension, if you only negate natural numbers without also
doubling them, you end up with no numbers or just zero, instead of integers. :)
To see this most basic relation in mathematical cognition, you need some triangular
point of observation: the atomistic infinitesimal, cf. Minkowski/Qr, and even more importantly
the inclusive adelic/infinite prime "point" of projective geometry. It is easy to see that the planar
geometries of Qr and Qg in Cartesian coordinates correspond to inner view and outer
view, and perhaps 3D gravity - the only one we know by experience - is no more complex
than the Euclidean special case of the 4D universal quadratic form.
Finite measurement resolution in terms of an adelic point of observation/projection would
most naturally be related to the largest natural prime known to us. Such finite measurement
resolution would also assign Cauchy expansions of transcendental numbers a finite length -
meaning that the largest known natural prime would have no measurable/computable internal
Cauchy in that base???!
At 2:51 AM,
Anonymous said...
"meaning that largest known natural prime would have no measurable/computable internal
Cauchy in that base???!"
-> Sorry, I was thinking about the internal Cauchy structure of pi in the base of the largest
known natural prime.
At 8:12 AM,
[email protected] said...
p-Adic mass calculations assume that mass squared is the thermal expectation value of
mass squared over states for which mass squared is, in suitable units, an integer n - the
conformal weight of the state by conformal invariance. p-Adically only an integer valued
spectrum actually makes sense.
p-Adic thermodynamics is highly unique if one assumes conformal invariance. Even
more, it very probably does not exist without conformal invariance implying integer
spectrum for conformal weights.
n=0 corresponds to the massless ground states, and extremely small contributions with
n=1,2,... make the expectation non-vanishing. The state has non-vanishing p-adic mass squared
which can be mapped to real mass squared. This is the TGD view about the Higgs mechanism.
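A toy illustration of the scheme (a Python sketch with hypothetical degeneracies g(n); in the actual p-adic mass calculations g(n) comes from counting states of conformal weight n, and the p-adic prime is physical, e.g. M_127 for the electron): the Boltzmann weight at p-adic temperature T=1 is p^n, and the p-adic expectation value is mapped to a real number by canonical identification, sum a_k p^k --> sum a_k p^(-k). The n=1 term dominates, so the real mass squared comes out close to (g(1)/g(0))/p.

```python
def canonical_identification(digits, p):
    # I: sum_k a_k p^k --> sum_k a_k p^(-k), mapping a p-adic number to a real.
    return sum(a * p ** (-k) for k, a in enumerate(digits))

def padic_digits(num, den, p, k):
    # First k digits of the p-adic expansion of num/den (p does not divide den).
    digits = []
    for _ in range(k):
        a = (num * pow(den, -1, p)) % p
        digits.append(a)
        num = (num - a * den) // p
    return digits

p = 23                  # hypothetical small stand-in for a physical p-adic prime
g = {0: 1, 1: 3, 2: 9}  # hypothetical degeneracies of conformal weights n = 0, 1, 2
num = sum(n * gn * p ** n for n, gn in g.items())  # numerator of the expectation <n>
den = sum(gn * p ** n for n, gn in g.items())      # p-adic partition function
digits = padic_digits(num, den, p, 5)
mass_squared = canonical_identification(digits, p)
# The n = 1 term dominates: real mass squared is close to g(1)/g(0) * 1/p.
assert abs(mass_squared - 3 / 23) < 0.05
```

With a large physical prime the higher terms are utterly negligible, which is why the predictions are so sharp: the mass scale is essentially fixed by the prime alone.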
At 8:14 AM,
[email protected] said...
One can look at the situation at two levels, imbedding space and space-time,
corresponding to the inertial-gravitational dichotomy behind the Equivalence Principle.
a) Let us look at it at the imbedding space level. In any case, one has p^2_inert = m^2 = n. The
inertial 4-momentum p = (p_0, p_1, p_2, p_3) has four components. p^2_inert is the inertial
mass squared. The formula p^2_inert - m^2 = 0 says that mass squared vanishes in the 8-D
sense. m^2 is the analog of CP_2 momentum squared: the eigenvalue of the color Laplacian.
The particle is massless in the 8-D sense. This is a generalisation of ordinary masslessness and
leads to an 8-D variant of twistorialization, making sense since M^4 and CP_2 are completely
unique twistorially.
b) One can look at the situation also at the level of the space-time surface. Actually we look
at the situation at the boundary of a string world sheet, identifiable as a fermion line in a
generalised Feynman/twistor/Yangian diagram. From the massless Dirac equation and its
bosonic counterpart for the induced spinor at the boundary of the string world sheet one
obtains that the boundary is a light-like geodesic in the 8-D sense and the 8-D momentum
associated with the fermion is light-like. Light-likeness in the 8-D sense says
p_gr^2 - p_E^4^2 = 0,
where the 4-momentum p_gr = (n_0, n_1, n_2, n_3) could be called gravitational momentum
and the E^4 vector p_E^4 = (n_4, n_5, n_6, n_7) could be called the E^4 4-momentum defining
the gravitational dual of color quantum numbers.
I finally get to your question:
p_gr^2 - (p_E^4)^2 = 0
gives the 8-D generalization of the quadratic form, and it allows solutions for all integer
values n = p_gr^2.
The correspondence between inertial and gravitational in the CP_2-E^4 case is not so
straightforward. It has a direct physical counterpart as the duality between descriptions of
hadrons using the SO(4) symmetry group and of partons using the color group SU(3): low
energy description vs. high energy description. This duality generalizes to the entire physics.
The Equivalence Principle says p = p_gr.
At 12:40 PM, Anonymous said...
As "adelic coupling constant is equal to unity" (http://arxiv.org/pdf/hep-th/0005200.pdf) and Archimedean geometry ends at the Planck scale, Egyptian fractions, and
especially Egyptian fractions modulo a prime, come to mind.
http://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Graham_problem
There are several interesting conjectures and proofs, e.g. the recently proven
http://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Graham_problem
has a connection to smooth numbers and the fast Fourier transform, while
http://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Straus_conjecture
remains unproven: "The restriction that x, y, and z be positive is essential to the
difficulty of the problem, for if negative values were allowed the problem could be solved
trivially."
There is still hope for finding proof in modular approach: "No prime number can be a
square, so by the Hasse–Minkowski theorem, whenever p is prime, there exists a larger
prime q such that p is not a quadratic residue modulo q. One possible approach to proving
the conjecture would be to find for each prime p a larger prime q and a congruence solving
the 4/n problem for n ≡ p (mod q); if this could be done, no prime p could be a
counterexample to the conjecture and the conjecture would be true." Could infinite primes be of
any help in this respect?
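The 4/n = 1/x + 1/y + 1/z statement can be explored with a small brute-force search; the following sketch (function name and search cap are my own, illustrative only) finds unit-fraction decompositions for small n:

```python
from fractions import Fraction

def erdos_straus(n, cap=200):
    """Search for positive integers x <= y <= z with 4/n = 1/x + 1/y + 1/z.
    Returns the first solution found within the search cap on x and y, or None."""
    target = Fraction(4, n)
    for x in range(1, cap):
        rest = target - Fraction(1, x)
        if rest <= 0:
            continue
        for y in range(x, cap):
            rest2 = rest - Fraction(1, y)
            if rest2 <= 0:
                continue
            # the remainder must be exactly a unit fraction 1/z
            if rest2.numerator == 1:
                return (x, y, rest2.denominator)
    return None
```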
At 4:40 PM, Anonymous said...
The more you look at QFT, the uglier it looks, especially when compared to pure math.
Stitches and patches of "length scales" in terms of orders of magnitude and
"renormalization groups". Maybe the wiki on 'length scale' is just horrible, but also the
concept itself does not make much sense, and with that card goes the house that it is
supposed to support. Perhaps there is some connection between the 15 degrees of freedom of
conformal symmetry and the 15-theorem that might guide us to the middle path between
philosophically unsound notions of "individual" particles and fields with infinite degrees
of freedom, but more likely the whole of QFT, in both its classical and algebraic versions, is
doomed from the beginning to remain a dead end. See e.g. this standard discussion:
http://plato.stanford.edu/entries/quantum-field-theory/
TGD contains novel, beautiful, and deep ideas, but as long as its (non)formulation is tainted
by confusing language and ill-defined concepts originating in QFT, reluctance to
explicate it with formal rigour is more than understandable; sadly, this does not give
much hope of being able to communicate what is good and true in TGD. Most importantly, to
our understanding there is, at the level of the TGD theory of consciousness, no real conflict between
platonist and fictionalist approaches to the philosophy of mathematics. As Einstein also
understood, when the math is beautiful enough, there is no question that the empirical
evidence could fail to respond to the beauty. :)
At 8:36 PM, [email protected] said...
QFT is an idealisation. Particles are made point-like and the many-sheeted,
topologically non-trivial space-time is replaced with empty Minkowski space. This is quite
a violent act from the TGD point of view. On the other hand, N=4 SUSY has led to the
discovery that scattering amplitudes might be extremely simple and obey quite unexpected
symmetries such as Yangian symmetry, so that this violent process seems to respect some of the
key aspects.
The notions of resolution and length scale are, in my view, fundamental. Their recent
formulation makes them look like ugly ducklings.
Here von Neumann's mathematical vision would help. Hyperfinite factors of type II_1
and their inclusion hierarchies would be the solution. A concrete representation would be in
terms of fractal hierarchies of breakings of conformal gauge symmetry assignable to the
supersymplectic and other key symmetry algebras of TGD possessing conformal
structure (generators labelled by conformal weight).
Also the hierarchy of Planck constants emerges in this manner, meaning a generalisation of
quantum theory and an understanding of dark matter.
I do not yet fully understand the relation to the p-adic length scale hierarchy and negentropic
entanglement, but it is certainly also very intimately connected to the other hierarchies.
At 8:38 PM, [email protected] said...
I mentioned that I am trying to improve my understanding of the relationship of the p-adic length scale hierarchy to that of Planck constants.
a) Negentropic entanglement corresponds to a density matrix which is a projector and thus
proportional to an m×m unit matrix. In ordinary quantum mechanics the reduction occurs to a
1-D subspace: the projector is 1-D and projects to a ray of Hilbert space. In TGD the projection
occurs to a subspace of arbitrary dimension.
When the dimension of the projector is larger than one, the reduced state is negentropically
entangled, and the number-theoretic entanglement negentropy, characterized by the largest
prime power dividing m, gives a negative entanglement entropy. Unitary entanglement
gives rise to this kind of situation in the 2-particle case. For systems with larger particle
number the conditions are more demanding.
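The number-theoretic negentropy described above can be made concrete for a projector with all entanglement probabilities equal to 1/m: one replaces log(p_i) by the log of the p-adic norm |p_i|_p, and the negentropy is maximal for the prime whose largest power divides m. A sketch under these assumptions (helper names are my own; logs are natural logs):

```python
from math import log

def p_adic_valuation(m, p):
    """Largest k with p**k dividing m."""
    k = 0
    while m % p == 0:
        m //= p
        k += 1
    return k

def negentropy(m, p):
    """Number-theoretic negentropy N_p = -S_p for probabilities p_i = 1/m:
    S_p = -sum_i (1/m) * log(|1/m|_p) = -k*log(p), so N_p = k*log(p)."""
    return p_adic_valuation(m, p) * log(p)

def maximizing_prime(m):
    """Prime p maximizing N_p, i.e. the one whose largest power p^k divides m."""
    primes = [p for p in range(2, m + 1)
              if m % p == 0 and all(p % q for q in range(2, int(p**0.5) + 1))]
    return max(primes, key=lambda p: negentropy(m, p))
```

For example, for m = 12 = 2^2 * 3 the prime 2 wins (negentropy 2*log 2 > log 3), while for m = 18 = 2 * 3^2 the prime 3 wins.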
Two systems with h_eff = n*h, and thus with n conformal equivalence classes of
connecting space-time sheets, could have this kind of entanglement: a quantum superposition
of pairs of sheets. Also quantum superpositions of pairs of *subsets* of space-time sheets
would give negentropic entanglement. The entanglement coefficients would form a unitary
matrix.
b) It would be very natural to assume that the p-adic prime characterizing the system is
the one giving rise to maximal entanglement negentropy. What looks like a problem is that
the physical values of p-adic primes are very large. For the electron, p = M_127 = 2^127 - 1 is of
order 10^38, and if the p-adicity for negentropic entanglement corresponds to that for the
electron, then n should have p as a factor! For m in living matter, values of order 10^12
appear, and this is not possible.
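For reference, p = M_127 = 2^127 - 1 can be verified to be prime with the standard Lucas-Lehmer test; the sketch below (not part of the original comment) also checks its order of magnitude:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff s_{p-2} == 0, where s_0 = 4 and s_{i+1} = s_i^2 - 2 (mod M_p)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

M127 = 2**127 - 1
assert lucas_lehmer(127)       # the electron's Mersenne prime is indeed prime
assert len(str(M127)) == 39    # of order 10^38
```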
This is a typical situation in which it is good to take a closer look. There are two
systems: the electron and the electron pair. They are not the same. And h_eff is associated with
the flux tubes connecting the two systems.
Hence one could argue that the p-adic prime characterizing the magnetic flux tubes
connecting two electrons is not the same as that characterizing the electrons but is much smaller.
This would solve the problem.
The p-adic prime characterizing the electron could be assigned to the magnetic flux tubes
connecting the wormhole throats of the two wormhole contacts making up the electron (or any
elementary particle). This n would be really large! As a matter of fact, this would mean that
the quantum phase q = exp(i2*pi/n) is very near to unity, and quantum quantum mechanics
is very much like quantum mechanics. Note that n=1 and n=infty at the opposite ends of the
spectrum give rise to ordinary quantum theory!
c) There is a further, poorly understood problem. n = h_eff/h would correspond to the
number of conformal equivalence classes of space-time surfaces connecting 3-surfaces at the
opposite boundaries of a CD.
In an ordinary classical n-furcation only a single branch is selected. In a second-quantized
n-furcation any number 0 < m <= n of branches can be selected: m such space-time
sheets would connect these 3-surfaces simultaneously. m would be like a particle number. For a
given m, h_eff would equal h_eff = m*h.
This could relate to charge fractionization. What values of m are allowed? For
instance, can one demand that m divides n?
At 9:06 AM,
Anonymous said...
I still don't comprehend the meaning - number theoretical structure - of 'p-adic length
scale' - does that just refer to exponential orders of magnitude of p-adic integers? Or does
the concept somehow relate to p-adic norm, p-adic distribution and p-adic measures?
(http://en.wikipedia.org/wiki/P-adic_distribution)
What is the ratio of a circle to its diameter in the adelic and p-adic context? Can it be given a
numerical value in base p, e.g. 5?
Searching on that topic, this came up:
"Pi is a troublesome number when one tries to generalize geometry to the p-adic context. If
one wants pi, one must allow an infinite-D (in the algebraic sense) extension of p-adic numbers,
meaning that all powers of pi multiplied by p-adic numbers are allowed. As such this is not a
catastrophe, but if one tolerates only algebraic extensions, then only the phases exp(i2pi/n)
make sense. Only phases, but not angles. Something deep physically (distance
measurement by interferometry)?
In a light-hearted mood, one might ask whether gravitation could save us from this trouble
and allow us to speak about the circumference of a circle also in the p-adic context. By replacing
the plane with a cone (this requires a cosmic string;-)), 2pi, defined as the ratio of the length of a
circle to its radius, becomes k*2pi and could therefore also be rational."
Maybe you recognize the writer. :)
Yes, the notions of length, area, volume, etc. are deeply physical, and it's easy to get
confused when trying to define e.g. area in terms of length. Reduction of areas to lengths
is not universal; the relation doesn't always commute.
The Dehn plane seems a very interesting way to think about and look at these matters, also in
terms of non-Euclidean pi and a possible "measure". And as the space/field of "p-adics and reals
glued together by _common rationals_" is by definition not "complete" in the usual Euclidean
sense (it generates only repetitive Cauchy sequences on both sides), it seems prudent to use the
notions of quadrance and spread (cf. inner products) instead of lengths and angles to think
about pi and the (pinary?) measures involved in this context.
Maybe there's a very good gut feeling behind the 'light-hearted' comment about gravity
saving the situation... ;)
At 4:34 PM, Anonymous said...
It seems that one of the main problems here is 'conformal' preserving of angles,
especially if we want to proceed from non-archimedean quantum theory, e.g. adelic
theory, to Archimedean classical metrics and measurements.
Transcendental angles do not solve the problem; rather, they create it. Replacing
conformality of angles with more general conformal spreads that get "quantum-numberish",
spin-like values from 0 to 1 could make much more sense and clear out much
of the linguistic and conceptual confusion.
At 6:19 PM, Anonymous said...
Matti, the well-defined notions of Pythagorean field and Pythagorean closure, which are
closely related to the TGD imbedding space, might help to find answers to your questions:
http://en.wikipedia.org/wiki/Pythagorean_field
Many hopefully useful remarks on the filtration of the Witt ring in relation to
Euclidean and Pythagorean fields can be found here:
http://math.uga.edu/~pete/quadraticforms2.pdf
At 9:03 PM, [email protected] said...
To Anonymous: The p-adic length scale involves a map of the p-adic mass squared to a real one,
mediated by the canonical identification
Sum_n x_n p^n --> Sum_n x_n p^(-n),
mapping p-adics to reals and vice versa.
The p-adic mass squared is of the form
M^2_p = n_1 p + n_2 p^2 + ... =about n_1 p
(because p^2 is very small in p-adic norm) and is mapped to the real mass squared
M^2_R = n_1/p + n_2/p^2 + ... =about n_1/p.
By the Uncertainty Principle, the real mass corresponds to a length scale h/M proportional to
sqrt(p) in suitable units.
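The canonical identification and the mass formula above can be sketched numerically; the following toy implementation (with a finite pinary cutoff, names my own) illustrates how M^2_p = n_1 p maps to approximately n_1/p:

```python
def base_p_digits(n, p, k=64):
    """First k base-p digits of a non-negative integer n = sum_i d_i p^i."""
    ds = []
    for _ in range(k):
        ds.append(n % p)
        n //= p
    return ds

def canonical_identification(n, p, k=64):
    """Canonical identification I: sum x_i p^i -> sum x_i p^(-i),
    mapping a p-adically large number to a real number of order 1/p or less."""
    return sum(d * p**-i for i, d in enumerate(base_p_digits(n, p, k)))

# M^2_p = n_1 * p (p-adically small correction dropped) maps to about n_1 / p:
p, n1 = 101, 7
assert abs(canonical_identification(n1 * p, p) - n1 / p) < 1e-12
```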
At 9:20 PM, [email protected] said...
To Anonymous: pi is well-defined in terms of geometry but becomes problematic in an
algebraic context unless one assumes real numbers.
In the p-adic context one could try to define 2*pi geometrically as the circumference of the
p-adic unit circle. The problem is that the p-adic unit circle is difficult to define!
Essentially the problem is that the notions of length, area, etc. are not possible to
define as purely p-adic notions. p-Adic physics is about cognition, not about the sensory
world, in which one can quantify in terms of lengths, etc. More concretely, length would
require an integral defining line length. A line should have ends, but a p-adic line does not have
ends.
Metric makes sense as a purely local notion also p-adically. In particular, the inner product
involving the *cosine* of an angle can be defined in terms of the p-adic metric. But not angles! The
simplest angles 2pi/n, assignable to roots of unity, do not exist p-adically unless one
introduces pi and its powers into the extension.
One can argue that we always measure length ratios (sines, cosines, tangents, inner products)
rather than angles directly. That is, sin(2*pi/n) etc. can be introduced via an algebraic
extension of p-adic numbers containing the n:th root of unity. This would mean discretisation
in angle degrees of freedom and finite measurement resolution in angle (or rather in phase
degrees of freedom).
This also allows one to define discrete Fourier analysis in angle degrees of freedom, giving
rise to the counterpart of integration. This is enough for doing physics if one accepts the
notion of finite measurement resolution, which emerges from basic TGD automatically.
Discretisation corresponds to the points of the partonic 2-surface at which string ends are
located. The localisation of spinors to string world sheets, forced by the well-definedness of
em charge, is behind this, as is the condition that the generalisation of twistor structure
requires equivalence of octonionic gamma matrices with ordinary ones. Very deep
mathematical connections with concrete physical meaning.
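A minimal sketch of such discrete Fourier analysis, built purely from powers of the n:th root of unity q = exp(i*2*pi/n) (only phases, never angles; the implementation over complex numbers is illustrative only):

```python
import cmath

def dft(f):
    """Discrete Fourier transform on Z_n using only powers of the
    n-th root of unity q = exp(i*2*pi/n)."""
    n = len(f)
    q = cmath.exp(2j * cmath.pi / n)
    return [sum(f[k] * q**(-j * k) for k in range(n)) for j in range(n)]

def idft(F):
    """Inverse transform: the discrete counterpart of integration."""
    n = len(F)
    q = cmath.exp(2j * cmath.pi / n)
    return [sum(F[j] * q**(j * k) for j in range(n)) / n for k in range(n)]
```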
At 9:35 PM, [email protected] said...
To Anonymous:
Conformal transformations preserve angles in a local sense: the cosine of an angle is
preserved. In the p-adic context the metric makes sense, and one can indeed define the cosine of
an angle. Therefore also the notion of a conformal transformation is well-defined.
The purely algebraic manner to define conformal transformations is as power series,
and this makes sense also p-adically.
One can say that p-adicization is possible for differential (purely local) geometry but
not for global geometry (lengths, areas, ...). The manner to overcome this is to understand
real and p-adic space-time surfaces as pairs. The p-adic preferred extremal is a cognitive chart
map of the real preferred extremal and vice versa (stating it loosely, a
representation of thought as action).
An interesting question is whether all quantum jumps/state function reductions could
actually occur from real to p-adic and vice versa. Matter to thought to matter to thought...
sensory-cognitive-sensory-cognitive. Cognition and intentionality would be the second half
of existence, present in all scales rather than only in the brain: the other half would be sensory
experience and action.
This picture would *force* finite measurement resolution number-theoretically, since
the transition amplitudes between different number fields should involve only points of
space-time which are in an algebraic extension of rationals, i.e. in the intersection of reality and
the p-adicities.
At the level of partonic 2-surfaces, the parameters defining the representation of the surface
would be in this algebraic extension.
At 9:44 PM, [email protected] said...
To Anonymous:
The Pythagorean field has escaped my lazy attention. If Pythagoras had had Wikipedia, it
would have saved the life of his pupil who started to produce strange talk about the diagonal
of the unit square.
A Pythagorean field is an extension of rationals which contains also square roots of integers.
In the real context an infinite number of square roots are added, so that the extension is
infinite-D.
In the p-adic context only very few square roots of integers in the range [1, p-1] are needed. I
still do not know whether sqrt(p) can be added. If so, then for p>2 the extension is 4-D and
for p=2 it is 8-D. During the first year of p-adic physics I wrote a little article about this and of
course wondered about the connection with TGD.
Penrose made a reference to this little article in his book. Penrose is a bold man! This is
one of the very few references to my work that colleagues have dared to make. The loss of
scientific reputation is an extremely infective disease and can kill within a few days;-)!
At 9:51 PM, Anonymous said...
Your hypothesis for p-adic thermodynamics is far from convincing. First of all,
according to local-global, there should be no problem with the whole Universe being
conscious - and also fully loving - and the counter-argument of "catastrophe for NMP"
sounds like a locally filtering psychological defense mechanism, or what Buddhists define
as 'ignorance' or 'ego'.
Especially unconvincing is the whole idea of canonical identification, as well as the
idea of p-adic logarithms, as p-adics by themselves are kinds of exponential inverse
functions of real Cauchy logarithms, and/or vice versa. So instead of a static canonical
identification, both Euclidean completions of rationals would rather pulsate like an oscillator,
not unlike a CD. :)
According to the elementary math of periodic functions, the length of the periodic phase of a
cosmic string is - for observers like us with our finite measurement resolution - the sum of
all p-adic primes up to the biggest Mersenne we have calculated. We can't calculate the
exact length, as there are many "betweens" that we don't know at the moment, but the
Riemann Hypothesis should give the rough idea.
And if we apply RH to primal temperature, doesn't that mean that the cosmic
thermodynamic loudness is exactly 1/2?! The metric measure between 0/cold and 1/hot
does not really matter; what is beautiful is that this sounds like a warm and homely octave. <3 :)
At the Middle Path point where h_eff evolves.
At 5:37 AM,
[email protected] said...
To Anonymous:
All of us have a right to opinions, even non-educated ones.
The predictions from the p-adic length scale hypothesis are impressive, and this is enough
for me. Especially so because one ends up with a beautiful vision about physics which
includes also cognition. Canonical identification (having a more refined version involving
cognitive/length scale cutoffs) appears naturally in the formulation of the p-adic manifold - or
rather the adelic manifold involving the real and various p-adic manifolds as an adelic structure.
I don't like bringing 'ego', 'ignorance', and similar rhetorical stuff into physical
argumentation. I have found that many people calling themselves spiritual use these coined
words as kinds of rhetorical weapons.
Neither do I want to base my thinking on empty statements such as "actually there is only
ONE conscious entity". The fact is that there is a multitude of conscious experiences,
maybe also entities, and the experience of unity is one particular experience. I want to
understand consciousness, not to cherish one particular kind of experience - that belongs to
priests.
I cannot understand what the rest of the argument means, since it is not formulated
with concepts that I am used to. I do not know what the motivation for introducing the
length of a cosmic string in this context is.
I can however tell that the p-adic logarithm is not an idea but a completely well-defined
and standard mathematical notion, existing for arguments having p-adic norm below unity.
I do not know what you mean by applying RH to primal temperature. You must have
some theory behind it, and TGD does not seem to be that theory.
At 9:58 AM,
Anonymous said...
Sorry, my way of learning TGD (or some emergent closely related theory) can be
bothersome, but please bear with me. Creative tension, aka inspiration, has also its
destructive side, as some opinions need to be cleared out to make room for forming new
opinions, which, if successful, lets us see also old opinions in a new light.
The ego-ignorance rhetoric was not an ad hominem - it goes for me as much as for anyone
- but intrinsic to a theory of consciousness, h_eff evolution, and Shannon entropy. Yes,
there is a multitude of conscious experiences and entities, and therefore the experience of unity
should not be a priori excluded, if and when we want to understand consciousness as
holistically as we can.
This is work in progress, as ideas keep revealing themselves, and now it feels that the first
idea worth mentioning is that in geometrical conscious experience the senses are not spatially
identical but can be assigned dimensional character: the first idea was taste and smell
without dimensional character, touch with point-like or tangential character, sight 2-D,
hearing 3-D, and proprioceptive etc. body-senses with n-dimensional character. The point of this is
that a TOE including consciousness cannot be based on only one sense's phenomenal
experience, e.g. sight, but needs to cover the whole range. Not only mental images but also
music, multidimensional feels, including feels of cold and warm, etc., are shared and
teleported via (neg)entropic entanglement, also without linguistic verification through a
classical channel. Much of this multitude of sentient experience stays locally
subconscious, filtered out from local conscious experiences by entropic filters, but
conscious experience can also disentangle from the filters and "expand". The "unity" of
experiences of unity can also have huge geometrical and scalar and sentient variety.
With this in mind, filtration on the Witt ring is a natural place to start to contemplate and feel
out filtering entanglements and re-member what is filtered into the subconscious. And in order to
get thoughts more organized, we need a finite global measure to compare and organize
local measures... hence a numerical integer value for the cosmic string/wave length.
Self-educated (in the full sense of TGD 'self' and beyond) intuitions don't equal non-educated
opinions, and that finite-valued cosmic string summing primes up to the largest
known Mersenne feels like the most natural candidate for the circumference of the p-adic
circle, which was searched for in the "light-hearted" comment above. This as such is not a
new idea in terms of TGD.
Expanding the circumference into a cone is the truly beautiful clue to beauty, when we
give the planar circle the geometric anatomy of a Dehn plane and identify the Dehn plane with
what you call the partonic 2-surface, meaning, if I understood correctly, the intersection of
adelic space and real-side Cauchy structures.
We can, I presume, imagine adelic unity as the perpendicular tip point of the cone
above the intersecting point of the Dehn plane where all dimension lines with rational
anatomy meet, and limit the number of intersecting dimension lines by the number of
primes up to the largest known Mersenne, so that we can draw connecting p-adic lines - which
can be interpreted directly as wormhole flux tubes - to the end points of the Dehn plane
dimension lines.
So, according to the Pythagorean theorem, the quadrance of a p-adic line can be expressed in
terms of the quadrance of the positive or negative side of the rational line, and the quadrance
between the Dehn plane meet point and the adelic point of unity. So now we have
non-Archimedean triangles of adelic space that can be expanded into, in some sense square,
"sheets" below the Dehn plane, in the "cave" of real extensions. With shifts between the two
notions of the Dehn plane and other actions, the n-furcating geodesic anatomy of the "cone
above" becomes very rich, and as the Dehn plane "filter" between above and below is rational,
all the mappings between above and below are so far finite repeating wavelengths. And if we
don't do a Euclidean completeness action below or above at this stage, to my understanding the
tip point of the cone below is the 'infinitesimal', and the surface of the cone below is
hyperbolic. And if and when we do a measurement in terms of the completeness theorem in the
cone below, it bursts open and scattering into particles happens.
And there is more, in the sense that 'less is more': assuming that the Collatz conjecture is true,
we can interpret the 3n+1 iteration as 3 spatial dimensions intersecting on the Dehn plane
and the +1 element as a temporal component or dimension, perpendicular to the Dehn plane. In
other terms, the circumference of the Dehn plane as shown above can be filled by the 3n+1
iteration, leading always back to unity, and these 3n lengths would also be equal to the three
components of an isosceles triangle. Hence, according to the Collatz conjecture, we can reduce
the cones to tetrahedrons or a bipyramid of 2 tetrahedrons.
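For concreteness, the 3n+1 iteration invoked here is the usual Collatz map; a small sketch (illustrative only) of the iteration that conjecturally always leads back to unity:

```python
def collatz_path(n):
    """Iterate the Collatz map n -> 3n+1 (n odd) or n -> n/2 (n even)
    until reaching 1; the Collatz conjecture says this always terminates."""
    path = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        path.append(n)
    return path

# every starting point up to 100 comes back to unity
assert all(collatz_path(n)[-1] == 1 for n in range(1, 101))
```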
There is also the inverse possibility to interpret 3n as temporal dimensions, in other words
observers moving up and down tetrahedral lines from unity to the rational Dehn plane, and +1
as the spatial dimension, the partonic 2-surface.
This is what the intention to understand TGD better and the ability to self-learn only very
simple and rational - but possibly not less high - math has produced up to this moment.
Thank you for your patience.
03/22/2015 - http://matpitka.blogspot.com/2015/03/cell-memory-and-magnetic-body.html#comments
Cell Memory and Magnetic Body
In Ulla's "Thinking Allowed" there was a very interesting link to a popular article telling about a
claimed discovery of the mechanism of cell memory. In the case considered, memory means the ability
of mother and daughter cells to remember what was, or what should be, their identity as highly
differentiated cells. In the division this information seems to be completely forgotten in the sense of
standard biochemistry. How it is regained: this is the problem!
Transcription factor proteins bound to DNA guarantee that the cell expresses itself according to its
differentiation. I have never asked myself what the mechanism of differentiation is (I should have a
proper emoticon to describe this unpleasant feeling)! Transcription factors are it: they guarantee that the
correct genes are expressed.
As the cell replicates, the transcription factors disappear temporarily but are restored in the mother and
daughter cells later. How this happens looks like a complete mystery in ordinary biochemistry, in
which one has a soup of all kinds of stupid molecules moving randomly around and making random
collisions with similar idiots;-).
This problem is much more general and plagues all of biochemistry. How do just the correct randomly
moving biomolecules of the dense molecular soup find each other - say, DNA and mRNA in the
transcription process, or DNA and its conjugate in replication?
The TGD-based answer is that reacting molecules are connected, or get connected, by rather long
magnetic flux tubes, which actually appear as pairs (this is not relevant for the argument). Then the
magnetic flux tubes contract and force the reacting molecules close to each other. The contraction of the
dark magnetic flux tube is induced by a reduction of the Planck constant heff=n×h: this does not occur
spontaneously, since one ends up at a higher criticality. This conclusion follows by accepting the
association of a hierarchy of criticalities with a hierarchy of Planck constants and a fractal hierarchy of
symmetry breakings for what I call the supersymplectic algebra, possessing a natural conformal structure and
a hierarchy of isomorphic sub-algebras for which the conformal weights of the original algebra are
multiplied by the integer n characterizing the sub-algebra. Metabolic energy would be needed at this stage.
As a matter of fact, the general rule would be that anything requiring a reduction of the Planck constant
demands metabolic energy. Life can be seen as an endless attempt to get back to higher criticality against
spontaneous drifting to lower criticality. Buddhists understood this a long time ago and talked about
Karma's law: the purpose of life is to keep heff low and fight with all means to avoid spiritual
awakening;-). In biological death we have again the opportunity to get rid of this cycle and get
enlightened. Personally, I do not dare to be optimistic;-).
In the case of cell replication, the transcription factors also replicate, just as the DNA does, but stay farther
away from the DNA during the replication: the value of heff has spontaneously increased during the replication
period, as happens when a conscious entity "dies";-). When the time is ripe, their heff is reduced and they
return to their proper place near the DNA, and the fight with Karma's law continues again. Note that this
also shows that death is not a real thing at the molecular level. The same should be true at higher levels of the
fractal self hierarchy.
posted by Matti Pitkanen @ 8:30 PM
14 Comments:
At 12:38 AM,
Ulla said...
"the purpose of Life is to keep heff low and fight with all means to avoid spiritual
awakening;-)."
I have been asking lately what the 'enlarged consciousness' so many talk of really
means. If you take it literally, it would mean a path away from knowledge, learned things,
education. Maybe that is what we see today in the world?
As a rule, consciousness cannot but diminish as we make cognates out of it.
'Enlightenment' is the same: we shed off old learnings and sins that have distorted our
minds, old memes disappear, etc. This is the essence of 'only a child can see heaven'.
Generally these words are used in the opposite meaning: as some 'knowing' state,
some 'truths', etc. This 'knowledge or truth' is however not learned but 'sensed', a bit along
the Sheldrake line of thought.
So, in essence, the 'enlarged consciousness' would be an enlarged Planck's constant. There
are some meditation techniques that can do this, in a way 'linking in to a bigger brain'. Also
Karma (old sins and distortions) belongs here :) The purpose would be to undo, to neutralize
these things to be able to learn the truth/get 'enlarged consciousness'? This is growing, one
purpose of evolution.
The messing up of consciousness and awareness is so common, and so hard to get right
back again.
At 4:46 AM,
[email protected] said...
Enlightenment means a way from what we call "learned" wisdom to personal
discovery and real understanding, or at least an honest attempt to understand. Learned things
are very often only behaviours that we learn to gain acceptance. No real understanding is
involved. Sad to say, most of the learning in academic environments is just this.
Saying it more quantitatively: large h_eff means also better measurement resolution
and cognitive resolution. Maybe what is lost is concreteness, when state function
reductions do not occur and one has negentropic entanglement representing abstractions.
One must give something away to achieve it, and letting go to larger h_eff means loss
of the earlier ego - biological death at the extreme. The paradoxical thing is that the increase of
h_eff tends to occur spontaneously - assuming that my interpretations are correct: but do
not take it as the word of God!;-) And even Gods cannot be trusted!
At 3:27 AM,
Anonymous said...
What the heff?!
http://www.urbandictionary.com/define.php?term=heff
At 4:58 AM,
[email protected] said...
heff was not the meaning I had in mind originally;-). I was not quite sure whether the
hierarchy is real or not, and I decided to be cautious about the possibly existing remnants of my
academic reputation, so I added the subscript _eff, for "effective". As a matter of fact, the
interpretations as a real thing and as an effective thing are both possible.
At 6:32 PM, Anonymous said...
so are the cells Markovian or not? :)
http://www.worldscientific.com/doi/abs/10.1142/S0219477513400026
At 7:04 PM,
[email protected] said...
The abstract talks about a Markovian process with long-term memory. I understand this as the
presence of long-range correlations in time. This is one manner to give some content to the
word "memory". Memories can also be identified as learned skills or as associations, as
behaviourists do. There are also episodic memories: this is what memory in layman
language often means. Neither Markov nor behaviourism can give much here.
A Markovian process is a very special kind of statistical modelling tool assuming
randomness at the bottom. What I am talking about is something much more general. For
me the cell is a conscious, intentional agent rather than a Markovian random walker. By
answering yes or no I would do violence to the cell;-).
At 5:37 AM,
Ulla said...
http://www.scientificamerican.com/article/memories-may-not-live-in-neurons-synapses/
At 6:14 AM,
[email protected] said...
This is a very interesting finding. Many fresh ideas about memory are gradually entering
the science market.
My own view is that the accepted view about memories is fatally wrong. The identification of
experienced time and geometric time is the fatal error. Even a child realises that these two
times are not one and the same thing, but who would manage to tell this to an academic
scientist?
Zero Energy Ontology allows one to see brain and body as 4-dimensional objects, and
memory recall would be communication with the geometric past. Recall Libet's findings.
The 4-D brain also allows one to see behaviours - often erratically identified as memories - as 4-D patterns of the magnetic body, which develop in the 4-D sense in the state function reduction
sequence.
Replication of the magnetic body would mean replication of behaviours, and there is
support for this from what happens to split flatworms: both halves have the learned
behaviours, not only the half which has the original head.
In the TGD framework any pattern in principle defines a memory. This does not mean that
synaptic contacts would not be crucial for generating the association sequences making
possible behavioural patterns: I would however not regard them as genuine memories.
The braiding of flux tubes could be the really fundamental memory representation,
since it would transform temporal "dancing" patterns of the lipids of the cell membrane to spatial
braiding patterns of flux tubes. Dancers with feet connected to the wall is the metaphor.
At 2:18 PM, donkerheid said...
Dear Matti,
You said: "One must give something away to achieve it and letting it go to larger h_eff
means loss of the earlier ego - biological death at extreme. The paradoxical thing is that
increase of h_eff tends to occur spontaneously - assuming that my interpretations are
correct."
I guess this must also apply to the process of real understanding and gathering of wisdom
that you mention in the previous paragraph. It also seems like reaching the higher spiritual
stations that mystics talk about. How could this be spontaneous? It takes a lot of effort to
kill a former self and transcend it.
At 7:22 PM, Anonymous said...
Change is constant, so it takes a lot of effort to hold on to inertia. Also the inverse, trying
to "kill a former self", just feeds it negative energy. Letting go and acceptance happen
spontaneously.
At 8:35 PM, Anonymous said...
Non-markovianity and information flow:
https://www.youtube.com/watch?v=8cPXdf5D_LA
At 9:05 PM, [email protected] said...
To Donkerheid: I agree. I know from experience how difficult it is to give up, to throw
a dead idea into the paper basket, and there is always the risk that it is a real idea after all. I am
just now doing this kind of spring-cleaning, going through all the online books about TGD
proper.
All kinds of trash hides the deep ideas; I get angry at myself, and still it is so difficult
to just throw them away. But again I discovered that an old idea, about which I wrote an
enthusiastic chapter years ago and then threw away, has made a comeback, and now I know
that it will stay!
Scattering amplitudes as computations of minimal length connecting initial and final
collections of algebraic objects in some algebraic structure, with product and its time reversal
(co-product) representing 3-vertices. An extremely beautiful and elegant idea giving a
mathematical realisation of the Universe-as-computer idea. Universe as a supersymplectic
Yangian algebraist would be a more precise formulation!;-)
At 2:45 PM, Anonymous said...
Matti my friend, I know the feeling. The first rule and practice of writer/poet schooling is
the maxim "kill your darlings". And of course, that practical guidance is not absolute, just
relative and practical.
At 2:36 PM, Ulla said...
Donkerheid,
I guess you talk of 'getting knowledge from above', without reading books, that you
just 'know'. Through the window, as Jung talked of.
It is spontaneous and does not include effort, not much, and if it does, it is a cleaning
process aiming to find the 'real' self.
Is this the same as intuition?
03/22-2015 - http://matpitka.blogspot.com/2015/03/second-quantisation-of-kaction.html#comments
Second quantisation of Kähler-Dirac action
Second quantization of the Kähler-Dirac action is crucial for the construction of the Kähler metric of
the world of classical worlds (WCW) as anticommutators of gamma matrices identified as super-symplectic Noether
charges. To get a unique result, the anticommutation relations must be fixed uniquely. This has turned
out to be far from trivial.
The canonical manner to second quantize fermions identifies the spinorial canonical momentum
density as Πbar = ∂L_KD/∂(∂tΨ) = Ψbar Γt, and similarly for its conjugate. The vanishing of the
Kähler-Dirac gamma matrix Γt at points where the induced Kähler form J vanishes can cause problems,
since the anti-commutation relations are no longer internally consistent there. This led me to give up
canonical quantization and to consider various alternatives consistent with the possibility that J vanishes.
They were admittedly somewhat ad hoc. Correct (anti-)commutation relations for the various fermionic
Noether currents seem however to fix the anti-commutation relations to the standard ones. It seems that
it is better to be conservative: the canonical method is heavily tested and turns out to work quite nicely.
Consider first the 4-D situation without the localization to 2-D string world sheets. The canonical
anti-commutation relations would state {Πbar, Ψ} = δ3(x,y) at the space-like ends of the space-time
surface at either boundary of CD. At points where J, and thus the K-D gamma matrix Γt, vanishes, the
canonical momentum density vanishes identically and the equation seems to be inconsistent.
If fermions are localized at string world sheets, assumed to always carry a non-vanishing J at their
boundaries at the ends of space-time surfaces, the situation changes since Γt is non-vanishing. The
localization to string world sheets which are not vacua saves the situation. The remaining problem is that
the limit at which the string approaches a vacuum could be very singular and discontinuous. In the case of
elementary particles the strings are associated with flux tubes carrying monopole fluxes, so that the problem disappears.
It is better to formulate the anti-commutation relations for the modes of the induced spinor field. By
starting from {Πbar(x), Ψ(y)} = δ1(x,y), contracting with Ψ(x) and Π(y), and integrating, one obtains,
using the orthonormality of the modes of Ψ, the result {b†m, bn} = γ0 δm,n holding for the modes with
non-vanishing norm. At the limit J→0 there are no modes with non-vanishing norm, so that one avoids the
conflict between the two sides of the equation.
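The contraction step can be spelled out. The mode expansion and the normalization convention below are my guesses at what is intended here, so this is only a sketch:

```latex
% Assumed mode expansions at the space-like end of the string:
%   \Psi(y) = \sum_n b_n u_n(y), \qquad
%   \overline{\Pi}(x) = \sum_m b_m^\dagger\, \overline{u}_m(x)\,\Gamma^t .
% Inserting these into the canonical relation gives
\begin{equation}
  \{\overline{\Pi}(x),\Psi(y)\} = \delta^1(x,y)
  \;\Longrightarrow\;
  \sum_{m,n} \{b_m^\dagger, b_n\}\, \overline{u}_m(x)\,\Gamma^t\, u_n(y)
  = \delta^1(x,y) .
\end{equation}
% Contracting both sides with the modes, integrating over the string, and
% using the assumed orthonormality for modes of non-vanishing norm,
\begin{equation}
  \int \overline{u}_m(x)\,\Gamma^t\, u_n(x)\, dx = \gamma^0\,\delta_{m,n} ,
\end{equation}
% one arrives at the relation quoted in the text:
\begin{equation}
  \{b_m^\dagger, b_n\} = \gamma^0\,\delta_{m,n} .
\end{equation}
% At the limit J -> 0 no modes of non-vanishing norm remain, so both sides
% vanish identically and no inconsistency arises.
```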
Quantum deformation introducing braid statistics is of considerable interest. Quantum deformations
are an essentially 2-D phenomenon, and the condition that quantum deformation indeed occurs gives further
strong support for the localization of spinors at string world sheets. If the existence of anyonic phases is taken
completely seriously, it supports the existence of the hierarchy of Planck constants and the TGD view about
dark matter. Note that localization also at partonic 2-surfaces cannot be excluded yet.
I have wondered whether quantum deformation could relate to the hierarchy of Planck constants in
the sense that n = heff/h corresponds to the value of the deformation parameter q = exp(i2π/n). The quantum
deformed anti-commutation relations b†b + q^(-1) bb† = q^(-N) are obtained by posing the constraint that the
eigenvalues of b†b and bb† are N_q and (1-N)_q. Here N = 0, 1 is the number of fermions in the mode (see this).
The modification to the recent case is obvious.
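To unpack "obvious": acting with the deformed relation on the two states of a single fermionic mode (b|0> = 0, b†|1> = 0) fixes the remaining eigenvalues. The computation below uses my reading of the relation; the q-number convention of the cited reference may differ:

```latex
% Deformed relation and deformation parameter:
%   b^\dagger b + q^{-1}\, b\, b^\dagger = q^{-N}, \qquad
%   q = e^{\,i 2\pi/n}, \qquad N \in \{0,1\}.
\begin{align}
  N=0:&\quad \underbrace{b^\dagger b}_{=\,0 \text{ on } |0\rangle}
        + q^{-1}\, b\, b^\dagger = q^{0} = 1
        \;\Longrightarrow\; b\, b^\dagger\big|_{N=0} = q ,\\[2pt]
  N=1:&\quad b^\dagger b
        + q^{-1}\underbrace{b\, b^\dagger}_{=\,0 \text{ on } |1\rangle} = q^{-1}
        \;\Longrightarrow\; b^\dagger b\big|_{N=1} = q^{-1} .
\end{align}
% For n \to \infty one has q \to 1, both eigenvalues reduce to the ordinary
% fermionic value 1, and the standard anticommutator \{b, b^\dagger\} = 1
% is recovered.
```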
posted by Matti Pitkanen @ 4:40 AM
3 Comments:
At 2:11 PM, Anonymous said...
Matti, can you talk about "Huygens principle" in TGD?
http://www.mathpages.com/home/kmath242/kmath242.htm
I see this post from http://matpitka.blogspot.com/2011/11/as-i-told-already-earlier-ullasend-me.html which got me all excited because of http://arxiv.org/abs/1501.01650 .
"We analyze the implications of the violations of the strong Huygens principle in the
transmission of information from the early universe to the current era via massless fields.
We show that much more information reaches us through timelike channels (not mediated
by real photons) than it is carried by rays of light, which are usually regarded as the only
carriers of information. "
a totally new kind of "two-way radio?" ? :)
--Stephen (that was me that asked about Markov... yes)
Matti, why has someone not consulted you about LHC happenings? Well, I guess the
machine is pretty boring, massive and noisy anyway... who cares
At 11:17 PM,
[email protected] said...
Dear Stephen,
I answered the question about Huygens in a blog post since the answer was too long to
serve as a comment. In my opinion timelike channels also provide information in the
sense that ordinary stable and massive particles propagate along them. It is difficult to say how
significant this information is.
I learned from some source that wave equations in a curved background indeed predict
that signals propagate classically inside the future light-cone rather than along its boundary:
this can be understood perturbation-theoretically by treating the deviation from the flat metric
as a perturbation serving as a source of secondary waves.
At 11:21 PM, [email protected] said...
Dear Stephen,
concerning your question about LHC and missing consultations. CERN is an
institution, a very big institution, and it is well known that the intelligence of an
organism/institution is inversely proportional to its size - the dinosaur effect. Academic
researchers are willing to use their brains only if they do not get research money. CERN
and also the academic institutions have a clearly formulated policy concerning academic
dissidents: no contacts with those who are academic outsiders. Pretend that they do not
exist.
I am certainly such an outsider. Even worse: I am a danger to the respectability of the
academic community, since I have been working for almost forty years without a single
coin of research money, and, as is now clear, the work has been highly successful while
the entire GUT-supersymmetry-superstring paradigm has suffered a monumental and
humiliating failure. The only manner to deal with this kind of harmful individual is
zombieing. One Finnish colleague formulated this policy in layman terms: Finnish
physicists would not touch my work even with a long stick.
03/22/2015 - http://matpitka.blogspot.com/2015/03/what-tgd-is-and-what-it-is-not.html#comments
What TGD is and what it is not
People (in particular those in Academia) tend to see TGD from their own perspective, often defined by
heavy specialization in a rather narrow discipline. This is of course understandable but often leads to
rather comic misunderstandings and considerable intellectual violence, and my heart cries when I
see how brilliant ideas bleed in the heavy grasp of big academic hands. The following is a humble
attempt to express concisely what TGD is not and what new TGD can give to physics - just to avoid
more violence.
1. TGD is not just General Relativity made concrete by using imbeddings: the 4-surface property is
absolutely essential for unifying standard model physics with gravitation. The many-sheeted
space-time of TGD gives rise to the GRT space-time, a slightly curved Minkowski space, only at
the macroscopic limit. TGD is not a Kaluza-Klein theory, although color gauge potentials are
analogous to gauge potentials in these theories. TGD is not a particular string model, although
string world sheets emerge in TGD very naturally as loci for spinor modes: their
2-dimensionality makes possible, among other things, the quantum deformation of quantization known
to be physically realized in condensed matter and conjectured in the TGD framework to be crucial
for understanding the notion of finite measurement resolution. TGD space-time is 4-D and its
dimension is due to the completely unique conformal properties of 3-D light-like surfaces implying an
enormous extension of the ordinary conformal symmetries. TGD is not obtained by performing a
Poincare gauging of space-time to introduce gravitation.
2. In the TGD framework, the counterparts of also the ordinary gauge symmetries are assigned to the
super-symplectic algebra, which is a generalization of Kac-Moody algebras rather than a gauge algebra
and suffers a fractal hierarchy of symmetry breakings defining a hierarchy of criticalities. TGD is not one
more quantum-field-theory-like structure based on the path integral formalism: the path integral is
replaced with a functional integral over 3-surfaces, and the notion of classical space-time becomes
an exact part of the theory. Quantum theory becomes formally a purely classical theory of WCW
spinor fields: only state function reduction is something genuinely quantal.
3. TGD is in some sense an extremely conservative geometrization of entire quantum physics: no
additional structures such as torsion or gauge fields as independent dynamical degrees of
freedom are introduced: Kähler geometry and the associated spinor structure are enough. Twistor
space emerges as a technical tool, and its Kähler structure is possible only for H = M4×CP2. What
is genuinely new is the infinite-dimensional character of the Kähler geometry, making it highly
unique, and its generalization to p-adic number fields to describe correlates of cognition. Also the
hierarchies of Planck constants heff = n×h and of p-adic length scales, and Zero Energy Ontology,
represent something genuinely new.
posted by Matti Pitkanen @ 4:13 AM
8 Comments:
At 7:47 AM,
Hamed said...
Dear Matti,
I am sorry if my recent questions in the blog or email contain misunderstandings from my
perspective. I can't prevent myself from asking about another misunderstanding:) in the following
from your last answer:
You noted: "E-M charge relates to CP_2 holonomy rotations and phase rotations of
spinors acting as symmetry of Kahler-Dirac action."
Take an object in your hand; this object is a 3-surface in M4×CP2. One can define
induced spinors on this 3-surface and ask about the corresponding charge. What is this charge?
I guess this charge at this level is just its mass. If this is not correct, what is this charge?
Also there is a hierarchy of hydrodynamics: the first is at the level of elementary particles and the
higher is at the level of atomic particles. Do the EM charge at the first level and the mass at the second level
have similar roles?
"All induced gauge fields are expressible in terms of imbedding space coordinates and for
space-time surfaces representable as maps from M^4 to CP_2"
What are these induced gauge fields at the level of macroscopic objects? One can ask: are
they just gravitation and gravitomagnetic and gravitoweak(!) fields? It would be very attractive if
the Moon were just a (gravito)neutrino for the Earth!!!! with much lower mass than the Earth.
Maybe the Moon has a similar role for the Earth as the neutrino has for the electron!
Best,
Hamed.
At 7:11 PM, [email protected] said...
Dear Hamed, no reason to be sorry!! I am happy that there are people with the courage to
ask questions and really try to learn about TGD rather than trying to violently force it into
some existing paradigm! This is intellectual integrity and honesty.
Concerning your question. In the micro-scale, spinors are localised at strings connecting
wormhole throats. The electromagnetic charge is a quantum number of spinor modes and is
well-defined just because of this localisation to string world sheets, which carry vanishing W
boson fields that would otherwise mix different charge states, say electron and neutrino.
One can also write explicitly the Noether charge as an integral over the string of an operator
bilinear in the second quantized spinor field at the string.
To get the Noether current, one takes the partial derivative of the K-D action with respect to Psi
and multiplies it by the change of Psi under the gauge rotation, which here reduces to a phase
multiplication.
At 7:18 PM, [email protected] said...
Dear Hamed,
concerning your second question about induced gauge field.
With our recent technology we are not able to detect what happens at the level of a
single space-time sheet. We see the sheets only through anomalies, such as the several neutrino
speeds in the case of SN1987A. We of course see the many-sheetedness directly as the boundaries
of physical objects, but this is not a measurement!
We must treat the sheets as a single sheet, and even forget CP_2 and approximate space-time as a
slightly curved Minkowski space carrying gravitational and gauge fields.
The gravitational field is the sum of the deviations of the induced metric from the Minkowski
metric at the individual sheets. Gauge potentials are sums of the induced gauge potentials at the
sheets. Just this summation makes it possible to treat them as independent fields if the number of
sheets is large enough. The approximate description is in terms of the standard model in M^4.
This representation can be understood from what happens to a test particle touching all
these sheets simultaneously: it experiences the sum of the gravitational and gauge forces from the
different sheets. At the fundamental level only effects superpose; in the GRT-gauge theory
description fields superpose.
At 7:19 PM, [email protected] said...
After this replacement of the many-sheeted space-time, everything is just what colleagues are
doing at CERN! Very conservative. Most exotic effects disappear. No hope anymore of
understanding what living matter is, etc. Too much has been lost!
At 12:39 PM, Hamed said...
Dear Matti,
Thanks for the answers.
We have a hierarchy of dark matter bodies. Do the bodies become larger and larger
in each corresponding quantum jump? When we were born, this hierarchy of our bodies
started to propagate from us. "I" refers to the whole hierarchy of bodies (containing our
light body and the hierarchy of dark bodies). Our light body is going very fast (velocity of
light); what is the velocity of the dark bodies?
Now, many years after our birth, our bodies have cosmological sizes. If we are
conscious through them, we must be conscious of cosmological phenomena. We see stars in
the sky; does this mean our light body (not dark body) is just at the size at which we see the
stars? Do our dark bodies have smaller sizes?
At 7:51 PM, [email protected] said...
Macroscopic physics of everyday life seems to demonstrate the presence of dark
matter, as it also demonstrates directly the many-sheeted space-time - we really see it. To
avoid misinterpretation I want to emphasize that many-sheeted space-time is in question
and the magnetic body is the key notion: darkness (large h_eff etc.) is associated with it!
The lengths of the magnetic flux tubes and of the strings associated with them define the distances
between the composite particles of a dark body. For instance, gravitational bound states, which have
no perturbative quantum description in string models because the coupling parameter GMm is
much larger than unity in macroscopic scales, are possible because of dark gravitons at flux tubes
mediating the gravitational interaction.
Without them only Planck-length sized bound objects would be possible, since the string
tension connecting the objects - partonic 3-surfaces in TGD - would be 1/(hbar*G) in string
models, where the string tension is fixed. This breaks the basic rule of a good theory: no purely
geometric dimensional parameters!
In the TGD framework the string tension associated with the effective area defined by the
anticommutators of the Kähler-Dirac gamma matrices scales as 1/h_eff^2, and the size of
bound states scales like h_eff. h_eff = h_gr = GMm/v_0 gives a correct estimate for the size
of gravitationally bound states as a 1/v_0 multiple of the Schwarzschild radius. Gravitation is
indeed a macroscopically quantum coherent phenomenon: one example is the superfluid
fountain effect.
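The h_gr = GMm/v_0 estimate can be turned into numbers. With hbar_gr in place of hbar, the analog of the Bohr radius, a_gr = hbar_gr^2/(G M m^2) = GM/v_0^2, is independent of the particle mass m and can be expressed through the Schwarzschild radius r_S = 2GM/c^2. The value v_0 ≈ c/2048 below is an assumption borrowed from Nottale-type fits used elsewhere in TGD writings, not from the text above, so this is only an order-of-magnitude sketch:

```python
# Toy estimate: gravitational Planck constant hbar_gr = G*M*m/v0 implies a
# mass-independent "gravitational Bohr radius"
#   a_gr = hbar_gr^2 / (G*M*m^2) = G*M / v0^2 .
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

# Assumed value of the velocity parameter (Nottale-type fit), not from the text.
v0 = c / 2048.0

a_gr = G * M_sun / v0**2              # gravitational Bohr radius for the Sun
r_s = 2 * G * M_sun / c**2            # Schwarzschild radius of the Sun, ~3 km
a_gr_from_rs = (c / v0)**2 * r_s / 2  # same quantity expressed through r_s

print(a_gr / AU)  # roughly 0.04 AU
```

For the Sun this evaluates to roughly 0.04 AU, the scale from which Nottale-type models build the inner planetary orbits as r_n = n^2 a_gr.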
One could say that the flux tubes mediating interactions can be dark, and if they carry
monopole flux they must have a large enough Planck constant.
At 8:10 PM, [email protected] said...
Dear Hamed,
evolution of an organism would spontaneously tend to increase the value of h_eff at a given
level of the hierarchy of flux tubes. Flux tubes tend to get longer.
A good example comes from molecular biology. In cell replication DNA gets
replicated. The mystery is why the differentiation of the cell is inherited by the daughter
cell. The proteins responsible for the differentiation are transcription factors: they disappear
during the replication. Somehow they emerge later in both the daughter and the mother cell.
The explanation is simple: the magnetic flux tubes connecting the transcription factors to DNA
get longer in the spontaneous increase of h_eff during the replication. Later, metabolism forces them
back to a higher criticality: h_eff is reduced and the transcription factors are there again. The same
mechanism is a universal mechanism of catalysis: it brings the reactants at the ends of flux
tubes near to each other.
At 8:12 PM, [email protected] said...
This tendency is visible also during the evolution of the individual. h_eff increases, and
thinking and experiencing become more and more abstract. A theatre piece does not have
the same effect as it had at a young age, since one has seen so many of them and most of it is
predictable. "Nothing new under the Sun".
Our many-sheeted bodies also have space-time sheets of astrophysical size: we are
connected by gravitation-mediating flux tubes to arbitrarily distant partonic 2-surfaces in
the cosmos.
Of course, also standard physics would say that our bodies generate fields extending to
arbitrarily long distances.
What is new is the many-sheeted description of fields in terms of flux tubes and the
hierarchy of criticalities - the h_eff hierarchy and the hierarchy of broken super-symplectic
symmetries. This quantum critical Universe makes life possible.
We must only learn to think in terms of many-sheeted space-time instead of the space-time
we have got used to, which is somewhat boring in its topological dullness. After
this the Universe is again that magic place that it was before we started to study academic
physics and began to learn arrogant "nothing but"s;-).
03/20/2015 - http://matpitka.blogspot.com/2015/03/could-universe-be-doing-yangianquantum.html#comments
Could the Universe be doing Yangian quantum arithmetics?
One of the old TGD-inspired really crazy ideas about scattering amplitudes is that the Universe is doing
some sort of arithmetics, so that scattering amplitudes are representations of computational sequences of
minimum length. The idea is so crazy that I have even given up its original form, which led me to an
attempt to assimilate the basic ideas about bi-algebras, quantum groups, Yangians and related exotic
things.
The work with the twistor Grassmannian approach inspired a serious reconsideration of the original idea,
with the thought that the super-symplectic Yangian could define the arithmetics. I try to describe the
background, motivation, and the ensuing reckless speculations in the following.
Do scattering amplitudes represent quantal algebraic manipulations?
1. It seems that tensor product ⊗ and direct sum ⊕ - very much analogous to product and sum but
defined between Hilbert spaces rather than numbers - are naturally associated with the basic
vertices of TGD. I have written a chapter about this - highly speculative both mathematically and
physically.
1. In the ⊗ vertex a 3-surface splits into two 3-surfaces, meaning that the 2 "incoming"
4-surfaces meet at a single common 3-surface and become the outgoing 3-surface: 3 lines of a Feynman
diagram meeting at their ends. This has a lower-dimensional shadow realized for partonic
2-surfaces. This topological 3-particle vertex would be a higher-D variant of the 3-vertex of
Feynman diagrams.
2. The second vertex is the trouser vertex for strings, generalized so that it applies to 3-surfaces.
It does not represent particle decay as in string models but the branching of the particle
wave function, so that the particle can be said to propagate along two different paths
simultaneously. In the double slit experiment this would occur for the photon space-time
sheets.
2. The idea is that the Universe is doing arithmetics of some kind in the sense that the particle 3-vertex in
the above topological sense represents either multiplication or its time reversal, co-multiplication.
The product (call it •) can be something very general, say an algebraic operation assignable to some
algebraic structure. The algebraic structure could be almost anything: a random list of structures popping
into mind consists of group, Lie algebra, super-conformal algebra, quantum algebra, Yangian, etc. The
algebraic operation • can be group multiplication, Lie bracket, its generalization to the super-algebra level,
etc. Tensor product, and thus linear (Hilbert) spaces, are always involved, and in the product operation the
tensor product ⊗ is replaced with •.
1. The product Ak⊗Al → C = Ak•Al is analogous to a particle reaction in which the particles Ak and Al
fuse to the particle C = Ak•Al. One can say that ⊗ between the reactants is transformed to • in
the particle reaction: a kind of bound state is formed.
2. There are very many pairs Ak, Al giving the same product C, just as a given integer can be decomposed
in many manners into a product of two integers if it is not a prime. This of course suggests that
elementary particles are the primes of the algebra, if this notion is defined for it! One can use some
basis for the algebra, and in this basis one has C = Ak•Al = fklm Am, where the fklm are the structure constants
of the algebra and satisfy constraints. For instance, associativity A(BC) = (AB)C is a constraint
making the life of the algebraist more tolerable and is almost routinely assumed.
For instance, in the number theoretic approach to TGD, associativity is proposed to serve as a
fundamental law of physics and allows one to identify space-time surfaces as 4-surfaces with
associative (quaternionic) tangent space or normal space at each point of the octonionic imbedding
space M4×CP2. Lie algebras are not associative, but the Jacobi identities following from the
associativity of the Lie group product replace associativity.
3. The co-product can be said to be the time reversal of the algebraic operation •. The co-product can be
defined as Ak → ∑lm fklm Al⊗Am, in which one has a quantum superposition of
final state pairs which can fuse back to Ak (Al⊗Am → Ak = Al•Am is possible). One can say that • is replaced
with ⊗: the bound state decays to a superposition of all pairs which can form the bound state via the
product vertex.
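As a concrete toy for the • and ⊗ bookkeeping above, take su(2) with the Lie bracket as the product, so that the structure constants are the Levi-Civita symbols, f_klm = ε_klm. This is of course only an illustration of the vertex structure; the super-symplectic algebra of TGD is infinite-dimensional:

```python
# Structure constants of su(2): f_klm = epsilon_{klm} (Levi-Civita symbol).
def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1.0
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1.0
    return 0.0

def product(k, l):
    """Product vertex A_k (x) A_l -> sum_m f_klm A_m, as {m: coefficient}."""
    return {m: eps(k, l, m) for m in range(3) if eps(k, l, m) != 0.0}

def coproduct(m):
    """Co-product vertex A_m -> sum_kl f_klm A_k (x) A_l, as {(k, l): coeff}."""
    return {(k, l): eps(k, l, m) for k in range(3) for l in range(3)
            if eps(k, l, m) != 0.0}

# Product: A_0 (x) A_1 fuses to A_2 (since [e_0, e_1] = e_2).
print(product(0, 1))   # {2: 1.0}

# Co-product: A_2 decays to a superposition of exactly the pairs
# that can fuse back to it via the product vertex.
print(coproduct(2))    # {(0, 1): 1.0, (1, 0): -1.0}
```

Note how the co-product output is precisely the set of pairs whose product returns A_2, which is the content of the "time reversal" statement in the text.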
There are motivations for representing scattering amplitudes as sequences of algebraic operations
performed on the incoming set of particles and leading to an outgoing set of particles, with particles
identified as algebraic objects acting on the vacuum state. The outcome would be analogous to Feynman
diagrams, but only the diagram with minimal length, to which a preferred extremal can be assigned, is
needed. Larger diagrams must be equivalent with it.
The question is whether it could indeed be possible to characterize particle reactions as computations
involving the transformation of tensor products to products in vertices and of products to tensor products in
co-vertices (the time reversals of the vertices). A couple of examples gives some idea about what is
involved.
1. The simplest operations would preserve the particle number and just permute the particles: the
permutation generalizes to a braiding, and the scattering matrix would be basically the unitary
braiding matrix utilized in topological quantum computation.
2. A more complex situation occurs when the number of particles is preserved but the quantum
numbers of the final state are not the same as those of the initial state, so that the particles must interact. This
requires both product and co-product vertices. For instance, Ak⊗Al → fklm Am followed by Am →
fmrs Ar⊗As, giving Ak⊗Al → fklm fmrs Ar⊗As and representing 2-particle scattering. State function
reduction in the final state can select any pair Ar⊗As. This reaction is
characterized by the ordinary tree diagram in which two lines fuse to a single line and then split back
into two lines. Note also that a non-deterministic element is involved: a given final state can
be achieved from a given initial state after a large enough number of trials. The analogy with
problem solving and mathematical theorem proving is obvious. If the interpretation is correct, the
Universe would be a problem solver and theorem prover!
3. More complex reactions affect also the particle number. The 3-vertex and its co-vertex are the
simplest examples and generate the more complex particle-number-changing vertices. For instance,
in the twistor Grassmann approach one can construct all diagrams using two 3-vertices. This
encourages the restriction to 3-vertices (recall that fermions have only 2-vertices).
4. Intuitively it is clear that the final collection of algebraic objects can be reached in a large -
maybe infinite - number of ways. It seems also clear that there is a shortest manner to end up at
the final state from a given initial state. Of course, it can happen that there is no way to achieve
it! For instance, if • corresponds to group multiplication, the co-vertex can lead only to a pair of
particles for which the product of the final state group elements equals the initial state group
element.
5. Quantum theorists of course worry about unitarity. How can one avoid the situation in which the
product gives zero, as can happen since the outcome is an element of a linear space? Somehow the product
should be such that this can be avoided. For instance, if the product is the Lie algebra commutator, the
Cartan algebra would give zero as the outcome.
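The 2-particle scattering of item 2, a product vertex followed by a co-product vertex, can be made explicit in the same su(2) toy model (f_klm = ε_klm, chosen purely for illustration). Summing over the internal line m gives the standard contraction identity Σ_m ε_klm ε_mrs = δ_kr δ_ls - δ_ks δ_lr, so the toy amplitude just compares the final pair with the initial pair, up to exchange:

```python
# Structure constants of su(2): f_klm = epsilon_{klm} (Levi-Civita symbol).
def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1.0
    if (i, j, k) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1.0
    return 0.0

def delta(i, j):
    return 1.0 if i == j else 0.0

def tree_amplitude(k, l, r, s):
    """A_k (x) A_l -> A_m -> A_r (x) A_s: sum over the internal line m."""
    return sum(eps(k, l, m) * eps(m, r, s) for m in range(3))

# Verify the contraction identity sum_m eps_klm eps_mrs = d_kr d_ls - d_ks d_lr
# for all index assignments.
for k in range(3):
    for l in range(3):
        for r in range(3):
            for s in range(3):
                assert tree_amplitude(k, l, r, s) == \
                    delta(k, r) * delta(l, s) - delta(k, s) * delta(l, r)

# Elastic channel (0,1) -> (0,1) vs. exchanged channel (0,1) -> (1,0):
# the sign flip reflects the antisymmetry of the bracket.
print(tree_amplitude(0, 1, 0, 1), tree_amplitude(0, 1, 1, 0))  # 1.0 -1.0
```

In this toy the "non-deterministic element" of the text corresponds to state function reduction picking one pair (r, s) from the superposition weighted by these amplitudes.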
Generalized Feynman diagram as shortest possible algebraic manipulation connecting initial and
final algebraic objects
There is a strong motivation for the interpretation of generalized Feynman diagrams as the shortest
possible algebraic operations connecting initial and final states. The reason is that in TGD one does not
have a path integral over all possible space-time surfaces connecting the 3-surfaces at the ends of CD.
Rather, one has in the optimal situation a space-time surface unique apart from conformal gauge
degeneracy connecting the 3-surfaces at the ends of CD (they can have disjoint components).
The path integral is replaced with an integral over 3-surfaces. There is therefore only a single minimal
generalized Feynman diagram (or twistor diagram, or whatever the appropriate term is). It would be nice
if this diagram had an interpretation as the shortest possible computation leading from the initial state to the
final state, specified by 3-surfaces and basically fermionic states at them. This would of course simplify
the theory enormously, and the connection to the twistor Grassmann approach is very suggestive. A
further motivation comes from the observation that the state basis created by the fermionic Clifford
algebra has an interpretation in terms of Boolean quantum logic, and that in ZEO the fermionic states
would have an interpretation as analogs of Boolean statements A→B.
To see whether and how this idea could be realized in TGD framework, let us try to find
counterparts for the basic operations ⊗ and • and identify the algebra involved. Consider first the basic
geometric objects.
1. Tensor product could correspond geometrically to two disjoint 3-surfaces representing the
particles. Partonic 2-surfaces associated with a given 3-surface represent a second possibility. The
splitting of a partonic 2-surface into two could be the geometric counterpart of the co-product.
2. Partonic 2-surfaces are however connected to each other, and possibly even to themselves, by
strings. It seems that the partonic 2-surface cannot be the basic unit. Indeed, elementary particles are
identified as pairs of wormhole throats (partonic 2-surfaces), with magnetic monopole flux
flowing from one throat to another along the first space-time sheet, then through a wormhole contact to the
second sheet, then back along the second sheet to the lower throat of the first contact, and then back to the
first throat. This unit seems to be the natural basic object to consider. The flux tubes at both sheets are
accompanied by fermionic strings. Whether also the wormhole throats contain strings, so that one
would have a single closed string rather than two open ones, is an open question.
3. The connecting strings give rise to the formation of gravitationally bound states, and the
hierarchy of Planck constants is crucially involved. For an elementary particle there are just two
wormhole contacts, each involving two wormhole throats connected by the wormhole contact.
Wormhole throats are connected by one or more strings, which define the space-like boundaries of the
corresponding string world sheets at the boundaries of CD. These strings are responsible for the
formation of bound states, even macroscopic gravitational bound states.
Super-symplectic Yangian would be a reasonable guess for the algebra involved.
1. The 2-local generators of the Yangian would be of the form T^A_1 = f^A_{BC} T^B ⊗ T^C, where f^A_{BC} are the structure constants of the super-symplectic algebra. n-local generators would be obtained by iterating this rule. Note that the generator T^A_1 creates an entangled state of T^B and T^C with f^A_{BC} as the entanglement coefficients. T^A_n is an entangled state of T^B and T^C_{n-1} with the same coefficients. A kind of replication of T^A_{n-1} is clearly involved, and the fundamental replication is that of T^A. Note that one can start from any irreducible representation with well-defined symplectic quantum numbers and form a similar hierarchy by using T^A and the representation as a starting point.
That the hierarchy T^A_n and the hierarchies of irreducible representations would define a hierarchy of states associated with the partonic 2-surface is a highly non-trivial and powerful hypothesis about the formation of many-fermion bound states inside partonic 2-surfaces.
2. The charges T^A correspond to fermionic and bosonic super-symplectic generators. The geometric counterpart for the replication at the lowest level could be a fermionic/bosonic string carrying a super-symplectic generator splitting into a fermionic/bosonic string and a string carrying a bosonic symplectic generator T^A. This splitting of the string brings to mind the basic gauge boson-gauge boson or gauge boson-fermion vertex.
The vision about the emission of a virtual particle suggests that the entire wormhole contact pair replicates. The second wormhole throat would naturally carry the string corresponding to T^A assignable to a gauge boson. T^A should involve pairs of fermionic creation and annihilation operators as well as fermionic and anti-fermionic creation operators (and annihilation operators) as in quantum field theory.
3. Bosonic emergence suggests that bosonic generators are constructed from fermion pairs with the fermion and anti-fermion at opposite wormhole throats: this would make it possible to avoid the problems with the singular character of a purely local fermion current. The fermionic and anti-fermionic strings would reside at opposite space-time sheets, and the whole structure would correspond to a closed magnetic tube carrying monopole flux. Fermions would correspond to superpositions of states in which the string is located at either half of the closed flux tube.
4. The basic arithmetic operation in a co-vertex would be co-multiplication transforming T^A_n to T^A_{n+1} = f^A_{BC} T^B_n ⊗ T^C. In a vertex the transformation of T^A_{n+1} to T^A_n would take place. The interpretations would be as emission/absorption of a gauge boson. One must also include the emission of a fermion, and this means replacement of T^A with the corresponding fermionic generators F^A, so that the fermion number of the second part of the state is reduced by one unit. Particle reactions would be more than mere braidings and re-groupings of fermions and anti-fermions inside partonic 2-surfaces, which can split.
5. Inside the light-like orbits of the partonic 2-surfaces there is also a braiding affecting the M-matrix. The arithmetics involved would therefore essentially be that of measuring and "co-measuring" symplectic charges.
Generalized Feynman diagrams (preferred extremals) connecting given 3-surfaces and many-fermion states (bosons are counted as fermion-anti-fermion states) would have a minimum number of vertices and co-vertices. The splitting of string lines implies the creation of pairs of fermion lines. Whether re-groupings are part of the story is not quite clear. In any case, without the replication of 3-surfaces it would not be possible to understand processes like e-e scattering by photon exchange in the proposed picture.
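The co-product rule of items 1 and 4, T^A_1 = f^A_{BC} T^B ⊗ T^C, can be made concrete with any Lie algebra. The sketch below uses su(2) with Pauli-matrix generators purely as a stand-in for the super-symplectic algebra (whose structure constants are of course different); it builds the 2-local generator and iterates the rule to n-local generators.

```python
import numpy as np

# Generators T^A = sigma_A / 2 of su(2), standing in for the super-symplectic
# algebra (an illustrative assumption; the real algebra is infinite-dimensional).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sigma]

def f(a, b, c):
    """Structure constants of su(2), proportional to the Levi-Civita symbol."""
    eps = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
           (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
    return 1j * eps.get((a, b, c), 0)

def T1(a):
    """2-local generator T^A_1 = f^A_{BC} T^B (x) T^C (the co-product)."""
    out = np.zeros((4, 4), dtype=complex)
    for b in range(3):
        for c in range(3):
            out = out + f(a, b, c) * np.kron(T[b], T[c])
    return out

def Tn(a, n):
    """n-local generator T^A_n = f^A_{BC} T^B (x) T^C_{n-1}, iterating the rule."""
    if n == 0:
        return T[a]
    dim = 2 ** (n + 1)
    out = np.zeros((dim, dim), dtype=complex)
    for b in range(3):
        for c in range(3):
            out = out + f(a, b, c) * np.kron(T[b], Tn(c, n - 1))
    return out
```

Note that T^A_1 is indeed an entangled operator: it cannot be written as a single tensor product X ⊗ Y, which is the algebraic content of the statement that the f^A_{BC} act as entanglement coefficients.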
This was not the whole story yet
The proposed amplitude represents only the value of the WCW spinor field for a single pair of 3-surfaces at the opposite boundaries of a given CD. Hence the Yangian construction does not tell the whole story.
1. The Yangian algebra would give only the vertices of the scattering amplitudes. On the basis of the previous considerations, one expects that each fermion line carries a propagator defined by 8-momentum. The structure would resemble that of a super-symmetric YM theory. Fermionic propagators should emerge from summing over intermediate fermion states in various vertices, and one would have integrations over virtual momenta, which are carried out as residue integrations in the twistor Grassmann approach. The 8-D counterpart of twistorialization would apply.
2. The super-symplectic Yangian would give the scattering amplitudes for a single space-time surface, and the purely group-theoretical form of these amplitudes gives hopes about the independence of the scattering amplitude of the pair of 3-surfaces at the ends of CD near the maximum of the Kähler function. This is perhaps too much to hope for except approximately, but if true, the integration over WCW would give only the exponent of the Kähler action, since the poorly defined Gaussian and metric determinants would cancel by the basic properties of the Kähler metric. The exponent would give a non-analytic dependence on α_K.
The Yangian supercharges are proportional to 1/α_K since the covariant Kähler-Dirac gamma matrices are proportional to the canonical momentum currents of Kähler action and thus to 1/α_K. Perturbation theory in powers of α_K = g_K^2/(4π hbar_eff) is possible after factorizing out the exponent of the vacuum functional at the maximum of the Kähler function and the factors 1/α_K multiplying the super-symplectic charges.
The additional complication is that the characteristics of the preferred extremals contributing significantly to the scattering amplitudes are expected to depend on the value of α_K by quantum interference effects. Kähler action is proportional to 1/α_K. The analog of AdS/CFT correspondence states the expressibility of the Kähler function in terms of string area in the effective metric defined by the anti-commutators of the Kähler-Dirac gamma matrices. Interference effects eliminate string lengths for which the area action has a value considerably larger than one, so that the string length and thus also the minimal size of the CD containing it scales as h_eff. Quantum interference effects therefore give an additional dependence of the Yangian super-charges on h_eff, leading to a perturbative expansion in powers of α_K although the basic expression for the scattering amplitude would not suggest this.
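As a back-of-the-envelope illustration of this scaling, the sketch below evaluates α_K = g_K^2/(4π hbar_eff) for hbar_eff = n·hbar; the normalization α_K(n=1) ≈ 1/137 is a pure placeholder assumption for illustration, not a TGD prediction.

```python
# Illustrative scaling of the Kahler coupling strength with the hierarchy of
# Planck constants: alpha_K = g_K^2 / (4*pi*hbar_eff) with hbar_eff = n*hbar.
# The n = 1 value below is an assumed placeholder, not a derived constant.
ALPHA_K_AT_N1 = 1 / 137.0

def alpha_K(n):
    """Coupling strength when hbar_eff = n * hbar; falls off as 1/n."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    return ALPHA_K_AT_N1 / n
```

Large h_eff thus weakens the coupling, which is what makes a perturbative expansion in powers of α_K sensible.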
See the chapter Classical part of the twistor story or the article Classical part of the twistor story.
posted by Matti Pitkanen @ 10:42 PM
03/18/2015 - http://matpitka.blogspot.com/2015/03/hierarchies-of-conformalsymmetry.html#comments
Hierarchies of conformal symmetry breakings, Planck constants, and inclusions of hyperfinite
factors of type II1
The basic almost-prediction of TGD is a fractal hierarchy of breakings of symplectic symmetry as a gauge symmetry. It is good to first briefly summarize the basic facts about the symplectic algebra assigned with δM^4_± × CP_2.
1. The symplectic algebra has the structure of a Virasoro algebra with respect to the light-like radial coordinate r_M of the light-cone boundary, which takes the role of the complex coordinate of ordinary conformal symmetry. The Hamiltonians generating symplectic symmetries can be chosen to be proportional to functions f_n(r_M). What the natural choice for f_n(r_M) is, is not quite clear. Ordinary conformal invariance would suggest f_n(r_M) = r_M^n. A more adventurous possibility is that the algebra is generated by Hamiltonians with f_s(r_M) = r_M^{-s}, where s is a root of Riemann zeta, so that one has either s = 1/2 + iy (roots at the critical line) or s = -2n, n > 0 (roots at the negative real axis).
2. The set of conformal weights would be a linear space spanned by combinations of all roots with integer coefficients: s = n + iy, y = ∑ n_i y_i, n > -n_0, where n_0 ≥ 0 so that the ground state conformal weight can be negative. Mass squared is proportional to the total conformal weight and must be real, demanding y = ∑ y_i = 0 for physical states: I call this conformal confinement, analogous to color confinement. One could even consider introducing the analog of binding energy as "binding conformal weight".
Mass squared must also be non-negative (no tachyons), giving n_0 ≥ 0. The generating conformal weights however have negative real part -1/2 and are thus tachyonic. Rather remarkably, p-adic mass calculations force one to assume a negative half-integer valued ground state conformal weight. This, plus the fact that the zeros of Riemann zeta have indeed been assigned with critical systems, forces one to take the Riemann zeta variant of the conformal weight spectrum seriously. Also now the algebra allows an infinite hierarchy of conformal sub-algebras with weights coming as n-multiples of the conformal weights of the entire algebra.
3. The outcome would be an infinite number of hierarchies of symplectic conformal symmetry breakings. Only the generators of the sub-algebra of the symplectic algebra with radial conformal weight proportional to n would act as gauge symmetries at a given level of the hierarchy. In the hierarchy n_i divides n_{i+1}. In the symmetry breaking n_i → n_{i+1} the conformal charges which vanished earlier would become non-vanishing. Gauge degrees of freedom would transform to physical degrees of freedom.
4. What about the conformal Kac-Moody algebras associated with spinor modes? It seems that in this case one can assume that the conformal gauge symmetry is exact, just as in string models.
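The conformal confinement condition of item 2 amounts to simple integer arithmetic on the imaginary parts of the zeta zeros. The sketch below uses the well-known first two nontrivial zeros (zeros come in conjugate pairs ±y); the multiplicities are chosen purely for illustration.

```python
# Conformal confinement sketch: the total weight s = n + i*sum(n_i * y_i)
# must be real, i.e. the integer combination of zeta-zero imaginary parts
# must cancel. The y-values are the first two nontrivial zeros of Riemann
# zeta (known constants); both signs appear since zeros come in pairs.
YS = [14.134725, -14.134725, 21.022040, -21.022040]

def total_y(coeffs):
    """Imaginary part of the total conformal weight for multiplicities n_i."""
    return sum(n * y for n, y in zip(coeffs, YS))

confined = total_y([2, 2, 1, 1])    # balanced pairs -> 0, a physical state
unconfined = total_y([1, 0, 1, 0])  # unbalanced -> complex mass squared
```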
The natural interpretation of the conformal hierarchies n_i → n_{i+1} would be in terms of increasing measurement resolution.
1. Conformal degrees of freedom below the measurement resolution would be gauge degrees of freedom and correspond to generators with conformal weight proportional to n_i. The conformal hierarchies and the associated hierarchies of Planck constants and n-fold coverings of the space-time surface connecting the 3-surfaces at the ends of the causal diamond would give a concrete realization of the inclusion hierarchies for hyper-finite factors of type II_1.
n_i could correspond to the integer labelling Jones inclusions, with the associated quantum group phase factor U_n = exp(i2π/n), n ≥ 3, and the index of inclusion |M:N| = 4cos^2(π/n) defining the fractal dimension assignable to the degrees of freedom above the measurement resolution. The sub-algebra with weights coming as n-multiples of the basic conformal weights would act as gauge symmetries, realizing the idea that these degrees of freedom are below the measurement resolution.
2. If h_eff = n×h defines the conformal gauge sub-algebra, the improvement of the resolution would scale up the Compton scales and would quite concretely correspond to a zoom, analogous to zooming into the Mandelbrot fractal to make new details visible. From the point of view of cognition the improving resolution would fit nicely with the recent view about h_eff/h as a kind of intelligence quotient.
This interpretation might make sense for the symplectic algebra of δM^4_± × CP_2, for which the light-like radial coordinate r_M of the light-cone boundary takes the role of the complex coordinate. The reason is that the symplectic algebra acts as isometries.
3. If Kähler action has vanishing total variation under the deformations defined by the broken conformal symmetries, the corresponding conformal charges are conserved. The components of the WCW Kähler metric, expressible in terms of second derivatives of the Kähler function, can however be non-vanishing and have also components which correspond to WCW coordinates associated with different partonic 2-surfaces. This conforms with the idea that the conformal algebras extend to Yangian algebras generalizing the Yangian symmetry of N=4 supersymmetric gauge theories. For the deformations defined by symplectic transformations acting as gauge symmetries the second variation vanishes and there is no contribution to the WCW Kähler metric.
4. One can interpret the situation also in terms of consciousness theory. The larger the value of h_eff, the lower the criticality and the more sensitive the measurement instrument, since new degrees of freedom become physical; hence the better the resolution. In the p-adic context large n means better resolution in angle degrees of freedom by introducing the phase exp(i2π/n) to the algebraic extension, and better cognitive resolution. Also the emergence of negentropic entanglement characterized by an n×n unitary matrix with density matrix proportional to the unit matrix means higher-level conceptualization with more abstract concepts.
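The fractal dimensions of item 1 can be tabulated directly. The sketch below uses the standard Jones index formula |M:N| = 4cos^2(π/n), whose values for n ≥ 3 increase from 1 toward the limit 4.

```python
import math

# Jones index |M:N| = 4*cos(pi/n)**2 for inclusions of hyper-finite type II_1
# factors; for n = 3, 4, 6 the index is exactly 1, 2, 3, and the sequence
# increases monotonically toward 4 as the resolution integer n grows.
def jones_index(n: int) -> float:
    if n < 3:
        raise ValueError("Jones inclusions require n >= 3")
    return 4 * math.cos(math.pi / n) ** 2

indices = {n: jones_index(n) for n in range(3, 9)}
```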
The extension of the super-conformal algebra to a larger Yangian algebra is highly suggestive and gives an additional aspect to the notion of measurement resolution.
1. The Yangian would be generated from the algebra of super-conformal charges assigned with point pairs belonging to two partonic 2-surfaces, as stringy Noether charges assignable to the strings connecting them. For the super-conformal algebra associated with a pair of partonic 2-surfaces there is only a single string connecting them. This measurement resolution is almost the poorest possible (no strings at all would mean no measurement resolution at all!).
2. The situation improves if one has a collection of strings connecting a set of points of a partonic 2-surface to other partonic 2-surface(s). This requires a generalization of the super-conformal algebra in order to get the appropriate mathematics. Tensor powers of single-string super-conformal charge spaces are obviously involved, and the extended super-conformal generators must be multi-local and carry multi-stringy information about physics.
3. The generalization at the first step is simple and based on the idea that the co-product is the "time inverse" of the product, assigning to a single generator a sum of tensor products of generators giving via commutator rise to the generator. The outcome would be expressible using the structure constants of the super-conformal algebra schematically as Q^A_1 = f^A_{BC} Q^B ⊗ Q^C. Here Q^B and Q^C are super-conformal charges associated with separate strings, so that 2-local generators are obtained. One can iterate this construction and get a hierarchy of n-local generators involving products of n stringy super-conformal charges. The larger the value of n, the better the resolution and the more information is coded to the fermionic state about the partonic 2-surface and the 3-surface. This affects the space-time surface and hence the WCW metric but not the 3-surface, so that the interpretation in terms of improved measurement resolution makes sense. This super-symplectic Yangian would be behind the quantum groups and Jones inclusions in the TGD Universe.
4. n gives also the number of space-time sheets in the singular covering. One possible interpretation is in terms of measurement resolution for counting the number of space-time sheets. Our recent quantum physics would see only a single space-time sheet representing visible matter, and dark matter would become visible only for n > 1.
It is not an accident that quantum phases are assignable to Yangian algebras, to quantum groups, and to inclusions of HFFs. The new deep notions added to this existing complex of high-level mathematical concepts are the hierarchy of Planck constants, the dark matter hierarchy, the hierarchy of criticalities, and negentropic entanglement, representing physical notions. All these aspects represent new physics.
posted by Matti Pitkanen @ 9:09 PM
5 Comments:
At 11:37 PM, Hamed said...
Dear Matti,
1- Every day the Earth rotates once around its axis; in current physics it is supposed that the torque is zero because the angular velocity is supposed to be constant. But we experience that the length of the day changes during the year. Hence it is supposed that the rotation of the Earth around the Sun is not on a circle but on an ellipse. This makes for the variation of the day length. Can one say the rotation of the Earth around the Sun is on a circle but the torque of the rotation of the Earth around itself is constant? This would make an n-fold sheeted structure for the Earth naturally!
2- Zoisite stones that are used in healing are more evolved than simple elements like iron and copper, and radiate EM waves with larger Planck constants? And this radiation makes the healing? If that is correct, does the origin of the radiation come from scaled-up mass scales of elementary particles in their molecular structures? Can one really say it is not needed to search at the LHC for these large mass scales of elementary particles but just to search in the Zoisite stones?
At 3:37 AM,
[email protected] said...
Dear Hamed,
a) The rotation is on an ellipse. From Wikipedia one finds the value of the parameter describing the ratio of the axis lengths. Angular momentum is a constant of motion since the Sun does not exert any torque (the force is radial). I do not know the dissipation rate causing the slowing down. There are also effects of the other planets.
b) It has been claimed that also minerals like quartz emit biophotons. The interpretation would be as ordinary photons resulting from a phase transition of dark photons conserving photon energy. The possible healing effect could come from large-scale quantum coherence.
The assumption is that masses are *same* for particles and their dark variants but Compton and de Broglie wavelengths are scaled by h_eff/h. Hence quantal effects in long length scales would become possible.
I have proposed that even thermodynamically critical states of matter generate dark variants of particles and vice versa. The surprise to me was that the larger the Planck constant, the less critical the system is: more degrees of freedom have transformed from gauge degrees of freedom to physical ones. I had thought just the opposite.
The basic mistake of particle physicists might be that they try to find dark matter at short scales. They should do just the opposite: biology would be the optimal place. Particle physicists should extend their intellectual horizon considerably.
At 1:47 PM, Ulla said...
Crystals are used in shamanic healing (maybe also chakra stones) as absorbers of
energy, and hence also balancers of disturbed energy fields, much in the same way as
needles are used to channel out too much piezoelectric energy (the - pole effect, the tap).
According to TCM this is done only when severe acute energy distortion is at hand. To tap
out energy is not recommended.
The other effect of needles is to smooth out piezoelectric energy fields (the + pole, or
hole) and in this way the crystals and stones may also help, simply by moving the fields?
To give off biophotons is something I have not heard much of. Biophotons are maybe given off only when they are first received. This is what is called 'bad energy'. Crystals shall be purified, emptied of energy between different patients to avoid this.
Stones have a very weak potential and invoke on the DC current in our bodies.
A weak field has a larger magnetic Plancks constant (if such can be talked of) maybe,
but is analog and with much information, which maybe is more important. This links again
to informational transfer as seen in homeopathy?
At 1:50 PM, Ulla said...
The tap is + and the hole is - of course, sorry for my typos :)
At 7:26 PM, [email protected] said...
I must warn: I do not have a reference to biophotons and quartz. I see biophotons as leakage: dark photons, which are the relevant thing, transform partially to ordinary photons that we call biophotons.
The standard approach of course tries to explain them in terms of a chemical mechanism, but the signatures predicted by chemical production mechanisms are not present in the biophoton spectrum.
What would make the dark photons behind biophotons so special is that their wavelengths for a given energy are much, much longer than for ordinary photons: macroscopic quantum coherence! The energy spectrum is in the visible and UV, and this makes them ideal for controlling biomolecular chemistry. Dark magnetic bodies with size scales of the Earth could also transform ordinary photons from the Sun to dark photons.
It is possible that UV light is in fact transformed to dark photons so that it survives through the atmosphere, and for energy around 12 eV acts as a new source of metabolic energy almost cutting O-H bonds in water molecules, so that the ordinary metabolic energy quantum is enough to split the bond and one obtains the H1.5O fourth phase of water discovered by Pollack and collaborators.
Clearly, every fourth proton goes somewhere. These protons would become dark and
go to magnetic flux tubes and their sequences would define dark nuclei realizing the
analogs of basic biopolymers and the primordial variant of genetic code.
These dark proton sequences could accompany DNA, RNA, and amino-acids. In a very courageous mood one might even say that dark nuclear strings serve as templates for biomolecules and for their biochemistry ;-). Biochemistry would be a shadow of something much deeper!
03/17/2015 - http://matpitka.blogspot.com/2015/03/is-view-about-evolution-asapproach.html#comments
Is the view about Evolution as gradual reduction of criticality consistent with Biology?
The naive idea would be that living systems are thermodynamically critical, so that life would be an inherently unstable phenomenon. One can find support for this view. For instance, living matter as we know it functions in a rather narrow temperature range. In this picture the problem is how the emergence of life is possible at all.
TGD suggests a different view. Evolution corresponds to the transformation of gauge degrees of
freedom to dynamical ones and leads away from quantum criticality rather than towards it. Which view
is correct?
The argument below supports the view that evolution indeed involves a spontaneous drift away from maximal quantum criticality. One cannot however avoid the feeling that a paradox is present.
1. Maybe the crux of the paradox is that quantum criticality relies on NMP, while thermodynamical criticality relies on the second law, which follows from NMP at the ensemble level, at least for ordinary entanglement (as opposed to negentropic entanglement). Quantum criticality is geometric criticality of preferred extremals, and thermodynamical criticality is criticality against the first state function reduction at the opposite boundary of CD, inducing decoherence and the "death" of the self defined by the sequence of state function reductions at a fixed boundary of CD. NMP would be behind both criticalities: it would stabilize the self and force the first quantum jump killing the self.
2. Perhaps the point is that living systems are able to stay around both thermodynamical and quantum criticalities. This would make them flexible and sensitive. And indeed, the first quantum jump has an interpretation as a correlate for volitional action at some level of the self hierarchy. Consciousness involves passive and active aspects: periods of repeated state function reductions and acts of volition. The basic applications of the hierarchy of Planck constants to biology indeed involve h_eff-changing phase transitions in both directions: for instance, molecules are able to find each other by an h_eff-reducing phase transition of the connecting magnetic flux tubes bringing them near to each other.
The attempt to understand cosmological evolution in terms of the hierarchy of Planck constants demonstrates that the view that evolution corresponds to a spontaneous drift away from maximal quantum criticality is feasible.
1. In primordial cosmology one has a gas of cosmic strings X^2 × Y^2 ⊂ M^4 × CP_2. If they behave deterministically, as it seems, their symplectic symmetries are fully dynamical and cannot act as gauge symmetries. This would suggest that they are not quantum critical and that the cosmic evolution leading to the thickening of the cosmic strings would be towards criticality, contrary to the general idea.
Here one must however be extremely cautious: are cosmic strings really maximally non-critical? The CP_2 projection of a cosmic string can be any holomorphic 2-surface in CP_2, and there could be criticality against transitions changing a geodesic sphere to a holomorphic 2-surface. There is also a criticality against transitions making the M^4 projection 4-dimensional. The hierarchy of Planck constants could be assignable to the resulting magnetic flux tubes.
In TGD-inspired biology, magnetic flux tubes are indeed carriers of large h_eff phases. That cosmic strings are actually critical is also supported by the fact that it does not make sense to assign an infinite value of h_eff and therefore a vanishing value of α_K to cosmic strings, since Kähler action would become infinite. The assignment of large h_eff to cosmic strings does not seem a good idea since there are no gravitationally bound states yet, only a gas of cosmic strings in M^4 × CP_2.
Cosmic strings allow conformal invariance. Does this conformal invariance act as gauge symmetries or dynamical symmetries? Quantization of ordinary strings would suggest the interpretation of super-conformal symmetries as gauge symmetries. It however seems that the conformal invariance of standard strings corresponds to that associated with the modes of the induced spinor field, and these would indeed correspond to full gauge invariance. What matters, however, are the symplectic conformal symmetries: something new and crucial for the TGD view. The non-generic character of the 2-D M^4 projection suggests that a sub-algebra of the symplectic conformal symmetries increasing the thickness of the M^4 projection of the string acts as gauge symmetries (the Hamiltonians would be products of S^2 and CP_2 Hamiltonians). The most plausible conclusion is that cosmic strings recede from criticality as their thickness increases.
2. Cosmic strings are not the only objects involved. Space-time sheets are generated during the inflationary period, and cosmic strings topologically condense on them, creating wormhole contacts and beginning to expand to magnetic flux tubes with M^4 projection of increasing size. Ordinary matter is generated in the decay of the magnetic energy of cosmic strings, replacing the vacuum energy of inflaton fields in inflationary scenarios.
M^4 and CP_2 type vacuum extremals are certainly maximally critical by their non-determinism, and symplectic conformal gauge invariance is maximal for them. During the later stages gauge degrees of freedom would transform to dynamical ones. The space-time sheets and wormhole contacts would also drift gradually away from criticality, so that also their evolution conforms with the general TGD view.
Cosmic evolution would thus reduce criticality and would be spontaneous (NMP). An analogy is provided by the evolution of a cell from a maximally critical germ cell to a completely differentiated outcome.
3. There is however a paradox lurking here. A thickening cosmic string should gradually approach M^4 type vacuum extremals as the density of matter is reduced in the expansion. Could the drift away from criticality transform into an approach towards it? The geometry of CD involves the analogs of both Big Bang and Big Crunch. Could it be that the eventual turning of expansion to contraction allows one to circumvent the paradox? Is the crux of the matter the fact that thickened cosmic strings already have a large value of h_eff, meaning that they are n-sheeted objects unlike the M^4 type vacuum extremals?
Could NMP force the first state function reduction to the opposite boundary of CD when the expansion inside CD turns to contraction at the space-time level, the contraction being experienced as expansion since the arrow of time changes? Note that at the imbedding space level the size of CD increases all the time. Could the ageing and death of living systems be understood by using this analogy? Could the quantum jump to the opposite boundary of CD be seen as a kind of reincarnation allowing the increase of h_eff and conscious evolution to continue as NMP demands? The first quantum jump would also generate entropy, and thermodynamical criticality could be criticality against its occurrence. This interpretation of thermodynamical criticality would mean that living systems by definition live at the borderline of life and death!
posted by Matti Pitkanen @ 6:31 PM
1 Comments:
At 1:59 AM,
Ulla said...
Both ends of the spectrum are dead: too much noise or chaos does not hold stability, and too much order and stability doesn't allow change. But homeostasis itself is a bit on the line of stability, so some is needed, though it also must be regulated very carefully. Allostasis often does this by creating more noise. If it does not succeed we get a new allostatic balancing point for regulation longer on the stability line, and we are then prone to get illness. Joseph often talks of stress, as do biologists, but stress in itself is an ad hoc word of the same sort as homeostasis. Both act on the same stimuli.
So to minimize the stress we have to stay as close to the edge as we can. This means
adaptive, flexible, changeable, sensitive...
One of the most important tasks we have is to predict our future, and that requires the
'staying receptive' and reactive.
A diamond (max coherence) is as bad as the chaos soup (min. coherence)?
I talked of Life being both coherent and decoherent at the same time in my FQXI essay. Note that Life is a complex, many-sheeted, many-bodied state, not the singular cat-state (which indeed also is complex, made of many states of waves).
03/13/2015 - http://matpitka.blogspot.com/2015/03/in-previous-posting-i-toldabout.html#comments
Classical number fields and associativity and commutativity as fundamental law of physics
In the previous posting I told about the possibility that string world sheets with area action could be present in TGD at the fundamental level, with the ratio hbar·G/R^2 of the string tension scale to the square of the CP_2 radius fixed by quantum criticality. I however found that the assumption that gravitational binding has as correlates strings connecting the bound partonic 2-surfaces leads to grave difficulties: the sizes of the gravitationally bound states cannot be much longer than Planck length. This binding mechanism is strongly suggested by AdS/CFT correspondence, but perturbative string theory does not allow it.
I proposed that the replacement of h with h_eff = n×h = h_gr = GMm/v_0 could resolve the problem. It does not. I soon noticed that the typical size scale of a string world sheet scales as h_gr^{1/2}, not as h_gr = GMm/v_0 as one might expect. The only reasonable option is that the string tension behaves as 1/h_gr^2. In the following I demonstrate that TGD in its basic form, defined by super-symmetrized Kähler action, indeed predicts this behavior if string world sheets emerge. They indeed do emerge number theoretically from the condition of associativity and also from the condition that the electromagnetic charge for the spinor modes is well-defined. By the analog of AdS/CFT correspondence the string tension could characterize the action density of the magnetic flux tubes associated with the strings, and the varying string tension would correspond to the effective string tension of the magnetic flux tubes as carriers of magnetic energy (dark energy is identified as magnetic energy in the TGD Universe).
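For a feeling of the magnitudes involved, the sketch below evaluates h_gr = GMm/v_0 numerically; the Sun-proton pair and v_0 ≈ c/2^11 are illustrative choices (v_0 is a parameter of the model, not derived here).

```python
# Order-of-magnitude sketch of the gravitational Planck constant
# hbar_gr = G*M*m/v_0 (here: Sun and a proton, with v_0 ~ c/2**11 assumed).
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
hbar = 1.055e-34      # reduced Planck constant, J*s
M_sun = 1.989e30      # solar mass, kg
m_proton = 1.673e-27  # proton mass, kg
v0 = c / 2**11        # assumed parameter with dimensions of velocity

hbar_gr = G * M_sun * m_proton / v0
n = hbar_gr / hbar    # the integer h_eff/h = n would be astronomically large
```

The resulting n is of order 10^22, which is why the text argues that a size scale growing only as h_gr^{1/2} rather than h_gr changes the conclusions qualitatively.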
Therefore the visit of string theory to the TGD Universe remained rather short, but it had a purpose: it made completely clear why superstrings are not the theory of gravitation and why TGD can be this theory.
Do associativity and commutativity define the laws of physics?
The dimensions of classical number fields appear as dimensions of basic objects in quantum TGD.
Imbedding space has dimension 8, space-time has dimension 4, light-like 3-surfaces are orbits of 2-D
partonic surfaces. If conformal QFT applies to 2-surfaces (this is questionable), one-dimensional
structures would be the basic objects. The lowest level would correspond to discrete sets of points
identifiable as intersections of real and p-adic space-time sheets. This suggests that besides p-adic
number fields also classical number fields (reals, complex numbers, quaternions, octonions are involved
and the notion of geometry generalizes considerably. In the recent view about quantum TGD the
dimensional hierarchy defined by classical number fields indeed plays a key role. H = M4 × CP2 has a
number theoretic interpretation and standard model symmetries can be understood number theoretically
as symmetries of hyper-quaternionic planes of hyper-octonionic space.
The associativity condition A(BC)= (AB)C suggests itself as a fundamental physical law of both
classical and quantum physics. Commutativity can be considered as an additional condition. In
conformal field theories associativity condition indeed fixes the n-point functions of the theory. At the
level of classical TGD, space-time surfaces could be identified as maximal associative (hyper-quaternionic) sub-manifolds of the imbedding space whose points contain a preferred hyper-complex plane M2 in their tangent space, and the hierarchy finite fields-rationals-reals-complex numbers-quaternions-octonions could have a direct quantum physical counterpart. This leads to the notion of number
number theoretic compactification analogous to the dualities of M-theory: one can interpret space-time
surfaces either as hyper-quaternionic 4-surfaces of M8 or as 4-surfaces in M4 × CP2. As a matter of fact,
commutativity in number theoretic sense is a further natural condition and leads to the notion of number
theoretic braid naturally as also to direct connection with super string models.
At the level of modified Dirac action the identification of space-time surface as a hyper-quaternionic
sub-manifold of H means that the modified gamma matrices of the space-time surface defined in terms
of canonical momentum currents of Kähler action using octonionic representation for the gamma
matrices of H span a hyper-quaternionic sub-space of hyper-octonions at each point of space-time
surface (hyper-octonions are the subspace of complexified octonions for which imaginary units are
octonionic imaginary units multiplied by a commuting imaginary unit). The hyper-octonionic representation
leads to a proposal for how to extend the twistor program to the TGD framework.
How to achieve associativity in the fermionic sector?
In the fermionic sector an additional complication emerges. The associativity of the tangent- or
normal space of the space-time surface need not be enough to guarantee the associativity at the level of
Kähler-Dirac or Dirac equation. The reason is the presence of spinor connection. A possible cure could
be the vanishing of the components of spinor connection for two conjugates of quaternionic coordinates
combined with holomorphy of the modes.
1. The induced spinor connection involves sigma matrices in CP2 degrees of freedom, which for the
octonionic representation of gamma matrices are proportional to octonion units in Minkowski
degrees of freedom. This corresponds to a reduction of tangent space group SO(1,7) to G2.
Therefore octonionic Dirac equation identifying Dirac spinors as complexified octonions can
lead to non-associativity even when space-time surface is associative or co-associative.
2. The simplest manner to overcome these problems is to assume that spinors are localized at 2-D
string world sheets with 1-D CP2 projection and thus possible only in Minkowskian regions.
Induced gauge fields would vanish. String world sheets would be minimal surfaces in M4× D1⊂
M4× CP2 and the theory would simplify enormously. String area would give rise to an additional
term in the action assigned to the Minkowskian space-time regions and for vacuum extremals
one would have only strings in the first approximation, which conforms with the success of
string models and with the intuitive view that vacuum extremals of Kähler action are basic
building bricks of many-sheeted space-time. Note that string world sheets would also be
symplectic covariants.
Without further conditions gauge potentials would be non-vanishing but one can hope that
one can gauge transform them away in associative manner. If not, one can also consider the
possibility that the CP2 projection is a geodesic circle S1: symplectic invariance is considerably
reduced for this option since symplectic transformations must reduce to rotations in S1.
3. The first heavy objection is that the action would contain Newton's constant G as a fundamental
dynamical parameter: this is a standard recipe for building a non-renormalizable theory. The very
idea of TGD indeed is that there is only single dimensionless parameter analogous to critical
temperature. One can of course argue that the dimensionless parameter is hbarG/R^2, where R is the CP2
"radius".
The second heavy objection is that the Euclidian variant of the string action exponentially damps out
all string world sheets with area larger than hbar G. Note also that the classical energy of
Minkowskian string would be gigantic unless the length of string is of order Planck length. For
Minkowskian signature the exponent is oscillatory and one can argue that wild oscillations have
the same effect.
The hierarchy of Planck constants would allow the replacement hbar→ hbareff but this is not
enough. The area of a typical string world sheet would scale as heff, and the size of the CD and the
gravitational Compton lengths of gravitationally bound objects would scale as (heff)^1/2 rather than as
heff = GMm/v0, which is what one wants. The only way out of the problem is to assume T ∝ (hbar/heff)^2. This
is however un-natural for genuine area action. Hence it seems that the visit of the basic
assumption of superstring theory to TGD remains very short. In any case, if one assumes that
strings connect gravitationally bound masses, superstring models in the perturbative description are
definitely wrong as physical theories, as has of course become clear already from the landscape
catastrophe.
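The objection above can be condensed into a one-line scaling estimate (my reconstruction of the argument, assuming the standard area action with tension 1/ħeffG):

```latex
% Assume area action S = T A with the standard tension T = 1/(\hbar_{eff} G).
% The typical quantum area and size scale are then (c = 1)
A \sim \hbar_{eff} G
  \quad\Rightarrow\quad
L \sim \sqrt{A} \propto \hbar_{eff}^{1/2} ,
% whereas gravitational binding requires the gravitational Compton length
L \sim \frac{\hbar_{gr}}{m} = \frac{GM}{v_0} \propto \hbar_{eff} ,
  \qquad \hbar_{eff} = \hbar_{gr} = \frac{GMm}{v_0} .
% Consistency of the two scalings forces the tension to behave as
T \propto \frac{1}{\hbar_{eff}^{2}} .
```

This is just the mismatch stated in the text: an area quantum growing linearly in heff gives lengths growing only as heff^1/2, so the tension itself must carry the 1/heff^2 scaling.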
Is super-symmetrized Kähler-Dirac action enough?
Could one do without string area in the action and use only K-D action, which is in any case forced
by the super-conformal symmetry? This is the option I have indeed considered hitherto. The K-D equation
indeed tends to reduce to a lower-dimensional one: for massless extremals the K-D operator is
effectively 1-dimensional. For cosmic strings this reduction does not however take place. In any case,
this leads one to ask whether in some cases the solutions of the Kähler-Dirac equation are localized at lower-dimensional surfaces of the space-time surface.
1. The proposal has indeed been that string world sheets carry vanishing W and possibly even Z
fields: in this manner the electromagnetic charge of spinor mode could be well-defined. The
vanishing conditions force 2-dimensionality in the generic case.
Besides this, the canonical momentum currents for Kähler action, defining 4 imbedding space
vector fields, must define an integrable distribution of 2-planes to give a string world sheet. The
four canonical momentum currents Πkα = ∂LK/∂(∂αhk), identified as imbedding space 1-forms, can have
only two linearly independent components parallel to the string world sheet. Also the Frobenius
conditions, stating that the two 1-forms are proportional to the gradients of two imbedding space
coordinates Φi defining also coordinates at the string world sheet, must be satisfied. These conditions
are rather strong and are expected to select some discrete set of string world sheets.
2. To construct preferred extremal one should fix the partonic 2-surfaces, their light-like orbits
defining boundaries of Euclidian and Minkowskian space-time regions, and string world sheets.
At string world sheets the boundary condition would be that the normal components of canonical
momentum currents for Kähler action vanish. This picture brings to mind the strong form of
holography, suggesting that this might make sense, and also the solution of Einstein's equations with
point-like sources.
3. The localization of spinor modes at 2-D surfaces would follow from the well-definedness
of em charge, but one could have a situation in which the localization does not occur. For instance,
covariantly constant right-handed neutrino spinor modes at cosmic strings are completely delocalized, and one can wonder whether one could give up the localization inside wormhole
contacts.
4. String tension is dynamical and physical intuition suggests that induced metric at string world
sheet is replaced by the anti-commutator of the K-D gamma matrices and by conformal
invariance only the conformal equivalence class of this metric would matter, and it could even be
equivalent with the induced metric. A possible interpretation is that the energy density of Kähler
action has a singularity localized at the string world sheet.
Another interpretation, which I proposed years ago but gave up, is that in the spirit of the TGD
analog of AdS/CFT duality the Noether charges for Kähler action can be reduced to integrals
over string world sheet having interpretation as area in effective metric. In the case of magnetic
flux tubes carrying monopole fluxes and containing a string connecting partonic 2-surfaces at its
ends this interpretation would be very natural, and string tension would characterize the density
of Kähler magnetic energy. String model with dynamical string tension would certainly be a
good approximation and string tension would depend on scale of CD.
5. There is also an objection. For M4 type vacuum extremals one would not obtain any non-vacuum
string world sheets carrying fermions, but the successes of string models strongly suggest that
string world sheets are there. String world sheets would represent a deformation of the vacuum
extremal and far from string world sheets one would have vacuum extremal in an excellent
approximation. Situation would be analogous to that in general relativity with point particles.
6. The hierarchy of conformal symmetry breakings for K-D action should make string tension
proportional to 1/heff2 with heff=hgr giving correct gravitational Compton length Λgr= GM/v0
defining the minimal size of the CD associated with the system. Why should the effective string tension of
the string world sheet behave like (hbar/hbareff)^2?
The first point to notice is that the effective metric Gαβ, defined as hklΠkαΠlβ with the
canonical momentum current Πkα = ∂LK/∂(∂αhk), has dimension 1/L^2 as required. Kähler action
density must be dimensionless and, since the induced Kähler form is dimensionless, the canonical
momentum currents are proportional to 1/αK.
Should one assume that αK is a fundamental coupling strength fixed by quantum criticality to
αK ≈ 1/137? Or should one regard gK^2 as the fundamental parameter, so that one would have 1/αK =
hbareff/(4π gK^2) with a spectrum coming as integer multiples (recall the analogy with the inverse of
critical temperature)?
The latter option is in the spirit of the original idea stating that the increase of heff reduces
the values of the gauge coupling strengths proportional to αK so that the perturbation series
converges (the Universe is theoretician friendly). The non-perturbative states would be critical
states. The non-determinism of Kähler action implies that the 3-surfaces at the boundaries of the
CD can be connected by a large number of space-time sheets forming n conformal equivalence
classes. This option would give Gαβ ∝ heff^2 and det(G) ∝ 1/heff^2 as required.
7. It must be emphasized that the string tension has an interpretation in terms of gravitational coupling
only at the GRT limit of TGD, involving the replacement of the many-sheeted space-time with a
single-sheeted one. It can also have an interpretation as hadronic string tension or as the effective string
tension associated with magnetic flux tubes, telling the density of Kähler magnetic energy per
unit length.
Superstring models would describe only the perturbative Planck scale dynamics for emission
and absorption of heff/h = 1 on mass shell gravitons, whereas the quantum description of bound
states would require heff/h > 1 when the masses are large. Also the effective gravitational constant
associated with the strings would differ from G.
The natural condition is that the size scale of the string world sheet associated with the flux tube
mediating gravitational binding is G(M+m)/v0. By expressing the string tension in the form 1/T = n^2
hbar G1, n = heff/h, this condition gives hbar G1 = hbar^2/Mred^2, Mred = Mm/(M+m). The effective
Planck length defined by the effective Newton's constant G1, analogous to that appearing in the string
tension, is just the Compton length associated with the reduced mass of the system, and the string
tension equals T = [v0/G(M+m)]^2 apart from a numerical constant (2G(M+m) is the Schwarzschild
radius of the entire system). Hence the macroscopic stringy description of gravitation in terms
of strings differs dramatically from the perturbative one. Note that one can also understand why in
the Bohr orbit model of Nottale for the planetary system, and in its TGD version, v0 must be smaller by a
factor 1/5 for the outer planets than for the inner planets.
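The chain of identities above is easy to verify mechanically. A small numerical check (my own sketch; c = 1 natural units, arbitrary test values) that 1/T = n^2 hbar G1 with hbar G1 = (hbar/Mred)^2 and n = hgr/hbar = GMm/(v0 hbar) indeed collapses to [G(M+m)/v0]^2:

```python
import math

# Arbitrary test values in c = 1 natural units (illustration only).
G, M, m, v0, hbar = 1.3, 2.0, 0.7, 0.01, 3.0e-3

n     = G * M * m / (v0 * hbar)        # n = h_gr/hbar = GMm/(v0*hbar)
M_red = M * m / (M + m)                # reduced mass
hG1   = (hbar / M_red) ** 2            # hbar*G_1 = (Compton length of M_red)^2
inv_T = n ** 2 * hG1                   # 1/T = n^2 * hbar * G_1

closed_form = (G * (M + m) / v0) ** 2  # claimed closed form: [G(M+m)/v0]^2

assert math.isclose(inv_T, closed_form, rel_tol=1e-12)
print("1/T =", inv_T)  # equals [G(M+m)/v0]^2
```

The algebra behind the check is simply n·(hbar/Mred) = GMm/(v0·Mm/(M+m)) = G(M+m)/v0, so the M and m dependence cancels into the total mass.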
Are 4-D spinor modes consistent with associativity?
The condition that octonionic spinors are equivalent with ordinary spinors looks rather natural but in
the case of Kähler-Dirac action the non-associativity could leak in. One could of course give up the
condition that the octonionic and ordinary K-D equations are equivalent in the 4-D case. If so, one could see K-D action as related to non-commutative and maybe even non-associative fermion dynamics. Suppose
that one does not.
1. K-D action vanishes by the K-D equation. Could this save us from non-associativity? If the spinors are
localized at string world sheets, one obtains just the standard stringy construction of conformal
modes of the spinor field. The induced spinor connection would have only the holomorphic
component Az. The spinor mode would depend only on z, but the K-D gamma matrix Γz would annihilate
the spinor mode so that the K-D equation would be satisfied. There are good hopes that the
octonionic variant of the K-D equation is equivalent with that based on ordinary gamma matrices,
since the quaternionic coordinate reduces to a complex coordinate, the quaternionic gamma
matrices reduce to complex gamma matrices, and the sigma matrices are effectively absent by
holomorphy.
2. One can also consider the 4-D situation (maybe inside wormhole contacts). Could some form of
quaternion holomorphy allow one to realize the K-D equation just as in the case of superstring
models, by replacing the complex coordinate and its conjugate with a quaternion and its 3 conjugates?
Only two quaternion conjugates would appear in the spinor mode, and the corresponding
quaternionic gamma matrices would annihilate the spinor mode. It is essential that in a suitable
gauge the spinor connection has non-vanishing components only for the two quaternion conjugate
coordinates. As a special case one would have a situation in which only one quaternion
coordinate appears in the solution. Depending on the character of the quaternion holomorphy, the
modes would be labelled by one or two integers identifiable as conformal weights.
Even if these octonionic 4-D modes exist (as one expects in the case of cosmic strings), it is
far from clear whether the description in terms of them is equivalent with the description using the
K-D equation based on ordinary gamma matrices. The algebraic structure however raises hopes
about this. The quaternion coordinate can be represented as a sum of two complex coordinates as
q = z1 + Jz2, and the dependence on two quaternion conjugates corresponds to the dependence on the
two complex coordinates z1, z2. The condition that two quaternionic complexified gammas
annihilate the spinors is equivalent with the corresponding condition for the Dirac equation
formulated using 2 complex coordinates. This holds for wormhole contacts. The possible
generalization of this condition to Minkowskian regions would be in terms of the Hamilton-Jacobi
structure.
Note that for cosmic strings of the form X2 × Y2 ⊂ M4 × CP2 the associativity condition for the S2
sigma matrix, without assuming localization, demands that the commutator of the Y2 imaginary
units is proportional to the imaginary unit assignable to X2, which however depends on the point of
X2. This condition seems to imply a correlation between Y2 and S2 which does not look physical.
Summary
To summarize, the minimal and mathematically most optimistic conclusion is that Kähler-Dirac
action is indeed enough to understand gravitational binding without giving up the associativity of the
fermionic dynamics. Conformal spinor dynamics would be associative if the spinor modes are localized
at string world sheets with vanishing W (and maybe also Z) fields guaranteeing well-definedness of em
charge and carrying canonical momentum currents parallel to them. It is not quite clear whether string
world sheets are present also inside wormhole contacts: for CP2 type vacuum extremals the Dirac
equation would give only right-handed neutrino as a solution (could they give rise to N=2 SUSY?).
Associativity does not favor fermionic modes in the interior of space-time surface unless they
represent right-handed neutrinos for which mixing with left-handed neutrinos does not occur: hence the
idea about interior modes of fermions as giving rise to SUSY is dead whereas the original idea about
partonic oscillator operator algebra as SUSY algebra is alive and well. Evolution can be seen as a
generation of gravitationally bound states of increasing size demanding the gradual increase of h_eff,
implying the generation of quantum coherence even in astrophysical scales.
The construction of preferred extremals would realize strong form of holography. By conformal
symmetry the effective metric at string world sheet could be conformally equivalent with the induced
metric at string world sheets. Dynamical string tension would be proportional to (hbar/heff)^2 due to the
proportionality αK ∝ 1/heff, and would correctly predict the size scales of gravitationally bound states for
hgr = heff = GMm/v0. Gravitational constant would be a prediction of the theory, expressible in terms
of αK, R^2 and hbareff (G ∝ R^2/gK^2).
In fact, all bound states - elementary particles as pairs of wormhole contacts, hadronic strings,
nuclei, molecules, etc. - are described in the same manner quantum mechanically. This is of course
nothing new since magnetic flux tubes associated with the strings provide a universal model for
interactions in TGD Universe. This also conforms with the TGD counterpart of AdS/CFT duality.
See the chapter Recent View about Kähler Geometry and Spin Structure of "World of Classical Worlds"
of "Physics as infinite-dimensional geometry".
posted by Matti Pitkanen @ 9:28 PM
37 Comments:
At 2:19 AM,
Ulla said...
What about this? http://www.sciencealert.com/new-type-of-chemical-bond-discovered
At 4:05 AM,
[email protected] said...
The description of chemical bonds using magnetic flux tubes or associated strings (by
generalisation of AdS/CFT duality about which I talk also as quantum classical
correspondence) applies to all kinds of bondings - say bonds between nucleons in nuclear
string model or colour magnetic bonds between quarks inside hadrons.
It is not possible to say anything about something as specific as a particular kind of chemical
bond: it is of course quite possible that also vibrating bonds are possible.
What is marvellous is that by fractality the same extremely general description applies in
all length scales.
Second big step forward is that h_eff is not a firm prediction of TGD and without it
there is no hope of describing bound states: be they gravitational, chemical, hadronic, or
nuclear. h_eff hierarchy is not only possible, it is necessary.
It is not an accident that bound states have been the fundamental weakness of quantum
field theories: the h_eff hierarchy makes it possible to apply the perturbative approach, but for h_eff
rather than h.
Depending on one's tastes one can interpret h_eff = n*h as an effective Planck constant or
as a genuine one.
At 1:51 PM, Ulla said...
I think this is a typo:
Second big step forward is that h_eff is not a firm prediction of TGD and without it there
is no hope of describing bound states...
the first not shall not be there?
At 7:19 PM, [email protected] said...
Yes, modern text editing programs often think they know better. My intention was
"is now" but the editor decided "is not".
At 1:54 AM,
Ulla said...
What happens with hbar and the cosmological constant in an expanding universe? I know that
the c.c. is not in TGD, but for mainstream thinkers this leads to the incredible 'error' in the
models, interpreted as dark energy? Not even Planck's constant can be constant any more,
because it depends on relations?
At 4:29 AM,
[email protected] said...
The hierarchy of Planck constants means a hierarchy of bound states of increasing size.
The longer the fermionic string/magnetic flux tube responsible for the bound state, the
larger the value of h_eff.
The generation of bound states (gravitational and also other kinds) of increasing size is
due to cosmic expansion, which at space-time level is due to the zero energy ontology
involving causal diamonds which are analogous to Big Bang followed by Big Crunch.
CDs themselves increase in size during the sequence of state function reductions to
fixed boundary of CD as h_eff increases.
What is somewhat unexpected but understandable in terms of NMP is that the cosmic
expansion automatically supports evolution as increase of h_eff and formation of more
complex and bigger bound states.
What also looks strange is that at every step in which h_eff increases, conformal
symmetry breaks down and some conformal gauge degrees of freedom transform to
physical ones. One goes off from quantum criticality. This is a spontaneous process,
and means evolution.
Originally I thought that evolution would mean becoming critical. The solution of
paradox is that thermodynamical criticality (second law) is not the same as quantum criticality
(NMP implying the second law at the level of ensembles, with entropy meaning a different
thing). Approaching thermodynamical criticality could be going off from quantum
criticality. Complicated!
At 4:35 AM,
[email protected] said...
To Ulla: I have pondered cosmological constant a lot.
In TGD it does not appear at the fundamental level, but in the GRT description it is the only
manner to describe the accelerated expansion. The GRT limit of TGD should give
cosmological constant as a phenomenological parameter.
At more fundamental level the magnetic energy density of magnetic flux tubes
carrying monopole fluxes and magnetic tension as negative pressure give rise to a
description of dark energy. I have been waiting for years for some colleague to finally
decide to discover this explanation of dark energy;-).
At 11:49 AM,
Ulla said...
Can it be that the existence of an ad hoc cosmological constant hides away the explanation
for dark energy?
At 7:45 PM, [email protected] said...
Cosmological constant term in Einstein's equations would modify them and actually
mean that there would be no real vacuum energy density. This would save us from the negative
pressure term.
Its introduction can of course hide the source of dark energy.
The second interpretation is that the Einstein equations in their original form are true, but the energy
momentum tensor of matter contains a term which gives rise to dark energy and negative
"pressure". The latter interpretation is more flexible and for instance allows a time
dependent cosmological constant.
The resulting equations are the same, so that distinguishing between the options is not easy.
Vacuum energy density can however be modelled, and the cosmological constant corresponds
to only a particular density.
I believe that Einstein's equations - with or without a genuine cosmological constant - must come from a "microscopic" theory (strange attribute in cosmology;-)), as must also the
standard model. String models were one attempt in this direction but the sign of Lambda
was wrong, as were many other things.
At 7:40 AM,
Leo Vuyk said...
May I support the combination of a microscopic theory with the SM. However, IMHO,
GR will face a hard time without a strong equivalence principle and other stuff.
https://www.flickr.com/photos/93308747@N05/?details=1
and
https://tudelft.academia.edu/LeoVuyk/Papers
At 6:34 PM, [email protected] said...
I think that the basic problem is that the followers of Einstein refuse to see the
problems of GR. Loss of Poincare invariance - even the notion of broken invariance is
broken in the strict mathematical sense.
The idea about extending Poincare transformations to General Coordinate Invariance,
which would define a gauge symmetry, is simply wrong. Noether of course lurks in the background.
Equivalence Principle in the strong sense - I understand it as the identity of gravitational
and inertial masses and the possibility to talk about them separately - is the second key problem.
This may sound like academic talk in the ears of those who have not tried to build a unified
theory extending GRT. When one does it, one learns how essential the understanding of
these problems is and how big holes in the understanding actually exist, suggesting that
something is badly wrong in GRT.
A lot of sloppy thinking is allowed since one can always refer to Einstein or some
other big name without arguing on the basis of content.
At 8:39 PM, Anonymous said...
"Square of CP2 radius fixed by quantum criticality." brings to mind the basic notion of
quadrance that substitutes for the notion of length in purely algebraic rational geometry. The more
general philosophical implication is that (dimensional) mathematical objects cannot and
should not be viewed atomistically, separate from the higher dimensional observation
space of a dimensional object, perhaps as simply as a +1 dimension of observation space
for any dimensional object. Which of course would also imply that the mathematical
"imaginary" observation space of the 8D imbedding space would also have a higher number
of dimensions. The deep relation of the imaginary number and +1 dimensionality is also
intriguing. So no surprise that QM cannot do without the notion of n-dimensional Hilbert
space, and your suggestion about Hilbertian number theory. What is not at all clear is the
meaning - theoretically contextualized and/or generalized (scale invariance?) - of
"quantum criticality".
At 8:50 PM, Anonymous said...
To make the main point as clearly as I can, all mathematical objects are "internal" objects
by definition, inside Platonia of mathematical imagination aka "God's Eye". Hence, any
mathematical object can be meaningful and fully comprehended only from the imaginary
higher dimension of the object. The dynamics of combining empirical "here-and-now"
participatory nature of "material" experiencing and holistic God's Eye of mathematical
imagination in communicable way is the main inspiration and challenge of TGD from
what I have gathered so far. If fully accepted and rigidly applied, this approach has of
course dynamic implications for any general measurement theory.
At 9:04 PM, Anonymous said...
PS: the statement that "thermodynamics is the square root of general measurement theory"
is of course a fully expected and now self-evident implication of the generalized notion of the +1D
"quadrancy" requirement to be able to view any and all mathematical objects. Thank you and
sorry <3
At 10:54 PM, [email protected] said...
To Anonymous: "Quantum theory is the square root of thermodynamics" is what I meant.
This is not at all self-evident to me and implies generalisation of the notion of S-matrix
replacing it with a collection of M-matrices between positive and negative energy parts of
zero energy states. They can be seen as "complex" square roots of density matrices (real
diagonal square root of density matrix multiplied by a unitary S-matrix). M-matrices would
organise into a unitary U-matrix between zero energy states.
At 11:00 PM, [email protected] said...
To Anonymous: "Square of CP2 radius fixed by quantum criticality". I am not at all sure
whether I have said this;-). I always talk about ratios only: it is in the spine of every
theoretical physicist! The values of dimensional quantities depend on the units chosen! I might
have said that the ratio hbarG/R^2, which is dimensionless, is fixed by quantum
criticality;-).
Imbedding space could indeed have infinite-dimensionality in number theoretic sense:
these dimensions would be totally outside of physical measurement since they would
correspond to number theoretical anatomy of numbers due to existence of infinite number
of real units realised as ratios of infinite primes (and integers). As real numbers all these
copies of the real numbers would be identical.
At 11:04 PM, [email protected] said...
"Seeing from a higher dimension" certainly gives information about a geometric
object. But it also creates information. A circle is just a circle. When it is imbedded in 3-D space
it can be knotted in an arbitrarily complex manner. The same holds for space-times as surfaces:
without imbedding one would have no standard model interactions, just gravitation!
An individual without a social environment is much less than an individual with it!
At 4:01 AM,
Anonymous said...
"A circle is a circle is a circle" approach presupposes observation independent objectivist
metaphysics, which any theory of consciousness can't do, while asking how - where and
when etc - thought-observation events of mathematical objects happen. Physicists have
an age-old bad habit of thinking "objectively"; what is of interest in TGD is the challenge to
consistently integrate consciousness at all levels of theory-forming. That means also
letting go of the role of passive observation (in Greek: theory) and accepting dynamic
participation in the drama (Greek: fronesis, sometimes translated as 'practical reason')
At 4:16 AM,
Anonymous said...
Searching for 'quantum criticality (QC)' I found this explanation:
http://www.physics.rutgers.edu/qcritical/frontier3.htm
The notion of 'emergent matter' from QC superposition strongly suggests that a dynamic
participatory consciousness theory is involved in these observation events, in the form of
mathematical imagination. But maybe you could expand on what you mean by 'quantum
criticality'; are you thinking about some kind of generalization that can be communicated,
that fits with other pieces of the puzzle?
At 8:22 AM,
[email protected] said...
To Anonymous: Seeing the observer as an active part of the physical system rather than as an objective
outsider is indeed the whole point of the TGD view. I have worked hard to understand what
I might possibly mean by criticality;-). I sometimes find it difficult to understand what I
am saying;-).
Criticality is an intuitively clear notion: you are at a saddle or at the top of a hill where big
things can happen with a very small perturbation. Returning back to criticality is what
biosystems are able to achieve: homeostasis might be seen as this ability not to fall down,
where a dead system would fall immediately.
My recent big surprise was that the increase of Planck constant seems to happen
spontaneously and generates potentially conscious information. Living systems are able to
return to smaller h_eff: for instance, magnetic flux tubes connecting reacting biomolecules
shorten because of the reduction of h_eff and make it possible for the reaction products to find
each other. Maybe getting rid of Karma's cycle means letting go and allowing h_eff to
increase;-).
At 3:11 AM,
Anonymous said...
"Top of the hill" metaphor associates with the 'Law of diminishing returns', well known
to gardeners, and the ideal state of growth factors. In that sense a quantum jump to a higher
degree of Planck scale makes... sense ;).
"Simplest" or deepest mathematical generalization I can think of in this regard is the
extra dimensional curve route from 1 to -1 in Euler's Identity, and the imaginary axis (rational
axis = 0) of the complex plane (by which I mean Gaussian rationals, as I'm highly skeptical of
"real" numbers). The poetic "coincidence" of the square root of -1 being called "imaginary" is
remarkable: letting go of karmic rational (or "real", if you insist) valuation and letting
imagination flow freely.
I just found Weyl's little book on Symmetry, which begins with a reference to the
philosophical or theological debate about left and right between Leibniz
(relativism) and Descartes (reifying objectivism). What kind of symmetry break, if any
kind, is the "identity" of the square root of 1, and the extra dimensionality of the square root of -1?
At 4:50 AM, Anonymous said...
Re Einstein's followers, GRT and relativistic field theory(?), could this article on Noether's
second(!) theorem and Weyl bear helpful meaning?
http://www3.nd.edu/~kbrading/Research/WhichSymmetryStudiesJuly01.pdf
At 4:55 AM, [email protected] said...
To believe in continuous number fields (both reals and the various p-adic number fields
and the various algebraic extensions of the latter) is to me like believing in God.
There is no way to prove that transcendentals exist (the term is certainly not an
accident!), but without the assumption of their existence physics would become an
aesthetic nightmare, unless you happen to be a practical person who loves doing numerics.
The imaginary unit is a representation of reflection, or of rotation by pi, as you wish. In Kähler
geometry, its action as a purely algebraic entity is geometrized/tensorized so that it no longer
looks so mystic.
At 5:41 AM, Anonymous said...
I DO believe I am a God :). Continuity of the rational number field is not only easy to
prove, it is intuitively clear. The ability to believe otherwise is proof of the limitless ability of
self-deception. The idea that irrationals are "gaps" in relation to the perfectly dense rational
line is based on the false expectation that irrationals can fit a 1-dimensional line like natural
numbers and rationals. A well-known physics blogger once imagined a Hilbertian or
n-dimensional number theory space where also irrationals find their natural and exact place.
Again, let's remember that the ability to THINK about observables like a continuous line
requires a +1-dimensional "observation space" aka "mind", and let's not get confused with
definitions before we know what we are doing when we think math.
Transcendental metanumbers are also very much loved and admired as very special
(meta)rationals. Thanks for the Kähler hint for a generalization of i.
I'm sure we can agree that thinking is a kind of fluid. Here's an excellent clip on
Lagrangian vs. Eulerian approaches to fluids like thinking etc:
https://www.youtube.com/watch?v=zUaD-GMARrA
The Lagrangian might get the girl, or get punished for obsessive stalking, but the Eulerian has a
better chance of gnothi seauton.
At 12:55 PM, Anonymous said...
Remember the Pythagorean theorem and how it gave birth to relations that are not ratios of
whole numbers and do not fit the 1D continuum of the rational number _line_? The Pythagorean
theorem does not say anything about 1D _lengths_ (they don't exist for it) but about 2D _areas_:
quadrance 1 + quadrance 1 = quadrance 2 (the diagonal of the unit square).
The claim that the rational _line_ is not continuous is derived from the fundamental category
error of claiming that SQR2 is a "gap" missing from the rational number line, even though
there is nothing that can be reduced to a 1D length about a fundamentally 2D object like the
diagonal of a square. The information content of an inherently 2D area cannot be contained by a
1D number line, if we hold to the ideals of rigour and honesty.
More: https://www.youtube.com/watch?v=REeaT2mWj6Y
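For what it is worth, the area-level statement above survives a purely rational check, while the corresponding "length" does not. A small sketch in exact rational arithmetic (the search bound of 200 denominators is arbitrary):

```python
from fractions import Fraction

# Pythagoras stated over areas ("quadrances"): for the unit square's diagonal,
# quadrance 1 + quadrance 1 = quadrance 2, exactly, within the rationals.
q1, q2 = Fraction(1), Fraction(1)
q3 = q1 + q2
assert q3 == 2

# But no rational p/q squares to 2: a brute-force check over small denominators
# only ever brackets 2, never hits it (the classical irrationality of sqrt(2)).
hits = [Fraction(p, q) for q in range(1, 200) for p in range(1, 2 * q)
        if Fraction(p, q) ** 2 == 2]
assert hits == []
```

The exhaustive check is of course no proof, only an illustration; the actual proof is the standard parity argument on p^2 = 2 q^2.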
At 7:26 PM, [email protected] said...
I remember that we debated about this, and I can only say that I disagree. Rationals and
their algebraic extensions emerge at the level of cognition and measurement, where one
always has finite measurement/cognitive resolution: the intersection of cognitive and
sensory worlds. At the level of the geometry describing the perceived world, continuous number
fields are indispensable for practical description, and I think they really exist.
At 10:16 AM, Anonymous said...
It's good to disagree! The notions of 'continuity', 'continuum', 'field' and even 'number'
are not that clear and require careful philosophical thinking. When we talk about the
"continuous number fields" of classical physics, are we talking about 'numbers' or just
practical approximations (intervals) in some limited form of (all possible) Cauchy sequences?
And how do these approximate intervals of classical fields relate to the
operators of quantum fields? Also, these approximate intervals of Cauchy sequences have a
strong flavor of 'ontological uncertainty' about them. Or do you mean by "continuous
number field" something other than 'all possible Cauchy sequences'?
PS: at the end of this lecture Norman comes close to an idea that is close to TGD:
(classical) physical finite fields based on some high prime:
https://www.youtube.com/watch?v=Y3-wqjV6z5E&list=PLIljB45xT85BfcS4WHvTIM7E-ir3nAOf&index=24
At 8:24 PM, [email protected] said...
I see the entire spectrum of number fields, from finite fields serving as convenient
approximation tools, to rationals and their algebraic extensions, to reals, p-adic numbers,
complex numbers, quaternions, and octonions, as a physically very natural structure. To cut off
everything after finite fields would destroy the whole of TGD, so I
am a little bit worried about such attempts;-).
There is a lot to destroy by keeping only finite fields. One can go even beyond reals as
they are defined. One can introduce an infinite hierarchy of infinite
primes/integers/rationals and form a hierarchy of ratios of these numbers behaving like real
units. This implies that every point of the real axis becomes an infinite-dimensional space which could
represent the entire Universe: algebraic holography, or Brahman = Atman mathematically.
All this is of course something totally outside sensory perception and simple bookkeeping
mathematics. What is however amazing is that the hierarchy in question has a very
"physical" structure: a supersymmetric arithmetic quantum theory second-quantized again
and again. Even more amazing, infinite primes analogous to bound states (something
which is the Achilles heel of QFTs) emerge naturally, so that infinite primes could pave
the way to the construction of quantum theories and also of TGD. Many-sheeted space-time
with its hierarchical structure could directly correspond to it.
All this I would lose, besides TGD, if I accepted Norman's view about
numbers. My view is that one should not take the limitations of human thinking as the
criterion for what is possible mathematically. We also have the ability to imagine within the
confines of logic. For me this is the spiritual view, as opposed to the "practical" view
accepting only discrete structures, a view which, rather ironically, does not work in
practice.
At 4:15 AM, Anonymous said...
As a pure mathematician, Norman has a problem with the notion of infinite sets, and set
theory as a whole, and hence would probably not accept infinite primes either. The applied math
of engineers can be more relaxed, but there is a category mistake if we identify the
approximations of applied math with rigorous algebraic definitions. As reals are defined,
or rather not defined (what do YOU mean by a definition of reals?!), there is a serious
logical problem in drawing ontological conclusions ("is") from the epistemic limitations of
applied math ("should").
Yes, we can and do postulate infinite fields of Cauchy sequences, create such imaginary
number spaces with our minds (cf. "Eulerian" in the video clip above), but their arithmetic
ceases to be algebraic, as there is in some sense too _much_ information (cf. 1/3 and
0.333...), at least on the "real" side of the comma. There is a kind of uncertainty principle
_created_ by the postulation of Cauchy sequences on the "real" side, which would not
necessarily emerge in a purely algebraic approach.
When you say that "every point of real axis becomes infinite-D space", I'm reminded
again that the space of Cauchy sequences, called a "line" for no good reason, is _said_ to
contain inherently multidimensional algebraic roots ("irrationals"), and mostly infinite
sequences that by some theorem are considered transcendentals. Giving up the invalid,
category-mistaken notion of the "real _line_" as a 1D continuum does not mean
abandoning the sea of Cauchy sequences and using that number space for what it is good
for; it means that multidimensional algebraic relations such as SQR2 etc. are not
_identified_ with their approximates in the Cauchy sea. The "Lagrangian" of an algebraic
relation and the "Eulerian" observation space of the Cauchy sea are not identified, but
understood for their mutual purposes.
I agree with the "ability to imagine within confines of logic". The notion of the real _line_ just
does not fit the criteria of logic any more than a square circle does. Nothing logically solid can be
constructed from an object that has not been logically derived, but only by adding inherently
multidimensional algebraic relations etc. to the rational line and calling the creation a 'line'.
I believe you had good and solid motivations for searching for "quantum math", and I
hope this "disagreement" nourishes that interest. :)
At 7:43 AM, [email protected] said...
Just a short comment about Cauchy sequences. Infinite primes are a purely algebraic
notion, as is the infinitude of units obtained as ratios of infinite rationals. No limits are
involved, unlike for Cauchy sequences. In p-adic topology their norm is unity for any finite
prime and also for smaller infinite primes. Infinity is in the eye of the topologist.
This notion of infinity is also different from that of Cantor, which relies on
imagined infinite limits obtained by adding 1 again and again: n --> n+1. Now one has
explicit formulas for infinite primes: no limiting procedures.
The notion generalises to algebraic extensions of rationals, which also allow the notion of
primeness.
My view is that no number field alone is enough: all are needed. Reals and all p-adic
number fields are like pages of a book glued along common rationals. Algebraic extensions
give rise to more pages, so the book becomes rather thick. This structure is the natural
starting point for a physics which would describe also the correlates of cognition and intention.
At 2:32 AM, Anonymous said...
Yes, I also noticed that infinite primes do not depend on the notion of the real line, or even
the infinite sets of set theory, but can be classified as algebraic in type.
The book metaphor for the Cauchy number space brings to mind a sea of white pages, where
letters and words of symbolically meaningful interval approximations for algebraic relations
can be found and written. On the other hand, the book metaphor does not reveal
the "Russian doll" structure of Cauchy exponentiality and the zero state of Euler's doubly
infinite identity. The Finnish expression for the essence of the exponent, 'kertoa itsellään', can be
translated both as 'multiply by itself' and 'narrate by itself'. :)
Intuitively the exponential sea of all possible Cauchy sequences has a fractal
dimensional structure, but my technical skills are not enough to tell whether it can be assigned an
exact Hausdorff dimension value and, if so, what.
(http://en.wikipedia.org/wiki/Hausdorff_dimension)
At 4:52 AM, [email protected] said...
As far as real numbers are concerned, they are equivalence classes of Cauchy
sequences converging to the real number. All information about the individual Cauchy
sequences disappears in the definition of a real number.
One can also consider the sequence of differences of two subsequent points in the
sequence, and the sequences could be regarded as a real space with the obvious inner product, if
one allows arbitrarily large differences and all kinds of wanderings before the convergence to
the real number.
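The equivalence-class picture can be illustrated with two quite different rational Cauchy sequences for sqrt(2): Newton iterates and continued-fraction convergents. Their termwise difference tends to zero, so they define the same real number. A sketch in exact rational arithmetic (the tail indices 6 and 10 are arbitrary):

```python
from fractions import Fraction

def newton_seq(n):
    """Rational Newton iterates x -> (x + 2/x)/2 converging to sqrt(2)."""
    x, out = Fraction(2), []
    for _ in range(n):
        x = (x + 2 / x) / 2
        out.append(x)
    return out

def convergent_seq(n):
    """Continued-fraction convergents of sqrt(2) = [1; 2, 2, 2, ...]."""
    hm, km, h, k, out = 1, 0, 1, 1, []
    for _ in range(n):
        hm, km, h, k = h, k, 2 * h + hm, 2 * k + km
        out.append(Fraction(h, k))
    return out

a, b = newton_seq(6)[-1], convergent_seq(10)[-1]

# Different sequences, same equivalence class: the difference is tiny,
# and both squares approach 2 as the tail index grows.
assert abs(a - b) < Fraction(1, 10 ** 6)
assert abs(a * a - 2) < Fraction(1, 10 ** 6)
assert abs(b * b - 2) < Fraction(1, 10 ** 6)
```

No individual sequence is "the" number: only the common limit matters, which is exactly the information loss described above.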
At 1:21 PM, Anonymous said...
I assume that in the above, by "(definition of) real number" you mean: irrational number. :)
The Cauchy sequences of just negative exponents form a metric space because "the continued
fraction expansion of an irrational number defines a homeomorphism from the space of
irrationals to the space of all sequences of positive integers". So as a model for a 'continuous
field', Cauchy sequences of negative powers are not more interesting than the field of natural
numbers.
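The quoted correspondence can at least be sampled: the continued fraction of sqrt(n) is computable with exact integer arithmetic (valid for non-square n), and sqrt(2) gives the purely periodic sequence [1; 2, 2, 2, ...]:

```python
import math

def cf_sqrt(n: int, terms: int) -> list:
    """First `terms` continued-fraction digits of sqrt(n), n not a perfect square."""
    a0 = math.isqrt(n)
    m, d, a, out = 0, 1, a0, [a0]
    for _ in range(terms - 1):
        m = d * a - m          # standard recurrence for quadratic irrationals
        d = (n - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

assert cf_sqrt(2, 8) == [1, 2, 2, 2, 2, 2, 2, 2]   # sqrt(2) = [1; 2, 2, ...]
assert cf_sqrt(3, 7) == [1, 1, 2, 1, 2, 1, 2]      # sqrt(3) = [1; 1, 2, 1, 2, ...]
```

The eventual periodicity seen here is special to quadratic irrationals (Lagrange's theorem); a generic irrational maps to an arbitrary infinite sequence of positive integers, which is the content of the homeomorphism quoted above.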
As you know, the whole of the Cauchy space that includes also the p-adic positive exponents
is much more interesting, and here's some interesting discussion on Euler's doubly infinite
identity:
http://math.stackexchange.com/questions/669078/eulers-doubly-infinite-geometricseries
The second answer, relating the identity to a Sobolev-like space with a Hilbert-space norm,
goes over my head, but hopefully not yours.
At 6:49 PM, [email protected] said...
I meant all reals. One can of course also have Cauchy sequences which converge to a
rational.
What in my opinion matters is the convergence point of CSs, not the sequences as such:
one identifies a real as an equivalence class of CSs.
I am not a specialist in the technical details of functional analysis, Sobolev spaces, etc. I am
more interested in the notion of number itself from the point of view of physics and
cognition. The technique of rigorization (I cannot avoid the association with rigor mortis;-) is
the task of mathematicians.
I did not understand your argument about CSs of negative powers. They are a very
special case of CS, and the limit of the power is usually 0, 1, or infinity depending on
whether the initial point is smaller than, equal to, or larger than 1. For negative x one has the
limits 0 or ±infinity, or a cycle hopping between -1 and +1.
Euler's doubly infinite identity does not make sense to me except formally, and I do not see it
as a gateway to new math. Either (or both) of the two series involved does not converge
for reals. It fails to converge also for p-adics: if x has norm smaller than 1, then 1/x has
norm larger than one, and if the norm is 1 for both, then both series fail to converge.
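The convergence claims are easy to check numerically with an elementary p-adic norm on the rationals. A sketch using only the standard definition |x|_p = p^(-v_p(x)):

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_norm(x, p):
    x = Fraction(x)
    return Fraction(0) if x == 0 else Fraction(1, p) ** vp(x, p)

p, x = 3, Fraction(3)
assert p_norm(x, p) < 1 < p_norm(1 / x, p)   # |3|_3 < 1 while |1/3|_3 > 1

# Partial sums of sum_{n} x^n become 3-adically Cauchy: consecutive gaps shrink.
S = [sum(x ** k for k in range(n)) for n in range(1, 12)]
gaps = [p_norm(S[i + 1] - S[i], p) for i in range(10)]
assert all(gaps[i + 1] < gaps[i] for i in range(9))

# The partial sums approach the formal value 1/(1-3) = -1/2 in the 3-adic norm.
assert p_norm(S[-1] + Fraction(1, 2), p) == Fraction(1, p) ** 11
```

So sum 3^n genuinely converges 3-adically (to -1/2) while the mirrored series in 1/3 does not, which is exactly the either-or obstruction stated above.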
At 4:01 AM, Anonymous said...
So sorry, I don't know why I thought of and called the numerator exponents for the real-side
decimal extensions "negative". Probably because I was also thinking of some kind of
zero-state-ontology relation of Euler's doubly infinite identity (EDII), and I associated the
"denominator" p-adic side with 'positive' and the "numerator" real side with 'negative'.
I believe that in Euler's mind EDII is/was intimately tied with his identity and proof
concerning pi, e, i, 1 and 0. I interpret EDII as a number theoretical presentation of the
maxim "as above, so below", and in that sense a metamathematical equation of "if this
arises, that arises". Or: inner spaces of the numerator power series dividing one, and "outer"
inclusive spaces of the denominator power series. This Russian doll relation between p-adic
and real feels to me more significant than the book metaphor.
As said, the (Hindu-Arabic) number fields for decimal (or any natural number base)
extensions for reals and also p-adics are homomorphic with the field of all natural numbers,
and the differences, or meaningful relations, are said to lie in different notions of "distance". This
is how (the space in which) we have been taught to think about numbers.
The notion of distance is, however, very problematic and baffles me, as it is inherently
tied to the notion of 'length'; and as long as we do and think math inside some
dimensional (orthogonal) coordinate system, the notion of length is valid only in the context of a
1D line, the notion of area only in the context of a 2D plane, the notion of volume only
in the context of 3D space, etc. This level of rigor (...mortis ;D) is IMHO highly non-trivial in
order not to commit category errors at the most basic level, when questioning cognition and
numbers in a philosophically sincere way.
As dimensionality is inherent already at the level of exponentially multiplying or
dividing a number by itself, the 1D notion of "distance", which gives different meanings to the
(natural, real, p-adic etc.) homomorphic number fields that we think of as 2D planes,
needs to be reconsidered. I believe Norman is on the right track with quadrance as the
most basic level of mathematical, aka coordinated, measuring for observers like us, and
with the algebraic generalizations of quadrance into higher dimensions. This would mean
in some sense letting go of karmic ties (investing more and more energy in self-deception)
to what we have been authoritatively taught to believe, and letting heff jump to a higher
level of number theoretical self-comprehension. Greek 'a-letheia' means letting go of the
active ignorance of feeding energy to attachments and letting the world come true by itself...
in the baby steps of Tao... ;)
At 12:02 PM, Anonymous said...
One way to think of EDII is that both sides converge to half of negative one. The
symmetry of multiplication and division is deeper than this or that entangled property of
this or that number system.
The simplest level of number theory is base one, and generating natural numbers in
base one does not differentiate between multiplication and division; there is no either-or
choice between the inclusive whole 1 that gets divided and the atomistic 1 that gets multiplied.
Both movements generate identically endless strings of 1's, the platonic 'hen kai agathon'.
This unity gets hidden and forgotten when we invent no-one (aka "zero") and start playing
with base 2, must invent rules to place 1 and 0 in a partially coherent way (e.g. Boolean
rules, Peano axioms, etc.), and are faced with a constant barrage of number theoretical
choices similar to and including "is it a wave or a particle", "which property of the particle is
entangled in this Bell measurement", etc. In the entangled state of base '0, 1', aka the
2-dimensional base, the choice of not choosing is no longer present.
This is what the thought of EDII brought me to contemplate now, and you may consider
this an empirical example of an untrained mathematical cognition having an observation
event, an 'anamnesis'. What I really can't tell now is whether this was a movement of heff in the
context or limit of base '1, 0', and if so, from 1 to 0 or from 0 to 1... :)
At 12:25 PM, Anonymous said...
Dedekind cuts
http://en.m.wikipedia.org/wiki/Dedekind_cut
Farey sequences/series have relations to the Riemann hypothesis too.
--Stephen
03/07/2015 - http://matpitka.blogspot.com/2015/03/is-formation-of-gravitationalbound.html#comments
Is the formation of gravitational bound states impossible in superstring models?
I decided to lift from a previous posting an argument allowing one to conclude that superstring
models are unable to describe macroscopic gravitation involving the formation of gravitationally bound
states. Therefore superstring models cannot have the desired macroscopic limit and are simply wrong. This
is of course reflected also by the landscape catastrophe, meaning that the theory ceases to be a theory in
macroscopic scales. The failure is not only at the level of superstring models: it is at the level of
quantum theory itself. Instead of a single value of Planck constant one must allow the hierarchy of Planck
constants predicted by TGD. My sincere hope is that this message could gradually leak through the iron
curtain to the ears of the superstring gurus.
The superstring action has a bosonic part proportional to the string area. The proportionality constant is the
string tension, proportional to 1/(hbar G), and is gigantic. One expects only strings with length of the order of
the Planck length to be of significance.
It is now clear that also in TGD the action in Minkowskian regions contains a string area term. In
Minkowskian regions of space-time, strings dominate the dynamics in an excellent approximation, and the
naive expectation is that string theory should give an excellent description of the situation.
String tension would be proportional to 1/(hbar G), and this raises a grave classical counter-argument.
In string models massless particles are regarded as strings which have contracted to a point in an
excellent approximation and cannot have length longer than the Planck length. How this can be consistent
with the formation of gravitationally bound states is not understood, since the required non-perturbative
formulation of string theory, needed because of the large value of the coupling parameter GMm, is
not known.
In the TGD framework, strings would connect even objects at macroscopic distances and would obviously
serve as correlates for the formation of bound states in the quantum level description. The classical energy
of a string connecting, say, the two wormhole contacts defining an elementary particle is gigantic for the
ordinary value of hbar, so that something goes wrong.
I have however proposed that gravitons (at least those mediating the interaction between dark matter)
have a large value of Planck constant. I talk about the gravitational Planck constant: one has heff =
hgr = GMm/v0, where v0/c < 1 (v0 has dimensions of velocity). This makes possible a perturbative approach
to quantum gravity in the case of bound states having mass larger than the Planck mass, so that the parameter
GMm analogous to a coupling constant is very large. The velocity parameter v0/c becomes the
dimensionless coupling parameter. This reduces the string tension, so that for string world sheets
connecting macroscopic objects one would have T ∝ v0/(G²Mm). For v0 = GMm/hbar, which remains
below unity for Mm < mPl², one would have hgr/h = 1. Hence the action remains small, and its imaginary
exponent does not fluctuate wildly and make the bound state forming part of the gravitational interaction
short ranged.
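As a sanity check on the orders of magnitude in hgr = GMm/v0 (the masses and the choice v0 = 10^-3 c below are purely illustrative; the text only requires v0/c < 1):

```python
# SI constants (CODATA-rounded)
G    = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34      # J s
c    = 2.998e8        # m / s

def hgr_over_h(M: float, m: float, v0: float) -> float:
    """h_gr / hbar = G*M*m / (v0 * hbar)."""
    return G * M * m / (v0 * hbar)

m_pl = (hbar * c / G) ** 0.5          # Planck mass, roughly 2.2e-8 kg

# Elementary-particle scales, Mm << m_pl^2: ordinary hbar suffices.
assert hgr_over_h(1e-9, 1e-9, c) < 1

# Macroscopic masses: the ratio is astronomical, as needed when the
# coupling-like parameter GMm is very large.
assert hgr_over_h(1.0, 1.0, 1e-3 * c) > 1e15

# Consistency check: v0 = G*M*m/hbar gives h_gr/h = 1 by construction.
assert abs(hgr_over_h(1e-9, 1e-9, G * 1e-18 / hbar) - 1) < 1e-9
```

For a kilogram-scale pair the ratio comes out around 10^18, which is the sense in which hgr is "gigantic" compared with the ordinary Planck constant.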
This is expected to hold true for ordinary matter on elementary particle scales. Objects with the size
scale of a large neuron (100 μm at the density of water), probably not an accident, would have mass
above the Planck mass, so that dark gravitons and also life would emerge as massive enough gravitational
bound states are formed. The hgr = heff hypothesis is indeed central in the TGD based view about living matter.
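The quoted size scale can be checked: the edge of a water cube whose mass equals the Planck mass comes out at a few hundred micrometers, the same order of magnitude as the cell-size scale mentioned above. A rough sketch (the cubic shape is an arbitrary modelling choice):

```python
G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # SI units
rho_water = 1000.0                            # kg / m^3

m_pl = (hbar * c / G) ** 0.5                  # Planck mass, ~2.2e-8 kg
edge = (m_pl / rho_water) ** (1 / 3)          # edge of a Planck-mass water cube

assert 1e-8 < m_pl < 1e-7                     # ~2.2e-8 kg
assert 1e-4 < edge < 1e-3                     # a few hundred micrometers
```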
To conclude, it seems that superstring theory with a single value of Planck constant cannot give rise to
macroscopic gravitationally bound matter and would therefore be simply wrong: much better than being
not-even-wrong.
See the chapter "Recent View about Kähler Geometry and Spin Structure of 'World of Classical
Worlds'" of "Quantum TGD as Infinite-Dimensional Spinor Geometry".
posted by Matti Pitkanen @ 10:35 PM
03/06/2015 - http://matpitka.blogspot.com/2015/03/updated-view-about-k-geometry-ofworld.html#comments
Updated view about the Kähler geometry of "World of Classical Worlds"
TGD differs in several respects from quantum field theories and string models. The basic
mathematical difference is that the mathematically poorly defined notion of path integral is replaced
with the mathematically well-defined notion of a functional integral defined by the Kähler function
defining the Kähler metric of WCW ("world of classical worlds"). Apart from quantum jump, quantum
TGD is essentially a theory of classical WCW spinor fields, with WCW spinors represented as fermionic
Fock states. One can say that Einstein's program of the geometrization of physics is generalized to the level of
quantum theory.
It has been clear from the beginning that the gigantic super-conformal symmetries generalizing the
ordinary super-conformal symmetries are crucial for the existence of the WCW Kähler metric. The detailed
identification of the Kähler function and the WCW Kähler metric has however turned out to be a difficult
problem. It is now clear that WCW geometry can be understood in terms of an analog of AdS/CFT
duality between fermionic and space-time degrees of freedom (or between Minkowskian and Euclidian
space-time regions), allowing one to express the Kähler metric either in terms of the Kähler function or in terms of
the anti-commutators of WCW gamma matrices, identifiable as super-conformal Noether super-charges for
the symplectic algebra assignable to δM4± × CP2. The string model description of gravitation emerges,
and also the TGD based view about dark matter becomes more precise.
Kähler function, Kähler action, and connection with string models
The definition of the Kähler function in terms of Kähler action is possible because space-time regions
can also have Euclidian signature of the induced metric. Euclidian regions with 4-D CP2 projection
(wormhole contacts) are identified as lines of generalized Feynman diagrams: space-time correlates for the
basic building bricks of elementary particles. Kähler action from the Minkowskian regions is imaginary and
gives the functional integrand a phase factor crucial for the quantum field theoretic interpretation. The
basic challenges are the precise specification of the Kähler function of the "world of classical worlds" (WCW)
and of the Kähler metric.
There are two approaches to the definition of the Kähler metric; the conjecture, analogous to
AdS/CFT duality, is that these approaches are mathematically equivalent.
1. The Kähler function defining the Kähler metric can be identified as the Kähler action for space-time
regions with Euclidian signature, for a preferred extremal containing the 3-surface as the ends of the
space-time surface inside the causal diamond (CD). Minkowskian space-time regions give to the
Kähler action an imaginary contribution interpreted as the counterpart of the quantum field theoretic
action. The exponent of the Kähler function defines the functional integral in WCW. The WCW metric is
dictated by the Euclidian regions of space-time with 4-D CP2 projection.
The basic question concerns the attribute "preferred". Physically the preferred extremal is
analogous to a Bohr orbit. What is the mathematical meaning of a preferred extremal of Kähler
action? The latest step of progress is the realization that the vanishing of generalized conformal
charges at the ends of the space-time surface fixes the preferred extremals to a high extent and is
nothing but the classical counterpart of generalized Virasoro and Kac-Moody conditions.
2. Fermions are also needed. The well-definedness of electromagnetic charge led to the hypothesis
that spinors are restricted to string world sheets. It has also become clear that string world sheets
are most naturally minimal surfaces with 1-D CP2 projection (this brings in the gravitational
constant), and that the Kähler action in Minkowskian regions involves also the string area (which
does not contribute to the Kähler function), giving the entire action in the case of M4 type vacuum
extremals with vanishing Kähler form. Hence vacuum extremals might serve as an excellent
approximation for the sheets of the many-sheeted space-time in Minkowskian space-time
regions.
3. A second manner to define the Kähler metric is as the anticommutators of WCW gamma matrices
identified as super-symplectic Noether charges for the Dirac action for induced spinors, with
string tension proportional to the inverse of Newton's constant. These charges are associated with
the 1-D space-like ends of string world sheets connecting the wormhole throats. The WCW metric
contains contributions from the spinor modes associated with the various string world sheets
connecting the partonic 2-surfaces associated with the 3-surface.
It is clear that the information carried by the WCW metric about the 3-surface is rather limited, and
that the larger the number of string world sheets, the larger the information. This conforms with the
strong form of holography and the notion of measurement resolution as a property of quantum
states. Clearly, duality means that the Kähler function is determined either by the space-time dynamics
inside the Euclidian wormhole contacts or by the dynamics of fermionic strings in the Minkowskian
regions outside the wormhole contacts. This duality brings strongly in mind AdS/CFT duality. One
could also speak about fermionic emergence, since the Kähler function is dictated by the Kähler
metric apart from the real part of a gradient of a holomorphic function: a possible identification of the
exponent of the Kähler function is as a Dirac determinant.
Realization of super-conformal symmetries
The detailed realization of the various super-conformal symmetries has also been a long-standing
problem, but recent progress leads to a very beautiful overall view.
1. Super-conformal symmetry requires that the Dirac action for string world sheets is accompanied by the
string world sheet area as part of the bosonic action. String world sheets are implied, and can be
present only in Minkowskian regions, if one demands that the octonionic and ordinary
representations of the induced spinor structure are equivalent (this requires vanishing of the induced
spinor curvature to achieve associativity, in turn implying that the CP2 projection is 1-D). Note that the
1-dimensionality of the CP2 projection is a symplectically invariant property. Neither the string world
sheet area nor the Kähler action is invariant under symplectic transformations. This is necessary for
having a non-trivial Kähler metric. Whether WCW really possesses super-symplectic isometries
remains an open problem.
2. Super-conformal symmetry also demands that the Kähler action is accompanied by what I call the
Kähler-Dirac action, with gamma matrices defined by the contractions of the canonical
momentum currents with the imbedding space gamma matrices. Hence also induced spinor fields in
the space-time interior must be present. Indeed, inside wormhole contacts the Kähler-Dirac equation,
reducing to the CP2 Dirac equation for CP2 vacuum extremals, dictates the fermionic dynamics.
The strong form of holography implied by the strong form of general coordinate invariance strongly
suggests that super-conformal invariance in the interior of the space-time surface is a broken
gauge invariance in the sense that the super-conformal charges vanish for a sub-algebra with conformal
weights vanishing modulo some integer n. The proposal is that n corresponds to the
effective Planck constant as heff/h = n. For string world sheets the super-conformal symmetries are not
gauge symmetries, and strings dominate the fermionic dynamics in a good approximation.
Interior dynamics for fermions, the role of vacuum extremals, dark matter, and SUSY
The key role of CP2 type and M4 type vacuum extremals has been rather obvious from the beginning,
but detailed understanding has been lacking. Both kinds of extremals are invariant under the symplectic
transformations of δM4 × CP2, which inspires the idea that they give rise to isometries of WCW. The
deformations of CP2 type extremals correspond to lines of generalized Feynman diagrams. M4 type
vacuum extremals in turn are excellent candidates for the building bricks of the many-sheeted space-time
giving rise to GRT space-time as an approximation. For M4 type vacuum extremals the CP2 projection is an (at
most 2-D) Lagrangian manifold, so that the induced Kähler form vanishes and the action is fourth order
in small deformations. This implies the breakdown of the path integral approach and of canonical
quantization, which led to the notion of WCW.
If the action in Minkowskian regions contains also the string area, the situation changes dramatically,
since strings dominate the dynamics in an excellent approximation and string theory should give an
excellent description of the situation: this of course conforms with the dominance of gravitation.
String tension would be proportional to 1/(hbar G), and this raises a grave classical counter-argument.
In string models, massless particles are regarded as strings which have contracted to a point in an excellent
approximation and cannot have length longer than the Planck length. How this can be consistent with the
formation of gravitationally bound states is not understood, since the required non-perturbative
formulation of string theory, needed because of the large value of the coupling parameter GMm, is not known.
In the TGD framework strings would connect even objects at macroscopic distances and would
obviously serve as correlates for the formation of bound states in the quantum level description. The
classical energy of a string connecting, say, the two wormhole contacts defining an elementary particle is
gigantic for the ordinary value of hbar, so that something goes wrong.
I have however proposed that gravitons (at least those mediating the interaction between dark matter)
have a large value of Planck constant. I talk about the gravitational Planck constant: one has heff =
hgr = GMm/v0, where v0/c < 1 (v0 has dimensions of velocity). This makes possible a perturbative approach
to quantum gravity in the case of bound states having mass larger than the Planck mass, so that the parameter
GMm analogous to a coupling constant is very large. The velocity parameter v0/c becomes the
dimensionless coupling parameter. This reduces the string tension, so that for string world sheets
connecting macroscopic objects one would have T ∝ v0/(G²Mm). For v0 = GMm/hbar, which remains
below unity for Mm < mPl², one would have hgr/h = 1. Hence the action remains small, and its imaginary
exponent does not fluctuate wildly and make the bound state forming part of the gravitational interaction
short ranged.
This is expected to hold true for ordinary matter in elementary particle scales. Objects with the size scale of a large neuron (100 μm at the density of water) - probably not an accident - would have mass above the Planck mass, so that dark gravitons and also life would emerge as massive enough gravitational bound states are formed. The hgr = heff hypothesis is indeed central in the TGD-based view about living matter. In this framework superstring theory with a single value of Planck constant would not give rise to macroscopic gravitationally bound matter and would thus be simply wrong.
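As an order-of-magnitude sanity check of the numbers quoted above, one can compute the Planck mass, the side of a water-density cube of that mass, and the ratio hgr/h = GMm/(v0 hbar) at the crossover point. This sketch is illustrative only: the constants are rounded values, and the choice v0 = c is made just to exhibit the crossover at Mm = mPl².

```python
import math

# Rounded physical constants; precision is irrelevant for an order-of-magnitude check.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m / s

# Planck mass m_Pl = sqrt(hbar c / G).
m_pl = math.sqrt(hbar * c / G)
print(f"Planck mass: {m_pl:.2e} kg")                      # ~ 2.2e-8 kg (about 20 micrograms)

# Side of a cube of water with mass m_Pl.
rho_water = 1000.0                                        # kg / m^3
side = (m_pl / rho_water) ** (1.0 / 3.0)
print(f"water cube side at m_Pl: {side * 1e6:.0f} um")    # a few hundred microns

# hgr/h = G M m / (v0 hbar): equal to 1 exactly at M m = m_Pl^2 when v0 = c,
# so hgr exceeds hbar precisely when M m > m_Pl^2 (for this choice of v0).
hgr_over_h = G * m_pl * m_pl / (c * hbar)
print(f"hgr/h at Mm = m_Pl^2, v0 = c: {hgr_over_h:.3f}")
```

The cube side comes out at roughly 0.3 mm, the same order of magnitude as the ~100 μm scale quoted in the text.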
If one assumes that for non-standard values of Planck constant only n-multiples of the super-conformal algebra in the interior annihilate the physical states, the interior conformal gauge degrees of freedom become partly dynamical. The identification of dark matter as macroscopic quantum phases labeled by heff/h = n conforms with this.
The emergence of dark matter corresponds to the emergence of interior dynamics via the breaking of super-conformal symmetry. The induced spinor fields in the interior of flux tubes obeying the Kähler-Dirac action should be highly relevant for the understanding of dark matter. The assumption that dark particles have essentially the same masses as ordinary particles suggests that dark fermions correspond to induced spinor fields both at string world sheets and in the space-time interior: the spinor fields in the interior would be responsible for the long range correlations characterizing heff/h = n. Magnetic flux tubes carrying dark matter are key entities in TGD-inspired quantum biology. Massless extremals represent a second class of M4-type non-vacuum extremals.
This view forces one once again to ask whether space-time SUSY is present in TGD and how it is realized.
Motivated by the observation that in the TGD Universe the mass scales of sparticles most naturally coincide with the p-adic mass scales of particles, I have proposed that sparticles might be dark in the TGD sense. The above argument leads one to ask whether the dark variants of particles correspond to states in which one has an ordinary fermion at the string world sheet and a 4-D fermion in the space-time interior, so that dark matter in the TGD sense would almost by definition correspond to sparticles!
See the chapter Recent View about Kähler Geometry and Spin Structure of "World of Classical Worlds" of "Towards M-matrix".
posted by Matti Pitkanen @ 12:07 AM
31 Comments:
At 1:18 AM,
Leo Vuyk said...
Dear Matti,
Could you also describe your Kähler geometry of "world of classical worlds" in a more
classical ontological way?
At 3:21 AM,
[email protected] said...
I am not quite sure what you mean by "classical".
The very definition of the WCW Kahler metric must assign to a given 3-surface a 4-D space-time surface in order that 4-D general coordinate invariance can be realised. This translates to holography.
Classical world translates to 3-surface, and by holography the space-time surface is identified as a preferred extremal of Kahler action analogous to a Bohr orbit.
GRT space-time and gauge theory description can be seen as "classical". GRT space-time is obtained by replacing the many-sheeted space-time with a region of Minkowski space, with the deviation of the metric from the flat one obtained as the sum of the deviations for the sheets.
Gauge potentials at the QFT limit correspond to sums of the gauge potentials at the sheets.
Euclidian space-time regions and the boundaries between Euclidian and Minkowskian regions are definitely something "non-classical" but absolutely essential for the reduction of Feynman diagrams to space-time topology and geometry. Point-like particles of QFT emerge when one makes the 4-D "lines" of generalised Feynman diagrams infinitely thin.
At 3:33 AM,
[email protected] said...
You asked about classical description of Kaehler geometry: above I talked about this
description for WCW.
Kaehler geometry is usually (= classically) defined in terms of a Kahler function, call it K (say in the case of CP_2). One generalises this approach.
This is the first approach and gives K as Kahler action for a preferred extremal in
Euclidian regions (lines of generalised Feynman diagrams having purely geometric and
topological meaning, not so classical;-)).
A not so classical manner to define K is in terms of anticommutators of WCW gamma matrices identifiable as symplectic Noether super-charges, which can be calculated immediately, and one obtains explicit expressions for the matrix elements of the WCW metric.
This is the analog of AdS/CFT duality: I have talked also about strong form of
holography and quantum classical correspondence. WCW geometry can be described in
terms of fermions at strings in Minkowskian regions or geometric degrees of freedom of
space-time surface in Euclidian regions.
Above I have cheated a little bit to create an impression of classicality;-) : a 3-surface in ZEO is a union of space-like 3-surfaces at the ends of the space-time surface located at opposite boundaries of the causal diamond CD plus the light-like orbits of partonic surfaces at which the signature of the induced metric changes. In standard ontology it would be a space-like 3-surface.
At 12:19 AM,
Leo Vuyk said...
Thanks Matti, I have to soak that in before I come back.
At 2:57 AM,
Leo Vuyk said...
Matti, about Kahler structures, vector space and configuration space: could you imagine even the sort of vector space which is chiral? Or better, has only spiral (left- or right-handed) spiral curved vectors?
see perhaps as an example:
https://www.flickr.com/photos/93308747@N05/14210603469/
At 8:31 AM,
[email protected] said...
Chirality makes sense geometrically. Helical magnetic flux tubes would be chiral surfaces with well-defined handedness in the geometric sense. Biomolecules would correspond to this kind of space-time surfaces.
At 10:52 AM,
Leo Vuyk said...
Don't you think that a chiral vacuum could be the origin of our MATERIAL universe in
contrast with an Anti-Material universe?
At 11:32 AM,
Stephen said...
https://medium.com/the-physics-arxiv-blog/the-origin-of-life-and-the-hidden-role-of-quantum-criticality-ca4707924552
At 8:50 PM, [email protected] said...
Stuart Kauffman, by the way, works in Finland. Quantum criticality is one of the basic predictions of TGD. I wrote just some time ago a series of postings about quantum criticality and the hierarchy of Planck constants as crucial for life. Both dark matter and life are critical phenomena. It is amazing that all these TGD notions are discovered but no one mentions TGD;-). Academic outsiders do not exist academically!
The proposed picture still requires the introduction of high Tc superconductivity, magnetic flux tubes, macroscopic quantum coherence, the hierarchy of Planck constants-dark matter connection, and many other things. This is a problem since all this is against the existing belief systems.
TGD is there just waiting, but how to discover all these fantastic things without mentioning TGD? This defines perhaps the greatest challenge that recent-day academic science is facing;-).
At 8:59 PM, [email protected] said...
To Leo Vuyk:
Living matter is a good example of a chiral vacuum. Chirality selection of biomolecules is a good example of a chiral ground state. In TGD a molecule of given handedness represents its own space-time sheet defining a mini-sub-Universe containing smaller space-time sheets as sub-Universes and belonging to a bigger one.
Chirality selection is basically due to the presence of weak interactions in the Compton scale of weak bosons, which is now scaled up by h_eff/h = n to a value of the order of a molecule or cell or even bigger. This makes the system also a macroscopic quantum system in this scale. Dark weak interactions select molecules with preferred handedness as minimum energy states. Geometric chirality in turn induces polarisation of light, for instance.
At 8:25 AM,
Leo Vuyk said...
A righthanded chiral vacuum vector lattice should have a direct relation with the right
handed double helkix structure of the DNA molecule?
At 11:44 AM,
Leo Vuyk said...
Sorry, it must be: "the right handed double helix structure of the DNA molecule?" (not
helkix)
At 11:52 AM,
Ulla said...
http://arxiv.org/pdf/1103.1833.pdf
At 10:52 PM, [email protected] said...
To Ulla:
What the article proposes are exactly the critical phase transitions which should play a key role in biology. 4-dimensionality instead of 3-dimensionality. These authors have "discovered" zero energy ontology without saying anything about TGD. Very clever!
TGD ideas are now being discovered at a constant rate. In ResearchGate there were more than 120 downloads of TGD-related material, so the ideas will now find their way to the articles.
It would be of course nice if the discoverers would mention TGD, but this is too much to hope for in recent-day science, which tries to cope without ethics and morals.
At 10:56 PM, [email protected] said...
To Leo:
I am not sure whether one needs a lattice at all. Just the 3-surface which has geometric chirality is enough. Surface geometry of course induces chirality on a possible lattice-like structure - say a sequence of DNA nucleotides. Essential is that one has sub-manifold geometry: one sees the structure from outside.
I do not know how to define chiral lattices in abstract manifold geometry where "seeing from outside" is not possible.
At 12:17 AM,
Ulla said...
http://en.wikipedia.org/wiki/Leggett%E2%80%93Garg_inequality
compared to Bell's inequality
Would this give the 4D in biology? Jenny Nielsen said she is doing her thesis on this. I asked if she is interested in quantum biology. Her answer: hmmmm
She does not yet know...
At 3:30 AM,
[email protected] said...
"4-D brain", even "4-D society", were the catchwords that I introduced a long time ago. A 3-D state is replaced with a 4-D behaviour pattern in zero energy ontology. This justifies notions like function, behavioural pattern, etc. used in bio-sciences but having no counterpart in fundamental physics.
At 7:52 AM,
Leo Vuyk said...
Please let me become a bit more specific about my view on the multiverse:
I think that Stephen Hawking did not calculate with the possibility of a chiral oscillating Higgs field vacuum lattice combined with propeller shaped Fermions. Then, due to vacuum chirality, electron and positron propellers could both be pushed away from the BH horizon after spin flip polarization at different distances, forming two charged separated spheres, with quark (plasma) formation in between.
Based on such a simple object (propeller and process) oriented ontology, black holes could be imagined as charge splitters violating the 2nd law of thermodynamics, combined with a continuous microscopic big bang plasma creation process!
The result I try to describe :
1: Black holes are the same as Dark Matter; they all consume photons, even gravitons and the Higgs field, but REPEL Fermions due to their propeller shape. They produce electrically charged plasma.
2: Dark Energy is the oscillating (Casimir) energy of the Higgs Field equipped with a tetrahedron lattice structure with variable Planck length.
3: Quantum Gravity = Dual Push gravity = Attraction (Higgs-Casimir opposing Graviton push).
4: The Big Bang is a Splitting dark matter Big Bang Black Hole (BBBH), splitting into
smaller primordial BBBH Splinters forming the Fractalic Lyman Alpha forest
and evaporating partly into a zero mass energetic oscillating Higgs particle based
Higgs field.
5: Dual PBBS hotspots produce central plasma concentration in electric Herbig Haro systems as a base for star formation in open star clusters as a start for Spiral Galaxies.
6: Spiral Galaxies will keep both Primordial Dark Matter Black Holes as Galaxy
Anchor Black Holes (GABHs) at long distance.
7: After Galaxy Merging, these GABHs are the origin of Galaxy- and Magnetic field complexity and distant dwarf galaxies.
8: Black Holes produce plasma directly out of the Higgs field because two Higgs particles are convertible into symmetric electron and positron (or even dual quark-) propellers (by BH horizon fluctuations).
9: The chirality of the (spiralling) vacuum lattice is the origin of our material universe (propeller shaped positrons merge preferentially first with gluons to form (u) Quarks to form Hydrogen).
10: The first Supernovas produce medium sized Black Holes as the base for secondary
Herbig Haro systems and open star clusters.
11: ALL Dark Matter Black Holes are supposed to be CHARGE SEPARATORS with internal positive charge and an external globular shell of negatively charged quark-electron plasma.
12: The lightspeed is related to gravity fields like the earth's, with long extinction distances to adapt to the solar gravity field.
See also: vixra.org/author/leo_vuyk
At 1:27 AM,
Leo Vuyk said...
Quantum FFF Theory as I propose it states that the raspberry shaped multiverse is symmetric and instantly entangled down to the smallest quantum level. Also down to living and dying CATS in BOXES.
If our material universe has a chiral oscillating Higgs field, then our material Right Handed DNA helix molecule could be explained.
However it also suggests that in our opposing ANTI-MATERIAL multiverse neighbour
universe the DNA helix should have a LEFT HANDED spiral.
Interestingly, according to Max Tegmark, in a multiverse we may ask: is there a COPY PERSON over there who is reading the same lines as I do?
At 1:48 AM,
Leo Vuyk said...
If this COPY person is indeed living over there, then even our consciousness should be shared in a sort of DEMOCRATIC form.
Then we are not alone with our thoughts and doubts, see:
Democratic Free Will in the instant Entangled Multiverse.
http://vixra.org/pdf/1401.0071v2.pdf
At 2:55 AM,
[email protected] said...
I believe in the multiverse in the restricted sense that there is an interacting hierarchy of space-time sheets, collapsed to a single sheet in the GRT approximation, leading to effects such as several maximal signal speeds. I believe also in a hierarchy of conscious entities, which in a certain sense means a multiverse too. Subselves represent mental images of self. Self is identified as a sequence of state function reductions at the same boundary of a causal diamond.
I cannot take seriously the multiverse in the sense of inflationary theories: in TGD the sheets of many-sheeted space-time obey essentially the same standard model physics at the level of symmetries, and there is no need to introduce inflaton fields. Conscious entities can mimic each other, and by looking around it becomes clear that they might love to do so, but the idea about additional copies of colleagues looks extremely unattractive to me;-).
As a whole, the multiverse in the standard sense has ended up playing with mathematical and conceptual pathologies such as Boltzmann brains. This could have been avoided with some critical philosophical thought and contact with experiment: not only cosmology.
Superstring theory has also suffered the same fate. When the connection with the experimental world splits, theoreticians begin to hallucinate, just as under the sensory deprivation induced by think tanks. Places like Harvard are indeed think tanks carefully isolated from the external world;-).
At 3:32 AM,
Leo Vuyk said...
I think it could be much more simple, see:
“The Navel Cord Multiverse with Raspberry Shape, a Super Symmetric Entangled 12 Fold
Bubble Universe.”
http://vixra.org/pdf/1312.0143v2.pdf
At 4:12 PM, Anonymous said...
http://www.dailymail.co.uk/sciencetech/article-2975606/Did-Homer-Simpson-discover-HIGGS-BOSON-Maths-1998-episode-predicts-particle-s-mass-14-years-CERN.html
mmmmm donuts
At 8:58 PM, [email protected] said...
Lubos wrote about Homer's discovery recently. As a matter of fact, the formula giving the prediction was of the order of the Planck mass, as one learns from the picture, so that it was too large by a factor of order 10^19!
At 11:18 PM, Stephen said...
Now this is interesting... http://m.phys.org/news/2015-03-quantum-scheme-states-transmitting-physical.html still no one else has thought of a hierarchy of Planck constants? Could it also be thought of as an array? Hierarchies more resemble graphs than sequences
At 12:30 AM,
[email protected] said...
Looks interesting. It seems that it is difficult to distinguish quantum information scientists from parapsychologists, who have talked about information transfer without a physical signal!;-).
*How to generate entanglement without sending particles between the systems to be entangled?*
This seems to be the basic question: prior entanglement is necessary for quantum communication schemes. I have never thought about this kind of bottleneck question.
The authors claim that it is possible to generate entanglement without interaction, that is without sending physical particles between the systems. *Interaction free measurement* is mentioned: this is central in the TGD inspired theory of consciousness: a kind of telepathic effect too!
The notion of *chained Zeno effect* is introduced and claimed to make possible entanglement between two distant objects. This in turn would make possible information transfer without transfer of particles. I hope I understood.
The quantum version of an obstructing object is a further notion: an object obstructs if it absorbs the incoming radiation. Is a *quantum obstructing object* in a superposition of obstructing and non-obstructing states?
How could this relate to TGD?
a) Interaction free measurement provides a manner to transform negentropic entanglement to conscious information without destroying it.
b) Zeno effect in TGD means repeated reductions at the same boundary of CD and gives rise to self as a conscious entity. During this period no decoherence occurs (decoherence is interpreted as the first reduction to the opposite boundary of CD).
c) In TGD-based quantum biology, communication involving cyclotron resonance of dark photons is an essential element. The magnetic body can become an absorber if it changes the flux tube thickness controlling the magnetic field strength controlling the cyclotron frequency.
Could the proposed idea have some relevance for consciousness theory and quantum biology? A kind of telepathic information transfer would be in question!
At 11:35 AM,
Ulla said...
Information moves without movement of informational particles also in nerves, according
to the new model, if I remember right. This was the new thing for me.
At 8:52 PM, [email protected] said...
To Ulla:
The ideas in quantum computation are being rapidly transferred to biology. I hope I will find time to understand what the chained Zeno effect means. The basic prerequisites for it seem to be met in the TGD model.
At 2:35 AM,
Ulla said...
Some kind of holography must be involved, for instance in the form of imprints in light?
02/28/2015 - http://matpitka.blogspot.com/2015/02/quaternions-octonions-and-tgd.html#comments
Quaternions, Octonions, and TGD
Quaternions and octonions have been lurking around for decades in the hope of getting a deeper role in physics but, as John Baez put it: "I would not have the courage to give octonions as a research topic for a graduate student". Quaternions are algebraically a 4-D structure and this strongly suggests that space-time could be analogous to the complex plane.
The classical continuous number fields (reals, complex numbers, quaternions, octonions) have dimensions 1, 2, 4, 8 coming in powers of 2. In TGD the imbedding space is an 8-D structure and brings in mind octonions. Space-time surfaces are 4-D and bring in mind quaternions. String world sheets and partonic 2-surfaces are 2-D and bring in mind complex numbers. The boundaries of string world sheets are 1-D, carry fermions, and of course bring in mind real numbers. These dimensions are indeed in a key role in TGD and form one part of the number theoretic vision about TGD involving p-adic numbers, classical number fields, and infinite primes.
What quaternionicity could mean?
Quaternions are non-commutative: AB is not equal to BA. Octonions are even non-associative: A(BC) is not equal to (AB)C. This is problematic, and in TGD the problem is turned into a victory if space-time surfaces as 4-surfaces in the 8-D M4×CP2 are associative (or co-associative, in which case the normal space orthogonal to the tangent space is associative). This would be an extremely attractive purely number theoretic formulation of classical dynamics.
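The failure of commutativity and associativity mentioned above is easy to exhibit concretely. The sketch below (a generic illustration, not TGD-specific) builds the quaternions and octonions from the reals by Cayley-Dickson doubling, (a,b)(c,d) = (ac - d*b, da + bc*) with * denoting conjugation, representing an element of the 2^n-dimensional algebra as a flat list of coefficients:

```python
# Cayley-Dickson doubling: reals -> complex -> quaternions -> octonions.

def conj(x):
    # (a, b)* = (a*, -b); for reals conjugation is the identity.
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-t for t in x[h:]]

def add(x, y):
    return [p + q for p, q in zip(x, y)]

def sub(x, y):
    return [p - q for p, q in zip(x, y)]

def mul(x, y):
    # (a, b)(c, d) = (ac - d*b, da + bc*), with * the conjugation above.
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b = x[:h], x[h:]
    c, d = y[:h], y[h:]
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

def basis(n, i):
    # Basis unit e_i of the n-dimensional algebra as a coefficient list.
    e = [0.0] * n
    e[i] = 1.0
    return e

# Quaternions (n = 4): ij = k but ji = -k, so AB != BA.
i, j, k = basis(4, 1), basis(4, 2), basis(4, 3)
print(mul(i, j) == k)                                  # True
print(mul(j, i) == k)                                  # False: ji = -k

# Octonions (n = 8): (e1 e2) e4 != e1 (e2 e4), so (AB)C != A(BC).
e1, e2, e4 = basis(8, 1), basis(8, 2), basis(8, 4)
print(mul(mul(e1, e2), e4) == mul(e1, mul(e2, e4)))    # False
```

Note that the triple (e1, e2, e3) still associates, since it spans a quaternionic subalgebra; it is bringing in e4 that destroys associativity, which is exactly the distinction the associativity condition on tangent spaces exploits.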
What one means with quaternionicity of space-time is of course a highly non-trivial question. It seems however that this must be a local notion. The tangent space of space-time should have a quaternionic structure in some sense.
1. It is known that 4-D manifolds allow a so-called almost quaternionic structure: to any point of space-time one can assign three quaternionic imaginary units. Since one is speaking about geometry, the imaginary quaternionic units must be represented geometrically as antisymmetric tensors obeying the quaternionic multiplication table. This gives a close connection with twistors: any orientable space-time indeed allows an extension to a twistor space, a structure having as "fiber space" the unit sphere representing the 3 quaternionic units.
2. A stronger notion is a quaternionic Kähler manifold, which is also a Kähler manifold: one of the quaternionic imaginary units serves as a global imaginary unit and is covariantly constant. CP2 is an example of this kind of manifold. The twistor spaces associated with quaternion-Kähler manifolds are known as Fano spaces and have very nice properties making them strong candidates for the Euclidian regions of space-time surfaces obtained as deformations of so-called CP2-type vacuum extremals representing lines of generalized Feynman diagrams.
3. The obvious question is whether complex analysis (including notions like analytic function, Riemann surface, the residue integration crucial in the twistor approach to scattering amplitudes, etc.) generalises to quaternions. In particular, can one generalize the notion of an analytic function as a power series in z to that for quaternions q? I have made attempts but was not happy about the outcome and had given up the idea that this could allow one to define associative/co-associative space-time surfaces in a very practical manner. It was quite a surprise to find just a month or so ago that quaternions allow a differential calculus and that the notion of analytic function generalises elegantly, but in a slightly more general manner than I had proposed. Also the conformal invariance of string models generalises to what one might call quaternion conformal invariance. What is amusing is that the notion of quaternion analyticity had been discovered aeons ago (see this) and I had managed not to stumble on it earlier! See this.
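Point 1 above says the three imaginary units can be represented as antisymmetric tensors obeying the quaternionic multiplication table. A minimal concrete instance (standard linear algebra, not specific to TGD) is given by the left-multiplication matrices of i, j, k acting on (w, x, y, z) ~ w + xi + yj + zk:

```python
import numpy as np

# Left-multiplication matrices of the quaternion units i, j, k on R^4.
# Each is a real antisymmetric 4x4 tensor; together they obey the
# quaternionic multiplication table J_a J_b = -delta_ab I + eps_abc J_c.
J1 = np.array([[0, -1,  0,  0],
               [1,  0,  0,  0],
               [0,  0,  0, -1],
               [0,  0,  1,  0]], dtype=float)
J2 = np.array([[0,  0, -1,  0],
               [0,  0,  0,  1],
               [1,  0,  0,  0],
               [0, -1,  0,  0]], dtype=float)
J3 = np.array([[0,  0,  0, -1],
               [0,  0, -1,  0],
               [0,  1,  0,  0],
               [1,  0,  0,  0]], dtype=float)

I = np.eye(4)
for J in (J1, J2, J3):
    assert np.array_equal(J.T, -J)      # antisymmetric tensor
    assert np.array_equal(J @ J, -I)    # squares to -1
assert np.array_equal(J1 @ J2, J3)      # i j = k
assert np.array_equal(J2 @ J1, -J3)     # j i = -k
print("quaternionic multiplication table verified")
```

An almost quaternionic structure assigns such a triple (smoothly varying, and with indices raised/lowered by the metric) to every tangent space; the matrices above are the flat-space model of one fiber.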
Octonionicity and quaternionicity in TGD
In the TGD framework, one can consider further notions of quaternionicity and octonionicity relying on sub-manifold geometry and the induction procedure. Since the signature of the imbedding space is Minkowskian, one must replace quaternions and octonions with their complexifications, often called split quaternions and split octonions. For instance, Minkowski space corresponds to a 4-D subspace of complexified quaternions but not to an algebra: its tangent space generates by multiplication the complexified quaternions.
The tangent space of the 8-D imbedding space allows an octonionic structure and one can induce (one of the keywords of TGD) this structure to the space-time surface. If the induced structure is quaternionic and