The Observable Universe
(faculty.ucmerced.edu)
“Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s peanuts to space.” Douglas Adams, The Hitchhiker’s Guide to the Galaxy.
This chapter opens the story as we discuss the observed Universe and note the apparent problems we find in trying to describe it. We will see that the standard Big Bang picture is very useful for explaining the observable Universe, but is not quite enough on its own to explain where the Universe came from and how it evolved. It appears that the Universe could not have arisen from generic initial conditions, but seems to have evolved from a very finely-tuned origin.
1 What Do We See?
Looking out at the night sky, one can’t help but be awed by the vast scales open to view. Even the naked eye can detect hundreds or thousands of stars (depending on whether you’re looking from a city or from the middle of nowhere). The eye of the Hubble Space Telescope has detected countless more. Figure 1 shows the Ultra Deep Field [1]. This image was formed over a period of several months by focusing the Hubble telescope on a patch of sky roughly one-tenth the size of the full Moon. In images from other telescopes, this patch had appeared to be empty. Almost every point of light in the image is a galaxy, containing roughly a hundred billion stars! This image shows the Universe as it was roughly 13 billion years ago, only a few hundred million years after the Big Bang. It is already easy to see that the Universe is big!
It is the goal of physics in general (and cosmology, specifically) to try to explain where
the Universe came from and how it evolves. As we will see, this will not be an easy task, in
general. The attempt to describe the Universe will lead us into new and exciting areas of
physics, including Einstein’s General Theory of Relativity, as well as the laws of quantum
mechanics. The work done in achieving the goal will be well worth it, however.
To begin our journey, let’s start by making some observations about the Universe. We
will find that these observations lead to some very interesting puzzles which will require some
new ideas to address. It will, in fact, be our goal to solve these puzzles. We will discuss these
observations and puzzles in quantitative detail later, and so we now focus on a qualitative
discussion.
1.0.1 The Universe is Very Old and Very Big.
We’ve already seen that the Universe is extremely large. The visible Universe extends out to some 10^26 meters. However, there are many reasons to expect that the actual Universe is much larger! Clearly, for these sorts of distances the meter is a very bad unit of measure. Just as we use nanometers to discuss quantum-level phenomena, we should like to find a more convenient unit of measure for very large distances.
Figure 1: The Hubble Ultra Deep Field, showing an estimated 10,000 galaxies. Nearly every point of light in the image is a galaxy!
One useful method of measuring distance is to use the distance that light travels in some amount of time, typically taken to be one year. This distance, which is about 9.5 × 10^15 meters, is called a light-year, and measures distance, not time! This is a very large distance; Pluto orbits the Sun at an average distance of about 6 × 10^−4 light years. Proxima Centauri, the nearest star (other than the Sun), is about 4.22 light years away. The observable Universe extends out to about 5 × 10^10 light years.
The light year is still not the conventional unit of measurement, however. The unit that has been adopted is the megaparsec, Mpc, or one million parsecs. A parsec is defined as the distance at which the Earth-Sun separation (about 150 million kilometers) subtends an angle of one second of arc; it corresponds to about 3.1 × 10^16 meters, or about 3.26 light years. So, a megaparsec is about 3.26 million light years, and corresponds roughly to the distance between galaxies. The observable Universe is of order 14,000 Mpc.
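These conversions can be checked with a quick sketch. The speed of light and the Earth-Sun distance below are standard rounded values, not figures quoted in the text:

```python
import math

# Back-of-the-envelope check of the distance units used in the text.
c = 2.998e8                # speed of light, m/s
year = 365.25 * 24 * 3600  # one year, s
au = 1.496e11              # Earth-Sun distance, m

light_year = c * year                       # distance light travels in a year
parsec = au / math.radians(1.0 / 3600)      # distance subtending 1 arcsecond
Mpc = 1e6 * parsec

print(light_year)                           # ~9.46e15 m
print(parsec / light_year)                  # ~3.26 light years per parsec
print(14_000 * Mpc / light_year / 1e9)      # observable Universe, ~46 billion ly
```

The last line recovers the text's 5 × 10^10 light years from the 14,000 Mpc figure.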
We might expect that such a large system has been around for a long time; one might even expect infinitely long. Even based only on radiometric dating of rocks here on Earth, we know that the Universe must be billions of years old. This estimate is corroborated by our observations of the amounts of hydrogen and helium in the Sun, our knowledge of the fusion reaction rates, and so on. We will get a better estimate in our discussion below, where we actually find a finite age for the Universe!
1.0.2 The Universe Looks Pretty Much the Same Everywhere.
If we look out at the sky in our own solar system we see a nice variety of things: the Sun,
some nearby planets, the asteroid belt, etc. The solar system certainly doesn’t look the same
from place to place as we move around. Suppose we pull back far from our solar system, and
place ourselves midway between the Sun and Proxima Centauri. Then, if we look around,
the view is a little bit more similar in each direction, although there are some differences here and there. If we look up, out of the plane of our own galaxy, then we might see the Andromeda Galaxy in one direction but not in the other. So, it seems like the Universe does not look the same in every direction.
Suppose we instead go out to very large scales, on the order of megaparsecs. In this
case we are looking at the sky in terms of galaxies, instead of stars, and our view in every
direction would be very much like that in Figure 1. At such scales, we become blind to the
small details that differentiate one place or direction from another. On very large scales, the
Universe is homogeneous and isotropic.
Homogeneity and isotropy are not the same things. Homogeneity means that the Universe
is the same from one place to another (meaning that there is no preferred place), while
isotropy means that it is the same in every direction (meaning that there is no preferred
direction). A system can be everywhere homogeneous, but not isotropic. Consider, for
example, a uniform electric field pointing everywhere along the x direction. This system is
everywhere homogeneous, since we can move from place to place and still see the same field,
but it is not isotropic since the field points along a specific direction. The electric field of a
point charge is isotropic about that charge, but it is not homogeneous, since the field looks
very different if we move anywhere at all. However, a system that is isotropic about every
point is necessarily homogeneous, as well. The Universe on the biggest observable scales
looks both homogeneous and isotropic.
We don’t need to rely only on our intuition about the Universe on large scales, or even
images like that in Figure 1, to imagine the homogeneity and isotropy of the Universe. Figure
2 shows the galaxy map, covering more than 930,000 galaxies, obtained by the Sloan Digital
Sky Survey [2] over a period of eight years, covering a quarter of the sky, with the Earth
at the center. Each point is a galaxy, comprised of about 100 billion stars, and the color
denotes the age of the stars in the galaxy with redder points showing galaxies comprised
of mostly older stars. The units of redshift will be discussed later, but may be taken to
be an indication of distance. Figure 2 ranges over a distance of a little more than half a gigaparsec, which is about 2 billion light years. Even on these “small” scales we can begin
to see some of the homogeneity and isotropy of the Universe. We can also begin to see the
web-like structure of the galaxies, forming galactic filaments which are the largest known
structures in the Universe.
We can see the homogeneity and isotropy of the Universe in a much more dramatic
way. Looking out into the sky, in every direction, we see a faint microwave glow, originally
discovered back in 1964. This glow, called the Cosmic Microwave Background (CMB), has
been measured very precisely by the Wilkinson Microwave Anisotropy Probe (WMAP) [3],
and is seen in the whole-sky map in Figure 3. The CMB has a spectrum as though it were emitted by a blackbody, with an average temperature of about 2.725 Kelvins measured to extraordinary accuracy. The temperature fluctuations, δT/T, are of order 10^−5!
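As a quick consistency check, Wien's displacement law, λ_max = b/T, puts the peak of a 2.725 K blackbody squarely in the microwave band, which is why the glow is a *microwave* background. The constant b is a standard value, not one quoted in the text:

```python
# Wien's displacement law: the peak wavelength of a blackbody spectrum
# is lambda_max = b / T.
b = 2.898e-3    # Wien's displacement constant, m*K (standard value)
T = 2.725       # CMB temperature, K (from the text)

lam_max = b / T
print(lam_max * 1e3)   # peak wavelength in millimeters, ~1.06 mm
```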
Figure 2: The SDSS Galaxy Map. The map has a range of about 2 billion light years, with
the Earth at the center. Each point on the map is a galaxy, with redder points corresponding
to older galaxies.
As we’ll discuss in detail later, the redder regions are slightly warmer, corresponding to regions which are slightly underdense, while the bluer regions are cooler, corresponding to regions that are slightly overdense (these density fluctuations are of the same order as the temperature fluctuations: if the background density is ρ, then δρ/ρ ∼ 10^−5). As we
will see later, the CMB is a snapshot of the early Universe, and provides the most striking
evidence of the homogeneity and isotropy that we have found.
While we have seen that the Universe is homogeneous and isotropic on the largest observable scales, we must not forget that it seems very inhomogeneous on the smallest scales;
the environment in the neighborhood of a star looks very different from that in interstellar space. To ignore the tiny deviations from homogeneity would be to ignore stars, and
even individual galaxies. Any theory of the Universe must, therefore, not only explain the
large-scale properties of the Universe, but also the tiny variations. Furthermore, as we will
discuss later, there is reason to believe that our observable Universe may be only an “island”
Universe in a larger structure often called the “Multiverse,” which is again inhomogeneous.
We will discuss these ideas in more detail later.
1.0.3 Parallel Lines Never Seem to Intersect.
It’s a well-known fact that the shortest distance between two points is a straight line, and
that two parallel lines never intersect. This is only true on a plane.

Figure 3: The Cosmic Microwave Background pervades the entire Universe and has a blackbody spectrum with a temperature of about 2.725 Kelvins, with fluctuations of only about 10^−5.

If we were to draw two parallel lines on a ball, or on the surface of a saddle, then things may not be so simple. You
may know that airplanes do not travel along straight line paths, but rather curved paths.
This is because (as we’ll discuss in detail later) the shortest distance between two points on
a sphere (the Earth), is a curved path, a section of a great circle. Because the surface over
which the line is being drawn is curved, the line itself will be curved.
This can lead to some very interesting behavior. For example, on a sphere two parallel
lines can intersect (think about lines of longitude meeting at the north pole). Also, the sum
of the angles of a triangle can be different from 180°; on a sphere the angles sum to more than 180° (again, think of the triangle formed by the equator and two lines of longitude), while on a saddle they can sum to less (the vertices get pinched together). This observation
allows for a method of determining whether a surface is curved without actually looking at it from the outside: draw a triangle and see if the angles add up to 180°. However, one has to be careful. If the sphere is large enough, then any such measurement (i.e., adding the angles of triangles) would show only negligible deviations and the space would look flat.
This is because if we get in close enough to a curved surface, it eventually looks flat, like the
surface of the Earth. In order to see any deviations from flatness we would need to draw a
big triangle, whose sides are of the order of the radius of the sphere.
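For the sphere, the deviation can be made quantitative with the spherical-excess formula, (sum of angles) = π + A/R², where A is the triangle's area. This is a standard result of spherical geometry, assumed here rather than derived in the text. For the triangle formed by the equator and two lines of longitude 90° apart:

```python
import math

# Angle sum of a spherical triangle via the spherical-excess formula:
# (sum of angles) = pi + Area / R^2.
# The triangle bounded by the equator and two meridians 90 degrees apart
# covers exactly one octant (1/8) of the sphere's surface.
R = 1.0
area = 4 * math.pi * R**2 / 8        # area of one octant
angle_sum = math.pi + area / R**2    # in radians

print(math.degrees(angle_sum))       # 270.0: three right angles, not 180
```

A triangle much smaller than R contributes a tiny area, so its angle sum is indistinguishable from 180°, which is exactly the caveat in the text.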
While it’s often taken for granted that the space that we live in is flat, this does not
necessarily have to be the case. We could be living in a curved space, like a sphere. Because
space is so large, we would need to look at the biggest things in the Universe, and for that we turn to the whole-sky CMB map seen in Figure 3. The temperature fluctuation “spots” on the CMB have a certain angular separation on the sky, which provides our triangle for measuring flatness.
If the intervening space is curved, then the light would also travel along curves, and the
apparent size of the spots would be different. Theoretical calculations can predict the size
of these spots. If the space is flat then the spots would have the angular size predicted by
theoretical calculations. If the geometry of space was like a sphere, then the light coming
from the spots would span a larger angle and so would appear larger. If the geometry of
space was like a saddle, then the light coming from the spots would span a smaller angle,
and so would appear smaller.
To a very good approximation, the Universe is found to be flat. This will turn
out to have very important consequences for the evolution, and ultimately the fate, of the
Universe. As we will see, a Universe of zero or negative curvature (like a plane or a saddle,
respectively) will exist indefinitely into the future, while one of positive curvature (like a
sphere) will not! Again, we will return to these issues later.
1.0.4 The Universe is Filled with “Good” Things.
Just a glance at Figure 1 tells us that the Universe is not empty. It is filled with stars, clustered together into galaxies. The stars are mostly hydrogen and helium (with trace amounts of heavier elements), and are therefore made out of atoms. The atoms are composed of protons and neutrons in the nucleus, with electrons in orbit around them.
The electrons are elementary particles (as far as we can tell), but the protons and neutrons
have been found to be composite particles, constructed of quarks which are elementary.
There are six different “flavors” of quark, listed in Table 1.
Quarks
Particle      Mass (GeV/c²)   Charge (q_p)
Up (u)        0.003           2/3
Down (d)      0.006           −1/3
Charm (c)     1.3             2/3
Strange (s)   0.1             −1/3
Top (t)       175             2/3
Bottom (b)    4.3             −1/3

Table 1: The Six Flavors of Quarks.
Also given in the table is the mass of each quark, in units of GeV/c² (as is common in particle physics), as well as its electric charge, expressed in terms of the proton charge, q_p.
The proton is composed of two up quarks and one down quark, giving a net charge of +1.
The neutron is composed of two down quarks and one up quark for a net charge of zero.
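The charge bookkeeping in the last two sentences can be verified with exact fractions; the quark content and charges are those from Table 1:

```python
from fractions import Fraction

# Quark charges in units of the proton charge, from Table 1.
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

proton = 2 * charge["u"] + charge["d"]    # uud: 2/3 + 2/3 - 1/3
neutron = 2 * charge["d"] + charge["u"]   # udd: -1/3 - 1/3 + 2/3

print(proton)   # 1
print(neutron)  # 0
```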
The quarks comprise one class of particles which interact through the strong force, which we’ll discuss soon. This class of strongly-interacting particles is called hadrons (strictly speaking, hadrons are the strongly-interacting quark composites, the baryons and mesons). There is
another class of elementary particles which do not interact via the strong force. This class
is called leptons, the most familiar of which is the electron. The various leptons are listed
in Table 2, including again the mass and charge. Each of the three charged leptons has an
associated neutrino.
Each of the particles listed in Tables 1 and 2 is a spin 1/2 fermion, and interacts via at
least two forces. Let’s take a moment to recall the different forces.
Leptons
Particle                 Mass (GeV/c²)   Charge (q_p)
Electron (e)             0.000511        −1
Muon (µ)                 0.106           −1
Tau (τ)                  1.7771          −1
Electron neutrino (νe)   < 10^−8         0
Muon neutrino (νµ)       < 0.0002        0
Tau neutrino (ντ)        < 0.02          0

Table 2: The Six Leptons.
There is, of course, electromagnetism, which gives an interaction between electric charges. It was discovered long ago that electricity and magnetism are not two different phenomena, but are really different aspects of the same electromagnetic field. It was Maxwell’s grand discovery in the 19th century that blended not only electricity and magnetism, but also light, into one elegant theory. Electromagnetism is responsible for holding together atoms and molecules, and therefore everyday objects such as tables and people.
The positively-charged protons would tend to blow apart the nucleus if there was not
some additional non-electrical force holding it together. This new force is called the strong
force, and operates on both protons and neutrons, holding the nucleus together. However,
if the nucleus gets too big, then the strong force doesn’t quite hold it together - some pieces
might fly off, as in alpha-decay. In alpha decay an α particle (a helium nucleus) breaks off
of a larger nucleus (say uranium). This means that the strong force is not long-range, but
has a finite range, typically operating on nuclear scales, ∼ 10^−15 meters.
There are other types of nuclear decay, for example beta decay, where beta particles
(electrons) are emitted. One example of a system undergoing beta decay is a free neutron.
Outside the nucleus a neutron is not stable, but decays after about 15 minutes, or so. It
decays into a proton, an electron and the antimatter partner to the electron-type neutrino
(we’ll discuss antimatter soon). Because the protons and neutrons are not elementary, it
is really the quarks interacting through the force, with a down quark decaying into an up
quark, plus an electron and antineutrino. This interaction has to be mediated by some
force, but the leptons do not experience the strong force, and neutrinos are not electrically
charged. So, this force can’t be either the strong force, or electromagnetism; it must be a
new force, called the weak force. The force is “weak” because it is much weaker than either
the electromagnetic or strong force. The weak force is also a short-range force, operating on
even shorter distances than the strong force!
Finally, we come to perhaps the most familiar of all forces - gravity. Newton told us that
anything with mass gravitates. Einstein later told us that energy and mass are equivalent
since E = mc2 . So, this suggests that anything with energy gravitates, which is a direct
prediction of Einstein’s General Relativity, as we’ll discuss in detail later. This is a unique
prediction of Einstein’s theory, differing from Newton’s theory of gravity. According to
Newton, light, which has no mass, should not be affected by gravitational fields. Einstein
says that, because it has energy, light should be affected by gravitational fields. It has been
found that light from distant sources is deflected gravitationally around massive objects
(such as galaxies) through gravitational lensing, which we’ll discuss in detail later. This
gives evidence for Einstein’s theory. Gravity acts between any systems that have energy, including gravity itself: gravity gravitates! This nonlinearity makes problems
involving gravity very difficult to solve exactly, as we’ll discuss later.
These are the four known fundamental forces. All other forces can be understood in terms
of these forces (particularly electromagnetism). We would like to try to understand these
forces at a deeper level. The classical interpretation of forces is that sources set up fields
in the surrounding space (for example, electric or gravitational) that other objects respond
to, accelerating in response to these forces. Quantum mechanics changes the interpretation
somewhat. Upon applying quantum mechanics to the electromagnetic field, for example, one
finds that it is made up of a large number of photons, the individual quanta of light. Because
the electromagnetic force is transmitted via the electromagnetic field, we are then led to
believe that the electromagnetic force is carried by photons. That is, electric charges interact
via the exchange of photons. We would say that the photons mediate the electromagnetic
force.
This idea has proven to be most useful in particle physics, and the modern viewpoint is
that all of the forces are transmitted via the exchange of different bosonic mediator particles.
As we’ve discussed, the electromagnetic force is carried by spin-1 photons, which couple to
electric charge. This means they interact with (are absorbed and emitted by) electric charge.
The photon, itself, is not charged, and so photons don’t interact with each other (at least
to a first approximation). This means that electromagnetism is a linear theory, obeying the
principle of superposition. This makes it the easiest of the forces to understand and compute.
The fundamental theory of electromagnetism is called “Quantum Electrodynamics” (QED).
It was the first theory of the forces to be understood in a deep way, and is the pride and
joy of theoretical physics, obtaining extremely accurate results which agree precisely with
experiment to many decimal places. There has never been a prediction of QED which did
not agree with experiment.
The interaction of electric charges via the exchange of photons can be visualized in a very
nice manner using a method developed by Richard Feynman, seen in the Feynman diagram
in Figure 4. In this diagram, time runs along the vertical axis, while space runs along the
horizontal. This diagram represents the simplest interaction of two electrons (the straight
lines labeled e−), by means of a photon (the wiggly line labeled γ). This is only the first-order diagram; there are, in fact, an infinite number of increasingly complicated diagrams.
All of the diagrams must be added together to obtain the final answer.
It seems like this would be an impossible task, adding up all of the different diagrams.
However, there is a bit of good luck, since more complicated diagrams contribute less to
the final answer. One finds that the first few (relatively easy to calculate) diagrams tell the
majority of the story. QED (as well as each of the forces described below) is a beautiful
theory, which we unfortunately do not have time to discuss properly; details may be found
in any book on particle physics or quantum field theory.
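The claim that complicated diagrams contribute less can be illustrated schematically: each additional order brings (roughly) another power of the fine-structure constant α ≈ 1/137, so successive contributions shrink rapidly. This is a toy estimate of the suppression, not a real amplitude calculation:

```python
# Toy illustration of QED perturbation theory: each additional order is
# suppressed by (roughly) another power of the fine-structure constant
# alpha ~ 1/137, so the first few diagrams carry almost the whole answer.
alpha = 1 / 137.0
for order in range(1, 5):
    print(order, alpha**order)   # contributions shrink rapidly with order
```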
We have discussed the electromagnetic force from the modern viewpoint, but this is only
one of the fundamental forces. Can we explain the other three in a similar way? Let’s
start with the strong force. We also need to have the quarks interact by exchanging some
sort of particle. The strong force holds the quarks together inside the protons or neutrons
Figure 4: The Simplest Electromagnetic Interaction.
(the nucleons), and this force leaks out of the nucleons just a little bit, holding the nucleus
together. So, the force acts like glue, binding the quarks together. For this reason, the
messenger particles have been called gluons, and there are eight of them! This is to be
contrasted with the photon, for which there is only one. The gluons couple to a generalized
charge called “color charge.” This charge has nothing to do with real color, but is instead the
quantity defined in analogy with electric charge which couples strongly-interacting particles
to the gluons.
The equations describing the quantum theory of the strong force, called “Quantum Chromodynamics,” or QCD, are a bit more complicated than those of QED, and are not fully
understood. However, one can perform calculations using Feynman diagrams for QCD, like
those seen in the first-order diagram in Figure 5, which represents the interaction of an up
quark with a down quark, exchanging a gluon. In this case, however, the summation of
the diagrams is not so simple, since higher-order diagrams contribute about as much as the
simpler ones.
Figure 5: The Simplest Strong Force Interaction.
That’s two forces down - what about the weak force? Once again we postulate that the
force comes from the exchange of a particle. Beta decay proceeds due to the weak force
and has a negatively-charged down quark becoming a positively-charged up quark, with the
emission of a negative electron and neutral particle. Although the total charge was conserved
in this process, the charge of the initial and final quarks was different. This means that the
mediator particle is charged. We can draw the (simplest) Feynman diagram for this decay
in Figure 6. The particle, technically called an “intermediate vector boson,” is usually just called a “W” particle. In this process, the W must be negative, with the same charge as the
electron.
Figure 6: The Simplest Weak Force Interaction.
We also have another difference in this diagram in that the arrow for the antineutrino is
running from top to bottom; in other words, it seems to be running backward in time! This is
actually a convention, where an antiparticle may be viewed as a regular particle propagating
backwards in time. This is completely consistent with all ideas of causality and poses no
problem - it’s only a convention.
Beta decay is not the only possible weak interaction. For example, a muon neutrino can
interact with a down quark, producing a muon and an up quark. The neutrino carries no
electrical charge, and so this interaction could not go via photons (also, photons don’t change
the type of particle). The neutrino starts off neutral, and then becomes a negatively-charged
muon, while the negatively-charged down quark becomes a positively-charged up quark. The
neutrino had to hand over one unit of positive charge to the down quark. This means that we need a positively-charged mediator particle, the W⁺.
Finally, there are still more interactions which preserve the electric charge (often called “neutral currents”) and are mediated by still another messenger particle, the Z⁰. So, altogether there are three different messenger particles. The weak bosons also have another very interesting property compared to the other force mediators: they are massive! The W± particles have a mass equivalent to roughly 80 protons, while the Z⁰ has a mass of roughly
92 protons. This property actually caused some theoretical difficulties in trying to formulate a quantum theory of the weak interactions, difficulties which were only satisfactorily resolved by merging the weak and electromagnetic forces into a single “electroweak” theory, which is discussed
below.
Finally, we come to gravity. One would like to try to formulate a quantum mechanical
theory of gravity, in analogy with QED, as has been done with the other forces. We can draw
the simplest gravitational Feynman diagrams, like that in Figure 7, coupling the energy of
two particles to a gravitational messenger particle called a “graviton;” energy is the “charge”
of gravity. Unfortunately, when we try to do this we immediately run into problems.
At energy scales currently available in particle accelerators, quantum gravitational interactions are vanishingly small, with the ratio of the gravitational to electric forces between two protons F_G/F_E ∼ 10^−38. However, the strengths of the forces actually change depending on the energies available. In the case of gravity, the interactions become stronger at higher energies. At the Planck energy, E_Planck ∼ 10^18 GeV, it is expected that gravity will
become the dominant interaction. This is relevant for understanding systems in which quantum gravity is important, such as black holes and the initial Big Bang singularity. A
theory of quantum gravity contains numerous conceptual issues which have not been satisfactorily overcome, and the graviton remains experimentally undiscovered. However, as long
as we limit the energy densities of our investigations to scales smaller than the Planck scale,
quantum gravitational interactions are expected to be negligible. As we will see, inflationary dynamics will often take place at sub-Planckian scales and hence will be describable by
General Relativity.
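The quoted weakness of gravity can be reproduced from the dimensionless gravitational coupling for two protons, G m_p²/(ħc), which is the "strength" figure quoted for gravity in Table 3. The constants below are standard values, not figures from the text:

```python
# Dimensionless gravitational coupling for two protons, G * m_p^2 / (hbar * c).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
m_p = 1.673e-27    # proton mass, kg

alpha_G = G * m_p**2 / (hbar * c)
print(alpha_G)     # ~6e-39, i.e. of order 10^-38
```

Compared with the electromagnetic coupling α ≈ 1/137, this makes plain just how feeble gravity is between elementary particles.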
Figure 7: The Simplest Gravitational Interaction.
Continuing our list of the particles populating the Universe, we have the different force mediator particles in Table 3, including the spin (in units of ℏ), the relative interaction strength (normalizing the strong force to unity), and the effective range of the force. The four
forces appear to be very different, but it is the hope that these forces may turn out to actually
be just different aspects of the same “Unified Force,” in much the same way as electricity
and magnetism are not two separate forces, but really are just different manifestations of
the same electromagnetic force. This hope has already been partially realized when the
electromagnetic and weak forces have been unified in the electroweak theory at energy scales
above roughly 100 GeV. There is some theoretical evidence that the strong force might merge
with the electroweak force at energies of about 10^16 GeV into a single “Grand Unified Theory” (GUT), but the current experimental range of particle accelerators is far too low to confirm
this. Finally, there is some expectation that all of the forces might be unified at the Planck
scale, but this is still unknown at present.
We can already see that there are lots of particles in the Universe! We have the quarks, the
leptons, and the messenger bosons. All of these particles (except the graviton) are part of the
Force Mediators
Interaction        Particle        Spin (ℏ)   Strength   Range (cm)
Electromagnetism   Photon, Aµ      1          ∼ 1/137    ∞
Weak Force         W±, Z⁰          1          ∼ 10^−5    ≲ 10^−16
Strong Force       Gluons, gµa     1          1          ≲ 10^−13
Gravitation        Graviton, hµν   2          ∼ 10^−38   ∞

Table 3: The Force Mediator Particles.
Standard Model of Particle Physics, which also includes one more particle, called the Higgs. The Higgs is thought to be the origin of mass, interacting with the other particles, which are initially massless. The Higgs is theoretically predicted, but remains (as of this writing) the last part of the Standard Model yet to be found experimentally. However, with the construction of the Large Hadron Collider (LHC) there is every expectation that the Higgs will be found.
Furthermore, each of these particles has an associated antimatter partner (although in some cases the particle is its own antiparticle, as the photon is). Antimatter was originally predicted in Dirac’s relativistic theory of the electron, and the prediction was then extended to all the particles. The antimatter partner of the electron is the positron, which was the first to be discovered. All the rest of the antimatter partners are simply called the “anti”-particle, for example the antiproton, the antineutrino, etc. When a particle and its antipartner interact, they mutually annihilate, producing new particles. In the case of an electron and positron reaction, the most common products are two gamma rays (two being required by momentum conservation), but other products are possible, depending on the initial energies of the particles.
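For the electron-positron case at rest, each gamma ray carries the electron's rest energy E = m_e c². The sketch below uses standard SI values for the electron mass and speed of light (consistent with the 0.000511 GeV/c² entry in Table 2):

```python
# Each photon from electron-positron annihilation at rest carries the
# electron's rest energy, E = m_e * c^2.
m_e = 9.109e-31   # electron mass, kg (standard value)
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electron-volt

E_photon = m_e * c**2 / eV
print(E_photon / 1e3)   # ~511 keV per photon
```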
With the exception of the graviton (and also the Higgs, though possibly for not much
longer), all of the particles listed above have been discovered experimentally. Modern
high-energy physics theories also contain more speculative ideas including supersymmetry
(SUSY), which relates bosons and fermions. SUSY predicts that every boson has an associated fermionic partner, while every fermion has an associated bosonic partner, doubling the
number of particles in our list above. Supersymmetry has not been found experimentally
either, but there are high hopes for finding supersymmetric particles at the LHC. The inclusion of SUSY makes the case for GUTs even stronger, with the unification of the couplings becoming much
more exact. There is also hope that SUSY might provide an explanation for dark
matter, which we'll discuss below.
There are even more speculative ideas emerging from string theory, which postulates that
particles are actually tiny vibrating bits of “string.” Depending on whether the strings
were open or closed loops, and depending on how the strings vibrate, one obtains different
particles. In this way unification is achieved since all the particles come from the same
fundamental string. String theory also naturally includes not only SUSY, but even the
graviton; string theory is a natural theory of gravity! String theory also predicts that the
Universe contains extra spatial dimensions, leading to a total of 10 or 11, depending on the
theory. These extra dimensions are often taken to be rolled up into tiny shapes undetectable by current accelerators. Alternatively, the extra dimensions could even be infinitely
large, but our Universe could be confined to a 3-dimensional slice (or brane) of the extra
dimensions, rendering them unobserved.
String theory also provides possible answers for some other questions including why
the forces appear to be so different in scale, which is the so-called “hierarchy problem.”
Overall, string theory appears very promising, holding out the possibility of being a "Theory
of Everything" (TOE). Unfortunately, the ideas are very speculative and are unlikely to be
experimentally verified for a very long time. This stems from the tininess of the strings, which
are ∼ 10^−33 centimeters, far below current (or even projected) experimental
scales. If string theory is correct, it is likely to remain unverified for a very long time (although
there are some indirect observations which would lend credence to string theory).
We have quite a list of particles - a good theory of the Universe should explain where
they came from!
1.0.5 The Universe Doesn't Have Any "Bad" Things.
According to Maxwell’s equations of electrodynamics, electric charges come in single units
(called monopoles), but magnetic fields come from electric currents. These currents always
produce a north and south magnetic pole; there is never just a magnetic north pole without the corresponding magnetic south pole (and vice-versa). Maxwell’s equations explicitly
forbid any magnetic monopoles. However, it is not at all difficult to reformulate Maxwell’s
equations to include magnetic monopoles. The only reason that they do not include them
is because magnetic monopoles have not been found, experimentally. However, there are
many speculative theories of physics which do include magnetic monopoles. In fact, Dirac
originally predicted that, if monopoles did exist, then it would explain why electric charge
is quantized. Furthermore, GUTs often predict that monopoles will be produced at high
energies. If monopoles do exist in the Universe, then they must be in such small numbers
that we never see them.
Einstein’s General Relativity allows for a particular type of object called a cosmic string.
A cosmic string is a very dense, one-dimensional string-like object, under tremendous tension,
which would produce an interesting gravitational effect. The actual gravitational attraction
of an extended cosmic string is, in fact, zero! However, the string has an effect on the
surrounding space, leading to a deflection of light around the string. Because gravitational
lensing is seen regularly in astronomical observations, one can place limits on the number of
cosmic strings in the Universe, and current observations have found none.
Magnetic monopoles and cosmic strings are examples of topological defects. A topological
defect is formed during a phase transition in the early Universe. Phase transitions are familiar
from everyday life: when ice melts into water, for example, the material transforms from one state to
another. Phase transitions in the early Universe involve the Universe transitioning from one
vacuum state into another as it cools, breaking a symmetry in the theory. The electroweak
theory does the same thing, splitting the fundamental quanta of the electroweak theory into
the photon and W ± and Z 0 particles; a more symmetric system has broken down into a less
symmetric system via a phase transition.
Another example of a topological defect may be found in a ferromagnetic material, like
iron. A magnetic domain is a region of a magnetic material in which all the magnetic
moments line up and point in a single direction. A magnetic material is made up of a large
number of these magnetic domains, and each domain is separated from the other domains
by a “domain wall.” The domain wall is the topological defect, separating one magnetic
domain from another.
Topological defects formed during phase transitions in the Universe come in several different varieties, which we’ve actually discussed already. Zero-dimensional topological defects are
the magnetic monopoles, one-dimensional defects are the cosmic strings, and two-dimensional
defects are domain walls. Observations have not shown any of these objects, but GUTs often
predict that they should exist. Again, if they do exist then their numbers should be very
small.
One might include antimatter in the list of “bad things” that we don’t see since the
Universe seems to have so much more matter than antimatter. However, we do actually
see some antimatter occurring naturally. The question of why we see so little antimatter
relative to matter is still unanswered, but it is thought that the mechanism that produced
matter and antimatter in the early Universe produced a very slight excess of matter. For
every billion particles of antimatter produced there were about a billion and one of matter.
The antimatter annihilated with the matter, leaving only the excess matter particles which
then populated the Universe. So, although naturally-occurring antimatter in bulk is rare, it
is not a “bad” thing.
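The "billion and one" bookkeeping above can be made explicit with a toy calculation (the numbers are schematic, chosen to match the text, not a real computation):

```python
# Schematic matter-antimatter bookkeeping: for every billion antimatter
# particles, roughly a billion and one matter particles were produced.
antimatter = 1_000_000_000
matter = antimatter + 1

annihilated_pairs = min(matter, antimatter)   # every matched pair annihilates
leftover_matter = matter - annihilated_pairs  # only the slight excess survives

# Fraction of all initially-produced particles that survives annihilation
survival_fraction = leftover_matter / (matter + antimatter)
print(leftover_matter)     # 1
print(survival_fraction)   # ~5e-10
```

That tiny surviving fraction is what populates the Universe with matter today; everything else ended up as radiation.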
A good theory of the Universe should not only explain where the good things come from,
but also why we don’t see any bad things!
1.0.6 There Are "Dark" Things.
Our discussion of the constituents of the Universe actually leaves out two pieces (that we
know about). The identity of these pieces is completely unknown, but they produce very
interesting effects, which we'll discuss now. Due to the still-mysterious nature of these pieces
they are called "dark": dark matter and dark energy. We'll start with dark matter,
and then discuss dark energy in the next section.
Observations of galaxies have shown some very peculiar behavior. If we count up the
visible stars then the majority of the stars lie near the center of the galaxy, with the stellar
density decreasing as we move outward. Thus, most of the mass should be concentrated in
the center. Since gravity gets weaker as we move away from a mass, the outer stars should
move at a different rate in their orbits than do the stars closer to the center. Simply setting
the gravitational force law, F_G = GMm/r², equal to the centripetal force, F_C = mv²/r,
suggests that the stars should move with speeds v = √(GM/r), where r is the distance from
the center. In other words, the stars further out from the center of the galaxy should move
more slowly than do those closer to the center, in exactly the same way that planets more
distant from the Sun move more slowly than those closer to the Sun.
Careful observations, beginning with Zwicky's cluster measurements in the 1930s and firmly
established by Rubin's galactic rotation curves in the 1970s, show that the velocity does
not fall off with distance! In fact, the speeds tend to approach a constant value as the
radius increases. What this suggests is that the matter in a galaxy is not concentrated in
the center, but is rather dispersed throughout the galaxy (extending even past the visible
stars). In order for the velocity curve to go to a constant at large distances we would need
the mass to increase as we move out, i.e., M ∝ r.
We can explain the different speeds by immersing the visible galaxy in a large roughly
spherically-symmetric “cloud” of massive particles, also called a halo. As is well-known, the
gravitational force on a mass inside a material is different from the force from a point mass;
for example, inside a constant density cloud, the gravitational force is proportional to the
distance away from the center. In order for the mass to be proportional to distance, the
density of these particles needs to fall off as ρ ∼ r^−2 at large distances, concentrating the
particles near the center. These hypothetical particles are called dark matter, where “dark”
means that it doesn’t interact electromagnetically, and so doesn’t emit light.
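The contrast between the expected Keplerian fall-off and a flat rotation curve can be sketched numerically. This is an illustrative toy model: the galactic mass and halo parameters below are assumed round numbers, not fits to data:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def v_point_mass(M, r):
    """Circular speed if all mass M sits at the center: v = sqrt(GM/r)."""
    return math.sqrt(G * M / r)

def v_isothermal_halo(rho0, r0, r):
    """Circular speed inside a halo with density rho(r) = rho0*(r0/r)^2.
    Enclosed mass is M(r) = 4*pi*rho0*r0^2*r, so v is independent of r."""
    M_enclosed = 4.0 * math.pi * rho0 * r0**2 * r
    return math.sqrt(G * M_enclosed / r)

kpc = 3.086e19             # meters per kiloparsec
M_gal = 2e41               # ~1e11 solar masses, rough galactic mass (assumed)
rho0, r0 = 1e-21, 1 * kpc  # illustrative halo parameters (assumed)

v10 = v_point_mass(M_gal, 10 * kpc)
v40 = v_point_mass(M_gal, 40 * kpc)
print(v40 / v10)  # quadrupling the radius halves the speed (~0.5)

# The halo speed is the same at every radius: a flat rotation curve.
print(v_isothermal_halo(rho0, r0, 10 * kpc))
print(v_isothermal_halo(rho0, r0, 40 * kpc))
```

The point-mass speeds fall as r^(−1/2), exactly the Keplerian behavior described above, while the ρ ∼ r^−2 halo gives M ∝ r and hence a constant speed.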
This discussion on galactic rotation rates relied very heavily on the correctness of Newtonian dynamics at large distance (galactic) scales. Shouldn’t we have used Einstein’s theory
of gravity? For galactic distance scales, it turns out that Newtonian gravity is a good enough
approximation that it is perfectly fine to use it. General relativity is really only needed
on very large distance scales (say, megaparsec scales and beyond), or when the gravitational fields are
extremely strong, as near black holes; galactic dynamics are well-described by Newtonian
gravity.
However, we find very different behavior for the galaxy rotation rates than we expect,
based on Newton’s theory of gravity. We have attributed it to a dark matter particle, but
one might wonder if the dynamics might be due, instead, to the modification of Newtonian
gravity on large scales. If Newton’s theory of gravity changes from the usual inverse-square
law, becoming instead FG ∼ 1/r at large distances, then the galactic rotation curves will
have the observed behavior.
A possible modification of gravity is called modified Newtonian dynamics, or MOND,
and has been considered for some time. MOND was considered a serious contender to the
particle theory of dark matter for quite a while, but then a very interesting observation was
made of the Bullet cluster [4]. The Bullet cluster is a system of two colliding clusters of
galaxies, as seen in Figure 8.
Figure 8: The Bullet Cluster shows the best evidence for dark matter particles.
During the collision, most of the stars miss each other, but the gases comprising the
galaxies interact electromagnetically leading to friction and therefore heating to temperatures
of around 107 Kelvins. Upon heating the gases emit X-rays, which are seen as the pink areas
of the image (the galaxies are seen optically in the picture, as well). The gases are comprised
of the usual baryonic matter and account for the visible matter in the galaxies. The blue
areas show the distribution of mass in the system via gravitational lensing. Notice that the
pink and blue areas overlap only slightly, meaning that most of the matter comprising the
galaxy is not emitting X-rays, and so is dark.
Due to the frictional slowing caused by both electromagnetic and gravitational interactions, the baryonic matter moves slightly more slowly than the non-baryonic matter which
interacts only gravitationally at these large distance scales, leading to the separation of pink
and blue. According to MOND the hot gas is still the most massive component in the
galaxies, and so the gravitational lensing effect would be centered on the gas, leading to no
separation between the pink and blue areas. Observationally, the Bullet cluster seems to
rule out MOND as a viable theory, reinforcing the need for a "dark" particle, which in
fact comprises the majority of matter in the Universe. It is expected that the dark matter
provided the initial seeds for structure (e.g., galaxy) formation in the early Universe, as well
as contributing to anisotropies in the CMB.
Now, the question is “what is the dark matter particle?” There could be all sorts of different contributions to the extra mass of the galaxies. For example, neutrinos seem to fit the
bill, since they don’t interact electromagnetically. However, neutrinos are nearly massless,
and so move at highly relativistic speeds which would not allow them to clump together
under their mutual gravitation. If the dark matter in the early Universe (called primordial
dark matter) provided the initial gravitational seeds to form galaxies, then neutrinos could
not help. While neutrinos might contribute some amount to dark matter, this relativistic
(hot) component must contribute only a small amount.
We thus expect dark matter to be cold, or non-relativistic. This means that the dark
matter particles will be massive, since heavy particles don’t move as fast, and so will tend
to clump together gravitationally. As we will discuss later, the very early Universe was
extremely hot, which provided plenty of energy to create particles of every type. Some of
these particles annihilated or decayed, and the leftover ones (including dark matter) filled the
Universe. If the particles annihilated too quickly or too slowly, then we wouldn't get the observed
dark matter density. It turns out that if the particles interact via the weak force then the
dark matter density comes out right. If the particles interact either electromagnetically
(which is already ruled out, since we don't see any light coming from them) or strongly, then
the annihilation rate is too fast and too little survives. If the particles interact only gravitationally then the annihilation rate
is far too slow; weak seems just right, predicting the correct relic abundance of dark matter.
So, we expect the dark matter particles to be weakly-interacting massive particles (WIMPs).
There are no good candidates in the Standard Model of particle physics, but the more speculative theories (particularly SUSY) provide excellent candidates. The lightest superpartner
(LSP) is stable, protected by a particular symmetry (called “R”-parity), and having no
lighter superparticles to decay into, and provides a possible dark matter candidate. The
most likely superpartner is called the neutralino, χ, which is actually a mixture of the superpartners of the Z boson, the photon, and the Higgs particle. Theories with extra dimensions, including string theory, also predict an
infinite number of additional particles (called a Kaluza-Klein, or KK, tower) of ever-increasing masses. These particles also have the potential to describe dark matter, but are more
speculative than even SUSY.
So, we see that observations require the existence of a brand new particle, so far undiscovered by experiment. However, this is not the only unexpected observation that has been
found. Observations in the early 20th century found that distant galaxies are moving away
from us with a speed depending on their distance. This led to the fundamental realization
that the Universe is expanding, suggesting that it has a finite age. This surprising observation was compounded in the late 20th century with the additional observation that the
rate of the expansion is actually accelerating! Physics is now faced with the non-trivial task
of explaining where this acceleration comes from, which leads to the idea of dark energy, to
which we now turn.
1.0.7 The Universe is Expanding!
Looking out at the night sky, we see a blanket of stars which seems unchanging, and it would
be natural to expect that the Universe has always been as we see it. However, this actually
leads to several apparent problems. Among the first to be thought up, known as Olbers'
paradox, asks why the night sky is dark. Olbers suggested that if the Universe were infinite
and uniformly filled with stars, then any line of sight should end on a star. Therefore
the night sky should be as bright as the daytime sky. One might think that the starlight
could be absorbed by intervening dust and gas, but this doesn’t solve the problem. The dust
would absorb the starlight and heat up, eventually reaching the same temperature as the
star and re-radiating the light to us.
One possible way out of Olbers' paradox is that the stars turned on at some point in the
past, and this is the case. We’ll return to this discussion below, but it turns out that there
is an even bigger problem. If the Universe was infinite and uniformly distributed with stars,
then it could form a static gravitational system. The stars wouldn’t collapse towards the
center of mass, since there would be no center of mass to collapse to! However, this system
is extremely unstable, like a pencil balanced on its tip. Any slight perturbation of the stars,
pushing any two closer to each other by even a small amount, would lead to a gravitational
instability. The two stars would attract, moving together, leading to a single area of greater
mass. This causes some stars to collapse together, while others drift apart. This leads to a
runaway gravitational collapse which would eventually lead to a sky devoid of any stars at
all, save perhaps for large collections of them in individual areas.
The question of whether the Universe is static or evolving was finally answered only near
the start of the 20th century, by observing the light from distant galaxies.
Since the stars in these galaxies are the same, overall, as the stars in our own, we know what
sort of light we should expect to see. However, upon looking at this light we see that it is not
quite right; the light from the most distant galaxies is redder than we should expect. The
explanation for this may be found in the familiar Doppler effect: we know that light emitted
from a source moving away from us has its wavelength stretched out, leading to a redshift.
A source which is moving towards us would have its wavelength compressed, leading to a
blueshift. This means that the distant galaxies are moving away from us!
This discovery was made by Hubble in the 1920s, and Figure 9 shows his plot of recessional velocity of a galaxy versus the distance to that galaxy, in megaparsecs [5]. There
is a very clear dependence of velocity on distance, leading to a linear relationship, v ∝ d.
Hubble was the first to note this relationship, which has since been called Hubble’s law.
We’ll discuss Hubble’s law in detail later; for now, it’s enough to realize that the Universe is
expanding.
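Hubble's law can be written as v = H₀d. A minimal sketch, assuming the modern value H₀ ≈ 70 km/s/Mpc (the text itself quotes no number):

```python
# Hubble's law: recessional velocity grows linearly with distance, v = H0 * d.
# The modern value H0 ~ 70 km/s/Mpc is an assumed input, not from the text.
H0 = 70.0  # km/s per megaparsec

def recession_velocity(d_Mpc):
    """Recessional velocity (km/s) of a galaxy at distance d_Mpc megaparsecs."""
    return H0 * d_Mpc

print(recession_velocity(100))  # 7000 km/s for a galaxy 100 Mpc away
```

Note this linear relation only holds for relatively nearby galaxies; at large redshift the full expansion history matters.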
Figure 9: Hubble found that the Universe was expanding, as seen here in his original plot
from 1929.
Einstein originally believed the Universe to be static, but he knew of the problems with
stability. Therefore, when he applied his theory of gravity to the Universe, he added an
additional constant term to his equations that could provide a repulsive force, counteracting
the collapse. When Hubble discovered the expanding Universe, Einstein no longer saw a
need for the new term and banished the cosmological constant from his equations, calling it
his “biggest blunder.” For the better part of a century, this term was believed to be zero,
but everything changed in 1998 when two teams of astronomers looked at distant type Ia
supernovae [6, 7].
Type Ia supernovae occur in binary star systems where a white dwarf accretes material
from a nearby red giant star. The mass of the white dwarf increases until it reaches about 1.4
solar masses. At this point, called the Chandrasekhar limit, the white dwarf can no longer
support itself via electron degeneracy pressure (an effect of the Pauli exclusion principle)
against gravity and it begins to collapse. The collapse raises the temperature and density
in the core until carbon fusion ignites. In an ordinary star the heat released would cause
expansion and cooling, shutting the burning off, but the degenerate white dwarf cannot
respond this way, and the result is a runaway thermonuclear burn. By a process that is still
not completely understood in detail, this runaway burning blows the white dwarf apart in
a tremendous explosion (which, in some cases, blows the companion red giant star away).
For a brief while the supernova explosion is brighter
than all the rest of the stars in the galaxy combined (see Figure 10)!
Because the mechanism of type Ia supernova is so regular, occurring when the mass
reaches the Chandrasekhar limit, the explosion has a characteristic power released (called
the luminosity), which waxes and wanes in a predictable manner. This intrinsic luminosity
Figure 10: Hubble Space Telescope Image of Supernova 1994D in Galaxy NGC 4526 [8].
The very bright spot, outshining all the rest of the stars in the galaxy, is a single star going
supernova!
leads to a specific brightness (apparent magnitude) at a specific distance, falling off inversely
as the square of the distance to the supernova. So, if we measure the brightness of the
supernova, then we can determine the distance. Objects of this type, in which the distance
can be inferred from measurements of the brightness, are called standard candles, and are
among the most important distance measurement techniques in astronomy.
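The standard-candle logic is just the inverse-square flux law, F = L/(4πd²), solved for d. A minimal sketch, with an assumed round-number peak luminosity:

```python
import math

def flux(L, d):
    """Observed flux from a source of luminosity L at distance d (inverse-square law)."""
    return L / (4.0 * math.pi * d**2)

def distance_from_flux(L, F):
    """Invert the flux law: if the intrinsic luminosity L is known (a standard
    candle), the measured flux F gives the distance."""
    return math.sqrt(L / (4.0 * math.pi * F))

L_peak = 1e36      # rough peak luminosity of a type Ia supernova, watts (assumed)
Mpc = 3.086e22     # meters per megaparsec
d_true = 100 * Mpc

F_obs = flux(L_peak, d_true)
d_inferred = distance_from_flux(L_peak, F_obs)
print(d_inferred / Mpc)  # ~100: the distance is recovered from the brightness
```

In practice astronomers work in magnitudes and apply light-curve corrections, but the inverse-square inversion is the core of the method.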
In 1998 two teams of astronomers looked at dozens of type Ia supernovae. They knew that
the Universe is expanding, and it is expected that gravity should be slowing that expansion
rate, just as gravity slows the rise of a ball thrown into the air. The teams set out to measure
that rate of slowing. Because the Universe is expanding, light from distant supernovae should
be redshifted, just as is the light from distant galaxies, which we can also use as a method
for determining distance. If gravity is slowing down the expansion, then the light that we
see should be brighter than we should expect based on the redshift. This is because the
light was emitted from the supernova at some point in the past. The wavelength of the light
is stretched out on its way to us by the Universal expansion, and so the redshift depends on
the speed that the supernova had when the light was emitted (i.e., the speed of the expansion
in the past). However, the brightness that we see depends on the distance that the supernova is
now, because the light is spread out over a spherical surface area (this is just the flux law,
required by conservation of energy). If the expansion rate is slowing down due to mutual
gravitational attraction, then a supernova with a given redshift is closer to us now than it
would be if the expansion rate had stayed constant. Since the distance is smaller, the supernova
appears brighter than we would expect based on the redshift alone. So, if the Universe is
slowing down its rate of expansion, then the supernovae should look brighter than expected.
This difference in brightness is exactly what the
two teams set out to measure.
When the astronomers actually made the measurements, they found that the supernovae
were, in fact, dimmer than they were expecting! This implies that the Universal expansion
rate is speeding up; the Universe is expanding faster today than it was yesterday! This is
completely contrary to the expected result; it means that there must be some sort of agent,
resisting the gravitational collapse, accelerating the Universe’s expansion. The question now
is, “what is it?”
This unknown “whatever-it-is” has been called Dark Energy, but this is only a name for
our ignorance. While there are many theories, the identity of dark energy is still completely
unknown. The currently prevailing theory, however, is that dark energy could be Einstein’s
old nemesis, the Cosmological Constant.
The Cosmological Constant (CC) fits the bill, giving a repulsive force acting on megaparsec scales. Furthermore, it also has a very straightforward interpretation coming from
quantum mechanics: it is the energy of the vacuum! Remember the uncertainty principle,
which says that you can't know the exact energy of a system to arbitrary accuracy. This
uncertainty leads to fluctuations in the energy about the ground state (called zero-point
fluctuations), which are most familiar in the quantum mechanical harmonic oscillator. Each
oscillator carries a zero-point energy, ħω/2, which contributes to the overall energy of the
system. Adding the contributions from all the different frequencies gives an infinite energy.
Typically this is not actually a problem; just as only differences in potential energy are
physically relevant, so too is the case here. The infinite vacuum energy sets the background,
against which all other energies are measured. Since only the differences are important,
one can rescale the energy, subtracting away the infinite contribution. However, one finds a
problem when applying this idea to gravitation. Einstein tells us that mass and energy are
equivalent, and so anything with energy gravitates. This means that the infinite background
should create an infinite gravitational effect!
There are artificial ways out of this problem. For example, in exact supersymmetry one
finds that the total vacuum energy is precisely zero, with the contributions from all the
particles being canceled by contributions from their superpartners. However, we don’t see
SUSY at everyday energies, and so it is a broken symmetry, which will then give a vacuum
energy. The vacuum energy obtained upon the breaking of SUSY then looks to be again
infinite.
One might suggest that we don’t get contributions to the vacuum energy from all of
the frequencies, but rather only up to a certain cutoff frequency, beyond which our theory
breaks down. In this case, the infinite effect becomes finite, which can be dealt with. This
means that our theory is no longer fundamental, but only an effective theory, which works
well below the cutoff scale, but fails above it. Above the cutoff scale, one needs the truly
fundamental (but unknown) theory (often called the UV completion of the theory) to which
the effective theory is only an approximation. A familiar example of this is Special Relativity
reducing to Newton’s laws at low speeds.
The problem with this idea is that the natural cutoff is of the order of the Planck energy,
or about 10^18 GeV, which gives a CC so large that the Universe would immediately have
blown itself apart! Upon performing this straightforward calculation, one finds that the
observed value of the CC is ∼ 10^−120 of the theoretical result; i.e., it agrees with zero
for 120 decimal places, and only then becomes nonzero! This wildly differing expanse between theory and experiment
is called the Cosmological Constant Problem, and has been called the “Biggest Mystery in
Physics.” Why is the CC so small? It is the job of modern theoretical physics to answer
why the CC is not infinite (as predicted by quantum mechanics), and not precisely zero (as
would be the case with unbroken SUSY), but instead the tiny value that it is.
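The size of this discrepancy can be estimated by comparing vacuum energy densities, which scale as the fourth power of the relevant energy scale. The precise exponent quoted (120 versus ~123) depends on conventions; the inputs below are ballpark values, not precise measurements:

```python
# Order-of-magnitude sketch of the cosmological constant problem.
# Vacuum energy densities scale as (energy scale)^4.
E_planck_GeV = 1.2e19        # Planck energy scale, GeV (rough)
E_darkenergy_GeV = 2.4e-12   # observed dark-energy scale, ~2.4 meV in GeV (rough)

rho_theory = E_planck_GeV**4      # naive quantum-mechanical expectation
rho_observed = E_darkenergy_GeV**4  # what the acceleration data imply

ratio = rho_observed / rho_theory
print(ratio)  # ~2e-123: a discrepancy of more than 120 orders of magnitude
```

However the inputs are chosen, the mismatch is staggering, which is why the problem has attracted so much theoretical attention.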
One particular area of interest regarding the Cosmological Constant asks “is it really a
constant?” The simplest description of dark energy is a constant vacuum energy, taking the
same value everywhere in the Universe. However, as we will see, there are other objects that
can mimic a CC; for example a slowly-varying scalar field can approximate the effect of a
CC to arbitrary accuracy. Because dark energy is so completely unknown, the observational
properties of the Universal expansion are a subject of great theoretical interest. If it is found
that the rate of acceleration changes with time, then one can rule out a CC, and proceed to
even more interesting explanations.
As has already been discussed, the Universe is spatially flat. This flatness tells us something about the density of the Universe. If the density were too large, then the Universe would
recollapse at some point due to mutual gravitational attraction, regardless of the small Cosmological Constant; the Universe would be closed. If the density were too small, then the CC would take
over and blow the Universe apart at an ever-increasing rate, leading to an open Universe.
If the density is critical (just right), then the Universe expands forever, asymptoting to
a finite acceleration. This behavior is analogous to the escape velocity of a projectile. If the
initial velocity is too small then the object is pulled back to Earth, in analogy with a closed
Universe. If the initial velocity is larger than escape, then the projectile escapes with extra
velocity, as in an open Universe. When the projectile is fired at escape velocity, then it just
barely escapes with no extra speed, as in the critical (flat) Universe case.
The flatness places constraints on the total density of the Universe, telling us that the
density is extremely close to critical. Since the total density is determined by the densities
of the individual components (ordinary or baryonic matter, dark matter, and dark energy),
the densities must add up to critical. We’ve already seen that there is far more dark matter
than baryonic matter, and so the dark matter density will contribute more. Finally, on
large scales the dark energy is the important part, and so we expect that the dark energy
density will also be important. Observations of supernova redshifts, combined with measurements
of the CMB, have placed fairly precise values on each of these densities. We now know that the Universe is made up
of about 4% baryonic material, 23% dark matter, and about 73% dark energy; the majority
of the Universe is not only dark, but completely unknown as well! As we will see later, which
component of the Universe (matter or dark energy) dominates, and so drives the subsequent
evolution, changes as the Universe evolves, with dark energy becoming important only at
late times. This completes the mosaic of the Universe.
2 How Did It Get That Way?
We now have a very good picture of the Universe, even though the majority of it is completely
unknown. We now need to consider what this picture tells us about the evolution of the
Universe and how it came to be in such a state. As we will see, the attempt to answer this
question leads to some new questions. At first glance, it would appear that the Universe
could only have evolved from a very finely-tuned beginning; generic initial conditions do not
lead, at all, to what we see in the sky. Furthermore, it turns out that some of the conditions
are not only unlikely, but also impossible under the standard picture of Universal expansion.
We will return to these problems after we get an idea of where the Universe came from.
2.1 The Big Bang
We’ve seen that the Universe is expanding; distant galaxies are flying away from us with a
speed proportional to their distance away. This means that the galaxies were closer to us
yesterday than they are today, and closer, still, a million years ago. Let’s run the expansion
backwards as far back as we can. Eventually all of the galaxies would have been on top of
each other. Further back and all the stars would have been in the same place. Keep rolling
the film backwards and everything would have been at the same point. We are thus led
to believe that the entire Universe exploded from a single point! The Universe has been
expanding ever since, carried along by the energy of the explosion. This explosion has been
called the Big Bang.
The Big Bang model provides an excellent explanation for the Universe. It explains
why the galaxies are flying apart. It also explains where all the “stuff” in the Universe
came from. The initial energy of the Big Bang was extremely high (effectively infinite),
and some of this energy went into the expansion, while the rest of it went into the creation
of matter (via E = mc2 ). At first, lighter constituents were formed, such as photons,
neutrinos and quark-antiquark pairs (the pairs being required to conserve charge). The
matter and antimatter annihilated, leaving a slight excess of matter populating the early
Universe. Different quarks paired up into protons and neutrons, which then combined into
mostly hydrogen, plus a smaller amount of helium via primordial nucleosynthesis. Much
later, these primordial elements collapsed gravitationally into stars. Furthermore, because
the initial energy of the explosion was so great, and since temperature is just a measure of
the energy of a system, we can even understand the 2.73 Kelvin background temperature of
space as expansive cooling. Just as a gas-filled piston cools down as the volume is increased,
so too does the Universe. Tracing the expansion of the Universe backwards yields an age of
roughly 13.7 billion years, which we’ll discuss later.
But, we’re still not done; we can also explain the cosmic microwave background (CMB).
Right after the Big Bang, the density of the Universe was extremely high. The tiny Universe
was filled with charged particles that scattered photons back and forth between them, in
exactly the same way as a cloud scatters the light. Just as a cloud is opaque, so too was
the early Universe! About 379,000 years after the Big Bang, the density dropped enough
that the photons could no longer easily find a charged particle to scatter off of, and they began
freestreaming (this is the same reason why a cloud has a visible edge). The Universe became
transparent to visible light, and the CMB photons have been traveling to us ever since. This
is why the CMB has been called the “afterglow of the Big Bang.”
The photons were initially very high-energy, but have had their wavelengths stretched
by the Universal expansion, and are now in the microwave range. The CMB is the farthest
back in time that we can see optically, but because neutrinos interact weakly, they started
freestreaming much earlier than the photons. There is the hope that neutrino astronomy
could allow us to see further back, but the difficulty in detecting neutrinos has so far rendered
this intractable (in fact, gravitational waves would probe still earlier times, but they have
not yet been detected).
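The stretching of the photon wavelengths can be made quantitative: a blackbody temperature scales with redshift as T_emitted = T_today (1 + z). The sketch below assumes the conventional last-scattering redshift z ≈ 1100, a standard value that is not quoted in this text:

```python
# Expansion stretches photon wavelengths by a factor (1 + z); a blackbody
# temperature scales the same way: T_emitted = T_today * (1 + z).
T_today = 2.725          # K, present CMB temperature (quoted in the text)
z_cmb = 1100.0           # assumed redshift of the last-scattering surface

T_emission = T_today * (1.0 + z_cmb)
print(f"T at emission ~ {T_emission:.0f} K")   # ~3000 K
```

A temperature of a few thousand Kelvins is just what is needed for hydrogen to be on the verge of neutrality, consistent with the recombination story told below.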
We can see that the Big Bang model seems to work very well. There have been other
attempted explanations from time to time, such as a “steady state” model harkening back
to the static Universe idea, but these other models have not found wide support. Later, we
will see that we get precision agreement with cosmological data using the Big Bang model,
augmented with the inflationary ideas.
3 A Short History of the Universe
Now that we have a good basic idea of the evolution of the Universe, let’s look at the evolution
in a little more detail. We can actually trace the history of the Universe back to very early
times based on the known laws of nuclear and high-energy physics. The ambient temperature
of the Universe needs to fall to such levels that quarks can form into nucleons, nucleons and
electrons can form into atoms, and so on. This gives us an idea of the background energies
and, since an energy of 1 electron volt corresponds to roughly 12,000 Kelvins, of the background
temperatures, which can then be linked to the time since the Big Bang. Let’s look at some different
important times in the history of the Universe, listing the times, energies, and temperatures
in these eras.
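The energy-to-temperature conversion used throughout the timeline below is simply E = k_B T, which is where the "1 eV ↔ roughly 12,000 K" rule of thumb comes from. A minimal sketch:

```python
# Convert a background energy to a temperature via E = k_B * T.
k_B = 8.617e-5   # Boltzmann constant in eV/K

def kelvin_from_eV(E_eV):
    """Temperature whose thermal energy k_B*T equals the given energy in eV."""
    return E_eV / k_B

print(f"1 eV  -> {kelvin_from_eV(1.0):.3g} K")   # ~1.16e4 K ("roughly 12,000 K")
print(f"1 GeV -> {kelvin_from_eV(1e9):.3g} K")   # ~1.2e13 K
```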
• t = 0 seconds. This is the Big Bang singularity, the origin of everything! The entire
Universe is crammed down into a single point of effectively infinite density. The temperatures and energies are also effectively infinite. This era is completely beyond the
known laws of physics!
• t ≲ 10^-43 seconds. This is the Planck era, where the (unknown) laws of quantum
gravity become important. The background energy is about 10^18 GeV, giving a temperature of about T ≳ 10^31 K. This era is still completely unknown, but if string theory
is correct then string interactions should be important here. If the four forces are unified
as a single force, then it is at this point that gravity breaks away, leaving the other
three forces as a Grand Unified Theory (GUT).
• t ∼ 10^-36 seconds. The energy has dropped to 10^16 GeV, with a temperature of
T ∼ 10^29 K. It is thought that this could be the GUT scale, where the strong force
decouples from the electroweak force. Topological defects (like cosmic strings) may be
produced in this breaking. This era still resides in the realm of speculative theories
of physics.
• In the range of 10^-35 ≲ t ≲ 10^-14 seconds, the energies drop from 10^16 GeV to 10^4
GeV, while the temperatures drop from T ∼ 10^29 to T ∼ 10^17 K. This entire range
is still outside of the reach of current particle accelerators (although the lower limit is
just reachable at the LHC). In this region, the electric and weak forces are still unified
as the electroweak theory. If supersymmetry (SUSY) exists, then it is very likely to be
broken during these times. It is also expected that a period of accelerated expansion,
called inflation, occurs during this time (perhaps at around t ∼ 10^-34 seconds). The
investigation of inflation will form the majority of our work.
• t ∼ 10^-10 seconds. The energy has dropped to 10^3 GeV, or one TeV, and the temperatures have fallen to T ∼ 10^16 K. This range is right at the edge of current accelerators.
SUSY has broken, and below these energies the electroweak force has split into electromagnetism and the weak force.
• t ∼ 10^-5 seconds. The energy is now at about 100 MeV, and the temperatures are
about T ∼ 10^12 K. The background quark-gluon plasma has cooled down enough for
the quarks to form baryons (such as protons and neutrons) and mesons (quark-antiquark
pairs) in a way that is still not completely understood. However, the temperatures
are still too high for the nucleons to bond to form any more complicated nuclei (like
helium). The energy is now low enough that baryon-antibaryon pairs no longer form
(allowing net annihilation of the pairs), but there are processes that keep the numbers
of protons and neutrons in rough equilibrium. The tiny baryon-antibaryon asymmetry now comes into effect, leading to the slight excess of baryons after the particle-antiparticle annihilation.
• In the range 0.01 ≲ t ≲ 0.1 seconds, the energy drops from roughly 10 to 1 MeV. The
temperature is now T ∼ 10^10 K. In this range the weak interactions fall out of equilibrium since particles are getting farther apart, and so neutrinos start to freestream.
Furthermore, the processes that keep the numbers of protons and neutrons in equilibrium also shut off, which fixes the relative numbers of protons and
neutrons. The relative abundances of the primordial elements are determined by these
relative numbers of nucleons.
• We finally reach t ∼ 1 second. The energy is now about 0.5 MeV, and the temperature
has fallen to T ∼ 10^9 K. The energy density has dropped below the rest mass of
the electron, which means the electron-positron pairs are not replenished when they
annihilate. Once again, the very slight asymmetry between electrons and positrons
leaves a small excess of electrons after the annihilation. The temperatures are still far
too high to allow the free electrons to merge with the nuclei to form neutral atoms, however.
• At t ∼ 200 seconds the energy has dropped to about 0.01 MeV, and the temperature is
now T ∼ 10^8 K. At these energies the nuclear reactions become important and protons
and neutrons can form larger nuclei. Helium and other lighter elements are formed
during this Big Bang Nucleosynthesis (BBN). The observations of the abundances of
primordial elements are in very good agreement with theoretical predictions, and form
a very important check on the Big Bang theory.
• Now we skip ahead to t ∼ 10^4 years. The energy is now at about 1 eV, and the temperature is T ∼ 10^4 K, below the temperature at the center of the Sun but still too high for neutral
atoms to form. This is also the era in which the evolution of the Universe changes from
being dominated by radiation (photons and neutrinos), to being dominated by matter.
This era is called “matter-radiation equality.”
• At a time t ∼ 10^5 years, the energy has fallen to ∼ 0.1 eV, and the temperature
to T ∼ 10^3 K, or roughly the melting point of lead. The temperature has fallen so that
the free electrons can join with the free nuclei to form neutral atoms in a process called
recombination (which isn’t the best term, since the particles were never combined in the
first place). Now that the Universe is neutral, overall, photons don’t scatter as readily,
and begin to freestream. The Universe becomes transparent to radiation, leading to
the cosmic microwave background (CMB). The tiny density fluctuations leave their
imprints in the CMB and are seen as the temperature fluctuations in Figure 3.
• Skipping ahead again to t ∼ 10^8 years, the energy has now dropped to about 10^-2 eV.
The temperature is T ∼ 50 K, which is well below room temperature. By this time the
tiny density perturbations in the early Universe (as seen in the CMB) have provided
the initial gravitational seeds to allow for the stars to form galaxies. This begins the
formation of large-scale structure.
• At t ∼ 10^9 years, the energy is now ∼ 10^-3 eV, and the temperature has fallen to
T ∼ 5 K. Our solar system forms, giving rise to a very pretty blue-green planet called
Earth.
• At t ≈ 13.73 × 10^9 years, the background energy is on the meV scale, and the temperature has dropped to about 2.725 Kelvins. We’ve reached today!
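The times and temperatures in the early entries of this list are roughly consistent with the radiation-era scaling T ∝ t^(-1/2). The sketch below uses the common rough normalization T ≈ 1 MeV at t ≈ 1 second, which is an assumed rule of thumb rather than a number stated in the text, and it reproduces the entries above only to within an order of magnitude:

```python
# Rough radiation-era cooling law: T ~ t^(-1/2), normalized (by assumption)
# to T ~ 1 MeV at t ~ 1 second. Compare with the timeline entries above.
def T_MeV(t_seconds):
    """Rough radiation-era temperature in MeV at time t (in seconds)."""
    return 1.0 * t_seconds ** -0.5

for t in [1e-10, 1e-5, 1.0, 200.0]:
    print(f"t = {t:g} s  ->  T ~ {T_MeV(t):.3g} MeV")
```

The agreement is only at the order-of-magnitude level because the number of relativistic particle species (and hence the exact normalization) changes as the Universe cools.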
Based on this timeline, we see that we can understand the Universe all the way back to
t ∼ 10^-10 seconds after the Big Bang, based on experimentally-confirmed and well-known
laws of high-energy physics. Extending the time back to t ∼ 10^-35 seconds relies on much
more speculative theories, but we don’t expect there to be any big surprises; in particular,
effects from the unknown quantum theory of gravity should not be important. This era also
includes the inflationary regime that will occupy most of our time in later chapters. However,
going even further back towards the initial singularity stretches our understanding of the
laws of physics past its breaking point. Direct experimental evidence of the physics at these
energies is not coming from particle accelerators any time soon, and so our understanding
will have to come from clues in the astronomical observations. It is clear, however, that we
seem to have a pretty good picture of the evolution of the Universe. But now let’s look a bit
closer.
4 The Big Bang Isn’t Perfect!
The Big Bang picture seems remarkably good at first glance, offering us a very nice picture of
the Universe and its evolution. However, upon closer examination we find that the Big Bang
model doesn’t quite explain everything satisfactorily; some problems creep into the theory
which aren’t addressed in the standard Big Bang picture. It looks as though the Universe as
we see it today could not arise from generic initial conditions, but only very specific ones. To
answer these problems we will need to expand the Big Bang theory to include an era of not
just Universal expansion, but accelerated expansion. Let’s look at some of these problems
in a bit more detail.
4.1 Problems Associated With the Big Bang
4.1.1 Initial Singularity.
The most obvious problem associated with the Big Bang is that of the initial singularity that
gave rise to the Big Bang in the first place. The theory does not address what it was that
“banged,” or what caused it to “bang.” All the known laws of physics break down at the
instant of creation; all we can do is calculate what happens at subsequent times. It is hoped
that a complete theory of quantum gravity would solve at least some of the problems with
the initial singularity, but this will not be clear until we have that complete theory. So far,
although there have been suggestions as to how to think about the Big Bang, the solution
to this problem is still unknown.
4.1.2 Flatness Problem.
As we have discussed, the Universe is spatially flat. One might ask, why does it have to
be so flat? The initial singularity had tremendous energy, and could have led to a large
spatial curvature. Could the Universe have started out with a spatial curvature, but evolved
to flatness as it expands, so that it looks flat now? Unfortunately, this explanation is
problematic. We will see later that, if the Universe started out exactly flat, then it remains
so forever. But, if the Universe expands dominated by either matter or radiation then it
actually evolves away from flatness as time goes on. This means that the Universe should
be flatter in the past than it is now; in particular it should have started off extremely close
to flatness in the instants after the Big Bang. So, the question now becomes “why does the
Universe start out so flat?” The small Cosmological Constant doesn’t help here, since its
contribution is subdominant to matter, and especially radiation, in the early Universe. The
flatness problem is sometimes phrased as an age problem, since the spatial curvature is tied
to the fate of the Universe: why is the Universe so old?
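The required fine-tuning can be illustrated numerically. In a purely radiation-dominated universe, |Ω − 1| grows linearly with t, so a modest bound on flatness today translates into an extraordinarily tight bound early on. The bound |Ω − 1| < 0.01 today and the pure-radiation scaling used below are simplifying assumptions for the sketch, not numbers from this text (a careful treatment would account for the matter- and Λ-dominated eras):

```python
# Flatness fine-tuning sketch: in a radiation-dominated universe
# |Omega - 1| grows proportionally to t, so run today's bound backwards.
omega_dev_today = 0.01        # assumed observational bound on |Omega - 1| today
t_today = 4.3e17              # age of the Universe in seconds (~13.7 Gyr)

for label, t in [("BBN era,    t ~ 1 s    ", 1.0),
                 ("Planck era, t ~ 1e-43 s", 1e-43)]:
    bound = omega_dev_today * (t / t_today)   # |Omega - 1| ∝ t (radiation era)
    print(f"{label}: |Omega - 1| < {bound:.1g}")
```

However the details are arranged, the conclusion is the same: the early Universe must have been flat to dozens of decimal places for it to look even approximately flat today.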
4.1.3 Horizon Problem.
Consider again the cosmic microwave background (CMB) seen in the whole-sky map in Figure
3. The temperature of 2.725 K throughout the sky is remarkably uniform; the deviations
from this average are of order δT /T ∼ 10^-5. This poses a very serious problem: why
are distant parts of space so uniform? The Universe started out very hot and cooled as
it expanded. But, as it expanded different parts of the Universe moved away from each
other, eventually moving so far apart that they could not communicate with each other via
exchanging light signals (they are separated by horizons). We say that they have fallen out
of causal contact with each other. If this is the case, then why should they cool off in exactly
the same way? The maximum angular separation between two causally connected points on
the CMB turns out to be less than 2 degrees, which leads to some 10^84 causally disconnected regions just in
the first instant of time! So, by what means did two points at different ends of the Universe
come to thermal equilibrium?
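Even counting patches at last scattering alone already illustrates the problem. If causally connected regions subtend at most about 2 degrees on the sky (the figure quoted above), the full sky divides into thousands of patches that could never have exchanged signals; running the clock back toward the first instant multiplies this count enormously. A sketch of the patch count at last scattering:

```python
# Count causally disconnected patches on the CMB sky, assuming each
# causally connected region subtends ~2 degrees (angular radius ~1 degree).
import math

theta = math.radians(1.0)               # angular radius of one patch, radians
patch_solid_angle = math.pi * theta**2  # small-angle solid angle, steradians
n_patches = 4.0 * math.pi / patch_solid_angle   # full sky is 4*pi steradians

print(f"~{n_patches:.2g} causally disconnected patches on the CMB sky")
```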
4.1.4 Initial Inhomogeneities.
An additional problem has to do with the origin of structure. One needs to figure out a way
of arranging for the initial density of the Universe to be extremely uniform, as is required
by the CMB (the horizon problem). But then, how does one arrange at the same time
for the small deviations from homogeneity? These deviations provide the initial seeds for
galaxy formation, since perfect homogeneity would form no overdense regions to initiate
gravitational collapse.
4.2 How To Fix Up The Problems?
We can see that there are quite a few problems associated with the Big Bang theory. The
severity of these problems actually varies, and depends on whether one believes that the initial conditions of the Universe should be contained in the theory describing it. For example,
the flatness problem is not a problem in the strictest sense; the Universe could simply have
started out flat from the beginning by some unknown (but not unknowable) means. While
it is not ideal to be ignorant of the reason for flatness, there is no reason why it should not
be so, either.
Similarly, the inhomogeneity problem may only be a technical problem. Once the average
homogeneity is explained, one “simply” needs to explain the small deviations from that average.
This may not be that difficult, in general. As we will see, quantum fluctuations, acting as small
perturbations about the background, may provide the answer. Already, we have some hope
for this problem.
Turning to the horizon problem, however, we do find a genuine problem. Because no
information can travel faster than light, it seems impossible that distant points of the Universe should be similar to such an exacting extent. The solution might seem to require some
acausal, faster-than-light, communication. But such an effect has never been seen, and is
forbidden by Einstein’s theory. If the Universe had existed forever, then it would have an
infinite amount of time to thermalize. But, the expansion of the Universe suggests a finite
lifetime, and so there hasn’t been enough time to reach thermal equilibrium. Here we can’t
appeal to some special initial conditions to provide the answer; we have a real paradox.
So, although the Big Bang seems to be successful in predicting the properties of the
observable Universe, it is not without its faults. We want to save the virtues of the Big Bang
theory, while at the same time fixing up the shortcomings. We don’t want to throw out the
theory, but rather tweak it a little bit. This is precisely what the inflationary theory does.
Inflation says that the Universe arose from a tiny region of space which expanded superluminally (inflated) to vast scales. During this inflationary period, the Universe is actually
pushed towards flatness, irrespective of the initial spatial curvature. Furthermore, if the
Universe started out from a tiny region, then regions that originally started off in causal
contact are blown up. This means that they could evolve in the same way, addressing the
horizon problem. The inhomogeneities in the tiny region are pushed out of the observable
Universe, leading to an overall homogeneous Universe, while tiny quantum fluctuations are
blown up to huge scales, providing the density perturbations.
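The way inflation solves the flatness problem can be quantified: during inflation the scale factor grows by a factor e^N (N "e-folds"), and |Ω − 1| ∝ 1/a² is therefore suppressed by e^(−2N). The value N ≈ 60 used below is a typical figure from the inflation literature, not a number from this text:

```python
# Inflationary suppression of spatial curvature: |Omega - 1| ~ 1/a^2,
# and the scale factor grows as a ~ e^N during inflation.
import math

N = 60.0                           # assumed number of e-folds of inflation
suppression = math.exp(-2.0 * N)   # factor by which |Omega - 1| shrinks

print(f"N = {N:.0f} e-folds suppress |Omega - 1| by ~{suppression:.1g}")
```

A suppression of dozens of orders of magnitude is precisely what is needed to undo the fine-tuning highlighted by the flatness problem, whatever the curvature was before inflation.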
The previous discussion of inflation has been very quick and at a completely qualitative
level, as has the rest of this chapter. We will spend most of the rest of our time fleshing
out this basic description of the Universe. Now that we’ve had a nice tour of the Universe,
we need to get more quantitative to understand it. It is to this quantitative picture that we
now turn.
References
[1] The Hubble Ultra Deep image can be found on NASA’s website, http://www.nasa.gov/images/content/56539main_closer.jpg.
[2] The Sloan Digital Sky Survey Galaxy Map image can be found on the SDSS site,
http://www.sdss.org/.
[3] The WMAP image can be found on NASA’s website, http://www.nasa.gov/topics/
universe/features/wmap_five.html.
[4] The Bullet Cluster image can be found on the Chandra X-Ray Observatory website,
http://chandra.harvard.edu/photo/2006/1e0657/.
[5] E. Hubble, “A relation between distance and radial velocity among extra-galactic nebulae,” Proc. Nat. Acad. Sci. 15, 168 (1929).
[6] A. G. Riess et al. [Supernova Search Team Collaboration], “Observational Evidence
from Supernovae for an Accelerating Universe and a Cosmological Constant,” Astron.
J. 116, 1009 (1998) [arXiv:astro-ph/9805201].
[7] S. Perlmutter et al. [Supernova Cosmology Project Collaboration], “Measurements of Ω and Λ from 42 High-Redshift Supernovae,” Astrophys. J. 517, 565 (1999) [arXiv:astro-ph/9812133].
[8] The Supernova image can be found on the European Homepage for the NASA/ESA
Hubble Space Telescope website, http://www.spacetelescope.org/images/html/
opo9919i.html.