Module 4.1 - The Scale of the Universe
[slide 1] We now turn to the measurement of cosmological parameters. The first of these is the Hubble constant, or Hubble parameter.
[slide 2] It sets both the spatial and temporal scales for the whole universe. In a given cosmological model, specified by the values of the various omegas, and little w as well, all distances and all times scale with the inverse of the Hubble constant, and thus its importance.
So the inverse of the Hubble constant is the characteristic time unit of cosmology, the Hubble time. Multiplied by the speed of light, it gives the Hubble length, which is commensurate with the size of the observable universe.
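To make those scales concrete, here is a minimal Python sketch that converts a Hubble constant into the corresponding Hubble time and Hubble length; the value of H0 used below is just an illustrative assumption.
```python
# Hubble time (1/H0) and Hubble length (c/H0) for an illustrative H0.
H0 = 70.0                      # km/s/Mpc (assumed, for illustration)
KM_PER_MPC = 3.0857e19         # kilometers in a megaparsec
C_KM_S = 2.998e5               # speed of light, km/s
SEC_PER_GYR = 3.156e16         # seconds in a gigayear

H0_per_sec = H0 / KM_PER_MPC                     # H0 in 1/s
hubble_time_gyr = 1.0 / H0_per_sec / SEC_PER_GYR # ~14 Gyr
hubble_length_mpc = C_KM_S / H0                  # ~4300 Mpc

print(hubble_time_gyr, hubble_length_mpc)
```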
Note that the Hubble constant is independent of all other cosmological parameters. We have this neat separation between the Hubble constant, which sets the scale, and all the other parameters, which describe the qualitative behavior of the universe. So distances to anything, galaxies, quasars, anything in cosmology, scale with the Hubble constant, and thus its importance.
Moreover, physical parameters of objects like galaxies, such as their total luminosities, masses, or physical sizes, all scale with the distance to the appropriate power. And in order to understand these objects better, how they form and how they evolve, we need distances to them.
[slide 3] However, measuring distances in cosmology is a very difficult thing to do, because galaxies are very far away. In fact, the only distances in astronomy that are measured cleanly, without recourse to any models or statistics, are trigonometric parallaxes to stars. Everything else requires some sort of physical modeling, assumptions, or statistics.
Thus we have the concept of the distance ladder. This means that we first measure distances to objects where we are sure how to do it, such as nearby stars. Then we use those to calibrate distances to objects further away, say star clusters, and so on. So we calibrate the distance indicators relative to each other, and this is what is known as the distance ladder.
Locally, these will be various stellar indicators, say based on pulsating stars or star clusters, then properties of nearby galaxies, using their scaling relations and so on, until we reach what is called the pure Hubble flow, where only the recessional velocities matter. That is where the expansion of the universe, unperturbed by local non-uniformities, really sets in.
Now, the age of the universe can actually be estimated independently of the Hubble constant. The lookback times to individual objects cannot, but the total age of the universe can. We can, for example, estimate the ages of stars, clusters, or chemical elements, and that gives a lower limit to the age of the universe, independent of any distance measurements.
[slide 4] This is a hard thing to do, and thus measurements of the Hubble constant have a somewhat disreputable history. Hubble's own estimate was an order of magnitude off from what we now know to be more or less the correct value. And throughout this history, people always thought they knew it to about 10% accuracy, even though its value changed by a whole order of magnitude. The reason is that they did not properly account for the errors of measurement, and especially for the systematic errors.
So, for example, when Hubble first measured the value of the Hubble constant, one over that value was a couple of billion years, and already then people knew that there are rocks on planet Earth that are three or four billion years old. So planet Earth was older than the universe, and that was a problem.
[slide 5] Ever since then, the value of the Hubble constant was revised, usually downward. The first major revision was due to Walter Baade, who recognized that there are really two very different kinds of pulsating stars, one of which had been confused with the other, and he came up with the concept of stellar populations. That immediately halved the value of the Hubble constant.
Then improved measurements pushed it further down, and in the 1970s it got down to a range of roughly 50 to 100 kilometers per second per megaparsec.
[slide 6] There were two prominent schools of thought on this. One, led by Allan Sandage, a disciple of Hubble, pushed for the lower values, around 50 kilometers per second per megaparsec. The other, led by Gerard de Vaucouleurs and his collaborators in Texas and elsewhere, pushed for twice that, about 100 kilometers per second per megaparsec. The two just could not come to an agreement. The reasons were that they made different assumptions and used different calibrations, and things went on like that until the 1980s.
[slide 7] But even in modern times the spread continued. People started being more careful about the error bars, and yet the actual spread of quoted values in the literature was always larger than the quoted error bars. That persisted until roughly the 1990s.
[slide 8] There are two kinds of methods to measure distances. There are methods that ostensibly give absolute distances to particular kinds of objects, like trigonometric parallax for stars, or physical models used to derive distances to supernovae or clusters of galaxies.
Parallax is the only safe one. The geometry is well understood, and there are no problems. However, we currently cannot measure parallaxes of stars more than about a kiloparsec away, well within our own galaxy.
There is the so-called moving cluster method, but that relies on statistics and certain assumptions. There is the so-called Baade-Wesselink method, which uses pulsating stars; that too makes assumptions about stellar atmospheres and the like.
Very similar to it is the so-called expanding photosphere method for supernovae. There one also has to make some non-trivial assumptions about the physics of what is really an exploding star.
Pushing further into the Hubble flow, there are two important methods. One is based on the so-called Sunyaev-Zeldovich effect, which measures distances to clusters of galaxies using observations of the cosmic microwave background and of the X-ray emission from clusters. We will talk about those in more detail.
The other one uses gravitational lens time delays. We will also address that one in due time.
The other kind of distance indicators are the secondary ones, which require calibration from somewhere else. They can be used to measure relative distances to objects that may be further away, but their zero point has to be obtained from something else.
And there are many of those, using both stars and galaxies, and we will address them all.
[slide 9] So here is, schematically, what the distance ladder looks like. Each method operates in a certain range of plausible distances, and hopefully it overlaps, at least in part, with another method which can then reach deeper, and so on. That is what we call the distance ladder.
Nearest to us, trigonometric parallaxes can be used to measure distances. Proper motions of stars can also be used, but in a statistical sense.
Then we can measure distances to some of the nearby star clusters, and use properties of star clusters to measure distances to even more distant ones. In particular, this leads to measurements of distances to pulsating stars, the Cepheids and the so-called RR Lyrae stars, which are a key step in the measurement of the Hubble constant. But notice that several steps precede this.
With those, we can measure distances to a number of nearby galaxies, and then we can calibrate distance indicator relations for the galaxies themselves. Those are correlations between two quantities, one of which does not depend on distance, say the rotational speed of a galaxy, and the other of which does, like luminosity.
These distance indicator relations can then be used to measure distances to galaxies far away, but they have to be calibrated locally, with something else.
Then supernovae come in, and those are currently the favorite way of measuring distances in cosmology, from near to far. We will cover them in some detail.
Once we go beyond the regime where galaxy distance indicators are easy to apply, two other methods come in on truly cosmological scales: the Sunyaev-Zeldovich effect and gravitational lensing. In principle they do not depend on the distance ladder; however, they are model-dependent, and therefore the previous calibrations are important as a check.
[slide 10] So, here is a schematic flow chart. It is not there to confuse you; it is just to show you a little bit how complex the network of measurements has to be, with mutual checks, to find out how far things are in the universe. It starts with nearby stars and goes all the way to supernovae and truly cosmological scales.
[slide 11] Well, that's it for now. Next time we will talk about stellar distance indicators.
Module 4.2 - Stellar Distance Indicators
[slide 1] Well, let us start climbing the distance ladder. First, we will talk about methods by which we can find distances to nearby stars or star clusters.
[slide 2] The trigonometric parallax is the most basic measurement of distance in astronomy, and hopefully all of you are well familiar with it. It is pure geometry, and there is nothing uncertain about it. By measuring the annual apparent 'sloshing' of a star on the sky, we can figure out how far away it is, if we know the distance between the Earth and the Sun, which we do know with great precision.
So the method by itself is safe; the problem is that these are very small angles, and the current state of the art is that we can measure distances using parallaxes to about one kiloparsec out, more or less, which is well within our own galaxy, never mind external galaxies.
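As a minimal sketch of the geometry: with the parallax angle in arcseconds and the Earth-Sun distance as the baseline, the distance in parsecs is simply the inverse of the parallax. The values below are illustrative.
```python
# Trigonometric parallax: d [pc] = 1 / p [arcsec], by the definition of the parsec.
def parallax_distance_pc(parallax_arcsec):
    return 1.0 / parallax_arcsec

print(parallax_distance_pc(0.772))   # ~1.3 pc, roughly Proxima Centauri's parallax
print(parallax_distance_pc(0.001))   # a 1 milliarcsecond parallax corresponds to 1 kpc
```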
[slide 3] The next one is the so-called moving cluster method. This is a statistical method. Stars come in clusters, clusters move relative to the solar system, and stars have internal motions within the cluster itself.
[slide 4] Since, ostensibly, all cluster stars are moving in roughly the same direction, if we look from afar we will see them converging toward some distant point, which is the direction in which they are going. By measuring the spread of those angles, we can figure out the angle of their actual velocity vector with respect to the line of sight.
[slide 3] We also measure the proper motions of the stars, in arcseconds per unit time on the sky, as well as their radial velocities. We can assume that random motions within the cluster are the same in both the radial and tangential directions. So, by knowing the angle between the radial and tangential components, we can figure out the distance to the cluster.
Note that this makes some assumptions about the internal motions of the cluster, and it is basically a statistical technique. So the more stars you have the better, but clusters have a finite number of stars.
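Here is a minimal sketch of the underlying arithmetic, with made-up numbers: the convergent-point angle converts the measured radial velocity into a tangential velocity, and comparing that with the proper motion gives the distance.
```python
import numpy as np

# Moving cluster method (illustrative numbers, not a real cluster):
v_radial = 25.0        # mean radial velocity of the cluster, km/s
theta_deg = 30.0       # angle between the cluster and its convergent point, degrees
mu = 0.1               # mean proper motion, arcsec/yr

v_tangential = v_radial * np.tan(np.radians(theta_deg))   # km/s
# 4.74 km/s corresponds to 1 AU/yr, so v_t = 4.74 * mu["/yr] * d[pc]
d_pc = v_tangential / (4.74 * mu)
print(d_pc)            # ~30 pc for these numbers
```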
[slide 5] By measuring distances to a number of stars using either of these two techniques, we can calibrate the Hertzsprung-Russell, or color-magnitude, diagram for stars. Hopefully this is something you know about very well. It is a plot of stellar luminosity versus temperature, where the temperature is often measured through color. The important thing to note here is that the inferred stellar luminosities are distance dependent, according to the inverse square law applied to the measured apparent brightness. Temperature, measured from spectra or from colors, is not: a star will have exactly the same temperature no matter how far away it is.
So, once we calibrate the main sequence of the H-R diagram, if we can measure stellar colors or temperatures, then we can read off their absolute magnitudes. From the absolute and apparent magnitudes, we can figure out how far away they are. In a given cluster you may have thousands of stars, and therefore you can determine the distance to the cluster itself very precisely.
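The last step is just the distance modulus. Here is a minimal sketch, with hypothetical magnitudes, of reading off a cluster distance once the calibrated main sequence gives the absolute magnitude:
```python
# Distance from apparent magnitude m and absolute magnitude M (read off the
# calibrated main sequence): m - M = 5 log10(d) - 5, with d in parsecs.
def distance_pc(m_apparent, M_absolute):
    return 10.0 ** ((m_apparent - M_absolute + 5.0) / 5.0)

# Hypothetical cluster star: apparent V = 10.4, main-sequence M_V = +4.4 for its color
print(distance_pc(10.4, 4.4))    # ~158 pc
```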
Now, this works fairly well for young star clusters in the galactic disc. However, no globular cluster is as yet close enough to measure a parallax to it, so something else has to be done for those. What we do there is use field stars of the same type as those that live in globular clusters, the Population II stars.
[slide 6] The H-R diagram measurement is a collective measurement. But not all stars are created equal. In the temperature-luminosity plane, there is a strip within which stars are unstable to pulsation, the so-called instability strip.
Among the luminous young stars, those will be the Cepheids, or Delta Cephei stars. On the horizontal branch of globular clusters, those will be the RR Lyrae stars. There are many different kinds of pulsating stars, but those are the principal ones, and indeed the physics of Cepheids and RR Lyrae is probably the best understood of all.
It turns out that there are correlations between the observed period, which again does not depend on distance, and the luminosities of these stars, which do. Those are empirical relations, and they can be calibrated if you have distances to a number of these pulsating stars from one of the previous techniques.
[slide 7] At first, people did not know that there are different kinds of pulsating stars; they thought they were all of one kind. The first such star that Hubble found in Andromeda was a Cepheid, and it produced the first distance to another galaxy. But people were confusing RR Lyrae, which are much dimmer than Cepheids, with the Cepheids themselves, and that confusion led to an error of a factor of two in the distance scale. Once Walter Baade understood that there really are two different period-luminosity relations, that error was corrected.
[slide 8] Let's talk about Cepheids in more detail, because they remain among the most important distance indicators altogether.
They are young, luminous stars, and therefore they are found in star-forming discs and star-forming regions; Delta Cephei itself is a relatively bright one within our own galaxy. It was Henrietta Leavitt, working at the Harvard College Observatory, who recognized that there is a correlation between period and luminosity, while studying stars in the Magellanic Clouds, about 50 kiloparsecs away. Since all of those stars are at roughly the same distance, apparent magnitude correlates with period, and by comparing that with nearby Cepheids one can find out how far the Magellanic Clouds are.
Cepheids are important because they are bright, so we can see them far away; we can find them in galaxies out to maybe 25 megaparsecs or so. So we can calibrate distances to a number of nearby galaxies using Cepheids, which is not easy but is possible, and then we can use the distances to those galaxies to calibrate other relations. It isn't all perfectly safe, however.
The pulsation of a star must depend on its internal composition and opacity, and therefore on metallicity, and the exact effects of metallicity are not yet firmly established. Moreover, there are external problems such as extinction: Cepheids are found in star-forming regions, which also tend to be dusty, so one has to make a correction for that. In very distant galaxies they may be blended with other stars, giving us the wrong luminosity. Still, Cepheids remain keystones of the distance scale, and that also applies to the measurements with the Hubble Space Telescope.
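Here is a minimal sketch of how such a relation is used in practice. The slope and zero point below are placeholders of roughly the right order, not an adopted calibration; a real application would use the calibrated relation for the band in question and correct for extinction.
```python
import numpy as np

def cepheid_absolute_mag(period_days, slope=-2.8, zero_point=-4.0):
    # Schematic period-luminosity relation: M_V = zero_point + slope * (log10 P - 1).
    # Coefficients are illustrative only.
    return zero_point + slope * (np.log10(period_days) - 1.0)

m_V = 24.5                  # mean apparent magnitude of a Cepheid in a distant galaxy (hypothetical)
P = 30.0                    # pulsation period, days
M_V = cepheid_absolute_mag(P)
d_Mpc = 10 ** ((m_V - M_V + 5.0) / 5.0) / 1.0e6
print(d_Mpc)                # distance in megaparsecs (~9 for these numbers)
```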
[slide 9] Here are some examples of Cepheid period-luminosity relations in the Magellanic Clouds, in different filters. The scatter is biggest in the blue band and smallest in the near-infrared; however, the amplitudes are biggest in the blue and smallest in the infrared. It is a good idea to observe them in different bands so that the effects of extinction can be taken out.
[slide 10] Until the Hipparcos satellite flew, we did not have a parallax calibration of Cepheids. Distances to Cepheids until then were based on the distances to the clusters in which they live, and distances to those clusters were by and large determined using the moving cluster method. With Hipparcos, however, a handful of Cepheids were within reach, and these are the actual calibration relations for distances to Cepheids. As you can see, they are fairly noisy, but in any case, for the first time, they gave us an absolute calibration of the period-luminosity relations for Cepheids. This is going to get a lot better with the Gaia satellite, an astrometry mission which will measure parallaxes for a much larger number of pulsating stars with much greater precision.
[slide 11] The other important kind of pulsating stars are the RR Lyrae, also named after the prototype star that was first recognized. They are Population II stars; they are not on the main sequence but on the horizontal branch, which is the helium-burning 'main sequence', and they are found in old stellar populations, such as globular clusters.
They do have the advantage that their periods are short, so it is much easier to observe a full pulsation light curve for an RR Lyrae star than for a Cepheid. Because they are dimmer, they can really be used only within the Local Group of galaxies, but that is still useful, and it provides a welcome check on the distances measured using Cepheids.
[slide 12] Now, let's take a closer look at what happens when a star pulsates. Its photosphere expands, but the temperature changes as well. So the radius changes, the temperature changes, and therefore the luminosity must change. If we observe the star spectroscopically, we can observe the velocity of the photosphere as it comes towards us and goes away from us. So we can measure stellar temperatures using colors or spectroscopy, we can measure the velocity of the pulsating photosphere using spectroscopy, and we can measure the changes in the apparent brightness.
[slide 13] This forms the basis of the so-called Baade-Wesselink method. If the pulsating stars were perfect black bodies, this would be an excellent, purely physics-based method to determine distances to them. Unfortunately, real stars are not perfect black bodies, but they are not too far from it either. At any given time, the flux from a star is set by its luminosity, which is itself given by the Stefan-Boltzmann formula: it is proportional to the temperature to the fourth power and to the surface area of the star, which is proportional to the square of its radius. And the flux is inversely proportional to the square of the distance.
We can measure those quantities throughout the pulsation period. Temperatures are directly observable from photometry, and so are the fluxes. The only remaining question is, can we find the radius? We can, in a way, because if we integrate the motion of the photosphere as traced by the radial velocity, we can find out how much the radius has been changing. So we have three equations in three unknowns, and we can solve for the distance. Therefore, we can obtain distances purely from measurements, plus the assumption about the blackbody nature of stellar photospheres.
The problem is, stars are not perfect black bodies, and some modelling of stellar photospheres has to be done in order to actually make the method work. So there is model dependence, and that is where the uncertainties come from.
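Here is a minimal sketch of the idea under the perfect-blackbody assumption, using synthetic data (the star, its distance, and its pulsation are all made up): the flux and temperature give the angular radius R/d at each epoch, the integrated radial velocity gives the change in the physical radius R, and comparing the two yields the distance.
```python
import numpy as np

SIGMA = 5.670e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4
d_true = 3.086e18                    # "true" distance used only to fake the data (~100 pc), m

# Synthetic pulsation: radius and temperature over a 5-day cycle
t = np.linspace(0.0, 5.0 * 86400.0, 400)                          # time, s
R = 3.0e10 + 4.0e9 * np.sin(2.0 * np.pi * t / (5.0 * 86400.0))    # photospheric radius, m
T = 6000.0 + 400.0 * np.cos(2.0 * np.pi * t / (5.0 * 86400.0))    # temperature, K

# "Observables": bolometric flux, temperature, and photospheric velocity
flux = SIGMA * T**4 * (R / d_true) ** 2                           # W m^-2
v = np.gradient(R, t)                                             # m/s; from line shifts in practice

# Step 1: angular radius theta = R/d from flux and temperature (blackbody assumption)
theta = np.sqrt(flux / SIGMA) / T**2

# Step 2: change of the physical radius from integrating the velocity curve
dR = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))

# Step 3: theta = (R0 + dR) / d, so the slope of theta versus dR is 1/d
slope = np.polyfit(dR, theta, 1)[0]
print(1.0 / slope / 3.086e16)        # recovered distance in parsecs (~100)
```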
[slide 14] There are a couple more statistical methods that are based on stellar indicators. Globular star clusters themselves have a distribution of luminosities. It turns out that their distribution function, the luminosity function of globular clusters, seems to be universal among galaxies, for reasons that are not really well understood at all. Actually, you can think of many reasons why this should not be the case, but empirically they do seem to be very similar.
Thus, if we can calibrate the luminosity function of globular clusters in the Milky Way using local distance indicators, then we can apply it to the luminosity functions of globular clusters in other galaxies. A good thing about this is that globular clusters are much brighter than most stars, and so they are easier to find and easier to measure.
One problem is that the numbers of globular clusters vary widely among galaxies. Elliptical galaxies and early-type spirals have the most; late-type spirals hardly have any, and therefore there will be a larger statistical uncertainty for those galaxies.
[slide 15] A similar method uses the luminosity function of planetary nebulae. As you recall, planetary nebulae represent stellar envelopes that have been shed by a star following its horizontal branch phase. They are illuminated and ionized by the incandescent core that remains, and most of their light emerges in recombination emission lines. A very prominent line among those is the [O III] line of ionized oxygen at 5007 angstroms. We can measure the luminosities of those lines alone, and then form a luminosity function, that is, the distribution of luminosities in that emission line alone, for the planetary nebulae.
That, too, turns out to be more or less the same for nearby galaxies. Ostensibly, this reflects the way in which stars evolve, but there isn't a solid physical basis for it; it is an empirical relation, and again it is statistical. It can work out to the distance of the Virgo cluster, which is not so bad, but not beyond.
[slide 16] And finally, there is the tip of the red giant branch. These stars cannot get more luminous than a certain amount. This is related to the so-called Eddington luminosity, which hopefully you have heard about, and which we can address later. Empirically, they don't seem to get brighter than a certain limit. So if we can observe stars in other nearby galaxies, like Andromeda, and find out where the luminosities of the most luminous ones stop, then that threshold can be used as a standard candle.
The advantage is, of course, that these stars are bright. The disadvantage is that it is not a terribly well-defined indicator: there aren't very many of these stars, so numerical fluctuations can affect the result.
[slide 17] That is it for the stellar distance indicators. Next time we will talk about the so-called distance indicator relations for galaxies.
Module 4.3 - Distance Indicator Relations
[slide 1] Hello. Last time, we saw how we can use various relationships to measure distances to stars or star clusters, chief among them the relations between period and luminosity for pulsating variables like Cepheids, but also the fitting of the main sequence for star clusters.
[slide 2] These are examples of a more general idea, the distance indicator relations. So let's just recap what that really means. In general, we look for a correlation between some distance-independent quantity, like the color or temperature of stars, or the period for pulsating variables, and something that does depend on distance, like stellar luminosity. If we can calibrate that relation, then we can see how stars in other clusters, at different distances, fit on it. When we compare these correlations in apparent quantities, say apparent magnitudes instead of absolute magnitudes, clusters at different distances will be systematically shifted from each other, and from the magnitude of the shift we can find out the relative distance between them.
Thus, we can use these distance indicator relations as a measure of relative distance. If we can actually calibrate them in an absolute sense, then of course we can get the actual distances as well. This is how Cepheids work. So now the question is, can we do the same for galaxies? And the answer is yes.
[slide 3] The first relation that we will consider is the so-called surface brightness fluctuations, and conceptually it is fairly simple. Suppose we are looking at some galaxy with a detector that has square pixels, as they typically do, and each of these pixels covers a certain number of stars. Now, if we move the galaxy twice as far away, there will be four times as many stars in each pixel, and the image will look smoother.
[slide 4] The mathematical reasoning behind this is fairly simple. For elliptical galaxies and bulges, most of the light comes from luminous red giants, and it is the fluctuations in the numbers of those red giants that determine what is going on here.
So, if the stars that contribute most of the light have an average flux f, and the number of them in a pixel is n, then the total flux from the pixel will be n*f, and its fluctuation will be governed by the square root of the number of stars; it is simple Poisson statistics. If we move the galaxy further away, the number of stars per pixel will grow as the distance squared, but the flux of each star will decline as the distance squared. Therefore the relative fluctuation, the RMS or sigma divided by the mean, scales inversely with the first power of the distance. This is why a galaxy that is twice as far away appears twice as smooth. So now, if we actually knew the real luminosity of an average star that corresponds to that average flux f, then we could determine the distance.
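A minimal Monte Carlo sketch of this scaling, with purely illustrative numbers: doubling the distance quadruples the mean number of stars per pixel, and the relative pixel-to-pixel fluctuation drops by a factor of two.
```python
import numpy as np

rng = np.random.default_rng(0)

def relative_fluctuation(mean_stars_per_pixel, n_pixels=100_000):
    # Poisson-distributed number of stars per pixel, each contributing unit flux.
    n = rng.poisson(mean_stars_per_pixel, n_pixels)
    return n.std() / n.mean()

near = relative_fluctuation(100.0)    # fiducial galaxy
far = relative_fluctuation(400.0)     # same galaxy at twice the distance
print(near / far)                     # ~2: twice as far looks twice as smooth
```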
Roughly speaking, the absolute magnitude M corresponding to that mean flux f is the one at the top of the red giant branch, and we can try to calibrate that using the Andromeda galaxy, to which we have measured the distance using other techniques. That gives the calibrating relationship shown here. Note that this is a purely empirically calibrated relationship, using a galaxy to which we know the distance. However, the coefficients of this correlation are likely to depend on things like the mean metallicity of the stars in a given population, and so on.
We also have to take care of things like removing the effects of extinction, foreground stars in our galaxy, background galaxies behind the galaxy we are studying, and so on. Nevertheless, this is a very powerful method, and it can be used to measure distances out to about 100 megaparsecs using the Hubble Space Telescope.
[slide 5] Here is a simulation of how that works. The two columns of pictures correspond to two fictitious galaxies, one of which is twice as far away as the other. The top panels schematically show where the stars would be relative to the pixel grid. If we average the fluxes within the pixels, the grey scale images in the second row show what such a picture might look like: we see the grainy structure, which is indeed what is observed. Finally, we convolve this with the atmospheric seeing, because we are not observing in a vacuum; this last step is unnecessary if observing with a space telescope. The last row shows the resulting simulated images of the two galaxies at the two different distances from us. Obviously, the one that is further away is much smoother.
[slide 6] 100 megaparsecs is a good distance, but we would like to push even further, into what is called the pure Hubble flow. Remember, Hubble's law says that the recession velocity due to the expansion of the universe is proportional to the distance. However, in reality galaxies also move on their own. Since the time they formed, they have acquired so-called peculiar velocities because of mutual gravitational attraction. You can imagine that if there is a large concentration of mass in some direction from a galaxy, the galaxy will be accelerated towards it, and over time it will develop a certain velocity.
So the actual observed radial velocities of galaxies have these two components: the component due to the expansion of the universe, the Hubble flow, and the other one, which has to do with the dynamics of large scale structure.
Typically, today, these peculiar velocities of galaxies are a few hundred kilometers per second. For example, the Milky Way moves at about 600 kilometers per second relative to the cosmic microwave background, which is a good physical embodiment of the comoving coordinate grid, and galaxies in clusters can move even faster. In our case, we are a member of the Local Group, which is part of a larger concentration, the local supercluster, and our entire Local Group is falling towards it with a speed of a few hundred kilometers per second.
Since these large scale non-uniformities extend out to scales of maybe 100 megaparsecs or so, there will be peculiar velocity components up to about that scale. Going to larger scales, things pretty much average out. If the peculiar velocity is a few hundred kilometers per second and the Hubble flow velocity is a few thousand kilometers per second, that is roughly a 10% effect. If we go to tens of thousands of kilometers per second of recession velocity, then the peculiar velocities will only matter at the percent level.
This is why we would like to measure the Hubble constant at distances considerably larger than 100 megaparsecs. For that, we need some very luminous objects. Galaxies and supernovae are such objects, and both have been used to push the measurement of the Hubble constant all the way out into the Hubble flow.
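A quick numerical sketch of why this matters (H0 and the peculiar velocity below are just assumed round numbers): the fractional error that a typical peculiar velocity induces in a Hubble constant estimate falls off as one over the distance.
```python
H0 = 72.0        # km/s/Mpc, assumed for illustration
v_pec = 300.0    # typical peculiar velocity, km/s

for d_mpc in (30.0, 100.0, 300.0, 1000.0):
    v_hubble = H0 * d_mpc              # pure Hubble-flow velocity
    frac_error = v_pec / v_hubble      # fractional contamination of the measured velocity
    print(f"d = {d_mpc:6.0f} Mpc: cz ~ {v_hubble:8.0f} km/s, peculiar-velocity error ~ {100 * frac_error:4.1f}%")
```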
[slide 7] This is where galaxy scaling relations come in. Remember how this works from the beginning of this module: if we can find a quantity for galaxies that correlates with another one, the first being distance dependent, such as luminosity, and the other being distance independent, then we can do the same trick as we did with Cepheids.
There are two main examples of such distance indicator relations for galaxies: one for spiral galaxies, called the Tully-Fisher relation, and one for elliptical galaxies, called the fundamental plane. Because these distance indicator relations provide relative distances of galaxies from each other, they have to be calibrated locally. So, for example, the Tully-Fisher relation can be calibrated with distances to spirals that are known safely from Cepheids, whereas the fundamental plane relation can be calibrated with distances from surface brightness fluctuations, each of which, of course, has been calibrated with the lower rungs of the distance ladder.
One thing to note here is that we have some idea where these relations come from, but their exact physical origin and evolution are not yet well understood. So we have to beware: we are using a tool whose origins and possible variations are not perfectly well understood.
[slide 8] The Tully-Fisher relation for spirals is a very useful one. It connects the luminosity of a galaxy with its maximum rotational speed. When we talk about the properties of galaxies later on in the class, we will address these relations in much greater detail, but for now just take it for granted that this is indeed the case, and I will show you the plot.
The luminosities of spiral galaxies are well correlated with their rotational speeds, and the relation has a slope of roughly 4, namely the luminosity is proportional to the fourth power of the circular velocity; the exact power varies depending on the filter that is used, and so on.
We can also express this in terms of magnitudes, since -2.5 times the log of the luminosity gives the absolute magnitude. The rotation speed manifests itself as the width of the observed radio line of neutral hydrogen: as the hydrogen atoms move in the galaxy, they are Doppler shifted this way or that, and the faster the rotation speed, the more Doppler broadening there is.
So we can establish this correlation using nearby spiral galaxies with well measured distances, and then apply it to much more distant ones. In practice, the scatter of the Tully-Fisher relation is typically 10 to 20%; the very best that has been done is about 9%, and that is including all the measurement errors. It is actually a very good correlation when done well.
There are some problems. The light from spiral galaxies is affected by the extinction within them; there is a lot of dust that absorbs light at all wavelengths, but more so in the blue. And fluctuations in star formation can affect the luminosity of a galaxy on short time scales.
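Here is a minimal sketch of how such a relation would be applied. The slope of -10 magnitudes per dex corresponds to the fourth-power scaling mentioned above; the zero point is hypothetical and would in practice come from Cepheid-calibrated spirals.
```python
import numpy as np

def tully_fisher_absolute_mag(v_rot_kms, slope=-10.0, zero_point=-21.0):
    # Schematic Tully-Fisher relation: M = zero_point + slope * (log10 v_rot - 2.3).
    # Coefficients are illustrative only; real ones depend on the band.
    return zero_point + slope * (np.log10(v_rot_kms) - 2.3)

v_rot = 220.0       # rotation speed from the 21 cm line width, km/s (hypothetical galaxy)
m_app = 12.8        # apparent magnitude, corrected for extinction (hypothetical)
M = tully_fisher_absolute_mag(v_rot)
d_Mpc = 10 ** ((m_app - M + 5.0) / 5.0) / 1.0e6
print(M, d_Mpc)     # absolute magnitude and distance in Mpc
```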
[slide 9] So here is the actual Tully-Fisher relation for a set of nearby galaxies, in three different filters. There is a gradual change in slope, but that is okay; whatever filter we use, we can just calibrate to that slope. Also note that the scatter improves the further into the red we go, for the reasons I just mentioned: there is less effect of extinction, and less susceptibility to fluctuations due to the very luminous young stars.
[slide 10] The other important correlation is the so-called fundamental plane. This is actually a set of bivariate correlations, which connect one property of elliptical galaxies with a combination of two others. It is usually shown in this form, where a distance-dependent quantity, the radius determined in a certain consistent fashion, is correlated against a combination of the velocity dispersion, which again is the Doppler broadening of spectroscopic lines in the galaxy's spectrum, and its mean surface brightness, the surface brightness not depending on distance, at least in Euclidean space.
The observed scatter of the fundamental plane, in visible bands say, is about 10%. So it is as good as the Tully-Fisher relation, maybe even slightly better, and why the scatter is so small is an interesting problem in itself, which we will come back to when we talk about the properties of elliptical galaxies. Its zero point nowadays tends to be calibrated with surface brightness fluctuations.
[slide 11] The fundamental plane comes in another flavor, the so-called Dn-sigma relation, where Dn is the diameter of a galaxy at which its surface brightness reaches a certain value. This turns out to be a slightly oblique projection of the fundamental plane, but it works in basically the same way.
[slide 12] Next time we will talk about the use of supernovae as standard candles, a very popular method these days.
Module 4.4 - Supernova Standard Candles
[slide 1] Let's now turn to the use of supernovae as standard candles to measure distances in cosmology. This is today one of the most powerful tools in the observational cosmology arsenal for measuring distances on cosmological scales.
[slide 2] We keep using the term standard candle, and this is where it comes from: there actually used to be such a thing as a standard candle, and such standard candles ostensibly all had the same brightness. Now, a supernova is a lot brighter than a candle, but the same concept applies. If, somehow, a supernova or something else has a constant luminosity, then if we put it at different distances from us, its apparent brightness will decline according to the inverse square law, or rather the relativistic version thereof.
So if we can measure the relative brightness of a standard candle at two different distances, we can derive the ratio of their luminosity distances. Similarly, if we have objects of a standard size, just as a ruler is always the same size, and observe them at different distances from us, the ratio of their angular diameters will be the inverse of the ratio of their angular diameter distances. So we can use standard rulers to determine relative distances to the objects we are looking at.
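A minimal sketch of both ideas, with made-up numbers: for a standard candle, the ratio of luminosity distances follows from the flux ratio; for a standard ruler, the ratio of angular diameter distances follows from the angular size ratio.
```python
import numpy as np

# Standard candle: two identical candles with observed fluxes f1 and f2.
f1, f2 = 4.0e-14, 1.0e-14          # arbitrary flux units
d2_over_d1 = np.sqrt(f1 / f2)      # inverse square law: the fainter one is farther
print(d2_over_d1)                  # 2.0

# Standard ruler: two identical rulers with observed angular sizes theta1 and theta2.
theta1, theta2 = 2.0, 0.5          # arcseconds
dA2_over_dA1 = theta1 / theta2     # the smaller angle means the larger angular-diameter distance
print(dA2_over_dA1)                # 4.0
```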
[slide 3] So how do supernovae play into this? They are certainly very bright and can be seen very far away, which makes them useful as cosmological tools, and it turns out they can actually be standardized. Now, there are two different kinds of supernovae, and both can be used, although one of them is much more useful than the other. First, there are the so-called supernovae of Type Ia. They correspond to detonating white dwarf stars, which have accreted too much material for their own good, either from a companion or by merging with another white dwarf, which causes an instability and an explosion. They are pretty good standard candles already, and they can be made even better using a trick that I will show you: we use their light curves, brightness as a function of time, to standardize them.
The other type of supernovae are Type II. Those are very massive stars at the end of their lives, which explode because their cores collapse. They have a much larger spread of luminosities, which would not make them good standard candles. However, they can be used in a slightly different test, called the expanding photosphere method, which is similar to the Baade-Wesselink method that we mentioned earlier when talking about pulsating stars. Thus they can be used as an independent check on the measurements made with Type Ia supernovae.
[slide 4] Here, schematically, is the difference between the average light curves of the two kinds of supernovae. In both cases, the brightness increases as the star explodes, and then declines. But the shape of the light curve is different, because it is powered by slightly different physical mechanisms.
[slide 5] Supernova classification is actually a somewhat more intricate business. There are these two basic channels: either massive stars at the end of their lives that explode because they no longer sustain nuclear reactions in their cores, or white dwarfs that are pushed over their stability limit by additional accretion. They come in many different varieties in terms of spectra and so on, but there are still these two basic mechanisms, even though they can manifest themselves in a broader phenomenological sense.
[slide 6] So Type Ia supernovae are believed to come from detonating white dwarfs, and I will tell you why in a moment. A white dwarf is the remnant of a low mass star that has shed its envelope; it is at the end of its life, its core is just slowly cooling down, it is not making energy on its own, and there are no thermonuclear reactions in its core. But often it can be in a binary system, since the majority of stars are in binary systems, and if the binary is close enough and the companion is not yet a white dwarf, the gravitational field of the white dwarf can pull off the outer envelope of the companion, which accretes onto its surface. Once the mass of the white dwarf crosses the so-called Chandrasekhar limit, which is the highest mass a white dwarf can have and still be held stable against collapse by degeneracy pressure, the star explodes.
Another way of dumping more matter onto it is if there is a binary pair of white dwarfs. They lose energy by emitting gravitational waves and spiral in, and as the two stars merge, you get something that is twice as big as what is sustainable. The effect is the same.
[slide 7] So, we are pretty sure that Type Ia supernovae come from detonating white dwarfs, although this is not yet 100% certain. The reasons why we think this is the case are as follows. There are no hydrogen lines in the spectra of Type Ia supernovae, meaning the progenitor has shed all of its envelope, so it has to be an old stellar remnant, just like a white dwarf. There are also strong lines of silicon, which means that nuclear burning in the progenitor has to have reached at least that stage.
Second, they are seen in all kinds of galaxies, elliptical as well as spiral. Young massive stars, which are responsible for Type II explosions, are only found in star-forming regions, that is, in the discs of spiral galaxies, but not in old stellar populations like bulges or ellipticals. Type Ia supernovae are seen in all environments, so they have to come from some kind of old progenitor, and white dwarfs fit that bill.
Furthermore, they have a remarkably similar set of properties, unlike Type IIs, which suggests that there is a single progenitor mechanism. Their light curves are powered by the radioactive decay of an isotope of nickel, roughly a solar mass worth of radioactive nickel.
[slide 8] However, this is an explosion of a whole star, and that, by definition, is a very messy business. We can model supernova explosions on supercomputers, but it is still not a perfectly solved problem; it is a very complex phenomenon of nature. As you can imagine, it would be rather hard to standardize an explosion. So how is it possible that these are standard candles?
[slide 9] There is an empirical relationship between the shapes of the light curves of these supernovae and their peak luminosities, in the sense that those that are intrinsically more luminous also decay more slowly. Since the light curves have similar shapes, they can be parameterized by a stretch factor: one can be stretched into another, and then they can be shifted vertically. When you do this, the following happens. Here we show, at the top, a set of actual light curves of some Type Ia supernovae, and the second panel shows what happens when we normalize them and correct them with the stretch factor. Suddenly, they all seem to fit one universal shape.
It turns out that by doing this, we can standardize the peak luminosity of a Type Ia supernova to 10% or even slightly better, which is plenty good enough for cosmological purposes.
Note again that we have to calibrate this standard luminosity using distances to galaxies that were measured in some other way, say with Cepheids. There aren't very many of those, maybe 20 or so, that have both Cepheid measurements and supernovae.
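Here is a minimal sketch of the standardization step with placeholder numbers: a brighter-slower correction based on the light curve shape is applied to the peak magnitude, and the corrected distance modulus then gives the luminosity distance. The fiducial magnitude and the stretch coefficient below are illustrative, not an adopted calibration.
```python
# Illustrative SN Ia standardization (numbers are placeholders):
M_B_FIDUCIAL = -19.3      # roughly the standardized peak absolute magnitude of a Type Ia
ALPHA = 1.5               # hypothetical stretch coefficient (brighter-slower correction)

def luminosity_distance_mpc(m_peak, stretch):
    # Slower decliners (stretch > 1) are intrinsically brighter (more negative M).
    M = M_B_FIDUCIAL - ALPHA * (stretch - 1.0)
    mu = m_peak - M                              # distance modulus
    return 10.0 ** ((mu - 25.0) / 5.0)           # distance in Mpc

print(luminosity_distance_mpc(m_peak=18.0, stretch=1.05))   # a few hundred Mpc
```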
[slide 10] A very similar result can be obtained by looking not only at the shape of the light curve, but also at the behavior of different colors. The more luminous supernovae tend to decay more slowly, but they also have systematically different colors. Either way, supernovae of Type Ia can be standardized so that their peak brightness is nearly constant, to within 10%, and that is what makes them really useful as a cosmological tool, not just for the measurement of the Hubble constant, but also for other cosmological parameters. For example, they have played a key role in the recent confirmation of the existence of dark energy.
[slide 11] Here is an example of a Type Ia supernova Hubble diagram, corrected for the stretch factor and so on. The scatter is remarkably small. What is plotted here is the luminosity distance, in the form of the distance modulus, versus the redshift, and it is as good a Hubble diagram as you could ever hope to get.
[slide 12] Now, the other kind of supernovae, the Type IIs, can still be used, with a different trick. This is the so-called expanding photosphere method. An interesting thing about this method is that it is based on physical reasoning and, in principle, does not require messy calibrations. However, it is model-dependent, and that largely offsets the other benefits.
It is very similar in principle to the Baade-Wesselink method used for pulsating stars. It uses Type II supernovae, and it can be cross-checked against Cepheids to see how well it works. The physical basis of the method is that supernova photospheres emit light in a way that is not too different from blackbody radiation, according to the Stefan-Boltzmann law. So if you can measure the temperature, and if you can measure the radius of the photosphere, then you can immediately derive the luminosity, and from the luminosity and the observed apparent brightness, you can find the distance.
[slide 13] So this is how it works. The angular diameter of the expanding photosphere is the ratio of its physical diameter to the distance, and that can be folded through the Stefan-Boltzmann formula, as shown here, except that there is an extra fudge factor inserted to account for the deviations of real supernova spectra from a black body. This is where the theory comes in, where the modeling comes in. Just as with the Baade-Wesselink method, we can figure out the radius by observing the velocity of the expanding photosphere, from the moment of the explosion, as a function of time. It is probably as good an approximation as any to assume that the initial radius is about zero, because it is certainly much smaller than the radii of the expanding supernova shells.
Now we have everything we need, and we can simply solve for the distance. But again, there is model dependence: an expanding shell of a stellar explosion is not exactly in equilibrium, and its spectrum is not exactly that of a black body, so modeling has to be done to connect the two.
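A minimal sketch of that final step, with made-up numbers. The angular radius here stands in for the quantity derived from the photometry, the temperature, and the dilution (fudge) factor, and free expansion from a negligible initial radius is assumed.
```python
# Expanding photosphere method, schematically (all numbers hypothetical):
KM_PER_MPC = 3.0857e19

v_phot = 1.0e4                         # photospheric velocity from spectral lines, km/s
t_elapsed = 30.0 * 86400.0             # time since explosion, s
theta = 2.0e-11                        # angular radius from the blackbody fit, radians

R = v_phot * t_elapsed                 # physical radius, km (initial radius neglected)
d_Mpc = (R / theta) / KM_PER_MPC       # distance = physical radius / angular radius
print(d_Mpc)                           # ~40 Mpc for these numbers
```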
[slide 14] Next we will talk about what was really the first definitive measurement of the Hubble constant, using the Hubble Space Telescope.
Module 4.5 - The HST Distance Scale Key Project
[slide 1] You will recall that the value of the Hubble constant was fairly unsettled all the way through the 1980s. The reason is that these measurements are difficult; there are many different relations, each has to be calibrated against another, and there are many opportunities for errors, systematic errors in particular. That resulted in values of the Hubble constant scattering by a factor of two, which means that distances were also uncertain by a factor of two, and things like luminosities by a factor of four, and that was clearly not a satisfactory situation.
[slide 2] So when the Hubble Space Telescope was launched, measuring the Hubble constant was seen as one of its key goals, and it was the subject of the so-called Distance Scale, or Hubble Constant, Key Project. This took ten years of very diligent measurements using the Hubble Space Telescope, and even today the telescope continues to be used for this purpose, improving the results.
The idea was to observe Cepheids in a number of nearby spiral galaxies. The reason Hubble was needed is that these stars are faint and in crowded fields, so the superb resolving power of HST was required in order to measure their brightnesses and populate their light curves.
Then, using the locally calibrated Cepheid relation, one can derive the distances to these galaxies, and use those to calibrate other things, such as supernovae.
A choice was made to use the distance to the Large Magellanic Cloud to establish the zero point of the Cepheid period-luminosity relation. You recall that this was where the period-luminosity relation was originally discovered by Henrietta Leavitt, and it still plays a role. So any uncertainty in the distance to the Large Magellanic Cloud maps directly into the uncertainty in the Hubble constant.
Not only did this team perform wonderful measurements, but they were also very careful about their analysis, and they tried to honestly account for every source of error they could think of. The final result is shown here: the Hubble constant turned out to be right in the middle of the disputed interval between 50 and 100 kilometers per second per megaparsec. It is 72, plus or minus 3 kilometers per second per megaparsec in random errors, plus or minus 7 in potential systematic errors, and that is an honest result. In fact, it is still perfectly consistent with all of the more modern measurements.
Note, however, that there is still a dependence on the assumed distance to the Magellanic Clouds, and that is something that people continue to improve.
[slide 3] So here is a sample image of what they were looking at. The picture on the left is one of the spiral galaxies used in the study, and superimposed on it is an outline of the field of view of the camera on the space telescope that was used. This was the original Wide Field and Planetary Camera, with its strange B-2-bomber shape. The picture on the right shows a zoom in on one of those images, with some of the candidate Cepheids circled. As you can see, this would be a very hard thing to do from the ground. Some of these Cepheids occur in star-forming regions, so there may well be other bright stars blended with them, and that has to be taken care of.
[slide 4] And here are some light curves of Cepheids discovered by the Key Project. These represent many measurements at different times that were folded together at the best-fit period, and you can see that they really do look like those of nearby Cepheids.
[slide 5] So here are some of the Hubble diagrams they obtained in the end. The bottom left one shows only the galaxies whose distances were derived from Cepheids. The one on the right includes all the possible calibration sources they could come up with.
[slide 6] And here is a table that accounts for the different sources of uncertainty in their measurement. I am showing it for information only, and you can see how many things they thought of.
[slide 7] And here is their final probability distribution for the Hubble constant. This is actually a good scientific way of presenting a result: it is not a single number, it is a probability distribution for that number. The peak value is the one that is quoted, and the width of the distribution is indicative of the uncertainty.
[slide 8] People have continued to try to determine the Hubble constant using any number of methods and combinations of different indicators and calibrators, and here again is a table of those. It is not there for you to remember all of it, but just to see how much the different measurements scatter around each other. Most of them are certainly within the error bars of the value determined by the Hubble Key Project.
[slide 9] Now, recall that the basis for the whole thing was the distance to the Large Magellanic Cloud, which is about 50 kpc from the Milky Way. The uncertainty of this distance maps directly onto the uncertainty in the Hubble constant. The distance to the Large Magellanic Cloud has been measured in many different ways by many different authors, and that alone has a spread of plus or minus 10%.
[slide 10] There are maybe about ten different methods by which this has been attempted, and here is a table that shows some of the results. Again, it is not there for you to remember, but just to see roughly the spread of the numbers and the accuracies involved.
[slide 11] A more important check, which is actually becoming a very powerful new method, is as follows. In the nuclei of many large spiral galaxies there is a massive black hole, which for all practical purposes is like a point mass, just as essentially all of the mass in the solar system is in the Sun. Now, if we have test particles moving around that black hole, then from measuring their orbits we can find out how far away the galaxy is. The orbits can safely be assumed to be Keplerian, and suitable test particles are the so-called interstellar masers. These are interstellar clouds that have a very sharp line due to coherent emission, and as they move around the center of the galaxy, they can be used to measure the central mass, but also to measure the semi-major axes of their orbits.
This was the first case in which this was done; since then there have been more, and the distance to this particular galaxy was found to be consistent with the one determined from Cepheids.
[slide 12] Now, wouldn't it be nice if we could bypass all this messy climbing of the distance ladder, from one rung to another, and go directly into the deep Hubble flow? Well, there are two methods by which that can be accomplished that do not require any other calibration; they are both based on physical reasoning. The first is gravitational lens time delays, and the second is the so-called Sunyaev-Zeldovich effect.
While these are based on physics, they are still very much model-dependent. Initially, at least, they produced values that were somewhat lower than those measured by the Hubble Key Project, but since then they have converged somewhat, and the remaining small discrepancies can be understood in terms of systematic errors.
[slide 13] So first, gravitational lens time delays. Assuming that we understand the geometry of the lensing, and I will show this in a moment, we can in principle derive the distances to the lens and the lensed object using the measured time delay. Modeling the lens geometry is the key uncertainty here, because the masses responsible for gravitational lensing are not always perfectly symmetric, and there can be a combination of many potential wells, say of galaxies in a cluster or a group.
[slide 14] So here is how it works. Here is a schematic diagram of what a gravitational lens system might look like. There is a background source, say a quasar, and there is a foreground mass, which could be a galaxy or a cluster. It bends the light rays coming from the source, and one usually sees multiple images on the sky. Each of these images corresponds to a particular path of light rays around the gravitational lens, and there will generally be a difference in path length between them.
The difference in length translates into a difference in arrival times. So if the source is variable, we will first see the variability in one image, and then some time later in the other. These time delays are typically in the range of weeks to months.
The path difference between the different rays, assuming you know the geometry, will scale directly with every other length in the system, and the ratio of this path difference to the distance to the lens or the source is something the model will tell you. So by measuring the time delay and multiplying by the speed of light, we directly measure the difference in the path lengths, and if we know the lens model, we can then use it to infer the distance to the lens, or to the lensed object.
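A minimal sketch of that last step, with made-up numbers: the measured delay times the speed of light gives the physical path difference, and dividing by the model-predicted dimensionless ratio of path difference to distance yields the distance scale of the system.
```python
# Gravitational lens time delay, schematically (all numbers hypothetical):
C_KM_S = 2.998e5
KM_PER_MPC = 3.0857e19

dt = 30.0 * 86400.0              # measured time delay between two images, s
path_ratio = 2.5e-11             # (path difference) / (distance scale), from the lens model

path_diff_km = C_KM_S * dt                       # physical path difference, km
D_Mpc = path_diff_km / path_ratio / KM_PER_MPC   # inferred distance scale of the system
print(D_Mpc)                                     # ~1000 Mpc for these numbers
```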
[slide 15] The Sunyaev-Zeldovich effect is something entirely different. Clusters of galaxies contain galaxies and dark matter, but also a lot of hot gas, gas that was expelled from galaxies or accreted by the cluster. Since that gas sits in the potential well of the cluster, the speeds of the individual particles, protons and electrons, have to be such that the kinetic energy balances the potential energy.
It turns out that this corresponds to temperatures of millions or tens of millions of degrees, which means that the gas emits in X-rays. Now, we are looking at the cosmic microwave background behind the cluster. The photons of the microwave background come through, and some of them scatter off these hot, energetic electrons. This generally results in an increased energy of the photon, the energy being gained from the electrons in the cluster; of course, an equal number of photons is scattered out of the line of sight in other directions.
So essentially, what you see on the microwave background sky is a bump that corresponds to this X-ray emitting cloud.
[slide 16] The entire spectrum of the cosmic microwave background in that direction will be shifted towards somewhat higher energies. By measuring that shift, we can find out how long the path through the cluster was, because the longer the path along the line of sight, the more chances the photons have to be scattered.
Therefore, from this measurement we can directly derive the physical size of the cluster along the line of sight. We can also measure the apparent angular size of the cluster on the sky. On average, we expect clusters to have the same extent in the radial direction as orthogonal to it, so since we have the observed angular size of the cluster on the sky, and we know how large that is in physical units at the cluster's redshift, we can derive the angular diameter distance.
Now, any given cluster is not likely to be spherically symmetric, but for the whole ensemble, on
average, this will probably work out. A beautiful thing about this method is that it does not depend
on the distance to the cluster itself. The source that's observed is the cosmic microwave background;
the cluster could be nearby, or it could be very far away. So the method can work over a very broad
range of redshifts.
There are uncertainties in modeling the process, because the gas could be clumpy and there are
density gradients; all of that has to be accounted for before we can derive the actual diameter of the
cluster that the photons go through.
[slide 17] Next we will talk about measurements of the age of the universe.
Module 4.6 - Estimating the Age of the Universe
[slide 1] So far, we have seen how we measure the spatial scale of the universe, which scales
simply as one over the Hubble constant. What about its temporal scale? What's the age of the
universe?
[slide 2] It turns out that we can't actually measure the age of the universe directly. However,
what we can do is measure lower limits to it from the ages of the oldest things we can find. And
then, presumably, the age of the universe is just a little more than that.
Several different possibilities exist. The first one is globular star clusters. These are believed
to have formed over a very short time interval in the early universe, and all stars in them have
essentially the same age. Understanding the ages of star clusters or stellar populations is a basic,
fundamental element of stellar evolution theory, so in principle we understand how they change by
looking at their color-magnitude diagrams.
The second kind of thing we can look at is white dwarfs. These are the inert,
incandescent cores of low-mass stars that are now slowly cooling down. The cooling theory is
fairly well understood, and by observing the luminosity function of white dwarf stars, you
can estimate how old the oldest ones are.
Another thing we can do is estimate the age of the heavy elements created in stars. Very heavy
elements tend to be radioactively unstable. And if we can find long-lived isotopes of
some of those elements, and measure their relative abundances, we can estimate the age of those
elements, which presumably came from supernova explosions of some of the very first stars.
This is very similar to the carbon dating that's often used on planet Earth.
That is somewhat model-dependent, because it depends on when the supernovae were exploding,
but the resulting element ages would, again, give a lower limit to the age of the universe.
And finally, we could model stellar populations in a similar way to how we model star clusters.
But that turns out to be much more complicated, and it's really not used to constrain the age of the
universe.
[slide 3] The fundamental measurement here is estimating the ages of globular clusters, which
are almost as old as our galaxy itself. This is based on a well established and well tested theory
of stellar evolution, which predicts so-called isochrones: the loci in the color-magnitude diagram
that the main sequence and the giant branch will occupy at any given age.
[slide 4] They look like this: the main-sequence turn-off moves to ever lower masses, and lower
luminosities, as time goes on, and then stars ascend the giant branch. So, by measuring where the
main-sequence turnoff is, and where the giant branch is, we can fit the models and find out how
old the cluster is. Likewise, the difference between the turnoff and the position of the horizontal
branch is another age indicator. The isochrones also depend slightly on the chemical composition
of the stars in the cluster.
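To give a flavor of the numbers involved, here is a crude single power-law scaling for the main-sequence lifetime; it is only an illustrative sketch, not the detailed isochrones the lecture refers to, and its normalization is an assumed round value.

```python
# Crude illustrative scaling only; real ages come from full stellar evolution
# isochrones, which also account for composition, not from this power law.

def turnoff_age_gyr(turnoff_mass_msun: float) -> float:
    """Approximate main-sequence lifetime, t ~ 10 Gyr * (M / Msun)**-2.5."""
    return 10.0 * turnoff_mass_msun ** -2.5

# A typical turnoff mass for an old globular cluster is around 0.8 solar masses:
print(f"~{turnoff_age_gyr(0.8):.0f} Gyr")
# This crude scaling overshoots (about 17 Gyr); detailed isochrones with the
# proper composition bring the ages down to the 12-13 Gyr quoted below.
```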
[slide 3] So there are many uncertainties in those estimates, but the biggest one of them all is
the uncertainty in the distance to the globular clusters. Stellar evolution models are pretty good,
but in detail there are still some minor uncertainties that need to be resolved: the exact effects of
the different abundances of chemical elements, of diffusion of elements from the stellar cores in which they are
made towards the surface, and so on. Those also contribute to the uncertainty in the estimates of the
ages of globular clusters. Nevertheless, measuring the ages of globular clusters was probably one of
the more reliable cosmological measurements for many years, certainly more so than for any other
cosmological parameter until the 1990s.
[slide 5] The key point here was to calibrate exactly where the main sequences are, and this
became possible only after the Hipparcos satellite measured actual parallaxes, and thus distances, to
some nearby metal-poor stars, which defined the metal-poor main sequence that can then be applied to
globular star clusters. Different groups have done that, and typically the results they get are
ages of globular clusters on the order of 12 to 13 billion years, which is very close to what we
now know is the actual age of the universe, about 13.7 billion years.
[slide 6] Here is a probability distribution of globular cluster ages that takes into account all of the
sources of uncertainty. Clusters could have different ages, and probably do, so that will
contribute to the spread as well. The peak of this distribution is indeed where it should be, a
little more than 13 gigayears ago, which allows a few hundred million years for the galaxy and the
clusters to form. Of course there are no clusters older than the universe itself; the fact that the
distribution has a high-end tail is simply indicative of the errors in the age estimates.
[slide 7] The next method is estimating the ages of white dwarfs. They are fairly well
understood, and they cool in a very orderly fashion. So the faintest white dwarfs that we can
find are probably the coolest, the ones that have been cooling for the longest period of time, and
they can constrain the age of the cluster, or of the galactic disk, in which they are found. Again, the
theory is fairly well understood in an overall sense, but there are still details that need to be ironed
out.
Probably the best way to do this is in star clusters, where, again, all the white dwarfs have
essentially the same age. And because they're really faint, and clusters are crowded, we need the
Hubble Space Telescope to do this.
[slide 8] Here is an example of the first measurement of this kind. In the upper right you see a
little segment, which is the main sequence. In the center is the white dwarf cooling sequence, and
you can see it stops at some point. Those are the oldest and coolest white dwarfs in the
cluster, and their position determines the age.
This method produced results which are perfectly consistent with those from main-sequence
isochrone fitting, and that's very different physics, so it's very encouraging.
[slide 9] Here is the observed luminosity function of white dwarfs in this particular cluster,
meaning the distribution of their luminosities. You can see that there is a fairly sharp cutoff at
the faint end, which is indicative of what the age really is.
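Here is a hedged sketch of how such a faint cutoff maps to an age, using the classic Mestel-style cooling scaling; the normalization and the cutoff luminosity are assumed illustrative values, not the calibrated cooling models used in the actual measurement.

```python
# Illustrative sketch of the white dwarf cooling-age argument, using the
# classic Mestel-type scaling t_cool ~ L**(-5/7). The normalization and the
# cutoff luminosity are assumed round numbers, not calibrated values.

def cooling_age_gyr(L_over_Lsun: float, t_norm_gyr: float = 0.01) -> float:
    """Cooling age ~ t_norm * (L / Lsun)**(-5/7), with an assumed t_norm."""
    return t_norm_gyr * L_over_Lsun ** (-5.0 / 7.0)

# The cutoff of the luminosity function marks the faintest, oldest white
# dwarfs. For an assumed cutoff near L ~ 10**-4.3 Lsun:
print(f"~{cooling_age_gyr(10 ** -4.3):.0f} Gyr")   # roughly 12 Gyr
```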
[slide 10] An entirely different approach uses the ages of chemical elements. Heavy chemical
elements, which must have been produced in supernova explosions, say of the first supernovae, often
have unstable isotopes, and their radioactive decay can be used to age-date the elements
themselves.
Because the age of the universe is about 13 billion years, you need radioactive decays with
half-lives that are commensurate with it, again measured in billions of years. There are such isotopes, like thorium
and uranium; there are also rhenium and osmium, and a few others. So you need something
that has a radioactive half-life of billions of years. But that, unfortunately, also means that it
will be really hard to measure that half-life in the laboratory.
So doing this for a variety of different isotopes then provides an estimate of the age of the oldest
supernova explosions, and thus of these chemical elements in our galaxy. The abundances of these
elements are measured by very high resolution, high signal-to-noise spectroscopy of the oldest stars we
can find. The lines are very subtle, and the measurements are very hard. Nevertheless, they've
been done over the years, and the results are shown here.
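As a hedged sketch of the decay arithmetic: the age follows from the ratio of an unstable isotope to a stable reference element. The observed and, especially, the initial production ratios below are invented placeholders; the production ratio is exactly the model-dependent supernova ingredient mentioned above.

```python
import math

# Illustrative decay arithmetic for a thorium-based chronometer. The abundance
# ratios are assumed placeholder values; the production (initial) ratio is the
# model-dependent ingredient that comes from supernova nucleosynthesis models.

HALF_LIFE_TH232_GYR = 14.05       # half-life of thorium-232, about 14 Gyr

def decay_age_gyr(ratio_initial: float, ratio_observed: float,
                  half_life_gyr: float = HALF_LIFE_TH232_GYR) -> float:
    """Age from N(t) = N0 * exp(-lam * t), with lam = ln(2) / half-life."""
    lam = math.log(2.0) / half_life_gyr
    return math.log(ratio_initial / ratio_observed) / lam

# Assumed production ratio of thorium to a stable reference element of 0.48,
# and an assumed observed ratio of 0.24 in an old star:
print(f"~{decay_age_gyr(0.48, 0.24):.0f} Gyr")   # one half-life, about 14 Gyr
```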
[slide 11] So for one particular star, shown here, there are ratios of several isotopes that are
used to estimate the age of the elements that made the star. And the average of them is remarkably
close to what we now know is the actual age of the universe. It's also essentially the same as
the ages measured for globular clusters and for white dwarfs.
This is just one star; doing it for many stars improves the result.
[slide 12] And so, to recap: we now have a fairly good idea of what the age of the universe is, or
at least a lower limit to it, using several completely different methods which rely on different
physics, different assumptions, and different measurements. And they all agree. It's because of
that that we think we got it right. Moreover, this is also in perfect agreement with the age of the universe
deduced from measurements of other cosmological parameters, to which we'll come later.
[slide 13] Next time we will address the question of whether the universe is actually expanding. You
would think we'd have figured this one out by now, but it's good to be sure.
Module 4.7 - Tests for Expansion of the Universe
[slide 1] Finally, let us address the question of whether the universe is really expanding. The reason we
think it's expanding is the existence of Hubble's Law, and this is generally accepted to be true.
However, there are other possibilities. For instance, there is the so-called tired light theory, which states
that, for some reason that is as yet completely unknown, photons coming from very far away lose
their energy on the way here in a way that's proportional to their travel distance. There is no
physics behind it, but it's a possibility. So, several tests have been designed to
demonstrate that the universe actually is expanding.
[slide 2] The first of those is the so-called Tolman test, which Tolman and Hubble came up with in the
1930s, and it uses the behavior of surface brightness as a function of distance. Here is how it
works. If the universe wasn't expanding at all, and nothing else funny was going on, surface
brightness would be constant, not depending on the distance, because, remember, the apparent
brightness declines as the square of the distance, but so does the angular area over which it's
distributed, so the ratio of the two remains constant.
In tired light theory, the surface brightness will decline with redshift as the first power of one plus
redshift.
And finally, in an expanding universe, it will decline as the fourth power of the stretch factor, one
plus redshift.
The second method uses the time dilation of supernova light curves. Recall that those can be
standardized, for Type Ia supernovae, to the same shape. Now, these supernovae, or rather their
host galaxies, are receding from us at relativistic speeds, and so clocks are ticking slower
there. So we have to compensate for the time dilation. In addition to the stretch factor that
brings them all together, the light curves have to be corrected for a time dilation that's proportional
to the first power of the stretch factor, (1 + z).
And finally, a somewhat indirect argument is the black body nature and temperature of the cosmic
microwave background. In an expanding universe, the shape of the black body curve is preserved and
the energy density has to scale as the fourth power of the temperature. If the universe wasn't
expanding, that relationship would not be exactly a power of four. But we do see an essentially
perfect black body spectrum, which is perfectly consistent with an expanding universe.
[slide 3] So, here's how the Tolman test works. In a non-expanding, Euclidean space, surface
brightness is constant and does not depend on distance, because the apparent brightness declines as the
second power of the distance, and so does the angular area by which we divide it to get the surface brightness.
However, in an expanding universe, we have to deal with the angular diameter distance and the
luminosity distance. As you recall, the angular diameter distance is equal to the comoving distance
divided by one plus redshift, because the objects, fixed in proper coordinates, do not expand with
the comoving coordinates. And the luminosity distance is bigger than the comoving distance by a factor of one
plus redshift, because the photons lose energy and the rate at which the photons arrive is also dilated.
So the upshot of this is that the surface brightness will decline as one plus redshift to the fourth power,
relative to what it would be if the universe wasn't expanding.
Note that this has nothing to do with the curvature of space or anything else. It simply tests whether the
universe is expanding or not, and assumes that the special theory of relativity is valid, and nobody's
doubting that. So, in some sense, it's completely independent of cosmology, of the Hubble constant,
or of any other cosmological parameters.
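A compact way to see where the fourth power comes from, using the two distance relations just quoted (so that the luminosity distance is one plus redshift squared times the angular diameter distance); here f is the observed flux, θ the angular size, and L and R the intrinsic luminosity and size of the source:

```latex
% Sketch of the (1+z)^4 surface-brightness dimming, using D_L = (1+z)^2 D_A.
\[
  \Sigma \;\propto\; \frac{f}{\theta^{2}}
         \;=\; \frac{L / (4\pi D_L^{2})}{\left(R / D_A\right)^{2}}
         \;=\; \frac{L}{4\pi R^{2}}\left(\frac{D_A}{D_L}\right)^{2}
         \;\propto\; (1+z)^{-4}.
\]
```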
[slide 4] In order to perform this test, we need something that can be seen far away and that has a
constant surface brightness, what we may call "standard fuzz." One good choice is the surface
brightness intercept of the fundamental plane correlations. Remember, they connect things like the
radii, velocity dispersions, and mean surface brightnesses of galaxies in what is an essentially perfect
correlation, modulo measurement errors. So we can reproject it in such a way that one axis has
surface brightness on it, and the intercept on that axis defines the standard fuzz.
So, if we have two clusters of galaxies, one at a larger distance than the other, and we compare the
intercepts of their respective fundamental plane solutions, they should shift according to the
expansion law.
[slide 5] This test was done, and here is the result. This is a log-log plot, so a power law is a
straight line, and the line that's drawn through the points here is exactly one plus redshift to the
fourth power. So the universe really does seem to be expanding, exactly as the Tolman test says it
should.
[slide 6] Now, the time dilation of supernova light curves. Shown on the left here is a set of
light curves as observed, normalized to the same peak brightness, for Type Ia supernovae at
different redshifts.
On the right, we apply the stretch factor that's normally used to standardize them; however,
no correction was made for the relativistic time dilation. In the lower left, we see what happens
when we apply the relativistic time dilation correction: the scatter goes way down. And then, if
we bin the points, it becomes very obvious.
Thus, supernova light curves, in a sense giant clocks, do behave exactly in the way that they should if
the universe is expanding.
[slide 7] Another way to show this result is to plot the characteristic width of the supernova light
curves before and after the relativistic correction. Before, you can see there is a residual trend;
after, the distribution is flat, which is the way it should be.
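The correction being applied is simply a division of each observed light-curve width by its stretch factor. Here is a minimal sketch, with made-up (redshift, width) pairs used only for illustration:

```python
# Minimal sketch of the time-dilation correction applied to supernova light
# curves: observed durations are stretched by (1 + z), so dividing each
# measured width by (1 + z) should remove any trend with redshift.
# The (redshift, width) pairs below are made-up illustrative values.

observed = [(0.1, 1.1), (0.5, 1.5), (0.9, 1.9)]   # width relative to a local SN

for z, w_obs in observed:
    w_rest = w_obs / (1.0 + z)                    # rest-frame width
    print(f"z = {z:.1f}: observed {w_obs:.2f} -> rest-frame {w_rest:.2f}")

# In an expanding universe the corrected widths come out constant, as in the
# flat distribution just described; in a tired-light universe they would not.
```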
[slide 8] So, next week we'll start talking about cosmological tests: how do we actually figure
out what kind of universe we live in?