Contents

Production of X Rays
Applications of X Rays
Discovery and Early Scientific Use
Potential and Kinetic Energy
Conversion and Conservation of Energy
The Nature of Light
The Wave, Particle, and Electromagnetic Theories of Light
Modern Theory of the Nature of Light
The Speed of Light
Luminous and Illuminated Bodies
Continuous and Line Spectra
The Quantum Explanation of Spectral Lines
Coherent Light and Its Emission in Lasers
Characteristics of Lasers
Applications of Lasers
Relationship of Energy and Matter
Dual Nature of Waves and Particles
Evolution of Quantum Theory: Early Developments
Quantum Mechanics and Later Developments
Bose-Einstein Statistics
Interference in Sound Waves
Interference in Light Waves
Interference as a Scientific Tool
Characteristics of Polarization
Polarization Techniques
Photometric Units of Measurement
Photometric Instruments
The Nature of the Nucleus: Composition
Size and Density
Mass Defect, Binding Energy, and Nuclear Reactions
Models of the Nucleus
Scientific Notation for the Nucleus and Nuclear Reactions
Scientific Investigations of the Nucleus
Design of Particle Accelerators
Linear Accelerators
Circular Accelerators
Positive and Negative Electric Charges
Ionization of Neutral Atoms
Applications of Ionization
Effect of Isotopes in Calculating Atomic Weight
Development of the Concept of Atomic Weight
Radioactive Emissions
Alpha Radiation
Gamma Radiation
Radioactive Decay
Half-Life of an Element
Radioactive Disintegration Series
Discovery of Radioactivity
http://www.encyclopedia.com
X ray,
invisible, highly penetrating electromagnetic radiation of much shorter
wavelength (higher frequency) than visible light. The wavelength range for X
rays is from about 10⁻⁸ m to about 10⁻¹¹ m, or from less than a billionth of an
inch to less than a trillionth of an inch; the corresponding frequency range is
from about 3 × 10¹⁶ Hz to about 3 × 10¹⁹ Hz (1 Hz = 1 cps).
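These two ranges are consistent with each other, since frequency and wavelength
are related by ν = c/λ. A minimal Python check (the speed-of-light constant is
the standard value, not taken from this entry):

    # Convert the quoted X-ray wavelength limits to frequencies: nu = c / lambda.
    C = 2.99792458e8  # speed of light, m/s

    for wavelength_m in (1e-8, 1e-11):  # approximate band limits quoted above
        frequency_hz = C / wavelength_m
        print(f"lambda = {wavelength_m:.0e} m  ->  nu = {frequency_hz:.1e} Hz")
    # Prints roughly 3.0e16 Hz and 3.0e19 Hz, matching the quoted range.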
Production of X Rays
An important source of X rays is synchrotron radiation. X rays are also produced
in a highly evacuated glass bulb, called an X-ray tube, that contains essentially
two electrodes: an anode made of platinum, tungsten, or another heavy metal of
high melting point, and a cathode. When a high voltage is applied between the
electrodes, streams of electrons (cathode rays) are accelerated from the cathode
to the anode and produce X rays as they strike the anode.
Two different processes give rise to radiation of X-ray frequency. In one process
radiation is emitted by the high-speed electrons themselves as they are slowed or
even stopped in passing near the positively charged nuclei of the anode material.
This radiation is often called bremsstrahlung [Ger.,=braking radiation]. In a
second process radiation is emitted by the electrons of the anode atoms when
incoming electrons from the cathode knock electrons near the nuclei out of orbit
and they are replaced by other electrons from outer orbits. The spectrum of
frequencies given off with any particular anode material thus consists of a
continuous range of frequencies emitted in the first process, and superimposed
on it a number of sharp peaks of intensity corresponding to discrete frequencies
at which X rays are emitted in the second process. The sharp peaks constitute
the X-ray line spectrum for the anode material and will differ for different
materials.
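The continuous part of this spectrum also has a sharp short-wavelength cutoff,
since an electron accelerated through a tube voltage V can give at most energy
eV to a single photon, so λ_min = hc/eV (the Duane-Hunt law; this formula and
the constants below are standard physics rather than part of the entry). A
sketch in Python:

    # Short-wavelength limit of the continuous (bremsstrahlung) spectrum.
    H = 6.62607015e-34          # Planck's constant, J*s
    C = 2.99792458e8            # speed of light, m/s
    E_CHARGE = 1.602176634e-19  # elementary charge, C

    def lambda_min_m(tube_voltage_v: float) -> float:
        # All of one electron's kinetic energy e*V goes into a single photon.
        return H * C / (E_CHARGE * tube_voltage_v)

    print(f"{lambda_min_m(50_000):.2e} m")  # about 2.5e-11 m for a 50 kV tube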
Applications of X Rays
Most applications of X rays are based on their ability to pass through matter.
This ability varies with different substances; e.g., wood and flesh are easily
penetrated, but denser substances such as lead and bone are more opaque. The
penetrating power of X rays also depends on their energy. The more penetrating
X rays, known as hard X rays, are of higher frequency and are thus more
energetic, while the less penetrating X rays, called soft X rays, have lower
energies. X rays that have passed through a body provide a visual image of its
interior structure when they strike a photographic plate or a fluorescent screen;
the darkness of the shadows produced on the plate or screen depends on the
relative opacity of different parts of the body.
Photographs made with X rays are known as radiographs or skiagraphs.
Radiography has applications in both medicine and industry, where it is valuable
for diagnosis and nondestructive testing of products for defects. Fluoroscopy is
based on the same techniques, with the photographic plate replaced by a
fluorescent screen (see fluorescence; fluoroscope); its advantages over
radiography in time and cost are balanced by some loss in sharpness of the
image. X rays are also used with computers in CAT (computerized axial
tomography) scans to produce cross-sectional images of the inside of the body.
Another use of radiography is in the examination and analysis of paintings,
where studies can reveal such details as the age of a painting and underlying
brushstroke techniques that help to identify or verify the artist. X rays are used
in several techniques that can provide enlarged images of the structure of opaque
objects. These techniques, collectively referred to as X-ray microscopy or
microradiography, can also be used in the quantitative analysis of many
materials. One of the dangers in the use of X rays is that they can destroy living
tissue and can cause severe skin burns on human flesh exposed for too long a
time. This destructive power is used in X-ray therapy to destroy diseased cells.
Discovery and Early Scientific Use
X rays were discovered in 1895 by W. C. Roentgen, who called them X rays
because their nature was at first unknown; they are sometimes also called
Roentgen, or Röntgen, rays. X-ray line spectra were used by H. G. J. Moseley in
his important work on atomic numbers (1913) and also provided further
confirmation of the quantum theory of atomic structure. Also important
historically is the discovery of X-ray diffraction by Max von Laue (1912) and its
subsequent application by W. H. and W. L. Bragg to the study of crystal
structure.
Bibliography
See D. Graham and T. Eddie, X-ray Techniques in Art Galleries and Museums
(1985); B. H. Kevles, Naked to the Bone: Medical Imaging in the Twentieth
Century (1997).
electromagnetic radiation,
energy radiated in the form of a wave as a result of the motion of electric
charges. A moving charge gives rise to a magnetic field, and if the motion is
changing (accelerated), then the magnetic field varies and in turn produces an
electric field. These interacting electric and magnetic fields are at right angles to
one another and also to the direction of propagation of the energy. Thus, an
electromagnetic wave is a transverse wave. If the direction of the electric field is
constant, the wave is said to be polarized (see polarization of light).
Electromagnetic radiation does not require a material medium and can travel
through a vacuum. The theory of electromagnetic radiation was developed by
James Clerk Maxwell and published in 1865. He showed that the speed of
propagation of electromagnetic radiation should be identical with that of light,
about 186,000 mi (300,000 km) per sec. Subsequent experiments by Heinrich
Hertz verified Maxwell's prediction through the discovery of radio waves, also
known as hertzian waves. Light is a type of electromagnetic radiation,
occupying only a small portion of the possible spectrum of this energy. The
various types of electromagnetic radiation differ only in wavelength and
frequency; they are alike in all other respects. The possible sources of
electromagnetic radiation are directly related to wavelength: long radio waves
are produced by large antennas such as those used by broadcasting stations;
much shorter visible light waves are produced by the motions of charges within
atoms; the shortest waves, those of gamma radiation, result from changes within
the nucleus of the atom. In order of decreasing wavelength and increasing
frequency, various types of electromagnetic radiation include: electric waves,
radio waves (including AM, FM, TV, and shortwaves), microwaves, infrared
radiation, visible light, ultraviolet radiation, X rays, and gamma radiation.
According to the quantum theory, light and other forms of electromagnetic
radiation may at times exhibit properties like those of particles in their
interaction with matter. (Conversely, particles sometimes exhibit wavelike
properties.) The individual quantum of electromagnetic radiation is known as
the photon and is symbolized by the Greek letter gamma. Quantum effects are
most pronounced for the higher frequencies, such as gamma rays, and are
usually negligible for radio waves at the long-wavelength, low-frequency end of
the spectrum.
energy,
in physics, the ability or capacity to do work or to produce change. Forms of
energy include heat, light, sound, electricity, and chemical energy. Energy and
work are measured in the same units: foot-pounds, joules, ergs, or some other,
depending on the system of measurement being used. When a force acts on a
body, the work performed (and the energy expended) is the product of the force
and the distance over which it is exerted.
Potential and Kinetic Energy
Potential energy is the capacity for doing work that a body possesses because of
its position or condition. For example, a stone resting on the edge of a cliff has
potential energy due to its position in the earth's gravitational field. If it falls, the
force of gravity (which is equal to the stone's weight; see gravitation) will act on
it until it strikes the ground; the stone's potential energy is equal to its weight
times the distance it can fall. A charge in an electric field also has potential
energy because of its position; a stretched spring has potential energy because of
its condition. Chemical energy is a special kind of potential energy; it is the
form of energy involved in chemical reactions. The chemical energy of a
substance is due to the condition of the atoms of which it is made; it resides in
the chemical bonds that join the atoms in compound substances (see chemical
bond).
Kinetic energy is energy a body possesses because it is in motion. The kinetic
energy of a body with mass m moving at a velocity v is one half the product of
the mass of the body and the square of its velocity, i.e., KE = ½mv². Even
when a body appears to be at rest, its atoms and molecules are in constant
motion and thus have kinetic energy. The average kinetic energy of the atoms or
molecules is measured by the temperature of the body.
The difference between kinetic energy and potential energy, and the conversion
of one to the other, is demonstrated by the falling of a rock from a cliff, when its
energy of position is changed to energy of motion. Another example is provided
in the movements of a simple pendulum (see harmonic motion). As the
suspended body moves upward in its swing, its kinetic energy is continuously
being changed into potential energy; the higher it goes the greater becomes the
energy that it owes to its position. At the top of the swing the change from
kinetic to potential energy is complete, and in the course of the downward
motion that follows the potential energy is in turn converted to kinetic energy.
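A short numerical illustration of this exchange (the mass and height are
made-up values): the stone's energy of position, its weight times its height,
reappears in full as kinetic energy ½mv² at the bottom of the fall.

    # Falling stone: potential energy (weight x height) becomes kinetic energy.
    G = 9.81  # gravitational acceleration, m/s^2

    mass_kg, height_m = 2.0, 10.0
    pe_top = mass_kg * G * height_m              # weight (m*g) times distance
    speed_at_ground = (2 * G * height_m) ** 0.5  # speed after a free fall
    ke_bottom = 0.5 * mass_kg * speed_at_ground ** 2  # KE = (1/2) m v^2

    print(f"PE at top: {pe_top:.1f} J; KE at ground: {ke_bottom:.1f} J")  # equal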
Conversion and Conservation of Energy
It is common for energy to be converted from one form to another; however, the
law of conservation of energy, a fundamental law of physics, states that although
energy can be changed in form it can be neither created nor destroyed (see
conservation laws). The theory of relativity shows, however, that mass and
energy are equivalent and thus that one can be converted into the other. As a
result, the law of conservation of energy includes both mass and energy.
Many transformations of energy are of practical importance. Combustion of
fuels results in the conversion of chemical energy into heat and light. In the
electric storage battery chemical energy is converted to electrical energy and
conversely. In the photosynthesis of starch, green plants convert light energy
from the sun into chemical energy. Hydroelectric facilities convert the kinetic
energy of falling water into electrical energy, which can be conveniently carried
by wires to its place of use (see power, electric). The force of a nuclear
explosion results from the partial conversion of matter to energy (see nuclear
energy).
light,
visible electromagnetic radiation. Of the entire electromagnetic spectrum, the
human eye is sensitive to only a tiny part, the part that is called light. The
wavelengths of visible light range from about 350 or 400 nm to about 750 or
800 nm. The term "light" is often extended to adjacent wavelength ranges that
the eye cannot detect: to infrared radiation, which has a frequency less than
that of visible light, and to ultraviolet radiation and black light, which have
a frequency greater than that of visible light.
If white light, which contains all visible wavelengths, is separated, or dispersed,
into a spectrum, each wavelength is seen to correspond to a different color.
Light that is all of the same wavelength and phase (all the waves are in step
with one another) is called "coherent"; one of the most important modern
applications of light has been the development of a source of coherent light,
the laser.
The Nature of Light
The scientific study of the behavior of light is called optics and covers reflection
of light by a mirror or other object, refraction by a lens or prism, diffraction of
light as it passes by the edge of an opaque object, and interference patterns
resulting from diffraction. Also studied is the polarization of light. Any
successful theory of the nature of light must be able to explain these and other
optical phenomena.
The Wave, Particle, and Electromagnetic Theories of Light
The earliest scientific theories of the nature of light were proposed around the
end of the 17th cent. In 1690, Christian Huygens proposed a theory that
explained light as a wave phenomenon. However, a rival theory was offered by
Sir Isaac Newton in 1704. Newton, who had discovered the visible spectrum in
1666, held that light is composed of tiny particles, or corpuscles, emitted by
luminous bodies. By combining this corpuscular theory with his laws of
mechanics, he was able to explain many optical phenomena.
For more than 100 years, Newton's corpuscular theory of light was favored over
the wave theory, partly because of Newton's great prestige and partly because
not enough experimental evidence existed to provide an adequate basis of
comparison between the two theories. Finally, important experiments were done
on the diffraction and interference of light by Thomas Young (1801) and A. J.
Fresnel (1814-15) that could only be interpreted in terms of the wave theory.
The polarization of light was still another phenomenon that could only be
explained by the wave theory. Thus, in the 19th cent. the wave theory became
the dominant theory of the nature of light.
The wave theory received additional support from the electromagnetic theory of
James Clerk Maxwell (1864), who showed that electric and magnetic fields were
propagated together and that their speed was identical with the speed of light. It
thus became clear that visible light is a form of electromagnetic radiation,
constituting only a small part of the electromagnetic spectrum. Maxwell's theory
was confirmed experimentally with the discovery of radio waves by Heinrich
Hertz in 1886.
Modern Theory of the Nature of Light
With the acceptance of the electromagnetic theory of light, only two general
problems remained. One of these was that of the luminiferous ether, a
hypothetical medium suggested as the carrier of light waves, just as air or water
carries sound waves. The ether was assumed to have some very unusual
properties, e.g., being massless but having high elasticity. A number of
experiments performed to give evidence of the ether, most notably by A. A.
Michelson in 1881 and by Michelson and E. W. Morley in 1887, failed to
support the ether hypothesis. With the publication of the special theory of
relativity in 1905 by Albert Einstein, the ether was shown to be unnecessary to
the electromagnetic theory.
The second main problem, and the more serious of the two, was the explanation
of various phenomena, such as the photoelectric effect, that involved the
interaction of light with matter. Again the solution to the problem was proposed
by Einstein, also in 1905. Einstein extended the quantum theory of thermal
radiation proposed by Max Planck in 1900 to cover not only vibrations of the
source of radiation but also vibrations of the radiation itself. He thus suggested
that light, and other forms of electromagnetic radiation as well, travel as tiny
bundles of energy called light quanta, or photons. The energy of each photon is
directly proportional to its frequency.
With the development of the quantum theory of atomic and molecular structure
by Niels Bohr and others, it became apparent that light and other forms of
electromagnetic radiation are emitted and absorbed in connection with energy
transitions of the particles of the substance radiating or absorbing the light. In
these processes, the quantum, or particle, nature of light is more important than
its wave nature. When the transmission of light is under consideration, however,
the wave nature dominates over the particle nature. In 1924, Louis de Broglie
showed that an analogous picture holds for particle behavior, with moving
particles having certain wavelike properties that govern their motion, so that
there exists a complementarity between particles and waves known as particle-wave duality (see also complementarity principle). The quantum theory of light
has successfully explained all aspects of the behavior of light.
The Speed of Light
An important question in the history of the study of light has been the
determination of its speed and of the relationship of this speed to other physical
phenomena. At one time it was thought that light travels with infinite speed, i.e.,
it is propagated instantaneously from its source to an observer. Olaus Rømer
showed that it was finite, however, and in 1675 estimated its value from
differences in the time of eclipse of certain of Jupiter's satellites when observed
from different points in the earth's orbit. More accurate measurements were
made during the 19th cent. by A. H. L. Fizeau (1849), using a toothed wheel to
interrupt the light, and by J. B. L. Foucault (1850), using a rotating mirror. The
most accurate measurements of this type were made by Michelson. Modern
electronic methods have improved this accuracy, yielding a value of
2.99792458 × 10⁸ m (c.186,000 mi) per sec for the speed of light in a vacuum, and less for
its speed in other media. The theory of relativity predicts that the speed of light
in a vacuum is the limiting velocity for material particles; no particle can be
accelerated from rest to the speed of light, although it may approach it very
closely. Particles moving at less than the speed of light in a vacuum but greater
than that of light in some other medium will emit a faint blue light known as
Cherenkov radiation when they pass through the other medium. This
phenomenon has been used in various applications involving elementary
particles.
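The Cherenkov condition is easy to state numerically: light in a medium of
refractive index n travels at c/n, so a charged particle radiates only above
that threshold speed (the refractive index of water below is an illustrative
round value, not a figure from this entry):

    # Cherenkov threshold: a particle radiates when v exceeds c/n in the medium.
    C = 2.99792458e8  # speed of light in a vacuum, m/s

    def cherenkov_threshold_m_per_s(refractive_index: float) -> float:
        return C / refractive_index

    v_min = cherenkov_threshold_m_per_s(1.33)  # water, n of about 1.33
    print(f"threshold: {v_min:.3e} m/s ({v_min / C:.0%} of c)")  # about 75% of c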
Luminous and Illuminated Bodies
In general, vision is due to the stimulation of the optic nerves in the eye by light
either directly from its source or indirectly after reflection from other objects. A
luminous body, such as the sun, another star, or a light bulb, is thus
distinguished from an illuminated body, such as the moon and most of the other
objects one sees. The amount and type of light given off by a luminous body or
reflected by an illuminated body is of concern to the branch of physics known as
photometry (see also lighting). Illuminated bodies not only reflect light but
sometimes also transmit it. Transparent objects, such as glass, air, and some
liquids, allow light to pass through them. Translucent objects, such as tissue
paper and certain types of glass, also allow light to pass through them but
diffuse (scatter) it in the process, so that an observer cannot see a clear image of
whatever lies on the other side of the object. Opaque objects do not allow light
to pass through them at all. Some transparent and translucent objects allow only
light of certain wavelengths to pass through them and thus appear colored. The
colors of opaque objects are caused by selective reflection of certain
wavelengths and absorption of others.
Bibliography
See W. L. Bragg, The Universe of Light (1959); J. Rublowsky, Light (1964); H.
Haken, Light (1981).
spectrum,
arrangement or display of light or other form of radiation separated according to
wavelength, frequency, energy, or some other property. Beams of charged
particles can be separated into a spectrum according to mass in a mass
spectrometer (see mass spectrograph). Physicists often find it useful to separate
a beam of particles into a spectrum according to their energy.
Continuous and Line Spectra
Dispersion, the separation of visible light into a spectrum, may be accomplished
by means of a prism or a diffraction grating. Each different wavelength or
frequency of visible light corresponds to a different color, so that the spectrum
appears as a band of colors ranging from violet at the short-wavelength
(high-frequency) end of the spectrum through indigo, blue, green, yellow, and
orange, to red at the long-wavelength (low-frequency) end of the spectrum. In addition
to visible light, other types of electromagnetic radiation may be spread into a
spectrum according to frequency or wavelength.
The spectrum formed from white light contains all colors, or frequencies, and is
known as a continuous spectrum. Continuous spectra are produced by all
incandescent solids and liquids and by gases under high pressure. A gas under
low pressure does not produce a continuous spectrum but instead produces a line
spectrum, i.e., one composed of individual lines at specific frequencies
characteristic of the gas, rather than a continuous band of all frequencies. If the
gas is made incandescent by heat or an electric discharge, the resulting spectrum
is a bright-line, or emission, spectrum, consisting of a series of bright lines
against a dark background. A dark-line, or absorption, spectrum is the reverse of
a bright-line spectrum; it is produced when white light containing all frequencies
passes through a gas not hot enough to be incandescent. It consists of a series of
dark lines superimposed on a continuous spectrum, each line corresponding to a
frequency where a bright line would appear if the gas were incandescent. The
Fraunhofer lines appearing in the spectrum of the sun are an example of a dark-line spectrum; they are caused by the absorption of certain frequencies of light
by the cooler, outer layers of the solar atmosphere. Line spectra of either type
are useful in chemical analysis, since they reveal the presence of particular
elements. The instrument used for studying line spectra is the spectroscope.
The Quantum Explanation of Spectral Lines
The explanation for exact spectral lines for each substance was provided by the
quantum theory. In his 1913 model of the hydrogen atom Niels Bohr showed
that the observed series of lines could be explained by assuming that electrons
are restricted to atomic orbits in which their orbital angular momentum is an
integral multiple of the quantity h/2π, where h is Planck's constant. The integer
multiple (e.g., 1, 2, 3 …) of h/2π is usually called the quantum number and
represented by the symbol n.
When an electron changes from an orbit of higher energy (higher angular
momentum) to one of lower energy, a photon of light energy is emitted whose
frequency ν is related to the energy difference ΔE by the equation ν = ΔE/h. For
hydrogen, the frequencies of the spectral lines are given by
ν = cR(1/n_f² − 1/n_i²), where c is the speed of light, R is the Rydberg
constant, and n_f and n_i are the final and initial quantum numbers of the
electron orbits (n_i is always greater than n_f). The series of spectral lines
for which n_f = 1 is known as the Lyman series; that for n_f = 2 is the Balmer
series; that for n_f = 3 is the Paschen series; that for n_f = 4 is the
Brackett series; and that for n_f = 5 is the Pfund series. The Bohr
theory was not as successful in explaining the spectra of other substances, but
later developments of the quantum theory showed that all aspects of atomic and
molecular spectra can be explained quantitatively in terms of energy transitions
between different allowed quantum states.
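The hydrogen formula above can be evaluated directly. A small Python sketch for
the first lines of the Balmer series (n_f = 2), using the standard value of the
Rydberg constant, which is not quoted in this entry:

    # Hydrogen line frequencies: nu = c * R * (1/nf^2 - 1/ni^2).
    C = 2.99792458e8    # speed of light, m/s
    R = 1.0973731568e7  # Rydberg constant, 1/m

    def line_frequency_hz(n_final: int, n_initial: int) -> float:
        return C * R * (1.0 / n_final**2 - 1.0 / n_initial**2)

    # First three Balmer lines (the visible ones): ni = 3, 4, 5 down to nf = 2.
    for n_i in (3, 4, 5):
        nu = line_frequency_hz(2, n_i)
        print(f"ni = {n_i}: nu = {nu:.3e} Hz, lambda = {C / nu * 1e9:.0f} nm")
    # About 656 nm (red), 486 nm (blue-green), and 434 nm (violet).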
laser
[acronym for light amplification by stimulated emission of radiation], device for
the creation, amplification, and transmission of a narrow, intense beam of
coherent light. The laser is sometimes referred to as an optical maser.
Coherent Light and Its Emission in Lasers
The coherent light produced by a laser differs from ordinary light in that it is
made up of waves all of the same wavelength and all in phase (i.e., in step with
each other); ordinary light contains many different wavelengths and phase
relations. Both the laser and the maser find theoretical basis for their operation
in the quantum theory. Electromagnetic radiation (e.g., light or microwaves) is
emitted or absorbed by the atoms or molecules of a substance only at certain
characteristic frequencies. According to the quantum theory, the electromagnetic
energy is transmitted in discrete amounts (i.e., in units or packets) called quanta.
A quantum of electromagnetic energy is called a photon. The energy carried by
each photon is proportional to its frequency.
An atom or molecule of a substance usually does not emit energy; it is then said
to be in a low-energy or ground state. When an atom or molecule in the ground
state absorbs a photon, it is raised to a higher energy state, and is said to be
excited. The substance spontaneously returns to a lower energy state by emitting
a photon with a frequency proportional to the energy difference between the
excited state and the lower state. In the simplest case, the substance will return
directly to the ground state, emitting a single photon with the same frequency as
the absorbed photon.
In a laser or maser, the atoms or molecules are excited so that more of them are
at higher energy levels than are at lower energy levels, a condition known as an
inverted population. The process of adding energy to produce an inverted
population is called pumping. Once the atoms or molecules are in this excited
state, they readily emit radiation. If a photon whose frequency corresponds to
the energy difference between the excited state and the ground state strikes an
excited atom, the atom is stimulated to emit a second photon of the same
frequency, in phase with and in the same direction as the bombarding photon.
The bombarding photon and the emitted photon may then each strike other
excited atoms, stimulating further emissions of photons, all of the same
frequency and all in phase. This produces a sudden burst of coherent radiation as
all the atoms discharge in a rapid chain reaction. Often the laser is constructed
so that the emitted light is reflected between opposite ends of a resonant cavity;
an intense, highly focused light beam passes out through one end, which is only
partially reflecting. If the atoms are pumped back to an excited state as soon as
they are discharged, a steady beam of coherent light is produced.
Characteristics of Lasers
The physical size of a laser depends on the materials used for light emission, on
its power output, and on whether the light is emitted in pulses or as a steady
beam. Lasers have been developed that are not much larger than a common
flashlight. Various materials have been used as the active media in lasers. The
first laser, built in 1960, used a ruby rod with polished ends; the chromium
atoms embedded in the ruby's aluminum oxide crystal lattice were pumped to an
excited state by a flash tube that, wrapped around the rod, saturated the rod with
light of a frequency higher than that of the laser frequency (this method is called
optical pumping). This first ruby laser produced intense pulses of red light. In
many other optically pumped lasers, the basic element is a transparent,
nonconducting crystal such as yttrium aluminum garnet (YAG). Another type of
crystal laser uses a semiconductor diode as the element; pumping is done by
passing a current through the crystal.
In some lasers, a gas or liquid is used as the emitting medium. In one kind of gas
laser the inverted population is achieved through collisional pumping, the gas
molecules gaining energy from collisions with other molecules or with electrons
released through current discharge. Some gas lasers make use of molecular
dissociation to create the inverted population. In a free-electron laser a beam of
electrons is "wiggled" by a magnetic field; the oscillatory behavior of the
electrons induces them to emit laser radiation. Another device under
development is the X-ray laser, which presents special difficulties; most
materials, for instance, are poor reflectors of X rays.
Applications of Lasers
The light beam produced by most lasers is pencil-sized, and maintains its size
and direction over very large distances; this sharply focused beam of coherent
light is suitable for a wide variety of applications. Lasers have been used in
industry for cutting and boring metals and other materials, and for inspecting
optical equipment. In medicine, they have been used in surgical operations.
Lasers have been used in several kinds of scientific research. The field of
holography is based on the fact that actual wave-front patterns, captured in a
photographic image of an object illuminated with laser light, can be
reconstructed to produce a three-dimensional image of the object.
Lasers have opened a new field of scientific research, nonlinear optics, which is
concerned with the study of such phenomena as the frequency doubling of
coherent light by certain crystals. One important result of laser research is the
development of lasers that can be tuned to emit light over a range of frequencies,
instead of producing light of only a single frequency. Work is being done to
develop lasers for communication; in a manner similar to radio transmission, the
transmitted light beam is modulated with a signal and is received and
demodulated some distance away. Lasers have also been used in plasma physics
and chemistry.
Bibliography
See S. Leinwoll, Understanding Lasers and Masers (1965); F. T. Arecchi and E.
O. Schulz-Dubois, Laser Handbook (1973); J. Walker, Light and Its Uses (1980).
photon,
the particle composing light and other forms of electromagnetic
radiation, sometimes called light quantum. The photon has no charge and no
mass. About the beginning of the 20th cent., the classical theory that light is
emitted and absorbed by matter in a continuous stream came under criticism
because it led to incorrect predictions about several effects, notably the radiation
of light by incandescent bodies (see black body) and the photoelectric effect.
These effects can be explained only by assuming that the energy is transferred in
discrete packets, or photons, the energy of each photon being equal to the
frequency of the light multiplied by Planck's constant, h. Because the value of
Planck's constant is extremely small (6.62 × 10⁻²⁷ erg-sec), the discrete nature
of light energy is not evident in most optical phenomena. The light imparts
energy and momentum to a charged particle when one of the photons collides
with it, as is demonstrated by the Compton effect. See quantum theory.
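For a sense of scale, here is the energy of a single photon of visible light
(the wavelength is an illustrative choice; the SI value of Planck's constant is
used instead of the erg-based one above):

    # Photon energy E = h * c / lambda and momentum p = E / c.
    H = 6.62607015e-34  # Planck's constant, J*s
    C = 2.99792458e8    # speed of light, m/s

    wavelength_m = 550e-9  # green light, chosen as an example
    energy_j = H * C / wavelength_m
    momentum = energy_j / C
    print(f"E = {energy_j:.2e} J, p = {momentum:.2e} kg*m/s")
    # About 3.6e-19 J per photon -- so small that ordinary light seems continuous.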
black body,
in physics, an ideal black substance that absorbs all and reflects none of the
radiant energy falling on it. Lampblack, or powdered carbon, which reflects less
than 2% of the radiation falling on it, approximates an ideal black body. Since a
black body is a perfect absorber of radiant energy, by the laws of
thermodynamics it must also be a perfect emitter of radiation. The distribution
according to wavelength of the radiant energy of a black body radiator depends
on the absolute temperature of the black body and not on its internal nature or
structure. As the temperature increases, the wavelength at which the energy
emitted per second is a maximum decreases. This phenomenon can be seen in
the behavior of an ordinary incandescent object, which gives off its maximum
radiation at shorter and shorter wavelengths as it becomes hotter and hotter. First
it glows in long red wavelengths, then in yellow wavelengths, and finally in
short blue wavelengths. In order to explain the spectral distribution of black
body radiation, Max Planck developed the quantum theory in 1901. In
thermodynamics the principle of the black body is used to determine the nature
and amount of the energy emitted by a heated object. Black-body radiation has
served as an important source of confirmation for the big-bang theory, which
holds that the universe was born in a fiery explosion some 10 to 20 billion years
ago. According to the theory, the explosion should have left a remnant
black-body cosmic background radiation that is uniform in all directions and
has an equivalent temperature of only a few degrees Kelvin. Such a uniform
background, with a temperature of 2.7°K (see Kelvin temperature scale), was
discovered in 1964 by Arno A. Penzias and Robert L. Wilson, who were
awarded the Nobel Prize in Physics in 1978 for their work. Recent data gathered
by the NASA satellite Cosmic Microwave Background Explorer (COBE) has
revealed small temperature fluctuations in the radiation that are thought to be
related to the "seeds" of stars and galaxies.
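The shift of the emission peak toward shorter wavelengths as temperature rises
is quantified by Wien's displacement law, λ_max = b/T (the law and the constant
b are standard physics, not stated in this entry):

    # Wien's displacement law: the black-body peak wavelength scales as 1/T.
    B_WIEN = 2.897771955e-3  # Wien's displacement constant, m*K

    for temperature_k in (2.7, 3000.0, 6000.0):  # CMB, a red-hot body, sunlike
        peak_m = B_WIEN / temperature_k
        print(f"T = {temperature_k} K -> peak near {peak_m:.2e} m")
    # The 2.7 K background peaks near 1 mm (microwaves); hotter bodies peak
    # at progressively shorter wavelengths, as described above.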
photoelectric effect,
emission of electrons by substances, especially metals, when light falls on their
surfaces. The effect was discovered by H. R. Hertz in 1887. The failure of the
classical theory of electromagnetic radiation to explain it helped lead to the
development of the quantum theory. According to classical theory, when light,
thought to be composed of waves, strikes substances, the energy of the liberated
electrons ought to be proportional to the intensity of light. Experiments showed
that, although the electron current produced depends upon the intensity of the
light, the maximum energy of the electrons was not dependent on the intensity.
Moreover, classical theory predicted that the photoelectric current should not
depend on the frequency of the light and that there should be a time lag between
the reception of light on the surface and the emission of the electrons. Neither of
these predictions was borne out by experiment. In 1905, Albert Einstein
published a theory that successfully explained the photoelectric effect. It was
closely related to Planck's theory of black body radiation announced in 1900.
According to Einstein's theory, the incident light is composed of discrete
particles of energy, or quanta, called photons, the energy of each photon being
proportional to its frequency according to the equation E = hν, where E is the
energy, ν is the frequency, and h is Planck's constant. Each photoelectron
ejected is the result of the absorption of one photon. The maximum kinetic
energy, KE, that any photoelectron can possess is given by KE = hν − W, where
W is the work function, i.e., the energy required to free an electron from the
material, varying with the particular material. The effect has a number of
practical applications, most based on the photoelectric cell.
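A numerical sketch of KE = hν − W follows; the 2.3 eV work function is a rough,
assumed value for an alkali-like metal, and Planck's constant is quoted in
electron-volt units for convenience:

    # Maximum photoelectron energy: KE = h*nu - W (no emission below threshold).
    H_EV = 4.135667696e-15  # Planck's constant, eV*s

    def max_ke_ev(frequency_hz: float, work_function_ev: float) -> float:
        return max(0.0, H_EV * frequency_hz - work_function_ev)

    # Green light (~5.5e14 Hz) versus ultraviolet (~1.5e15 Hz) on a 2.3 eV metal:
    for nu in (5.5e14, 1.5e15):
        print(f"nu = {nu:.1e} Hz -> KE_max = {max_ke_ev(nu, 2.3):.2f} eV")
    # Raising the intensity increases the electron current, but only raising
    # the frequency increases this maximum energy.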
Compton effect
[for A. H. Compton], increase in the wavelengths of X rays and gamma rays
when they collide with and are scattered from loosely bound electrons in matter.
This effect provides strong verification of the quantum theory since the
theoretical explanation of the effect requires that one treat the X rays and
gamma rays as particles or photons (quanta of energy) rather than as waves. The
classical treatment of these rays as waves would predict no such effect.
According to the quantum theory a photon can transfer part of its energy and
linear momentum to a loosely bound electron in a collision. Since the energy
and magnitude of linear momentum of a photon are proportional to its
frequency, after the collision the photon has a lower frequency and thus a longer
wavelength. The increase in the wavelength does not depend upon the
wavelength of the incident rays or upon the target material. It depends only upon
the angle that is formed between the incident and scattered rays. A larger
scattering angle will yield a larger increase in wavelength. The effect was
discovered in 1923. It is used in the study of electrons in matter and in the
production of variable energy gamma-ray beams.
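The shift has a simple closed form, Δλ = (h/m_e·c)(1 − cos θ), where
h/m_e·c ≈ 2.43 × 10⁻¹² m is the Compton wavelength of the electron; this is the
standard quantum-theory result behind the behavior described above:

    # Compton shift: delta_lambda = (h / (m_e * c)) * (1 - cos(theta)).
    import math

    COMPTON_WAVELENGTH_M = 2.42631024e-12  # h / (m_e * c) for the electron

    def compton_shift_m(scattering_angle_deg: float) -> float:
        theta = math.radians(scattering_angle_deg)
        return COMPTON_WAVELENGTH_M * (1.0 - math.cos(theta))

    for angle_deg in (30, 90, 180):
        print(f"{angle_deg:3d} deg -> shift = {compton_shift_m(angle_deg):.2e} m")
    # The shift grows with scattering angle and never depends on the incident
    # wavelength or the target material, exactly as stated above.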
quantum theory,
modern physical theory concerned with the emission and absorption of energy
by matter and with the motion of material particles; the quantum theory and the
theory of relativity together form the theoretical basis of modern physics. Just as
the theory of relativity assumes importance in the special situation where very
large speeds are involved, so the quantum theory is necessary for the special
situation where very small quantities are involved, i.e., on the scale of
molecules, atoms, and elementary particles. Aspects of the quantum theory have
provoked vigorous philosophical debates concerning, for example, the
uncertainty principle and the statistical nature of all the predictions of the theory.
Relationship of Energy and Matter
According to the older theories of classical physics, energy is treated solely as a
continuous phenomenon, while matter is assumed to occupy a very specific
region of space and to move in a continuous manner. According to the quantum
theory, energy is held to be emitted and absorbed in tiny, discrete amounts. An
individual bundle or packet of energy, called a quantum (pl. quanta), thus
behaves in some situations much like particles of matter; particles are found to
exhibit certain wavelike properties when in motion and are no longer viewed as
localized in a given region but rather as spread out to some degree.
For example, the light or other radiation given off or absorbed by an atom has
only certain frequencies (or wavelengths), as can be seen from the line spectrum
associated with the chemical element represented by that atom. The quantum
theory shows that those frequencies correspond to definite energies of the light
quanta, or photons, and result from the fact that the electrons of the atom can
have only certain allowed energy values, or levels; when an electron changes
from one allowed level to another, a quantum of energy is emitted or absorbed
whose frequency is directly proportional to the energy difference between the
two levels.
Dual Nature of Waves and Particles
The restriction of the energy levels of the electrons is explained in terms of the
wavelike properties of their motions: electrons occupy only those orbits for
which their associated wave is a standing wave (i.e., the circumference of the
orbit is exactly equal to a whole number of wavelengths) and thus can have only
those energies that correspond to such orbits. Moreover, the electrons are no
longer thought of as being at a particular point in the orbit but rather as being
spread out over the entire orbit. Just as the results of relativity approximate those
of Newtonian physics when ordinary speeds are involved, the results of the
quantum theory agree with those of classical physics when very large "quantum
numbers" are involved, i.e., on the ordinary large scale of events; this agreement
in the classical limit is required by the correspondence principle of Niels Bohr.
The quantum theory thus proposes a dual nature for both waves and particles,
one aspect predominating in some situations, the other predominating in other
situations.
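The standing-wave picture can be checked numerically. Using the de Broglie
relation λ = h/mv (introduced below under quantum mechanics), the circumference
of the nth Bohr orbit of hydrogen holds exactly n electron wavelengths; the
orbital radius and speed used below are the standard Bohr-model values, not
figures from this entry:

    # Bohr orbits as standing waves: circumference = n * (de Broglie wavelength).
    import math

    H = 6.62607015e-34      # Planck's constant, J*s
    M_E = 9.1093837015e-31  # electron mass, kg
    A0 = 5.29177210903e-11  # Bohr radius, m
    V1 = 2.18769126364e6    # electron speed in the n = 1 orbit, m/s

    for n in (1, 2, 3):
        circumference = 2 * math.pi * (A0 * n**2)  # radius grows as n^2
        de_broglie = H / (M_E * (V1 / n))          # speed falls as 1/n
        print(f"n = {n}: circumference / wavelength = {circumference / de_broglie:.3f}")
    # Prints 1.000, 2.000, 3.000 -- each allowed orbit fits a whole number of waves.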
Evolution of Quantum Theory: Early Developments
While the theory of relativity was largely the work of one man, Albert Einstein,
the quantum theory was developed principally over a period of thirty years
through the efforts of many scientists. The first contribution was the explanation
of black body radiation in 1900 by Max Planck, who proposed that the energies
of any harmonic oscillator (see harmonic motion), such as the atoms of a black
body radiator, are restricted to certain values, each of which is an integral
(whole number) multiple of a basic, minimum value. The energy E of this basic
quantum is directly proportional to the frequency ν of the oscillator, or E=hν,
where h is a constant, now called Planck's constant, having the value
6.63 × 10⁻³⁴ joule-second. In 1905, Einstein proposed that the radiation itself is
also quantized according to this same formula, and he used the new theory to
explain the photoelectric effect. Following the discovery of the nuclear atom by
Rutherford (1911), Bohr used the quantum theory in 1913 to explain both
atomic structure and atomic spectra, showing the connection between the
electrons' energy levels and the frequencies of light given off and absorbed.
Quantum Mechanics and Later Developments
Quantum mechanics, the final mathematical formulation of the quantum theory,
was developed during the 1920s. In 1924, Louis de Broglie proposed that not
only do light waves sometimes exhibit particlelike properties, as in the
photoelectric effect and atomic spectra, but particles may also exhibit wavelike
properties. This hypothesis was confirmed experimentally in 1927 by C. J.
Davisson and L. H. Germer, who observed diffraction of a beam of electrons
analogous to the diffraction of a beam of light. Two different formulations of
quantum mechanics were presented following de Broglie's suggestion. The wave
mechanics of Erwin Schrödinger (1926) involves the use of a mathematical
entity, the wave function, which is related to the probability of finding a particle
at a given point in space. The matrix mechanics of Werner Heisenberg (1925)
makes no mention of wave functions or similar concepts but was shown to be
mathematically equivalent to Schrödinger's theory.
Quantum mechanics was combined with the theory of relativity in the
formulation of P. A. M. Dirac (1928), which, in addition, predicted the existence
of antiparticles. A particularly important discovery of the quantum theory is the
uncertainty principle, enunciated by Heisenberg in 1927, which places an
absolute theoretical limit on the accuracy of certain measurements; as a result,
the assumption by earlier scientists that the physical state of a system could be
measured exactly and used to predict future states had to be abandoned. Other
developments of the theory include quantum statistics, presented in one form by
Einstein and S. N. Bose (the Bose-Einstein statistics) and in another by Dirac
and Enrico Fermi (the Fermi-Dirac statistics); quantum electrodynamics,
concerned with interactions between charged particles and electromagnetic
fields; its generalization, quantum field theory; and quantum electronics.
Bibliography
See W. Heisenberg, The Physical Principles of the Quantum Theory (1930) and
Physics and Philosophy (1958); G. Gamow, Thirty Years that Shook Physics
(1966); J. Gribbin, In Search of Schrödinger's Cat (1984).
Bose-Einstein statistics,
class of statistics that applies to elementary particles called bosons, which
include the photon, pion, and the W and Z particles. Bosons have integral values
of the quantum mechanical property called spin and are "gregarious" in the sense
that an unlimited number of bosons can be placed in the same state. All of the
particles that mediate the fundamental forces of nature are bosons. See
elementary particles; Fermi-Dirac statistics; statistical mechanics.
holography,
method of reproducing a three-dimensional image of an object by
means of light wave patterns recorded on a photographic plate or film.
Holography is sometimes called lensless photography because no lenses are
used to form the image. The plate or film with the recorded wave patterns is
called a hologram. The light used to make a hologram must be coherent, i.e., of a
single wavelength or frequency and with all the waves in phase. (A coherent
beam of light can be produced by a laser.) Before reaching the object, the beam
is split into two parts; one (the reference beam) is recorded directly on the
photographic plate and the other is reflected from the object to be photographed
and is then recorded. Since the two parts of the beam arriving at the
photographic plate have travelled by different paths and are no longer
necessarily coherent, they create an interference pattern, exposing the plate at
points where they arrive in phase and leaving the plate unexposed where they
arrive out of phase (nullifying each other). The pattern on the plate is a record of
the waves as they are reflected from the object, recorded with the aid of the
reference beam. When this hologram is later illuminated with coherent light of
the same frequency as that used to form it, a three-dimensional image of the
object is produced; it can even be photographed from various angles. This
technique of image formation is known as wave front reconstruction. Dennis
Gabor, the British scientist who developed the theory of wave front
reconstruction in 1948, can be viewed as the father of theoretical holography
(the wave theory of light itself was first suggested by Christian Huygens in
the late 17th cent.). However, no adequate source of coherent
light was available until the invention of the laser in 1960. Holography using
laser light was developed during the early 1960s and has had several
applications. In research, holography has been combined with microscopy to
extend studies of very small objects; it has also been used to study the
instantaneous properties of large collections of atmospheric particles. In
industry, holography has been applied to stress and vibrational analysis. Color
holograms have been developed, formed using three separate exposures with
laser beams of each of the primary colors (see color). Another new technique is
acoustical holography, in which the object is irradiated with a coherent beam of
ultrasonic waves (see sound; ultrasonics); the resulting interference pattern is
recorded by means of microphones to form a hologram, and the photographic
plate thus produced is viewed by means of laser light to give a visible three-dimensional image.
See G. W. Stroke, An Introduction to Coherent Optics and Holography (2d ed.
1969); T. Okoshi, Three-Dimensional Imaging Techniques (1976); N.
Abramson, The Making and Evaluation of Holograms (1981); J. E. Kasper and
S. A. Feller, The Complete Book of Holograms (1987).
interference,
in physics, the effect produced by the combination or superposition of two
systems of waves, in which these waves reinforce, neutralize, or in other ways
interfere with each other. Interference is observed in both sound waves and
electromagnetic waves, especially those of visible light and radio.
Interference in Sound Waves
When two sound waves occur at the same time and are in the same phase, i.e.,
when the condensations of the two coincide and hence their rarefactions also,
the waves reinforce each other and the sound becomes louder. This is known as
constructive interference. On the other hand, two sound waves occurring
simultaneously and having the same intensity neutralize each other if the
rarefactions of the one coincide with the condensations of the other, i.e., if they
are of opposite phase. This canceling is known as destructive interference. In
this case, the result is silence.
Alternate reinforcement and neutralization (or weakening) take place when two
sound waves differing slightly in frequency are superimposed. The audible
result is a series of pulsations or, as these pulsations are commonly called, beats,
caused by the alternate coincidence of first a condensation of the one wave with
a condensation of the other and then a condensation with a rarefaction. The beat
frequency is equal to the difference between the frequencies of the interfering
sound waves.
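A short numerical check (the two pitches are arbitrary close values): adding
two sine waves of frequencies f1 and f2 gives a tone whose loudness envelope,
|2 cos(π(f1 − f2)t)|, pulses at the difference frequency:

    # Beats: two close frequencies produce pulses at |f1 - f2| per second.
    import math

    f1, f2 = 440.0, 444.0  # Hz; the beat frequency should be 4 Hz
    print(f"beat frequency: {abs(f1 - f2):.0f} Hz")

    # The loudness envelope of sin(2*pi*f1*t) + sin(2*pi*f2*t):
    for t in (0.0, 0.125, 0.25):  # with these pitches it vanishes every 1/8 s
        envelope = abs(2 * math.cos(math.pi * (f1 - f2) * t))
        print(f"t = {t:.3f} s -> envelope = {envelope:.2f}")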
Interference in Light Waves
Light waves reinforce or neutralize each other in very much the same way as
sound waves. If, for example, two light waves each of one color
(monochromatic waves), of the same amplitude, and of the same frequency are
combined, the interference they exhibit is characterized by so-called fringes, a
series of light bands (resulting from reinforcement) alternating with dark bands
(caused by neutralization). Such a pattern is formed either by light passing
through two narrow slits and being diffracted (see diffraction), or by light
passing through a single slit. In the case of two slits, each slit acts as a light
source, producing two sets of waves that may combine or cancel depending
upon their phase relationship. In the case of a single slit, each point within the
slit acts as a light source. In all cases, for light waves to demonstrate such
behavior, they must emanate from the same source; light from distinct sources
has too many random differences to permit interference patterns.
The relative positions of light and dark lines depend upon the wavelength of the
light, among other factors. Thus, if white light, which is made up of all colors, is
used instead of monochromatic light, bands of color are formed because each
color, or wavelength, is reinforced at a different position. This fact is utilized in
the diffraction grating, which forms a spectrum by diffraction and interference
of a beam of light incident on it. Newton's rings also are the result of the
interference of light. They are formed concentrically around the point of contact
between a glass plate and a slightly convex lens set upon it or between two
lenses pressed together; they consist of bright rings separated by dark ones when
monochromatic light is used, or of alternate spectrum-colored and black rings
when white light is used. Various natural phenomena are the result of
interference, e.g., the colors appearing in soap bubbles and the iridescence of
mother-of-pearl and other substances.
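For the two-slit arrangement, the bright bands on a distant screen are spaced
by approximately λL/d, where d is the slit separation and L the screen distance
(a standard small-angle result; the dimensions below are assumed for
illustration). This shows directly why each color is reinforced at a different
position:

    # Double-slit fringe spacing, small-angle approximation: dy = lambda * L / d.
    SLIT_SEPARATION_M = 0.25e-3  # d, an assumed value
    SCREEN_DISTANCE_M = 1.0      # L, an assumed value

    def fringe_spacing_mm(wavelength_m: float) -> float:
        return wavelength_m * SCREEN_DISTANCE_M / SLIT_SEPARATION_M * 1e3

    for name, lam_m in (("violet", 430e-9), ("green", 550e-9), ("red", 680e-9)):
        print(f"{name}: bright fringes every {fringe_spacing_mm(lam_m):.2f} mm")
    # Longer wavelengths spread farther apart, so white light yields colored bands.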
Interference as a Scientific Tool
The experiments of Thomas Young first illustrated interference and definitely
pointed the way to a wave theory of light. A. J. Fresnel's experiments clearly
demonstrated that the interference phenomena could be explained adequately
only upon the basis of a wave theory. The thickness of a very thin film such as
the soap-bubble wall can be measured by an instrument called the
interferometer. When the wavelength of the light is known, the interferometer
indicates the thickness of the film by the interference patterns it forms. The
reverse process, i.e., the measurement of the length of an unknown light wave,
can also be carried out by the interferometer.
The Michelson interferometer used in the Michelson-Morley experiment of
1887 to compare the velocity of light along different directions had a half-silvered mirror to split an
incident beam of light into two parts at right angles to one another. The two
halves of the beam were then reflected off mirrors and rejoined. Any difference
in the speed of light along the paths could be detected by the interference
pattern. The failure of the experiment to detect any such difference threw doubt
on the existence of the ether and thus paved the way for the special theory of
relativity.
Another type of interferometer devised by Michelson has been applied in
measuring the diameters of certain stars. The radio interferometer consists of
two or more radio telescopes separated by fairly large distances (necessary
because radio waves are much longer than light waves) and is used to pinpoint
and study various celestial sources of radiation in the radio range (see radio
astronomy).
polarization of light,
orientation of the vibration pattern of light waves in a single plane.
Characteristics of Polarization
Polarization is a phenomenon peculiar to transverse waves, i.e., waves that
vibrate in a direction perpendicular to their direction of propagation. Light is a
transverse electromagnetic wave (see electromagnetic radiation). Thus a light
wave traveling forward can vibrate up and down (in the vertical plane), from
side to side (in the horizontal plane), or in an intermediate direction. Ordinarily a
ray of light consists of a mixture of waves vibrating in all the directions
perpendicular to its line of propagation. If for some reason the vibration remains
constant in direction, the light is said to be polarized.
It is found, for example, that reflected light is always polarized to some extent.
Light can also be polarized by double refraction. Any transparent substance has
the property of refracting or bending a ray of light that enters it from outside.
Certain crystals, however, such as calcite (Iceland spar), have the property of
refracting unpolarized incident light in two different directions, thus splitting an
incident ray into two rays. It is found that the two refracted rays (the ordinary
ray and the extraordinary ray) are both polarized and that their directions of
polarization are perpendicular to each other. This occurs because the speed of
the light in the crystal (and hence the angle at which the light is refracted) varies with
the direction of polarization. Unpolarized incident light can be regarded as a
mixture of two different polarization states separated into two components by
the crystal. (In most substances the speed of light is the same for all directions of
polarization, and no separation occurs.)
Polarization Techniques
Unpolarized light can be converted into a single polarized beam by means of the
Nicol prism, a device that separates incident light into two rays by double
refraction; the unwanted ray is removed from the beam by reflection. Polarized
light can also be produced by using a tourmaline crystal. Tourmaline (a double-refracting substance) removes one of the polarized rays by absorption. Another
commonly used polarizer consists of a sheet of transparent material in which are
embedded many tiny polarizing crystals.
Any system by which light is polarized in a particular direction is transparent
only to light polarized in that direction. Thus, when originally unpolarized light
passes successively through two polarizers whose directions of polarization are
mutually perpendicular, the light is completely blocked; light transmitted by the
first polarizer is polarized and is stopped by the second. If the second polarizer
is rotated so that the directions of polarization are no longer perpendicular, the
amount of light transmitted gradually increases, becoming brightest when the
polarizers are exactly aligned. This property is used in various light filter
combinations.
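Quantitatively, the transmitted fraction follows Malus's law, I = I₀ cos²θ, where θ is the angle between the two transmission directions (a standard optics result, not stated explicitly above). A minimal Python sketch:

    import math

    def transmitted_intensity(i0, theta_degrees):
        # Malus's law: fraction of polarized light passed by a second
        # polarizer at angle theta to the first one's transmission axis
        theta = math.radians(theta_degrees)
        return i0 * math.cos(theta) ** 2

    # crossed polarizers block everything; aligned ones pass everything
    for angle in (0, 30, 45, 60, 90):
        print(angle, transmitted_intensity(1.0, angle))  # 1.0, 0.75, 0.5, 0.25, 0.0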
A number of substances can polarize light in ways other than a single plane, producing, for example, what are called circular polarization or elliptical polarization. Organic substances that affect polarized light that passes through their
solution are called optically active. In certain acids and other solutions the plane
of polarized light is rotated to either the right or the left; their activity is usually
indicated by the prefix dextro- or d- if the rotation is to the right and by levo-,
laevo-, or l- if the rotation is to the left.
The instrument used to determine in which direction this optical rotation occurs
is called a polariscope. A very simple form consists essentially of two crystals of
some polarizing substance such as tourmaline. The solution to be tested is
placed between them. Light is then directed through the first crystal, or
polarizer, and is plane-polarized. After passing through the solution its plane is
rotated; the direction and the degree of rotation are indicated by the position in
which the second crystal must be placed to permit passage of the light that has
gone through the solution. The polarimeter is a polariscope that measures the
amount of rotation; when used for sugar solutions it is commonly called a
saccharimeter.
photometry
photometry
branch of physics dealing with the measurement of the intensity of a
source of light, such as an electric lamp, and with the intensity of light such a
source may cast on a surface area.
Photometric Units of Measurement
The intensity of electric lights is commonly given as so many candlepower, i.e.,
so many times the intensity of a standard candle. Since an ordinary candle is not
a sufficiently accurate standard, the unit of intensity has been defined in various
ways. It was originally defined as the luminous intensity in a horizontal
direction of a candle of specified size burning at a specified rate. Later the
international candle was taken as a standard; not actually a candle, it is defined
in terms of the luminous intensity of a specified array of carbon-filament lamps.
In 1948 a new candle, about 1.9% smaller than the former unit, was adopted. It
is defined as 1/60 of the intensity of one square centimeter of a black body
radiator at the temperature at which platinum solidifies (2,046°K). This unit is
sometimes called the new international candle; the official name given to it by
the International Commission on Illumination (CIE) is candela.
Other quantities of importance in photometry include luminous flux, surface
brightness (for a diffuse rather than point source), and surface illumination.
Luminous flux is the radiation given off in the visible range of wavelengths by a
radiating source. It is measured in lumens, one lumen being equal to the
luminous flux per unit solid angle (steradian) emitted by a unit candle. Surface
brightness is measured in lamberts, one lambert being equal to an average
intensity of 1/π candle per square centimeter of a radiating surface. The
intensity of illumination, also called illuminance, is a measure of the degree to
which a surface is illuminated and is thus distinguished from the intensity of the
light source. Illumination is given in footcandles, i.e., so many times the
illumination given by a standard candle at 1 ft. Another unit of illumination is
the lux, one lux being equal to one lumen incident per square meter of
illuminated surface. One lux equals 0.0929 footcandle.
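These units interconvert mechanically. A short Python sketch, assuming an isotropic point source (so total flux is 4π lumens per candela) and using the conversion factor quoted above; the function names are our own:

    import math

    def flux_lumens(intensity_cd):
        # total luminous flux of an isotropic source
        return 4 * math.pi * intensity_cd

    def illuminance_lux(intensity_cd, distance_m):
        # inverse-square law for a point source: lux = cd / m^2
        return intensity_cd / distance_m ** 2

    source = 100.0                                # a 100-candela lamp
    print(flux_lumens(source))                    # about 1256.6 lumens
    print(illuminance_lux(source, 2.0))           # 25 lux at 2 m
    print(illuminance_lux(source, 2.0) * 0.0929)  # about 2.3 footcandles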
Photometric Instruments
Instruments used for the measurement of light intensity, called photometers,
make possible a comparison between an unknown intensity and a standard or
known intensity. They are based on the inverse-square law, which states that as
a light source is moved away from a surface it illuminates, the illumination
decreases in an amount inversely proportional to the square of the distance. Thus
the illumination of a surface by a source of light 2 ft away is 1/4 of the
illumination at 1 ft from the source. Conversely, for two light sources, one at 1 ft
from a surface and the other at 2 ft, to give the same illumination to the surface,
it would be necessary for the source at 2 ft to have an intensity 4 times that of
the source at 1 ft.
A photometer measures relative rather than absolute intensity. The Bunsen
photometer (named for R. W. Bunsen) determines the light intensity of a source
by comparison with a known, or standard, intensity. The two light sources (one
of known, one of unknown intensity) are placed on opposite sides of the surface
(a disk of paper) to be illuminated. In the center of this surface is a grease spot
that, when illuminated equally from both sides, will appear neither lighter nor
darker than the paper but will become almost invisible. Using the inverse-square
law, the intensity of the unknown light source can be easily determined when the
relative distances at which the two sources produce equal illumination are
known. The Rumford photometer (named for Count Rumford), or shadow
photometer, compares intensities of light sources by the density of the shadows
produced. In the Lummer-Brodhun photometer, an opaque screen is placed
between the two sources, and a comparison is made possible by an ingenious
arrangement of prisms.
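The grease-spot balance reduces to one line of arithmetic: at equal illumination, the two intensities are in the ratio of the squared distances. A sketch under that assumption:

    def unknown_intensity(standard_cd, d_standard, d_unknown):
        # Bunsen balance: equal illumination implies I1/d1^2 == I2/d2^2,
        # so the unknown intensity scales as the square of its distance
        return standard_cd * (d_unknown / d_standard) ** 2

    # a 10-candela standard at 1 ft balancing an unknown lamp at 2 ft
    print(unknown_intensity(10.0, 1.0, 2.0))  # 40.0, i.e. 4 times the standard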
force
force,
commonly, a "push or "pull, more properly defined in physics as a quantity that
changes the motion, size, or shape of a body. Force is a vector quantity, having
both magnitude and direction. The magnitude of a force is measured in units
such as the pound, dyne, and newton, depending upon the system of
measurement being used. An unbalanced force acting on a body free to move
will change the motion of the body. The quantity of motion of a body is
measured by its momentum, the product of its mass and its velocity. According
to Newton's second law of motion (see motion), the change in momentum is
directly proportional to the applied force. Since mass is constant at ordinary
velocities, the result of the force is a change in velocity, or an acceleration,
which may be a change either in the speed or in the direction of the velocity.
Two or more forces acting on a body in different directions may balance,
producing a state of equilibrium. For example, the downward force of gravity
(see gravitation) on a person weighing 200 lb (91 kg) when standing on the
ground is balanced by an equivalent upward force exerted by the earth on the
person's feet. If the person were to fall into a deep hole, then the upward force
would no longer be acting and the person would be accelerated downward by
the unbalanced force of gravity. If a body is not completely rigid, then a force
acting on it may change its size or shape. Scientists study the strength of
materials to anticipate how a given material may behave under the influence of
various types of force.
There are four basic types of force in nature. Two of these are easily observed;
the other two are detectable only at the atomic level. Although the weakest of
the four forces is the gravitational force, it is the most easily observed because it
affects all matter, is always attractive and because its range is theoretically
infinite, i.e., the force decreases with distance but remains measurable at the
largest separations. Thus, a very large mass, such as the sun, can exert over a
distance of many millions of miles a force sufficient to keep a planet in orbit.
The electromagnetic force, which can be observed between electric charges, is
stronger than the gravitational force and also has infinite range. Both electric
and magnetic forces are ultimately based on the electrical properties of matter;
they are propagated together through space as an electromagnetic field of force
(see electromagnetic radiation). At the atomic level, two additional types of
force exist, both having extremely short range. The strong nuclear force, or
strong interaction, is associated with certain reactions between elementary
particles and is responsible for holding the atomic nucleus together. The weak
nuclear force, or weak interaction, is associated with beta particle emission and
particle decay; it is weaker than the electromagnetic force but stronger than the
gravitational force.
acceleration
acceleration,
change in the velocity of a body with respect to time. Since velocity is a vector
quantity, involving both magnitude and direction, acceleration is also a vector.
In order to produce an acceleration, a force must be applied to the body. The
magnitude of the force F must be directly proportional to both the mass of the
body m and the desired acceleration a, according to Newton's second law of
motion, F=ma. The exact nature of the acceleration produced depends on the
relative directions of the original velocity and the force. A force acting in the
same direction as the velocity changes only the speed of the body. An
appropriate force acting always at right angles to the velocity changes the
direction of the velocity but not the speed. An example of such an accelerating
force is the gravitational force exerted by a planet on a satellite moving in a
circular orbit. A force may also act in the opposite direction from the original
velocity. In this case the speed of the body is decreased. Such an acceleration is
often referred to as a deceleration. If the acceleration is constant, as for a body
falling near the earth, the following formulas may be used to compute the
acceleration a of a body from knowledge of the elapsed time t, the distance s
through which the body moves in that time, the initial velocity vi, and the final
velocity vf:
a = (vf² − vi²)/2s
a = 2(s − vi·t)/t²
a = (vf − vi)/t
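A quick numerical check that the three formulas agree, for a body falling from rest with a ≈ 9.8 m/s² (the values below are illustrative, not from the entry):

    t = 2.0      # elapsed time, s
    vi = 0.0     # initial velocity, m/s
    vf = 19.6    # final velocity, m/s
    s = 19.6     # distance fallen, m (average speed 9.8 m/s over 2 s)

    print((vf**2 - vi**2) / (2 * s))  # 9.8
    print(2 * (s - vi * t) / t**2)    # 9.8
    print((vf - vi) / t)              # 9.8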
nucleus
nucleus,
in physics, the extremely dense central core of an atom.
The Nature of the NucleusComposition
Atomic nuclei are composed of two types of particles, protons and neutrons,
which are collectively known as nucleons. A proton is simply the nucleus of an
ordinary hydrogen atom, the lightest atom, and has a unit positive charge. A
neutron is an uncharged particle of about the same mass as the proton. The
number of protons in a given nucleus is the atomic number of that nucleus and
determines which chemical element the nucleus will constitute when surrounded
by electrons.
The total number of protons and neutrons together in a nucleus is the atomic
mass number of the nucleus. Two nuclei may have the same atomic number but
different mass numbers, thus constituting different forms, or isotopes, of the
same element. The mass number of a given isotope is the nearest whole number
to the atomic weight of that isotope and is approximately equal to the atomic
weight (in the case of carbon-12, exactly equal).
Size and Density
The nucleus occupies only a tiny fraction of the volume of an atom (the radius
of the nucleus being some 10,000 to 100,000 times smaller than the radius of the
atom as a whole), but it contains almost all the mass. An idea of the extreme
density of the nucleus is revealed by a simple calculation. The radius of the
nucleus of hydrogen is on the order of 10⁻¹³ cm, so that its volume is on the order of 10⁻³⁹ cm³ (cubic centimeter); its mass is about 10⁻²⁴ g (gram). Combining these to estimate the density, we have 10⁻²⁴ g / 10⁻³⁹ cm³ = 10¹⁵ g/cm³, or about a thousand trillion times the density of matter at ordinary scales (the density of water is 1 g/cm³).
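The same estimate in a few lines of Python; the inputs are the order-of-magnitude values used above:

    import math

    radius_cm = 1e-13                               # hydrogen nucleus radius
    mass_g = 1.7e-24                                # proton mass, to two figures
    volume_cm3 = (4 / 3) * math.pi * radius_cm**3   # about 4.2e-39 cm^3
    print(mass_g / volume_cm3)                      # about 4e14 g/cm^3, order 10^15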
Mass Defect, Binding Energy, and Nuclear Reactions
When nuclear masses are measured, the mass is always found to be less than the
sum of the masses of the individual nucleons bound in the nucleus. The
difference between the nuclear mass and the sum of the individual masses is
known as the mass defect and is due to the fact that some of the mass must be
converted to energy in order to make the nucleus stable. This nuclear binding
energy is related to the mass defect by the famous formula from relativity, E =
mc2, where E is energy, m is mass, and c is the speed of light. The binding
energy of a nucleus increases with increasing mass number.
A more interesting property of a nucleus is the binding energy per nucleon,
found by dividing the binding energy by the mass number. The average binding
energy per nucleon is observed to increase rapidly with increasing mass number
up to a mass number of about 60, then to decrease rather slowly with higher
mass numbers. Thus, nuclei with mass numbers around 60 are the most stable,
and those of very small or very large mass numbers are the least stable.
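A worked example for helium-4, using standard particle masses in atomic mass units and the conversion 1 u = 931.494 MeV (these values are standard constants, not from the entry):

    m_p, m_n = 1.007276, 1.008665   # proton, neutron masses in u
    m_he4 = 4.001506                # mass of the helium-4 nucleus in u

    mass_defect = 2 * m_p + 2 * m_n - m_he4   # about 0.0304 u
    binding_mev = mass_defect * 931.494       # about 28.3 MeV
    print(binding_mev / 4)                    # about 7.07 MeV per nucleon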
Two important phenomena result from this property of nuclei. Nuclear fission is
the spontaneous splitting of a nucleus of large mass number into two nearly
equal nuclei whose mass numbers are in the most stable range. Nuclear fusion,
on the other hand, is the combining of two light nuclei to form a heavier single
nucleus, again with an increase in the average binding energy per nucleon. In
both cases, the change to a more stable final state is accompanied by the release
of a large amount of energy per unit mass of the reacting materials as compared
to the energy released in chemical reactions (see nuclear energy).
Models of the Nucleus
Several models of the nucleus have evolved that fit certain aspects of nuclear
behavior, but no single model has successfully described all aspects. One model
is based on the fact that certain properties of a nucleus are similar to those of a
drop of incompressible liquid. The liquid-drop model has been particularly
successful in explaining details of the fission process and in evolving a formula
for the mass of a particular nucleus as a function of its atomic number and mass
number, the so-called semiempirical mass formula.
Another model is the Fermi gas model, which treats the nucleons as if they were
particles of a gas restricted by the Pauli exclusion principle, which allows only
two particles of opposite spin to occupy a particular energy level described by
the quantum theory. These particle pairs will fill the lowest energy levels first,
then successively higher ones, so that the "gas" is one of minimum energy. There
are actually two independent Fermi gases, one of protons and one of neutrons.
The tendency of nucleons to occupy the lowest possible energy level explains
why there is a tendency for the numbers of protons and neutrons to be nearly
equal in lighter nuclei. In heavier nuclei the effect of electrostatic repulsion
among the larger number of charges from the protons raises the energy of the
protons, with the result that there are more neutrons than protons (for uranium-235, for example, there are 143 neutrons and only 92 protons). The pairing of
nucleons in energy levels also helps to explain the tendency of nuclei to have
even numbers of both protons and neutrons.
Neither the liquid-drop model nor the Fermi gas model, however, can explain
the exceptional stability of nuclei having certain values for either the number of
protons or the number of neutrons, or both. These so-called magic numbers are
2, 8, 20, 28, 50, 82, and 126. Because of the similarity between this phenomenon
and the stability of the noble gases, which have certain numbers of electrons that
are bound in closed "shells," a shell model was suggested for the nucleus. There
are major differences, however, between the electrons in an atom and the
nucleons in a nucleus. First, the nucleus provides a force center for the electrons
of an atom, while the nucleus itself has no single force center. Second, there are
two different types of nucleons. Third, the assumption of independent particle
motion made in the case of electrons is not as easily made for nucleons. The
liquid-drop model is in fact based on the assumption of strong forces between
the nucleons that considerably constrain their motion. However, these
difficulties were solved and a good explanation of the magic numbers achieved
on the basis of the shell model, which included the assumption of strong
coupling between the spin angular momentum of a nucleon and its orbital
angular momentum. Various attempts have been made, with partial success, to
construct a model incorporating the best features of both the liquid-drop model
and the shell model.
Scientific Notation for the Nucleus and Nuclear Reactions
A nucleus may be represented conveniently by the chemical symbol for the
element together with a subscript and superscript for the atomic number and
mass number. (The subscript is often omitted, since the element symbol fixes the
atomic number.) The nucleus of ordinary hydrogen, i.e., the proton, is
represented by ₁H¹, an alpha particle (a helium nucleus) is ₂He⁴, the most common isotope of chlorine is ₁₇Cl³⁵, and the uranium isotope used in the atomic bomb is ₉₂U²³⁵.
Nuclear reactions involving changes in atomic number or mass number can be
expressed easily using this notation. For example, when Ernest Rutherford
produced the first artificial nuclear reaction (1919), it involved bombarding a
nitrogen nucleus with alpha particles and resulted in an isotope of oxygen with
the release of a proton: ₂He⁴ + ₇N¹⁴ → ₈O¹⁷ + ₁H¹. Note that the total of the
atomic numbers on the left is equal to the total on the right (i.e., 2+7=8+1), and
similarly for the mass numbers (4+14=17+1).
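This bookkeeping is easy to mechanize; a sketch representing each nucleus as an (atomic number, mass number) pair:

    def balanced(reactants, products):
        # conservation of atomic number Z and mass number A
        totals = lambda side: (sum(z for z, a in side), sum(a for z, a in side))
        return totals(reactants) == totals(products)

    # Rutherford's 1919 reaction: He-4 + N-14 -> O-17 + H-1
    print(balanced([(2, 4), (7, 14)], [(8, 17), (1, 1)]))  # True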
Scientific Investigations of the Nucleus
Following the discovery of radioactivity by A. H. Becquerel in 1896, Ernest
Rutherford identified two types of radiation given off by natural radioactive
substances and named them alpha and beta; a third, gamma, was later identified.
In 1911 he bombarded a thin target of gold foil with alpha rays (subsequently
identified as helium nuclei) and found that, although most of the alpha particles
passed directly through the foil, a few were deflected by large amounts. By a
quantitative analysis of his experimental results, he was able to propose the
existence of the nucleus and estimate its size and charge.
After the discovery of the neutron in 1932, physicists turned their attention to
the understanding of the strong interactions, or strong nuclear force, that bind
protons and neutrons together in nuclei. This force must be great enough to
overcome the considerable repulsive force existing between several protons
because of their electrical charge. It must exist between nucleons without regard
to their charge, since it acts equally on protons and neutrons, and it must not
extend very far away from the nucleons (i.e., it must be a short-range force),
since it has negligible effect on protons or neutrons outside the nucleus.
In 1935 Hideki Yukawa proposed a theory that this nuclear "glue" was produced
by the exchange of a particle between nucleons, just as the electromagnetic force
is produced by the exchange of a photon between charged particles. The range
of a force is dependent on the mass of the particle carrying the force; the greater
the mass of the particle, the shorter the range of the force. The range of the
electromagnetic force is infinite because the mass of the photon is zero. From
the known range of the nuclear force, Yukawa estimated the mass of the
hypothetical carrier of the nuclear force to be about 200 times that of the
electron. Given the name meson because its mass is between that of the electron
and those of the nucleons, this particle was finally observed in 1947 and is now
called the pi meson, or pion, to distinguish it from other mesons that have been
discovered (see elementary particles).
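Yukawa's estimate can be reproduced from the range-mass relation m ≈ ħ/(Rc), which follows from the uncertainty principle. The range value below is our assumption (the entry gives none); ħc = 197.3 MeV·fm is a standard constant:

    hbar_c = 197.3          # MeV * fm
    range_fm = 1.4          # assumed range of the nuclear force
    electron_mass = 0.511   # MeV

    carrier_mass = hbar_c / range_fm      # about 141 MeV
    print(carrier_mass / electron_mass)   # roughly 280 electron masses

A range nearer 2 fm drops the estimate to about 200 electron masses, the figure quoted above; the choice of range drives the result.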
Both the proton and the neutron are surrounded by a cloud of pions given off
and reabsorbed again within an incredibly short interval of time. Certain other
mesons are assumed to be created and destroyed in this way as well, all such
particles being termed "virtual" because they exist in violation of the law of
conservation of energy (see conservation laws) for a very short span of time
allowed by the uncertainty principle. It is now known, however, that at a more
fundamental level the actual carrier of the strong force is a particle called the
gluon.
proton
proton,
elementary particle having a single positive electrical charge and constituting the
nucleus of the ordinary hydrogen atom. The positive charge of the nucleus of
any atom is due to its protons. Every atomic nucleus contains one or more
protons; the number of protons, called the atomic number, is different for every
element (see periodic table). The mass of the proton is about 1,840 times the
mass of the electron and slightly less than the mass of the neutron. The total
number of nucleons, as protons and neutrons are collectively called, in any
nucleus is the mass number of the nucleus. The existence of the nucleus was
postulated by Ernest Rutherford in 1911 to explain his experiments on the
scattering of alpha particles; in 1919 he discovered the proton as a product of the
disintegration of the atomic nucleus. The proton and the neutron are regarded as
two aspects or states of a single entity, the nucleon. The proton is the lightest of
the baryon class of elementary particles. The proton and other baryons are
composed of triplets of the elementary particle called the quark. A proton, for
instance, consists of two quarks called up and one quark called down; a neutron
consists of two down quarks and an up quark. The antiparticle of the proton, the
antiproton, was discovered in 1955; it has the same mass as the proton but a unit
negative charge and opposite magnetic moment. Protons are frequently used in a
particle accelerator as either the bombarding (accelerated) particle, the target
nucleus, or both. The possibility that the proton may have a finite lifetime has
recently come under examination. If the proton does indeed decay into lighter
products, however, it takes an extremely long time to do so; experimental
evidence suggests that the proton has a lifetime of at least 10³¹ years.
neutron star
neutron star,
extremely small, extremely dense star, about double the sun's mass but only a
few kilometers in radius, in the final stage of stellar evolution. Astronomers
Baade and Zwicky predicted the existence of neutron stars in 1933. In the
central core of a neutron star there are no stable atoms or nuclei; only
elementary particles can survive the extreme conditions of pressure and
temperature. Surrounding the core is a fluid composed primarily of neutrons
squeezed in close contact. The fluid is encased in a rigid crystalline crust a few
hundred meters thick. The outer gaseous atmosphere is probably only a few
centimeters thick. The neutron star resembles a single giant nucleus because the
density everywhere except in the outer shell is as high as the density in the
nuclei of ordinary matter. There is observational evidence of the existence of
several classes of neutron stars: pulsars are periodic sources of radio frequency,
X ray, or gamma ray radiation that fluctuate in intensity and are considered to be
rotating neutron stars. A neutron star may also be the smaller of the two
components in an X-ray binary star.
particle accelerator
particle accelerator,
apparatus used in nuclear physics to produce beams of energetic charged
particles and to direct them against various targets. Such machines, popularly
called atom smashers, are needed to observe objects as small as the atomic
nucleus in studies of its structure and of the forces that hold it together.
Accelerators are also needed to provide enough energy to create new particles.
Besides pure research, accelerators have practical applications in medicine and
industry, most notably in the production of radioisotopes. A majority of the
world's particle accelerators are situated in the United States, either at major
universities or national laboratories. In Europe the principal facility is the
European Laboratory for Particle Physics (CERN) near Geneva, Switzerland; in
Russia important installations exist at Dubna and Serpukhov.
Design of Particle Accelerators
There are many types of accelerator designs, although all have certain features
in common. Only charged particles (most commonly protons and electrons, and
their antiparticles; less often deuterons, alpha particles, and heavy ions) can be
artificially accelerated; therefore, the first stage of any accelerator is an ion
source to produce the charged particles from a neutral gas. All accelerators use
electric fields (steady, alternating, or induced) to speed up particles; most use
magnetic fields to contain and focus the beam. Meson factories (the largest of
which is at the Los Alamos, N.Mex., Scientific Laboratory), so-called because
of their copious pion production by high-current proton beams, operate at
conventional energies but produce much more intense beams than previous
accelerators; this makes it possible to repeat early experiments much more
accurately. In linear accelerators the particle path is a straight line; in other
machines, of which the cyclotron is the prototype, a magnetic field is used to
bend the particles in a circular or spiral path.
Linear Accelerators
The early linear accelerators used high voltage to produce high-energy particles;
a large static electric charge was built up, which produced an electric field along
the length of an evacuated tube, and the particles acquired energy as they moved
through the electric field. The Cockcroft-Walton accelerator produced high
voltage by charging a bank of capacitors in parallel and then connecting them in
series, thereby adding up their separate voltages. The Van de Graaff accelerator
achieved high voltage by using a continuously recharged moving belt to deliver
charge to a high-voltage terminal consisting of a hollow metal sphere. Today
these two electrostatic machines are used in low-energy studies of nuclear
structure and in the injection of particles into larger, more powerful machines.
Linear accelerators can be used to produce higher energies, but this requires
increasing their length.
Linear accelerators, in which there is very little radiation loss, are the most
powerful and efficient electron accelerators; the largest of these, the Stanford
Univ. linear accelerator (SLAC), completed in 1966, is 2 mi (3.2 km) long and
produces 20-GeV electrons (in nuclear physics, energies are commonly measured
in millions (MeV) or billions (GeV) of electron-volts, eV). New linear
machines differ from earlier electrostatic machines in that they use electric fields
alternating at radio frequencies to accelerate the particles, instead of using high
voltage. The acceleration tube has segments that are charged alternately positive
and negative. When a group of particles passes through the tube, it is repelled by
the segment it has left and is attracted by the segment it is approaching. Thus the
final energy is attained by a series of pushes and pulls. Recently, linear
accelerators have been used to accelerate heavy ions such as carbon, neon, and
nitrogen.
Circular Accelerators
In order to reach high energy without the prohibitively long paths required of
linear accelerators, E. O. Lawrence proposed (1932) that particles could be
accelerated to high energies in a small space by making them travel in a circular
or nearly circular path. In the cyclotron, which he invented, a cylindrical magnet
bends the particle trajectories into a circular path whose radius depends on the
mass of the particles, their velocity, and the strength of the magnetic field. The
particles are accelerated within a hollow, circular, metal box that is split in half
to form two sections, each in the shape of the capital letter D. A radio-frequency
electric field is impressed across the gap between the D's so that every time a
particle crosses the gap, the polarity of the D's is reversed and the particle gets
an accelerating "kick. The key to the simplicity of the cyclotron is that the
period of revolution of a particle remains the same as the radius of the path
increases because of the increase in velocity. Thus, the alternating electric field
stays in step with the particles as they spiral outward from the center of the
cyclotron to its circumference. However, according to the theory of relativity the
mass of a particle increases as its velocity approaches the speed of light; hence,
very energetic, high-velocity particles will have greater mass and thus less
acceleration, with the result that they will not remain in step with the field. For
protons, the maximum energy attainable with an ordinary cyclotron is about 10
million electron-volts.
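The constant period follows from the cyclotron relation f = qB/(2πm), which is independent of the orbit radius. A sketch with standard proton constants; the field strength is an illustrative choice:

    import math

    q = 1.602e-19   # proton charge, C
    m = 1.673e-27   # proton mass, kg
    B = 1.5         # magnetic field, tesla (assumed for illustration)

    f = q * B / (2 * math.pi * m)   # revolution frequency, Hz
    print(f / 1e6)                  # about 22.9 MHz, at any radius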
Two approaches exist for exceeding the relativistic limit for cyclotrons. In the
synchrocyclotron, the frequency of the accelerating electric field steadily
decreases to match the decreasing angular velocity of the protons. In the
isochronous cyclotron, the magnet is constructed so the magnetic field is
stronger near the circumference than at the center, thus compensating for the
mass increase and maintaining a constant frequency of revolution. The first
synchrocyclotron, built at the Univ. of California at Berkeley in 1946, reached
energies high enough to create pions, thus inaugurating the laboratory study of
the meson family of elementary particles.
Further progress in physics required energies in the GeV range, which led to the
development of the synchrotron. In this device, a ring of magnets surrounds a
doughnut-shaped vacuum tank. The magnetic field rises in step with the proton
velocities, thus keeping them moving in a circle of nearly constant radius,
instead of the widening spiral of the cyclotron. The entire center section of the
magnet is eliminated, making it possible to build rings with diameters measured
in miles. Particles must be injected into a synchrotron from another accelerator.
The first proton synchrotron was the cosmotron at Brookhaven (N.Y.) National
Laboratory, which began operation in 1952 and eventually attained an energy of
3 GeV. The 6.2-GeV synchrotron (the bevatron) at the Lawrence Radiation
Laboratory, Univ. of California at Berkeley, was used to discover the antiproton
(see antiparticle).
The 500-GeV synchrotron at the Fermi National Accelerator Laboratory at
Batavia, Ill., was built to be the most powerful accelerator in the world in the
early 1970s; the ring has a circumference of approximately 6 kilometers, or 4
miles. The machine was upgraded in 1983 to accelerate protons and
counterpropagating antiprotons to such enormous speeds that the ensuing
impacts deliver energies of up to 2 trillion electron-volts (TeV); hence the ring
has been dubbed the Tevatron. The Tevatron is an example of a so-called
colliding-beams machine, which is really a double accelerator that causes two
separate beams to collide, either head-on or at a grazing angle. Because of
relativistic effects, producing the same reactions with a conventional accelerator
would require a single beam hitting a stationary target with much more than
twice the energy of either of the colliding beams. Plans were made to build a
huge accelerator in Waxahachie, Tex. Called the Superconducting Supercollider
(SSC), a ring 87 kilometers (54 miles) in circumference lined with
superconducting magnets (see superconductivity) would produce 40 TeV
particle collisions. However, the program was ended in 1993 when government
funding was stopped.
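The colliding-beam advantage can be made concrete. Neglecting rest masses, a fixed-target experiment needs a beam energy of about E = (E_cm)²/(2mc²) to reach the same center-of-mass energy E_cm; for the Tevatron's 2-TeV collisions:

    e_cm = 2000.0    # center-of-mass energy, GeV (2 TeV)
    m_p = 0.938      # proton rest energy, GeV

    e_fixed = e_cm**2 / (2 * m_p)   # equivalent fixed-target beam energy
    print(e_fixed / 1000)           # about 2100 TeV on a stationary target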
The synchrotron can be used to accelerate electrons but is inefficient. An
electron moves much faster than a proton of the same energy and hence loses
much more energy in synchrotron radiation. A circular machine used to
accelerate electrons is the betatron, invented by Donald Kerst in 1940. Electrons
are injected into a doughnut-shaped vacuum chamber that surrounds a magnetic
field. The magnetic field is steadily increased, inducing a tangential electric field
that accelerates the electrons (see induction).
ion
ion,
atom or group of atoms having a net electric charge.
Positive and Negative Electric Charges
A neutral atom or group of atoms becomes an ion by gaining or losing one or
more electrons or protons. Since the electron and proton have equal but opposite
unit charges, the charge of an ion is always expressed as a whole number of unit
charges and is either positive or negative. A simple ion consists of only one
charged atom; a complex ion consists of an aggregate of atoms with a net
charge. If an atom or group loses electrons or gains protons, it will have a net
positive charge and is called a cation. If an atom or group gains electrons or
loses protons, it will have a net negative charge and is called an anion.
Since ordinary matter is electrically neutral, ions normally exist as groups of
cations and anions such that the sum total of positive and negative charges is
zero. In common table salt, or sodium chloride, NaCl, the sodium cations, Na+,
are neutralized by chlorine anions, Cl−. In the salt sodium carbonate, Na2CO3,
two sodium cations are needed to neutralize each carbonate anion, CO3²⁻,
because its charge is twice that of the sodium ion.
Ionization of Neutral Atoms
Ionization of neutral atoms can occur in several different ways. Compounds
such as salts dissociate in solution into their ions, e.g., in solution sodium
chloride exists as free Na+ and Cl− ions. Compounds that contain dissociable
protons, or hydrogen ions, H+, or basic ions such as hydroxide ion, OH−, make
acidic or basic solutions when they dissociate in water (see acids and bases;
dissociation). Substances that ionize in solution are called electrolytes; those that
do not ionize, like sugar and alcohol, are called nonelectrolytes. Ions in solution
conduct electricity. If a positive electrode, or anode, and a negative electrode, or
cathode, are inserted into such a solution, the ions are attracted to the electrode
of opposite charge, and simultaneous currents of ions arise in opposite directions
to one another. Nonelectrolytes do not conduct electricity.
Ionization can also be caused by the bombardment of matter with high-speed
particles or other radiation. Ultraviolet radiation and low-energy X rays excite
molecules in the upper atmosphere sufficiently to cause them to lose electrons
and become ionized, giving rise to several different layers of ions in the earth's
atmosphere (see ionosphere). A gas can be ionized by passing an electron
current through it; the ionized gas then permits the passage of a much higher
current. Heating to high temperatures also ionizes substances; certain salts yield
ions in their melts as they do in solution.
Applications of Ionization
Ionization has many applications. Vapor lamps and fluorescent lamps take
advantage of the light given off when positive ions recombine with electrons.
Because of their electric charge the movement of ions can be controlled by
electrostatic and magnetic fields. Particle accelerators, or atom smashers, use
both fields to accelerate and aim electrons and hydrogen and helium ions. The
mass spectrometer utilizes ionization to determine molecular weights and
structures. High-energy electrons are used to ionize a molecule and break it up
into fragment ions. The ratio of mass to charge for each fragment is determined
by its behavior in electric and magnetic fields. The ratio of mass to charge of the
parent ion gives the molecular weight directly, and the fragmentation pattern
gives clues to the molecular structures.
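In a magnetic sector instrument, for example, an ion of mass m and charge q accelerated through a potential V follows a circle of radius r = sqrt(2mV/q)/B, so heavier fragments bend less sharply. A sketch with illustrative values:

    import math

    def path_radius(mass_kg, charge_c, volts, b_tesla):
        # speed from the accelerating potential, then r = m*v / (q*B)
        v = math.sqrt(2 * charge_c * volts / mass_kg)
        return mass_kg * v / (charge_c * b_tesla)

    amu, e = 1.66e-27, 1.602e-19
    for m in (28, 44):   # singly charged fragments of 28 and 44 amu
        print(m, path_radius(m * amu, e, 2000.0, 0.5))  # about 6.8 cm and 8.5 cm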
In ion-exchange reactions a specially prepared insoluble resin with attached
dissociable ions is packed into a column. When a solution is passed through the
column, ions from the solution are exchanged with ions on the resin (see
chromatography). Water softeners use the mineral zeolite, a natural ion-exchange resin; sodium ions from the zeolite are exchanged for metal ions from the insoluble salt that makes the water hard, converting it to a soluble salt. Ion-permeable membranes allow some ions to pass through more readily than
others; some membranes of the human nervous system are selectively permeable
to the ions sodium and potassium.
Engineers have developed experimental ion propulsion engines that propel
rockets by ejecting high-speed ions; most other rocket engines eject combustion
products. Although an ion engine does not develop enough thrust to launch a
rocket into earth orbit, it is considered practical for propelling one through
interplanetary space on long-distance trips, e.g., between the earth and Jupiter. If
left running for long periods of time on such a trip, the ion engine would
gradually accelerate the rocket to immense speeds.
electron-volt
electron-volt,
abbr. eV, unit of energy used in atomic and nuclear physics; 1 electron-volt is
the energy transferred in moving a unit charge, positive or negative and equal to
that charge on the electron, through a potential difference of 1 volt. The
maximum energy of a particle accelerator is usually expressed in multiples of
the electron-volt, such as million electron-volts (MeV) or billion electron-volts
(GeV). Because mass is a form of energy (see relativity), the masses of
elementary particles are sometimes expressed in electron-volts; e.g., the mass of
the electron, the lightest particle with measurable rest mass, is 0.51 MeV/c²,
where c is the speed of light.
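The conversions are mechanical once the definition is fixed: 1 eV = 1.602 × 10⁻¹⁹ joule. Recovering the electron figure from standard constants:

    c = 2.998e8       # speed of light, m/s
    ev = 1.602e-19    # joules per electron-volt
    m_e = 9.109e-31   # electron mass, kg

    rest_energy_j = m_e * c**2        # E = mc^2
    print(rest_energy_j / ev / 1e6)   # about 0.511 MeV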
isotope
isotope
in chemistry and physics, one of two or more atoms having the same
atomic number but differing in atomic weight and mass number. The concept of
isotope was introduced by F. Soddy in explaining aspects of radioactivity; the
first stable isotope (of neon) was discovered by J. J. Thomson. The nuclei of
isotopes contain identical numbers of protons, equal to the atomic number of the
atom, and thus represent the same chemical element, but do not have the same
number of neutrons. Thus isotopes of a given element have identical chemical
properties but slightly different physical properties and very different half-lives,
if they are radioactive (see half-life). For most elements, both stable and
radioactive isotopes are known. Radioactive isotopes of many common
elements, such as carbon and phosphorus, are used as tracers in medical,
biological, and industrial research. Their radioactive nature makes it possible to
follow the substances in their paths through a plant or animal body and through
many chemical and mechanical processes; thus a more exact knowledge of the
processes under investigation can be obtained. The very slow and regular
transmutations of certain radioactive substances, notably carbon-14, make them
useful as "nuclear clocks for dating archaeological and geological samples. By
taking advantage of the slight differences in their physical properties, the
isotopes may be separated. The mass spectrograph uses the slight difference in
mass to separate different isotopes of the same element. Depending on their
nuclear properties, the isotopes thus separated have important applications in
nuclear energy. For example, the highly fissionable isotope uranium-235 must
be separated from the more plentiful isotope uranium-238 before it can be used
in a nuclear reactor or atomic bomb.
atomic weight
atomic weight,
mean (weighted average) of the masses of all the naturally occurring isotopes of
a chemical element, as contrasted with atomic mass, which is the mass of any
individual isotope. Although the first atomic weights were calculated at the
beginning of the 19th cent., it was not until the discovery of isotopes by F.
Soddy (c.1913) that the atomic mass of many individual isotopes was
determined, leading eventually to the adoption of the atomic mass unit as the
standard unit of atomic weight.
Effect of Isotopes in Calculating Atomic Weight
Most naturally occurring elements have one principal isotope and only
insignificant amounts of other isotopes. Therefore, since the atomic mass of any
isotope is very nearly a whole number, most atomic weights are nearly whole
numbers, e.g., hydrogen has atomic weight 1.00797 and nitrogen has atomic
weight 14.007. However, some elements have more than one principal isotope,
and the atomic weight for such an element (since it is a weighted average) is not
close to a whole number; e.g., the two principal isotopes of chlorine have atomic
masses very nearly 35 and 37 and occur in the approximate ratio 3 to 1, so the
atomic weight of chlorine is about 35.5. Some other common elements whose
atomic weights are not nearly whole numbers are antimony, barium, boron,
bromine, cadmium, copper, germanium, lead, magnesium, mercury, nickel,
strontium, tin, and zinc.
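The chlorine figure is just the weighted average described above; the masses and abundances below are standard values consistent with the approximate 3-to-1 ratio:

    # (isotope mass in amu, fractional abundance) for Cl-35 and Cl-37
    chlorine = [(34.969, 0.7577), (36.966, 0.2423)]
    print(sum(mass * frac for mass, frac in chlorine))   # about 35.45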
Atomic weights were formerly determined directly by chemical means; now a
mass spectrograph is usually employed. The atomic mass and relative
abundance of the isotopes of an element can be measured very accurately and
with relative ease by this method, whereas chemical determination of the atomic
weight of an element requires a careful and precise quantitative analysis of as
many of its compounds as possible.
Development of the Concept of Atomic Weight
J. L. Proust formulated (1797) what is now known as the law of definite
proportions, which states that the proportions by weight of the elements forming
any given compound are definite and invariable. John Dalton proposed (c.1810)
an atomic theory in which all atoms of an element have exactly the same weight.
He made many measurements of the combining weights of the elements in
various compounds. By postulating that simple compounds always contain one
atom of each element present, he assigned relative atomic weights to many
elements, assigning a weight of 1 to hydrogen as the basis of his scale. He
thought that water had the formula HO, and since he found by experiment that 8
weights of oxygen combine with 1 weight of hydrogen, he assigned an atomic
weight of 8 to oxygen. Dalton also formulated the law of multiple proportions,
which states that when two elements combine in more than one proportion by
weight to form two or more distinct compounds, their weight proportions in
those compounds are related to one another in simple ratios. Dalton's work
sparked an interest in determining atomic weights, even though some of his
results, such as that for oxygen, were soon shown to be incorrect.
While Dalton was working on weight relationships in compounds, J. L. Gay-Lussac was experimenting with the chemical reactions of gases, and he found
that, when under the same conditions of temperature and pressure, gases react in
simple whole-number ratios by volume. Avogadro proposed (1811) a theory of
gases that holds that equal volumes of two gases at the same temperature and
pressure contain the same number of particles, and that these basic particles are
not always single atoms. This theory was rejected by Dalton and many other
chemists.
P. L. Dulong and A. T. Petit discovered (1819) a specific-heat method for
determining the approximate atomic weight of elements. Among the first
chemists to work out a systematic group of atomic weights (c.1830) was J. J.
Berzelius, who was influenced in his choice of formulas for compounds by the
method of Dulong and Petit. He attributed the formula H2O to water and
determined an atomic weight of 16 for oxygen. J. S. Stas later refined many of
Berzelius's weights. Stanislao Cannizzaro applied Avogadro's theories to
reconcile atomic weights used by organic and inorganic chemists.
The availability of fairly accurate atomic weights and the search for some
relationship between atomic weight and chemical properties led to J. A. R.
Newlands's table of "atomic numbers" (1865), in which he noted that if the elements were arranged in order of increasing atomic weight "the eighth element, starting from a given one, is a kind of repetition of the first." He called
this the law of octaves. Such investigations led to the statement of the periodic
law, which was discovered independently (1869) by D. I. Mendeleev in Russia
and J. L. Meyer in Germany. T. W. Richards did important work on atomic
weights (after 1883) and revised some of Stas's values.
atomic mass
atomic mass,
the mass of a single atom, usually expressed in atomic mass units (amu). Most
of the mass of an atom is concentrated in the protons and neutrons contained in
the nucleus. Each proton or neutron weighs about 1 amu, and thus the atomic
mass is always very close to the mass number (total number of protons and
neutrons in the nucleus). Atoms of an isotope of an element all have the same
atomic mass. Atomic masses are usually determined by mass spectrography (see
mass spectrograph). They have been determined with great relative accuracy,
but their absolute value is less certain.
mass number
mass number,
often represented by the symbol A, the total number of nucleons (neutrons and
protons) in the nucleus of an atom. All atoms of a chemical element have the
same atomic number (number of protons in the nucleus) but may have different
mass numbers (from having different numbers of neutrons in the nucleus).
Atoms of an element with the same mass number make up an isotope of the
element. Different isotopes of the same element cannot have the same mass
number, but isotopes of different elements often do have the same mass number,
e.g., carbon-14 (6 protons and 8 neutrons) and nitrogen-14 (7 protons and 7
neutrons).
atomic mass unit
atomic mass unit
or amu, in chemistry and physics, unit defined as exactly 1/12 the mass of an
atom of carbon-12, the isotope of carbon with six protons and six neutrons in its
nucleus. One amu is equal to approximately 1.66 × 10⁻²⁴ grams.
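The gram value follows from Avogadro's number: 12 grams of carbon-12 contain about 6.022 × 10²³ atoms, and one amu is 1/12 of one atom's mass:

    avogadro = 6.022e23
    atom_mass_g = 12.0 / avogadro   # mass of one carbon-12 atom
    print(atom_mass_g / 12)         # about 1.66e-24 g per amu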
half-life
half-life,
measure of the average lifetime of a radioactive substance (see radioactivity) or
an unstable subatomic particle. One half-life is the time required for one half of
any given quantity of the substance to decay. For example, the half-life of a
particular radioactive isotope of thorium is 8 minutes. If 100 grams of the
isotope are originally present, then only 50 grams will remain after 8 minutes,
25 grams after 16 minutes (2 half-lives), 12.5 grams after 24 minutes (3 half-lives), and so on. Of course the 87.5 grams that are no longer present as the
original substance after 24 minutes have not disappeared but remain in the form
of one or more other substances in the isotope's radioactive decay series.
Individual decays are random and cannot be predicted, but this statistical
measure of the great number of atoms in the sample is very accurate. The half-life of a radioactive isotope is a characteristic of that isotope and is not affected
by any change in physical or chemical conditions.
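The thorium example generalizes to N = N₀ · (1/2)^(t/T), where T is the half-life; a minimal sketch:

    def remaining(initial, half_life, elapsed):
        # quantity of the original substance left after `elapsed` time
        return initial * 0.5 ** (elapsed / half_life)

    for minutes in (8, 16, 24):
        print(minutes, remaining(100.0, 8.0, minutes))   # 50.0, 25.0, 12.5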
radioactive isotope
radioactive isotope
or radioisotope, natural or artificially created isotope of a chemical element
having an unstable nucleus that decays, emitting alpha, beta, or gamma rays
until stability is reached. The stable end product is a nonradioactive isotope of
another element, e.g., radium-226 decays finally to lead-206. Very careful
measurements show that many materials contain traces of radioactive isotopes.
For a time it was thought that these materials were all members of the actinide
series; however, exacting radiochemical research has demonstrated that certain
of the light elements also have naturally occurring isotopes that are radioactive.
Since minute traces of radioactive isotopes can be sensitively detected by means
of the Geiger counter and other methods, they have various uses in medical
therapy, diagnosis, and research. In therapy, they are used to kill or inhibit
specific malfunctioning cells. Radioactive phosphorus is used to treat abnormal
cell proliferation, e.g., polycythemia (increase in red cells) and leukemia
(increase in white cells). Radioactive iodine can be used in the diagnosis of
thyroid function and in the treatment of hyperthyroidism. Since the iodine taken
into the body concentrates in the thyroid gland, the radiation can be confined
to that organ. In research, radioactive isotopes as tracer agents make it possible
to follow the action and reaction of organic and inorganic substances within the
body, many of which could not be studied by any other means. They also help to
ascertain the effects of radiation on the human organism (see radiation sickness).
In industry, radioactive isotopes are used for a number of purposes, including
measuring the thickness of metal or plastic sheets by the amount of radiation
they can stop, testing for corrosion or wear, and monitoring various processes.
radioactivity
radioactivity,
spontaneous disintegration or decay of the nucleus of an atom by emission of
particles, usually accompanied by electromagnetic radiation. The energy
produced by radioactivity has important military and industrial applications.
However, the rays emitted by radioactive substances can cause radiation
sickness, and such substances must therefore be handled with extreme care (see
radioactive waste).
Radioactive Emissions
Natural radioactivity is exhibited by several elements, including radium,
uranium, and other members of the actinide series, and by some isotopes of
lighter elements, such as carbon-14, used in radioactive dating. Radioactivity
may also be induced, or created artificially, by bombarding the nuclei of
normally stable elements in a particle accelerator. Essentially there is no
difference between these two manifestations of radioactivity.
The radiation produced during radioactivity is predominantly of three types,
designated as alpha, beta, and gamma rays. These types differ in velocity, in the
way in which they are affected by a magnetic field, and in their ability to
penetrate or pass through matter. Other, less common, types of radioactivity are
electron capture (capture of one of the orbiting atomic electrons by the unstable
nucleus) and positron emission, both forms of beta decay and both resulting in the change of a proton to a neutron within the nucleus, and internal conversion, in
which an excited nucleus transfers energy directly to one of the atom's orbiting
electrons and ejects it from the atom.
Alpha Radiation
Alpha rays have the least penetrating power, move at a slower velocity than the
other types, and are deflected slightly by a magnetic field in a direction that
indicates a positive charge. Alpha rays are nuclei of ordinary helium atoms (see
alpha particle). Alpha decay reduces the atomic weight, or mass number, of a
nucleus, while beta and gamma decay leave the mass number unchanged. Thus,
the net effect of alpha radioactivity is to produce nuclei lighter than those of the
original radioactive substance. For example, in the disintegration, or decay, of
uranium-238 by the emission of alpha particles, radioactive thorium (formerly
called ionium) is produced. The alpha decay reduces the atomic number of the
nucleus by 2 and the mass number by 4: ₉₂U²³⁸ → ₉₀Th²³⁴ + ₂He⁴.
Gamma Radiation
Gamma rays have very great penetrating power and are not affected at all by a
magnetic field. They move at the speed of light and have a very short
wavelength (or high frequency); thus they are a type of electromagnetic
radiation (see gamma radiation). Gamma rays result from the transition of nuclei
from excited states (higher energy) to their ground state (lowest energy), and
their production is analogous to the emission of ordinary light caused by
transitions of electrons within the atom (see atom; spectrum). Gamma decay
often accompanies alpha or beta decay and affects neither the atomic number
nor the mass number of the nucleus.
Radioactive Decay
The nuclei of elements exhibiting radioactivity are unstable and are found to be
undergoing continuous disintegration (i.e., gradual breakdown). The
disintegration proceeds at a definite rate characteristic of the particular nucleus;
that is, each radioactive isotope has a definite lifetime. However, the time of
decay of an individual nucleus is unpredictable. The lifetime of a radioactive
substance is not affected in any way by any physical or chemical conditions to
which the substance may be subjected.
Half-Life of an Element
The rate of disintegration of a radioactive substance is commonly designated by
its half-life, which is the time required for one half of a given quantity of the
substance to decay. Depending on the element, a half-life can be as short as a
fraction of a second or as long as several billion years.
Radioactive Disintegration Series
The product of a radioactive decay may itself be unstable and undergo further
decays, by either alpha or beta emission. Thus, a succession of unstable
elements may be produced, the series continuing until a nucleus is produced that
is stable. Such a series is known as a radioactive disintegration, or decay, series.
The original nucleus in a decay series is called the parent nucleus, and the nuclei
resulting from successive disintegrations are known as daughter nuclei.
There are four known radioactive decay series, the members of a given series
having mass numbers that differ by jumps of 4. The series beginning with
uranium-238 and ending with lead-206 is known as the 4n+2 series because all
the mass numbers in the series are 2 greater than an integral multiple of 4 (e.g.,
238=4×59+2, 206=4×51+2). The accompanying illustration shows a portion of
the uranium disintegration series, i.e., from radium-226 to lead-206. The series
beginning with thorium-232 is the 4n series, and that beginning with uranium-235 is the 4n+3 series, or actinium series. The 4n+1 series, which begins with
neptunium-237, is not found in nature because the half-life of the parent nucleus
(about 2 million years) is many times less than the age of the earth, and all
naturally occurring samples have already disintegrated. The 4n+1 series is
produced artificially in nuclear reactors.
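Series membership is a simple congruence: alpha decay lowers the mass number by 4 and beta decay leaves it unchanged, so A mod 4 is the same for every member of a series. A quick check on three members of the uranium series:

    for name, a in [("U-238", 238), ("Ra-226", 226), ("Pb-206", 206)]:
        print(name, a % 4)   # each prints 2, hence the name "4n + 2"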
Because the rates of disintegration of the members of a radioactive decay series
are constant, the age of rocks and other materials can be determined by
measuring the relative abundances of the different members of the series. All of
the naturally occurring decay series end in a stable isotope of lead, so that a rock containing mostly
lead as compared to heavier elements would be very old.
Discovery of Radioactivity
Natural radioactivity was first observed in 1896 by A. H. Becquerel, who
discovered that when salts of uranium are brought into the vicinity of an
unexposed photographic plate carefully protected from light, the plate becomes
exposed. The radiation from uranium salts also causes a charged electroscope to
discharge. In addition, the salts exhibit phosphorescence and are able to produce
fluorescence. Since these effects are produced both by salts and by pure
uranium, radioactivity must be a property of the element and not of the salt. In
1899 E. Rutherford discovered and named alpha and beta radiation, and in 1900
P. Villard identified gamma radiation. Marie and Pierre Curie extended the work
on radioactivity, demonstrating the radioactive properties of thorium and
discovering the highly radioactive element radium in 1898. Frédéric and Irène
Joliot-Curie discovered the first example of artificial radioactivity in 1934 by
bombarding nonradioactive elements with alpha particles.
phosphorescence
phosphorescence
luminescence produced by certain substances after absorbing radiant
energy or other types of energy. Phosphorescence is distinguished from
fluorescence in that it continues even after the radiation causing it has ceased.
Phosphorescence was first observed in the 17th cent. but was not studied
scientifically until the 19th cent. According to the theory first advanced by
Philipp Lenard, energy is absorbed by a phosphorescent substance, causing
some of the electrons of the crystal to be displaced. These electrons become
trapped in potential troughs from which they are eventually freed by
temperature-related energy fluctuations within the crystal. As they fall back to
their original energy levels, they release their excess energy in the form of light.
Impurities in the crystal can play an important role, some serving as activators or
coactivators, others as sensitizers, and still others as inhibitors, of
phosphorescence. Organo-phosphors are organic dyes that fluoresce in liquid
solution and phosphoresce in solid solution or when adsorbed on gels. Their
phosphorescence, however, is not temperature-related, as ordinary
phosphorescence is, and some consider it instead to be a type of fluorescence
that dies away slowly.
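Lenard's picture of trapped electrons freed by thermal fluctuations implies a strongly temperature-dependent afterglow. A rough sketch of such a thermally activated escape rate follows; the trap depth and attempt frequency are illustrative assumptions, not values from the article:

    import math

    # Thermally activated escape of a trapped electron: the escape rate
    # scales as exp(-trap_depth / kT), so the afterglow fades much faster
    # in a warm crystal than in a cold one.
    K_B = 8.617e-5  # Boltzmann constant in eV per kelvin

    def escape_rate(trap_depth_ev, temperature_k, attempt_freq_hz=1e12):
        return attempt_freq_hz * math.exp(-trap_depth_ev / (K_B * temperature_k))

    for temperature in (250, 300, 350):  # kelvin
        print(temperature, f"{escape_rate(0.7, temperature):.2e} per second")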
fluorescence
fluorescence,
luminescence in which light of a visible color is emitted from a
substance under stimulation or excitation by light or other forms of
electromagnetic radiation or by certain other means. The light is given off only
while the stimulation continues; in this the phenomenon differs from
phosphorescence, in which light continues to be emitted after the excitation by
other radiation has ceased. Fluorescence of certain rocks and other substances
had been observed for hundreds of years before its nature was understood.
Probably the first to explain it was the British scientist Sir George G. Stokes,
who named the phenomenon after fluorite, a strongly fluorescent mineral.
Stokes is credited with the discovery (1852) that fluorescence can be induced in
certain substances by stimulation with ultraviolet light. He formulated Stokes's
law, which states that the wavelength of the fluorescent light is always greater
than that of the exciting radiation, but exceptions to this law have been found.
Later it was discovered that certain organic and inorganic substances can be
made to fluoresce by activation not only with ultraviolet light but also with
visible light, infrared radiation, X rays, radio waves, cathode rays, friction, heat,
pressure, and some other excitants. Fluorescent substances, sometimes also
known as phosphors, are used in paints and coatings, but their chief use is in
fluorescent lighting.
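Stokes's law is at bottom an energy statement: some absorbed energy is dissipated before emission, so the emitted photon carries less energy, and hence a longer wavelength, than the exciting one. A small numerical check (the wavelengths are hypothetical examples):

    # Photon energy from wavelength: E = h*c / wavelength.
    # Stokes's law: the fluorescent wavelength exceeds the exciting one,
    # i.e., the emitted photon carries less energy than the absorbed one.
    HC_EV_NM = 1239.84  # h*c expressed in eV·nm

    def photon_energy_ev(wavelength_nm):
        return HC_EV_NM / wavelength_nm

    exciting_nm, emitted_nm = 365.0, 450.0  # assumed UV in, blue out
    print(f"exciting: {photon_energy_ev(exciting_nm):.2f} eV")  # ~3.40 eV
    print(f"emitted:  {photon_energy_ev(emitted_nm):.2f} eV")   # ~2.76 eV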
luminescence
luminescence,
general term applied to all forms of cool light, i.e., light emitted by sources other
than a hot, incandescent body, such as a black body radiator. Luminescence is
caused by the movement of electrons within a substance from more energetic
states to less energetic states. There are many types of luminescence, including
chemiluminescence, produced by certain chemical reactions, chiefly oxidations,
at low temperatures; electroluminescence, produced by electric discharges,
which may appear when silk or fur is stroked or when adhesive surfaces are
separated; and triboluminescence, produced by rubbing or crushing crystals.
Bioluminescence is luminescence produced by living organisms and is thought
to be a type of chemiluminescence. The luminescence observed in the sea is
produced by living organisms, many of them microscopic, that collect at the
surface. Other examples of bioluminescence include glowworms, fireflies, and
various fungi and bacteria found on rotting wood or decomposing flesh. If the
luminescence is caused by absorption of some form of radiant energy, such as
ultraviolet radiation or X rays (or by some other form of energy, such as
mechanical pressure), and ceases as soon as (or very shortly after) the radiation
causing it ceases, then it is known as fluorescence. If the luminescence continues
after the radiation causing it has stopped, then it is known as phosphorescence.
The term phosphorescence is often incorrectly considered synonymous with
luminescence.
bioluminescence
bioluminescence,
production of light by living organisms. Organisms that are
bioluminescent include certain fungi and bacteria that emit light continuously.
The dinoflagellates, a group of marine algae, produce light only when disturbed.
Bioluminescent animals include such organisms as ctenophores, annelid worms,
mollusks, insects such as fireflies, and fish. The production of light in
bioluminescent organisms results from the conversion of chemical energy to
light energy. In fireflies, one type of a group of substances known collectively as
luciferin combines with oxygen to form an oxyluciferin in an excited state,
which quickly decays, emitting light as it does. The reaction is mediated by an
enzyme, luciferase, which is normally bound to ATP (see adenosine
triphosphate) in an inactive form. When the signal for the specialized
bioluminescent cells to flash is received, the luciferase is liberated from the ATP,
causes the luciferin to oxidize, and then somehow recombines with ATP.
Different organisms produce different bioluminescent substances.
Bioluminescent fish are common in ocean depths; the light probably aids in
species recognition in the darkness. Other animals seem to use luminescence in
courtship and mating and to divert predators or attract prey.
synchrotron radiation
synchrotron radiation,
in physics, electromagnetic radiation emitted by high-speed electrons spiraling
along the lines of force of a magnetic field (see magnetism). Depending on the
electron's energy and the strength of the magnetic field, the maximum intensity
will occur as radio waves, visible light, or X rays. The emission is a
consequence of the constant acceleration experienced by the electrons as they
move in nearly circular orbits; according to Maxwell's equations, all accelerated
charged particles emit electromagnetic radiation. Although predicted much
earlier, synchrotron radiation was first observed as a glow associated with
electrons orbiting in high-energy particle accelerators, such as the synchrotron. In
astronomy, synchrotron radiation has been suggested as the mechanism for
producing strong celestial radio sources like the Crab Nebula (see radio
astronomy). Synchrotron radiation is employed in a host of applications, ranging
from solid-state physics to medicine. As excellent producers of X rays,
synchrotron sources offer unique probes of the semiconductors that lie at the
heart of the electronics industry. Both ultraviolet radiation and X rays generated
by synchrotrons are also employed in the treatment of diseases, especially
certain forms of skin cancer.
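The dependence of the emission on electron energy and field strength can be made concrete through the critical photon energy, near which the radiated spectrum peaks. The sketch below uses the standard practical formula for electrons; the beam parameters are assumed for illustration:

    # Critical photon energy of synchrotron radiation from electrons,
    # in the standard practical form:
    #   E_c [keV] ~ 0.665 * (electron energy in GeV)**2 * (field in tesla)
    # Roughly half the radiated power is emitted above this energy.
    def critical_energy_kev(energy_gev, field_tesla):
        return 0.665 * energy_gev**2 * field_tesla

    # Assumed parameters: a 3 GeV electron beam in a 1.4 T bending magnet.
    print(f"{critical_energy_kev(3.0, 1.4):.1f} keV")  # ~8.4 keV: X rays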
fluoroscope
fluoroscope,
instrument consisting of an X-ray machine (see X ray) and a
fluorescent screen that may be used by physicians to view the internal organs of
the body. During medical diagnosis the patient stands between the X-ray
machine, or other radiation source, and the fluorescent screen. Radiation passes
through the body, producing varying degrees of light and shadow on the screen.
Although the regular X-ray photograph shows more detail, fluoroscopy is
preferable when the physician wants to see the live image, i.e., observe the size,
shape, and movement of the patient's internal organs. In industry the fluoroscope
is used for the examination of materials, manufactured objects, welds, castings,
and other objects, principally for flaws.
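The pattern of light and shadow on the screen reflects exponential attenuation of the beam along each path through the object; a minimal sketch, with attenuation coefficients that are assumed round numbers rather than measured values:

    import math

    # Beer-Lambert attenuation of an X-ray beam: I = I0 * exp(-mu * x).
    # Thicker or denser material transmits less and so casts a darker
    # shadow on the fluorescent screen.
    def transmitted_fraction(mu_per_cm, thickness_cm):
        return math.exp(-mu_per_cm * thickness_cm)

    # Assumed round-number coefficients at a diagnostic beam energy:
    print(f"soft tissue, 10 cm: {transmitted_fraction(0.2, 10.0):.2f}")  # ~0.14
    print(f"bone, 1 cm: {transmitted_fraction(0.5, 1.0):.2f}")           # ~0.61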
diffraction
diffraction,
bending of waves around the edge of an obstacle. When light strikes an opaque
body, for instance, a shadow forms on the side of the body that is shielded from
the light source. Ordinarily light travels in straight lines through a uniform,
transparent medium, but those light waves that just pass the edges of the opaque
body are bent, or deflected. This diffraction produces a fuzzy border region
between the shadow area and the lighted area. Upon close examination it can be
seen that this border region is actually a series of alternate dark and light lines
extending both slightly into the shadow area and slightly into the lighted area. If
the observer looks for these patterns, it will be found that they are not always sharp. However, a sharp pattern can be produced if a single, distant light source, or a
point light source, is used to cast a shadow behind an opaque body. Diffraction
also occurs when light waves interact with a device called a diffraction grating.
A diffraction grating may be either a transmission grating (a plate pierced with
small, parallel, evenly spaced slits through which light passes) or a reflection
grating (a plate of metal or glass that reflects light from polished strips between
parallel lines ruled on its surface). In the case of a reflection grating, the smooth
surfaces between the lines act as narrow slits. The number of these slits or lines
is often 12,000 or more to the centimeter (30,000 to the inch). The ruling is
generally done with a fine diamond point. Since the light diffracted is also
dispersed (see spectrum), these gratings are utilized in diffraction spectroscopes
for producing and analyzing spectra and for measuring directly the wavelengths
of lines appearing in certain spectra. The diffraction of X rays by crystals is used
to examine the atomic and molecular structure of these crystals. Beams of
particles can also exhibit diffraction since, according to the quantum theory, a
moving particle also has certain wavelike properties. Both electron diffraction
and neutron diffraction have been important in modern physics research. Sound
waves and water waves also undergo diffraction.
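For a transmission grating, the directions of the bright maxima follow from the grating equation d sin θ = mλ, where d is the slit spacing and m the order; this is what lets a grating measure wavelengths directly. A short sketch (the wavelength is an assumed example):

    import math

    # Grating equation d*sin(theta) = m*lambda: angle of the m-th order
    # maximum for a grating with a given line density.
    def diffraction_angle_deg(lines_per_cm, wavelength_nm, order=1):
        d_nm = 1.0e7 / lines_per_cm        # slit spacing in nanometers
        s = order * wavelength_nm / d_nm
        if abs(s) > 1.0:
            return None                    # this order is not produced
        return math.degrees(math.asin(s))

    # 12,000 lines per centimeter, green light at 550 nm (assumed):
    print(diffraction_angle_deg(12000, 550.0))      # ~41.3 degrees
    print(diffraction_angle_deg(12000, 550.0, 2))   # None: no second order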