11 Sound
Sound is one of our most important forms of communication. The science of sound
is known as acoustics. In this chapter we learn about the physical properties of sound
and how to describe sound in the language of waves. We study how sound can be produced in speech as well as musical instruments, and how our ear works to detect
sound and transform its energy into electrical signals to be interpreted by our brain.
Depending on the relative motion of the sound source and detector, the frequency of
sound is changed according to the Doppler effect, studied next in this chapter.
Ultrasound is simply sound at frequencies beyond the detection capabilities of our
ears. It has a number of medical and scientific applications that we study, including
ultrasonic imaging, routinely used for fetal monitoring and for imaging internal
organs of the body.
1. BASICS
What happens when someone is speaking to you that enables you to hear them? The sound you hear is first generated by the person forcing a set of vocal cords in their larynx to vibrate while expelling air. The intonation and pitch are controlled by various muscles, the tongue, lips, and mouth. Sound emitted by the person then travels
through the air to your ears where in a series of remarkable steps it is converted into
an electrical signal that travels to the auditory center of your brain. We interpret
sound to have several properties, including loudness, pitch, and tonal qualities or timbre, but what is sound, how does it travel through the air, and what physical qualities
does it have that correspond to the properties just mentioned?
When vocal cords vibrate, they force molecules of air in the larynx to vibrate
through collisions that periodically transfer momentum to the surrounding air
(Figure 11.1). Consider a zone or band of air molecules in the vicinity of a vocal
cord and let's follow those particular molecules through one oscillation in
Figure 11.2. The vocal cord's motion to the right increases the local momentum of
our neighboring band of molecules, thus increasing the local pressure (in the figure
we code the increased local momentum or pressure with a darker band). There is also
a corresponding increase in the local density above the mean density as our molecules collide with those just to the right, and a subsequent corresponding decrease in
the local pressure and density below the mean in the band of molecules just to the left
of the vocal cord. As the momentum of our band on the right is transferred through collisions with neighboring molecules farther to the right and the vocal cord oscillates
to the left, our band of molecules slows down, reducing its pressure and density, and
a net restoring force to the left is applied from the pressure (and density) imbalance.
Then, as the vocal cord moves again to the right, our molecules collide with others
from the left that have been pushed to the right and this process repeats itself. Thus
J. Newman, Physics of the Life Sciences, DOI: 10.1007/978-0-387-77259-2_11,
© Springer Science+Business Media, LLC 2008
FIGURE 11.1 The larynx, showing the vocal cords that vibrate to produce sounds.
any particular molecule will oscillate longitudinally about some position and as a result, there is a local pressure and density variation in
time at any point.
The local pressure and density adjacent to the vocal cord vary periodically; however, the collisions with neighboring molecules cause
the pressure variation to propagate outward in space. Sound is this spatially periodic pressure (and density) longitudinal wave that travels
outward from the source. In a band where the pressure is high, so is the
density of the molecules and this pressure tends to push the molecules
apart. Similarly, in a band of lower density, the neighboring higher-pressure bands tend to restore the density and pressure toward their
mean values. The air is said to be compressed and rarefied in a periodic manner. The centers of the bands of higher and lower density (and
pressure) instantaneously have zero displacement because molecules from either side
have moved either toward or away from them, respectively (Figure 11.3). These positions are called displacement nodes. Furthermore, the maximum displacements of the
molecules, or antinodes, occur precisely at the bands of zero density variation located
between those of high and low density extremes. This agrees with our discussion of
the energy propagated along a traveling wave on a string in Chapter 10, where we
showed that the maximum energy occurs at the displacement nodes where the slope
of the string is greatest. For sound, the pressure nodes are the positions where the
pressure equals atmospheric and there is no pressure (or density) variation. We can
summarize the situation by stating that the displacement antinodes occur at the pressure nodes and the displacement nodes occur at the pressure antinodes. We return to
this idea in our discussion of musical instruments in Section 4.
We can write the pressure variation from atmospheric pressure in the form
ΔP = ΔPmax sin(kx − ωt),    (11.1)
where a positive value of ΔP corresponds to compression and a negative value to
expansion and the other variables are just as defined in Chapter 10 in our discussion
of traveling waves. There is a similar expression for the displacement of air molecules
Δs = Δsmax cos(kx − ωt).    (11.2)
According to our previous discussion, points of maximum displacement correspond to points of zero pressure variation; the change from a sine to a cosine function accounts for this difference because when the sine is zero, the cosine function
has an extreme value of ±1 (see Figure 11.3). Values for ΔPmax are usually very
small fractions of the ambient pressure (the maximum value that does not cause pain
to the ear is only 0.03% of atmospheric pressure) whereas values for Δsmax are
extremely small (with a value of about 10 μm corresponding to the pain threshold
just cited).
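These two thresholds are easy to express in absolute terms. A quick numerical sketch (assuming a standard atmosphere of 101,325 Pa, a value not given in the text):

```python
# Pressure amplitude at the pain threshold, quoted as 0.03% of
# atmospheric pressure. The 101,325 Pa standard atmosphere is an
# assumed handbook value.
P_atm = 101_325.0
dP_max = 3.0e-4 * P_atm  # 0.03% as a fraction

print(f"pain-threshold pressure amplitude: {dP_max:.1f} Pa")
```

So even the loudest tolerable sound perturbs the ambient pressure by only about 30 Pa.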
From our discussion, we might guess that the velocity of sound is related to the
mean velocity of the molecules themselves and this is true in an ideal gas. The speed of
sound, in general, depends on two parameters of the medium: its density ρ and a parameter of its elastic properties. For a fluid medium, the velocity of sound is given by
FIGURE 11.2 Schematic of density variations in air emanating from a vibrating vocal cord over one oscillation. The arrows indicate the oscillatory velocity of the local molecules. These density oscillations comprise the sound wave and travel outward at the speed of sound.

v = √(B/ρ),    (11.3)
where B is the bulk modulus, the elastic constant of proportionality between the pressure variation and the resulting volume strain (see Figure 3.17 and its discussion).
This equation has the same form as Equation (10.14) for the velocity of a mechanical wave on a string. There the tension serves as the elastic parameter and the linear
mass density (mass/length) is the volume mass density analog. For a long solid rod,
such as a railway track, the velocity of sound is given by a similar expression but with
the elastic modulus E replacing the bulk modulus in Equation (11.3).
The speed of sound in air at 20°C and 1 atm pressure is 343 m/s (about
770 miles/hour). Aircraft that break the “sound barrier” fly faster than this speed,
known as Mach 1. The Mach number is the ratio of the airspeed to the speed of
sound. Beyond Mach 1, also known as supersonic speeds, a shock wave is created. This is a directed wave in which the gas density and pressure change dramatically as the wave passes.
Because the density of gases is dependent on temperature, the speed of
sound in air actually increases approximately 0.6 m/s for each 1°C increase in
temperature, as the density decreases. In liquids and solids, which are much less
compressible or much “stiffer” than gases with correspondingly higher bulk or
elastic moduli, the speed of sound is much faster. Table 11.1 lists the velocity of
sound in various materials.
Table 11.1 Densities and Velocities of Sound

Material (20°C unless noted)    Density (kg/m3)    Speed (m/s)
Air                             1.20               343
Water                           998                1,482
Seawater                        1,025              1,522
Body tissue (37°C)              1,047              1,570
Glass (pyrex)                   2,320              5,170
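Equation (11.3) can be checked against Table 11.1. Using a handbook value of B ≈ 2.2 GPa for the bulk modulus of water (an assumed figure, not quoted in the text) together with the tabulated density reproduces the listed speed to within a fraction of a percent:

```python
from math import sqrt

# v = sqrt(B / rho), Equation (11.3), applied to water.
B_water = 2.2e9    # bulk modulus in Pa (assumed handbook value)
rho_water = 998.0  # density in kg/m^3, from Table 11.1

v_water = sqrt(B_water / rho_water)
print(f"speed of sound in water: {v_water:.0f} m/s")  # Table 11.1 lists 1,482
```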
Frequency, wavelength, and intensity are other parameters characterizing sound.
Audible sound corresponds to frequencies in the range of about 20–20,000 Hz.
Lower frequencies than this are called infrasonic, whereas higher frequencies
are called ultrasonic and are discussed later in this chapter. From the general relation
λ = v/f, wavelengths of sound waves can range from centimeters to many meters. The pitch of sound is the audible sensation corresponding most closely to frequency; increasing frequency corresponds to increasing pitch.
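The quoted range of wavelengths follows directly from λ = v/f evaluated at the limits of audibility (a sketch using the 343 m/s speed in air from Table 11.1):

```python
v = 343.0  # speed of sound in air at 20 deg C, m/s (Table 11.1)

lam_low = v / 20.0       # wavelength at the lowest audible frequency
lam_high = v / 20_000.0  # wavelength at the highest audible frequency

print(f"20 Hz  -> {lam_low:.2f} m")
print(f"20 kHz -> {lam_high * 1000:.1f} mm")
```

Audible wavelengths therefore run from about 17 mm up to about 17 m.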
Intensity represents the energy per unit time (or the power) crossing a unit surface area. Units for intensity are therefore J/(s·m²), or W/m². The intensity of
sound is discussed in some detail in the next section. Loudness is the audible sensation corresponding most closely to intensity, although there is no direct relation. For
example, at frequencies that are barely audible, a sound will not seem loud even if
the intensity is quite large. We discuss loudness later in the chapter after discussing
the ear and hearing.
FIGURE 11.3 Pressure or density variation along a sound wave in air. Zero displacements of air occur at the centers of the densest and least dense bands whereas maximum displacements occur where the density equals the mean density located midway between these bands.
2. INTENSITY OF SOUND
Sound is a longitudinal traveling wave that carries energy in the form of mechanical
oscillations of the medium. For a one-dimensional longitudinal traveling wave, such
as travels along an ideal spring as seen in Chapter 10 (where we neglect damping),
the amplitude of the wave remains constant along its direction of travel. In this case,
the energy per unit time, or power, traveling with the wave velocity is constant. The
wave can be pictured as traveling along a fixed direction of propagation and represented as a plane wave, one having parallel wavefronts. These are the surfaces constructed by connecting all in phase points along the direction of propagation. For a
one-dimensional wave, points of common phase are planes with normals along the
wave velocity direction. Sound traveling along a railroad track is an example of such
a one-dimensional sound wave, although there is some damping or attenuation of
sound over large distances.
In three-dimensional examples, however, as the wave spreads out spatially, the
energy crossing a unit cross-sectional area decreases with increasing distance from
the sound source (see Figure 11.4). It is therefore more common to speak about the
intensity of a three-dimensional wave than about its power. In this case, if the sound
originates at a localized source and flows outward in all directions, the wavefronts are
spherical and their surface area increases with radius from the source as A = 4πr².
If the power emitted by the source of sound is constant, then as the spherical wavefront travels outward, the total amount of energy crossing any spherical shell centered
at the source is the same. Therefore the energy per unit time crossing a unit area must
decrease at increasing distances from the source. Mathematically, the intensity of
sound is related to the power P generated by the source and the distance r from the
source by

I = P/A = P/(4πr²).    (11.4)

FIGURE 11.4 The power radiated from a point source into the pyramid shown with vertex at the source is a constant, thus the power density, or intensity, must decrease according to Equation (11.4).
If the power is constant then we see that the intensity is inversely proportional to
the square of the distance from the source
I ∝ 1/r².    (11.5)
This is a general characteristic of spherical waves of any type and has only to do with
the geometry of space.
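A short sketch of Equations (11.4) and (11.5); the 1 W source power is a hypothetical value chosen for illustration:

```python
from math import pi

def intensity(P, r):
    """Intensity (W/m^2) of a point source of power P (W) at distance r (m),
    Equation (11.4): I = P / (4 pi r^2)."""
    return P / (4 * pi * r ** 2)

P = 1.0  # hypothetical source power, W
I1 = intensity(P, 1.0)
I2 = intensity(P, 2.0)

# Equation (11.5): doubling the distance quarters the intensity.
print(I1 / I2)  # 4.0
```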
For all waves, whether mechanical, sound, light, or any other type, the intensity
I of the wave is proportional to the square of the wave amplitude. We know that this
is true in the case of a spring because the total spring energy is (1/2)kxmax², and the intensity will therefore be proportional to xmax². In the case of sound, the
intensity is given by
I = ΔPmax² / (2ρv),    (11.6)
where the intensity, pressure wave amplitude, and density values all refer to the same
spatial location. This expression can also be shown to be proportional to the square
of the amplitude of vibration of the medium, Δsmax. Recall from the last section that
these amplitudes are very small, with typical ΔPmax/Patm and Δsmax values of under a
few percent and submicrometer distances, respectively.
Sound intensities vary over an enormous range. The least intense sound that can
be heard by the human ear is called the threshold of hearing and is taken as 10⁻¹² W/m². Of course, this value actually varies from person to person as well as with a
person’s age. As the intensity increases so does the perceived loudness. The most
intense sound that the human ear can respond to without harm is called the threshold
of pain and is taken as 1 W/m2. Because of the enormous range of intensities to
which the ear responds, 12 orders of magnitude, sounds that are 10 times more
intense do not seem 10 times as loud to the ear. In fact, the ear responds nearly logarithmically to sound intensity, the sound loudness doubling for each decade increase
in intensity. A useful scale for intensity level is the decibel scale for which the sound
intensity level ␤ is given by
β = (10 dB) log(I/I0),    (11.7)

where the logarithm is the common logarithm, with base 10, I0 is a reference intensity, taken as the threshold of hearing (10⁻¹² W/m²), and the unit of sound intensity
is the decibel or dB (where 1 dB = 1/10 bel, named in honor of Alexander Graham
Bell). The scale is chosen so that at I = I0 the intensity level is 0 dB, whereas at the
threshold of pain, I = 10¹² I0, the intensity level is 120 dB (check this by substitution
in Equation (11.7)). Table 11.2 gives examples of various sounds and their corresponding intensity levels. We return to a discussion of the response of the ear to
sound intensity in Section 5 below.
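Equation (11.7) translates directly into a one-line function; the check suggested in the text, substituting the two threshold intensities, gives the endpoints of the scale:

```python
from math import log10

def intensity_level(I, I0=1e-12):
    """Sound intensity level in dB, Equation (11.7), with I0 the
    threshold of hearing (10^-12 W/m^2)."""
    return 10 * log10(I / I0)

print(intensity_level(1e-12))  # threshold of hearing: 0.0 dB
print(intensity_level(1.0))    # threshold of pain: 120.0 dB
```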
Table 11.2 Intensities of Sounds

Sound                           Intensity (W/m²)    Intensity Level (dB)
Threshold of hearing            10⁻¹²               0
Whisper                         10⁻¹⁰               20
Normal conversation (at 1 m)    10⁻⁶                60
Street traffic in major city    10⁻⁵                70
Live rock concert               10⁻¹                110
Threshold of pain               1                   120
Jet engine (at 30 m)            10                  130
Rupture of eardrum              10⁴                 160
Example 11.1 Find the ratio of the intensity of two sounds that differ by 3 dB.
Solution: Let the two intensities be I1 and I2. According to Equation (11.7), the
two sounds have intensity levels given by β1 = 10 log(I1/I0) and β2 = 10 log(I2/I0), so that if
the two sounds differ by 3 dB, we have that

β2 − β1 = 3 dB = (10 log(I2/I0) − 10 log(I1/I0)) = (10 log I2 − 10 log I0 − 10 log I1 + 10 log I0) = 10 log(I2/I1).
Solving for the ratio of the intensities, we find I2/I1 = 10^0.3 = 2.0. Any two
sounds differing by 3 dB have intensities that differ by a factor of two. The best
human ears can hear a difference in loudness corresponding to about 1 dB. To
what ratio of intensities does this correspond?
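The closing question can be answered by inverting Equation (11.7): a level difference of β dB corresponds to an intensity ratio of 10^(β/10).

```python
# Intensity ratio corresponding to a given difference in dB.
ratio_3dB = 10 ** (3 / 10)
ratio_1dB = 10 ** (1 / 10)

print(f"3 dB -> intensity ratio {ratio_3dB:.2f}")  # 2.00, as in the example
print(f"1 dB -> intensity ratio {ratio_1dB:.2f}")  # 1.26
```

So the smallest loudness difference the best ears can detect corresponds to intensities differing by about 26%.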
3. SUPERPOSITION OF SOUND WAVES
REFLECTION, REFRACTION, AND DIFFRACTION
When sound waves traveling in more than one dimension come to a boundary between two different media, additional considerations beyond what we
have seen in the last chapter are required. Consider the case of a plane
boundary between two different media and let’s imagine a sound wave traveling through one medium and impinging on the boundary. Let’s take the
wave to be a plane wave, with all points along a plane wavefront in phase, an
often-used idealized wave that is traveling in synchrony in a particular direction. The wavefronts are drawn perpendicular to the propagation direction as
shown in Figure 11.5. When this wave meets the boundary, as in the case of
waves on a string, there will be a reflected wave as well as a transmitted
wave. If the wave approaches the boundary along the perpendicular, or normal, to the planar boundary, then the reflected and transmitted waves will
remain along that direction and the problem is quite similar to the onedimensional case of waves on a string.
FIGURE 11.5 Reflection and refraction of an incident plane wave at a planar boundary between two different media, showing the incident, reflected, and refracted angles θinc, θrefl, and θrefr.
If the wave approaches the boundary along a line making an angle θincident with the
normal to the planar boundary then the reflected and transmitted waves do not travel
along the same line. In such a case the reflected wave remains in the incident medium,
remains a plane wave, and propagates in a direction making an angle θreflection with the
boundary normal that is equal to the incident angle as shown in Figure 11.5. The incident wave, reflected wave, and normal to the surface all lie in a common plane, known
as the plane of incidence. These two sentences comprise a statement of the law of
reflection: the reflected wave lies in the incidence plane at an angle of reflection equal
to the incident angle. When we study sound further and optics later on we show some
consequences of this law for acoustic and light waves. Although seemingly simple,
this law is fundamental to ultrasonic imaging, the functioning of mirrors, the imaging
of x-rays, and a wide variety of applications in optics.
The transmitted wave enters the second medium but is deviated from the original propagation direction. Due to the different speed of the wave in the second
medium, the wavelength (but not the frequency) is changed and the wave direction is
bent or refracted (Figure 11.5). The angle of refraction, or the angle between the
direction the transmitted wave travels and the normal to the surface, can be related to
the incident angle and the ratio of wave velocities in the two media by

sin θincident / sin θrefracted = vincident / vrefracted,    (11.8)

which is known as the law of refraction.
Example 11.2 An ultrasonic wave is incident on a person’s abdomen at a 20°
angle of incidence. Where should it be directed so as to hit a kidney stone
located 7 cm beneath the surface as shown in the figure? The ultrasonic waves
are emitted directly into an aqueous gel coating the abdomen. Take the speed of
sound in the gel to be vgel = 1400 m/s and in body tissue vtissue = 1570 m/s, and
specify the location in terms of the transverse distance x from the normal to the
surface going through the kidney stone.
Solution: The wave entering the abdomen tissue will refract at the surface, entering at an angle of refraction given by

sin θrefract = sin θinc (vtissue/vgel) = sin 20° × (1570/1400) = 0.38,

so that θrefract = 22.6°. To then hit the kidney stone 7 cm beneath the surface, we
must have that tan θrefract = x/(7 cm), so that x = 2.9 cm along the surface from
the normal. Note that without making the correction for refraction the distance x
would be 7(tan 20°) = 2.5 cm, and the wave would probably miss the kidney
stone.
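The arithmetic of Example 11.2 can be reproduced in a few lines using Equation (11.8):

```python
from math import asin, degrees, radians, sin, tan

v_gel, v_tissue = 1400.0, 1570.0  # speeds of sound from the example, m/s
theta_inc = radians(20.0)         # angle of incidence

# Law of refraction, Equation (11.8)
sin_refr = sin(theta_inc) * (v_tissue / v_gel)
theta_refr = asin(sin_refr)
x = 7.0 * tan(theta_refr)  # transverse offset for a stone 7 cm deep

print(f"theta_refract = {degrees(theta_refr):.1f} deg")  # 22.6 deg
print(f"x = {x:.1f} cm")                                 # 2.9 cm
```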
FIGURE 11.6 Diffraction of water waves around obstacles. Ripples spreading out from
bottom center diffract around rocks and are seen in their “shadow” region.
One other general property of waves should be briefly mentioned here. When a
wave meets either an obstacle or a hole in a reflecting boundary, it spreads out behind
the obstacle or hole into the “shadow” region (Figure 11.6). The extent of this diffraction, or bending, of the wave depends on the wavelength of the wave relative to the size
of the obstacle or hole. If the physical dimensions of the object are much larger than
the wavelength then there will be little diffraction of the wave, but if the object is comparable to or smaller than the wavelength there can be dramatic spreading of a wave
around an obstacle or behind the edges of a hole. When we study optics we show that
diffraction sets fundamental limits on our ability to “see” microscopic objects.
TEMPORAL SUPERPOSITION
Up until now we have been discussing sound as if it were of a single frequency, as in
Equation (11.1). Almost all of the sounds we hear cannot be described in such simple
terms, but can be thought of as the superposition of a variety of pure sine waves each
of a different frequency and amplitude. Figure 11.7a shows a time record of the
amplitude of vibration of air for a relatively simple sound. An analysis of this sound
record (waveform) is usually presented in the form of a spectrum, in which the amplitudes of the different frequency components are plotted as a function of the frequency
(Figure 11.7b). In simple cases there will be a small number of discrete frequency
components present, as in our example in which there are four components. These are
the resonant frequencies of the sound source. As we discussed in Chapter 10, the lowest frequency is called the fundamental whereas often the other frequency components in the spectrum will be integral multiples of the fundamental and are known as
harmonics.
The mathematics involved in the superposition of harmonics of varying amplitude is known as Fourier series and is illustrated in Figure 11.8 for the example of
the previous figure. The four different sine curves, with relative amplitudes and frequencies given by the spectrum in Figure 11.7b, add together to reproduce the sound
waveform of Figure 11.7a. In fact, any periodic waveform, no matter how complex,
can be represented as the superposition of harmonics according to Fourier’s theorem.
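Fourier synthesis of this kind is easy to sketch. The amplitudes and frequencies below are invented for illustration (they are not the actual values of Figure 11.7b); the check shows that a sum of harmonics repeats with the period of the fundamental:

```python
from math import pi, sin

def waveform(t, components):
    """Fourier synthesis: sum of (amplitude, frequency) sine components."""
    return sum(A * sin(2 * pi * f * t) for A, f in components)

# A hypothetical 500 Hz fundamental plus three harmonics.
comps = [(1.0, 500.0), (0.6, 1000.0), (0.4, 1500.0), (0.2, 2000.0)]

# Any superposition of harmonics is periodic with the fundamental's
# period, here T = 1/500 s = 2 ms.
T = 1 / 500.0
t = 1.23e-3
print(abs(waveform(t, comps) - waveform(t + T, comps)) < 1e-9)  # True
```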
Musical sounds are characterized by spectra that are constant over periods of
time of at least fractions of a second, the duration of the musical notes being played.
The waveform of a musical sound is therefore repetitive over at least that time interval. Noise, on the other hand, is characterized by a chaotic frequency spectrum that
FIGURE 11.7 (a) Amplitude versus time for a simple sound. (b) Spectrum of frequency components for the sound in (a).
changes rapidly with time and is nonrepetitive. Example spectra from complex music
and from noise are shown in Figure 11.9. Each musical instrument has its own unique
spectral tone that accompanies the playing of any particular note. Detailed analysis
of the Fourier composition of these tones from different instruments has led to digital synthesizers that can mimic the sounds from a large variety of musical instruments
with high quality. For each note played by these “computers” to mimic an instrument,
the appropriate set of overtones is added to give the proper tone quality for that particular instrument. The analysis and synthesis of musical tones has progressed to the
point where some digital synthesizers can actually give better tone quality than even
moderately priced individual instruments.
Let’s examine the particularly simple case of the temporal (time) superposition
of two pure tones of the same amplitude that are relatively close together in frequency. What will we hear if this occurs? We show just below that we’ll hear a sound
at the average of the two frequencies that has an intensity that varies slowly in time
in a whining fashion. The tone of the sound does not change but the intensity oscillates at the difference, or beat, frequency resulting in a slow repetitive whine as
briefly discussed in Section 4 of Chapter 10 (see Figure 10.15).
If we listen to these two sounds at the same spatial location, we can write expressions for the time variation of their amplitudes as
y1 = A sin ω1t  and  y2 = A sin ω2t.    (11.9)
Superposition of these two sounds results in a time-varying signal given by
y = y1 + y2 = A(sin ω1t + sin ω2t).    (11.10)
By using the same trigonometric identity previously used to get Equation
(10.16),
sin u + sin w = 2 cos((u − w)/2) sin((u + w)/2),

FIGURE 11.8 The waveform from Figure 11.7, in black, is the sum of the four colored sine curves with frequencies and amplitudes from Figure 11.7b shown in this Fourier series addition.
FIGURE 11.9 (left) Noise frequency spectrum from hitting a table with a plastic ruler; (right) black curve is spectrum from a trumpet.
we can rewrite Equation (11.10) as
y = [2A cos((ω1 − ω2)t/2)] sin((ω1 + ω2)t/2).    (11.11)
If the two angular frequencies are nearly equal, then the average value (in the
second term) is approximately equal to each original frequency, whereas the difference term has a much lower frequency, close to zero. We can think of this as resulting in a time-varying amplitude prefactor multiplying a sine term with angular
frequency equal to the average
y = [2A cos(Δωt)] sin(ω̄t),    (11.12)

where Δω = (ω1 − ω2)/2 and ω̄ = (ω1 + ω2)/2, and the square bracket emphasizes
that this term is a more slowly varying amplitude. Because the intensity is proportional to the square of this amplitude, a beat, or maximum sound, will occur when
cos Δωt is equal to either 1 or −1. This occurs at an angular frequency of twice Δω,
or at ω1 − ω2. The corresponding beat frequency is
fbeat = |f1 − f2|,    (11.13)
and it is at this frequency that one hears the loudness pulsate. Listening to beats is a
commonly used method of tuning musical instruments. Using calibrated standard tones, the instrument is adjusted to make the beat frequency as low as possible, the beats eventually disappearing when the two tones have matched frequencies.
Example 11.3 Suppose that two small speakers each play a pure tone. If one
speaker emits a frequency of 1000 Hz and you hear a beat frequency of 5 Hz,
what is the wavelength difference between the two tones?
Solution: The frequency of the second tone is either 1005 or 995 Hz, both of
which would produce 5 beats/s. Using the speed of sound in air from Table 11.1,
the wavelength of the first tone is (343/1000) ⫽ 0.343 m. The second tone has
a wavelength of either (343/1005) ⫽ 0.341 m, or (343/995) ⫽ 0.345 m, both
giving a wavelength difference of about 2 mm.
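A quick numerical check of Example 11.3, using Equation (11.13) and the 343 m/s speed from Table 11.1 (the difference comes out just under 2 mm):

```python
v = 343.0  # speed of sound in air, m/s (Table 11.1)
f1, f_beat = 1000.0, 5.0

for f2 in (f1 + f_beat, f1 - f_beat):  # both satisfy f_beat = |f1 - f2|
    dlam = abs(v / f1 - v / f2)        # wavelength difference, lambda = v/f
    print(f"f2 = {f2:.0f} Hz -> difference {dlam * 1000:.1f} mm")
```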
SPATIAL SUPERPOSITION
After having examined the superposition of two different frequency sound waves, we
now turn to the situation when two sounds, produced at different locations, combine at
some point in space. In this case we can write the two sound waves in one dimension as
y1 = A1 sin(kx − ωt + φ1)  and  y2 = A2 sin(kx − ωt + φ2),
where A1 and A2 are the amplitudes, φ1 and φ2 the phase angles, and k and ω, as
usual, are given by k = 2π/λ and ω = 2π/T = 2πf, and we have assumed the two
sounds have the same frequency and wavelength. The phase angles account for the
relative shift of the sine curves with respect to the origin of coordinates because in
general the two waves originated at different locations with different phases. Setting
x ⫽ 0 in the expressions for y1 and y2, the phase angles are seen to determine the
amplitudes at a given time at the origin and thereby at any other point x. At a point
where these two sound waves overlap the net amplitude is simply the sum of the individual amplitudes and the intensity is proportional to the square of those amplitudes.
To simplify the problem, suppose that the two amplitudes are also equal to each other
(we have considered a similar problem in Section 4 of Chapter 10 for waves on a
string). Then, using an argument similar to the one that led to Equation (11.11) above, we can
write that
ynet = y1 + y2 = 2A[cos((φ1 − φ2)/2)] sin(kx − ωt).    (11.14)
We see that when these two sounds combine at a point in space, the net amplitude
depends on the relative phases of the two waves. If the two waves have some definite
phase relationship that remains constant in time (i.e., the phase angles φ1 and φ2 are
constants), the two waves are said to be spatially coherent and exhibit interference. At
each point in space, if the two sine waves are “in phase”, meaning they have zero
phase difference, then because cos(0) = 1, the net amplitude is 2A, just as you would
expect when two identical sine curves exactly overlap in space (Figure 11.10). This is
known as constructive interference. If the two sine waves are out of phase by 180°, or
π radians, then because cos(90°) = 0, the two waves exactly cancel, again just as
expected if the waves are shifted with respect to each other by half a wavelength. This
is known as total destructive interference. In any intermediate situation, Equation
(11.14) gives the net amplitude, which in general lies between 0 and 2A.
Because the intensity is proportional to the square of the amplitude, the intensity
of the combined sound wave will be between 0 and 4I, where I is the intensity of each
of the two sounds. This should seem strange at first glance because the intensity is a
measure of the energy carried by the sound wave, and energy must be conserved. So
if each wave carries an intensity I, how can the sum ever be larger than 2I? What’s
going on here? It is clear that if the intensity of the combined sound wave is averaged
over a large region of space that the average intensity must be 2I, since each sound
wave carries intensity I. The phenomenon of interference leads to a redistribution of
the energy, concentrating it in some regions and depleting it in others, depending on
FIGURE 11.10 Interference between two waves. (left) Two in-phase waves, with their constructive interference superposition at bottom; (right) two equal-amplitude out-of-phase waves showing complete destructive interference when added together at bottom.
the phase relationship of the waves; maxima have intensity 4I, but
minima have zero intensity.
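This energy bookkeeping can be verified numerically. A sketch for two equal, coherent waves of individual intensity I, whose net amplitude 2A cos(Δφ/2) from Equation (11.14) gives a net intensity 4I cos²(Δφ/2):

```python
from math import cos, pi

def net_intensity(I, dphi):
    """Net intensity of two equal, coherent sounds of intensity I with
    phase difference dphi: 4 I cos^2(dphi / 2)."""
    return 4 * I * cos(dphi / 2) ** 2

I = 1.0
print(net_intensity(I, 0))   # constructive interference: 4I
print(net_intensity(I, pi))  # total destructive interference: ~0

# Averaged over all phase differences, the mean intensity is 2I,
# so interference only redistributes the energy of the two waves.
N = 1000
avg = sum(net_intensity(I, 2 * pi * k / N) for k in range(N)) / N
print(round(avg, 6))  # 2.0
```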
Although we have limited our discussion to one-dimensional
waves, real sound waves travel in real space. In Figure 11.11 we
show two experimental measurements of the superposition of two
waves emanating from “point sources” and traveling radially outward. On the top is a photo of the surface ripples in a water tank; the measurement on the bottom, made using NMR techniques sensitive to the local density, shows an image of sound waves traveling through a material simulating human tissue. We show later
how this methodology can be used to image inside the human
body. In the last section of this chapter we return to take a further
look at imaging inside human tissue with ultrasound.
Another example of interference effects in three dimensions arises in the design of a musical auditorium or concert hall, where interference can create serious acoustic problems. Because sound reverberates off the walls as well as traveling directly to a listener in the audience, the listener hears the superposition of a complex collection of sound waves. Depending on the phase relationships of the different sound waves, there can be "dead spots" in an auditorium where there is significant destructive interference. Special baffles, as well as ceiling and wall designs and materials, are used to reduce direct reflections in order to avoid this problem.
We return to the very important and general phenomenon of
interference when we discuss other types of waves, including light
and also matter waves in our discussion of quantum mechanics.
4. PRODUCING SOUND
Aside from incidental sounds generated from chemical or other forms of energy, such
as the crackling of a campfire or the noise when a branch of a tree falls (even in a forest with no one around), the production of sound usually involves two requirements:
a way to generate mechanical vibrations and a resonant cavity structure to amplify
and “shape” the sound. Here we discuss the generation of music from a variety of
instrument types. Each of these generates mechanical vibrations of a string, wire, or
drumhead (as in stringed instruments, pianos, or drums, respectively), or of the air
directly by vibrations of a reed (woodwinds) or the lips (brasses). The music generated then acquires its tone and quality from a resonant cavity such as the hollow
wooden body of a stringed instrument or the tube of a woodwind or brass instrument.
A loudspeaker produces sound by converting an electrical signal into mechanical
vibrations of a diaphragm. The mechanism for this conversion is the electromagnetic
force, discussed later, used to vibrate the diaphragm. In this case the shape and design
of the diaphragm help to amplify and direct the sound.
Let’s first review the generation of sound by a string held under tension, discussed
in Section 5 of Chapter 10, as a model for a stringed instrument such as a violin.
FIGURE 11.11 (top) Interference of ripples of water waves in a tank; (bottom) magnetic resonance techniques used to image the interference between two sound waves inside a material medium from "point" sources at the top. Note the similarities.

Excitation by plucking or bowing the string results in standing waves. The fundamental frequency is determined by the requirement of nodes only at the two fixed ends of the string, so that the fundamental wavelength is twice the string length, yielding

f₁ = v/2L,  (string)  (11.15)

where v is the wave speed and L is the string length between fixed points. Recall that the wave speed on a string is given by

v = √(T/(m/L)),
FIGURE 11.12 Examples of simple
standing wave patterns on the
back-plate of a violin. The dark
lines, formed by black sand, represent nodal lines where the wood
does not vibrate.
where T is the tension in the string and m/L is its mass
per unit length. In a violin, the four strings each have a
different mass per unit length and the tensions are adjusted
to tune the fundamental frequency appropriately. Recall
also that the harmonics are given as integral multiples
of the fundamental frequency. When a string on a violin
is played, not only does the string vibrate; so do the
entire volume of air within the wooden body and
the wood itself. These vibrations not only help to amplify
the sound by more effectively causing the air to vibrate,
but also add depth and quality to the sound. Figure 11.12
shows two examples of simple vibration patterns of a violin.
In general the standing wave patterns of the wood of the violin can be quite
complicated.
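Equation (11.15) and the wave-speed relation above combine into a few lines of code. The numbers below are illustrative guesses for a violin A string, not values from the text: a 0.33 m string with a mass per unit length of 0.6 g/m under about 51 N of tension.

```python
import math

def string_fundamental(tension, mass_per_length, length):
    """f1 = v/(2L) with wave speed v = sqrt(T/(m/L)), Eq. (11.15)."""
    v = math.sqrt(tension / mass_per_length)
    return v / (2 * length)

# Illustrative values (not from the text): roughly a violin A string.
f1 = string_fundamental(tension=51.0, mass_per_length=0.6e-3, length=0.33)
print(round(f1, 1))   # near 440 Hz
```

Tightening the string (larger T) or shortening it (smaller L) raises the pitch, exactly as a violinist does when tuning or fingering.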
Wind and brass instruments have a resonant tube that serves to amplify only
those frequencies that produce a standing wave pattern. There are two main configurations that occur in different musical instruments: tubes with two open ends,
such as in a flute (Figure 11.13a) or organ pipe, where the blowhole serves as an
open end, and tubes with one open and one closed end, such as a trumpet or trombone, where the lips act as a closed end. Figure 11.13b shows a simple schematic
of both cases.
The conditions at the tube ends, known as the boundary conditions, are what
determine the nature of the standing waves produced. At a closed end, because
air is not able to oscillate longitudinally due to the wall, there must be a node
of displacement and the sound is completely reflected, neglecting losses. At
the open end, the sound wave is partially reflected and partially transmitted out
of the resonant tube. Although it is less obvious, there must be a displacement
antinode at the open end. We can see this by first observing that because atmospheric pressure outside the tube serves to maintain a constant pressure at the open
end, there must be a node of pressure variation there. Any increase or decrease
from atmospheric pressure at the open end is immediately compensated for by
bulk flow of outside air to maintain a constant pressure node. As discussed in
Section 1, positions of pressure nodes correspond to displacement antinodes, and
so we see that the proper boundary condition at a tube open end is a displacement
antinode.
From these boundary conditions it is straightforward to detail the fundamental
and harmonic frequencies allowed for each configuration of a resonant tube. For
tubes that are open at both ends, the fundamental resonant mode has a displacement
antinode at each end so that half of one wavelength corresponds to the tube length L
(see Figure 11.14a). Therefore the fundamental wavelength is 2L and the fundamental frequency is v/2L. Each higher harmonic adds an additional node giving a set of
resonant mode wavelengths
λₙ = 2L/n,  n = 1, 2, 3, …,  (11.16)

FIGURE 11.13 (left) Emily playing a flute as a resonant tube; (right) simple models for wind and brass instruments: a resonant tube open at both ends and a resonant tube open at one end.

where the integer n is the harmonic number. Corresponding to these wavelengths are the resonant frequencies of the open tube

fₙ = v/λₙ = nv/2L = nf₁,  n = 1, 2, 3, …  (open tube),  (11.17)
where v is the speed of sound.
For tubes that are open at one end and closed at the other, the fundamental has
an antinode at the open end and a node at the closed end so that only 1/4 wave fits in
the tube length L (see Figure 11.14b). Therefore the fundamental wavelength is equal
to 4L. Each higher harmonic adds one additional node within the tube giving a set of
resonant wavelengths
λₙ = 4L/n,  n = 1, 3, 5, …,  (11.18)

FIGURE 11.14 The first three resonant modes of (a) a tube open at both ends: n = 1 blue; n = 2 red; n = 3 green; and (b) a tube open at one end: n = 1 blue; n = 3 red; n = 5 green.

where in this case only odd harmonics are present. The corresponding resonant frequencies are

fₙ = v/λₙ = nv/4L = nf₁,  n = 1, 3, 5, …  (tube closed at one end),  (11.19)
We see that for a tube closed at one end, only the odd harmonics are present. The
differences in each of these cases (as well as those of resonant modes on a string) are
due to the different boundary conditions.
Example 11.4 Compare the resonant frequencies from two tubes, one open at
both ends with twice the length of the second one which is closed at one end.
Will they have the same fundamental and harmonics?
Solution: For the open tube the resonant frequencies are given by

fₙ = nv/2L_o,  n = 1, 2, 3, …,

whereas the tube closed at one end will have resonant frequencies given by

fₙ = nv/4L_c,  n = 1, 3, 5, ….

Because L_o = 2L_c, the fundamental frequencies (for n = 1) will be the same for the two tubes. However, notice that the closed tube will be missing every other harmonic that the open tube has, although the common frequencies will match.
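A quick numerical check of Example 11.4, using Equations (11.17) and (11.19); the tube lengths here are illustrative choices, not from the text.

```python
def open_tube(v, L, count=6):
    """Eq. (11.17): f_n = n*v/(2L), n = 1, 2, 3, ..."""
    return [n * v / (2 * L) for n in range(1, count + 1)]

def closed_tube(v, L, count=3):
    """Eq. (11.19): f_n = n*v/(4L), n = 1, 3, 5, ..."""
    return [n * v / (4 * L) for n in range(1, 2 * count, 2)]

v = 343.0          # speed of sound in air, m/s
L_c = 0.5          # closed-tube length in m (illustrative)
L_o = 2 * L_c      # the open tube is twice as long, as in the example

print(open_tube(v, L_o))    # 171.5, 343.0, 514.5, 686.0, 857.5, 1029.0
print(closed_tube(v, L_c))  # 171.5, 514.5, 857.5 -- every other harmonic
```

The fundamentals agree, and the closed tube's harmonics line up with the odd-numbered harmonics of the open tube, as the example concludes.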
For a circular drumhead, the standing wave patterns observed when the drumhead is made to vibrate are two-dimensional and arise from the condition that there must be a node at the fixed circular boundary. The fundamental has a single antinode at the center of the drumhead so that the entire membrane oscillates together. Higher-order modes of vibration include a variety of interesting patterns, some of which are shown in Figure 11.15.
FIGURE 11.15 Examples of modes
of vibration of a circular drumhead.
5. THE HUMAN EAR: PHYSIOLOGY AND FUNCTION
FIGURE 11.16 Overall structure
of the ear.
Hearing is one of the primary sensory systems in man as
well as in many animals. It gives us information about our
surroundings, allows for oral communication, and gives us
pleasure in listening to music. Although hearing is one of
the earliest biophysical systems studied, until quite recently
there was surprisingly little known about the fundamental
physical processes involved. This is due, in part, to the
extremely complex and nonlinear nature of these processes
and also to the location of the ear within the skull in close
proximity to the brain, making it difficult to study in detail
while intact and functioning normally. Here we summarize
the important features and functions of the various portions
of the ear.
The ear is composed of three sections, the outer (or
external), middle, and inner ear, each of which has a specific purpose in the transduction of sound from a pressure wave in the air to an electrical signal that is interpreted as sound by the brain (Figure 11.16). The outer ear consists of the external
pinna and the outer auditory canal that ends at the tympanic membrane (or ear
drum). In the air-filled middle ear lie the three tiny bones, the ossicles, known as the
malleus (hammer), incus (anvil), and stapes (stirrup) already introduced in Section
2 of Chapter 8 in connection with the hydraulic effect. The middle ear is bounded
by the tympanic membrane on the outer side and the oval window on the inner side.
There is also a connection, through the Eustachian tube, to the pharynx. This is important in equalizing pressure between the middle and outer ear; when the tube is clogged, painful infections can result.
oval window lies the inner ear, a complex multichambered cavity that contains both
the semicircular canals involved in balance (but not in hearing) and the cochlea, the
transduction center of hearing.
OUTER EAR
Serving two functions, the outer ear amplifies sound and protects the delicate
tympanic membrane. Protection is accomplished by providing a narrow (~0.75 cm
diameter), long (~2.5 cm) tube or ear canal, lined with hairs and wax-secreting
cells. In many animals the pinnae can be directed at the source of sound and
can help not only to increase sensitivity to sounds but also to locate their source.
In humans the pinnae serve no known purpose other than wiggling to make people
laugh.
Amplification occurs because the ear canal serves as a resonator. Recall that a
tube with one closed and one open end has a fundamental resonant wavelength equal
to four times the tube length. If we approximate the ear canal as such a tube, we find
that the resonant wavelength is about 10 cm, corresponding to a frequency of 3430 Hz
(using the velocity of sound in air as 343 m/s). In fact our ears are most sensitive near
this frequency as discussed later. Although the closed end of the ear canal, the tympanic membrane, is fairly thick (~0.1 mm) and stiff, both it and the walls of the ear
canal are elastic and there is not a sharp resonance, but a broad resonance spanning
about three octaves (frequency doublings) with a peak at about 3300 Hz. Typically
sound in the range from 1.5 kHz to 7 kHz is amplified by about 10–15 dB (a factor of
10–30) by the outer ear.
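The quarter-wave estimate for the ear canal is easy to reproduce. The 2.5 cm length and 343 m/s speed are from the text; the decibel-to-intensity conversion assumes the standard definition, intensity ratio = 10^(dB/10).

```python
v = 343.0     # speed of sound in air, m/s (from the text)
L = 0.025     # ear canal length, ~2.5 cm (from the text)

wavelength = 4 * L            # closed-open tube: fundamental is a quarter wave
frequency = v / wavelength
print(wavelength, frequency)  # 0.1 m and 3430 Hz, matching the text

# A 10-15 dB boost means an intensity ratio of 10^(dB/10):
for boost in (10, 15):
    print(boost, "dB ->", round(10 ** (boost / 10), 1))
```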
As we show in the next section, sound in air does not penetrate water very well.
Just think of how quiet it gets when you submerge your head under water in a bath
or when swimming. Over 99.9% of the sound energy traveling in air is reflected from
water. How then does sound, traveling in air, enter the cochlea, a fluid-filled tiny
coiled structure, in order for us to hear?
282
SOUND
MIDDLE EAR
The middle ear functions to efficiently transmit and amplify
sound from the vibrating tympanic membrane (ear drum) to
the oval window at the entrance to the cochlea. The ossicles
are suspended by a set of ligaments and muscles so that the
malleus is in close proximity to the tympanic membrane, and
the “footplate” of the stapes is in the oval window, basically a
hole in the bone surrounding the inner ear (see Figure 11.17).
Fluctuating pressure differences between the outer and middle ear will cause the tympanic membrane to vibrate. (Excess
pressure within the middle ear is relieved via the Eustachian
tube. When in a rapidly descending airplane, the pressure
buildup in the middle ear can be painful and can even cause a
temporary hearing loss. A similar pressure increase can occur
in an infected ear.) The ossicles provide a transmission and
amplification mechanism in two basic ways.
First, there is some “lever action” of the mechanical
force transmission from the malleus to the stapes, providing
roughly a 30% increase in the force. In addition, there is a
large (~17-fold) reduction in area from that of the tympanic
membrane to that of the portion of the stapes in contact with
the oval window. This reduction in area results in a phenomenon similar to the "hydraulic pressure" effect, with a corresponding increase in pressure. The ratio of the pressure at the oval window to that
at the tympanic membrane is given by
P_oval/P_tymp = (F_oval/A_oval)/(F_tymp/A_tymp) = (F_oval/F_tymp)(A_tymp/A_oval) = (1.3)(17) ≈ 22.  (11.20)

FIGURE 11.17 The middle ear (see also Figure 8.3).
Thus, the overall theoretical pressure amplification (ignoring damping losses) of
this simple model is about a factor of 22, comparing quite well with the actual experimental value of about 17. The middle ear effectively changes the larger amplitude,
smaller pressure vibrations of the tympanic membrane to smaller amplitude, larger pressure vibrations at the oval window. This is precisely what
is needed in order to effectively couple the sound waves into the fluid of
the cochlea. The middle ear is said to act as an impedance matching system (see the next section), allowing the maximum transmission of energy.
FIGURE 11.18 The cochlea of the
inner ear.
INNER EAR
It is the cochlea of the inner ear that converts sound energy into an electrical
signal sent via the auditory nerve to the auditory centers of the brain for
interpretation. Humans can hear without a tympanic membrane and without
ossicles, although there is significant loss of hearing under these conditions,
but the cochlea has been thought to be essential for hearing. Recent cochlear
implants have had some success in direct coupling to auditory nerves. Each
inner ear is actually a cavity in the temporal bone (the hardest bone in the
body) with six independent sensory organs (Figure 11.18): there are two
detectors of linear acceleration, the saccule (mainly detecting vertical accelerations) and utricle (mainly detecting horizontal accelerations); three
FIGURE 11.19 A cross-section of the cochlea showing the three parallel ducts that spiral around the organ.

FIGURE 11.20 The organ of Corti, showing the three chambers (scala tympani (3), scala vestibuli (2), and scala media (1)), basilar membrane (4), and tectorial membrane (5).
semicircular canals, each monitoring angular acceleration about a different
orthogonal axis and aiding in maintaining balance; and the cochlea, a fluid-filled, snail-shaped cavity with three turns having a total length of about
35 mm and ending in a closed apex. All of these detectors function in essentially the same way. Each contains hair cells that are mechanically sensitive
and serve as the basic transducers, converting mechanical forces, due to
accelerations or sound waves, into electrical signals.
Along the cochlea there are three parallel ducts filled with fluid
(Figure 11.19). The total fluid volume is about 15 μl, roughly a drop of
water. The basilar membrane separates two of these, the scala tympani
and the scala media, or cochlear duct, and is the site of the organ of Corti
where the hair cells are located and the transduction occurs. The third, the
scala vestibuli, is separated from the cochlear duct by Reissner's membrane
and connects with the scala tympani at the apex through a small opening.
If we imagine the cochlea to be unwound and examine a detail of the
organ of Corti (Figure 11.20), all of the “action” occurs between the basilar
and tectorial membranes along the length of the cochlea. There are about
16,000 hair cells in this region, each of which has a hair bundle, composed of about 50–100
stereocilia projecting from its apex into the surrounding fluid in precise geometric patterns. Each stereocilium is a thin (0.2 μm) rigid cylinder composed of cross-linked actin filaments; the stereocilia increase uniformly in length from about 4 μm at the stapes end to about 8 μm at the apex end of the cochlea (Figure 11.21). The stereocilia are so rigid
that applied forces do not bend them; instead they pivot at their base. Within a hair bundle,
all the stereocilia are interconnected by filamentous cross-links so that the entire hair bundle moves together. For this to occur, stereocilia must slide along their neighbors by breaking and reattaching filamentous cross-links in a complex and incompletely understood
process. It is thought that this relative sliding mechanism results in ion channels opening
and closing along the stereocilia membrane that, in turn, lead to the propagation of electrical signals down to the hair cell base. These electrical signals then trigger the release of
a chemical neurotransmitter near synaptic junctions leading to nerve cells comprising the
auditory nerve. We study nerve conduction in much more detail later in this book.
So, in principle, we see the path by which sound waves in air are eventually converted into an electrical signal along a nerve fiber. Sound waves collected by the outer
ear vibrate the tympanic membrane. In turn, through mechanical vibrations, the stapes
sets up traveling waves along the basilar membrane and other structures of the cochlea.
For the stapes oscillations to effectively produce vibrations within the fluid of the inner
ear, there must be another site for pressure relief because the fluid is incompressible; this
is the round window. There are actually two types of hair cells, known as inner and outer.
The outer hair cells are attached to the tectorial membrane and have efferent (motor)
FIGURE 11.21 (left) Electron microscope detail of hair cells of the cochlea, inner hair cells in a nearly linear array in the background and outer hair cells in a characteristic pattern; (middle) inner hair cells; (right) outer hair cells (bar = 3 μm).
neuron connections so that they do not provide information to the brain, but instead play
an active feedback role, taking signals from the brain and modifying the elastic interaction between the basilar and tectorial membranes. Such processes are inherently both extremely complex and nonlinear. The inner hair cells on the organ of Corti are
sheared by relative motions of the basilar membrane in the surrounding fluid to produce
an electrical change in the stereocilia membrane leading to a series of electrochemical
events that culminate in the recognition of sound in the auditory cortex of the brain.
Although we have given a reasonably complete outline of the primary mechanism for
the transduction of sound to nerve impulse, a number of general unanswered questions
remain, among them: how do we distinguish sounds of different frequency and intensity?
FREQUENCY RESPONSE
Our early understanding of how we hear different frequencies of sound is due to von
Békésy during the 1940s to 1960s, although a more complete picture came only in the
1980s. The key point is that the basilar membrane acts as a frequency filter in an as yet
incompletely understood, but remarkable way. Vibrations of the stapes result in traveling waves of varying amplitude along the basilar membrane. These waves have a maximum amplitude that occurs at different distances along the cochlear spiral from the
stapes, with higher frequencies having a maximum closer to the stapes and lower frequencies having their maximum further toward the apex (Figure 11.22). At high enough
frequencies there is no displacement at all near the apex. The variation in the position of
the wave amplitude maximum reflects variations in the basilar membrane thickness,
elastic properties, and structure along the spiral. The cochlear ducts all become narrower toward the apex; the basilar membrane, however, thickens and widens so as to act as a
frequency filter. Only in the 1980s was it shown that the membrane stiffness turns out to
decrease exponentially along the spiral by almost a factor of 1000 (Figure 11.23), large
enough to account for the frequency range of hearing, so that the location of the maximum wave amplitude varies with the logarithm of the frequency. These experiments
FIGURE 11.22 Frequency response (relative amplitude) of the basilar membrane as a function of distance from the stapes to the apex, shown for 20 Hz, 200 Hz, and 2000 Hz tones.
FIGURE 11.23 Stiffness of the basilar membrane versus distance into the cochlea (note log scale on y-axis).
FIGURE 11.24 The sensitivity of
the human ear.
were done using laser holographic techniques (see Chapter 25) to visualize the variation
in membrane modes of vibration with the frequency of stimulation.
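One way to picture the logarithmic place-frequency map is with a toy model, shown below. This is an illustration of the stated log dependence only, not the text's measured data: if the position of maximum response varies with the logarithm of frequency, equal frequency ratios map to equal distances along the roughly 35 mm membrane.

```python
import math

L = 35.0                      # cochlea length in mm (from the text)
f_min, f_max = 20.0, 20000.0  # human hearing range in Hz

def place_mm(f):
    """Toy model: distance from the stapes at which frequency f peaks, in mm,
    assuming the peak position varies linearly with log(frequency)."""
    return L * math.log(f_max / f) / math.log(f_max / f_min)

for f in (20000, 2000, 200, 20):
    print(f, "Hz ->", round(place_mm(f), 1), "mm from the stapes")
# Each factor-of-10 drop in frequency moves the peak the same distance.
```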
The human ear can typically detect sound within the frequency range of from 20 to
20,000 Hz, although the upper limit decreases dramatically with age. The ear is not
equally sensitive to all frequencies in this range, however, being most sensitive between
about 200 and 4000 Hz (see Figure 11.24). This range is sufficient to hear speech,
although a wider range is clearly beneficial for a fuller appreciation of music.
INTENSITY EFFECTS
The human ear has a tremendous range of response to sound intensity. At our most sensitive frequency of 3 kHz, the ear responds to intensity levels as low as 10⁻¹² W/m², the threshold of hearing, taken as 0 dB, as discussed above in Section 2. Taking the area of the tympanic membrane as 0.5 cm², the total threshold power incident on the ear is equivalent to only 0.5 × 10⁻¹⁶ W. This corresponds to, for example, the average power generated by dropping a tiny pin made from 100 million aluminum atoms from a height of 1 m every second (remember the telephone commercial). Using Equation (11.6), this intensity corresponds to a maximum pressure variation of about 2.8 × 10⁻⁵ Pa (recall that atmospheric pressure is 1 × 10⁵ Pa). Amazingly, this minimally detectable pressure variation corresponds to an amplitude of vibration of air molecules about 10 times smaller than the radius of a single atom! The ear is an exquisitely sensitive detector. At this same frequency, our ears can also tolerate sounds a million million times more intense, or 1 W/m², known as the threshold of pain. Using the decibel scale this corresponds to 120 dB. At this intensity level, air molecules have a displacement amplitude of about 11 μm, and beyond this level sound becomes painful.
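These threshold numbers can be reproduced with the standard decibel definition and the pressure-amplitude relation Δp_max = √(2ρvI), assumed here to be the form of the text's Equation (11.6); the air density ρ = 1.2 kg/m³ is an assumed typical value.

```python
import math

rho = 1.2    # density of air, kg/m^3 (assumed typical value)
v = 343.0    # speed of sound in air, m/s

def intensity_dB(I, I0=1e-12):
    """Decibel level relative to the 1e-12 W/m^2 threshold of hearing."""
    return 10 * math.log10(I / I0)

def pressure_amplitude(I):
    """Maximum pressure variation; assumes dp_max = sqrt(2*rho*v*I)."""
    return math.sqrt(2 * rho * v * I)

print(intensity_dB(1e-12), intensity_dB(1.0))   # 0 dB and 120 dB
print(pressure_amplitude(1e-12))                # ~2.9e-5 Pa, as in the text
print(1e-12 * 0.5e-4)                           # watts on a 0.5 cm^2 eardrum
```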
6. THE DOPPLER EFFECT IN SOUND
The Doppler effect in sound occurs when either the source of sound or the listener
(detector) are moving. It is commonly experienced from the characteristic frequency
changes heard from the siren on a fire truck as it rushes by. The sudden drop in pitch
heard as the truck goes by is due to the Doppler effect. Although not as obvious, the
286
SOUND
frequency of the siren is also actually higher as the fire truck approaches the listener than
it would be if the truck stopped. This phenomenon occurs for all types of waves including light, a form of electromagnetic wave that we discuss in detail later in this text.
In the case of light, when the frequency shifts, the color of the light changes. The
well-known red shift of starlight in astronomy is due to the fact that stars are rapidly
receding from us. Characteristic frequencies of light are emitted by various atomic elements as we show in Chapter 25. By comparing the frequencies of emitted light from
atoms in the laboratory with that emitted from stars, the frequency shifts can be used to
determine the recessional velocities of stars using similar equations to those derived
below. This is the ultimate source of our knowledge of the extent and age of the universe.
We can understand the Doppler effect by imagining that a point source of a pure
frequency sound emits a continuous set of spherical wavefronts, each one wavelength λ apart, that travel at velocity v, as shown in Figure 11.25. If the source and observer are stationary then the frequency of the sound is determined simply by counting the number of wave crests received per second. Because in a time t the number of wavefronts reaching the detector is vt/λ, the frequency is given by dividing this by time to find the usual expression f = v/λ.
Imagine that the detector now moves with a constant velocity vD along the line
towards (or away from) the source. In this case, the number of wavefronts reaching
the detector will increase (or decrease) because of the increased (decreased) relative
speed of the waves as seen by the detector, so that the detected frequency will be
f′ = (v ± v_d)t/(λt) = (v ± v_d)/λ.  (11.21)

FIGURE 11.25 Spherical waves from a stationary source detected by a stationary observer.
This can be rewritten in terms of the frequency detected when the source and detector are both stationary by substituting λ = v/f to find

f′ = f(1 ± v_d/v)   (+ sign for detector approaching; − sign for detector receding).  (11.22)
When the detector velocity is zero, Equation (11.21) predicts correctly that there is
no frequency shift. If the detector approaches the source the frequency rises above f,
whereas if it recedes from the source the frequency drops below f.
A similar phenomenon occurs if the detector is stationary but the source moves
toward or away from the detector at a constant velocity of vs. In this case the
motion of the source changes the distance between wavefronts emitted depending
on direction. As shown in Figure 11.26, the wavelength is decreased in the forward
direction and increased in the backward direction due to the motion of the source.
A stationary observer along the line of motion will hear a higher frequency as the
source approaches and a lower frequency as the source recedes. This is the explanation of the fire truck siren effect for a stationary observer. In mathematical form, the detected frequency is changed due to the wavelength compression or expansion (λ′ = λ ∓ v_sT, where T is the period, T = 1/f), so that the detected frequency is
f′ = v/λ′ = v/(λ ∓ v_sT) = v/(v/f ∓ v_s/f).  (11.23)
Rewriting this we have a result for the frequency detected from a moving source
f′ = f [1/(1 ∓ v_s/v)]   (− for motion toward D; + for motion away from D).  (11.24)
FIGURE 11.26 Doppler effect for a moving emitter and stationary detector. The wavefront spacing in the forward direction is decreased whereas that in the backward direction is increased.
In the more general case in which both source and detector are moving, but still along
the line joining them, the detected frequency, from Equations (11.21) and (11.23), is
f′ = f (1 ± v_d/v)/(1 ∓ v_s/v),  (11.25)
where the upper signs are used when the relative motion brings the source and detector closer and the lower signs apply when that distance is increasing.
The Doppler effect can be used to measure the velocity of moving objects by
aiming a wave at the object and measuring the frequency of the reflected wave. This
technique is probably most familiar to you in the form of radar. Police radar uses
high-frequency radio waves (a form of electromagnetic radiation) to detect the velocity of cars on a highway; weathermen use Doppler radar to measure the velocities of
clouds to make forecasts. A medical application of the Doppler effect is the use of
ultrasound to determine blood velocities as discussed in the next section.
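Equation (11.25) is easy to wrap in a reusable function. In this sketch the siren frequency and truck speed are illustrative values, not from the text; positive velocities are taken to mean motion that brings source and detector together.

```python
def doppler(f, v, v_detector=0.0, v_source=0.0):
    """Eq. (11.25): f' = f (1 + v_d/v) / (1 - v_s/v).

    Sign convention: positive velocities bring source and detector
    closer together; negative velocities move them apart.
    """
    return f * (1 + v_detector / v) / (1 - v_source / v)

v = 343.0   # speed of sound in air, m/s
f = 1000.0  # siren frequency, Hz (illustrative)

approaching = doppler(f, v, v_source=25.0)    # truck moving toward listener
receding = doppler(f, v, v_source=-25.0)      # truck moving away
print(round(approaching), round(receding))    # higher, then lower, than 1000 Hz
```

The sudden drop from the first value to the second as the truck passes is exactly the siren effect described above.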
7. ULTRASOUND
Sound at frequencies above 20,000 Hz is called ultrasound. Although our ears do not respond to sounds of those frequencies, many animals can hear at frequencies ranging up to about 100 kHz. Ultrasound may be familiar to you from its use in ultrasonic cleaning baths (for jewelry or glassware), cool mist humidifiers, and fetal monitoring, a very common method of imaging a fetus within the womb. In this section we
study some of the physical properties of ultrasound and its interaction with matter.
We also learn the fundamental ideas behind medical imaging using ultrasound.
Ultrasound differs from audible sound only in its higher frequency and correspondingly shorter wavelength. In most of the applications we discuss, ultrasound is
traveling through water or biological tissue in which the speed of sound is quite a bit
faster than in air. Referring back to Table 11.1 we see that the velocity of sound in
water and various biological tissues is quite fast (nearly a mile per second). For 1.5
MHz ultrasound, the wavelength in water (using the speed of sound as 1480 m/s) is
just about 1 mm. The fact that the wavelength is so short is important because the
wavelength ultimately limits the possible obtainable resolution when imaging with
ultrasound.
Ultrasonic waves traveling in a material undergo several interactions. Some portion of the wave is absorbed as it travels through the material. This is usually described by an absorption coefficient α that describes the loss in intensity of the wave as it travels along,

I(x) = I₀ e^(−αx),  (11.26)

where I₀ is the intensity at some arbitrary point labeled x = 0 and I(x) is the intensity
transmitted through the material after the wave has traveled a further distance x. The
smaller the absorption coefficient, the longer the wave can travel through the medium
without appreciable loss. In pure water absorption over the distances of 0.1–0.2 m
used in imaging systems is negligible. The absorption coefficient in human soft tissue depends on the frequency of the ultrasound, increasing with frequency in the
MHz range, with a typical value of about 12% loss per cm of distance per MHz. Thus, 1 MHz ultrasound loses 12% in the first 1 cm, an additional 12% of the remainder in the second cm, and so on, so that after 10 cm only 28% of the original signal intensity is left, the rest being absorbed. At 5 MHz, 60% of the intensity is lost in the first 1 cm, so that after 10 cm only about 0.01% of the original intensity is left, all the rest being absorbed.
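The compounding in these percentages is just repeated multiplication of the per-centimeter surviving fraction. A short sketch using the text's 12%-per-cm-per-MHz rule of thumb:

```python
def fraction_left(f_MHz, distance_cm, loss_per_cm_per_MHz=0.12):
    """Surviving intensity fraction under the text's ~12%/cm/MHz rule,
    compounded centimeter by centimeter."""
    per_cm = 1.0 - loss_per_cm_per_MHz * f_MHz
    return per_cm ** distance_cm

print(round(100 * fraction_left(1, 10), 1))   # ~28% left at 1 MHz
print(round(100 * fraction_left(5, 10), 4))   # ~0.01% left at 5 MHz
```

This is why higher-frequency (better-resolution) ultrasound can only image shallow structures: the penetration depth drops rapidly with frequency.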
This particular interaction of ultrasound with tissue is used in two different ways.
At low-intensity levels, the absorbed energy heats the tissue. This interaction is clinically used in diathermy to locally heat tissue. At higher powers a new phenomenon
occurs, known as cavitation. At these higher-intensity levels the local pressure variation is sufficient to tear apart the medium, forming spherical holes or cavities. Medical applications of cavitation include the disruption of kidney stones or tumors using focused ultrasound. Other applications include cleaning solid surfaces (such as glassware or jewelry) and disrupting cells and cell constituents for scientific applications.

FIGURE 11.27 The acoustic impedance of the two media determines the division of the incident acoustic energy into reflected and transmitted waves.

When an ultrasonic wave reaches a boundary between two different media, some of the wave is reflected back and the rest of the wave is transmitted (Figure 11.27). The acoustic impedance z, a parameter defined as the product of the mass density and the velocity of sound in the medium, z = ρv, determines the fraction of the wave that is reflected. If z₁ and z₂ are the acoustic impedances of the two media at a planar boundary then the fraction of the incident intensity that is reflected back is
I_reflected/I_incident = (z₁ − z₂)²/(z₁ + z₂)².  (11.27)
If the two impedances are equal, then Equation (11.27) confirms that there will be no reflection and all the intensity will be transmitted (because I_transmitted + I_reflected = I_incident, we have I_transmitted/I_incident + I_reflected/I_incident = 1).
If one impedance differs from the other by a factor of 10 then Equation (11.27) predicts 67% of the intensity will be reflected. Table 11.3 lists the acoustic impedance of
some materials relevant for biological imaging. Different tissues in the body all have
impedance values similar to those of water except for bone, whereas air has a much
lower value, implying that the lungs should have a distinctly lower impedance. These
values are important in describing the “contrast” of different tissues to ultrasound.
That is, if neighboring tissues have similar impedances, there will only be a small
reflection of intensity at their boundary, but at bone or lung interfaces there will be a
much larger reflected signal. In addition, at an air–tissue interface, only a small fraction of the intensity will be transmitted, so that it is difficult to “couple” ultrasound
into the body. We return to these ideas shortly when we consider imaging methods.
Table 11.3 Acoustic Impedances

Material    Acoustic Impedance (kg/(m²·s))
Air         430
Water       1.48 × 10⁶
Fat         1.33 × 10⁶
Muscle      1.64 × 10⁶
Bone        6.27 × 10⁶
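Equation (11.27) together with the impedances in Table 11.3 makes the contrast argument quantitative. A short Python sketch (using the table's values as given):

```python
def reflected_fraction(z1, z2):
    """Fraction of incident intensity reflected at a planar boundary,
    Equation (11.27): ((z1 - z2) / (z1 + z2))**2."""
    return ((z1 - z2) / (z1 + z2)) ** 2

# acoustic impedances in kg/(m^2 s), from Table 11.3
z = {"air": 430, "water": 1.48e6, "fat": 1.33e6, "muscle": 1.64e6, "bone": 6.27e6}

print(reflected_fraction(z["fat"], z["water"]))    # ~0.003: soft-tissue echoes are weak
print(reflected_fraction(z["muscle"], z["bone"]))  # ~0.34: strong echo at bone
print(reflected_fraction(z["air"], z["muscle"]))   # ~0.999: why a coupling gel is needed
```

The three cases illustrate the points made below: neighboring soft tissues give small reflections, bone gives a large one, and an air gap reflects nearly everything.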
In order to generate ultrasound, a mechanism for producing vibrations at MHz
frequencies is required. The diaphragm of a loudspeaker cannot be made to vibrate
at these high frequencies; however, there are special materials, known as piezoelectric ceramics, which oscillate at such frequencies in response to a MHz time-varying
electrical signal. Other materials, known as magnetostrictive ceramics, respond similarly to time-varying magnetic signals. Furthermore, these materials work reversibly,
just as a loudspeaker does. Loudspeakers normally interchange electrical energy for
sound energy, taking an oscillating electrical signal and producing vibrations of the
speaker, leading to sound. A microphone that converts sound into an electrical signal
is basically just a small speaker working in reverse. Sound impinging on the speaker
produces vibrations that cause a small electric signal to oscillate at the same frequency. We show how this works later
when we learn about electromagnetism.
Devices that change one form of energy into another
form are known as transducers. Ultrasonic transducers are
very efficient devices that can be used as a source or detector
of ultrasound because the conversion of acoustic energy to
electrical or magnetic energy is reversible in these devices. In
other words, an applied high-frequency electric or magnetic
signal can produce the mechanical oscillations that yield
ultrasound, or an ultrasonic wave impinging on the transducer
will induce mechanical oscillations that, in turn, produce a
time-varying electric or magnetic signal that “detects” the
presence of ultrasound.
FIGURE 11.28 An ultrasonic fetal monitor at work.
Ultrasonic transducers must be very sensitive in order to
“see” the reflections from soft tissue boundaries because the
acoustic impedances are very similar and the reflections are correspondingly weak.
For example, at a boundary between fat and water only about 0.3% of the incident intensity is reflected, as a short calculation using the data in Table 11.3 and Equation (11.27) indicates. In ultrasonic imaging, the transducer is mounted in a microphone-type housing
with a fluid-filled tip that is pressed against the skin, coated with a layer of gel to eliminate an air gap through which ultrasound would not penetrate (Figure 11.28). The single transducer is used as both source and detector of pulses of ultrasound as we now
describe.
Ultrasonic imaging is based on the pulse–echo method. A short pulse of ultrasound, typically of several MHz in frequency, is directed into the soft tissue of the body. Reflections from boundaries with different acoustic impedance arrive back at the transducer in times that depend on the round-trip distance and on the average speed of sound (which we take as 1570 m/s for soft tissue; see Table 11.1). From the delay time between the emission of the pulse and the detection of the echo, we can reconstruct the distance to the boundary as

d = 1570t/2,  (11.28)
where d is measured in meters, t is the delay time, and the factor of 2 accounts for
the round-trip of the pulse. This pulse–echo method is the same as is used in sonar to
map the ocean’s floor or by flying bats to navigate. In ultrasonic imaging, this simplest of methods is called an A-scan and gives information on not only the depths of
boundaries corresponding to each reflection, but also information as to the acoustic
impedance (and therefore the tissue type) of each region based on the intensity of the
pulse echo. Note that the transducer must be both very sensitive to detect the low
intensities of the echoes and have a fast response time. A-scans, however, give only
information on the depth of tissue boundaries; they do not give any spatial information in the directions transverse to the direction of travel of the pulse.
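The depth reconstruction of Equation (11.28) is a one-line conversion; the sketch below uses an illustrative (hypothetical) delay time to show the scale involved:

```python
SPEED_SOFT_TISSUE = 1570.0  # m/s, the average value used in Equation (11.28)

def echo_depth(delay_s, v=SPEED_SOFT_TISSUE):
    """Depth of a reflecting boundary from the pulse-echo delay time, d = v*t/2.
    The factor of 2 accounts for the round trip of the pulse."""
    return v * delay_s / 2.0

# a 100-microsecond delay (illustrative value) corresponds to a boundary ~7.9 cm deep
print(echo_depth(100e-6))  # 0.0785 m
```

Delays of tens to hundreds of microseconds thus map the full depth of the torso, which is why the transducer needs the fast response time noted above.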
By recording the information from an A-scan differently and by scanning the
incident pulse along a transverse line, an image of the major acoustic boundaries can
be displayed on a computer screen in a B-scan. The pulse–echo information is
recorded so that one axis of the image corresponds to the echo depth in the tissue and
the image brightness corresponds to the intensity of the echo. Without any scanning
a strong single echo would appear as a bright dot, a weaker echo as a fainter dot, and
multiple reflections as a series of such dots along the axis. If the incident pulses are
scanned along a transverse line, then because the pulse duration is short and the
reflection times are short, an entire sequence of such scans can be independently
accumulated to yield the outline of tissue boundaries. This is done by displaying the
scanning distance along an orthogonal axis. The time for a complete scan is short
enough to persist on the computer screen, much the same way as television works.
FIGURE 11.29 High resolution 3-D
ultrasound images of a fetus.
Techniques have been developed to produce narrow beams of ultrasound that
are scanned rapidly and continually to produce a continuous real-time image.
Figure 11.29 shows examples of a B-scan. Note that false color is added to the pictures to enhance the contrast for our eyes. Each color corresponds to a different level
of intensity according to some grayscale level in which intensity is scaled between
black and white with shades of gray. The intensity levels of the pulses used in imaging are sufficiently low (<3 × 10⁴ W/m²) so that this method is considered a safe
and completely noninvasive technique. It is widely used in fetal monitoring and in
imaging internal organs of the body. The spatial resolution is limited to about 1 mm
due to the frequency of ultrasound; higher frequencies would give better resolution
in principle, but the increase in absorption with frequency is prohibitive.
A third type of imaging, known as the M-scan or motion-scan, is similar to the
A-scan but measures the position of a moving target, such as a heart valve, in a time
sequence of pulse echoes. A more sophisticated version, known as Doppler scans,
makes use of the Doppler shift of sound (see the previous section) to produce velocity profile images. This technique is useful in mapping motions within the heart and
gives a two-dimensional image similar to a B-scan, except that the false color does
not indicate the intensity of the reflection but rather its frequency shift (related to the
velocity of the target). Figure 11.30 gives an example of this type of image.
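The frequency shift behind such velocity maps can be sketched by applying the Doppler effect, Equation (11.25), twice: the moving blood first acts as a moving detector of the incident wave, then as a moving source of the echo. The following is a rough illustration, not the book's own formula; the 1500 m/s speed of sound in tissue and the example values are assumptions:

```python
def echo_beat_frequency(f_source_hz, target_speed, v_sound=1500.0):
    """Beat frequency between an emitted tone and its echo from a reflector
    receding at target_speed (m/s). Applying the Doppler shift twice gives
    f_echo = f * (v - u) / (v + u), so the beat is f - f_echo ~ 2*u*f/v."""
    f_echo = f_source_hz * (v_sound - target_speed) / (v_sound + target_speed)
    return f_source_hz - f_echo

# blood receding at 1 cm/s from a 1 MHz transducer (illustrative values)
print(echo_beat_frequency(1e6, 0.01))  # ~13 Hz
```

Even centimeter-per-second blood speeds give beat frequencies of tens of hertz at MHz carrier frequencies, which is easily measured and is what the false color in a Doppler scan encodes.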
Ultrasonic imaging is the first of a number of imaging methods that we study, including CT scans (using x-rays), MRI (using radio waves), and PET (using the emission
products of radioactive particle decays). These techniques have revolutionized medical care as well as our knowledge of the human body.
FIGURE 11.30 Doppler scan of the
adult kidney with color code indicating flow rates.
CHAPTER SUMMARY

Sound is a longitudinal pressure wave that can be described by either a traveling pressure wave or a displacement (of air, or whatever medium it travels in) wave:

ΔP = ΔPmax sin(kx − ωt),  (11.1)

Δs = Δsmax cos(kx − ωt).  (11.2)

Sounds produced by wind or brass instruments can be modeled by closed or open tubes, or columns of air, leading to a set of resonant frequencies able to be excited in each type of tube according to

open tube:  fn = v/λn = nv/2L = nf1,  n = 1, 2, 3, …,  (11.17)

tube closed at one end:  fn = v/λn = nv/4L = nf1,  n = 1, 3, 5, ….  (11.19)

Sound intensities are proportional to the square of ΔP and are measured using the decibel scale

β = (10 dB) log(I/I0),  (11.7)

where I0 is a reference intensity (here taken as 10⁻¹² W/m²).

When sound waves strike a boundary between two different materials, in which the speed of sound differs, some fraction of the intensity is reflected and the rest is transmitted but is refracted, or bent, according to the law of refraction,

sin θincident / sin θrefracted = vincident / vrefracted.  (11.8)

Two overlapping sound waves of different frequencies will exhibit a phenomenon known as beats, in which the net sound produced by interference will have the average frequency, but will have an amplitude that oscillates at the difference, or beat, frequency,

y = [2A cos((ω1 − ω2)t/2)] sin((ω1 + ω2)t/2).  (11.11)

The relationship between the structure and function of the three parts of the ear is discussed, showing how a pressure wave incident on the outer ear ends up as an electrical signal produced by the hair cells of the inner ear.

Sound waves that are either produced by a moving source, detected by a moving sensor, or both, will have their frequency f shifted, to f′, according to the Doppler effect,

f′ = f (1 ± vd/v)/(1 ∓ vs/v).  (11.25)

Ultrasound, sound waves at frequencies above those capable of human detection (>20,000 Hz), can be used to probe inside the human body by detecting reflections from "objects" (organs, a fetus, blood, etc., with different acoustic impedance) and measuring pulse echoes to determine depth information.

QUESTIONS
1. Give a conceptual argument based on the nature of a pressure wave as to why the speed of sound should be greater in a liquid than a gas and still greater in a solid.
2. If we lived in "Flatland," the two-dimensional world of Edwin Abbott, and sound were confined to our two-dimensional world, repeat the argument in Section 2 to find how intensity would vary with distance from the source.
3. What is the ratio of intensities of two sounds that differ by 1 dB? What is the intensity level difference (in dB) between two sounds that differ by a factor of 2 in intensity?
4. Discuss the differences and similarities between temporal and spatial superposition of sounds.
5. Why do two sound waves need to be coherent in order
to exhibit interference phenomena?
6. Suppose that you are given a set of three consecutive
resonant frequencies from a resonant tube. You do not
know if the tube is open at one end or at both.
Comparing Equations (11.17) and (11.19) how could
you tell?
7. Musicians commonly tune their instruments to “A” ⫽
440 Hz. Two violinists prepare to play a duet
together. One of them claims his instrument is tuned
perfectly to A. The partner is also sure that his instrument is tuned to A. They draw their bows across their
respective instruments and hear a beat of 2 Hz. Is
there any way they can tell whose instrument is in
perfect tune?
8. Review the basic sequence of events that lead from an
incident sound wave to a signal along the auditory
nerve.
9. There is also a Doppler effect for light. If a source of
visible light is receding from an observer, based on
the discussion in Section 6 for sound, do you expect
a shift of detected frequency toward the red or toward
the blue? What if the source is directed towards the
observer? This effect is used, with other measurements, to determine the recessional velocities of stars.
10. From a consideration of acoustic impedance, why
would ultrasound be better for detecting a bone fracture than for detecting fat blockages in arteries?
11. The resolution of ultrasound is dependent on the
wavelength, increasing with decreasing wavelength.
Why doesn’t ultrasonic imaging use much higher frequencies (shorter wavelengths) in order to increase
the resolution to be much better than about 1 mm
(Hint: consider absorption and its effects)?
MULTIPLE CHOICE QUESTIONS
1. Ultrasonic imaging is not based on (a) pulse echo
techniques, (b) differences in acoustic impedance,
(c) cavitation, (d) scanning.
Questions 2–5 refer to an acoustic resonator tube with a
speaker mounted at one end and a solid piston able to slide
in the tube mounted at the other end.
2. The ends of an acoustic resonator tube correspond to
which of the following pressure conditions: (a) antinode at the speaker, antinode at the piston; (b) antinode
at the speaker, node at the piston; (c) node at the
speaker, antinode at the piston; (d) node at the speaker,
node at the piston.
3. You set the frequency of the speaker to 1000 Hz. As
you draw the piston head back from the speaker the
first resonance you hear occurs when the head is at
2.5 cm. The next resonance you hear is most likely to
occur at (a) 25 cm, (b) 20 cm, (c) 12.5 cm, (d) 7.5 cm.
4. Suppose you have a tube 0.25 m long with a speaker at
one end and with the other end open. If you gradually
increase the frequency of the speaker from zero at
about what frequency will you hear the first resonance?
(a) 350 Hz, (b) 700 Hz, (c) 1050 Hz, (d) 1400 Hz.
5. Suppose the tube is replaced with a tube that is open
instead of blocked by a piston head. Suppose further
that a fundamental resonance is produced for an input
frequency of 350 Hz. At about what frequency will a
first overtone be produced in the same tube? (a) 117 Hz, (b) 175 Hz, (c) 700 Hz, (d) 1050 Hz.
6. An organ pipe of length 0.5 m has two open ends. The fundamental and first overtones in this pipe have frequencies of about (a) 350 Hz and 700 Hz, (b) 350 Hz and 1050 Hz, (c) 700 Hz and 1400 Hz, (d) 175 Hz and 525 Hz, respectively.
7. A fundamental standing wave is produced in the vibrating wire at an input frequency of 22 Hz. The first overtone will be produced when the input frequency is set at (a) 7 Hz, (b) 11 Hz, (c) 44 Hz, (d) 66 Hz.
8. Two people talk simultaneously, each creating a sound intensity of 50 dB at a given point. The total sound intensity at that point is (a) 0 dB, (b) 50 dB, (c) 100 dB, (d) between 0 dB and 100 dB.
9. A car heads toward a wall at high speed while its horn is blowing. The frequency of the horn when the car is at rest in still air is f. An observer sitting on the wall hears the horn having a frequency f′. The driver hears an echo from the wall that has a frequency (a) equal to f, (b) equal to f′, (c) greater than f′, (d) less than f′.
Questions 10–12 refer to: A room is filled with air with a pressure P0. A speaker creates a sound wave in the room described by ΔP = ΔPmax sin(2πx − 700πt). The average intensity of this wave is I.
10. Under typical conditions ΔPmax is (a) about the same as P0, (b) much greater than P0, (c) much less than P0, (d) about 350 m/s.
11. At one point in the room a wave directly from the speaker combines with a wave that reflects off a wall to produce a stationary node. This will occur if the difference in distances traveled by the two waves is (a) 0 m, (b) 0.5 m, (c) 1.0 m, (d) 3.14 m.
12. Suppose you wanted to increase the intensity of the wave from I to 4I. You would have to change (a) ΔPmax to 2ΔPmax, (b) ΔPmax to 4ΔPmax, (c) the 2πx to πx and the 700πt to 1400πt, (d) the 2πx to 4πx and the 700πt to 350πt.
13. You have an empty 20 oz. soda bottle and an empty
32 oz. soda bottle, both roughly the same diameter.
You blow air over the opening of one and produce a
fundamental standing wave. Then you blow air over
the opening of the other and produce another fundamental standing wave. Which is true: (a) The fundamental tone in the 20 oz. bottle is lower in frequency
than in the 32 oz. bottle. (b) The fundamental tone in
the 20 oz. bottle is higher in frequency than in the
32 oz. bottle. (c) The tones are both fundamentals and
therefore are the same frequency. (d) The speed of the
airflow must be the same for both bottles.
14. You have an empty 20 oz. soda bottle and you blow
air over the opening to excite a fundamental standing
wave. Now, you slice off the bottom of the bottle (it’s
plastic) without changing its length very much. You
blow over the opening and excite a fundamental
standing wave in the bottle with its bottom end open. The frequency of the standing wave in the second case (a) is higher than that in the first case, (b) is lower than that in the first case, (c) is the same as that in the first case, (d) no sound is produced in the second case.
15. Which one of the following is true? (a) The air pressure in a room is 1 atm; therefore the amplitude of a sound wave in the air must be about 1 atm. (b) A horizontal string is 1 m off the floor; therefore the amplitude of a transverse wave on the string must be about 1 m. (c) A traveling water wave carries mass along with it. (d) A traveling wave of people alternately standing and sitting in a baseball stadium carries energy along with it.
16. How much louder (in dB) is a sound heard 2 m from a point source than when it is heard by the same ear 4 m from the source? (a) 4, (b) 2, (c) 10 log 4, (d) 10 log 2, (e) none of the above.
17. In a resonant tube open at one end and closed at the other, the resonant frequencies are determined by all of the following except (a) the speed of sound, (b) the length of the tube, (c) the boundary conditions at the ends of the tube, (d) the temperature of the air, (e) the tube diameter.
18. The intensity of sound wave A is 10 dB greater than that of sound wave B. Measured in W/m² the intensity of A must be greater than the intensity of B by (a) a factor of 2 times, (b) a factor of 10 times, (c) 10 N/m², (d) 10⁵ N/m².
19. Suppose that the speed of sound in still air is 350 m/s. A source of a pure tone of 1000 Hz moves through the air at a speed of 30 m/s. An observer at rest with respect to the air hears the tone at a frequency of 1094 Hz. This is primarily because the (a) speed of sound to the observer is 380 m/s, (b) speed of sound to the observer is 320 m/s, (c) wavelength of the tone as measured by the observer is 0.32 m, (d) wavelength of the tone as measured by the observer is 0.38 m.
20. Three speakers, all connected to the same amplifier, all put out the same single frequency tone. At one point in the vicinity of the speakers the three tones add coherently, producing an intensity maximum. If the intensity of each individual speaker at that point is I (in W/m²) the intensity of the sum of tones is (a) 9I, (b) 3I, (c) I, (d) zero.
21. The auditory canal of a human ear is about 2.5 cm long. From this we can infer that humans are especially sensitive to sound with a wavelength of about (a) 2.5 cm, (b) 5 cm, (c) 7.5 cm, (d) 10 cm.
PROBLEMS
1. A beaver swims near its den on the shore of a lake
800 feet wide. Startled, it slaps its tail on the water
surface before diving underwater. How long does it
take the sound of the slap to cross the lake to a beaver near the opposite shore if the second animal is
(a) Above the water surface?
(b) Underwater?
2. A hunter stands 200 m away from one side of a steep-walled canyon that is itself 600 m wide. If he fires a gun, describe the sequence of echoes that is heard.
3. Write an equation for the speed of sound at any temperature given the information in Section 1 of the chapter.
4. Determine how big a change there is in the speed of sound due to seasonal extremes in outdoor air temperature, taking the warm summer upper value to be 30°C and the cold winter lower value to be −10°C.
5. Compute the two wavelengths of sound, λlow and λhigh, corresponding to the 20 Hz low- and the 20 kHz high-frequency limits of human hearing. Assume 343 m/s for the speed of sound.
6. An ironworker at a large construction site guides a steel girder into place with a mallet, slamming the mallet down onto the steel every 1.5 s. A foreman watching the ironworker from some distance away discerns no time lag between sight of the mallet impact and the sound of the clang of the steel. How far away is the foreman?
7. Fill in the table with the lengths of resonant tubes that will produce fundamental frequencies at the low and high limits of human hearing, 20 Hz and 20 kHz, respectively.

              Tube, Open Both Ends    Tube with One End Closed
Low freq.
High freq.
8. If the intensity of sound from a jet engine is 10 W/m2
at a distance of 30 m, how far away from the jet do
you have to be for the intensity to be 0.1 W/m2?
9. How much acoustic energy is emitted by a source
every second if the sound intensity is 80 dB at a distance away of 20 m?
10. At a distance of 10 m away, the equipment of a road
repair crew emits sound of 90 dB intensity.
(a) How much farther away would a passerby have to
remove himself so that the sound intensity would
be a somewhat more tolerable 80 dB?
(b) If a member of the repair crew must work at a distance of 1 m from the noisy equipment, to what
sound intensity, in dB, is he exposed?
11. Using values for the variation in air pressure due to
sound waves and the dimensions of the eardrum
(tympanic membrane), both given in the chapter, calculate the force on the eardrum for sound at maximum safe intensity.
12. A crying child emits sound with an intensity of 8.0 × 10⁻⁶ W/m².
(a) What is the intensity level in decibels for the
child’s sounds?
(b) Suppose that two children are crying with the
same intensity. What is the intensity level in decibels for the two children crying together?
(c) Derive a general rule for the intensity level in
decibels (based on parts (a) and (b)) if there were
four children, eight children, or any even number
of children.
(d) How long does it take you to hear the children
crying if you are 100 m from them when they start
crying?
13. Suppose that you hear a clap of thunder 5 s after seeing the lightning stroke. If the speed of sound in the
air is 343 m/s and the speed of light in air is 3 × 10⁸
m/s, how far are you from the lightning strike?
14. A listener moves with respect to a musician who
plays a steady middle C note of 262 Hz.
(a) Determine the speed with which a listener must
approach a musician such that the perceived pitch
is shifted upward a half step to C# (C-sharp) =
277 Hz.
(b) If the musician were instead playing C#, would
the note be perceived by the listener as C if the listener recedes from the musician at a speed equal
to that of the previous case?
(c) Suppose it was the source (i.e., the musician) that
was in motion. What is the magnitude and direction of such motion that would result in the middle C in fact being played by the musician to be
perceived by the listener as C#?
15. The musical scale of “equal temperament” has its
notes tuned as shown in the table below. Suppose a
string is stretched at such tension that the fundamental of the string oscillation is the lowest C of the scale.
Determine the lengths for the same string that will
produce fundamentals for all of the notes, assuming a
sound velocity of 350 m/s.
Note    Freq. (Hz)    String Length (m)
C       262
D       294
E       330
F       349
G       392
A       440
B       494
C       523
16. Suppose a string similar to that of the previous problem
is one meter long and carries tension for C ⫽ 262 Hz.
Determine the set of tensions necessary, in terms of the
initial tension T, for the rest of the notes of the scale
using strings of the same length.
17. A piano has about 240 strings (one key controls several strings). Increasing the string tension increases
the pitch (i.e., the frequency of the fundamental).
Higher tension also increases sound volume. Therefore, it is musically advantageous to have the strings for the lowest notes have as high a tension as possible. Piano wires have diameters ranging from 31 to 55 mils (0.79–1.4 mm) made of steel only, or of steel cores wound with copper. Determine the string type and size that will result in the largest volume of sound for the lowest notes. Assume the length is fixed, determined by the dimensions of the piano. Note density of steel = 7.8 × 10³ kg/m³; density of copper = 8.9 × 10³ kg/m³.
18. What will be the fraction of ultrasound intensity reflected from the surface of the heart? Consider the heart to be a muscle, surrounded by water.
19. How long is the time gap between ultrasound reflections from the front and back of the heart, assuming the heart to be modeled as a cube of edge length 15 cm?
20. If we use the value given in the text for an absorption coefficient of 0.12/cm/MHz, what distance in water will result in an absorption of a 5 MHz ultrasound beam
(a) of 10%?
(b) of 90%?
(c) Suppose instead the frequency is reduced to the nominal minimum of 1 MHz. Calculate the distances traveled for the same fractional absorption.
21. A basic property of measurement with waves of any type is diffraction, wherein the interaction of the object under study with the wave gives rise to a distortion of the direction of wave travel. Diffraction effects impose an effective lower limit on the determination of size of the target object and this limit can be taken to be roughly equal to the wavelength of the wave. By calculating the wavelength of an ultrasound beam of frequency of 10 MHz in water, what is the size limit for objects under observation with ultrasound?
22. A drummer begins to drum on iron railway tracks with a regular beat. You are nearby with your ear near the tracks and hear two sets of drumming, one starting 0.8 s after the other. (The speed of sound in air is 345 m/s and in iron is 5,000 m/s.)
(a) How far away are you from the drummer?
(b) If the delayed sounds are 5 dB less intense than the first set of drumming heard, find the ratio of the intensities of the two sounds.
(c) If the drummer drums at a frequency of 4 Hz, what frequency will a person hear on a train approaching at 60 mph (conversion factor: 1 mph = 0.45 m/s)?
23. A scientist playing with musical instruments has a 1 m long guitar string with total mass 0.010 kg hooked up to a mechanical oscillator.
(a) If the string oscillates in the second harmonic with f2 = 330 Hz, what is the tension in the string?
(b) If the scientist doubled the oscillation frequency,
how many oscillating lobes would there be?
(c) Also in the laboratory is a pipe, open at both ends,
which the scientist wants to have resonate in the
fundamental mode at the same 330 Hz from part
(a). How long should this pipe be?
(d) The pipe in part (c) is slightly too long, such that
the beat note between the fundamental mode of
the pipe and the 330 Hz from part (a) is 5 Hz.
How much should it be shortened to reach the resonance sought in part (c)?
(e) A second pipe in the laboratory has resonances at
330 Hz, 550 Hz, and 770 Hz. Is this pipe open or
closed?
24. A nerdy scientist proposes to measure how fast he is
traveling toward vertical cliffs by blasting a pure
1000 Hz tone and listening for beats produced by the
echo. If he hears a beat frequency of 2 Hz, what is his
speed? (Use vsound = 343 m/s and remember that he
is both a moving source and a moving detector.)
25. A stationary bat sends out an ultrasonic tone at
60,000 Hz searching for food. At what frequency
does the bat hear the echo from a dragonfly moving
away from the bat at 5 m/s?
26. A Doppler beat device is used to measure the velocity of blood flowing in an artery. Taking the velocity
of sound in tissue as 1500 m/s, what is the velocity
of blood flowing away from the detector emitting
ultrasound at 1 MHz that results in a beat frequency
of 15 Hz?