3/21/5
Chapter 11 – Hearing
Page 280, column 2 has a mistake – low frequency sounds result in greater activity at the apex.
Sound Stimulus: Changes of pressure in the medium in which the ear resides. G9 p 262
The ear usually resides in air, but we can hear when the ear is in other media, e.g., water.
Process
Sound Generator – a vibrating object, such as a tuning fork – creates alternating regions of high (H) and low (L) pressure that travel through the air, down the ear canal, to the ear.
[Figure: tuning fork producing alternating H/L pressure regions traveling into the ear canal]
Some physical characteristics of Sound
Rapidity of vibration: For humans, from 20 vibrations (cycles) per second up to 20,000 per second.
Speed of expansion from source: About 700+ miles/hour; about 1100 ft/sec in air at 0° C
Velocity increases as temperature increases
Travels faster through less compressible media
So faster in fluids and solids. 4 times faster in water than in air.
Intensity decreases in proportion to the square of the distance from the sound source (in the absence of walls).
Distance    Intensity
1 foot      100
2 feet      25       Decrease is 1/2² of the original, or 1/4.
3 feet      11.1     Decrease is 1/3² of the original, or 1/9.
4 feet      6.25     Decrease is 1/4² of the original, or 1/16.
5 feet      4        Decrease is 1/5² of the original, or 1/25.
.
.
10 feet     1        Decrease is 1/10² of the original, or 1/100.
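The inverse square decrease in the table above can be sketched in a few lines of Python (the 100-unit intensity at 1 foot is the table's arbitrary starting value):

```python
def intensity_at(distance_ft, intensity_at_1ft=100.0):
    """Inverse square law: intensity falls off as 1/distance^2 (no walls)."""
    return intensity_at_1ft / distance_ft ** 2

for d in (1, 2, 3, 4, 5, 10):
    print(f"{d:2d} feet: {intensity_at(d):6.2f}")
# 1 foot -> 100.00, 2 feet -> 25.00, ..., 10 feet -> 1.00
```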
Distance between successive peaks:
20 CPS: about 50 feet between peaks
20,000 CPS: about 0.5 inches between peaks
G9 Ch 11 - 1
Absorption / Reflection:
When sound in a gas strikes a solid object, some of it is absorbed – passing into the object creating
sound within the object – and some is reflected.
Absorption is best when the object is most like the originating medium. So soft materials
absorb more sound originating in air. This is why “acoustic tiles” are made of soft materials
– so they’ll be more like air and absorb sound.
[Figure: sound wave (H/L pressure regions) striking a soft surface – mostly absorbed]
Reflection is best when the substance is least like the originating medium. So hard wood and steel are good reflectors of sounds in air.
[Figure: sound wave (H/L pressure regions) striking a hard surface – mostly reflected]
Implications for “sound proofing” –
To decrease sound in the same room as the sound source, make the walls of soft material.
To decrease sound in other rooms, make the walls of hard materials. Put a vacuum in the
walls.
Representing Sounds visually (the irony – we have to use vision to understand audition)
Two Ways to Visually Represent Sounds.
First Way: Waveform: A plot of pressure vs. time at a specific place, e.g., the ear.
[Graph: a waveform – pressure oscillating above and below average pressure, plotted against time]
Two categories of waveform
A. Periodic – Repeats forever.
B. Aperiodic – Any waveform that does not repeat.
Characteristics of periodic sound waveforms . . .
I. Intensity / Amplitude –
A. Instantaneous amplitude: The deviation of waveform from average atmospheric
pressure at a single point in time.
Problem: Instantaneous amplitude varies from millisecond to millisecond.
Which one should you use?
B. Average Absolute Amplitude: Average of absolute values of all the
instantaneous amplitudes.
This is a possibility.
C. Root Mean Square (RMS) Amplitude:
Square root of the average of all the squared instantaneous amplitudes.
The last forms the basis for most common measures.
Most preferred by researchers, practitioners, makers of instruments.
If you purchase sound equipment, you should look for sound output values in
RMS Watts.
Some advertisers report Peak Watts which is an exaggeration of the capability
of an amplifier.
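The three amplitude measures above can be sketched for a sampled sine wave (the 100-sample resolution here is an arbitrary choice):

```python
import math

# One full cycle of a unit sine wave, sampled at 100 points.
samples = [math.sin(2 * math.pi * i / 100) for i in range(100)]

# A. Instantaneous amplitude: a single sample -- varies moment to moment.
instantaneous = samples[7]

# B. Average absolute amplitude: mean of the absolute values.
avg_abs = sum(abs(s) for s in samples) / len(samples)

# C. RMS amplitude: square root of the mean of the squared samples.
rms = math.sqrt(sum(s * s for s in samples) / len(samples))

print(f"average absolute: {avg_abs:.3f}")   # ~0.637 for a unit sine wave
print(f"RMS:              {rms:.3f}")       # ~0.707 for a unit sine wave
```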
Units of RMS Intensity/Amplitude Measurement
1. Formal: micropascals
(Mike note: 1 µPa = .00001 dyn/cm²)
Based on the force exerted by the change in pressure.
Minimum audible sound amplitude: 20 micropascals.
Maximum safely hearable sound amplitude: 200,000,000 micropascals
So the range of sounds in which we must function is
200,000,000 to 20 or 10,000,000 to 1 or 10 million to 1 (G9 Table 11.1, p 265).
That is, the loudest sound we can experience without pain is 10 million times more
intense than the faintest sound we can experience.
The large range requires conversion to a more manageable scale.
2. Practical: The decibel scale G9 p 265
db = 20log10(Pressure of the sound in micropascals / 20 micropascals)
Most sound level meters display RMS amplitude in db.
Many also only measure sound whose frequencies would be audible to humans.
Range:
Minimum audible: 0 db
Leaves rustling: 40 db
Average speaking: 60 db
Loud Music/traffic: 80 db
Maximum tolerable for any period of time: 85 db
Don’t expose yourself to sounds louder than 85 db for long periods of time.
Maximum bearable: 140 db
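The db formula above maps the awkward 10-million-to-1 pressure range onto the manageable 0-140 db scale; a quick check in Python:

```python
import math

def spl_db(pressure_upa):
    """Sound pressure level: db = 20 * log10(P / 20 micropascals)."""
    return 20 * math.log10(pressure_upa / 20)

print(spl_db(20))           # minimum audible: 0.0 db
print(spl_db(200_000_000))  # maximum bearable: 140.0 db
```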
Play VL 11-1 Decibel Scale here.
How to destroy your hearing: Always listen to music at the loudest possible
intensity. Listen to a lot of it. Unlike vision, for which high intensity light is not
pleasant, high intensity sound, sound that can destroy your ability to hear, is often
enjoyable.
II. Period / Frequency
A. Period: The time between identical points on a periodic waveform.
The period of a waveform is a measure that is analogous to wavelength in the description
of light.
B. Frequency: The number of times a periodic waveform repeats in a given unit of time,
typically 1 second, i.e., number of periods per second.
F = 1/P and P = 1/F, so we can describe this characteristic of sound using either period or
frequency. Frequency is almost always used in the study of audition.
Typical measurement: Frequency in cycles per second, cps.
Current name for frequency in cycles per second: Hertz or Hz
Range of hearing in adult humans:
Minimum frequency: 20 Hz
Maximum frequency: 20,000 Hz
[Figures: a person with a huge head; a person with a small head]
Dogs: Upper limit is between 25,000 and 40,000
Bats: Upper limit is 100,000 Hz.
Old Person trick – Driving kids away: Play high frequency sounds.
III. Phase
Time at which a waveform begins, relative to some arbitrary time point.
Often phase is used to describe the relationship between starting points of two waveforms.
For example, here are two identical waveforms, completely “out-of-phase” with each other:
Phase relationships between two sounds are typically measured in degrees.
Two sounds with same phase: 0°.
Two sounds with opposite phases: 180°. (The illustration above.)
Use: If two sounds have a phase relationship of 180°, their sum will be complete silence.
This is how some noise canceling devices work. A sound with a 180° phase relationship with the offending sound is generated by an amplifier in the headphones and played in the ear, canceling the offending sound. The result is silencing of the noise.
[Diagram: noise-canceling headphone. A tiny microphone picks up the ambient noise; a phase reverser produces reversed-phase ambient noise; a signal combiner adds the reversed-phase noise to the beautiful music; the headphone delivers beautiful music + reversed-phase ambient noise to the ear, where the reversed-phase copy cancels the ambient noise.]
Could use Audacity to demonstrate out-of-phase addition of sounds.
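Absent Audacity, the same demonstration fits in a few lines of Python (the 440 Hz tone and 44,100 samples/second rate are arbitrary choices):

```python
import math

RATE = 44100  # samples per second
tone = [math.sin(2 * math.pi * 440 * i / RATE) for i in range(1000)]
reversed_phase = [-s for s in tone]  # same waveform, 180 degrees out of phase

# Adding the two waveforms sample by sample:
combined = [a + b for a, b in zip(tone, reversed_phase)]
print(max(abs(s) for s in combined))  # 0.0 -- complete silence
```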
IV. Complexity.
The number of, amplitudes of, frequencies of, and phases of the collection of sounds that are
combined into a single composite sound. The experience of complexity is called timbre.
Most important periodic waveform: The sine wave; G9 p 263.
Pictorially: [Figure: right triangle with the hypotenuse and opposite side labeled]
The sine of an angle is the ratio of the length of the opposite side to the hypotenuse of a right triangle.
The waveform of a sine wave is: Pressure = sin(time)
As the angle changes, sin changes.
Reason for importance:
1. Mathematical: Can be used as a building block for any periodic waveform.
ANY other periodic waveform can be created by adding the appropriate combination of sine
waves.
Converse: ANY complex waveform can be analyzed (broken up) into a collection of sine
wave components which would recreate the complex waveform if added together.
This analysis is called Fourier (pronounced "for-ee-ay") analysis after Joseph Fourier, who discovered it.
Note also that this means that if we have a bunch of sine wave generators – about 100 would
probably do – we can create virtually any sound imaginable. We can reproduce anyone’s
voice or any other sound we would like to produce or reproduce, such as voices generated by
phones. Think of sound chefs, like food chefs, adding a pinch of 1000 Hz, a pinch of
4000Hz etc. Every sound is made up of sine waves. Fourier told us how to discover what
they are. Many artificial voices are created in this way.
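The "sound chef" idea can be sketched directly; the frequencies and pinch sizes below are arbitrary, and the correlation trick at the end is a stand-in for a full Fourier analysis:

```python
import math

RATE = 44100        # samples per second
N = RATE // 10      # a tenth of a second of sound

def sine(freq, amp):
    return [amp * math.sin(2 * math.pi * freq * i / RATE) for i in range(N)]

# The recipe: a pinch of 1000 Hz plus a smaller pinch of 4000 Hz.
mix = [a + b for a, b in zip(sine(1000, 1.0), sine(4000, 0.5))]

def component_amp(freq, samples):
    """Fourier-style analysis: correlate the composite with a candidate
    sine wave to recover how much of that frequency it contains."""
    c = sum(s * math.sin(2 * math.pi * freq * i / RATE)
            for i, s in enumerate(samples))
    return 2 * c / len(samples)

print(round(component_amp(1000, mix), 3))  # 1.0  -- the first pinch
print(round(component_amp(4000, mix), 3))  # 0.5  -- the second pinch
print(abs(component_amp(2000, mix)) < 1e-6)  # True -- not in the recipe
```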
2. Psychological: Sine waves sound purer than any other sounds.
(This is why G9 labeled the section on Sine Waves, “Pure Tones”.)
Leads to the question: What characteristic of our auditory system
makes the sine wave sound so pure?
[Portrait: Joseph Fourier, 1768-1830]
Second Way of Representing Sounds: G9 p 266
Sound Spectrum: A plot of the intensities of the sine wave components of the waveform vs. their frequencies.
Note: This is analogous to the spectrum of a light, studied in the chapter on color.
Why?
The waveform gives only a fraction of a second’s worth of the sound.
It’s difficult to discern the frequencies that have been added to create a complex sound.
Think: Structuralists.
Thus, we need a way of visually representing the frequencies that comprise each sound so
we can correlate the actual sound frequencies with auditory system responses.
Some common sounds and their waveforms and spectra
(See Class Videos: Hearing Lecture pages 10 thru 12)
Individual Sine Waves of various frequencies . .(Sigview: Individual sine waveforms & FFTs . . .)
F = 1/.01 = 100
100 Hz waveform
100 Hz Spectrum
500 Hz waveform
500 Hz Spectrum
5000 Hz waveform
5000 Hz spectrum
10000 Hz waveform
10000 Hz spectrum
More importantly, the spectrum gives us the frequencies making up composites of sine waves
100 Hz + 5000 Hz sine wave (Sigview – Two tone combo – 100,5000, and 500,5000.sws)
100+5000 Waveform
Spectrum
500 Hz + 5000 Hz sine wave
500+5000 Waveform
Spectrum
Three tone combinations
100 + 500 + 5000 Hz sine waves
100+500+5000
Waveform
Spectrum
Spectra tell us stuff about sounds that we might not suspect
A 500 Hz Square Wave (Sigview –Open signal – 500 Hz 1 sec Square.wav; Edit -> Zoom to 512 )
The Spectrum of the 500 Hz Square waveform
It can be shown that a square wave can be created by forming the sum of an infinite number of sine waves, with each successive sine wave being the next odd harmonic (1 times, 3 times, 5 times, 7 times, etc.) of the fundamental frequency (500 Hz in this example) and intensity equal to the reciprocal of the harmonic number (1, 1/3, 1/5, 1/7, etc.).
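That recipe can be checked in a few lines: summing odd harmonics of 500 Hz with amplitudes 1, 1/3, 1/5, ... squares up the waveform as harmonics are added (the flat top of this particular sum converges to π/4 of the fundamental's amplitude):

```python
import math

FUNDAMENTAL = 500  # Hz

def square_approx(t, n_harmonics):
    """Sum of the first n odd harmonics, each at 1/harmonic-number intensity."""
    return sum(math.sin(2 * math.pi * FUNDAMENTAL * (2 * k + 1) * t) / (2 * k + 1)
               for k in range(n_harmonics))

# Sample at the middle of the positive half-cycle (a quarter period in):
t = 0.25 / FUNDAMENTAL
for n in (1, 2, 10, 200):
    print(n, round(square_approx(t, n), 3))  # converges toward pi/4 ~ 0.785
```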
Spectra tell us about different kinds of noise – Pink, Blue, and White
(Blue,White,Pink Noise 1 sec Full View.sws)
Pink
Waveform
Pink
Spectrum
Blue
Waveform
Blue
Spectrum
White
Waveform
White
Spectrum
Physical vs. Psychological Characteristics of Sound – G9 p 267

Physical     -->  Psychological
Intensity    -->  Loudness: The experience of sound intensity.
Frequency    -->  Pitch: The experience of tone frequency.
Complexity   -->  Timbre: The experience of sound complexity.
Relationship of Loudness to Intensity
Loudness measurement has often involved the method of magnitude estimation – Ch 1.
Method of Magnitude Estimation: A standard tone intensity is presented. Participants assign it the
arbitrary loudness value of 100.
A series of randomly selected intensities is presented. Participants assign each a loudness number representing the tone's intensity relative to the standard.
An often stated equation is L = I^0.5, i.e., L is the square root of I.
[Graph: loudness value (0-10) plotted against intensity (0-100) – a negatively accelerated, square-root-shaped curve]
Implications:
Suppose you increase the power of an amplifier from 100 Watts to 200 Watts, increasing intensity by a
factor of 2.
Loudness will increase by only a factor of 1.4.
100% intensity increase leads to a 40% loudness
increase.
Suppose you wanted to increase the loudness of your stereo system.

Intensity   Loudness
1           1
2           1.4
3           1.7
4           2
5           2.2

So you would have to make the sound 4 times as intense for it to be perceived as 2 times as loud.
If you want to drive the person who lives next to you crazy, you'll have to make your sound system 4 or 16 times as intense. Twice as intense probably won't work.
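The loudness values above are just L = I^0.5 evaluated; a quick sketch:

```python
# Loudness grows as the square root of intensity (L = I ** 0.5).
for intensity in (1, 2, 3, 4, 5):
    print(intensity, round(intensity ** 0.5, 1))

print(round(4 ** 0.5, 1))    # 2.0 -- 4x the intensity, only 2x the loudness
print(round(16 ** 0.5, 1))   # 4.0 -- 16x the intensity for 4x the loudness
```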
Argh!! Loudness Depends on Frequency – Equal Loudness Contours – G9 p 268
[Graph: equal loudness contours – intensity vs. frequency]
Each line is a line of equal loudness. The height of the line is how intense a tone must be to achieve the level of loudness the line represents.
Note that the intensity (height of line) needed to achieve equal loudness varies across frequencies. Low and high frequency sounds must be very intense to have a specific loudness. Generally, the least intensity for a given loudness is needed at about 3,000 Hz.
This graph illustrates two major results.
1) The intensity required to achieve a specific loudness varies as the frequency of the tone changes, with the lowest intensity required for a given loudness always around 2000-3000 Hz.
Tones of extremely low or extremely high frequencies are not perceived as being as loud as middle-frequency tones of the same intensity.
2) The curves are flatter at higher intensities – near the top of the graph.
The intensity required to create a high loudness sound is roughly constant – top line in graph.
But the intensity required to create equally low loudness changes considerably – brown line in graph.
This means that at high loudness, it doesn't matter what the frequency of the sound is – it's all loud.
But at low loudness, the frequency of the sound matters a lot – middle frequency sounds need not be nearly as intense as low or high frequency sounds.
To correct for inequities at low intensities, all modern audio reproduction equipment has built in
circuits that boost the intensities of low frequency and high frequency sounds when overall intensity
is low. This helps make the perception of sound quality of soft music the same as the perception of
loud music.
In the olden days, amplifiers used to have a switch, called the “Loudness” switch. The user had to
flip that switch in order to get the boost of low and high frequency sounds. Now, the boost occurs
automatically. “Thanks, modern science and engineering!!”
Pitch vs. Frequency G9 p 268
Frequency measured in Hz – cycles per second.
Pitch measured in mels. Based on behavioral measures analogous to those used to define loudness.
Relationship of Pitch in mels to frequency of pure tones in Hz is not easily characterized, but it's essentially a straight line relationship.
[Graph: pitch in mels vs. frequency]
Timbre vs. Complexity – G9 p 269
Many sounds are composed of a fundamental frequency and other high frequencies that are integer
multiples of the fundamental frequency.
In such cases, the fundamental frequency is called the first harmonic. The higher frequencies are named
according to their relationship to the fundamental. A frequency twice the fundamental is the 2nd harmonic.
3 times the fundamental is the 3rd harmonic, etc.
So many naturally occurring sounds have this fundamental + harmonics structure that we apparently are built to hear the harmonics as being like the fundamental. This accounts for the fact that musical notes that are integer multiples of other notes sound similar to those other notes. These are the octave similarities so apparent in music.
Timbre and spectrum differences between a sine wave and a square wave.
For both sounds, the fundamental frequency is the same – 500 Hz.
But the square wave has a ton of higher harmonics – the 3rd, 5th, 7th, 9th, 11th, etc.
(500 Hz sine and square waveforms and FFTs.sws)
The square wave sounds different from the sine wave – because its timbre is quite different.
In the case of a square wave – the extra sounds are the odd harmonics – 3rd, 5th, 7th, etc.
We hear those odd harmonics. So the timbre of a square wave is different from that of a sine wave.
Important Questions:
What’s going on? How can we hear the sine wave components?
We can’t see the individual wavelength components of a light – all we can see is the composite.
But we CAN hear the sine wave components of a sound.
What’s the explanation for these differences?
Auditory Apparatus – G9 p 270
Outer ear
Pinna – Ear flap. Some amplification compared to a simple hole in the head.
Auditory canal
1” long; about 6 cc in volume
[Sidebar: Earwig. Earwigs are fairly abundant and found in many areas of the world. There is no evidence that they transmit diseases to humans or other animals. Their pincers are commonly believed to be dangerous, but in reality even the curved pincers of males cause little harm to humans. It is a common myth that earwigs crawl into the human ear and lay eggs in the brain. Finding earwigs in the human ear is rare, as most species do not fly and prefer dark and damp areas (such as basements) rather than typical bedrooms.]
Auditory canal resonates at about 3400 Hz
This means that sounds at 3400 Hz are more intense – by about 5-10 db – than sounds of other frequencies.
[Graph: intensity of sound at the inner end of the ear canal vs. frequency, peaking by about 10 db around 3400 Hz]
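The ~3400 Hz figure is consistent with treating the canal as a tube closed at the eardrum end, which resonates when a quarter wavelength fits inside. A back-of-the-envelope sketch (the 1130 ft/sec room-temperature speed of sound is an assumption, not from the notes above):

```python
# Quarter-wave resonance of a tube closed at one end: f = v / (4 * L).
SPEED_IN_PER_SEC = 1130 * 12   # speed of sound in inches/sec, room temperature
CANAL_LENGTH_IN = 1.0          # the 1-inch auditory canal from above

resonance_hz = SPEED_IN_PER_SEC / (4 * CANAL_LENGTH_IN)
print(round(resonance_hz))     # ~3400 Hz
```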
Tympanic membrane - Ear drum
Vibrates in unison with the air pressure changes.
Transforms air pressure changes into movement.
Vibrates in unison with the vibration of the device causing the sound.
Punctures of the eardrum leave a danger of intrusion into the middle ear.
Middle ear
A Rube Goldberg-seeming device that transmits vibration of the eardrum to the inner ear.
Example of a Rube Goldberg cartoon
Middle ear continued . . .
Three smallest bones of body.
Eardrum connects to malleus
Malleus connects to incus.
Incus connects to stapes.
Stapes is attached to a membrane that is stretched across a hole in the cochlea called the Oval Window.
[Figures: Goldstein's and Yantis's illustrations of the middle ear – note, the cochlea is a cave in bone – with the round window labeled]
Functions of middle ear
1. To increase force of vibrations of eardrum. G9 p. 272
a. By concentrating the area of force from the huge eardrum to the
small stapes footplate – the high heel shoe effect.
b. By increasing the force through a lever action
These two factors result in a 22:1 increase in the force of movement from the eardrum movement to the movement of the oval window membrane.
That 22:1 ratio is approximately a 30 db increase in sound level.
That is, sounds would be 30 db lower than they are if we didn’t have
the middle ear.
Mike – demo this if you have the sound equipment in the room.
Functions of middle ear continued . . . G9 p272
2. To control intensity of sound reaching inner ear through the acoustic reflex
There are two sets of muscles attached to the bones of the middle ear . . .
Tensor tympani muscle connects to malleus
Stapedius muscle connects to stapes
When these muscles contract . . .
Malleus is pulled to one side – so doesn’t impart as much movement to incus
Stapes is forced to move from side-to-side rather than back and forth
The effect of contraction of these muscles is to reduce the sound pressure reaching the inner ear by several
db.
They contract in two types of situations.
2 A) In response to loud sounds, but the latency is about .150 seconds, too long to prevent damage.
.150 s is about 1/6 of a second. In that time, 75 pressure changes of a 500 Hz tone (.150 × 500) will affect the inner ear, and 750 pressure changes of a 5,000 Hz tone would get through. Here's a combination tone, 500 + 5000 Hz: in the first .002 seconds (2 milliseconds), 1 low frequency and 10 high frequency pressure changes occur. So, in .150 seconds, that would be 75 low and 750 high frequency changes.
2 B) In response to self-produced noises in the mouth.
So there are two possible reasons for the acoustic reflex . . .
2 A) To reduce levels of loud sounds, although latency is too long to protect against high frequency
sounds of high intensity. It’s like having an anti-missile defense system that has to be taken out of storage
before it’ll work.
2 B) To reduce noise of self-produced sounds, the distraction of eating and talking,
This is probably the major function of these muscles.
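The latency arithmetic above can be sketched directly (using the .150-second figure from these notes):

```python
# How many pressure changes slip past before the acoustic reflex engages?
LATENCY_SEC = 0.150

for freq_hz in (500, 5000):
    cycles_through = freq_hz * LATENCY_SEC
    print(f"{freq_hz} Hz tone: {cycles_through:.0f} pressure changes "
          f"reach the inner ear before the reflex acts")
```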
Inner ear G9 p 272
Structure is cochlea
A cave in the skull.
Filled with fluid
Two windows covered with membranes – oval window and round window.
[Figure: the cochlea, showing the semi-circular canals, the auditory nerve, the stapes attached to the oval window, and the round window]
[Figure: the whole auditory apparatus, so you can see the relative sizes of the structures. From Y1 p 297]
Interior structures of the cochlea
Two membranes run its length – Reissner’s membrane and the basilar membrane
[Figures: close-up of the cochlea, and a close-up of a cross-section of the cochlea, showing the Vestibular canal, Reissner's membrane, and the Tympanic canal]
What happens in the cochlea
Movement of the stapes causes the membrane covering the oval window to move in and out.
This, in turn, causes fluid movement in the Vestibular canal, which leads to fluid movement in the Tympanic canal.
That fluid movement causes the basilar membrane to move – vibrating like a sheet or rug having the dirt
shaken out of it.
Here is an end-view of the basilar membrane and Organ of Corti – analogous to looking south down
Lookout Mountain –
C:\Users\Michael\Desktop\Desktop folders\Class Videos\Going South on Lookout Mountain.mp4
Receptors
Two sets of cells with hairlike extensions (stereocilia) run the length of the basilar membrane.
The cells are called hair cells.
One set is a single row running down one side of the membrane. These are the inner hair cells – about
3500.
The other set is a triple row running down the other side. They’re called the outer hair cells – about
12,000 total.
We now know that it is the inner hair cells that are the receptors that transduce the movement of the
basilar membrane into action potentials.
The outer hair cells modulate the responses of the inner hair cells.
More on the Action of basilar membrane and receptors
(Possible test question.)
Movement of stapes causes pressure changes in the vestibular/tympanic canals.
These pressure waves cause ripples on the basilar membrane – like an earthquake moving down Lookout
Mt. toward Fort Payne
This movement causes the stereocilia attached to the hair
cells to bend. (As the trees on Lookout Mt. would bend if an
earthquake moved down the mountain.)
Inner hair cells
The bending of the cilia causes inner hair cells to release neurotransmitter substance which triggers
action potentials in auditory nerve neurons whose axons make up the auditory nerve and whose dendrites
are located near the base of the inner hair cells. This is illustrated and described in G9 p 274 Figure 11.18.
So the inner hair cells are the receptors for audition.
Outer hair cells
The movement of stereocilia of outer hair cells causes the outer hair cells to become slightly longer, with
the result that the inner hair cells release more neurotransmitter substance, in effect amplifying the
response of the inner hair cells.
The outer hair cells increase the inner hair cells response only to specific frequencies, in effect, sharpening
the response of the inner hair cells to whatever frequency they are responding to.
Stimulation of the outer hair cells from higher brain centers causes an attenuation of the responses of the
inner hair cells. So the outer hair cells act kind of like a volume control for the inner hair cells –
amplifying the responses of the inner hair cells at some times and attenuating them at other times.
Show Virtual Lab 11-10 (Cilia Movement) and
Virtual Lab 11-13 (Cochlear Amplifier) here to show the action of the basilar membrane along with the
amplification associated with the outer hair cells described on p. 277.
Show how the hair cells are changing their lengths in VL 11-13.
Coding Frequency – how do we perceive the frequency of a sound?
Helmholtz’s Place Theory
In the late 1800s Helmholtz believed that the basilar membrane is composed of fibers running at right angles
to the length of the membrane.
Helmholtz believed that these fibers are strung tautly, like the strings of a harp.
[Figure: taut fibers across the basilar membrane – a short "high frequency string" and a long "low frequency string"]
Sound caused them to vibrate, just as the strings of a harp vibrate in the presence of sounds.
Short fibers vibrate most to high frequency sounds. Long fibers vibrate most to low frequency sounds.
So he believed that place of vibration is the signal for frequency.
The brain will know what the frequency of sound is by knowing where the vibration is occurring.
von Bekesy’s Traveling Wave Theory G9 p 275
Georg Von Békésy carefully examined inner ears of cadavers and built a model of the basilar membrane
based on his examinations.
He found that the membrane was a continuous sheet – not a collection of strings.
He also found that it was loosely bound, not strung tightly.
He proposed that the response of the membrane to sound is a wave that travels from the base of the
cochlea to the apex.
Like a sheet or rug being snapped to shake off dirt.
The shape of the membrane during its "shaking" is illustrated by the figure below.
[Figure: traveling wave on the basilar membrane, with the point of maximum movement marked]
Play C:\Users\Michael\Desktop\Desktop Folders\Classes\Basilar Membrane Animation.mp4 here
(Original at https://www.youtube.com/watch?v=dyenMluFaUw )
The bottom line: The point at which membrane movement is greatest depends on frequency of the sound.
So the pattern of vibration of the basilar membrane represents a kind of spectrum of the sound.
High frequency sounds: Movement is greatest near the base – near the oval window end. Hair cells at the
base release the most neurotransmitter substance.
Low frequency sounds: Movement is greatest near the apex. Hair cells near the apex release the most
neurotransmitter.
Complex sounds: There are various amplitude peaks on the membrane as it vibrates – with the location of
each peak corresponding to a frequency component of the complex sound.
Implications
1) The location of point of maximum vibration depends on frequency.
2) Inner hair cells at a specific place on the basilar membrane respond the most when the basilar membrane
at their location vibrates the most.
3) Since each auditory nerve fiber synapses only with hair cells at a specific place on the basilar membrane, this means that each fiber responds to a specific frequency in the sound stimulus.
Each auditory nerve fiber is "tuned" to a different frequency. The collection of responses of the several thousand auditory nerve fibers is like a spectrum. They perform a rough Fourier analysis of the incoming sound.
[Figure: spectrum of a high frequency sound (sound intensity vs. frequency); the basilar membrane from base to apex; the auditory nerves, with only the one near the base (red) active]
[Figure: spectrum of a low frequency sound; only the auditory nerve near the apex (red) is active]
[Figure: spectrum of a complex sound (sound intensity vs. frequency); the basilar membrane from base to apex; several auditory nerves (red), at different places along the membrane, are active]
Practical Applications . . .
This means that if you wanted to give a person the experience of hearing by direct stimulation, you could not stimulate just one auditory nerve cell, because doing so would give the person the sound experience of only one frequency.
And you can’t just stimulate all the auditory nerves equally.
Instead, you have to stimulate multiple auditory nerves, and the ones you stimulate must be distributed
along the basilar membrane, and you must stimulate each nerve with just the appropriate frequency.
This fact has been used by developers of cochlear implants – devices used to help persons who have lost the
use of their hair cell receptors hear.
In this figure, the bluish cord labeled electrode is really a collection of electrodes that is laid along the basilar membrane. This way, each metallic band can stimulate nerve fibers at a different place on the membrane, each with the appropriate frequency. The result of stimulation by a whole group of such electrodes is that a whole collection of auditory nerve fibers is activated, as they are in normal hearing.
Play Desktop folders\Class Videos\Implanting cochlear electrodes.mp4
What do persons with cochlear implants hear?
Play Desktop folders\Class Videos\Cochlear Implant Simulation - Hearing Speech.mp4 – 2+ minutes.
More on Coding Frequency - Temporal theory, aka Frequency Theory, aka Telephone Theory
This theory assumed that the basilar membrane vibrated as a whole, like the membrane of a telephone microphone; hence it has been called the telephone theory.
It assumed the basilar membrane vibrated as a whole, in unison with the sound – the higher the sound frequency, the faster the membrane vibrated.
It also assumed that, somehow, the whole-membrane vibration was transmitted to higher neural centers – for example, by neurons that fired each time the membrane moved.
Problem 1: Von Bekesy discovered that the basilar membrane
does not vibrate “as a whole”, as assumed by telephone theory.
Problem 2: We can perceive sounds whose frequencies are as high
as 20,000 Hz, but neurons cannot respond at rates higher than
1000 action potentials per second, if that high. So the theory,
unaltered, cannot account for our ability to hear sounds above 1000 Hz.
Volley principle: To account for the 1000 Hz limit, researchers proposed that no single neuron responded
with the membrane, but that neurons “took turns” responding, so that each individual neuron responded,
say, every 10th vibration or every 20th. This would allow the collection of neurons to signal frequency while
not requiring any one to fire at a rate greater than 1000 APs/sec.
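The turn-taking idea can be sketched with 10 hypothetical neurons relaying a 5000 Hz tone:

```python
FREQ_HZ = 5000      # tone frequency -- far above any single neuron's limit
N_NEURONS = 10      # hypothetical relay team of neurons
DURATION_SEC = 1

fire_counts = [0] * N_NEURONS
for cycle in range(FREQ_HZ * DURATION_SEC):
    fire_counts[cycle % N_NEURONS] += 1   # neurons fire in strict rotation

print(fire_counts[0])     # 500 -- well under the ~1000 APs/sec ceiling
print(sum(fire_counts))   # 5000 -- the group as a whole marks every cycle
```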
We now know that :
Apparently, low frequency sounds do cause movement of the whole membrane, in unison with the sound.
(This “whole-movement” of the membrane is not shown in the animations played earlier.)
Recent research has shown that, in fact, many neurons in the auditory chain DO respond in unison with
the sound wave – responding only at the highest pressure point of
each wave, for example.
And, it now appears that neurons do trade off with each other, just as supposed by the proponents of the volley principle many years ago.
Current Theory –
So the current theory of how sound frequency is processed is one that suggests that
1) our perception of frequencies of low frequency sounds (say those < 5000 Hz) may be due to neurons
responding in unison with the gross movement of the basilar membrane.
2) our perception of frequencies of high frequency sounds (say those > 5000 Hz) may be due to activity of
different neurons at different locations on the basilar membrane.
Interestingly, as noted on G9 p 280-281, the responses in unison of neurons with frequencies less than 5,000
Hz may explain why we only perceive melodies when they consist of frequencies below 5,000 Hz. Our
sense of pitch may change in some fundamental way at 5,000 Hz.
Hearing Loss: Measuring Auditory Ability
Audiogram: A plot of thresholds in reverse scale on the vertical axis vs. frequency on the horizontal axis.
Reverse scale means that high thresholds (poor hearing) are represented at the bottom of the vertical
axis. Normal hearing thresholds are at the top of the vertical axis.
[Audiograms: normal hearing, with thresholds near the 0-10 db line across all frequencies; high-frequency loss, with thresholds dropping at high frequencies]
Note that the high frequency loss is about the same in both ears. This means that whatever caused it – listening to loud music, shooting guns – affected both ears.
Categories of hearing impairment
Conductive
Loss due to impairment of the mechanisms taking sound to the cochlea
The pinna, ear canal, tympanic membrane, ossicles
Maximum loss is about 30 db, which means you can still hear even if you've completely lost your outer and middle ear function.
An audiogram (from the web) showing a common result of conductive hearing loss.
The dotted line represents normal hearing with no conductive loss.
The solid line represents conduction hearing loss.
Generally, conductive loss is at all frequencies.
Sensorineural
Loss due to damage to the cochlea or parts of it.
The basilar membrane, the hair cells, the auditory nerve.
Typically, loss is greatest for high frequency sounds
From the web – Audiogram of a person with high frequency loss – presumably due to sensorineural
damage.
As in the above example, note that the hearing loss is binaural – affecting hearing through both ears.
Why is ability to hear high frequencies lost first?
Answer:
Almost all sounds contain high frequency components.
Noise is the worst, but even simple waveforms like square waves contain high frequencies.
Look at the spectrum of a square wave below. Note that though the fundamental frequency is 500 Hz, the
wave contains energy at 1500, 2500, 3500, 4500, 5500, etc.
Square Wave Spectrum
As mentioned before, many many sounds that we hear are composed of the combination of a fundamental
frequency and higher harmonics. So our ears are continually exposed to high frequency sounds even though
the fundamental frequencies of those sounds are not so high.
Now look at the spectrum of white noise – such as the sound of an explosion or gunshot.
Note that MOST of the energy in white noise is at frequencies that we would call high frequencies. Since
much of the noise we encounter in real life is like white noise, much of the noise we encounter has lots of
high frequency components. Gunshots and explosions are the worst.
But even music has LOTS of high frequency components – that’s why it sounds so good.
This means that the hair cells near the base of the basilar membrane are exercised the most and are the most likely to "wear out".
Play VL 11-14 Hearing Loss here.
Mike – It’s now an .mp4 in the Class Videos folder VL 11-14 Children, Beethoven, two speakers.mp4.
“Classic: Beethoven” is most striking, showing the loss of ability to hear high frequencies
“Children singing” also illustrates high frequency loss.
Summary of the hearing process – 7 minutes
https://www.youtube.com/watch?v=PeTriGTENoc