Auditory system
Q1
Which properties characterize an auditory stimulus,
and how is the sensitivity of the auditory system
determined?
Auditory stimulation
The 3 most relevant parameters of an auditory stimulus are:
- Intensity (dB)
- Frequency (Hz)
- Position of the auditory object (space coordinates, distance)
Auditory stimulation
Sound waves are pressure waves. The human auditory system perceives
sounds at frequencies between 20 and 20,000 Hz.
[Figure: sinusoidal pressure waves of low and high frequency; pressure maxima correspond to maximum air compression, minima to minimum air compression.]
Wavelength: λ (m)
Sound frequency: f (Hz = s⁻¹)
Sound velocity: v (m/s); in air ca. 340 m/s
f = v / λ
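The relation f = v/λ can be checked with a short calculation; it shows that the audible range spans roughly three decades of wavelength (a minimal sketch, using the speed of sound in air given above):

```python
# Wavelength of sound in air, from f = v / λ  =>  λ = v / f.
SPEED_OF_SOUND = 340.0  # m/s, approximate value in air (as stated above)

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength (in meters) of a sound wave in air."""
    return SPEED_OF_SOUND / frequency_hz

# The limits of the human audible range:
print(wavelength(20))      # 17.0 m (lowest audible frequency)
print(wavelength(20_000))  # ≈ 0.017 m, i.e. 1.7 cm (highest audible frequency)
```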
Auditory stimulation
The sound intensity is measured in decibels. The decibel scale is a
logarithmic scale and refers to the sound pressure level (SPL).
Sound pressure (P) is measured in pascal (1 Pa = 1 N/m²).
Sound pressure level: SPL = 20 · log₁₀(Pₓ / P₀)
Reference sound pressure: P₀ = 2×10⁻⁵ Pa
(Unit reminders: 1 W = 1 N·m·s⁻¹, 1 J = 1 N·m)
dB    Pressure factor (P/P₀)   Example
0     1                        (hearing threshold)
20    10                       quiet countryside
40    100                      low voice
80    10 000                   street noise
140   10 000 000               aircraft
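The logarithmic SPL scale can be verified numerically; the sketch below reproduces the pressure factors listed above from the definition SPL = 20·log₁₀(P/P₀):

```python
import math

P0 = 2e-5  # reference sound pressure in Pa

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB SPL: 20 * log10(P / P0)."""
    return 20 * math.log10(pressure_pa / P0)

def pressure_factor(spl: float) -> float:
    """Pressure ratio P/P0 corresponding to a given SPL in dB."""
    return 10 ** (spl / 20)

print(pressure_factor(20))   # 10.0      (quiet countryside)
print(pressure_factor(140))  # 10 000 000 (aircraft)
print(spl_db(2e-4))          # 20 dB: a tenfold pressure increase adds 20 dB
```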
Auditory stimulation
The sound threshold depends on the sound frequency. The audiogram displays
possible deficits in sound perception at a given frequency.
[Figure: audiogram, SPL (dB) vs. frequency (kHz); the main range of human language lies in the middle of the curve; the absolute threshold is lowest at about 4 kHz.]
Presbycusis is the age-dependent decrease of hearing capacity.
The deficits are larger at higher frequencies.
Auditory stimulation
The human auditory system is very prone to damage by a noisy environment.
[Figure: human sound threshold, dB SPL vs. frequency (Hz), after a 5-minute presentation of a strong wide-band noise at 115 dB (industrial noise, thunderstorm).]
About 11% of the German population has hearing deficits.
The tendency is rising; the cause is auditory trauma.
Q2
How are auditory stimuli transduced in the inner ear?
Function of the inner ear
The sound waves are typically composed of several frequencies. The individual
sound waves have their maximum at different sites, and the amplitude of the
deflection of the basilar membrane in the inner ear represents the relative
strength (pressure) of that frequency component.
[Figure: uncoiled cochlea from the oval window to the helicotrema, with travelling-wave envelopes of different frequency components.]
Function of the inner ear
Tonotopy of the inner ear: different frequencies produce their deflection maximum
at different distances from the entry site of the inner ear.
[Figure: travelling waves along the basilar membrane between the oval window and the round window; the deflection envelopes peak at different positions.]
The mechanical properties of the basilar membrane change along its length.
Therefore:
at high frequencies - maximum close to the oval window
at low frequencies - maximum close to the helicotrema
Function of the inner ear
The transduction of auditory signals is carried out in the organ of Corti.
[Figure: cross-section of the organ of Corti: tectorial membrane, outer and inner hair cells, supporting cells, basilar membrane, afferent nerve fibers.]
Function of the inner ear
The stereovilli are connected by tip links.
Function of the inner ear
When the tip links are under stress, ion channels open and depolarize
the receptor cells.
[Figure: receptor activation and deactivation upon deflection of the stereovilli.]
Function of the inner ear
95% of the fibers in the acoustic nerve are sensory afferents collecting
signals from the inner hair cells. Each inner hair cell is contacted by about 10 axons
(high degree of divergence). The outer hair cells are not only contacted by afferent fibers
(with a high degree of convergence), but are also controlled by efferent fibers.
[Figure: tectorial membrane, outer and inner hair cells, efferent fibers, afferent fibers of the VIIIth nerve.]
Both the outer and the inner hair cells are depolarized under the influence of basilar membrane
deflection. The outer hair cells also contain contractile elements that, by elongation or contraction,
augment the travelling waves. This creates acoustic waves (oto-acoustic emissions).
Function of the inner ear
A lesion of the outer hair cells results in a reduction of the
oto-acoustic emissions.
Function of the inner ear
The movements of the outer hair cells generate microphonic potentials which can
be recorded for diagnostic purposes.
[Figure: oto-acoustic responses to a click and to a tone; 10 ms time scale.]
Q3 How is the direction of sound analyzed?
Positional information of acoustic objects
The direction of a sound-emitting object can be judged from
the intensity or time difference of the acoustic waves at the left and the right ear.
At high frequencies: the head casts a sound shadow, i.e. the
stimulus intensity is different at the left and the right ear.
At low frequencies: a sound wave arrives at different times
at the left and the right ear.
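The low-frequency cue can be quantified as the interaural time difference (ITD). A minimal sketch of the simple path-length model ITD = d·sin(azimuth)/v, where the interaural distance of 0.18 m is an assumed, illustrative value:

```python
import math

SPEED_OF_SOUND = 340.0  # m/s in air (as above)
HEAD_WIDTH = 0.18       # m; assumed interaural distance (illustrative value)

def itd_seconds(azimuth_deg: float) -> float:
    """Interaural time difference for a distant source.

    Simple path-length model: ITD = d * sin(azimuth) / v,
    with azimuth 0 deg = straight ahead, 90 deg = directly to one side.
    """
    return HEAD_WIDTH * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

print(itd_seconds(0))          # 0.0 s: no time difference for a frontal source
print(itd_seconds(90) * 1e6)   # ≈ 529 µs: maximal ITD for a lateral source
```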
Positional information of acoustic objects
The signals from the left and right ear are compared in the superior olive.
[Figure: ascending auditory pathway for the left and right side: hair cell → VIIIth nerve → Nucl. cochlearis → MNTB and Nucl. lat. olivae superioris (LSO) → Colliculus inferior → Thalamus (CGM) → auditory cortex; the pathways cross the midline.]
Q4
How do auditory neurons encode the intensity and
the frequency of sound?
Auditory signalling in the CNS
The encoding of sound intensity is based on two mechanisms:
- increase in AP frequency
- increase in the number of active neurons
[Figure: responses of neurons A and B to a stimulus of increasing intensity.]
Auditory signalling in the CNS
In a single fiber of the N. acusticus, each AP is strictly in phase with the sound wave,
but a single fiber does not discharge on every cycle of the sound wave.
Auditory signalling in the CNS
But the frequency is faithfully reproduced by the
neuron ensemble (volley principle).
[Figure: stimulus, spikes of a single neuron, and the pooled response of the neuron ensemble.]
Up to 4 kHz, frequencies are well reproduced by this mechanism.
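The ensemble mechanism can be illustrated with a toy simulation (a sketch, not a physiological model): each fiber fires phase-locked but skips cycles at random, yet the pooled ensemble marks nearly every cycle of the stimulus.

```python
import random

random.seed(0)
N_CYCLES = 100   # cycles of a pure tone
N_FIBERS = 20    # fibers in the ensemble
P_FIRE = 0.3     # probability that a fiber fires on any given cycle

# Each fiber fires phase-locked but skips cycles (its firing rate is limited).
spikes = [{c for c in range(N_CYCLES) if random.random() < P_FIRE}
          for _ in range(N_FIBERS)]

# No single fiber discharges on every cycle...
assert all(len(s) < N_CYCLES for s in spikes)

# ...but the pooled ensemble responds on (almost) every cycle,
# so the stimulus frequency is preserved in the population response.
ensemble = set().union(*spikes)
print(len(ensemble))  # close to N_CYCLES
```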
Auditory signalling in the CNS
Each neuron has a tuning curve. At the preferred frequency ('characteristic frequency'),
the intensity threshold is lowest.
Recording from individual axons of the acoustic nerve:
after exposure to a cytotoxic drug, the 'best frequency' is lost.
Auditory signalling in the CNS
With each new level of information processing,
the tuning curves become sharper and sharper.
[Figure: tuning curve of a neuron in the cochlear nucleus; threshold (dB, 20-80) vs. frequency (2-15 kHz); the excitatory response at the characteristic frequency is flanked by an inhibitory surround.]
Auditory signalling in the CNS
The receptive fields in the inferior colliculus and auditory cortex have
an inhibitory surround. A decrease of inhibition (e.g. withdrawal of a
GABA-potentiating drug) can cause disturbances of sound perception.
The localization of an auditory object in space is also impaired.
This may cause panic.
Auditory signalling in the CNS
The frequency of an auditory stimulus is, in addition, encoded by the position of
active neurons within the tonotopic map.
Thus, sound frequency is determined by two mechanisms:
- by the discharge pattern of the individual auditory neurons or neuron ensembles
- by the position of the active elements with regard to the entire tonotopic map
Auditory signalling in the CNS
The auditory maps in the superior colliculus and the primary auditory cortex
are multidimensional.
Auditory signalling in the CNS
The coordinate system of the auditory space is head-related.
In order to attribute auditory and visual signals to the same
object at a given position in space, the auditory map is coupled to
the retina-related visual map.
The superposition of the auditory, head-related map onto the
visual, retina-related map is carried out in the superior colliculus
and develops postnatally.
Q5
How is language represented in the
cerebral cortex?
Language processing
The basic acoustic elements of language are the phonemes.
Spoken sentences can be displayed in a frequency spectrogram.
[Figure: frequency spectrogram (Hz vs. time) of the spoken sentence 'To catch pink salmon'.]
Language processing
To perform quantitative tests of language perception and production,
it is convenient to work with reduced sound patterns ('artificial language').
Language processing
Language processing can also be tested in non-human primates,
e.g. with a keyboard of lexical symbols for communication with signs.
Language processing
Are non-human primates able to use language for communication?
In higher primate species, stimulation with language symbols can elicit neuronal
activity in areas equivalent to the human language areas (Wernicke‘s and
Broca‘s fields), but organs for speech (articulation) are missing.
Language processing
PET analysis of cortical activity in connection with different language tasks
Language processing
Cortical areas involved in language processing
[Figure: lateral view of the cortex with the areas involved in language processing: parietal associative cortex (A. 39); face representation in the motor cortex (MS1); Fasciculus arcuatus; associative visual cortex (A. 18, 19); primary visual cortex (A. 17); Broca's field; primary auditory cortex (A1, A. 41, 42); associative auditory cortex (A2); Wernicke's field.]
Language processing
The Wernicke-Geschwind model of language representation in the cerebral
cortex.
[Figure: information flow for the task 'Point to the butterfly and tell me its color': LGN → visual cortex → associative visual cortex → parietal associative cortex → Wernicke's area → Broca's area → MS cortex.]
Language processing
Humans communicating by sign language display activity in the cortical
language fields.
[Figure: signed sentence 'We arrived in Jerusalem and stayed there'.]
American Sign Language (ASL), used by the deaf, is a language
that largely follows the linguistic rules of spoken language.
A lesion in the left temporal lobe produces sign-language aphasia.
Language processing
A common basis of language in all humans?
Basic observation: newborns recognize the phonemes in the same way,
regardless of where they are born.
Topography of the phonemes in the cortex:
- Newborns (up to 6 months): initially separate representation of the phonemes r and l;
  later on, phoneme representations are grouped according to similarity.
- Grown up with German or English: the representations of r and l remain separate.
- Grown up with Japanese: the representations of r and l fuse.
Language processing
Different forms of aphasia:
Wernicke's aphasia: the patient does not understand the task and says
nothing but nonsense.
Broca's aphasia: the patient understands the task (points to the right figure,
understands that this is an insect, etc.) but cannot say 'it's yellow';
difficulties in word finding; incorrect grammar; articulation preserved.