Transcript
S2: Neuroscience: 2:00-3:00
Wednesday, February 4, 2009
Dr. Lester
The Auditory System
Scribe: Andrew Treece
Proof: Dylan Vaught
I. The Auditory System [S1]:
Try to keep in mind Dr. Gamlin's lecture about general sensory transduction, and remember the attributes of
a sensory stimulus: Modality (today we're talking about sound), Intensity (usually encoded in the number/frequency of action
potentials), Duration (for sound, measured by how long cells are stimulated), and Location (interesting for audition
because we have mechanisms to detect the direction of sound, along with a tonotopic map).
II. Overview [S2]: Skipped, but keep in mind that today's lecture is about cranial nerve VIII.
III. Learning Objectives [S3]
a. Pretty straightforward. These will probably be what's on the test.
b. To generalize: out in the periphery there is some sort of receptor, a neuron or specialized cell; the signal might cross a
synapse, probably goes through the thalamus (but not always), and then continues to the cortex.
c. Some processing in other structures may add complexity to the signal.
d. Think of it as relay and selective signaling.
IV. What is sound? [S4]
a. Most of this concentrates on the mechanosensory processing that goes on in the cochlea, and we'll focus on
how a cochlear implant works.
b. So we have a mechanical signal: sound waves, with the air squished together, moved further apart, and banging on
something, ending up at the eardrum... that's the basic nature of sound, and we have to translate it.
c. Good idea to know the normal hearing range because we are going to lose some of that.
V. Loudness range [S5]
a. We have the ability to encode across a big range of sounds, from very weak to intense, so they basically
have to be coded logarithmically.
b. We have a law that Dr. Gamlin talked about concerning decibels, which are measured using a constant times the
log of the ratio to a reference intensity: the minimal, just-detectable intensity, i.e., the threshold of hearing at your best
frequency (see the formula below).
c. If you change the intensity, the result comes out in log units, decibels, from zero to about 100, which is a huge range of
intensity; every tenfold increase in sound pressure is perceived as an equal increment.
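A minimal statement of the law (this is the standard decibel definition, not copied from the slide; $I_0$ is the just-detectable reference intensity, and the pressure form carries a factor of 20 because intensity goes as pressure squared):

$$L = 10 \log_{10}\!\left(\frac{I}{I_0}\right)\,\text{dB} \qquad\qquad L = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\,\text{dB}$$

So a sound at 10 times the reference intensity is 10 dB, 100 times is 20 dB, and so on; that is why a scale of 0 to 100+ dB covers such a huge physical range.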
VI. Real life levels [S6]
a. This is a scale to give you perspective on what decibel values mean in terms of real-world intensity.
VII. Central Auditory pathways [S7]
a. skipped
VIII. Roles of the ear [S8]
a. We will talk about these structures more in Gross Anatomy, so all he wants us to know is that:
b. The outer ear collects sounds, and the middle ear transmits them, with some amplification, from the eardrum to the
processing unit, the cochlea.
c. It involves these tiny bones that we’ll learn more about later.
IX. Middle and inner ear [S9]
a. We are interested mainly in the spiral structure, the cochlea, and in its anatomical organization in respect to how
it functions.
b. The movement of ions into and out of cells in the cochlea is unusual because of the ion concentrations there,
which allow it to send signals; that is an important part of turning mechanosensation into an electrical signal.
c. But basically, sound is transmitted as sound waves; the pressure changes are transmitted into a fluid-filled
space, travel right up to the apex, and come back out the round window to let the pressure escape again.
X. Learning objective #1 [S10]
XI. The cochlea in section [S11]
a. There are in effect three different chambers if you cut this in cross section:
i) The scala vestibuli, which the pressure wave enters from the oval window
ii) The scala tympani, going back out to the round window
iii) The scala media, an odd one
b. The processing unit is the Organ of Corti that sits on the basilar membrane
c. So what we have done is cut transversely through the coil; we could stretch it out, and it would look like this no
matter where we cut.
INSERTED QUESTION: Vibration of the basilar membrane encodes sound:
a. frequency
b. loudness
c. localization
d. all of the above
The answer is all of the above. The basilar membrane is the initial recipient of the mechanical stimulus: a
taut membrane that vibrates and drives the hair cells, the initial sensory receptor cells. This has to encode frequency,
loudness, and location... everything starts at the basilar membrane and ends up at the cochlear nucleus when it
comes in through CN VIII. Everything starts at the basilar membrane.
XII. Learning objective #2 [S12]
XIII. The basilar membrane [S13]
a. Frequency is encoded along the length of the basilar membrane.
b. It is narrow at one end and wider at the other: stiff and narrow at the base, near the oval window, where the
mechanostimulus first hits. High-frequency sounds are detected at the base, and low-frequency sounds out at the
apex.
XIV. Traveling waves [S14]
a. This shows you that the mechanical stimulus maximally distorts the membrane where it best causes it to
resonate.
b. Low frequency sounds travel all the way to the apex to get the maximum displacement.
c. If you put different frequencies in we get higher frequency sounds near the base and lower frequency sounds
near the apex.
d. You can already see that frequency is encoded by where you are along the basilar membrane.
e. A louder sound causes more vibration: same frequency, but more intense, so it gives a bigger
displacement. You already have frequency and intensity encoded just in the vibration of the membrane.
Aside: This leads to how a cochlear implant works. Because frequency is already encoded by the basilar
membrane, the signal that reaches the receptor cell and is passed on also encodes frequency. An individual
nerve cell coming out close to that location is going to carry that same frequency all the way to the cortex.
Within this modality it is like the labeled line hypothesis: being a hair cell in the ear means we detect sound,
but within the ear, depending on where we are, we also detect a specific frequency of sound (see the sketch
below).
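The place-to-frequency map along the membrane is often approximated by the Greenwood function; the following is a minimal sketch, assuming commonly published human parameter values (the numbers A, a, k are not from the lecture):

```python
def greenwood_frequency(x):
    """Approximate best frequency (Hz) at relative position x along the
    human basilar membrane, with x = 0 at the apex and x = 1 at the base.
    A, a, k are commonly cited estimates for the human cochlea."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Base (stiff, narrow end at the oval window) -> high frequencies;
# apex (wide end) -> low frequencies, as the traveling-wave picture says.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: ~{greenwood_frequency(x):,.0f} Hz")
```

Note that the output runs from roughly 20 Hz at the apex to roughly 20 kHz at the base, matching the normal human hearing range mentioned earlier.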
XV. Different solutions in chambers [S15]
a. These are our different fluid filled chambers, so this is a cross section right through the whole spiral.
b. So depending on where we are we will encode a different frequency, but the encoding is basically the same no
matter where you are.
c. We will just look at one little cross section.
d. Key thing: CSF is generally low in potassium, and potassium tends to maintain our resting potential; when
we want to change our membrane potential, potassium is not the ion we generally think of moving... we generally think of
sodium coming in to depolarize.
e. For whatever reason, this medium (the endolymph of the scala media) contains very high potassium, such that its
gradient drives potassium into the cell. Potassium carries the signal. For completeness: tiny capillaries form the stria
vascularis, and they are responsible for producing this high-potassium solution. You might expect that, because
capillaries (with the glial end feet) are important for removing and concentrating potassium; so we use our
vasculature to produce this special high-potassium medium.
f. This involves the hair cells and cilia.
XVI. Organ of Corti [S16]
a. What we have done here is ripped off this tectorial membrane that just sits on the top here (from previous
slide), and we are looking down onto this Organ of Corti and we’re really just looking at the top surface of these
cells.
b. We are going to see the cilia stick out in rows, and there are two sets of different types of cells:
c. Inner hair cells: a single row, THE important ones for receiving the auditory info so if you lose these you will be
deaf.
d. Outer hair cells: multiple rows, so there are more of them; these are more a part of regulating sensitivity.
e. So you can damage outer hair cells and get more subtle problems, but if you damage the inner hair cells and
their cilia, you'll go deaf, at least for particular frequencies.
XVII. Hair cell structure [S17]
a. But all hair cells are basically the same: they are modified epithelial cells that are kind of box shaped.
b. They have cilia and are contacted by afferent and efferent nerves.
c. So here is our hair cell with support glial cells and synaptic input and output and cilia.
d. Don't worry too much about the mechanism, but the cilia are connected to each other by tip links.
XVIII. K+ is the initial depolarizing signal [S18]
a. This is the basic flow of K+, but don’t worry about recalculating gradients and things like that.
b. You should know that in this system K+ comes into the cell, and it actually comes in through the cilia.
c. The cilia are covered by membrane, and the K+ comes in there, because the channels that open up
are on the cilia themselves.
d. K+ comes down its electrochemical gradient, and just like any other cell, K+ leaves through other channels that
open up.
e. These can be voltage-gated or calcium-activated channels.
f. So K+ comes in; if K+ comes in, we are increasing positive ions on the inside, so there is a
depolarization, calcium channels open, calcium acts on vesicles of transmitter, the vesicles fuse with the
membrane and release their contents of glutamate (sensory systems also use glutamate as a transmitter),
and the glutamate goes across the synaptic cleft, hits a receptor, and transmits the signal.
g. So it's still depolarization, transmitter release, and synaptic signaling once we've left this cell, and the
mechanism really just involves K+ moving through the entire cell: into the cell and then back out (see the
sketch below).
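A toy model of the cascade just described, as a minimal sketch; all of the numbers here are illustrative placeholders, not physiological constants from the lecture:

```python
def hair_cell_response(deflection):
    """Toy hair-cell transduction: stereocilia deflection opens tip-link-gated
    channels; K+ flows IN (the surrounding medium is K+-rich), depolarizing
    the cell; depolarization opens Ca2+ channels; Ca2+ entry drives glutamate
    release at the ribbon synapse. All numbers are illustrative only."""
    # Fraction of transduction channels opened by the deflection (0..1)
    p_open = max(0.0, min(1.0, deflection))
    # K+ influx depolarizes the cell from rest (placeholder values, in mV)
    v_rest, v_max = -60.0, -20.0
    v_m = v_rest + p_open * (v_max - v_rest)
    # Ca2+ channels activate with depolarization above a placeholder threshold
    ca_entry = max(0.0, (v_m - (-45.0)) / 25.0)
    # Relative glutamate release scales with Ca2+ entry
    return v_m, ca_entry

for d in (0.0, 0.5, 1.0):
    v, glu = hair_cell_response(d)
    print(f"deflection {d:.1f}: Vm = {v:.0f} mV, relative release = {glu:.2f}")
```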
XIX. Hair cell synapse [S19]
a. This is the synapse; it's a little weird and is called a ribbon synapse, but basically it has a specialized active
zone with a bunch of vesicles.
b. Calcium-dependent vesicle fusion and transmitter release.
c. There is a big electron-dense body where the vesicles are, but we still have the individual clear, small synaptic
vesicles, which contain glutamate; they are the ones that have to fuse with the plasma membrane.
XX. Stereocilia and tip links [S20]
a. This is how the cilia work, and Dr. Gamlin will talk more about this.
b. These things are attached to each other by these links; if a link is connected to a channel and we alter the
tension on it, we can cause mechanical rearrangement of the subunits in that channel, and maybe the channel will
open.
c. We have talked about stretch channels, voltage channels, and transmitters opening channels, and this is really the
other major way... actual mechanical changes opening channels.
d. So we pull open this channel and K+ goes in and causes the signaling cascade.
XXI. As basilar membrane moves… [S21]
a. The other thing: the old textbook view was that the tectorial membrane covering the top of the hair cells
was in contact with the cilia, and shearing between the two caused the cilia to move... THIS IS NOT TRUE
b. Actually, the cilia are just sitting in the medium, and when the basilar membrane vibrates, the fluid shifts around,
and the movement of fluid causes the cilia to move back and forth.
c. NO DIRECT CONTACT
XXII. Encoding Frequency [S22]
a. We've got our basilar membrane and all the hair cells (thousands along its length), so if the basilar
membrane vibrates at a spot, the hair cell sitting there responds to the displacement and releases
transmitter.
b. So that one will depolarize, release transmitter, and conduct.
c. So each cell will be tuned to a specific frequency just because of its position on the membrane. If a cell is at
the low-frequency end and there is a high-frequency sound, our low-frequency cell won't detect it. These cells
are tuned by location, not by specific cellular properties; if that part of the basilar membrane is not vibrating, the
cell won't detect the signal.
d. ***Note in the picture that the graph on the bottom increases frequency from left to right and the picture at the
top increases in frequency from right to left (They don’t match up)
e. To remember which part of the basilar membrane is which, think about it being backwards: One would think
that the apex would be the pointed end and the wider portion the base, but INSTEAD the base is the skinny part
and the apex is the wider part... see slide 13 for clarification.
XXIII. Learning Objective #3 [S23]
XXIV. Cochlear Implant [S24]
a. The reason I say this might help you think about how this whole transduction system works is the cochlear
implant. It's a nice, smart invention that gets away with taking only maybe eight key frequencies and using
just those (the voice may be distorted, but most of us would understand the sound).
b. The premise of the implant: we have lost our ability to hear incoming information because we've lost our hair
cells. The basilar membrane may be vibrating, but we've lost the hair cells that get the signal to the nerve; the
nerve itself is intact, and for the implant to work that nerve has to be good.
c. So we have to bypass that step, and we can, because we don't need to encode every frequency from
high to low; we just have to pick key ones to understand speech.
d. The implant must do two things: gather incoming info, because the hair cells can't do that anymore, and then
stimulate the appropriate nerves. It can do that because part of the implant is a microphone, which picks up the
sound and is attached to a transducer/frequency analyzer that analyzes the frequencies. The signal must be
organized temporally and in terms of frequency; the analyzer then sends an appropriate signal, at the right time,
to a specific part of the cochlear implant, which is basically wires wound into the cochlea itself, sitting just
under the basilar membrane (sketched below).
e. So, instead of the basilar membrane vibrating and saying "we are getting this frequency of sound at this
particular time," the microphone sends that signal to the electrode at the appropriate position, and it causes an
electrical stimulus that stimulates the underlying nerve, which then carries the signal to the cochlear nucleus.
f. It's under the basilar membrane because that is where all the afferent and efferent connections are. The wire
from the implant sits there and stimulates the axons that carry the signals from the hair cells, so the
following structures, the cortex, etc., would think that the hair cells are still working.
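A minimal sketch of the frequency-analysis step (illustrative only; real processors use more sophisticated strategies, and the channel count, band edges, and sample rate below are assumptions, not device specifications):

```python
import numpy as np

def implant_channels(signal, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
    """Split a microphone signal into a few log-spaced frequency bands;
    each band's energy would drive one electrode, positioned where the
    basilar membrane would have encoded that band. Illustrative only."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Log-spaced band edges mirror the cochlea's roughly logarithmic map
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(edges[:-1], edges[1:])]

# Example: a 1 kHz tone should drive mainly one mid-frequency electrode
fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
energies = implant_channels(np.sin(2 * np.pi * 1000.0 * t), fs)
print([round(e) for e in energies])
```

Only the electrode whose band contains 1 kHz receives appreciable energy; that is the "key frequencies" idea, where eight well-placed channels are enough for intelligible speech.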
XXV. Encoding Loudness [S25]
a. Loudness: as I told you, you only need to worry about this in log terms.
b. More intense pressure waves mean a bigger amplitude of the sound wave; it hits the eardrum, which
transmits it. The basilar membrane, at whatever frequency that sound is, is going to vibrate more, so you get a
bigger depolarization, more transmitter released, and more action potentials fired (see the sketch below).
c. A little sound means a little depolarization, little transmitter release, and fewer action potentials fired.
d. Loudness is encoded by the frequency of action potentials, frequency is encoded by the location on the basilar
membrane.
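The rate code for loudness, as a minimal sketch; the threshold, slope, and maximum rate below are illustrative placeholders, not measured values:

```python
import math

def firing_rate(amplitude, threshold=0.1, max_rate=300.0):
    """Toy loudness code: a bigger basilar-membrane displacement means more
    depolarization, more transmitter, and more spikes per second, with a
    roughly logarithmic compression (echoing the decibel scale) and a
    saturation at the neuron's maximum rate. Numbers are illustrative."""
    if amplitude <= threshold:
        return 0.0
    return min(max_rate, 100.0 * math.log10(amplitude / threshold))

# Tenfold steps in amplitude give equal steps in firing rate, until saturation
for amp in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"amplitude {amp:7.1f}: {firing_rate(amp):5.1f} spikes/s")
```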
XXVI. Innervation [S26]
XXVII. Innervation Ratios [S27]
a. These are inner hair cells, with massive innervation converging on each hair cell. One reason for that is
that the signal goes to different parts of the brainstem, and different parts of the cochlear nucleus look at
location vs. loudness vs. frequency, so the signal has to go to different parts of that nucleus and then
to different parts of the brain.
b. So we have massive inputs coming from individual hair cells.
c. But our outer hair cells have less innervation, and you can think of them as more regulatory.
XXVIII. Cochlear Amplification (OHCs) [S28]
a. It turns out that one of their jobs is to regulate sensitivity. There are inherent mechanical problems
when you have fluids trying to respond to vibration: fluids dampen vibrations, so one role of the outer
hair cells is to counteract that dampening, giving our system good sensitivity.
b. Efferents go mainly to the outer hair cells, and afferents bringing info in come mainly from the inner hair cells;
that is what you mainly need to remember.
XXIX. Learning Objective #4 [S29]
a. Ascending the auditory axis [S30]
***Correction: the IO on the slide should be IC, for inferior colliculus.
b. We know a lot about cochlear processing, but once we get into the brain we don't know as much; we know
it gets to the cortex... and you do need to know a few things.
c. Cranial nerve VIII: the afferents have their cell bodies in the spiral ganglia (equivalent to the
dorsal root ganglia) and come into the cochlear nucleus.
d. One of the important things to remember is that the auditory system is highly bilateral once you leave the
cochlear nucleus, so it is hard to go deaf in just one ear unless you damage the hair cells in that ear.
e. One of the reasons it has to go bilateral is that you use the difference between the two ears to detect where the
sound is coming from. So you need to process both ears at the same time to know where it's coming from, and
that happens in the superior olive, with bilateral innervation.
f. We go through the trapezoid body, then up through the lateral lemniscus, up to the inferior
colliculus in the midbrain, up to the MEDIAL geniculate (a relay), then up to the cortex.
g. In the cortex there is a tonotopic (frequency) map, which does not encode location.
XXX. Output of cochlear nucleus [S31]
a. This is just to show you that it’s highly bilateral.
XXXI. Learning objective #5 [S32]
XXXII. Sound localization part 1 [S33]
a. There are two parts of the superior olivary nucleus, lateral and medial; both are interested in the
location of sound, and they each do it slightly differently.
XXXIII. Medial superior olive [S34]
a. The first way we do it: there is going to be a subtle TIME difference.
b. If a sound is on my left, I am going to hear it on the left a little bit before the right.
c. The brain makes use of that: inputs come to our medial superior olive from both ears, but the sound reaches
one side before the other, and the system looks for that difference.
d. There is a divergent input contacting a lot of cells in the superior olive.
e. If the sound is close to the midline, the two signals come in at close to the same time, so you get maximal input.
If the sound is far lateralized, it comes in at the ipsilateral ear and hits all these cells, but since it's so far
away it gets to the other ear much later.
f. One ear's input arrives at all these cells at the same time and excites them, but that alone is not enough to fire
an action potential; you also need the input from the other ear, and which cell fires depends on how long that
input takes to get there.
g. It's based on the timing; don't worry about the equation (see the sketch below).
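This delay-line-plus-coincidence scheme is often called the Jeffress model; a minimal sketch, with illustrative delays (the cell count and delay range below are assumptions, not anatomical values):

```python
def mso_estimate(itd_us, n_cells=11, max_delay_us=500.0):
    """Jeffress-style coincidence detection: each MSO cell pairs a fixed
    internal delay on one ear's input against the other ear's input. The
    cell whose internal delay cancels the interaural time difference (ITD)
    receives both inputs simultaneously and fires most. Illustrative only."""
    step = 2 * max_delay_us / (n_cells - 1)
    best_delay, best_mismatch = None, None
    for i in range(n_cells):
        internal = -max_delay_us + i * step  # this cell's built-in delay
        mismatch = abs(itd_us - internal)    # residual arrival-time gap
        if best_mismatch is None or mismatch < best_mismatch:
            best_delay, best_mismatch = internal, mismatch
    return best_delay  # the winning cell's tuning = the ITD estimate

# Sound arriving 300 microseconds earlier at one ear: the cell whose
# internal delay compensates for 300 us wins, localizing the sound.
print(mso_estimate(300.0))  # -> 300.0
```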
XXXIV. Sound localization part 2 [S35]
XXXV. Lateral superior olive [S36]
a. In the lateral part it is based on the INTENSITY. So imagine this system going on both sides.
b. At the ipsilateral ear you get an excitatory input, which strongly excites a neuron; if the sound is weak on the
other side, you won't get much excitation on that contralateral side.
c. You won't drive the other nucleus very much either, because there is an inhibitory signal over there; so you
get very little inhibition coming from one side and maximal excitation from the other side.
d. As the sound moves from lateral to medial, you get less excitation from one side and more from the other,
and you start to inhibit the side that was originally more excited. Coming to the midline, you get equal
excitation from both sides, so the readout is which cell on which side is more excited. He said to look at the
diagram and think through it; these are very subtle differences in time and intensity (see the sketch after the
question below).
Inserted Question: Could such a system differentiate between sounds directly in front or behind a subject?
Answer: No, not with only the auditory system, but you can use the visual system and other cues to help.
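A minimal sketch of the lateral-superior-olive comparison (illustrative; treating firing as a simple difference of excitation and inhibition is an assumption made for clarity, not the lecture's claim):

```python
def lso_output(ipsi_db, contra_db):
    """Toy LSO cell: the ipsilateral ear excites it, the contralateral ear
    inhibits it (via an inhibitory relay), so its firing reflects the
    interaural level difference (ILD). Illustrative only."""
    return max(0.0, ipsi_db - contra_db)

# A sound off to the left is louder at the left ear (the head shadows the
# right ear), so the left LSO fires and the right LSO is suppressed.
print("left LSO :", lso_output(60.0, 45.0))  # 15.0 -> sound is leftward
print("right LSO:", lso_output(45.0, 60.0))  # 0.0  -> suppressed

# At the midline the two levels match and the two sides fire equally.
print("midline  :", lso_output(50.0, 50.0), lso_output(50.0, 50.0))
```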
XXXVI. Inferior colliculus [S37]
a. It receives the sound-location information, but it's not well known what role it plays.
b. In lower animals the colliculi can be very big, and it's thought to be really important for orienting
toward sounds, like the reflex of turning your head toward a loud sound.
c. It probably has multiple modalities, like vision, in there somewhere. It does have to do with orientation and
directing attention.
XXXVII. Medial geniculate [S38]
a. Nothing more than processing relay cells.
XXXVIII. Auditory cortex [S39]
a. Heschl's gyrus is deep inside the Sylvian fissure.
b. This shows there is a tonotopic map.
XXXIX. Tuning of auditory cortical neurons [S40]
a. Some cells may be tuned more to location and some more to intensity and some to frequency.
XL. Relationship to other areas [S41]
a. There are other parts of the cortex that help us relate those sounds to speech and whatnot.
b. Wernicke's area is involved in further sensory processing and interpreting what we are hearing.
c. Broca's area is for speech and language.
d. And we have connections between these areas.
(end time 56:23)