Transcript
Suite 43, Cleveland House, Cleveland
4163
PO Box 265
Cleveland, QLD
Australia 4163
Phone +61 7 3286 3901
http://bradleyreporting.com
ABN 71908 010 981
Subtitled video link: https://www.youtube.com/watch?v=I0dt6KsfXfY
For information on enabling captions in YouTube:
https://support.google.com/youtube/answer/100078?co=GENIE.Platform%3DAndroid&hl=en
Transcript:
JACQUI: Hi. My name is Jacqui Cashmore and I have recently
taken over the role that Trudy was doing.
I would like to welcome you today on behalf of RIDBC Renwick
Centre and Cochlear Limited to the first of our 2016 HOPE parents
series lectures.
I hope you enjoy today. Today we have Julie Decker presenting for
us on a parent's guide to the cochlear implant journey. Julie is a sales
clinical manager for the Australian region of Cochlear Australia-Pacific.
Julie has been involved in the field of cochlear implants for over 20
years. Her background includes experience working with listening
and spoken language specialists in educational and early
2016-07-20 11.36 HOPE for Families- Julie Decker
Page 1
intervention settings. This role enables Julie to draw on her
knowledge of listening and language development, audiology skills
and technical knowledge of cochlear implants to improve outcomes
for cochlear implant users and hearing professionals across
Australia.
I want to go through a couple of housekeeping things before I hand
over. This session is endorsed by AG Bell Academy and BOSTES.
So, can you please send through your details after the session so
we can update both of the sites.
It’s being recorded and you will be sent a link to the captioned
recording in a couple of weeks. We have Jason from Bradley
Reporting joining us today. We appreciate their ongoing support.
We want a nice clean recording today, so we will have everybody
muted. If you have any questions, please type your question into
the chat function and I will interrupt Julie and ask for questions. This
is a one-hour lecture, and I will hand over to Julie. Go ahead, Julie.
JULIE: I've been asked to speak today on a parent's guide to the
cochlear implant journey. As Jacqui mentioned, I work for Cochlear
Limited here in Sydney. But in my previous life I've worked in quite
a few different clinics around Australia and overseas in cochlear
implants. So, I'm hoping to overview a general journey for families.
I'm happy to take questions at any time. You can type them into the
chat box and Jacqui will pose them or what I might also do is just
stop at the end of each mini section and just ask if there are
questions. I'm happy to take time to answer questions just to make
sure that you get everything out of the session that you’d like.
I'm hoping to cover I guess a broad -- why your child should have
an implant if their hearing loss is significant enough. Some of the
professionals that you will meet on the journey. A very brief
overview of surgery. And then a look at how we begin to listen. And
some research in the area of cochlear implants.
So, let's start at the very beginning. The beginning of the story for
Cochlear is this man here, Professor Graeme Clark, the inventor of
the Australian cochlear implant.
He developed the implant at the University of Melbourne in the late
60s/early 70s. But for families the journey basically to start was
trying to determine whether there was a presence of hearing loss.
And through that process we have a number of tests that look at
detection. Then we want to move from detection to discrimination
and identification, hopefully to comprehension and communication.
So, we will take a journey through all of those steps.
First, you need to be able to hear something before you can
discriminate it.
We have a number of tests that audiologists conduct looking at
primarily detection but a little bit of overlap into discrimination with
some of the cortical testing. And then some of these further higher
functions that we do to assess comprehension, communication,
identification.
So, the first port of call generally is trying to assess the hearing
system. Our hearing system works by our ear collecting sound
vibrations and channelling them down the ear canal. We have bones in our
middle ear space that conduct those vibrations off our eardrum into
a mechanical movement that adds emphasis to the sound.
That footplate rests up against this bony chamber called the
cochlea. Inside the cochlea is a membranous labyrinth that has the
sensory receptors for hearing floating in fluid.
So, as the footplate moves in the base of the cochlea, it creates
movement within the cochlea that stimulates the nerve endings, or
the hair cells, of hearing. That stimulus goes to the hearing nerve.
The hearing nerve sends it up a pathway within the brainstem up to
our brain. So, we have a number of different tests that look at the
different stations along the way.
I'm going to talk about tympanometry a little bit, which looks at the
middle ear space, otoacoustic emissions and bone conduction,
which looks at the cochlea function. The ABR, or steady state
potential, looks at the brainstem activity. The cortical auditory
evoked potentials look at the top of the brain or the cortex. When
you are looking at a pure tone audiogram or speech perception,
you’re really looking at the whole system.
So, ABR testing, auditory brainstem response testing, evaluates
how sounds travel along the pathway up to the level of the
brainstem. Now, this information can be frequency or tone specific
or it can be a broad stimulus. There is also a test called auditory
steady state response, which some children have depending on
where your hearing testing is done, which is frequency specific and
results generally in a plotting of an audiogram-like representation of
your hearing loss.
Otoacoustic emissions are an acoustic response produced inside
the cochlea. So, your ear or your cochlea creates an echo to a
sound it hears. This echo bounces back out of the cochlea and it's
measured by the OAE machine. This gives us information on how
the cochlea is functioning and then how that relates to the ABR to
help us determine the type of hearing loss.
We then have cortical auditory evoked potentials, which are looking
at the top of the brain or the cortex response to sound. Generally
we use speech sounds like ma, ba, ga. This can be done with
hearing aids or without hearing aids, aided or unaided.
We also look at tympanometry when doing these assessments and
we are looking at how well the eardrum is vibrating. So, how is the
movement in the eardrum? If your middle ear space is full of fluid
your eardrum mobility is restricted. And your middle ear and the ear
infection that you have in that space is creating a hearing loss in
some cases.
So, in some cases you can have a hearing loss just by that
blockage or it could add extra loss to something that is happening
at the level of the cochlea. They also do an acoustic reflex
measurement. When you hear a loud sound, a muscle in your middle
ear pulls on one of the bones in the middle ear space as a reflex to
loud sound, and that requires information to travel up to the lower
part of the brain and back down again. So, it's looking at that neural
pathway again. Also, you have to have a certain amount of hearing
sensitivity to be able to elicit the reflex. So, if there is a significant
hearing loss present, a sound that would be extremely loud to
someone with normal hearing is not loud enough to create that
reflexive result.
Those are the sorts of things you will have done when children are
young or difficult to test. And they are done by the professional and
their equipment that basically measures those neural responses or
physical responses from the system and don't actually need
participation from the child or the adult or the teenager.
We also look at what we call behavioural testing. That first lot of
tests is what we call objective testing, because essentially it's a
measure of a response taken through equipment. Behavioural
testing, in contrast, is where we are trying to train a response
to sound. So, our ability to do that depends on the age of
the child. And we use different approaches trying to look at different
frequencies or pitches and different levels of intensity or loudness.
Sometimes the test will be done through a loud speaker. So, that
means the child is responding to the sound coming from a single
source, and both ears potentially could hear, so any
response we get represents the better ear.
We then move to putting headphones on, which then lets us test
the right side versus the left side. Again, it tests the whole pathway
up to the level of the brain.
We then also have a bone conductor, which is a little hard vibrator
placed behind the ear on the bone, which creates vibration of the
skull, which then measures the pathway of hearing at the level of
the cochlea up to the brain. So it bypasses the ordinary air pathway
that we get through headphones and a loud speaker.
So we as people with good hearing acuity, we hear our own speech
primarily through bone conduction. As we talk, our skull is vibrating.
This is why when you listen to yourself on a tape recorder or
answering machine you sound quite different than you sound to
yourself, because your pathway of hearing it is slightly different. We
do hear through both means, and we need to test both means so
we can try and separate where the locus of the hearing loss is.
So, if your hearing levels with the bone conductor match the
headphones, then it's a sensory loss, because there is nothing
in the pathway obstructing.
If there's a difference between those, there is what we call a
conductive overlay or a blockage in the hearing pathway. Often this
is treatable, if it's middle ear fluid either by grommets or antibiotics.
If it's something more significant like a cholesteatoma, you may
need to have surgery to resolve it.
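The air-versus-bone comparison described above can be sketched as a small example. The 20 dB normal-hearing cut-off and the 10 dB air-bone gap criterion below are common audiological conventions, not values given in the talk:

```python
# Illustrative sketch only: compare air- and bone-conduction thresholds
# (in dB HL) to suggest where a hearing loss sits. The 20 dB normal cut-off
# and 10 dB air-bone gap criterion are common conventions, not fixed rules.
def classify_loss(air_db: float, bone_db: float, gap_criterion_db: float = 10.0) -> str:
    if air_db <= 20:
        return "normal hearing"
    gap = air_db - bone_db
    if gap <= gap_criterion_db:
        return "sensorineural"   # bone matches air: the loss is at the cochlea
    if bone_db <= 20:
        return "conductive"      # cochlea responds normally; the pathway is blocked
    return "mixed"               # a blockage on top of a cochlear loss

print(classify_loss(50, 48))   # sensorineural
print(classify_loss(40, 10))   # conductive
print(classify_loss(60, 35))   # mixed
```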
When testing kids, we look at doing behavioural observation
audiometry, or BOA, when kids are very young. Basically we are
using different toys or noise makers to try and see if a child reacts.
For most of you, you will know the hooter test, where they squeak
the hooter because we're trying to elicit a startle to loud sound.
As kids get bigger and they can sit up and have head control, we
train them to turn towards a sound source and reward them with
something visual like a puppet. So VROA, or visual reinforcement
orientation audiometry, is also known as the puppet test, and really
we are just training kids to look to the sound source, which is
generally a loud speaker, and see the puppet when they hear, and
that helps us sort out frequency specific and intensity specific
information.
We can also put a set of headphones on and still use the puppet to
reward them.
As kids get older, around five, sometimes four, sometimes three
depending on the child, we can teach them what we call play
audiometry. That's the pegboard game, or the model …. So they
hold the toy and they listen and when they hear the sound they put
the peg in and are rewarded for hearing.
We do this with headphones as well as the bone conductor. As they
get older, six, seven, eight, we pop them in the chair like we would
an adult or teenager. We give them a hand-held button, ask them to
press when they hear a sound or ask them to raise their hand when
they hear a sound. Again, we do that with the headphones and the
bone conductor to try to ascertain where the hearing loss is
occurring.
All of this information results in the audiogram, which then the
audiologist describes as either normal, mild, moderate, moderate-severe, severe or profound.
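As a rough sketch, those severity labels can be expressed as threshold ranges. The exact cut-off values vary a little between clinics and classification schemes; the ones below are commonly used:

```python
# Rough sketch: map an average threshold (dB HL) to the descriptive label
# used on an audiogram. Cut-offs vary between clinics; these are common ones.
def describe_hearing(threshold_db: float) -> str:
    if threshold_db <= 20:
        return "normal"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 55:
        return "moderate"
    if threshold_db <= 70:
        return "moderate-severe"
    if threshold_db <= 90:
        return "severe"
    return "profound"

print(describe_hearing(35))   # mild
print(describe_hearing(95))   # profound
```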
We have a whole bunch of tests that we do that are looking at that
point essentially of detection: Can you hear sound or not?
What that doesn't tell us really is how well you hear sound. So,
what are you doing with the sound that you can hear? Or if some of
these tests are done with your hearing aids on, how are you
processing, how are you perceiving sound with those hearing aids?
That's when we move to what we call speech perception testing.
Depending on the age of the child, we look at assessing this in
different ways. For young kids often parent or teacher
questionnaires, observation of their speech development and
babble, and response to different sounds usually in a formalised
listening or habilitation session.
For older children we actually ask them to repeat sentences and
words and they will let us know how clear they have heard
something based on their repetition. Obviously if there is specific
speech errors we need to take this into account. So, the older a
child is, the easier it is to do formal speech perception testing.
So, what we really need to know is: how do we take all of this
access -- so what sounds can your child hear -- and actually really
understand what are children doing with the sounds that they
perceive and detect?
We know children learn language rapidly and effortlessly from
babbling to full sentences within the first three years of life. That
acquisition appears quite simple for children with normal hearing.
Because we take 40 distinct elements, or phonemes, sounds of
speech, and build these units into categories, which become words.
So in English, we have 40 different phonemes, 40 different sounds
and we create nearly half a million different words from those.
So, central to learning is categorical perception, which is focused
on the discrimination of acoustic sound. Unlike adults,
infants can discriminate between all phonetic units. We are born
with a propensity to hear in our native language. Because for
children your organ of hearing, or your cochlea, is fully formed,
adult size, at three months gestation. So, although the baby is
growing inside, they are beginning to perceive sound from sort of
three months gestation onwards. So they are born with a leaning
towards hearing their own native language but have the ability to
perceive all different sounds. That's where often learning multiple
languages when you are young is a lot easier. We know infants can
discriminate tiny changes at birth and that that is essential to
helping us learn language.
So, we know that hearing loss does impact this innate ability. It
makes things intrinsically harder, makes it harder to perform
discrimination, it's harder to hear. We lose the automaticity of that
hearing response, because we have the presence of a hearing loss.
It becomes a conscious task instead of unconscious. We have to
teach discrimination to children and help them to hear the
differences.
And it reduces the ability to overhear. So, we know we learn
language just by being immersed in a surrounding of sound. And
the presence of a hearing loss reduces our ability to overhear and
learn naturally. So, it reduces our implicit learning as well. It
reduces the redundancy of the information we hear. We are not
hearing as much or as often.
So, this then impacts on cognitive capacity, working memory,
executive function, memory skills, changes our sensory pathways.
This may then result later on in psychosocial implications,
self-esteem, social interaction, connection, pragmatics.
So, the presence of a hearing loss, if left unsupported, can have
significant knock-on effects just beyond the presence of a sensory
loss.
So, we need to I guess identify what access people have, how that
relates to the audiometric thresholds. In this audiogram we have a
range of normal hearing, which is between 0 and 20 dB. Speech on
average sits around 50 to 65 dB and it's quite dynamic in nature. So,
if we have the presence of a hearing loss of a severe nature, say
65 dB down here, normal speech is outside of your range of access,
so you need amplification to increase it so it falls within your
audible range. But then we need to consider what does that access
with the amplification look like?
Because it's audible, does that make it usable? Those differences
in speech which are soft to loud become relevant and salient to our
language learning, our phoneme acquisition and words and so on.
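The arithmetic behind that point can be sketched: with average speech sitting at roughly 50 to 65 dB, a 65 dB threshold leaves speech inaudible until amplification lifts it above the threshold. The 30 dB gain figure below is purely illustrative, not a real fitting target:

```python
# Toy sketch of audibility: which part of the average speech range (about
# 50-65 dB) lands above a listener's threshold once hearing-aid gain is added.
# The 30 dB gain used below is purely illustrative, not a fitting prescription.
def audible_speech_range(threshold_db, gain_db=0.0, speech_lo=50.0, speech_hi=65.0):
    lo = max(speech_lo + gain_db, threshold_db)
    hi = speech_hi + gain_db
    return (lo, hi) if hi > lo else None

print(audible_speech_range(65))              # unaided severe loss: speech sits below threshold
print(audible_speech_range(65, gain_db=30))  # amplified speech is now partly audible
```

Note that audibility in this sense is only the starting point; as the talk stresses, audible does not automatically mean usable.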
So, we need to look at access to speech and then its knock-on to
understanding, because speech is quite dynamic. When children
are very young we have to go on objective information, and we
need feedback from teachers and parents, habilitationists, speech
therapists, all of the individuals who work closely with the children
to understand that relationship between audiometric thresholds and
actual performance, how they are using hearing and accessing
sound.
If your access to sound is deemed to be insufficient, so you don't
have enough access to the full range of speech to support language
acquisition and spoken words with clarity, it may be decided that a
cochlear implant referral is the most appropriate next step for you or
your child.
So, what is a cochlear implant? Essentially--actually, before I go on
to that, are there any questions that have popped up, Jacqui, and I
will stop and take a breath?
JACQUI: Not as yet. Does anybody want to type any questions into
the chat? No, it doesn't look like it at this stage, Julie.
JULIE: Okay, great. So, a cochlear implant essentially has two
components. It has an internal implant that is placed under the skin
behind the ear, in the bone. And an external sound processor that
is worn on the outside that picks up sound information. So, the
sound processor, microphone, will pick up sound and look at that
sound that's come in. It will look at the frequency content and the
loudness information that is contained in that sound.
It will then determine what electrical stimulation needs to be given
to represent that signal. So, we're taking a really big acoustic signal
and translating it to information that can be processed through a
digital code across 22 electrodes. So, it sends that information to
the coil, which sits with a magnet in it over the internal implant.
They are connected by a magnet. So, this is a transmitter and in
here is a receiver. So it's a little mini radio, FM signal. Sends that
information from the acoustic world across to the implant. The
electronics package of the implant decodes that signal. In the
external part here, there's a battery. So, the amount of power
needed to stimulate is carried over in the coil. There is no internal
power source in the implant; it gets all of its power from the external
device.
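The frequency-and-loudness analysis described above can be illustrated with a toy sketch. This is not Cochlear's actual coding strategy; it just shows the idea of splitting incoming sound into 22 frequency bands and taking one level per band. The 200 Hz and 8 kHz band limits are assumptions for the illustration:

```python
import numpy as np

def simple_channel_levels(signal, fs, n_channels=22, fmin=200.0, fmax=8000.0):
    """Toy sketch: split a sound into frequency bands and return one loudness
    value per band -- the kind of frequency/loudness analysis described above.
    This is NOT Cochlear's actual coding strategy, just an illustration."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # logarithmically spaced band edges, roughly matching the ear's layout
    edges = np.logspace(np.log10(fmin), np.log10(fmax), n_channels + 1)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        levels.append(float(band.mean()) if band.size else 0.0)
    return levels

# a pure 1 kHz tone should put most of its energy into one channel
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
levels = simple_channel_levels(tone, fs)
print(np.argmax(levels))  # index of the channel containing 1 kHz
```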
So, it decodes the frequency information and the power information.
Sends that information down to the electrode array. And the one
amazing thing about the hearing system, as complicated as it is, it's
very organised. It's organised in what we call a tonotopic way; it's
organised in frequency. That organisation of frequency is carried
down the hearing nerve, through the brainstem and even up to the
level of the brain.
Graeme Clark, when he was inventing the implant, realised this and
created an electrode array that goes from the deepest end of the
cochlea, where the low tones sit, around to the base of the cochlea
where the high frequencies sit.
He aligned the electrode contacts to where the nerve endings are.
There is a really good neural to frequency alignment within the
cochlea. And because of that, we can use the places where the
electrode sits in the cochlea to represent different frequencies. That
information then gets decoded at the level of the electrode array.
The electrical impulses and the intensity of those impulses
stimulate the hearing nerve, which sends the signal on up through
the brainstem to the brain where we perceive it as sound.
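The place-to-frequency organisation she describes is often approximated by the Greenwood function; a small sketch, using the commonly cited human constants (an approximation from the literature, not Cochlear's actual map):

```python
# Greenwood's place-frequency approximation for the human cochlea:
# x is the fractional distance along the cochlea from the apex (0, the
# deepest end) to the base (1). Constants A=165.4, a=2.1, k=0.88 are the
# commonly cited human values -- an approximation, not an exact map.
def greenwood_frequency(x: float) -> float:
    return 165.4 * (10 ** (2.1 * x) - 0.88)

print(round(greenwood_frequency(0.0)))  # apex: low frequencies (tens of Hz)
print(round(greenwood_frequency(1.0)))  # base: high frequencies (about 20 kHz)
```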
So, if you have heard sound before, you start to take your
hearing memory and line it up with that stimulation. If you have not
had hearing before, if you were born profoundly deaf and you're
hearing for the first time at six months, you start to create your own
hearing memory through that stimulation.
So, when you go to the implant centre to have your
assessment -- and it may be a centre, it may be part of your early
intervention service or part of a hospital -- you will meet a group of
people. One of those will be the surgeon. They are responsible for
giving you advice, counselling and medical assessment.
You will meet audiologists. These audiologists may be the same
audiologists who did your hearing loss diagnosis or it may be a
different group of audiologists. They again are responsible
for giving you advice, counselling and doing some audiological
assessments on top of maybe the information that you have come
with.
You should meet a habilitationist. They usually have speech
pathology, listening and spoken language backgrounds, teacher of
the Deaf backgrounds, someone who is skilled up in listening and
language learning for kids or for adults in some instances. They are
responsible again for giving you advice, counselling and also those
functional listening assessments. So, how are you using the
hearing that you have and does that match up with the information
that we have?
There may be a social worker on the team. They again will give you
advice, counselling and family support. So, they often help you
navigate the systems of the hospital, the transportation system. And
also help manage the stress that the family are under, basically
working their way through this journey.
There may be other professionals that you meet like a psychologist,
an occupational therapist, a physio. You may meet other medical
doctors like cardiologists or neurologists--anyone who can add
information to that picture to determine whether an implant is the
right intervention for you or your child or also to make sure that
medically it's a safe procedure to undertake.
So, you will have a series of steps that you walk through in that
journey. There will be candidacy evaluation, surgery, activation, the
habilitation process and then maximising the outcomes from the
implant.
So, in that assessment phase you will have another audiogram or
specific electrophysiology testing--the testing where the child
doesn't need to cooperate, where we are measuring responses
from the level of the brainstem, the cochlea or the brain--if that
hasn't already been done in your diagnostic work-up.
Some centres will use electrocochleography, some will use steady
state evoked potentials and some will use tone burst ABR
dependent on the clinic. They will do tympanometry, check the
health status of the middle ear essentially and also do verification
that the hearing aids have been optimised. Australia is quite
unique in that, generally, the diagnostic process happens in a
hospital with paediatric facilities.
Then they are referred to the government organisation, Australian
Hearing, for hearing aid fitting. And then they are referred
sometimes to a new place for cochlear implant assessment.
Sometimes that new place is the paediatric hospital facility that they
started at. But not always.
So, there are a number of different people you meet along the way.
And ideally they are all kind of speaking with each other and
sharing information.
You will also have some medical tests done as determined by your
ear, nose and throat specialist. Generally they do MRI and CT
scanning for all children. Sometimes it’s one or the other just
dependent on access. Then there will be a general medical
assessment to ensure that they are safe for anaesthetic, because it
is a surgery that requires anaesthetic.
This process of assessment generally takes a few sessions with the
various professionals. At the end of the information gathering there
will be generally a group meeting case discussion where all of the
results and potential benefits from the implant will be discussed.
One of the things that has to happen along that journey is choosing
an implant system.
Cochlear's system is the Nucleus 6. We have a variety of electrode
options that the surgeon can choose, dependent on the shape of
the cochlea, dependent on perhaps sometimes their surgical
preference, whether they go for a curly array or a straight array.
Implant reliability is important. Not only looking at the reliability at
one year, but over time. We have the best implant reliability of all
manufacturers, and the largest number of recipients around the
world.
We also have very sophisticated sound processor technology.
We've been manufacturing implants for over 30 years. And they just
get better and better. So, the technology that is available to us as
hearing aid technology improves becomes incorporated into sound
processors. We have something called SmartSound IQ that
basically analyses the sound situation and automatically sets up the
processor to hear the best.
We also have wireless freedom, wireless accessories that allow us
connectivity to remote microphones, to phone interfaces—and all of
that is done without wires. We have a remote assistant which
allows communication in both directions between the processor
and the remote. If you have a
faulty cable or your battery is running flat, that gets reflected on
your remote. So as a parent you can be very confident about the
status of the equipment that your child is using.
We also have the ability to mix acoustic input through a hybrid
component. We also have an Aqua Plus accessory, which is a
waterproof sleeve so you can swim with your sound processor.
One of the things that is really important is access. So, we know
that on average children are only exposed to clear speech for about
half an hour a day. That is hardly enough to hear the 21,000 words
they need to hear before they start to develop speech and
language. So, it's really important for us that we manage noise. We
know that noise is much more distracting to a child than an adult.
And one of the most difficult things for a child is trying
to listen when there are other sound sources that are speech-like,
such as when there are other people babbling in the background. We
know that kids learn language from hearing it. They have a
greater need for understanding the speech around them but they
are less equipped to deal with it. They need to hear the speech
around them because they need to hear all of those words before
they can start to develop their own words.
Normal-hearing children listen for around nine months before they
start to form their first words. We know there is lots and lots of
listening that happens before language emerges.
So, we need to set up our processor so they can manage speech
against any competing background noise. And that process of
listening in noise is not just difficult for very young children, it's
difficult for toddlers and pre-schoolers, and it's really not until kids
are teenagers or adolescents that that skill to suppress background
noise and focus in on speech begins to develop.
We know that kids are not little adults, and they do need a better
signal to noise ratio to hear. There was a study done looking at
phoneme discrimination, looking at adults here versus kids. The
difference in the signal to noise ratio--so that is how much louder
does speech need to be than background noise for it to be clear--about 30 dB. So, 30 dB is essentially the difference between a soft
voice and a loud voice.
We know that they need a much cleaner signal. When you are
looking at listening in noise, we know that ages—this is five to
seven, eight to 12, 13 to 16 and adult. So, teenagers and adults
function pretty similarly and can have noise greater than the speech.
However, young children need speech to be louder than the noise
signal for them to understand. So, we know that we need a really
good what we call signal to noise ratio.
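Signal-to-noise ratio itself is simple arithmetic in decibels; a minimal sketch (the example levels are illustrative, not measured values):

```python
# SNR in dB is just the speech level minus the noise level (both in dB).
def snr_db(speech_db: float, noise_db: float) -> float:
    return speech_db - noise_db

# Adults can often cope with noise at or above the speech level (SNR <= 0),
# while young children need speech well above the noise (positive SNR).
print(snr_db(65, 60))  # +5 dB: speech 5 dB above the noise
print(snr_db(60, 65))  # -5 dB: noise louder than the speech
```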
One of the challenges of all of that is that the world is an extremely
noisy place, there are lots of different sounds around. So, we have
created a sound processor that manages all of that automatically.
We do that with what we call SCAN, which is a scene classifier. It
looks at the sound coming in and makes a judgment of whether it's
quiet out there, whether it's noisy, whether the noise is just
background noise from, let's say, construction or whether someone
is trying to talk over the noise, whether there is music or whether
there is wind. What it does is it picks the settings on the processor
to match that environment that it's identified. This gives the best
signal to noise ratio.
When we look at processors that we have had in the past--this is
our Nucleus 5, which had an everyday setting. So it had a little bit of
noise management but not the sophistication that we have in
SCAN--that we get a 27 per cent improvement of listening to
sentences in noise when we implement that automated scene
classifier, that SCAN functionality. So, we are getting the best
listening we can in noise and we know that that is so important for
kids.
So, the next step, once you have determined what implant the
surgeon is going to use, is the actual surgery. Generally, the
implant is placed under the skin behind the ear. The surgeon
makes quite a small incision, usually directly behind the ear. And
then they open up the space and they open up access into the
cochlea. And thread in that electrode array.
So, the electrode array is quite thin. And it has a very soft tip on it to
facilitate smooth insertion. We want to make sure that there isn't
added trauma into the cochlea when we are putting in implants,
because we don't know what the future will hold. So, the design
principles around the implant are minimal footprint and really
implement soft surgery techniques, so we can maintain the
structures within the cochlea. Because it's highly likely in children's
future that hair cell regeneration or gene therapy or stem cell
therapy may restore some aspects of hearing. We need to stimulate
the hearing pathway so there will be something to connect to, which
is why they need an implant if they have a significant hearing loss.
But it's not going to stop them from having those therapies in the
future. That's often a question people ask, "If I have this now does
that stop me from having hair cell regeneration in the future?" From
all of the surgeons I have spoken with and researchers, there is no
reason to think that that is the case.
Surgeons generally treat it as a low-risk, routine medical procedure.
There is obviously an assessment for fitness for the surgery. The
surgery is generally an hour to three hours long depending on
whether you have one implant or two. It's done under general
anaesthetic. They don't shave hair anymore. That was the thing
they used to do a long, long time ago. Most people only have a
single night stay in hospital.
Are there any questions people want to ask about the surgery or
about that assessment period?
JACQUI: None have come through as yet, Julie.
JULIE: I will keep going then. After your surgery you have activation of the sound processor. That activation can happen anywhere from the day you go home from hospital, in some instances, to one or two weeks after surgery, depending on your clinic and your surgeon's recommendation.
Generally, recipients will continue to wear a hearing aid in the ear that didn't have the implant surgery, so they continue to have some access to sound. At that switch-on, the implant and its 22 electrodes get activated for the first time.
Almost universally in Australia, during the surgery there will be a test of the implant and the nerve's response to it. Going back to what happened before the implant, in that diagnostic time where they were looking at objective measures of the hearing system, we do similar objective measures, but of the implant stimulation.
The surgeon has a hand-held remote control that measures a neural response to the stimulation. That information--how much stimulation from the implant the ear needs to get a neural response--is passed from the surgeon to the clinic, and they have it available for your activation. So, generally they will turn the implant on based on those responses from the hearing neural system, turn it on very softly, and gradually make it louder over time. All of the electrodes are turned on on that first occasion, and the settings are a starting point. It's an early access point, not where your levels end up as your hearing system gets used to hearing, but there is generally sufficient access to hear spoken voice. What adults often say is that over the first week or two it goes a little bit soft, because their system is getting used to it and they need more stimulation.
So, part of knowing how you are responding initially is that habilitation phase. As the implant gets activated, the habilitationist--the listening and spoken language therapist or your teacher--will run through some training, and language and listening exposure very similar to what you had before the implant, but starting again at those very early steps. They may use a variety of resources that we have available to help them, and to also share with you so you can do work at home.
Over that first three months, the therapy and then programming or
review mapping, some people call it, continues to adjust those
electrical levels to the individual hearing nerve’s needs. As progress
is made with speech and language and listening, eventually those
levels plateau and they stay relatively stable and then there is a
less frequent review. We know that generally there is a rapid
change in the first three to six months and then a pretty stable
access to sound that happens after that.
So, what does that mean? Well, that means we now have access to information, and we are hoping that children start building that matrix of knowledge and understanding and language development and speech development. Generally, with a Cochlear implant you have access to soft speech, medium-loud speech, loud speech--the whole gamut. So, you are not missing part of the signal, in a quiet environment in particular, if you are across the dinner table or something like that.
So, we know listening is really important: we listen a book a day, we speak a book a week, we read a book a month and we write a book a year. So, we listen much more than we talk, write or read. Listening is important, and having that access to sound is important, which is one of the reasons it's important to get in and provide implants early. If your child is at school or even preschool--or riding in the car with you--we have extra technology to continue to support access, in the form of our Mini Mic 2+, a remote microphone that the teacher or parent wears. It streams wirelessly to the processor, so you can have an optimal signal-to-noise ratio at all times.
There is also a function where you can put it on a table and it goes
into like--what do you call it when you have a meeting? A meeting
mode. There you go.
So, we do know that when we look at listening alone in quiet, even having the voice at this distance versus this distance--I hope you can see that--improves access, in quiet and as noise gets louder and louder. At this level here, in 65 dB of noise, which is around the average early primary or preschool room noise level, we get speech perception outcomes that are actually similar to children with near-normal hearing. The use of these assistive devices continues to improve access, including access over distance.
So, generally as distance increases our access decreases, sound
drops off. However, with wireless technology that access to sound
remains steady.
One of the key things we know in research as well is that children
with profound deafness experience better outcomes when they are
implanted with a Cochlear implant as early as 12 months of age.
So, a group down in Melbourne, the University of Melbourne, the
Hearing CRC, and a number of their hearing partners conducted a
study looking at long-term communication outcomes for children
receiving Cochlear implants younger than 12 months.
So, they grouped their kids into five groups. Before 12 months is
group 1. Group 2 is between 13 and 18 months. Group 3 is 19 to 24
months. Group 4 is 25 to 42 months.
So, that is two to three and a half. And then they looked at three
and a half up to 72 months, which is six years of age. So, we have
very young to prep or kindy depending on what state you live in.
And they looked at speech perception: what is the access to sound, and how are kids performing on open-set tests where you are just asked to repeat what you hear?
They measure it in terms of phonemes, the correct number of sounds repeated. If the word is 'cat' and the child says 'cat', they get three out of three. If the child were to say 'ca', with no T on the end, they get two out of three. So the score is out of 100 per cent. That's what this here is.
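As a rough illustration of that scoring, here is a minimal Python sketch. It is a simplification: clinical scoring works from phonetic transcriptions aligned by a clinician, whereas this toy version just compares phoneme lists position by position, and the function name is made up for illustration.

```python
def phoneme_score(target, response):
    """Fraction of target phonemes repeated correctly.

    Toy version of clinical phoneme scoring: phonemes are compared
    position by position against the target word's phoneme list.
    """
    correct = sum(1 for t, r in zip(target, response) if t == r)
    return correct / len(target)

# 'cat' = /k ae t/; the child says 'ca' = /k ae/, so 2 of 3 phonemes
print(round(phoneme_score(["k", "ae", "t"], ["k", "ae"]) * 100))  # prints 67
```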
For these young kids--groups 1 and 2--the phonemes are around 85 per cent correct. As the groups get older, they are sitting just under 70 per cent. So, still a huge improvement on the pre-operative state, but better listening skills for younger children.
The same again if you look at it in terms of words. They are understanding about 60 per cent of words when they are implanted before 18 months, versus being implanted older, where it drops down to 45 and then 38 per cent.
Then when you look at sentences--so that is generally the ‘clown
had a funny face’, and it's scored by the number of words correct.
You need language to be able to pull together the missing
components and fill in the gaps.
If you were implanted younger, again, your outcomes on these tests
are better than if implanted later. We know that has to do with the
plasticity of the hearing system. By the time children are four, if you are not using your hearing system, other factors start to take over components of it and you are not as flexible.
By the age of 7, any flexibility in the system begins to shut down, and it's much more difficult to stimulate the auditory pathway as you get older if you have had a total, severe to profound hearing loss from the start. If you have had hearing and your hearing has changed, those pathways have been stimulated, and that window is very different to if you had been born with a severe to profound hearing loss.
How does that translate? This is open-set speech perception again: we are getting better results the younger the age at implant. They also looked at language outcomes, using the Peabody Picture Vocabulary Test with kids that were school age. This dot-dashed line here and here, and this solid line in the middle, mark the normal range.
The normal range runs from one standard deviation below the mean to one above. So, this is where about 68 per cent of kids generally fall. They are looking at where kids fall based on the age at implant.
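As a side note on the figure being described: a band of one standard deviation either side of the mean covers about 68 per cent of a normally distributed population. That figure can be checked with nothing but Python's standard library:

```python
from math import erf, sqrt

# Proportion of a normal distribution lying within one standard
# deviation of the mean: erf(1 / sqrt(2)) ~= 0.6827
within_one_sd = erf(1 / sqrt(2))
print(f"{within_one_sd:.1%}")  # prints 68.3%
```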
The younger you are, the more kids are populating that normal range. Some of these older children are falling in the normal range, but not as many. A lot of that is due to the fact that you need experience to learn language; the amount of time you weren't learning language is greater, so your starting point is delayed.
So, the longer you wait, the more difficult it is to close the gap you
have because you haven't had access to sound.
Basically, they concluded that implants should be provided for children with significant hearing loss before 24 months of age to optimise speech perception, which is listening, and before 12 months of age to facilitate speech production accuracy and enable language acquisition.
So, younger is better. That is also supported by Teresa Ching's Longitudinal Outcomes of Children with Hearing Impairment, or LOCHI, study.
Do you have any questions about that before I go on to potential
benefits of bilateral implants? I only have three slides. And then
open it up completely for questions. I haven't seen anything pop up.
So I might press on and then we can regroup at the end, if we like.
One of the questions people often ask is: should my child have one implant or two? Essentially, that question is answered by the degree of hearing loss in both ears, and whether those losses are similar or dissimilar.
Basically, we know that when both ears are hearing, we can integrate the input from both, with sound presented to and perceived in both ears. That binaural hearing can be achieved in different ways: two hearing aids, two Cochlear implants, a Cochlear implant and a hearing aid, or a hybrid and a hearing aid--there are a few different combinations to stimulate both ears.
So, the recommendation is to stimulate both ears, for a number of reasons. It gives you improved hearing in noise and improved hearing in quiet, and it reduces auditory deprivation, that amount of time where you are not being stimulated. It gives you ease of listening; you are less fatigued. You always get the better ear performing. You have more balanced sound and are more aware of sound and the environment. It gives you better opportunities for overhearing--if one side is having trouble, the other side can still hear. Interestingly, it also gives you increased opportunity for employment or mainstream school, and improved expressive and receptive language.
Objectively, it helps you localise: on the horizontal plane, as well as up-and-down and front-to-back differentiation. It gives you better perception of distance. It helps with what we call the head shadow effect: if you have noise on one side and speech on the other, and all of the noise is towards your good-hearing ear, that ear has to try harder to hear because your head is in the way. Hearing through both ears helps with that. You also get binaural squelch--with input from both ears, the brain can suppress the background noise--and you get more redundancy and a boost in loudness from hearing through both ears.
We do know if you have one implant, you can tell the side that the
sound is coming from. But it's quite a broad angle that you can hear
on. You can know it's from the left or the right. But if you add a
second implant, that window of localisation narrows significantly.
So, it's much easier to localise when you are hearing through two
implants versus one.
So, this is that head shadow effect. So, if you have noise over here
and speech over here, by adding hearing through both ears you
have always got one ear facing the speech. And so you actually are
going to get a more favourable signal to noise ratio when you are
listening in noise. And by having the two ears combined, you
actually are going to get a boost in your hearing in noise
performance compared to either ear on their own.
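The signal-to-noise advantage being described can be made concrete: SNR in decibels is 10 times the base-10 logarithm of the signal power divided by the noise power. A small sketch, with made-up powers purely for illustration:

```python
from math import log10

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels."""
    return 10 * log10(signal_power / noise_power)

# Illustration: if the head shadow halves the noise power reaching the
# ear nearer the speech, the SNR at that ear improves by about 3 dB.
baseline = snr_db(1.0, 1.0)           # 0.0 dB: speech and noise equally strong
shadowed = snr_db(1.0, 0.5)           # ~3.0 dB: noise power halved
print(round(shadowed - baseline, 1))  # prints 3.0
```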
Also, the University of Melbourne has conducted studies on educational outcomes, parental stress, language and everyday hearing situations. Benefits were seen for children with bilateral implants in all of those areas compared to an age-matched cohort with unilateral implants. So, we do know that not only do you hear better from a hearing point of view, but it also has effects on educational and language outcomes.
So that brings me to the benefits of bimodal hearing, which
basically are increased speech intelligibility, binaural redundancy,
improved localisation and improved functional performance.
What should you do next? If you are thinking about an implant, by all means speak with your RIDBC habilitationist or teacher, or if you are with a different early intervention group, speak to the professionals there. Feel free to contact our customer service if you just want an information pack. The contact details are there.
I will stop talking and take questions, if anyone has any. I have
talked for almost a whole hour, I apologise.
JACQUI: Thanks, Julie, for a really informative presentation. Are there any comments or questions for Julie? If you don't have any questions, type "no" into the chat box, that would be good. (pause) Julie, we --
JULIE: Still awake?
JACQUI: We have a question from Elizabeth saying you have given a well-spoken presentation. It clarified quite a lot for her. So that was good, and she says thank you.
JULIE: Thank you.