Chapter Thirteen
Multichannel Nonlinear Frequency Compression:
A New Technology for Children with Hearing Loss
Susan Scollie, Danielle Glista, Marlene Bagatto, Richard Seewald
Over recent years, evidence has been mounting
that modern hearing aids may not provide enough audibility of the high frequency energy in speech for children while they are developing speech and language.
Specifically, studies have shown both that children’s
ability to understand and use high frequency speech
sounds is affected (Stelmachowicz, Pittman, Hoover,
Lewis and Moeller 2004) and that the ability to produce
high frequency speech sounds is significantly delayed
compared to other sounds that are lower in frequency
(Moeller et al. 2007a, 2007b). These unwanted outcomes
occur despite early and high quality intervention programs. The question, then, is why do they occur? Scientific data clearly show that certain sounds of speech,
such as the phonemes /s/ and /ʃ/, have significant energy well above 4000 Hz, with peak energies in the range
of 4500 to above 8000 Hz (Boothroyd and Medwetsky
1992; Pittman, Stelmachowicz, Lewis and Hoover 2003)
depending on the age and gender of the talker. Specifically, the speech sounds produced by women and children are likely to occupy the higher frequency portion
of this range. It has been speculated that this frequency
range, which is well beyond the electroacoustic passband of today’s hearing aids, is likely to be important for
speech and language acquisition (Stelmachowicz et al.
2004). Infants and young children typically spend the
bulk of their time with either adult women or other children and are therefore routinely exposed to the higher-
frequency versions of high frequency phonemes during
the critical infant, toddler, and preschool years of speech
and language acquisition. Two realities collide in our attempts to deliver these high frequency stimuli to the
children whom we serve: (1) hearing loss is often greatest in the highest frequencies (Pittman and Stelmachowicz 2003); and (2) hearing aid gain is least in the highest
frequencies, which can also be understood as frequencies above the passband of the device. Therefore, we
likely cannot provide enough audibility above about
4000 Hz to support consistent acquisition of the high frequency speech sounds by the majority of children who
have permanent hearing losses.
These problems explain, in part, both recent and
not-so-recent interest in signal processing strategies
that provide frequency lowering. Early attempts at frequency lowering have been reported in the literature for
decades, some with significant reported benefits (Johansson 1961). In recent years, commercially available
hearing aids have made a variety of frequency lowering
strategies an option for clinical use. These include time-varying whole-band frequency transposition (e.g., the
AVR Sonovation Transonic), time-varying frequency
transposition of a passband centered on a high frequency peak (e.g., the Widex Inteo), time-varying proportional frequency compression (e.g., the AVR Sonovation Nano) and high-frequency nonlinear frequency
compression (e.g., the Phonak Naida). All of these technologies provide frequency lowering. They differ from
one another along four dimensions:
(1) whether the frequency lowering processing is always on or is time-varying;
(2) whether the frequency lowering is achieved by shifting frequencies down (i.e., transposition) versus
narrowing the output bandwidth (i.e., compression);
(3) whether the shift is linear (i.e., proportional) or nonlinear in the frequency domain; and
(4) whether the lowering is done for all frequencies or is restricted to the high frequencies.

Address correspondence to: Susan D. Scollie, Ph.D., National Centre for Audiology, Elborn College, School of Communication Sciences and Disorders, Faculty of Health Sciences, University of Western Ontario, London, Ontario, Canada, N6G 1H1, [email protected]

A Sound Foundation Through Early Amplification
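The transposition-versus-compression distinction (dimension 2) can be sketched numerically. The sketch below is illustrative only: the function names, default values, and the log-domain form of the compression curve are assumptions, since commercial implementations are proprietary.

```python
def transpose(f_in, shift_hz=2000.0, cutoff_hz=3000.0):
    """Frequency transposition: energy above the cutoff is shifted
    down by a fixed number of Hz; frequencies below pass unchanged."""
    return f_in if f_in <= cutoff_hz else f_in - shift_hz

def compress(f_in, cutoff_hz=2900.0, ratio=4.0):
    """Nonlinear frequency compression: frequencies above the cutoff
    are squeezed toward it (here, compression in the log-frequency
    domain); frequencies below the cutoff pass unchanged."""
    return f_in if f_in <= cutoff_hz else cutoff_hz * (f_in / cutoff_hz) ** (1.0 / ratio)

# A 6000 Hz input is lowered by both schemes, but only compression
# keeps the output above the cutoff and preserves frequency ordering.
print(transpose(6000.0))         # 4000.0
print(round(compress(6000.0)))   # ≈ 3478
```

Note that compression narrows the output bandwidth rather than sliding a band downward, which is why it cannot relocate high-frequency energy on top of unrelated low-frequency speech content.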
At the time of this writing, no studies exist that compare these various strategies to one another. However,
some evidence exists to support the use of some forms
of frequency lowering, either with some hearing loss
profiles or with certain populations. Studies with early
frequency lowering products that used time-varying
transposition found that speech recognition enhancements were possible, and evaluated their effectiveness
with children and adults who had hearing losses (MacArdle et al. 2001; McDermott and Knight 2001; McDermott, Voula, Dean and Ching 1999; Miller-Hansen, Nelson, Widen and Simon 2003; Parent, Chmiel and Jerger
1997). Of these, one study on a large sample of children
and young adults with hearing loss found that the Dynamic Speech Recoding algorithm improved the detection of speech sounds and improved speech recognition:
children with severe hearing losses showed the largest
improvements. Some studies reporting benefits also raise concerns with sound quality, consistency of
benefit, and/or mechanical robustness of some products used (Gifford, Dorman, Spahr and McKarns 2007;
Miller-Hansen et al. 2003). Recent studies have also examined the effects of an alternative time-varying high
frequency transposition device, as implemented in the
Widex Inteo, finding that a subset of adult participants
received subjective benefit (Kuk et al. 2006).
Our laboratory has recently worked with a prototype hearing aid that uses high frequency nonlinear frequency compression (NFC). This signal processor was
developed at the University of Melbourne, and an early
version of it has been evaluated on adults with moderate
to severe (Simpson, Hersbach and McDermott 2005)
and steeply sloping hearing losses (Simpson, Hersbach
and McDermott 2006). Results revealed that some, but
not all, adults derived significant objective benefit from
this processor for understanding high frequency speech
sounds, with greater benefits in the adults with moderate to severe losses than in the adults with steeply sloping losses. Unresolved questions with this scheme at
our study’s inception included the lack of a systematic
fitting and verification method, undetermined limits of
candidacy and unknown outcomes in the pediatric population. The sections below will describe the current status of our work to increase our knowledge base in some
of these areas.
Developing a Fitting Rationale for Frequency Lowering
Our goal in conducting a clinical field trial of the NFC
processor was to determine if it would help us to overcome the bandwidth-related limitations in pediatric hearing aid fitting described above. Clearly, this required a
clinical field trial. However, because the candidacy limits
and fitting method for the technology had not yet been
determined, several up-front decisions had to be made.
First, we decided to recruit both children and adults in
order to see whether adult-child differences would
emerge with this technology in ways similar to those already demonstrated for adult-child differences due to
bandwidth (for a review, see Stelmachowicz et al. 2004).
Second, we decided to recruit participants with a wide
range of hearing losses, rather than restricting our sample to children with severe or precipitous hearing losses.
This was done because the effects of bandwidth had
been clearly demonstrated even for children with moderate hearing losses, and the candidacy limits of the NFC
processor were unknown. Third, we constructed a set of
fitting goals for frequency lowering, so that we could develop a fitting method for use in our clinical field trial.
The fitting goals specified that an individualized NFC fitting should:
(1) provide more audibility for high frequencies than is available with a non-NFC fitting;
(2) not cause confusion of the phonemes /ʃ/ and /s/;
(3) preserve normal formant relationships as much as possible; and
(4) maintain high sound quality, as perceived by the wearer.
These goals formed the basis for three recent studies. First, they formed the basis of the individualized fittings that we provided to both adults and to children in
our clinical trial (Glista, Scollie, Bagatto, Seewald and
Johnson submitted). Second, they were evaluated in a
blinded study of consistency among the fitters in our
clinical field trial with a series of custom-developed electroacoustic indices corresponding to each fitting goal
(Scollie et al. submitted). A third study conducted a systematic evaluation of the effects of NFC settings on
sound quality ratings by adults and children (Parsa et al.
submitted). These studies used prototype hearing aids
that were based on the Savia 311 or 411 hearing aid, with
modifications to allow the use of NFC processing via
custom programming software.
The following case study provides a brief illustration
of how we applied these fitting goals in our studies. This
case study was developed using NFC processing as implemented in the Phonak Naida UP hearing aid, which
is called “SoundRecover”. Figure 1 shows an electroacoustic verification of a hearing aid fitting that uses
conventional multichannel amplitude compression, but
does not use the NFC SoundRecover processor. This fitting has been done without SoundRecover, so that we
can observe what speech audibility would be available
without any form of frequency lowering. This type of fitting is representative of a well-fitted, conventional level
of technology by today’s standards.
Figure 1 uses the DSL Method’s SPLogram format
(Seewald, Moodie, Scollie and Bagatto 2005), which includes the listener’s thresholds, converted to the dB
SPL scale, predictions of the listener’s upper limits of
comfort and targets from the DSL v5.0 algorithm for
conversational speech at an input level of 65 dB SPL
(Scollie et al. 2005). Together, these three variables describe the listener’s auditory area (the range between
the thresholds and the upper limit of comfort), and the
target level within that range intended for aided speech.
The DSL SPLogram is then overlaid with the electroacoustic verification of the hearing aid. In this test
case, a simulated real ear measurement approach has
been taken in which the measured real ear to coupler
difference (RECD) is used together with other variables
to predict the Real Ear Aided Responses (REAR) for
both tones at 90 dB SPL (to evaluate high-level output)
and speech at 65 dB SPL (to evaluate mid-level speech
output).

Figure 1: Sample fitting for a severe-to-profound hearing loss, using the DSL Method. This hearing aid fitting does not employ frequency compression signal processing.

Figure 2: The same fitting as shown in figure 1, evaluated using live voice productions of the phonemes /s/ and /ʃ/. The SoundRecover processor is disabled.

The 90 dB tone sweep should be matched to the “+” targets and limited to the “∗” targets, while the latter should have the center line of the crossed area matched to the targets. In the test case shown, the hearing aid
is able to provide a good match to the speech targets up
to and including 4000 Hz. Also, the hearing aid maintains at least 10 dB SL for the aided peaks of speech to
about 5000 Hz. Above this frequency, the audibility is
limited, and this is likely attributable to the passband
limits of the hearing instrument, and therefore is not
resolvable via fine tuning.
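The simulated real-ear arithmetic described above can be sketched as follows. This is a schematic illustration with invented example values, not the Verifit's actual computation (which also applies microphone and transducer corrections):

```python
# Simulated real-ear verification: predict the real ear aided response
# (REAR) from a coupler measurement plus the individually measured
# real-ear-to-coupler difference (RECD), then express audibility as
# sensation level (SL) relative to thresholds converted to dB SPL.

def predict_rear(coupler_spl, recd):
    """Predicted REAR (dB SPL at the eardrum) = coupler output + RECD."""
    return {f: coupler_spl[f] + recd[f] for f in coupler_spl}

def sensation_level(rear, threshold_spl):
    """SL = aided level minus threshold; positive values are audible."""
    return {f: rear[f] - threshold_spl[f] for f in rear}

# Invented example values (dB) for a sloping high-frequency loss:
coupler = {1000: 80.0, 2000: 82.0, 4000: 78.0}
recd = {1000: 5.0, 2000: 7.0, 4000: 10.0}
threshold = {1000: 70.0, 2000: 78.0, 4000: 90.0}

rear = predict_rear(coupler, recd)     # {1000: 85.0, 2000: 89.0, 4000: 88.0}
sl = sensation_level(rear, threshold)  # {1000: 15.0, 2000: 11.0, 4000: -2.0}
```

In this made-up example, aided speech sits comfortably above threshold through 2000 Hz but falls below threshold at 4000 Hz, the same pattern the fitting above shows at the edge of the device passband.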
The consequences of the limited high frequency audibility in this fitting are more clearly illustrated by the
display shown in figure 2. In these measures, a female
talker has presented a sustained live voice utterance of
the phoneme /ʃ/, and the aided spectrum of this isolated phoneme has been measured. The process was then
repeated for the phoneme /s/. The audibility of each of
these two high frequency phonemes can be approximately assessed from these measurements, bearing in
mind that live voice presentation is not calibrated and
therefore does not ensure consistency of level. Care was
taken during these measures to use a moderate conversational production level.¹ Assuming that appropriate production levels were used, these measures indicate that
while the frication noise of the /ʃ/ phoneme is audible,
the frication noise of the /s/ phoneme is not. In fact, the
/s/ frication noise is difficult to discern on this plot, as no
clear spectral peak is present. This is likely because the
peak associated with /s/ is located in a higher frequency
region, such as 6000 to 9000 Hz (Pittman et al. 2003), and
is being rolled off by the shape of the hearing aid’s response (Stelmachowicz, Pittman, Hoover and Lewis
2002; Stelmachowicz et al. 2004). Therefore, improvement of /s/ audibility is certainly a fitting goal if frequency lowering processing is used.
It is important, when using this fitting method, to remember that audibility of frication noise is not the only
possible cue for the recognition of /s/ and /ʃ/, nor is it
the only cue for the discrimination between /s/ and /ʃ/.
It is well understood that speech contains multiple redundant cues. For /s/ and /ʃ/, some of these cues include the transitions of vowel formants into and out of
the voiceless region in time during which /s/ and /ʃ/
are being produced. These formant transition cues are
typically derived from the second formant and higher,
and therefore occupy the mid to high frequency range
of 2500 Hz through 5000 Hz (Stevens 1998). However,
research has also shown that children with hearing loss
make use of the frication energy itself when recognizing
the /s/ and /ʃ/ phonemes (Pittman and Stelmachowicz
2000). This has important implications for the use of frequency lowering technology such as NFC/SoundRecover. Provision of audible frication cues is likely a practice that can help to support the recognition of certain
high frequency speech sounds for children with hearing
impairment. However, there are many hearing losses
that may in fact limit audibility of even lower-frequency
formant transition cues in addition to the spectral cues
of frication. For these users, if the application of NFC can
restore or improve audibility of mid-frequency formant
transition energy, speech recognition may be better supported. The savvy clinician is therefore reminded of the
importance of incorporating knowledge of speech
acoustics and speech perception when increasing audibility through the use of frequency-lowering processing strategies such as SoundRecover.
Figure 3 illustrates the changes made to the /s/ and /ʃ/ phonemes when the NFC processor was activated at pediatric default settings of 2900 Hz and 4:1. These settings indicate that the band from 2900 Hz to the response limit of the hearing aid will be compressed into one-quarter of its original width. This processing lowers the /ʃ/ frication noise in frequency (compare figures 2 and 3), but does not change its audibility because it was audible without NFC. The processing also lowers the frequency of the /s/ frication noise, which now has a clear and audible peak that was not present without NFC (again, compare figures 2 and 3). Because the /s/ sound receives more frequency lowering, figure 3 shows some frequency overlap between the two sounds. In extreme settings, this can cause the listener some difficulty in discriminating between the two sounds.

Figure 3: The same fitting as shown in figure 1, evaluated using live voice productions of the phonemes /s/ and /ʃ/. The SoundRecover processor is enabled.

¹ Obviously, a calibrated, pre-recorded test signal and approach for this test would be preferable, but this was not possible at the time of testing.

In this case, a listening check indicated that the two sounds remained clearly distinguishable to the fitter, even though they do sound somewhat altered by the use of the SoundRecover processor. The electroacoustic measurements indicate that complete overlap of the two spectra has been successfully avoided. Putting these observations together,
the fitter can suspect that both sounds may now be audible and discriminable from one another (i.e., that the fitting goals have been met). Further follow-up on this with
the end user is recommended to ensure that /s/ and /兰/
can be heard and discriminated. Electroacoustically,
complete overlap of the frication spectra of these two
phonemes is likely not advisable, especially if the two
phonemes are indistinguishable to the fitter (i.e., if
speech takes on a lisping characteristic). A second combination of verification and a listening check was
also made to assess the overall audibility (figure 4) and
sound quality provided for running speech when
SoundRecover was activated. Both listening checks and
electroacoustic measures indicate that the fitting goals of
audibility with high sound quality have been met.
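The pediatric default setting discussed above (2900 Hz cut-off, 4:1 ratio) can be illustrated numerically. The compression curve below is one plausible log-domain form, not the manufacturer's published formula, and the frication peak frequencies are rough values within the ranges cited earlier in the chapter:

```python
def nfc(f_in, cutoff_hz=2900.0, ratio=4.0):
    """Nonlinear frequency compression at the pediatric defaults
    discussed in the text; the exact curve shape is an assumption."""
    if f_in <= cutoff_hz:
        return f_in
    return cutoff_hz * (f_in / cutoff_hz) ** (1.0 / ratio)

sh_peak, s_peak = 4500.0, 7000.0           # rough /ʃ/ and /s/ frication peaks
sh_out, s_out = nfc(sh_peak), nfc(s_peak)  # ≈ 3237 Hz and ≈ 3615 Hz

# Both peaks now fall within a conventional passband, and /s/ remains
# above /ʃ/, but their separation shrinks from 2500 Hz to roughly
# 400 Hz, which is why a listening check for /s/-/ʃ/ confusion is
# part of the fitting goals.
```

Under these assumptions, the two frication peaks stay in the correct order but move much closer together, mirroring the partial spectral overlap visible in figure 3.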
Some Thoughts on Verification and Fitting
The case example of a fitting using nonlinear frequency compression shown in the previous section illustrates a few important points about measurement of high
frequency gain or output, and measurement of maximum output. Both of these warrant special consideration
with this, and perhaps with other, frequency lowering
technologies. First, a paradox emerges when measuring
a hearing instrument with active NFC: it looks as though
the hearing aid has less high frequency gain/output,
compared to measures made without NFC (figure 4, with
NFC versus figure 1, without NFC). This paradox occurs
because the speech energy that is present in the higher
frequencies (those greater than 2900 Hz in this case) has
been shifted to a lower frequency prior to output from
the hearing aid. Therefore, the apparent “cut” that is
shown in the 4000 Hz region, for example, is not in fact a
cut at all. Rather, the energy has been shifted downwards
in frequency and now likely exists within the passband of
the device. This is not directly portrayed by the verification screen itself, but instead must be conceptually overlaid by the clinician when interpreting the measurement.
It was this paradox that led us to routine measurements
of isolated phonemes as illustrated in figures 2 and 3, in
an effort to index more directly the frequency-lowering
properties of an individual’s fitting.
A similar phenomenon occurs during evaluation of
maximum output. Typically, this is done using high-level
sweeps of narrowband stimuli, as shown by the test labelled “MPO” in figure 4. Above the cut-off frequency of
2900 Hz, the maximum output of the device apparently
drops precipitously, even below the known aided levels
of speech. This type of configuration does not normally
occur in devices without frequency lowering processors. Again, we must consider the role of the NFC
processor when interpreting the measurements. In this
case, the energy that goes in to the hearing aid at
4000 Hz is coming out of the hearing aid at a much lower
frequency. The hearing aid analyzer is only measuring
energy in the 4000 Hz region, and therefore does not
register the actual level of output for the test signal. Particularly for narrowband tests such as pure tone sweeps
or tests of maximum output using narrowband test signals, these effects are very strong, and do not give valid
information above the NFC cut-off frequency. Some clinicians are rightly concerned about this:
preferred practice protocol documents recommend routine evaluation of the maximum output of hearing aids.
This can be accomplished by measuring the MPO with
the NFC processor temporarily disabled. The observed
maximum output in this condition has not, in our experience, ever been exceeded by frequency-lowered signals once the NFC processor was re-enabled. Essentially, this is analogous to setting the hearing instrument
to have appropriate outputs in the conventional (no
NFC) condition prior to enabling the NFC processor. In
summary, it seems that an acceptable maximum output
setting without NFC also provides acceptable output
limiting when NFC is activated.
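The measurement paradox described above can be sketched as follows: a narrowband analyzer looks for energy in a band centred on the input test frequency, but with NFC active the output emerges at a lower frequency and falls outside that band. The compression curve and analysis bandwidth here are assumptions for illustration:

```python
def nfc(f_in, cutoff_hz=2900.0, ratio=4.0):
    """Assumed log-domain frequency compression curve."""
    return f_in if f_in <= cutoff_hz else cutoff_hz * (f_in / cutoff_hz) ** (1.0 / ratio)

def analyzer_reading(f_test, true_output_db, band_hz=100.0):
    """What a narrowband sweep 'sees': the true output level if the
    processed tone lands within the analysis band at f_test, else None
    (the analyzer registers essentially no energy at that frequency)."""
    f_out = nfc(f_test)
    return true_output_db if abs(f_out - f_test) <= band_hz / 2.0 else None

print(analyzer_reading(1000.0, 118.0))  # 118.0 (below cutoff: valid reading)
print(analyzer_reading(4000.0, 118.0))  # None (output moved to ~3143 Hz)
```

This is why the apparent MPO collapse above the cut-off frequency is a measurement artifact rather than a true loss of output, and why measuring MPO with NFC temporarily disabled gives the interpretable result.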
Figure 4: Final verification of the fitting with NFC/SoundRecover enabled. The top panel illustrates a basic verification for conversation-level speech and maximum output. The bottom panel illustrates a verification using multiple levels of speech, ranging from soft to loud.

Early Clinical Findings
The previous sections have attempted to explain our
thoughts on how the NFC strategy can be applied to
meet specific goals for pediatric hearing aid fitting. We
used this general strategy when evaluating clinical outcomes using a prototype NFC hearing aid in a sample of adults and children with high frequency hearing loss. An early analysis of our findings is shown below, for a subset (n = 13) of the children and adults we tested. All of these listeners had better ear, high frequency pure tone averages (average of 2000, 3000, 4000 Hz) in excess of 70 dB HL. The seven children ranged in age from 6 to 17 years, with a mean age of 11 years at the beginning of the study. The six adults ranged in age from 50 through 81, with a mean age of 68 years. All participants were fitted using targets and protocols from the DSL v5 method (Bagatto et al. 2005; Scollie et al. 2005), and verified using the Audioscan Verifit as shown in the figures above. Outcomes for all participants were evaluated using a test battery that evaluated speech sound recognition, sound quality, speech production, and real world performance, on a larger sample of patients. The results of this larger trial have been submitted for publication, and are currently in review (Glista et al. submitted; Parsa et al. submitted; Scollie et al. submitted). Case study evaluations from this work have also been reported (Bagatto, Scollie, Glista, Parsa and Seewald 2008).

Our outcome measure test battery included two tests of high frequency speech sound recognition. The first used ten high frequency consonants (tʃ, d, f, , k, s, ʃ, t, , z), spoken by two female talkers within the nonsense syllable context /CIL/. These were selected from a larger test of consonant recognition (Cheesman and Jamieson 1996). In this task, listeners selected the consonant they heard from ten choices listed on a computer screen. The second used the singular and plural forms of 15 words (ant, balloon, book, butterfly, crab, crayon, cup, dog, fly, flower, frog, pig, skunk, sock and shoe), selected to evaluate word-final plural recognition as was done in previous studies of high frequency bandwidth in adults and children who use hearing aids (Stelmachowicz et al. 2002). In this task, listeners selected the word they heard from two choices (singular, plural) shown on a computer screen. For both tasks, the test level used was individually determined, aiming for a performance level of about 70% in the no-NFC condition. The lowest test level used was 50 dB SPL, and this level was increased if performance was very poor. Both tasks were completed while wearing bilateral hearing instruments that had been fitted using the procedures described above and worn for at least two weeks. Testing was completed both with and without the NFC processor.

Figure 5: Results of two tests of speech sound recognition for adults and children with hearing loss: (a) high frequency consonant recognition; (b) word-final plural identification (s/z). Percent correct is plotted against better ear high frequency pure tone average (dB HL). Symbols indicate scores obtained without NFC processing. Vertical bars indicate scores obtained with NFC processing.

Results from these two tasks are shown in figure 5. The data from both adults and children indicate that five of seven children and four of seven adults had significant improvement on at least one of the two tests when the NFC processor was activated. Four of seven children showed significant improvement on both tests. Both children and adults who benefited from NFC tended to have greater degrees of hearing loss. In no case was speech recognition made worse by the use of the NFC processor. These data provide early evidence that the NFC processor, combined with the gains associated with DSL v5 and the NFC fitting approach illustrated in figures 1 through 4, was able to provide additional cues for speech sound recognition for at least some listeners. It may be that greater degrees of hearing loss indicate greater candidacy for the NFC processing. Significant benefits for these listeners were seen, with average improvements for plural recognition in the range of 30%. These results are very encouraging, and may also be associated with improvements in speech production. Our longitudinal project to evaluate speech production changes is currently ongoing, but early results indicate significant improvement for some NFC users (Polonenko et al. 2007).
Some Thoughts on Fine Tuning
Although we have described our general fitting approach in this chapter, the use of NFC signal processing
in our studies also required individualized fine tuning, in
order to optimize the audibility of the frequency-lowered band. If sounds are lowered too much, speech may
take on a lisping quality that may be noticeable and objectionable to the wearer. Other indications of excessive
frequency lowering include excessive loudness or unpleasant sound quality for high-frequency sounds,
whether they are speech sounds or environmental
sounds. If sounds are not lowered enough, the end user
may not derive any additional benefit over and above
what the hearing aid provides without NFC processing.
Unacceptable NFC settings were typically resolvable
with small changes at the time of follow-up, with the
greatest effect usually seen when the cross-over frequency was adjusted. Cross-over frequency adjustments in the range of a few hundred Hertz are generally
small changes, while adjustments in the range of one
thousand Hertz or more are generally large changes.
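The scale of these adjustments can be illustrated with the same kind of assumed log-domain compression curve used earlier in this chapter: moving the cross-over frequency by a few hundred Hertz shifts where a given high-frequency input lands by a comparable amount.

```python
def nfc(f_in, cutoff_hz, ratio=4.0):
    """Assumed log-domain frequency compression curve."""
    return f_in if f_in <= cutoff_hz else cutoff_hz * (f_in / cutoff_hz) ** (1.0 / ratio)

# Where a 7000 Hz /s/ frication peak lands under three cross-over settings:
for ct in (2400.0, 2900.0, 3400.0):
    print(ct, round(nfc(7000.0, ct)))  # ≈ 3136, 3615, 4073 Hz respectively
```

Under these assumptions, raising the cross-over by 500 Hz moves the lowered /s/ peak up by roughly 400 to 500 Hz, consistent with the observation that cross-over adjustments of a few hundred Hertz are perceptually small but meaningful changes.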
When NFC is used, whether it is too much, too little, or just right, it may result in very high output levels
delivered to the ear without feedback. These high output levels may be audible to conversational partners
with normal hearing, particularly when talking at close
proximity and in a quiet room. This occurs because the
amplified sound leaks out of the earmolds as usual, but
does not result in an oscillation cycle, since the input
and output frequencies are not the same. Therefore, the
conversational partner may be able to hear some of their
own speech, lowered in frequency. One’s own speech,
once frequency-lowered, may be more noticeable than it
would be otherwise, and may be most noticeable when
fricatives or affricates are produced. This effect is not
necessarily a problem, but rather a new side effect associated with NFC processing that has not been noticed
with previous technologies. In some cases, more occluding earmolds and/or thicker earmold tubing may reduce or alleviate the effect.
As with any new technology, it may take some time
and patience to become familiar with how best to fine
tune. This may be similar to the issues we faced in becoming accustomed to tuning wide dynamic range compression for the best effect when it was first considered
a “new” technology.
What Don’t We Know?
Well, probably a lot! These are early days for most
forms of frequency lowering hearing aids, and NFC is
certainly not an exception. We feel that we’ve learned a
lot from our studies with it, but also that more is left to
be learned. For example, some but not all of our participants had asymmetrical hearing losses. We chose to fit
based on the needs of the better ear, and provided both
ears with that setting. Some would argue that more audibility (via a stronger NFC setting) could have been
provided to the poorer ear. However, the alternative position is that asymmetry in the frequency domain may
prevent binaural integration of sound. We chose to err
on the side of caution, and gave a symmetrical NFC setting, with the NFC cut-off frequency and compression
ratio determined only by the fit to the better ear. We
don’t know at this time if the fitting would be improved
or degraded by taking a more ear-specific strategy. This
is certainly a question for further research, and likely
depends on the degree of loss in both ears and the degree of symmetry between them.
Other unresolved issues include whether NFC processing should be fitted on infants, and how to evaluate
whether it provides them benefit, as well as whether
NFC processing is important to consider in evaluation
of cochlear implant candidacy. For both infants and
cochlear implant candidates, individual trials with NFC
may be warranted, especially when accompanied by a
high quality hearing aid fitting protocol and multidisciplinary monitoring of progress and performance with
amplification. Also, should NFC be used for music listening or for understanding speech in noise? If so, would
the optimal settings differ from those used for understanding speech in quiet? One might certainly speculate
that different amounts of frequency lowering might be
appropriate for these varying listening situations, but
hard data on these issues are not yet available.
References
Bagatto, M., Moodie, S., Scollie, S., Seewald, R., Pumford, J., and Liu, K. P. 2005. Clinical protocols for
hearing instrument fitting in the Desired Sensation
Level method. Trends in Amplification 9(4): 199–226.
Bagatto, M., Scollie, S., Glista, D., Parsa, V., and Seewald, R. 2008.
Case study: Outcomes of hearing impaired listeners
using nonlinear frequency compression technology. AudiologyOnline. Retrieved 3/17/2008 from http://www.audiologyonline.com/articles/article_detail.asp?article_id=1990
Boothroyd, A., and Medwetsky, L. 1992. Spectral distribution of /s/ and the frequency response of hearing
aids. Ear and Hearing 13(3): 150–157.
Cheesman, M. F., and Jamieson, D. G. 1996. Development, evaluation, and scoring of a nonsense word
test suitable for use with speakers of Canadian English. Canadian Acoustics 24(1): 3–11.
Gifford, R. H., Dorman, M. F., Spahr, A. J., and McKarns,
S. A. 2007. Effect of digital frequency compression
(DFC) on speech recognition in candidates for combined electric and acoustic stimulation (EAS). Journal of Speech Language and Hearing Research 50(5):
1194–1202.
Glista, D., Scollie, S., Bagatto, M., Seewald, R., and Johnson, A. submitted. Evaluation of nonlinear frequency compression II: Clinical outcomes. Ear and
Hearing.
Johansson, B. 1961. A new coding amplifier system for
the severely hard of hearing. Proceedings of the
3rd International Congress on Acoustics 2: 655–657.
Kuk, F., Korhonen, P., Peeters, H., Keenen, D., Jessen, A.,
and Anderson, H. 2006. Linear frequency transposition:
Extending the audibility of high-frequency energy.
Hearing Review (October). http://www.hearingreview.com/issues/articles/2006-10_08.asp
MacArdle, B. M., West, C., Bradley, J., Worth, S., Mackenzie, J., and Bellman, S. C. 2001. A study of the application of a frequency transposition hearing system in
children. British Journal of Audiology 35: 17–29.
McDermott, H. J., and Knight, M. R. 2001. Preliminary
results with the AVR ImpaCt frequency-transposing
hearing aid. Journal of the American Academy of Audiology 12(3): 121–127.
McDermott, H. J., Dorkos, V. P., Dean, M. R., and Ching,
T. Y. 1999. Improvements in speech perception with
use of the AVR TranSonic frequency-transposing
hearing aid. Journal of Speech, Language and Hearing Research 42(6): 1323–1335.
Miller-Hansen, D. R., Nelson, P. B., Widen, J. E., and Simon, S. D. 2003. Evaluating the benefit of speech recoding hearing aids in children. American Journal of
Audiology 12(2): 106–113.
Moeller, M. P., Hoover, B., Putman, C., Arbataitis, K.,
Bohnenkamp, G., Peterson, B., et al. 2007a. Vocalizations of infants with hearing loss compared with
infants with normal hearing: Part I – Phonetic development. Ear and Hearing 28(5): 605–627.
Moeller, M. P., Hoover, B., Putman, C., Arbataitis, K.,
Bohnenkamp, G., Peterson, B., et al. 2007b. Vocali-
zations of infants with hearing loss compared with
infants with normal hearing: Part II – Transition to
words. Ear and Hearing 28(5): 628–642.
Parent, T. C., Chmiel, R., and Jerger, J. 1997. Comparison
of performance with frequency transposition hearing aids and conventional hearing aids. Journal of the
American Academy of Audiology 8(5): 355–365.
Parsa, V., Scollie, S. D., Glista, D., Seelisch, A., Bagatto,
M., and Seewald, R. submitted. Speech quality ratings of
nonlinear frequency compressed speech by normal
and hearing impaired listeners. Ear and Hearing.
Pittman, A. L., and Stelmachowicz, P. G. 2000. Perception
of voiceless fricatives by normal-hearing and hearing-impaired children and adults. Journal of Speech,
Language and Hearing Research 43(6): 1389–1401.
Pittman, A. L., and Stelmachowicz, P. G. 2003. Hearing
loss in children and adults: Audiometric configuration, asymmetry, and progression. Ear and Hearing
24(3): 198–205.
Pittman, A. L., Stelmachowicz, P. G., Lewis, D. E., and
Hoover, B. M. 2003. Spectral characteristics of
speech at the ear: Implications for amplification in
children. Journal of Speech, Language and Hearing
Research 46(3): 649–657.
Polonenko, M., Scollie, S. D., Glista, D., Bagatto, M. P.,
Seewald, R. C., Kertoy, M., and Seelisch, A. 2007.
Evaluation of frequency compression: Effects on
speech production in children. Poster presented at
the Fourth International Conference: A Sound Foundation Through Early Amplification, Chicago IL.
Scollie, S., Parsa, V., Glista, D., Bagatto, M., Wirtzfeld,
M., and Seewald, R. submitted. Evaluation of nonlinear frequency compression I: Fitting rationale.
Ear and Hearing.
Scollie, S., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagaray, D., Beaulac, S. and Pumford, J. 2005. The Desired Sensation Level multistage input/output algorithm. Trends in Amplification 9(4): 159–197.
Seewald, R., Moodie, S., Scollie, S., and Bagatto, M.
2005. The DSL method for pediatric hearing instrument fitting: Historical perspective and current issues. Trends in Amplification 9(4): 145–157.
Simpson, A., Hersbach, A. A., and McDermott, H. J.
2005. Improvements in speech perception with an
experimental nonlinear frequency compression
hearing device. International Journal of Audiology
44: 281–292.
Simpson, A., Hersbach, A. A., and McDermott, H. J.
2006. Frequency-compression outcomes in listeners with steeply sloping audiograms. International
Journal of Audiology 45: 619–629.
Stelmachowicz, P. G., Pittman, A. L., Hoover, B. M., and
Lewis, D. E. 2002. Aided perception of /s/ and /z/
by hearing-impaired children. Ear and Hearing
23(4): 316–324.
Stelmachowicz, P. G., Pittman, A. L., Hoover, B. M., Lewis, D. E., and Moeller, M. P. 2004. The importance
of high-frequency audibility in the speech and
language development of children with hearing
loss. Archives of Otolaryngology – Head and Neck
Surgery 130(5): 556–562.
Stevens, K.N. 1998. Acoustic Phonetics. Cambridge, MA:
MIT Press.