Listening with two ears instead of one
Phonak Insight
The importance of bilateral streaming between hearing aids
Binaural VoiceStream Technology™ enables the exchange of full bandwidth
audio data between hearing instruments in real time. As a result, Phonak hearing
instruments are now capable of providing truly binaural solutions for hearing-instrument users in complex and challenging listening situations.
Abstract
For many years (e.g., Cherry, 1953), hearing care professionals
have understood the advantage of listening with two ears
compared with one. In addition to improved intelligibility of
speech in quiet, noisy, and reverberant environments, binaural
versus monaural listening improves perceived sound quality and
decreases the effort listeners must expend in order to understand
a target voice of interest. Importantly, the recent development of
full audio streaming between two hearing aids is poised to realize
these benefits for individuals with impaired hearing. This paper
will describe in detail why full audio streaming between hearing
instruments is beneficial to the hard-of-hearing with a particular
emphasis on the importance of binaural processing in challenging
listening environments.
Introduction
Hearing loss is a significant health concern and currently affects
roughly 40% of adults older than 65 years (Yueh et al., 2003).
Untreated hearing loss has far-reaching consequences in
psychosocial, emotional, physical, cognitive, and behavioral
domains of one’s life (Dalton et al., 2003). World Health Organization
estimates indicate that hearing loss is the second leading
cause of years lived with a disability at a global level. In order to
ameliorate the burden of impaired hearing, the most common
method of rehabilitation is to provide listeners with hearing
instruments coupled with counseling.
For most individuals with bilateral hearing impairment, the body
of evidence collected across nearly three decades of research has
found that the provision of two compared with one hearing aid
yields significant benefit for the listener. Although researchers
continue to work toward a fuller understanding of why listening with
two ears compared with one ear is advantageous for most listeners,
much has already been learned. What follows in this paper is some
background information about research on the “cocktail party”
problem, a brief summary of the major advantages of binaural
compared with monaural listening, followed by a more detailed
description of the potential benefits arising from a significant
technological advancement in hearing instrument development,
namely the capacity to wirelessly stream an audio signal from one
hearing instrument to the other.
The cocktail party problem
For over 50 years, researchers have attempted to better understand
how listeners are able to perform extremely complex auditory
tasks in “cocktail party” listening situations (Cherry, 1953). In
cocktail party situations, a listener selectively attends to and
identifies the speech from a single talker among a mixture of
background conversations in a multitalker situation (for reviews
see Bregman, 1990; Bronkhorst, 2000). Speech identification in
the presence of background speech is a nontrivial challenge that
results in a phenomenon called masking, the process by which the
detection of one sound (i.e., the target) is made more difficult by
the presence of another sound (i.e., the masker).
When identifying target speech in the presence of speech maskers,
the difficulties arise from two different types of masking. Whereas
energetic masking (French & Steinberg, 1947) occurs when a
target and masker compete for representation in a channel of
information at the level of the auditory periphery (e.g., in a
cochlear filter or on proximal portions of the auditory nerve),
informational masking refers to the additional masking that is
observed when there is competition for representation at higher
or more central levels of processing (Durlach et al., 2003). The
challenge of listening in environments where there is a relatively
poor signal-to-noise ratio (SNR) for target compared with masking
sounds is particularly deleterious for hearing-impaired and many
older listeners (Pichora-Fuller & Singh, 2006).
The advantages of listening with two ears compared with one ear
In order to overcome the cocktail party problem, all listeners, and
especially those with impaired hearing, rely heavily on the fact that humans
come equipped with two ears. Before we take some time to
consider why listening with two ears compared with one ear is
important, let us quickly review some of the benefits associated
with bilateral hearing instrument use.
The advantages of bilateral compared with unilateral hearing
instrument fittings include better understanding of speech in
quiet (e.g., Nabelek & Pickett, 1974) and in noisy listening situations
(e.g., McArdle et al., 2012), better sound localization performance
on both objective (e.g., Kobler & Rosenhall, 2002) and subjective
(e.g., Noble & Gatehouse, 2006) indices of sound localization
abilities, improved sound quality (e.g., Balfour & Hawkins, 1992),
decreased listening effort necessary to understand speech in the
presence of background noise (Noble & Gatehouse, 2006), reduced
auditory deprivation in aided compared with unaided ears (e.g.,
O’Neil, Connelly, Limb, & Ryugo, 2011), increased consumer
satisfaction (e.g., Kochkin & Kuk, 1997), and higher scores on
measures assessing self-perceived quality of life (e.g., Kochkin,
2000). In light of the potential benefits arising from the use of two
compared with one hearing instrument, it should perhaps not be
surprising that when given the choice, bilaterally hearing-impaired
individuals strongly prefer bilateral instead of unilateral hearing
aid fittings (e.g., Boymans et al., 2008).

Phonak Insight / Listening with two ears instead of one / January 2013

Why listening with two ears is important
The benefits from the availability of two ears arise from a number
of monaural and binaural auditory cues that facilitate speech
identification in noise. Specifically, these benefits stem from:
1. The “better-ear” effect
2. Binaural directivity
3. Binaural loudness summation
4. Binaural redundancy
5. Binaural comparisons
What follows next is a more detailed discussion of each of these
processes.
The “better-ear” effect
Likely the most important cue that listeners use to enhance
hearing in noisy situations is the monaural cue arising because of
the better-ear effect (Zurek, 1993). When target and masker
signals are produced from different locations, there is a resulting
SNR advantage at one ear relative to the other. This SNR
advantage is acoustical in nature because the head acts as an
acoustic barrier which produces a level difference between the
ears (i.e., head diffraction effects). For example, compare a
situation in which both target and masker signals are located to
the listener’s left to a situation in which the target is on the right
and the masker is on the left. Moving the target from left to right
dramatically improves the SNR at the right ear, a process largely
arising because of the sound shadow cast by the listener’s head.
Alternatively, if the target is located to the left of the listener and
the masker is located to the right, the resulting SNR will be poorer
at the right ear, and the listener can take advantage of the better
SNR available at the left ear. Assuming
spatial separation of a target and masker, the use of two ears
provides the listener with the option to allocate attentional
resources to the ear with the better SNR, regardless of the location
of the target or masker, thus improving speech identification in
noise (e.g., Hornsby, Ricketts & Johnson, 2006). An additional
benefit is that the “decision” to attend to the better ear is largely
a reflexive process when listeners are trying to understand speech
presented in the background of noise and/or reverberant
environments. Importantly, the listening advantage arising
because of the better ear effect has been reported to be as large
as 8 dB (Bronkhorst & Plomp, 1988).
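To make the arithmetic concrete, the better-ear comparison can be sketched in a few lines of Python; the dB levels below are hypothetical illustrations, not measured values.

```python
# Sketch of the better-ear effect: with a spatially separated target and
# masker, head shadow yields a different SNR at each ear, and the listener
# can attend to the ear with the better SNR. All levels are hypothetical.

def snr_db(target_db, masker_db):
    """Signal-to-noise ratio in dB at one ear."""
    return target_db - masker_db

def better_ear(left, right):
    """Pick the ear with the higher SNR; left/right are (target_db, masker_db)."""
    snr_l, snr_r = snr_db(*left), snr_db(*right)
    return ("left", snr_l) if snr_l >= snr_r else ("right", snr_r)

# Target to the listener's right, masker to the left: the head shadow
# attenuates the masker at the right ear, improving its SNR.
left_ear = (60.0, 65.0)   # SNR = -5 dB
right_ear = (65.0, 58.0)  # SNR = +7 dB
print(better_ear(left_ear, right_ear))  # -> ('right', 7.0)
```

Moving the target or masker simply changes which tuple wins; the decision is symmetric, mirroring the reflexive nature of better-ear listening described above.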
Binaural directivity
As sound travels from the free field to the eardrum, the human
torso, head and pinnae induce a number of direction-dependent
acoustic transformations that help listeners localize auditory
objects (Shaw, 1974). Importantly, by knowing where to listen,
both younger and older listeners experience considerable benefit
by focusing attention toward the target (e.g., Singh et al., 2008).
Historically, studies investigating body-related acoustic
transformations have focused on monaural acoustic effects;
however, more recent research has focused on how the auditory
system integrates monaural directional cues across the two ears
(e.g., Sivonen, 2011), a
process increasingly relevant for hearing instrument developers
given the advances with binaural signal processing algorithms.
Hence, binaural directivity is the directional listening advantage
arising from listening with two ears compared with one. The
philosophy motivating the development of binaural directivity
signal processing strategies is to exploit a listener’s ability to focus
attentional resources along a spatial vector.
Binaural loudness summation
The second important benefit of listening with two ears compared
with one ear is that listeners experience an increased perception
of loudness, a phenomenon known as binaural loudness summation
(Reynolds & Stevens, 1960). Importantly, there are two
characteristics of binaural loudness summation that make it
particularly intriguing for developers of hearing instruments.
First, let us consider the magnitude of the increase of loudness
perception arising from binaural loudness summation. In general,
threshold-based estimates of binaural loudness summation yield
increases of loudness perception of approximately 3 dB (Keys,
1947). In contrast, loudness perception for supra-threshold
signals is higher than that observed with near-threshold signals,
with typical values ranging from approximately 6-10 dB (Haggard
& Hall, 1982). Hence, this leads to our first interesting finding
regarding binaural loudness summation – namely, that there is a
significant listening advantage (i.e., 6-10 dB) when listening with
two ears compared with one ear.

The second characteristic of binaural loudness summation that
developers of hearing instruments find especially intriguing is
that unlike several phenomena where hearing-impaired listeners
typically demonstrate significantly poorer abilities than
normal-hearing listeners (e.g., less spatial unmasking [Best, Mason
& Kidd, 2011], less binaural noise reduction [Peissig & Kollmeier,
1997], increased susceptibility to forward masking [Oxenham &
Plack, 1997]), hearing-impaired listeners demonstrate binaural
summation values similar to those found with normal-hearing
listeners (Hawkins et al., 1987; see Figure 1). This is an important
detail because as hearing instruments continue to be developed so
as to take advantage of the effect of binaural loudness summation,
it suggests that all listeners can potentially experience benefit
regardless of degree of hearing impairment.
Figure 1
Mean binaural summation in dB for 500 Hz and 4000 Hz pure tones and speech
spectrum noise (SSN) for normal hearing and hearing-impaired listeners. This
figure was generated from Table 4 “MCL-B” from Hawkins et al. (1987).
Binaural redundancy
Imagine a situation where a person experiences a loss of vision in
both eyes, but remarkably, the loss is conspicuously different for
each eye. For the left eye, the person experiences extreme tunnel
vision and has no peripheral vision (as can happen with retinitis
pigmentosa). For the right eye the person experiences a complete
loss of central vision but peripheral vision remains completely
intact (as can happen with macular degeneration). Although
vision would be severely restricted when using only one eye, in
theory, the use of both eyes would provide almost full access to a
visual scene, as higher cognitive centers would integrate input
from each eye in order to form a more unified and less fragmented
image of the world. This scenario highlights the benefit of
redundancy in the visual system arising from the availability of
having two eyes (see Figure 2). Interestingly, a parallel process
occurs in the human auditory system.
4
Phonak Insight / Listening with two ears instead of one / January 2013
Binaural redundancy is the advantage obtained from receiving
identical information about the signal in both ears (also referred
to as diotic listening). One of the costs of listening with one
instead of two ears is that there is only one opportunity for the
auditory system to capture the available information in a signal.
In other words, there is a loss of redundancy of cues available
across the two ears. Binaural redundancy describes a process
whereby the brain has two “looks” at each sound (Dillon, 2001).
This process is particularly relevant for listeners with asymmetrical
hearing losses because the auditory cues available in a given
signal may be more readily accessible for one ear compared with
the other. For example, consider an individual with a high-frequency hearing loss in the left ear and a low-frequency hearing
loss in the right ear. By presenting a signal to both ears, the listener
will be able to access low- and high-frequency cues with the left
and right ear, respectively.
Importantly, the term binaural redundancy is frequently used
interchangeably with binaural summation; neither term should be
confused with binaural loudness summation, which refers to the
combination of information across the ears that results in an
increased loudness percept.
Improvements of 1-2 dB SNR are typically observed from
experiments on diotic listening (e.g., Bronkhorst & Plomp, 1988).
Importantly, both individuals with normal and impaired hearing
can take advantage of the binaural redundancy effect (Day et al.,
1988).
Figure 2
Left: Retinitis pigmentosa: Extreme tunnel vision (complete loss of peripheral vision).
Middle: Macular degeneration: No central vision (peripheral vision is undisturbed).
Right: Result: Access to most visual cues and a resulting percept that is more unified.
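The asymmetrical-loss example above can be sketched as a simple per-band audibility union; the frequency bands and audibility values are hypothetical.

```python
# Sketch of binaural redundancy with a hypothetical asymmetrical loss:
# a band of the signal remains accessible if at least one ear can hear it.

left_ear = {"250 Hz": True, "1 kHz": True, "4 kHz": False}   # high-frequency loss
right_ear = {"250 Hz": False, "1 kHz": True, "4 kHz": True}  # low-frequency loss

# Presenting the signal to both ears gives the brain two "looks" at it
binaural = {band: left_ear[band] or right_ear[band] for band in left_ear}
print(binaural)  # -> {'250 Hz': True, '1 kHz': True, '4 kHz': True}
```

Monaurally, either ear alone misses a band; binaurally, every band is audible through at least one ear, paralleling the two-eye analogy in Figure 2.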
Binaural comparisons
Listening can also be enhanced by making comparisons between
the two ears, a process arising from the fact that when sounds
arrive from a location in space (other than when presented directly
in front of or behind a listener), the sound will first arrive at one
ear relative to the other (resulting in an interaural timing difference
or ITD cue), and the sound will be louder at the ear closer to the
signal relative to the far ear (resulting in an interaural level
difference or ILD cue) (for a review see Bronkhorst, 2000). Both
cues are critically important for localization performance and
their availability can facilitate speech understanding in complex
listening environments when target sounds originate from
unexpected listening locations (Singh, Pichora-Fuller & Schneider,
2008). In addition to providing ILD and ITD cues, binaural
processing of interaural differences enables higher perceptual
systems to take advantage of the subtle spectro-temporal
differences between the target and masker signals arriving at
each ear via a process known as interaural cross-correlation (ICCs)
(e.g., Colburn et al., 2006; Culling, Hawley & Litovsky, 2004). For
example, Akeroyd and Summerfield (2000) found that in difficult
listening situations with low SNRs, listeners benefit from making
fine-tuned comparisons of highly correlated spectral profiles of a
signal arriving at one ear relative to the other. Importantly, the
contribution of ILDs, ITDs, and ICCs to listening performance is
relatively well understood in simple, highly-controlled, and
anechoic listening environments. However, there is still much to
learn about the benefit of these cues in noisy, reverberant, and
multitalker listening environments, because all of these cues
combine in complex ways that do not permit useful analysis with
currently available experimental methodologies.
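As a concrete illustration of one interaural comparison, the short NumPy sketch below recovers a hypothetical 0.5 ms ITD from the peak of the interaural cross-correlation; the signal parameters and geometry are invented for the example.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
burst = np.sin(2 * np.pi * 300 * t) * np.hanning(t.size)  # brief 300 Hz tone burst

# Hypothetical geometry: the source is to the left, so the sound reaches
# the left ear 0.5 ms before the right ear.
d = int(0.0005 * fs)  # ITD in samples (22 at 44.1 kHz)
left = np.concatenate([burst, np.zeros(d)])
right = np.concatenate([np.zeros(d), burst])

# Interaural cross-correlation: the lag of the peak estimates the ITD
xcorr = np.correlate(right, left, mode="full")
lag = np.argmax(xcorr) - (len(left) - 1)  # samples by which right lags left
itd_ms = 1000 * lag / fs
print(round(itd_ms, 2))  # -> 0.5 (left leads, consistent with a leftward source)
```

Real binaural processing operates on many frequency channels at once, but the same peak-picking idea underlies ICC-based accounts of localization.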
Binaural VoiceStream Technology™
Although all the major manufacturers of hearing instruments
have developed hearing aids capable of wirelessly exchanging
information with each other, it is important to consider the
sophistication of the data streaming capabilities available between
the manufacturers. Currently, the most sophisticated hearing
instruments are capable of sending and receiving information at
approximately 300 kbit/s. It is at these exchange rates that
hearing instruments are capable of sending and receiving a full
bandwidth audio signal, thus establishing a new and exciting
frontier for developers and especially users of hearing instruments.
Moreover, the ability to copy, send, receive, and present ongoing
audio signals between hearing instruments will continue to foster
new innovations that take into consideration decades of research
on binaural processing.

As previously discussed, in noisy and reverberant listening
situations, speech understanding is greatly enhanced by attending
to the ear with the better SNR, a phenomenon known as the
better-ear effect. One of the significant advancements of modern
hearing instruments is the capacity for the hearing instrument to
calculate the amount of signal present relative to the amount of
noise (i.e., SNR), a capability that can be performed at the resolution
of individual frequency bands. In light of the ability to stream the
full audio signal from one hearing instrument to the other, and
because each hearing instrument is able to calculate SNR, it is
now possible to stream a copy of the audio signal from the hearing
instrument with the better SNR to the hearing instrument with
the poorer SNR, thus providing the listener with the output of the
“better ear” at not one, but both ears.

The capability to stream from the ear with a high SNR to the ear
with a low SNR is perhaps best illustrated by the DuoPhone
feature (see Figure 3). When using a phone with conventional
hearing instruments, there is clearly an ear where the SNR is
relatively good (i.e., the ear at which the phone is placed) and an
ear where the SNR is relatively poor (i.e., the other ear). By
wirelessly streaming the signal from the better ear so that a copy
of the signal is presented at the poorer ear, the listener will
experience an enhanced ability to understand speech on the
phone (Picou & Ricketts, 2011). Although this example highlights
the benefit of real-time audio exchange between hearing
instruments when using the phone, there is potential for
application of this technology in any situation where the hearing
instrument detects a more favorable SNR at one ear relative to the
other.

Figure 3
Change in dB SNR on the Just Follow Conversation (JFC) test for individual
test subjects when using DuoPhone (Nyffeler, 2010). Change is calculated by
subtracting the level of the speech required when listening with DuoPhone
from the level of the speech required when listening to a monaurally presented
signal (positive values signify an improvement).
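The per-band SNR comparison that drives this kind of streaming can be sketched as follows; the band names, SNR values, and function are hypothetical illustrations, not Phonak's actual algorithm.

```python
# Hypothetical sketch of a per-band better-ear decision: each instrument
# estimates the SNR in each frequency band, and the side with the better
# SNR serves as the streaming source for that band.

def streaming_source_per_band(snr_left, snr_right):
    """For each band, choose the ear whose signal should be streamed."""
    return {band: ("left" if snr_left[band] >= snr_right[band] else "right")
            for band in snr_left}

# Phone held to the left ear: speech bands are far better on the left
snr_left = {"low": 12.0, "mid": 9.0, "high": 3.0}
snr_right = {"low": -2.0, "mid": -4.0, "high": 4.0}
print(streaming_source_per_band(snr_left, snr_right))
# -> {'low': 'left', 'mid': 'left', 'high': 'right'}
```

In the DuoPhone case the decision is the same in nearly every band, so the whole phone signal is copied to the far ear.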
Importantly, Binaural VoiceStream Technology™ yields another
important and unique benefit when coupled with standard
directional microphone technology. In most modern hearing
instruments, directional processing is achieved using the two
omnidirectional microphones found on one hearing instrument.
However, with the advent of full bandwidth audio streaming
between hearing instruments, it is now possible to achieve
directional processing that coordinates the signals from the four
microphones available with a bilateral hearing aid fitting, thus
permitting a truly binaural beamformer. As a result, it is now
possible to achieve more advanced beamformer polar responses
that were previously unavailable with directional microphone
systems that coordinate input from two microphones. With
StereoZoom, for example, Phonak is now able to achieve a more
focused beam pattern compared with traditional monaural
beamformers, resulting in significantly improved speech
understanding performance (Kreikemeier et al., 2012).
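As a toy illustration of why four coordinated microphones help, the sketch below delay-and-sums four microphone signals for a frontal target (zero steering delay); the signal model and noise levels are invented, and a real binaural beamformer such as StereoZoom is far more sophisticated.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)  # frontal target: in phase at all mics
rng = np.random.default_rng(1)

# Four microphones (two per instrument); each sees the target plus
# independent noise as a crude stand-in for a diffuse noise field.
mics = [target + rng.normal(0.0, 0.8, fs) for _ in range(4)]

# Delay-and-sum steering toward the front: zero delay per mic, so
# averaging preserves the target while averaging down the noise.
beam = np.mean(mics, axis=0)

def snr_db(clean, observed):
    noise = observed - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

print(round(snr_db(target, mics[0]), 1))  # a single microphone
print(round(snr_db(target, beam), 1))     # roughly 6 dB better with four mics
```

With independent noise, averaging four microphones cuts the noise power by a factor of four (about 6 dB); coordinating four microphones across the head additionally permits narrower polar responses than any two-microphone array on one instrument.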
Why does listening improve with full audio streaming between hearing instruments?
The reason listening abilities improve with full audio streaming
between hearing instruments is largely because the hearing
instruments are able to capitalize on the better ear effect, and this
is critically important because of the additional benefits bestowed
by binaural loudness summation and binaural redundancy. Recall
that better-ear effects are perhaps the most important reason
why intelligibility improves in challenging listening environments
when target and masker signals originate from different locations
(Brungart & Simpson, 2002). By copying the audio signal at the
hearing instrument located at the better ear and presenting this
signal also from the hearing instrument with the poorer SNR,
listeners experience a more favorable listening situation.
Specifically, the signal available at the “better ear” is processed
and routed not just through the hearing instrument located at the
“better ear”, but instead is processed and routed through both
hearing instruments.
Importantly, the benefit of full audio streaming between hearing
instruments is not limited to strategic exploitation of the better
ear effect. In addition, wireless streaming of the full audio signal
takes advantage of both the binaural loudness summation and
binaural redundancy effect. As previously indicated, binaural
summation provides listeners with a robust listening advantage
of 6-10 dB, and importantly, the ability to exploit the benefit of
binaural loudness summation appears to be fully preserved for
listeners with sensorineural hearing loss. Finally, by presenting a
signal to both ears, the auditory system is provided with multiple
rather than just one opportunity to access auditory cues available
in the signal (i.e., binaural redundancy). As previously indicated,
binaural redundancy typically results in improvements of 1-2 dB
SNR and are likely most important for individuals with
asymmetrical hearing losses.
Lastly, recall that as sound travels from the free field to the
eardrums, the human body produces a number of direction-dependent
acoustic transformations that serve as important localization cues
for listeners. Notably, the cues available to each ear combine in a
process resulting in binaural directivity, or the directional
listening advantage when listening with two ears compared with one
ear. Both StereoZoom and autoStereoZoom are designed to mimic
binaural directivity for bilaterally hearing-impaired listeners via
coordination of the two dual-microphone systems available with
bilateral hearing instrument fittings, a capability uniquely
available with Binaural VoiceStream Technology™.
Conclusions
• Binaural VoiceStream Technology™ is the capability to
wirelessly exchange full bandwidth audio signals between
hearing instruments. This represents a significant technological
advancement in hearing instrument development and opens
up the possibility to exploit decades of research on binaural
processing.
• There is the potential to apply the full audio streaming
technology in any listening situation where the hearing
instrument detects a more favorable SNR in any frequency
band of one of the hearing instruments relative to the other.
• Binaural VoiceStream Technology™ takes advantage of several
mechanisms critical for improving intelligibility in difficult
listening environments. These mechanisms include the better-ear
effect, the binaural loudness summation effect, and the
binaural redundancy effect.
• Binaural VoiceStream Technology™, when used in conjunction
with directional microphone technology, enables StereoZoom
and autoStereoZoom, features which take advantage of the
listening benefit arising from binaural directivity.
References
Akeroyd M.A. & Summerfield A.Q. (2000). Integration of monaural and binaural evidence of vowel formants. Journal of the Acoustical Society of America,
107(6), 3394-3406.
Balfour P.B. & Hawkins D.B. (1992). A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli. Ear and Hearing,
13(5), 331-339.
Best V., Mason C.R. and Kidd G. (2011). Spatial release from masking in normally hearing and hearing-impaired listeners as a function of the temporal overlap
of competing talkers. Journal of the Acoustical Society of America 129(3), 1616-1625.
Boymans M., Goverts S.T., Kramer S.E., Festen J.M. & Dreschler W.A. (2008). A prospective multi-centre study of the benefits of bilateral hearing aids.
Ear and Hearing, 29(6), 930-941.
Bregman A.S. (1990). Auditory Scene Analysis. MIT Press: Cambridge, MA.
Bronkhorst A.W. (2000). The cocktail party phenomenon: A review of research on speech intelligibility in multiple talker conditions. Acustica, 86, 117–128.
Bronkhorst A.W. & Plomp R. (1988). The effect of head-induced interaural time and level differences on speech intelligibility in noise. Journal of the Acoustical
Society of America, 83, 1508-1516.
Brungart D. & Simpson B. (2002). The effects of spatial separation in distance on the informational and energetic masking of a nearby speech signal.
Journal of the Acoustical Society of America 112, 664–676.
Cherry E.C. (1953). Some experiments on the recognition of speech, with one and two ears. Journal of the Acoustical Society of America, 25, 975-979.
Colburn H.S., Shinn-Cunningham B., Kidd G. Jr. & Durlach N. (2006). The perceptual consequences of binaural hearing. International Journal of Audiology,
45(Suppl 1), S34-44.
Culling J.F., Hawley M.L. & Litovsky R.Y. (2004). The role of head-induced interaural time and level differences in the speech reception threshold for multiple
interfering sound sources. Journal of the Acoustical Society of America, 116, 1057-1065.
Dalton D.S., Cruickshanks K.J., Klein B.E.K, Klein R., Wiley T.L., et al. (2003). The impact of hearing loss on quality of life in older adults. The Gerontologist,
43(5), 661-668.
Day G., Browning G. & Gatehouse S. (1988). Benefit from binaural hearing aids in individuals with a severe hearing impairment. British Journal of Audiology,
23, 273–277.
Dillon H. (2001). Hearing Aids. Sydney: Boomerang Press.
Duquesnoy A.J. (1983). The intelligibility of sentences in quiet and in noise in aged listeners. Journal of the Acoustical Society of America, 74, 1136-1144.
Durlach N.I., Mason C.R., Kidd G. Jr., Arbogast T.L., Colburn H.S. & Shinn-Cunningham B.G. (2003). Note on informational masking. Journal of the Acoustical
Society of America, 113, 2984-2987.
French N.R. & Steinberg J.C. (1947). Factors governing the intelligibility of speech sounds. Journal of the Acoustical Society of America, 19, 90–119.
Haggard M. & Hall J. (1982). Forms of binaural summation and the implications of individual variability for binaural hearing aids. Scandinavian Audiology
Supplementum (15), 47-63.
Hawkins D.B., Walden B.E., Montgomery A. & Prosek R.A. (1987). Description and validation of an LDL procedure designed to select SSPL90. Ear and Hearing,
8(3), 162-169.
Hornsby B.W., Ricketts T. A. & Johnson E. E. (2006). The effects of speech and speechlike maskers on unaided and aided speech recognition in persons with
hearing loss. Journal of the American Academy of Audiology, 17, 432-447.
Keys J.W. (1947). Binaural versus monaural hearing. The Journal of the Acoustical Society of America, 19(4), 629-631.
Kobler S. & Rosenhall U. (2002). Horizontal localization and speech intelligibility with bilateral and unilateral hearing aid amplification. International Journal of
Audiology, 41, 395-400.
Kochkin S. & Kuk F. (1997). The binaural advantage: evidence from subjective benefits and customer satisfaction data. The Hearing Review, 4(4), 29-32.
Kochkin S. (2000). MarkeTrak V: Consumer Satisfaction Revisited. The Hearing Journal, 53(1), 38-55.
Kreikemeier S., Margolf-Hackl S., Raether J., Fichtl E. & Kiessling J. (2012). Vergleichende Evaluation unterschiedlicher Hörgeräte-Richtmikrofontechnologien
bei hochgradig Schwerhörigen. Zeitschrift für Audiologie, Supplement zur 15. Jahrestagung der deutschen Gesellschaft für Audiologie.
McArdle R., Killion M., Mennite M. & Chisolm T. (2012). Are two ears not better than one? Journal of the American Academy of Audiology, 23, 171-181.
Nabelek A.K. & Pickett J.M. (1974). Reception of consonants in a classroom as affected by monaural and binaural listening, noise, reverberation, and hearing
aids. Journal of the Acoustical Society of America, 56, 628–639.
Noble W. & Gatehouse S. (2006). Effects of bilateral versus unilateral hearing aid fitting on abilities measured by the Speech, Spatial, and Qualities of Hearing
Scale (SSQ). International Journal of Audiology, 45, 172-181.
Nyffeler M. (2010). DuoPhone. Field Study News, February.
O’Neil J.N., Connelly C.J., Limb C.J. & Ryugo D.K. (2011). Synaptic morphology and the influence of auditory experience. Hearing Research, 279(1-2), 118-130.
Oxenham A.J. & Plack C.J. (1997). A behavioral measure of basilar membrane nonlinearity in listeners with normal and impaired hearing. Journal of the Acoustical
Society of America, 101, 3666–3675.
Peissig J. & Kollmeier B. (1997). Directivity of binaural noise reduction in spatial multiple noise-source arrangements for normal and impaired listeners, Journal
of the Acoustical Society of America, 101, 1660–1670.
Pichora-Fuller M.K. & Singh G. (2006). Effects of age on auditory and cognitive processing: implications for hearing aid fitting and audiological rehabilitation.
Trends in Amplification, 10, 29-59.
Picou E.M. & Ricketts T.A. (2011). Comparison of wireless and acoustic hearing aid-based telephone listening strategies. Ear and Hearing, 32(2), 209-220.
Reynolds G.S. & Stevens S.S. (1960). Binaural summation of loudness. The Journal of the Acoustical Society of America, 32(10), 1337-1344.
Singh G., Pichora-Fuller M.K. & Schneider B.A. (2008). The effect of age on auditory spatial attention in conditions of real and simulated spatial separation.
Journal of the Acoustical Society of America, 124, 1294-1305.
Sivonen V.P. (2011). Binaural directivity patterns for normal and aided human hearing. Ear & Hearing, 32, 674-677.
Shaw E.A.G. (1974). Transformation of sound pressure level from the free field to the eardrum in the horizontal plane. Journal of the Acoustical Society of America,
56, 1848-1860.
Yueh B., Shapiro N., MacLean C.H., et al. (2003, April 16). Screening and management of adult hearing loss in primary care. Journal of the American Medical
Association, 289 (15), 1976-1985.
Zurek P.M. (1993). Binaural advantages and directional effects in speech intelligibility. In G.A. Studebaker & I. Hochberg (Eds.), Acoustical Factors Affecting Hearing Aid Performance. Needham Heights, MA: Allyn and Bacon.