BIOLOGY LAB 1615
MELISSA VAN BIBBER
TUES. 10 AM CLASS
SUMMARY OF ARTICLE ON “SPEECH IN NOISE”
This article summarizes experiments testing how listeners perceive language from auditory stimuli. The authors, Mor Nahum, Israel Nelken, and Merav Ahissar, from the Departments of Neurobiology and Psychology at the Hebrew University of Jerusalem, Israel, conducted the experiments mainly to test the Reverse Hierarchy Theory (RHT).
It is widely accepted in neurobiology that language understanding is processed in a hierarchical manner. The hierarchical order is:
1- Detection – Detection is the ability to respond to the presence or absence of sound. It
is the essential first step in learning to listen.
2- Discrimination – Discrimination is the ability to perceive similarities and differences
between two or more speech stimuli.
3- Identification – Identification is the ability to label by repeating, pointing to or writing
the speech stimulus heard.
4- Comprehension – Comprehension is the ability to understand the meaning of speech
by answering questions, following directions, paraphrasing, or participating in a
conversation.
In RHT, the order is reversed from "detection to comprehension" to "comprehension to detection." The authors predicted that their experiments on RHT would show that when little noise or competing stimulation surrounds a subject who is trying to comprehend language, it is easier for the subject to understand what is spoken or read, in contrast to conditions of high stimulation, where only a portion of the relevant information can be fully comprehended.
In the first study, they performed four experiments involving identification of word sets under binaural (listening with both ears at the same time) uncertainty. The study showed that performance for similar-sounding words presented identically to both ears (diotic presentation) could not be predicted when low-level uncertainty was introduced. They therefore changed a few of the word sets and replicated the first study, but obtained the same results.
In the second study, they manipulated difficulty by increasing the stimulus set size and by using dichotic listening (hearing a different word in each ear at the same time). They found that the added difficulty did not affect the use of binaural cues, and that greater attentional demands did not affect the subjects' use of the information.
They found that when the stimuli were phonologically different words, the binaural benefits were identical to those of ideal-listener models under different types of task requirements. However, when they instead presented phonologically similar word pairs, the binaural benefits were much lower than those of the ideal-listener models. These differences could not be explained in terms of the low-level binaural information itself.
In conclusion, they found that there are constraints on the use of low-level information, but that these constraints had to be formulated in terms of properties of the stimulus sets rather than in terms of behavioral difficulty or general cognitive or attentional demands. Because the acoustic contrast was large for both types of sets, both word sets had distinct, non-overlapping representations at the lower levels. They also found that the main factor determining whether the use of low-level information would reach the level of ideal listeners was the representation of the stimulus sets at the higher levels. Although RHT was originally derived for visual perception, they argue that it indeed applies to the auditory system as well.