Software Defined Hearing Aid
Senior Project 1
By: Richard Knowles & David Vaz
Project Leader: Richard Knowles
Project Advisor: Dr. Allen Katz
Technical Adviser: Jay Ross
May 2010
Hearing Aid Designed with Memory
A Senior Design Project submitted to the faculty of
The College of New Jersey
Senior Project 1
By: Richard Knowles & David Vaz
Project Leader: Richard Knowles
In partial fulfillment of the requirements for the degree of
Bachelor of Science in Computer Engineering
May 2010
Abstract
Hearing aids are crucial for making sound more accessible to the 28 million hearing impaired
people in the United States, yet only about one out of five hearing impaired people actually wears
hearing aids. Not only do elderly hearing-impaired people have problems hearing due to their
physical capabilities, but their cognitive skills often slowly deteriorate, resulting in short-term
memory loss. Today, there are hearing aids that are helpful for many types of hearing loss, but
none for short-term memory loss. The long-term objective of this project is to design a hearing
aid that will assist with both hearing and short-term memory loss. It is hoped that this combination
can treat both hearing and memory issues. The principal tasks to be accomplished this year are
the following: 1) create the software for the audio filters needed by the hearing aid system, with
focus on an adjustable filter that can compensate for the hearing loss of individual hearing aid
users; and 2) create the software for an automatic level control needed by the hearing aid system.
Automatic level control is essential for the efficient operation of the hearing aid system. The
final product shall be a software-defined hearing aid that is very user friendly and treats both
short-term memory and hearing loss.
Table of Contents
Abstract ............................................................................................................................. 3
Introduction ........................................................................................................................ 5
Background ........................................................................................................................ 6
Basic Hearing Aid System ................................................................................................. 8
Specifications ..................................................................................................................... 9
System Design ................................................................................................................... 10
Hardware Selection ........................................................................................................... 12
Platform Selection ............................................................................................................. 13
Headset Selection .............................................................................................................. 13
Hardware/Software Design ............................................................................................... 14
ALC .................................................................................................................................... 16
Equalizer ............................................................................................................................ 18
FIR Filter ........................................................................................................................... 20
Program Design ................................................................................................................. 23
Test Programs .................................................................................................................... 24
Design Tools ...................................................................................................................... 26
Project Status/Discussion .................................................................................................. 26
Conclusion ......................................................................................................................... 28
References .......................................................................................................................... 29
Appendix A – Biography ................................................................................................... 30
Appendix B – Gantt Chart ................................................................................................. 31
Appendix C – Engineering Standards and Realistic Constraints ...................................... 32
Appendix D – Three Laws of Marketing ........................................................................... 33
Appendix E – Software Defined Hearing Aid Milestone Evaluation ................................ 34
Appendix F – Program Code ............................................................................................. 35
Introduction
There is a large population of Americans with a hearing loss. About 17% (36 million) of
American adults have a hearing loss. About 2 or 3 out of every 1,000 children that are born have
a hearing loss. One may not have a hearing loss at birth, but may lose hearing as he or she gets
older. About 15% (26 million) of Americans between the ages of 20 and 69 lose their hearing.
About 47% of Americans who are 75 years of age or older have a hearing loss. Not only do
these people lose their physical hearing capabilities, but their cognitive skills can slowly
deteriorate as well, resulting in short-term memory loss. Hearing aids are beneficial to those who
are hearing impaired, giving them better hearing. Only one out of every five hearing impaired
Americans actually uses hearing aids. Therefore, 80% of those who are hearing impaired are not
benefitting from hearing aids. There are many different kinds of hearing aids, but there is still
much that can be done to improve the performance of conventional sound amplifying hearing
aids through use of sophisticated digital signal processing (DSP) and nonlinear filtering that is
tailored to an individual’s needs. A hearing device that assists with both hearing loss and short
term memory loss does not exist.
There have been attempts to solve the issue of short-term memory loss through surgical
procedures, medications, and brain-stimulating activities. These methods were found to be very
ineffective. [1] The main objective of this project is to address both short-term memory loss and
hearing loss by digitally combining a sound recorder with a hearing aid. The device will give the
user the capability of playing back specific sound information. It will be very user-friendly and
give the user control of the frequency response and level of what he or she is hearing with the
addition of a graphic equalizer (GE) and automatic level control (ALC).
Background
There are many different causes of hearing loss. Hearing loss often comes from the wear
and tear on an individual’s ears from noise that damages the inner ear. About 25% of
Americans between 67 and 75 years old, and about half of those older than 75, suffer from
hearing loss at some level. [2] Short-term memory loss can be associated with hearing loss. One
of the most common types of short-term memory loss is amnesia, which is unusual
forgetfulness that can be caused by brain damage due to disease or injury. In the United States,
an estimated 5.4 million people aged 71 years or older suffer from mild amnesia.
About 3.4 million people in the United States suffer from dementia, which is a more severe type
of memory loss that affects the ability of a person to function independently. These numbers are
expected to grow larger as the baby-boomer population gets older.
Hearing aids are devices that can be designed for an individual with any type of hearing
loss. Most commonly they simply amplify sound to make it louder for the individual. There are
three basic types of hearing loss: conductive, sensorineural, and mixed. Conductive hearing loss
occurs when sound is not conducted efficiently through the outer canal to the eardrum and the tiny
bones, called ossicles, of the middle ear. Conductive hearing loss can be a result of impacted
earwax, an infection in the ear canal, or an absence or malformation in the ear. This type of
hearing loss causes a reduction in sound level and can be corrected through surgical procedures.
Sensorineural hearing loss occurs when there is damage in the inner ear (cochlea) or the nerve
pathways from the inner ear to the brain. This type of hearing loss can be caused by diseases,
birth injury, genetic syndromes, noise exposure, head trauma, aging, etc. Sensorineural hearing
loss causes a reduction in sound level and in the ability to hear faint sounds. It also affects
speech understanding and the ability to hear clearly. This type of hearing loss cannot be corrected,
but those who have a sensorineural hearing loss can benefit from the use of hearing aids. The
last basic type of hearing loss is called a mixed hearing loss which is a combination of
conductive and sensorineural hearing loss. It occurs when there is damage in the outer or middle
ear as well as the inner ear (cochlea) or auditory nerves.
The degree of hearing loss is broken up into different intervals as shown in the table
below. Hearing aids are most beneficial to those with a hearing loss range of at least 71 dB.
Table 1: Degrees of Hearing Loss
Basic Hearing Aid System
The most basic hearing aid system consists of a microphone to detect sound waves, an
amplifier to increase the incoming signal, and a speaker to play the amplified sound. More
advanced versions include some sort of filtering system that allows the frequency response to be
adjusted to suit the user’s needs. This allows the hearing aid to be more beneficial for a
specific type of hearing loss, since the many different types of disabilities have different
effects on the frequency sensitivity of the ear. The following figure depicts the basic system.
Figure 1: Basic Hearing Aid System (microphone, amplifier, filtering, and speaker)
Specifications
Traditional hearing aids simply amplify a sound while at the same time trying to minimize
distortion. The goal of this hearing aid is to combine the traditional function of a hearing aid with
a computer. The computer saves sound segments available for playback at the request of the
user. These functions improve the user’s hearing ability and compensate for short-term memory loss.
To be classified as a hearing aid, the device must be able to supply an audio gain of at least +20
dB. The level of gain depends on the user and will be adjustable using the device. The device
must also be tailored to apply specific levels of gain to specific incoming frequency signals. The
normal hearing capabilities for a human are between 20 Hz - 20,000 Hz, but the
intelligibility of common speech typically requires only frequencies between about 100 Hz
and 2,000 Hz. Thus a hearing aid must be able to adaptively filter the spectrum between
100 Hz and 2,000 Hz to produce optimal gains for a specific individual. The graphic
equalizer currently being implemented will assist a great deal in this area. The sound will
pass through a series of band pass filters connected in parallel, all with the ability to be
adjusted in gain.
To help the user with memory loss, the system will save speech segments. Each time
someone begins to speak, the system detects the presence of sound and begins to record.
Experimentation showed that a period between 2 and 5 seconds is typically required for a
given segment (the system can handle segments as short as 1 second and as long as 10
seconds). A series of 10 segments can be saved in the system at any given time; therefore
the total storage capacity must cover roughly 2 minutes of audio. About 3 MB of data is typically
required.
The battery life of the system must last through an entire working day, which is normally
16 hours. The weight and size of the system are also crucial to its functionality. The goal for
the system was a weight of less than 1 lb and a size small enough to carry around from one
place to another. Ideally, it should be able to fit on a belt or in the user’s pocket.
In short, the system will be able to:
• Supply an audio gain of at least 20 dB
• Filter the spectrum between 100 Hz and 2,000 Hz
• Record sound segments of up to 10 seconds
• Save as many as 10 sound segments
• Last through a 16-hour day
• Weigh less than 1 lb
System Design
The first phase of the design was to determine the best platform for implementing the
hearing aid and memory loss correction system. The optimal system had to be one that was
portable, small in size and light in weight, and with adequate amplification. It also had to record
speech (store it in memory), process the sound signals in real time, and play back the selected
sound information on command. Several different platforms were considered to determine the
most efficient and practical approach.
Although several types of systems were compared, certain requirements of the design had
to be met. The device must have a microphone and speaker (headphone). Also, the computer
system must be able to send the sound signals to the speaker wirelessly. A conventional hearing
aid’s small size tends to limit the ability to do DSP. Sending the sound signals from the earpiece
(headphone) wirelessly to an external computer allows for much greater DSP capabilities.
Figure 2 shows the proposed system design. The microphone will pick up the audio
signal (conversation), send the signal to an analog-to-digital converter and the resulting digital
signal to a computer where it can be processed and stored digitally. With a digital signal,
different forms of signal processing can much more easily be implemented in software than in
dedicate hardware. After processing, the digital signal is sent through a digital-to-analog
converter and then back to the earpiece. The earpiece will include an amplifier to boost the signal
to the level best for hearing by the user. (To avoid cumbersome wires connecting the earpiece to
the computer and back to the earpiece, a wireless connection will be used). Currently ALC and
GE functions are being added to the software capabilities of the computer (CPU). These
functions are depicted in yellow at the bottom of Figure 2.
Figure 2: Block Diagram of Proposed System (microphone → A/D converter → CPU with DSP, equalizer, and ALC → D/A converter → amplifier → speaker)
Hardware Selection
When the selection process was performed one year ago, three alternatives were
considered for the wireless connectivity between the computer system and the earpiece.
Among the options considered were: 1) building the system from scratch, 2) using a pre-made
receiver and transmitter, and 3) obtaining a Bluetooth headset. Due to time
constraints, building the system was not practical. Using a pre-made receiver and
transmitter would have required less time and allowed for more flexibility, but would have
cost more money than the third option, which was obtaining a Bluetooth headset.
Obtaining a Bluetooth headset proved to be the best solution due to the lower cost and shorter time
required. Additionally, this option meets the system criterion of wireless connectivity. Table
2 shows the options considered, and indicates that the Bluetooth headset was most
practical.
Table 2: Selection of Wireless Device
Platform Selection
A variety of platforms were considered to be used for digital processing and voice
recording. The options evaluated were: 1) a desktop computer, 2) a laptop computer, 3) a
dedicated commercially available digital voice recorder, 4) an MP3 player, 5) a system built
from scratch, 6) a PDA, 7) a netbook, and 8) a combination of these options. The platforms
used one year ago for the development of the code were a laptop and netbook running
Linux OS, and the final platform selected was the Openmoko PDA. As listed in Table 3, it is
clear that the Openmoko smartphone was the best solution.
Table 3: Selection of Hardware Platform
Headset Selection
The earpiece selected for this project, which performs the sound delivery, was
the Jabra BT8010 headset. This is a Bluetooth 2.0 headset that has built-in DSP technology
to reduce background noise and has volume control adjustment. It also has a built-in stereo
capability. Stereo is beneficial to the project in
that each ear can be programmed to have individual frequency responses. It also has a
rechargeable battery that lasts up to 10 hours of talk time, and it weighs just 1.3 oz.
Hardware/Software Design
The Openmoko smartphone was chosen as the final platform for the software development,
but a laptop running the Linux operating system was used for development last year. This year, a
Windows operating system was used for development. The reasoning behind this
decision is that a large amount of testing is performed on the development
machine, and the .wav files used in the majority of tests were
produced in the Windows environment. Changing environments was not a problem because
the C programming language is multi-platform and is compatible with Linux,
the Openmoko smartphone, and Windows machines. The Openmoko smartphone
runs on a Samsung s3c2410 SoC @ 266 MHz processor, and has 128 MB SDRAM and 64 MB
NAND Flash memory. The 2.8 inch VGA TFT screen provides excellent visual clarity, and
Bluetooth provides wireless access.
The C programming language was used to minimize the complexity of transferring
the program to a different machine, which proved to be important with the switch from a
Linux to a Windows environment. Furthermore, C allows for a faster and more reliable
runtime, since it is a relatively low-level language. Writing the program in C
allows for nearly real-time data processing, with latency of less than 2 milliseconds.
The audio subsystem uses an interleaved circular buffer design, which captures stereo
sound and alternates the right and left channels in memory. Each channel requires 2 bytes
per sample, which means that every frame is 4 bytes (2 bytes for each channel). The audio
is stored into the incoming buffer as RLRLRL, etc. Using 32 frames per block of audio
provided the best audio output, which corresponds to 128 bytes per block. In order to achieve CD
quality audio, the program uses a 44.1 kHz sample rate, as well as 16-bit audio (2 bytes per
channel). The audio is stored into a ring buffer in memory that has 3 megabytes of allocated
memory. Three megabytes is equivalent to about 32 seconds of stored audio, which should be
sufficient to recall enough of a conversation to remember what had happened. Given this
system implemented one year ago, certain changes have been made to implement the GE
and ALC. The audio content of interest extends only to about 3 kHz, and the Nyquist criterion
states that the sampling rate then needs to be at least 6 kHz. With the system currently sampling
at CD quality (44.1 kHz), a highly excessive number of samples are taken, consuming memory
and slowing the program. In the ALC and equalizer functions, the sampling rate was
converted to 11.025 kHz, exactly ¼ of the current program’s sampling
rate. The sampling rate was then converted back to 44.1 kHz at the end of each function, so
there are no compatibility issues once the signal returns to the normal flow of the program.
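The interleaved ring buffer described above can be summarized in a short sketch. This is illustrative only, not the project's code; the structure and names are assumptions, and the 3 MB size simply follows the figure given in the text.

// Sketch of the interleaved circular (ring) buffer: stereo frames of 4 bytes
// (2 bytes per channel) stored as L R L R ... in a fixed 3 MB region, with the
// write index wrapping around so the oldest audio is overwritten first.
#include <cstdint>
#include <cstddef>
#include <vector>

const std::size_t RING_BYTES  = 3 * 1024 * 1024;        // 3 MB of audio history
const std::size_t RING_FRAMES = RING_BYTES / 4;         // 4 bytes per stereo frame

struct RingBuffer {
    std::vector<int16_t> samples = std::vector<int16_t>(RING_FRAMES * 2, 0);
    std::size_t frame = 0;                               // next frame to overwrite

    // Write one stereo frame; playback can read backwards from 'frame' to
    // return the user to an earlier point in the conversation.
    void push(int16_t left, int16_t right) {
        samples[frame * 2]     = left;                   // L
        samples[frame * 2 + 1] = right;                  // R
        frame = (frame + 1) % RING_FRAMES;
    }
};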
The user controls can be very extensive or very simple. It is important to keep the
default controls as simple as possible, but to provide the option to configure more
advanced features. The default commands necessary to run the program at the present
time include: return, reset, and quit. These commands correspond to returning the audio to
a previous time, resetting the audio to a live input, and quitting the program. The additional
commands currently being implemented deal with the GE and ALC.
Automatic Level Control
The addition of an ALC unit is essential to making the hearing aid a successful device. In
applications that record sound, it is often desirable to keep the recorded sound at a constant level.
For example, the signal may vary depending on how loud the user speaks, how close the user is
to the speaker, or how many people are in the room. This can result in a signal that is difficult to
listen to when played back. The purpose of the ALC is to keep an approximately constant output
volume irrespective of the input signal level. The ALC controls the gain of the system
adaptively. The gain is determined from a reference (RMS) level related to a comfortable
listening level. The ALC adjusts the signal gain in order to achieve a target level and thereby
prevent clipping and suppress any discontinuities. If a signal is below the target level, the ALC
will cause the gain to be increased until the target level is reached. If the signal is above the
target level, the ALC will decrease the gain until the target level is reached.
Three key terms are related to automatic level control: decay time, attack time, and noise
gate. Decay time is a measure of the rate at which the gain can be increased by the ALC. It
determines how rapidly the ALC will make adjustments in response to a fall in the signal level.
Attack time is a measure of the rate at which the gain can be decreased by the ALC. This
reduces the possibility of clipping, as the signal is quickly shifted away from the clipping level.
The attack time should be set in conjunction with the decay time. The noise gate is the minimum
level an input must reach to be accepted as a signal. This prevents an extremely low signal from
being rapidly amplified by the ALC. Such a signal could come from a person in the room who is not
talking to the user.
Figure 3 displays a block diagram of the ALC. It first receives an input signal. An
absolute value function is then applied to convert the negative values to positive values. This
modified signal is then passed through a low pass FIR filter. The filter is crucial in the ALC as it
helps prevent distortion. Without the filter, the ALC’s audio would be distorted when the
incoming signal is multiplied by high frequency components from the control (absolute) signal.
The low pass filter cuts off any control frequencies greater than 10 Hz. Frequencies within the
10 Hz range will not be heard by the user; larger control frequencies, however, would modulate the input
signal and distort what the user is hearing. The RMS signal is used as the target value and is
combined with the low pass filtered absolute signal. This process results in a signal (K), which
is multiplied by the input signal.
Figure 3: ALC Block Diagram
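The control flow of Figure 3 can be sketched in a few lines of C++. This is not the project's implementation; the averaging window length, noise-gate threshold, and maximum gain below are illustrative assumptions, the attack/decay smoothing of K is omitted for brevity, and a simple moving average stands in for the low pass FIR filter.

#include <cmath>
#include <cstddef>
#include <vector>

// Process one buffer of samples; targetRms is the comfortable listening level.
std::vector<double> alc(const std::vector<double>& in, double targetRms)
{
    const std::size_t N = 4410;              // ~100 ms averaging window at 44.1 kHz (assumed)
    std::vector<double> window(N, 0.0);
    std::vector<double> out(in.size());
    double sum = 0.0;
    std::size_t idx = 0;

    for (std::size_t n = 0; n < in.size(); ++n)
    {
        // Rectify the input to form the control signal, then low-pass it.
        double rect = std::fabs(in[n]);
        sum += rect - window[idx];
        window[idx] = rect;
        idx = (idx + 1) % N;
        double level = sum / N;

        // Noise gate: do not boost when the level is essentially silence.
        double k = 1.0;
        if (level > 1e-4)
            k = targetRms / level;           // gain that pushes the level toward the target
        if (k > 20.0) k = 20.0;              // limit the maximum boost

        out[n] = k * in[n];                  // scale the input by the control signal K
    }
    return out;
}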
Equalizer
In an effort to make the software defined hearing aid more effective, a frequency equalizer
is essential. This equalizer may be preset to an optimum frequency-versus-gain response or made
adjustable by the user. A GE is an audio device that allows the user to see graphically and
control individually the gain of a number of different frequency bands. In other words, it allows
one to increase or decrease the gain of a fixed set of frequency ranges in an audio system. This
allows the user to amplify the specific frequency range in which he or she is having difficulty
hearing. In order to implement the equalizer, one must understand low pass, high pass and band
pass filters, which attenuate high frequencies, low frequencies, or everything outside a band, respectively. The
required filters were implemented using Finite Impulse Response (FIR) filters, which have an
impulse response of finite duration. The software defined hearing aid has a 6-band equalizer,
which requires six Finite Impulse Response filters (refer to Figure 4 below). Six bands were
chosen because of the 11.025 kHz sampling rate at which the equalizer program runs.
This results in a Nyquist frequency of approximately 5.5 kHz, making it impractical to provide bands
beyond that point. Each band consists of a low frequency, a high frequency, and a center
frequency, which is calculated by finding the square root of the product of the high and low
frequencies. The following chart shows the values for each band.
Band    fL (Hz)    fc (Hz)       fh (Hz)
1       50         70.71068      100
2       100        141.4214      200
3       200        282.8427      400
4       400        565.6854      800
5       800        1131.371      1600
6       1600       2262.742      3200
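As a quick check, the center frequencies in the chart follow directly from the geometric-mean relation fc = sqrt(fL × fh). A minimal, self-contained sketch that reproduces the table:

#include <cmath>
#include <cstdio>

int main()
{
    const double fL[6] = {50, 100, 200, 400, 800, 1600};
    const double fH[6] = {100, 200, 400, 800, 1600, 3200};
    for (int b = 0; b < 6; ++b)
    {
        double fc = std::sqrt(fL[b] * fH[b]);   // center frequency = sqrt(fL * fh)
        std::printf("Band %d: fL = %6.0f Hz, fc = %9.4f Hz, fh = %6.0f Hz\n",
                    b + 1, fL[b], fc, fH[b]);
    }
    return 0;
}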
The input audio signal passes through the six FIR filters, and its gain within each frequency range
is increased or decreased by a factor KN, where N references the filter number.
18
Figure 4: Equalizer Design (the input signal feeds six parallel FIR band filters; each filter output is multiplied by its gain K1 through K6, and the results are summed to form the equalizer output)
The outputs from the gain blocks are summed together to form the final audio
signal.
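A compact sketch of this parallel filter-bank structure follows. It is illustrative only: the FirFilter type and the per-band coefficient sets are placeholders standing in for the project's 161-tap filters.

#include <cstddef>
#include <utility>
#include <vector>

struct FirFilter {
    std::vector<double> coef;                 // tap coefficients for one band
    std::vector<double> delay;                // tapped delay line
    explicit FirFilter(std::vector<double> c)
        : coef(std::move(c)), delay(coef.size(), 0.0) {}

    double process(double x) {
        if (delay.empty()) return 0.0;
        // Shift the delay line and insert the new sample.
        for (std::size_t i = delay.size() - 1; i > 0; --i) delay[i] = delay[i - 1];
        delay[0] = x;
        double y = 0.0;
        for (std::size_t i = 0; i < coef.size(); ++i) y += coef[i] * delay[i];
        return y;
    }
};

// One output sample of the equalizer: the sum of the gain-scaled band outputs.
double equalizeSample(std::vector<FirFilter>& bands, const std::vector<double>& K, double x)
{
    double sum = 0.0;
    for (std::size_t b = 0; b < bands.size(); ++b)
        sum += K[b] * bands[b].process(x);    // KN scales band N's output
    return sum;
}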
Due to the increase or decrease of the gain within the six different
frequency ranges, some of the ripples will overlap each other, as shown in Figure 5. The
equalizer takes this into consideration and is programmed to set a maximum defined ripple.
Figure 5: Ripple Effect (overlapping band responses and the maximum defined ripple)
FIR Filter
An FIR filter, otherwise known as a finite impulse response filter, is the backbone in both
the ALC and equalizer, and is also the most difficult to implement. Before programming an FIR
filter in C, one must understand exactly how an FIR filter works. Figure 6 gives a signal flow
graph for a general causal FIR filter, also known as a transversal filter or a tapped delay line.
Figure 6: FIR Filter (the input x(n) passes through a chain of unit delays; each tap is weighted by a coefficient K0 through K160 and the weighted taps are summed)
The
input signal, x(n), gets passed through multiple delays, otherwise known as taps. The impulse
response is obtained at the output of each tap when the input signal is the impulse signal. At the
third tap, for example, the delayed impulse would be δ = [0, 0, 1, 0, 0, …]. In other words, the impulse response
consists of the tap coefficients, prepended and appended by zeros. The final output of the filter is
the summation of all the impulse responses from each tap. The impulse response becomes zero
after the final tap, therefore, a tapped delay line can only implement finite-duration impulse
responses in the sense that the non-zero portion of the impulse response must be finite. This is
what is meant by the term FIR. Also, in the common case, the impulse response is finite because
there is no feedback in the FIR. A lack of feedback guarantees that the impulse response will be
finite.
After trial and error, a length of 161 taps was determined to be best for the hearing aid. This
particular number provided the best functionality, while still being small enough to allow the
program to run in real time. The coefficients for the filter were determined using a program
written in GW Basic. The program took the characteristics of the filter as input, such as the
number of taps and the type of filter to be implemented, and output the correct coefficients.
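The GW Basic coefficient program is not reproduced in this report. For illustration, the following self-contained C++ sketch computes a 161-tap low pass filter with the standard windowed-sinc method and applies it as a tapped delay line; the cutoff frequency and sample rate here are example values, not the project's.

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979323846;

// Windowed-sinc low pass design: sinc prototype shaped by a Hamming window.
std::vector<double> lowpassCoefficients(int taps, double cutoffHz, double sampleRateHz)
{
    std::vector<double> h(taps);
    const double fc = cutoffHz / sampleRateHz;        // normalized cutoff
    const int mid = (taps - 1) / 2;
    double sum = 0.0;
    for (int i = 0; i < taps; ++i)
    {
        int m = i - mid;
        double s = (m == 0) ? 2.0 * fc
                            : std::sin(2.0 * PI * fc * m) / (PI * m);
        double w = 0.54 - 0.46 * std::cos(2.0 * PI * i / (taps - 1));   // Hamming window
        h[i] = s * w;
        sum += h[i];
    }
    for (int i = 0; i < taps; ++i) h[i] /= sum;        // normalize to unity DC gain
    return h;
}

// Direct-form FIR: shift the delay line, insert the new sample, sum of products.
double fir(const std::vector<double>& h, std::vector<double>& delay, double x)
{
    for (std::size_t i = delay.size() - 1; i > 0; --i) delay[i] = delay[i - 1];
    delay[0] = x;
    double y = 0.0;
    for (std::size_t i = 0; i < h.size(); ++i) y += h[i] * delay[i];
    return y;
}

int main()
{
    std::vector<double> h = lowpassCoefficients(161, 1000.0, 11025.0);
    std::vector<double> delay(h.size(), 0.0);
    std::printf("impulse response first output: %f\n", fir(h, delay, 1.0));
    return 0;
}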
Real Time Execution
The most imperative aspect of this project is that it must run in real time. While
communicating with others, the user should not be forced to wait for any processing the hearing
aid needs to complete. It should appear to the user as though the sound he or she is listening to is
coming directly from the source. The many components needed in the hearing aid system make
this a difficult task, and it introduces the additional task of simplifying the system. The faster the
processor, the more complex the system can become.
The aspect of the system that requires the most attention from the CPU is the FIR filter
section in the equalizer. The Samsung s3c2410 SoC @ 266 MHz processor may have difficulty
running six separate FIR filters in real time due to the high number of calculations. To cut
down on execution time significantly, the six filters could be composited into one, due to
the fact that the tap length and sample rate are the same for each. The system can be
thought of as one large FIR filter of 161 by 6 coefficients, totaling 966. To adjust individual
band sections requires a simple multiplication and then a summation of the coefficients.
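A minimal sketch of this compositing step is shown below (the array names are illustrative, not the project's): because all six filters share the same delay line and tap length, the per-band gains can be folded into a single 161-tap coefficient set before filtering.

const int TAPS = 161;
const int BANDS = 6;

// bandCoef[b][i] holds band b's tap coefficients; K[b] is band b's gain.
void composite(const double bandCoef[BANDS][TAPS], const double K[BANDS],
               double compositeCoef[TAPS])
{
    for (int i = 0; i < TAPS; ++i)
    {
        double c = 0.0;
        for (int b = 0; b < BANDS; ++b)
            c += K[b] * bandCoef[b][i];   // weight each band's tap by its gain
        compositeCoef[i] = c;             // only one multiply per tap remains at run time
    }
}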
With that technique implemented, the system can be further simplified by reducing
the number of coefficients. Because the filter has a linear phase response, the coefficients are
mirror images of each other: the first 80 values and the last 80 values are symmetrical about the 81st
value. This symmetry allows the 161 coefficients to be reduced to 81, a technique known
as the Folded Computation Method. Doing a partial sum and using half the coefficients prior
to the sum of products is extremely effective in bringing the system as close as possible to
real time execution. With this simplification in effect, the system runs as though there are
486 total coefficients instead of 966.
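As a small worked example of the Folded Computation Method (using a 5-tap symmetric filter rather than the project's 161 taps), pairing the samples that share a coefficient gives the same result as the direct form with roughly half the multiplications:

#include <cstdio>

int main()
{
    const double h[5] = {0.1, 0.2, 0.4, 0.2, 0.1};   // symmetric (linear-phase) taps
    const double x[5] = {1.0, 2.0, 3.0, 4.0, 5.0};   // current delay-line contents

    // Direct form: five multiplications.
    double direct = 0.0;
    for (int i = 0; i < 5; ++i) direct += h[i] * x[i];

    // Folded form: add the samples that share a coefficient, then three multiplications.
    double folded = h[0] * (x[0] + x[4]) + h[1] * (x[1] + x[3]) + h[2] * x[2];

    std::printf("direct = %f, folded = %f\n", direct, folded);   // both print 3.000000
    return 0;
}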
Program Design
The program, written in C++, uses the FIR algorithm to implement the equalizer.
The flowchart is shown in Figure 7.
First, the program prompts the user to input the gain for each of the six frequency bands; these values are stored in the corresponding variable for each band. The coefficient file is then read, and each coefficient is stored in an array called COEF_Array. This array contains 161 coefficients because the filter has 161 taps. The program then reads the input .wav file and counts the number of samples in the file so that it can determine the range for each frequency band; once the samples are counted, they are divided into six different ranges. Next, the program runs a while loop in which it continues to read the input .wav file. Inside this loop, an input signal value is read, passed to the FIR function, multiplied by the gain for its range, and written to an output file.

The FIR function starts by shifting the data array by one position so the new sample can come in. It cuts down the number of coefficients from 161 to 81 taps by adding the symmetrical values. Once the FIR is simplified to 81 taps, the output is the sum of products of the values from the coefficient array multiplied by the values from the data array.

Figure 7: Flowchart Diagram
Test Programs
A Gaussian noise generator, a burst tone generator with pre and post dead bands, and a sine sweep
function were implemented in C++ and used to test the GE and the ALC. The code for these
generators can be found in Appendix F. Gaussian noise is a statistical noise that has the
probability density function of the normal distribution; the values that the noise can take on are
Gaussian-distributed. A Gaussian noise generator is also a white noise generator covering all audio
frequencies. Figure 8 shows the output signal from the Gaussian noise generator.
Figure 8: Gaussian Noise
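The project's generator code is not reproduced here; the following sketch shows one common way to produce Gaussian noise samples, using the Box-Muller transform (the sample count and scaling are example values).

#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main()
{
    const double PI = 3.14159265358979323846;
    std::srand((unsigned)std::time(0));
    const int samples = 8000;                             // one second at 8 kHz (example)
    for (int n = 0; n < samples; ++n)
    {
        // Box-Muller transform: two uniform variates -> one standard normal variate.
        double u1 = (std::rand() + 1.0) / (RAND_MAX + 2.0);   // avoid log(0)
        double u2 = (std::rand() + 1.0) / (RAND_MAX + 2.0);
        double g = std::sqrt(-2.0 * std::log(u1)) * std::cos(2.0 * PI * u2);
        if (g > 4.0) g = 4.0;                             // clamp rare outliers
        if (g < -4.0) g = -4.0;
        short out = (short)(g * 0.25 * 32767.0);          // scale into 16-bit range
        std::printf("%d\n", out);
    }
    return 0;
}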
The burst monotone generator synthesizes a 6-second tone burst, written to a file, with pre and post
dead bands. The user is prompted to input the frequency, amplitude, and sample rate.
The following figure displays the output from the C++ program when the user inputs an
amplitude of 32,767, a sample rate of 8000 Hz, and a frequency of 1 kHz.
Figure 9: Sine Burst
The sine sweep function gradually increases the frequency of a sine wave. This program allows
the user to specify the duration of the sweep, and the frequency will increase for the specified
time. Figure 10 depicts the output of the sweep generator.
Figure 10: Sine Sweep Waveform
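A minimal sketch of such a sweep generator is shown below; it produces a linear chirp by accumulating phase, with the start frequency, end frequency, duration, and sample rate chosen as example values rather than the project's.

#include <cmath>
#include <cstdio>

int main()
{
    const double PI = 3.14159265358979323846;
    const double sampleRate = 8000.0;     // Hz (example)
    const double duration   = 5.0;        // seconds (example)
    const double fStart     = 100.0;      // Hz
    const double fEnd       = 2000.0;     // Hz
    const int    total      = (int)(sampleRate * duration);

    double phase = 0.0;
    for (int n = 0; n < total; ++n)
    {
        // The instantaneous frequency rises linearly from fStart to fEnd.
        double f = fStart + (fEnd - fStart) * n / total;
        phase += 2.0 * PI * f / sampleRate;                // accumulate phase
        short out = (short)(32767.0 * std::sin(phase));
        std::printf("%d\n", out);
    }
    return 0;
}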
The sine sweep waveform was used as the input .wav file for the equalizer. Figure 11 shows
two different output waveforms with different gains. As one can see, each
frequency band can be amplified to the user’s liking. The user can modify the gains of the
signal using the equalizer so that he or she can hear best.
Figure 11: Equalizer outputs
Design Tools
The design tools used are all software oriented. The majority of the work was done using
Microsoft Visio and Dev C++ for writing the C++ code. Microsoft Visio was used for pre-design
planning, which included block diagrams and UML diagrams. Cool Edit 2000 was used for
viewing the waveforms produced by the C++ code. GW Basic was used for the calculation of
coefficients. All programs were run on the Windows XP operating system.
Project Status / Discussion
The major objective of this project was to develop an improved hearing aid that assists
not only with hearing but with memory problems as well. The principal tasks focused on this year
were: 1) to create in software the audio filters needed by the hearing aid system, including an
adjustable filter that can correct for the hearing loss over frequency of individual hearing aid
users; and 2) to create in software an ALC needed by the hearing aid system. Task 1 was
successfully completed. To date, the software for the ALC is still being implemented. During Senior
Project I, the architecture of the hearing aid system was studied, the design of the GE and ALC
systems was investigated, and several C++ programs that were used to test the GE and ALC were
written. These programs were written to gain experience with real time programming and to
learn how to apply digital signal processing in C++.
In Senior Project II, the Finite Impulse Response (FIR) filters were developed, as well as
the needed sample rate converters. A lot of experience was gained by developing and
experimenting with FIR Level One Band Pass, FIR Level One High Pass, FIR Tap Length Low
and Band Pass filters. An FIR level two was developed, which involves the classic shift register
method. Then, the FIR level three was developed, which supports the Folded Computation
Method. These FIRs were used to implement the graphic equalizer and the ALC. The
equalizer was tested with these components, and functioned to specification. The
application of the finished design will ultimately prove the effectiveness of the hearing aid.
Currently, the equalizer program takes in the gain for each band before execution. The goal
of the equalizer that still needs to be met involves a GUI interface, which allows the bands
to be changed as the program is running in real time. The GUI interface, as well as the
completed ALC, are currently in development. A Gantt chart for the project is shown in
Appendix B.
Conclusion
This project is very significant and can have a large positive impact on the world. Upon
completion, the software defined hearing aid will assist those with hearing impairments and short
term memory loss. With a digital sound recorder built in to the hearing aid, this device will be
highly versatile. The finished product will be very user-friendly, making it easy for the user to control
whatever he or she is hearing through the graphic equalizer and the automatic level
control, which is still in development.
References
[1] Laurance, Jeremy. "Scientists discover way to reverse loss of memory". The
Independent. <http://www.wireheading.com/dbs/memory.html>.
[2] Mayo Clinic Staff, "Hearing Loss". MayoClinic.com.
<http://www.mayoclinic.com/health/hearing-loss/DS00172>.
[3] Warner, Jennifer. "Americans in 70s Face Mild Memory Loss". WebMD.
<http://www.webmd.com/brain/news/20080317/americans-in-70s-face-mildmemoryloss>.
[4] Bogardus Jr., MD, Sidney T. "Screening and Management of Adult Hearing Loss in
Primary Care". Journal of the American Medical Association.
<http://jama.ama-assn.org/cgi/content/full/289/15/1986>.
[5] AAA, "Hearing Aid Facts". American Academy of Audiology.
<http://www.audiology.org/aboutaudiology/consumered/guides/hearingaids.htm>.
[6] WHO, "Deafness and Hearing Impairment". World Health Organization.
<http://www.who.int/mediacentre/factsheets/fs300/en/>.
Appendix A - Biography
Richard Knowles
Richard Knowles (May 2010) is a senior Computer Engineer
at The College of New Jersey. He has a minor in Computer Science,
and has been a student at TCNJ since 2005. He was born in White
Plains, NY on November 15, 1986 and has lived in New Jersey all his
life. His interests include Information Technology and Computer
Software Design. He held a Systems Administrator internship at OSS
Nokalva in the Summer of 2009. He also worked on a research
project at Rutgers University with the Research In Science and
Engineering program for two years. The research project focuses on memory, attention, and eye
movements during closed-captioned videos. Richard plans to attend graduate school at Rutgers
University and achieve a Master’s Degree in Software Engineering.
David Vaz
David Vaz (May 2010) is a senior Computer Engineer at The
College of New Jersey. He has been attending the college since the
Fall of 2005. Born in Freehold, NJ on October 11, 1986, he has lived
in New Jersey all of his life. He is interested in Computer
Networking, System Administration and Computer
Software/Hardware design. He worked as an intern for OSS Nokalva
in New Brunswick, NJ as System Administrator. David plans to
work in the field of his interest immediately upon graduating.
Appendix B – Gantt Chart
Appendix C – Engineering Standards and Realistic Constraints
There are multiple standards that were followed in order to complete the development
platform for this design project. Although the majority of the standards were software related,
there also were some hardware standards. Since the project is not confined to a single
platform, there are a variety of standards that could apply to the project design. On the development
platform, the standards that applied were the operating system, which is Linux OS,
and the coding standard, which is C++. The communication between devices followed the
Bluetooth standard ‐ IEEE 802.15 WPAN. Any other devices that are used to implement the
design could follow a different variety of software and/or hardware standards.
The realistic constraints that apply to this design project relate to the size and
functionality of the devices needed. The DSP platform should not be so large that it is not portable.
A desktop computer would not be satisfactory, as it is too large. Therefore, a laptop was used as
the development platform, and smaller devices are projected to be the final platforms, such as a
PDA device, which can be small enough to be placed in the user’s pocket or on a belt.
Additionally, the in-ear device must be sufficiently loud that one with hearing problems is able
to pick up sounds that are not normally audible, but should not be so overly loud as to cause pain or
additional damage to the ear.
Appendix D – Three Laws of Marketing
What’s in it for you?
• The software defined hearing aid provides two features today’s hearing aids are incapable of:
  • The ability to alter the incoming sound using highly sophisticated digital signal processing
  • The ability to record sound and play it back
• Not only does this help individuals with hearing problems, but also individuals with short-term memory loss

Why should you believe us?
• The software used in the hearing aid uses C++ and DSP
  • These tools are more than capable of performing the tasks needed to process the incoming signals
• Bluetooth technology along with an Openmoko Neo1973 PDA is used as the hardware
  • The Openmoko PDA has enough power to perform the DSP necessary, and the Jabra BT8010 Bluetooth headset is capable of syncing with the PDA and outputting the sound loud enough for the user’s needs
• The system has already been proven to work; enhancements to the DSP that add more functionality are currently under construction

Why should you care?
• 17% of American adults have a hearing loss
  • 36 million Americans
• About 2 or 3 of every 1,000 children are born with a hearing loss
• One out of 5 hearing impaired people wears hearing aids
• 15% (26 million) of Americans between the ages of 20 and 69 lose hearing
• Hearing loss by age group:
  • 45-64 years old: 18%
  • 65-74 years old: 30%
  • 75 years or older: 47%
Appendix E - Software Defined Hearing Aid Milestone Evaluation
Number  Milestone                          Rating          Score   Importance  Product
1       Problem Definition                 Met             100%    14%         14%
2       Research                           Met             100%    10%         10%
3       Burst Monotone Generator           Met             100%    9%          9%
4       Sine Sweep Generator               Met             100%    9%          9%
5       FIR Level One Development          Met             100%    14%         14%
6       Experimenting with FIR Taps        Met             100%    5%          5%
7       FIR Level 2-3 Development          Met             100%    14%         14%
8       Composited Coefficient Equalizer   Met             100%    10%         10%
9       Construct ALC                      Partially Met   80%     10%         8%
10      Add to Current System/Testing      Partially Met   50%     5%          2.5%
Total                                                              100%        95.5%
Appendix F – Program Code
// Author: Richard Knowles & David Vaz
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <fstream>
using namespace std;
FILE *fin;
FILE *fout;
double DATA_Array[161];
double COEF_Array[161];
double DATA_Array2[81];
double COEF_Array2[81];
double gain;
//FIR filter function
double fir(double FDATA_IN)
{
int i;
double Fsum;
//Shift array
for (i=0; i < 160; i++)
{
DATA_Array[160-i] = DATA_Array[159-i];
}
//Bring in new data
DATA_Array[0] = FDATA_IN;
int Index1=0;
int Index2=160;
//fold the data array: add samples that share a symmetric coefficient
for(i =0; i < 81; i++)
{
DATA_Array2[i] = DATA_Array[Index1] + DATA_Array[Index2];
Index1++;
Index2--;
}
DATA_Array2[80] = DATA_Array[80];
int COEFIndex1=0;
int COEFIndex2=160;
for(i =0; i < 81; i++)
{
COEF_Array2[i] = COEF_Array[COEFIndex1] +
COEF_Array[COEFIndex2];
COEFIndex1++;
COEFIndex2--;
}
COEF_Array2[80] = COEF_Array[80];
Fsum=0;
for(i=0; i <81; i++)
{
Fsum += COEF_Array2[i]*DATA_Array2[i]; //accumulate the sum of products
}
return Fsum;
}
int main(void)
{
cout << "Enter
double gain1;
cin >> gain1;
cout << "Enter
double gain2;
cin >> gain2;
cout << "Enter
double gain3;
cin >> gain3;
cout << "Enter
double gain4;
cin >> gain4;
cout << "Enter
double gain5;
cin >> gain5;
cout << "Enter
double gain6;
cin >> gain6;
gain of first range: ";
gain of second range: ";
gain of third range: ";
gain of fourth range: ";
gain of fifth range: ";
gain of last (6th) range: ";
//read coefficients
double coefficient = 0;
int counter=0;
ifstream myfile ("COMPOSITE161.COF");
if (myfile.is_open())
{
while (! myfile.eof() )
{
myfile >> COEF_Array[counter];
counter++;
}
myfile.close();
}
else cout << "Unable to open file";
// open binary file for reading data
fin = fopen("sine_Sweep_stream.wav", "rb");
// open binary file for writing data
fout = fopen("161to81sine_Sweep_stream2.wav", "wb");
int counting = 0;
int countsamples = 0;
//count number of samples
while(!feof(fin))
{
short sample;
fread(&sample, sizeof(short), 1, fin);
countsamples++;
}
//close and reopen
fclose(fin);
fin = fopen("sine_Sweep_stream.wav", "rb");
//ranges for gains
int increment = countsamples/6;
int range1 = increment;
int range2 = increment*2;
int range3 = increment*3;
int range4 = increment*4;
int range5 = increment*5;
while(!feof(fin))
{
//for gains
if (counting<range1)
{
gain=gain1;
}
else if (counting<range2)
{
gain=gain2;
}
else if (counting<range3)
{
gain=gain3;
}
else if (counting<range4)
{
gain=gain4;
}
else if (counting<range5)
{
gain=gain5;
}
else
{
gain=gain6;
}
//read input, call FIR function, output
short input_int;
fread(&input_int, sizeof(short), 1, fin);
//double Finput = fabs(((double)(input_int))/32767.); //abs, for F10HZCOF
double Finput = ((double)(input_int))/32767.;
double Fsum2 = fir(Finput);
short Iout;
if (Fsum2 > 1) Fsum2=1;
if (Fsum2 < -1) Fsum2 = -1;
Iout = (int)(gain*Fsum2*32767.);
fwrite(&Iout,sizeof(short),1,fout);
//cout << input_int << "  Fsum= " << Fsum2 << "  counting " << counting << "  Iout: " << Iout << endl;
counting++;
}
fclose(fin);
fclose(fout);
cout << increment <<endl;
cout << "completed" <<endl;
int yes;
cin >> yes;
return 0;
}//end main