Spatialisation effects
Psychoacoustics
Psychoacoustics is the study of the structure of the ear and the way sound is received,
transmitted, and understood by the brain. Fortunately(!) time dictates that we can't
look at the actual mechanics of hearing. Over the next few lectures, however, we will
be thinking more about the psychological aspects of hearing, and how we can fool the
ear by changing the tonal qualities of a sound.
Aural perception
Auditory scene analysis is the process by which we perceive the distance, direction,
loudness, pitch, and tone of many individual sounds simultaneously.
Analysing auditory scenes is a complex human ability. Our environment bombards us
with constant sound. Even the smallest vibrations and echoes help us to identify our
surrounding area. Sounds in a small area produce fewer echoes than sounds in a large
area.
The physical properties of an object can also be determined by sounds the object
makes. When a ball is dropped onto a soft surface, it makes a different sound than
when it is dropped onto a hard surface. As you walk across the floor you can hear the
change in the sound of your footsteps when you cross from a carpeted area onto a
tiled surface.
The recording of sounds has progressed from simple to more complex levels in an
attempt to replicate the way humans perceive sound. Early monophonic recordings
progressed to stereo. Newer technologies, such as 3D sound and other advances in the
digital era, are refining the process further. These recordings, however, are still crude
imitations of the process by which the human ear receives and understands sound.
Today, we're going to look at the most fundamental of effects - loudness.
Perception of intensity
Intensity is related to the energy in a sound wave. In general, as the intensity (a
physical quantity associated with the wave itself) increases, we perceive an increase
in the loudness of a sound, but there is not a simple one-to-one correspondence
between intensity and loudness. Loudness is also dependent on other factors such as
the frequency content of the sound, its duration and the background sounds that are
present.
We have already discussed the sound pressure level (SPL) scale, which is measured in
decibels (remember 1st year Introduction to Audio?), but it is probably useful to recap
a little.
The decibel scale takes as its reference 0dB – a sound at the very threshold of human
hearing – and all other sounds are classified according to this. The higher the number,
the louder the sound. The scale is logarithmic in nature, since it takes a tenfold
increase in intensity to generate a sound that is perceived to be twice as loud. The
ear’s sensitivity to changes in intensity is also dependent on the strength of the signal
itself. What this means is that although the absolute pressure difference between, say,
90 and 91 dB (about 3859 times the 20 µPa reference pressure) is much greater than that
between 30 and 31 dB (about 3.86 times the reference), the perceived change in loudness
is about the same.
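To get a feel for these numbers, here is a minimal Python sketch using the standard reference values (10⁻¹² W/m² for intensity, 20 µPa for pressure) to convert dB SPL to absolute quantities. It shows how the same 1 dB step is a vastly different absolute change at different levels:

```python
import math

I0 = 1e-12   # reference intensity in W/m^2 (threshold of hearing)
P0 = 20e-6   # reference pressure in Pa

def intensity_from_db(spl_db):
    """Absolute intensity (W/m^2) for a given level in dB SPL."""
    return I0 * 10 ** (spl_db / 10)

def pressure_from_db(spl_db):
    """Absolute pressure (Pa) for a given level in dB SPL."""
    return P0 * 10 ** (spl_db / 20)

# The same 1 dB step is a very different absolute change at different
# levels, yet it is perceived as a similar change in loudness:
print(pressure_from_db(91) - pressure_from_db(90))  # about 0.077 Pa
print(pressure_from_db(31) - pressure_from_db(30))  # about 0.000077 Pa
```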
The ear is also more sensitive to certain frequency regions than to others. The most
sensitive region is about 2700 – 3000 Hz, with sensitivity falling off gradually on
either side. What this means is that a sine wave at 3000 Hz with a certain intensity
will sound louder than a similarly intense sine wave at 200 Hz.
In order to compensate for this, a series of equal-loudness contours, known as phon
contours, was introduced. These take into account the frequency-dependent
sensitivity of human hearing. Figure 1 shows a set of phon contours.
Figure 1 – Some phon contours
The ear can be fooled into perceiving a constant loudness as the sound decreases in
intensity provided the sound is also heard as moving away from the listener. This
effect can be enhanced by applying artificial reverberation - something we'll talk
about in more detail in a couple of lectures' time.
So what does all this tell us?
Well, at its most basic, we can simulate the effect of a sound source being positioned
at a distance from the listener by reducing the loudness. Intuitively, you should know
that when sounds are far away they aren't as loud.
In fact, we can go one step further and give a detailed description of how loud/quiet
something appears according to an equation known as the 'inverse-square law'. This
essentially states that if you double the distance between a sound source and yourself,
you quarter the intensity of the sound. More formally:
I ∝ 1/d²

where I is the intensity of the sound and d is the distance from the source.
Now, I don't necessarily expect you to call on this equation every time that you alter
the loudness of a sound, but do be aware that as sounds get further and further from
you, the loudness levels will drop very quickly.
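If you prefer to work in decibels directly, the inverse-square law translates into a drop of about 6dB per doubling of distance. Here is a small Python sketch, assuming an idealised point source in free space (no reflections or absorption):

```python
import math

def level_change_db(d_ref, d_new):
    """Change in level (dB) when moving from d_ref to d_new metres away
    from an idealised point source: intensity falls off as 1/d^2."""
    return -20 * math.log10(d_new / d_ref)

print(level_change_db(1, 2))    # about -6 dB per doubling of distance
print(level_change_db(1, 10))   # -20 dB at ten times the distance
```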
But that's not the whole story, surely!
If you try and simulate the effects of distance on a sound file simply by reducing the
volume, you will hit against a problem: rather than sounding distant, the resulting file
will simply sound like a quieter version of the original. What's going on?
Well, the short answer is that something else is at work. The more involved answer is
that the principal reason sound doesn't appear to be any further away when we reduce
the volume is that we haven't taken into account any of the tonal differences caused
by atmospheric absorption.
So what is this and why do we need to take it into account?
There are a number of different factors to take into account when discussing
atmospheric absorption: it depends on the frequency of the sound, the relative
humidity, the temperature and the atmospheric pressure. In addition, a small part of a
sound wave is lost to the air or other media through various physical processes,
chiefly the conversion of the coherent molecular motion of the sound wave into
incoherent molecular motion - that is, heat - in the air or other absorptive material.
However, as a general rule of thumb, you can consider absorption as follows: sound
consists of vibrating molecules, and for a given molecule, low frequency vibration
requires less energy than high frequency vibration. It therefore seems reasonable that
for a given sound wave, which will consist of various high, mid and low frequency
components at a given amplitude, it is the high frequency (high energy) components
that suffer first as the wave travels through the air. Thus the atmosphere has the
effect of a lowpass filter as sound waves travel through it - sounds that are further
away tend to sound duller.
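Putting the two ideas together, here is a rough Python sketch of a distance cue: inverse-square gain plus a one-pole lowpass filter standing in for atmospheric absorption. Note that the mapping from distance to cutoff frequency here is an arbitrary illustration, not a physical model of absorption:

```python
import math

def simulate_distance(samples, distance_m, sample_rate=44100):
    """Crude distance effect: attenuate by the inverse-square law
    (relative to 1 m) and dull the sound with a one-pole lowpass.
    The distance-to-cutoff mapping is a made-up illustration."""
    gain = 1.0 / (distance_m ** 2)             # inverse-square attenuation
    cutoff = max(500.0, 16000.0 / distance_m)  # assumed: further = duller
    a = math.exp(-2.0 * math.pi * cutoff / sample_rate)  # one-pole coefficient
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * (gain * x) + a * y     # simple recursive lowpass
        out.append(y)
    return out
```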
Filters… oh yeah! I remember something about those…
Filters are devices that boost or attenuate (cut) certain regions of the frequency
spectrum.
There are essentially four main types of filter in common musical use, although you
will probably become more familiar with variations on these from working with the
processing tools in Soundforge. These are:
Lowpass filters, which let through unchanged all frequencies below a certain point
and attenuate all the rest (see Figure 2). The diagram below is known as an amplitude-versus-frequency response curve, which is more commonly referred to simply as a
frequency response curve. The perfect frequency response (from an accuracy of
reproduction viewpoint) is the flat response – a horizontal line at the 0dB point across
the whole of the frequency spectrum. This indicates that the signal is passed through
the device without any boost or attenuation. In the real world, however, devices never
have a perfectly flat response curve. Indeed, when it comes to sound manipulation
devices like filters, a flat response curve is probably not what you’re after at all!
In the frequency response curve of figure 2, the low frequencies are passed through
unaltered, then, when we get to a certain point (known as the cutoff frequency) the
filter kicks into action and attenuates the signal.
Figure 2 – A lowpass filter.
The cutoff frequency is the point on the frequency scale where the filter cuts the signal
to approximately 0.707 (that is, 1/√2) of its maximum value. Why choose this value?
Well, the power of the signal is proportional to the square of the amplitude, and
0.707² ≈ 0.5. Thus, the cutoff frequency is also called the half-power point.
How quickly a filter boosts or attenuates a signal is measured in decibels of boost or
attenuation per octave (dB/octave). For example, a 6dB/octave slope on a lowpass
filter gives a smooth attenuation (or rolloff) to the signal, while a 90dB/octave slope
gives a sharp cutoff.
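For a first-order (6dB/octave) lowpass filter, the magnitude response has a simple closed form, and you can check both the half-power point and the rolloff numerically. A short sketch, using the ideal first-order response as an example:

```python
import math

def first_order_lowpass_gain(f, fc):
    """Magnitude response of an ideal first-order lowpass filter with
    cutoff frequency fc: |H(f)| = 1 / sqrt(1 + (f/fc)^2)."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

fc = 1000.0
print(first_order_lowpass_gain(fc, fc))   # ~0.707: the half-power point

# Well above the cutoff, each octave costs roughly another 6 dB:
ratio = (first_order_lowpass_gain(16 * fc, fc) /
         first_order_lowpass_gain(8 * fc, fc))
print(20 * math.log10(ratio))             # ~ -6 dB per octave
```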
Highpass filters let through only frequencies above a certain point and attenuate all
the rest (see figure 3). The highpass filter can essentially be thought of as a lowpass
filter in reverse, and, like the lowpass filter, has a cutoff frequency and a slope
measured in dB/octave.
Figure 3 – A highpass filter.
Bandpass filters let through only frequencies within a certain range. As you can see
from the diagram below, a bandpass filter has 2 cutoff frequencies – a lower and
upper cutoff. The difference between these two frequencies is known as the
bandwidth of the filter. It is essentially a measure of ‘how much’ of the original sound
signal is let through unaltered. Frequencies that fall between these two frequencies
(i.e. those that lie above the half power point) are said to be in the filter’s passband.
Those that fall outside are said to lie in the filter’s stopband. In addition, the centre
frequency of the filter is defined as the point at which the amplitude of the signal is at
a maximum.
Figure 4 – A bandpass filter.
There is one other important parameter connected with this type of filter – its Q. The
Q of a filter is defined to be:
Q = centre frequency/bandwidth.
It is a measure of how ‘spread out’ the response curve is. For low Q values, the curve
is very broad. High Q values result in a very sharp, narrow curve that is focused
around a peak (resonant) frequency. If a high-Q filter is excited by a signal near its
centre frequency, the filter ‘rings’ – oscillating for some time after the input signal has
finished.
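Since Q is just the centre frequency divided by the bandwidth, it is easy to compute from the two half-power cutoffs. A quick sketch with illustrative figures (the centre frequency is approximated here as the midpoint of the two cutoffs):

```python
def q_factor(lower_cutoff_hz, upper_cutoff_hz):
    """Q of a bandpass filter: centre frequency over bandwidth.
    The centre is approximated as the midpoint of the cutoffs."""
    bandwidth = upper_cutoff_hz - lower_cutoff_hz
    centre = (lower_cutoff_hz + upper_cutoff_hz) / 2.0
    return centre / bandwidth

print(q_factor(900, 1100))   # Q = 5.0: narrow, resonant
print(q_factor(500, 1500))   # Q = 1.0: broad
```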
Bandreject filters, also known as notch filters, let through all frequencies except those
within a certain range. Again, they can be thought of as essentially bandpass filters in
reverse, and similarly have two cutoff frequencies and a bandwidth. Here, however,
the centre frequency is defined to be the frequency at which the amplitude of the
signal is at a minimum.
Figure 5 – A bandreject filter.
In addition, there are two other important types of filter which are similar to the
lowpass and highpass types. These are shelving filters (see Figures 6 and 7), which
can boost or cut all frequencies above or below a certain point. Their names are a bit
confusing, however, because a high shelving filter acts like a lowpass filter when it is
adjusted to cut high frequencies and a low shelving filter acts like a highpass filter
when it is adjusted to cut low frequencies.
Figure 6 – High shelving filter.
Figure 7 – Low shelving filter.
© Kenny McAlpine 2002