Learning
“Pay Attention!”
In order for people to pay attention effectively, they generally have to be able to do the following:
1. identify and focus on the important elements of the situation
2. maintain attention on those elements while ignoring competing information
3. retrieve memories related to what is being focused on
4. redirect attention to new information when appropriate
Memory and Learning
• Memory plays a key role here. When learning takes place, a neural
pathway is formed. The more that pathway is used, the more it
becomes strengthened.
• The stronger the pathway, the easier memory retrieval becomes.
• The brain uses two systems to retrieve information:
1. a fast system that notes where objects are located (the background), and
2. a slower one that discerns what the objects are (the foreground).
• Children who have dyslexia have a fast system that does not work fast
enough. The words that have been read do not shift to the background
while new ones are being read, causing them to blur.
• The same dysfunctional fast system applies to those with Attention
Deficit Disorder.
• Irrelevant information from the environment is not filtered out during
tasks that require attention.
• The result is that the sound of a pencil writing becomes as important as the test that is being written.
The brain can be tricked into
producing the same result with the
fast system of memory.
Stroop test
http://www.pbs.org/wgbh/nova/everest/exposure/
stroopnonshock.html
• People will often say the color of the letters rather than the word the letters spell.
• This is because their attention is divided between the color the word is printed in and the name of the color the word denotes. When the two conflict, accuracy fails.
Conditioning
• A nine-month-old baby lay in a hospital crib starving to death, weighing less than 12 pounds. The baby was so thin that his ribs stuck out and the skin hung from the bones of his arms. His eyes were wide and dull; he spent most of his time staring into space. Death seemed inevitable because each time he swallowed food, it would reach a certain point in its downward movement and then his muscles would contract in the opposite direction, causing him to throw up.
Conditioning: association is made
between two events by repeatedly having
them occur close together in time.
• Doctors found the exact location in the digestive tract where the reversal of the food movement took place.
• A wire that would carry an electric shock was attached to the
infant’s leg.
• Each time the food arrived at the “reversal spot”, a shock was
sent to the leg at one-second intervals until the vomiting was
over.
• By that time the food reversing had been thoroughly associated
with something very unpleasant for the infant.
• His brain decided to stop the reversal process in order to avoid
the shock.
• Soon afterward, the infant began to gain weight and he was
discharged, well and happy. Thus, it is possible to condition
mental or physical behavior by a process of association.
Social Influence
• However, conditioning can also be very subtle
and complex.
• A good example is the concept of handsomeness,
which is conditioned by each culture.
• Males in the Boloki tribe demonstrate their attractiveness and masculinity by chiseling their upper front teeth to V-shaped points.
• Apparently the women in the tribe like it, but it is
hard to imagine this technique attracting many
people in North America.
Four Types of Learning
1. The first type of learning involves unavoidable physical
association, such as the shock to the boy’s leg. This is
called classical conditioning.
2. The second involves learning caused by the actions we perform. For instance, we learn that pressing a finger very hard on the edge of a knife is not a good idea. This is called operant conditioning.
3. The third type of learning is the learning that results from
observing others. If someone dives into a black lagoon
and does not resurface, you know not to do that. This is
called social learning (because it results from viewing
other people).
4. The fourth type emphasizes thought processes in
learning. It is called the cognitive approach.
Classical Conditioning
• Classical conditioning was first demonstrated in the early 1900s by the physiologist Ivan Pavlov.
• Pavlov's original goal was to understand how the digestive system works. He wanted to discover how salivation and gastric juice aid in digestion.
• By today’s standards, the experiments were both
basic and simple, but such was not the case in the
early 1900s.
• Pavlov surgically separated the stomach from the
esophagus in dogs. This meant that: (1) food taken
by mouth would never reach the stomach and (2)
food could be put directly into the stomach without
having to travel through the mouth.
Importance of Association
• Pavlov was quick to note three strange things.
1. First, food put directly into the stomach did not generate all by
itself enough gastric juices for digestion. Thus, salivation at the
time of eating is critical to proper digestion.
2. Second, even though no food was placed in the dog's mouth, the animal would still salivate copiously just at the sight of the food.
3. But Pavlov’s third finding was the most surprising and important
one: the sight of the experimenter who fed the animal would
cause the dog to salivate even if that person was not carrying any
food. This meant that receiving food could be conditioned to
(associated with) the mere presence of the experimenter.
• Later, Pavlov rang a bell before feeding the dogs, and the dogs associated the ringing of the bell with food. They would salivate whenever the bell was rung, even if food wasn't provided.
Outline of Classical Conditioning
• Since Pavlov’s experiments were the first in the learning area, they
also are considered classical. This is how we get the term classical
conditioning.
• The following steps are involved in classical conditioning. You start with a reflexive or "natural" stimulus-response pair. The word stimulus refers to anything that causes some kind of reaction. That reaction is termed the response.
• Thus, since meat makes a dog salivate, meat is the stimulus (S) for
the response (R) of salivation. The behavior involved is completely
automatic; the animal salivates when food is put into its mouth.
Here is a diagram of this activity:
Receives food (S) → Salivates (R)
• So food is a stimulus (S) and salivation is a response (R) to that stimulus.
Note that no special conditions are needed for meat to cause salivation; it is natural and automatic. Hence Pavlov called the food an unconditioned stimulus (UCS) and salivation an unconditioned response (UCR), because they occur without any special conditions needed. Replacing the diagram above with more accurate terminology, this is what we get:
Receives food (UCS) → Salivates (UCR)
Since seeing an experimenter will not elicit salivation all by itself, some specific conditions are necessary – namely, the animal must associate the experimenter with food. When that association takes place over time, then "seeing the experimenter" becomes a conditioned stimulus (CS). In other words, the special condition of associating the experimenter with food has been met. A diagram of this process of association would look like this:
Sight of experimenter (CS) → Receives food (UCS) → Salivation (UCR)
Eventually the animal responds to the conditioned stimulus alone, by salivating, much as it did to the unconditioned stimulus of food. Salivation at the sight of the experimenter, since it is now triggered by a CS with no food present, becomes the conditioned response (CR) (even though it's the same type of salivation). The last step is:
Sight of experimenter (CS) → Salivation (CR)
Textbook example:
Here is a quick review to help set the terms in your brain. You hear someone
mention that he or she desperately wants a juicy dill pickle. Note that just
reading this is causing you to salivate. How is this possible? In the past:
Eating pickle (UCS) → Salivation (UCR)
Before you actually eat a pickle, you think or say to yourself “Pickle”.
Word pickle (CS) → Eating pickle (UCS) → Salivation (UCR)
Over time, the word pickle, which is only a sound and not an object, becomes
associated with a real pickle, which does cause salivation. So now we have:
Word pickle (CS) → Salivation (CR)
Classical Conditioning Comic
• http://www.psychotube.net/psychologycomic/ivan-pavlov-classical-conditioning/
Review of Terms
• http://educationportal.com/academy/lesson/classicalconditioning.html
• Example in “The Office”
• http://vimeo.com/5371237
John Watson and Emotional Conditioning
• Several years after Pavlov's early experiments, psychologist John Watson appeared on the scene. While he was working his way through school, one of his jobs was to take care of laboratory rats.
• Gradually the rats became Watson's pets and friends.
• One of his favorite pastimes was to teach them all kinds of tricks. The rats
were able to find their way through elaborate mazes he built, to solve
problems such as the need to dig through obstacles he had put in their
path, to act as construction workers in tunnels he started for them, and
so forth.
• Based on his observations, Watson eventually decided that what seemed
to be the rats’ complex behavior actually resulted from little more than a
series of stimuli and responses, rather than from some exotic concept
such as “intelligence”.
• Watson went even further to suggest that at the human level, “deep
emotions” are also just the result of association and learning.
• One of his most famous experiments involved trying to get a human to generalize the emotion of fear from one object to another; this, he thought, would demonstrate that emotions can be mechanically induced.
Controversial Experiment
• Watson’s work in this area concerned many
people because of the ethics involved in how he
dealt with a child.
• His research would never be allowed today.
• A woman who worked at the same clinic as
Watson would bring her child with her while she
was working. Unknown to the mother, Watson
started a series of conditioning experiments with
the child.
• This 11-month-old is now famous in psychology and is known as "Little Albert".
Fear
• Before describing what happened to Little Albert, we need some background information on fear.
• An unexpected noise makes anyone’s heart
race. We don’t have to learn to be startled. It
happens automatically. So, a sudden loud
noise is an unconditioned stimulus for the
unconditioned response of fear.
Watson put a white laboratory rat into the room with Albert.
Albert loved the furry creature and played with it. While Albert
played, Watson sneaked up behind him and made a loud, startling
noise. Albert fell forward, crying and burying his face in a
mattress on the floor. The next time he reached for the rat,
Watson repeated the crashing noises. Little Albert became
terrified of the rat. Here is the situation:
Loud sound (UCS) → Fear (UCR)
followed by the association phase:
Rat (CS) → Loud sound (UCS) → Fear (UCR)
which then becomes:
Rat (CS) → Fear (CR)
• Watson then went on to demonstrate what is
called stimulus generalization, which means that
a response can spread from one specific stimulus
like the white rat to other stimuli resembling the
original one in some way.
• To show this had occurred, Watson brought in a
white rabbit, which also frightened Albert.
• Albert even showed some concern about a fur
coat and a mild negative response to a Santa
Claus mask, objects somewhat similar to the
white rat.
Mom freaked out and stopped the
experiment
• Before the mother discovered these goings on and fled with Albert,
Watson had shown two things:
(1) conditioning of emotions to neutral objects is possible and
(2) a conditioned emotion can generalize to other objects that have similar
characteristics.
• All of this is helpful to know, but there is a problem: no one ever located "Big" Albert after Watson's experiment, and because no one since Watson has done a similar kind of experiment, we don't know how long such conditioned emotions last.
• Most likely Albert’s fear disappeared, since we do know from other studies
(with adults) that if you stop pairing something like a frightening noise
with an object, the original association will begin to disappear.
• This disappearance is called extinction.
• Thus, after a while, Pavlov’s dogs would extinguish (stop) their salivation at
the presence of the experimenter unless the experimenter continued to
feed them occasionally.
This type of conditioning is the basis behind slot machines in Las Vegas. As long as a player receives a pay-out, even occasionally, they continue to pull the lever and keep gambling.
Removal of Fears
• One very important discovery was made
as a result of Watson’s experiments, and
it came from a student who worked for Watson, Mary Cover Jones.
• Aware of the effect that Watson’s experiments had had on Little Albert, she
wondered if she could reverse the procedure and cure a child of a terrible
fear.
• She found a three-year-old, "Peter," who panicked at the sight of a rabbit. In an experiment, she brought a rabbit into the room with Peter, close enough for him to see it.
• She then gave the child some food he liked. She moved the rabbit closer
and gave more food, and she continued this process, associating the
pleasure of food with the feared object.
• It worked: Peter lost his fear of rabbits. Jones had found the key to
removing all manner of fears, called phobias, that can make people’s lives
miserable – fears of elevators, snakes, dogs, and the like.
• Associating something pleasant with a feared object is still used quite
successfully today to reduce or stop such fears.
Operant Conditioning
• Behavior is learned or avoided as a result of its consequences.
• In classical conditioning, learning takes place without any choice; in
other words, meat on the tongue (or something that has been
associated with meat) will automatically cause salivation without
any choice by the organism. In operant conditioning, the organism
plays some role in what happens. This theory claims that humans
and animals learn as an end product of performing certain actions
(or operations).
• The distinction between classical and operant conditioning is often hard to grasp when encountered for the first time. But the brain has a way of remembering unusual things better than it remembers the commonplace, so here is an example that is truly absurd – you won't ever forget it.
First scene: Someone in your household decides to condition you classically to hate
a certain vegetable.
• At random times this person, carrying a handful of the cold vegetable, sneaks up
behind you and shoves it into your mouth while talking into your ear about
something nauseating.
• After a few of these encounters, you will find the thought of that vegetable quite
unpleasant.
• You have now been classically conditioned to dislike the vegetable, since you had
no control whatsoever over what was happening.
Second scene: You find three different varieties of canned vegetables in the
cupboard.
• You have never eaten any of them. You reach in, take one out, cook it, and eat it.
You do the same thing with the other two later on.
• The one you like best you will probably reach for and cook again.
• In this case, you have been operantly conditioned by your actions (operations)
and their consequences to prefer one vegetable over another.
B.F. Skinner
• Psychologist B.F. Skinner is best known for his work with the
operant conditioning theory.
• He believed that how we turn out in life is the result of what
we learn from all the operations we make over the years.
• If our actions result in people getting angry and disliking us,
we are being operantly conditioned to believe that the
world is a dangerous and threatening place.
• If the environment rewards us when we perform certain
acts, then we tend to repeat them.
• Thus, if you study hard, do a good job on a paper, and get a
note of praise, you will tend to study hard and do a good job
again; if you get a nasty note on your paper even though
you’ve done well, you will lose your desire to repeat these
actions.
Operant Conditioning Processes
Reinforcement follows a response and strengthens our tendency to repeat that response in the
future. For example, say that there is a bar inside an animal cage, and each time the animal presses
the bar, food appears. The behavior of bar pressing is reinforced (strengthened) by the arrival of the
food. After a while, when the animal is hungry, it will walk right over and push the bar.
• Primary reinforcement: something that is absolutely necessary for survival, such as food or
water. The possibility of obtaining one of these when you perform an action is the strongest
incentive to learn.
• Secondary reinforcement : anything that comes to represent a primary reinforcer.
- Because money can buy food and drink, it represents these primary reinforcers.
- All secondary reinforcers are related to some primary one. For example, you work for a high
grade because it is a formal way of receiving praise, and this praise represents the physical love
(primary reinforcement) in the form of hugs that you got from your parent(s) when you did a
good job as a child.
• Positive reinforcement occurs when something the organism wants (such as food) is added on (+,
positive) after an action.
• Negative reinforcement occurs when something unpleasant (negative) is stopped or taken away
(-, negative) if the organism does something.
• In one, something pleasant is added; in the other, something unpleasant is stopped or avoided.
• Try to remember that reinforcement always strengthens a response, rather than weakening it,
and this will be easier to understand.
Example:
• If the floor of a cage gives an animal a shock and the
animal learns to push a bar in order to stop the
electricity, this is negative reinforcement; it
strengthens a response (pushing the bar).
• Say that someone wants you to take out the trash,
which you keep forgetting to do. So the nagging starts,
and it keeps on and on.
• You are being negatively reinforced: all you want to do
is find a way to stop the endless whining about what a
mess you are. You take out the trash and are therefore
performing an act in order to stop something
unpleasant.
Big Bang Theory: Operant Conditioning
• http://www.youtube.com/watch?v=teLoNYvOf90
Punishment
• Students often confuse negative reinforcement with
punishment, but there is a very basic difference.
• Punishment is an attempt to weaken a response by
following it with something unpleasant, not to
strengthen it.
• There are two basic ways to go with punishment.
1. First, something desired can be taken away, as when
someone is fined for a traffic violation.
2. Second, something unwanted can be added, as when
students had to write “I shall not talk in class” 100
times on the blackboard.
Generalization and Discrimination
Generalization can also occur in operantly conditioned behavior.
• For instance, a boy who pats a dog and gets a wagging tail is likely to approach
the next dog he sees in the same fashion. If that dog also wags its tail, the boy’s
actions will generalize to all dogs, for the boy will assume they are all friendly.
Suppose, however, that the third dog the boy pats bites him.
In such a case, generalization has been instantly halted and replaced by
discrimination learning. In other words, the child learns to tell the difference
(discriminate) between dogs or between situations that are not all the same.
Babies often embarrass adults because of their
generalizations. For instance, a baby girl hugs her father and
says, “Dada.” Daddy gets so excited about this that he praises
her and runs to tell the mother that she has called him by
name. The little girl generalizes the response, sensibly in her
own mind, by calling every man she meets “Dada.” When the
other men don’t give her the same positive reinforcement,
she gradually discriminates between who is actually “Dada”
and who isn’t – even though she doesn’t actually know what
that sound means.
Extinction
• Often, when a response is no longer followed by
reinforcement, a person will gradually stop making that
response.
• This situation is called extinction, the same term used
earlier with classical conditioning.
• In both classical and operant conditioning, then,
extinction can occur.
• In both cases, an association has been weakened: in
classical conditioning, because the unconditioned
stimulus is no longer present; in operant conditioning,
because reinforcement is no longer present.
Shaping and Chaining
Shaping
• So far we have been talking about fairly simple, one-step behaviors. Two major techniques can be used to teach more complex or complicated responses.
1. Shaping = the “method of successive approximations.”
- In shaping, increasingly closer versions (approximations) of the desired
response are reinforced in turn (successively). We start out reinforcing a
rough version of the response we’re after.
- Once that rough version has been learned, the standard goes up. Now, a
smoother or more accurate attempt is required before reinforcement will
be given, and so on.
- Example: A dog being trained to jump through a hoop will first be
reinforced by praise or food for approaching the hoop that lies on the
ground. Next it is reinforced for walking through the hoop as it is held
vertically, touching the ground. Then the dog is shaped to jump through the
hoop held a few inches off the ground, and so forth. The same process is
gone through when someone is learning how to play a tune on the piano
or how to swim.
Chaining
2. Chaining = "connecting together."
- When we want a complete sequence done in order, we usually have to start by reinforcing each part of that sequence. Then, each part or link is connected to the others by reinforcement.
- Example: In learning a new dance, people learn the different steps or parts of the dance first. Then they put the parts together like links in a chain.
• Seeing Eye dogs for the blind are highly intelligent and remarkable examples of
what shaping and chaining can produce.
• They can read stoplights and traffic patterns, find curbs and doors, discover dangerous holes that might trip their owner, or find things the owner drops. They will even resist dangerous commands from the sightless person.
• All these behaviors occur as a smooth, ongoing process that looks completely
effortless – and is, after being done hundreds and hundreds of times.
• Since these animals are capable of forming close psychological bonds, only
occasionally during training is a reinforcer such as food used.
• The dog wants to please the trainer to such a degree that a pat on the head or
some other form of approval or praise is much more than enough and is generally
even preferred.
Pigeon Rescuers!
• Coast Guard search-and-rescue teams tested an unusual use for pigeons, because a pigeon's vision is so much sharper than a human's. The pigeons were trained to search for an orange disk and, when it was located, to push a button with their beaks as a signal. After training, they went on helicopter rescue missions to watch for orange "disks" – life jackets attached to people in the water.
• Pilots have trouble seeing them, but the pigeons don’t.
Pigeons have a 90% success rate, whereas the pilots are
stuck at 35%.
• The victims were thankful when the pigeon spotted the life
jacket and pecked a signal to the rescuers.
Schedules of Reinforcement
• There are different methods of providing reinforcement
during operant conditioning. So far we have focused on
continuous reinforcement – that is, each time a desired
behavior occurs, it is reinforced.
• In many cases, this is not a good method because the
creature gets used to having something and will quit if it
doesn’t show up every time.
• This problem can be avoided by using different schedules
of reinforcement, that is, different techniques.
• When the organism is not being continuously reinforced,
it is on a partial reinforcement schedule, of which there
are four.
• In partial reinforcement, the animal or person does not
get a reward each time a desired act is performed.
1. Variable Ratio
• A pigeon quickly learns to peck at a button for food. But if you use
continuous reinforcement, the pigeon will quit unless it is really
hungry.
• On the other hand, if the pigeon gets food after five pecks, then after seven pecks, then three, or whatever variable numbers you want to use, then once you stop the reinforcement, it will peck upwards of 10,000 times before finally giving up.
• This is the variable ratio schedule; “ratio” refers to numbers. Thus,
with the pigeons, you vary the number of pecks required before
reinforcement occurs.
• Humans can really get hooked on this type of schedule, which is how
slot machines work. Since players don’t know exactly when the
money will fall through the chute, they work hard at it, just like the
pigeons.
• Usually the machines are set to give a few coins as reinforcement
every now and then but to give a jackpot only infrequently.
2. Fixed Ratio
• What would happen if you kept the ratio the same, so there is one reinforcement every time the creature performs a certain number of acts?
• For example, what if the pigeon is rewarded after every five pecks? This is called a fixed ratio schedule since the relationship between the number of pecks and the number of reinforcements is always the same.
• With this method, the pigeons will peck as rapidly as possible because they know that the faster they go, the greater the number of reinforcements they will receive.
• At first this seems like it might be a good way to squeeze every drop of work possible out of factory workers – but there are many pitfalls.
• Suppose that an auto company decides to put the workers on a fixed ratio schedule in which they are paid by the number of cars they produce. As workers are forced to speed up, they will put screws in halfway and leave out parts in order to save time and produce more cars.
• Even when this system looks like it might work – some workers may decide to try to outproduce the others – pressure from the group as a whole will force a quick end to this competition. On occasion it may work, as with individual farmhands who are paid by the bushel.
3. Variable Interval
• A third type of partial reinforcement is called variable interval schedule.
• Here the creature never knows (in terms of time, or “interval”) when the
reinforcement will arrive.
• It may come at the end of three minutes, then two minutes, then five
minutes, and so forth.
• A real-life example can be found in that baffling activity called fishing, in
which a person sits hour after hour holding a pole up in the air staring into
space while apparently nothing happens.
• Actually, variable interval reinforcement is going on and keeps the person
moving the boat or adjusting the line: the line is attached to a bobber that
floats on the water and at unpredictable intervals (from the current or a
small wave most often, but on occasion from a fish), the bobber will
disappear below water level, causing considerable excitement and keeping
hope alive.
• With variable interval reinforcement, animals will keep working at a steady,
sluggish pace, just to be sure they are performing the right act when the
prize comes. But they don’t overdo it in terms of speed.
4. Fixed Interval
• A fourth type of schedule, called fixed interval
schedule, gives a reward when a specific, fixed amount
of time has passed. It has an interesting effect on the
behavior of animals.
• Pigeons that learn they are going to be rewarded every
five minutes no matter how fast they peck become
very casual about it all.
• They walk over to the pecking button, hit it once, saunter away for a while, and then return, hitting it again. They mope about until just before the five-minute interval is over. Then they move to the button and peck rapidly.
Social Learning
• In present-day psychology, most of the research has moved away
from classical and operant conditioning. While both play a role in
learning, they fall short of explaining complex learning processes.
• One of the current theories about learning is called social learning,
and its most prominent theorist is psychologist Albert Bandura.
• He claims that the most important aspect of learning was missed by
Pavlov, Watson and Skinner, for he feels that between the stimulus
and the response is the complex “inner person” who is able to
analyze events and make decisions before a response is given.
• Bandura feels that a more complex explanation for behavior is
needed when analyzing group, or social, living.
• In order to survive, he says, we imitate directly the activities of those around us; "social learning" is the general term for this imitation.
“Bobo Doll”
• Much of our behavior is acquired by observational learning, meaning that we learn patterns of behavior by watching others and deciding what to imitate.
• From the parent, a child learns speech patterns, personal habits, and how to react to other people. In other words, the child observes and then patterns behavior after that of the important people in his or her life.
• "Social learning" refers to all learning in social situations; "observational learning" is one of the processes used for social learning, in which we watch events, persons, and situations for cues on how to behave.
• In a now-famous experiment, Bandura demonstrated that children who observe aggressive adult models become aggressive themselves as a result. The children watched adults slugging plastic stand-up dolls. When the children were left alone, they imitated this behavior.
• The important point that Bandura is making: the child does not require a specific reinforcement such as food for learning to occur.
• Social learning can occur by exposure and imitation alone. Bandura felt that earlier explanations of learning were too simplified.
http://www.youtube.com/watch?v=zerCK0lRjp8
Cognitive Psychology and Learning
• Bandura’s approach to learning is clearly more complex than earlier
theories. Today psychologists are finding that even his version doesn’t
fully account for the elaborate task of learning.
• As a result, much of the present research looks at a means of learning called the cognitive approach.
• The word cognitive here means “knowledge-using,” with “knowledge”
meaning far more than just a stimulus and response or imitation.
• Using the cognitive approach, we are able to learn very abstract and
subtle things that could not be learned simply through conditioning or
social learning.
• For instance, some people have learned through the stories of others that
it is very bad luck to walk under a ladder or to break a mirror: this kind of
belief is very abstract and hence could not be learned by any method
other than the cognitive one. When psychologists study cognition, then,
they focus on how complex knowledge is obtained, processed, and
organized.
Complexities of Conditioning
• Cognitive psychologists support their position by pointing out that even
classical conditioning is not as simple as it first appears.
• For example, the type of cage an experimenter keeps an animal in will affect
the animal’s learning ability, as will the amount of time the animal has
previously spent in the cage.
• If an animal is in unfamiliar surroundings, it gets preoccupied with its new
environment and doesn’t pay attention to the experiment. And as in the
case of Pavlov’s dogs falling asleep, animals vary in the degree to which
they are interested in the experiment itself.
• There are also strange individual preferences – for example, pigeons will
tend to peck at lighted keys even without reinforcement, and they will peck
differently if they are trying to get water as compared to food.
• Animals condition more easily to pictures of rats or spiders than they do to
pictures of flowers and mushrooms. All of these findings make the animal
far more complex than just a responder to stimuli.
• To see how complicated it can become at the human level, think about how
the experimental results might have changed if Little Albert had been a bit
older and had known that Watson was standing behind him making the
noise.
• Under the cognitive theory, complexities Watson didn’t know about take on a
new light.
• For instance, at first it seems reasonable to assume that Watson’s work with Little
Albert can explain such human problems as phobias – in other words, that fears
of closed spaces, heights, snakes, open places, or germs arise from straight
association. But this is not necessarily the case.
• Psychologists have discovered a strange quirk about phobias: while many of them may indeed come from association, there is clearly a cognitive (or knowledge-based) aspect to them, because phobias only develop in relation to some kind of natural danger.
• Thus, if you are in a closed space, you really may be trapped; if you are up high,
you may fall; some snakes are indeed poisonous; if you are out in the open, you
may be more vulnerable; germs can kill you.
• All of these are known phobias; in contrast, there are no phobias for neutral or
“unnatural” things, such as umbrellas, trees, light switches, or automobile tires.
As you can see, the conditioning of fears seems to develop through a
sophisticated cognitive process.
Cognitive Maps
• In the 1930s, psychologist E.C. Tolman was already
arguing that the mechanical stimulus-response view
was too shallow an explanation for all learning. But
only with the emergence of cognitive psychology has
his early claim been taken seriously and studied
extensively.
• Tolman claimed that even rats in a maze were able to
form what he called a cognitive map.
• This term refers to the human and animal ability to form a mental image of where one is located in the environment.
• Thus, when a maze is changed, a rat is able to visualize
the change after going through it once and then can
run the maze to seek food using the new mental image.
• We now know that Tolman was right, that there is no such thing as
a simple organism.
• Rats in mazes, for example, not only form some kind of cognitive
map but they also use strategies of their own to explore carefully
the alleyways of a maze without going over the same territory more
than once.
• Chimpanzees in a maze are remarkable. An experimenter can carry
a baby chimp through a complicated maze and deposit bananas in
18 different places in the alleyway while the chimp watches.
• When freed, the chimp can find an average of 12 of the 18 bananas
quickly without duplicating any routes.
• Birds that store pine seeds in the ground for months at a time have the same excellent record; in fact, they even pass by the storage places of other birds and pick up only their own seeds.
• One of the most amusing of these experiments involves bees who use a
“scout” to find food. After finding the food, the scout flies back to the
hive to tell the others where the food is.
• The location is indicated by an elaborate scout-bee "dance" that shows the direction and distance of the food location by the length of up-down movement and the general pattern of its flying.
• One researcher took a scout bee out to the middle of a lake in a boat
and exposed it to food. When it flew back to the hive, it dutifully
reported direction and distance of the food to the others.
• Since the other bees also have cognitive maps, they presumably
thought the scout was mentally disturbed (food in the middle of the
lake? He’s got to be kidding!) because not one bee moved. In the next
step of the experiment, a scout bee was taken by boat to the shore at
the other side of the lake, exposed to food, and let go. When it
reported back to the same hive, all the bees came flying posthaste!
In Summary:
Classical Conditioning: Learning by association
Operant Conditioning: Learning through reinforcement
Social Learning: Learning by observing and imitating
Cognitive Learning: Learning through mental processing
For each of the following, indicate whether the capitalized
behavior is learned primarily through classical conditioning
(CC), operant conditioning (OP), or social learning (SL).
1. Nino EATS at Lou’s Pizza for the first time. Since he enjoys
the food, he returns there every Saturday for dinner.
2. The main reason that Nino EATS at Lou’s Pizza is because
all his friends eat there.
3. Every time Nino drives into Lou’s parking lot, his MOUTH
WATERS because he knows he will eat soon.
4. Little Lauren WEARS her mom's clothes simply because she wants to imitate her mom.
5. Little Lauren’s HEART RACES every time she wears her
mom’s clothes.
6. Little Lauren WEARS her mom’s clothes often because she
knows she will always get a laugh.
Classical Conditioning:
Read the following example of a behavior learned through
classical conditioning: “The first time that Sarah went to the
DENTIST, he stuck a long NEEDLE in her mouth, which naturally
caused her to experience FEAR. After a few visits, she
experienced FEAR not only when the needle was stuck in her
mouth but also when the DENTIST appeared to call her into the
office.” Using this example, identify the following concepts
(possible answers are capitalized above).
• The unconditioned stimulus (UCS)?
• The unconditioned response (UCR)?
• The conditioned stimulus (CS)?
• The conditioned response (CR)?
• The stimulus that started out as neutral (N)?
Positive vs. Negative Reinforcement
Which of the following are examples of positive
reinforcement, and which are examples of
negative reinforcement?
1. Tom hangs up his coat in order to get a dollar.
2. Tom hangs up his coat in order to stop his
mom’s yelling.
3. Mary stays at home every weekend so she won’t
run into her old boyfriend.
4. Mary stays at home every weekend because her
new boyfriend always comes over.
Written Response (5 marks)
Pick ONE of the 2 paragraphs to write about.
Scenario: Suppose that you are a parent and you want your teen
to mow the lawn every week.
• Paragraph 1: Write a short, well-written paragraph giving an
example of how you would use positive reinforcement to
achieve your goal (get teen to mow regularly). Explain the
rationale behind using positive reinforcement.
OR
• Paragraph 2: Write a short, well-written paragraph, giving an
example of how you would use negative reinforcement to
achieve your goal (get teen to mow regularly). Explain the
rationale behind using negative reinforcement.