Transcript
:: Slide 1 ::
:: Slide 2 ::
Learning is any relatively durable change in behavior or knowledge that is due
to experience.
Ivan Pavlov was a prominent Russian physiologist who did research on
digestion. Pavlov discovered that dogs will salivate in response to the sound
of a tone in a process we now call classical conditioning.
:: Slide 3 ::
:: Slide 4 ::
Pavlov noticed that the dogs in his experiments began to anticipate the
presentation of food and would salivate before being fed – when they heard
the click of the device that was used to present the meat powder. In order to
isolate the cause, Pavlov paired the sound of a tone with the presentation of
meat powder several times, then presented the sound of the tone alone
(without meat powder). What do you think happened next?
[Click to show dog’s reaction to bell]
The dog reacted to the sound of the tone even without the presentation of meat
powder. In other words, Pavlov demonstrated that learned associations were
formed by events in the organism’s environment.
:: Slide 5 ::
:: Slide 6 ::
At first, the bell tone is a neutral stimulus that does not cause the dog
to drool.
The meat powder is an unconditioned stimulus that elicits the
unconditioned response.
:: Slide 7 ::
:: Slide 8 ::
When the neutral stimulus (bell) is paired with the presentation of meat
powder, the unconditioned response to the meat powder causes the dog
to drool.
After conditioning (pairing of neutral stimulus with unconditioned stimulus),
the neutral stimulus becomes a conditioned stimulus, and the bell alone can
elicit the response of salivation.
:: Slide 9 ::
:: Slide 10 ::
It’s possible for emotional responses, such as fear and anxiety, to develop
through the process of classical conditioning.
Classical conditioning can be broken down into two basic parts – acquisition
and extinction.
The video shows a neutral stimulus (“That was easy” button/sound effect)
becoming a conditioned stimulus through the process of classical conditioning.
The airsoft gun is the unconditioned stimulus used to elicit a response that
then becomes associated with the neutral stimulus, gradually turning the button
into a conditioned stimulus.
Acquisition is the initial stage of learning something – like Pavlov’s dog
learning to drool at the tone of the bell. The graph here shows the strength of
the dog’s response, measured in drops of saliva, to the conditioned stimulus
(the bell).
:: Slide 11 ::
:: Slide 12 ::
Extinction is the process by which the association between the unconditioned
stimulus (meat powder) and conditioned stimulus (bell ringing) is broken.
Spontaneous recovery is a phenomenon discovered by Pavlov in which
an extinguished conditioned stimulus suddenly elicits a conditioned
response again when there is a period of time between trials in a classical
conditioning experiment.
When the bell is presented enough times without being paired with meat, the
response extinguishes.
The CR this time is weakened and eventually re-extinguishes, though after
another “rest” interval, a weaker spontaneous recovery occurs.
:: Slide 13 ::
:: Slide 14 ::
John B. Watson is referred to as the ‘father of modern behaviorism’ and was
influenced by Pavlov’s work. Behaviorism is primarily concerned with the
control and prediction of behavior.
Stimulus generalization is one way to see a CR without a direct CS-US pairing.
Individuals often fail to discriminate between two similar stimuli – for
example, falcons & hawks, or linguine & spaghetti. In classical conditioning,
this leads to showing the CR to a stimulus that was not previously encountered.
Stimulus discrimination occurs when an organism that has learned a response to a
specific stimulus does not respond in the same way to new stimuli that are similar to
the original stimulus. For example, a child who receives painful injections only from a
certain nurse may learn not to fear other similarly attired medical personnel.
:: Slide 15 ::
:: Slide 16 ::
John B. Watson, the founder of behaviorism, conducted an influential early
study of generalization.
There is a way other than stimulus generalization to observe a CR without a
CS-US pairing: higher-order conditioning.
Watson and a colleague, Rosalie Rayner, examined the generalization of
conditioned fear in an 11-month-old boy, known in the annals of psychology
as “Little Albert.” Watson and Rayner conditioned Little Albert to show fear
of the CS (a rat). Little Albert then generalized his fear to a number of furry
objects, including a rabbit, a fur coat, and Watson wearing a Santa
Claus mask.
Higher-order conditioning was also discovered by Pavlov.
:: Slide 17 ::
:: Slide 18 ::
Operant conditioning is a form of learning in which responses come to be
controlled by their consequences. B. F. Skinner of Harvard University first
described this type of learning in the late 1930s.
Skinner’s principle of reinforcement holds that organisms tend to repeat those
responses that are followed by favorable consequences, or reinforcement.
Higher-order conditioning involves three steps: first, a US is paired with a
CS (known as CS1); then CS1 is paired with another CS (CS2); finally, CS2
(which had not been paired with the US) elicits a CR from the organism.
A stimulus is positively reinforcing if its presentation strengthens the
response it follows – for example, food, water, sleep, or sex.
An example of positive reinforcement is if you tell a joke and all your friends
laugh, you’re more likely to keep telling jokes.
:: Slide 19 ::
:: Slide 20 ::
This is an experimental apparatus, also known as an “operant chamber,”
devised by Skinner for testing laboratory animals in operant
conditioning experiments. It is commonly referred to as the ‘Skinner box’.
A Skinner box is a small enclosure in which an animal can make a specific
response that is systematically recorded while the consequences of the
response are controlled.
Cumulative recorders create a graphic record of responding and reinforcement
in a Skinner box as a function of time. Each time the lever in the Skinner box
is pressed, it moves the pen up a step. A steeper slope represents a rapid
response rate.
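
To make the “each press steps the pen up; a steeper slope means a faster
response rate” idea concrete, here is a minimal Python sketch of a cumulative
record. It is not part of the lecture materials; the response times and the
printed read-out are made-up illustrations.

def cumulative_record(response_times, total_seconds):
    """Return the cumulative number of responses at each whole second."""
    times = sorted(response_times)
    counts, total, i = [], 0, 0
    for t in range(total_seconds + 1):
        while i < len(times) and times[i] <= t:
            total += 1          # one "pen step" per lever press
            i += 1
        counts.append(total)
    return counts

# A fast responder (one press every 2 s) vs. a slow one (one press every 5 s).
fast = cumulative_record(range(0, 60, 2), 60)
slow = cumulative_record(range(0, 60, 5), 60)

for t in range(0, 61, 15):
    print(f"t={t:2d}s  fast={fast[t]:2d} presses  slow={slow[t]:2d} presses")
# The fast record climbs much more steeply, which is the "steeper slope"
# point from the slide expressed in numbers.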
:: Slide 21 ::
:: Slide 22 ::
The rat in this Skinner box initially explores his surroundings, then eventually
presses the lever in the box, a behavior which is reinforced by the presentation
of water.
In operant conditioning, as in classical conditioning, acquisition refers to the initial stage of
learning some pattern of responding.
The rat becomes conditioned by the positive reinforcement to continue pushing
the lever.
Operant conditioning is usually established through a gradual process called shaping, which
involves the reinforcement of closer and closer approximations of a desired response.
Shaping is necessary when an organism does not, on its own, emit the desired response.
For example, when a rat is first placed in a Skinner box, it may not press the lever at all. In this
case the experimenter begins shaping lever-pressing behavior by reinforcing the rat whenever
it moves toward the lever.
:: Slide 23 ::
:: Slide 24 ::
“Priscilla, the Fastidious Pig,” was shaped to turn on a radio, eat at a table, put
dirty clothes in a hamper, run a vacuum cleaner, and then “go shopping” with a
shopping cart.
Extinction in operant conditioning is the process by which the association
between a response and its reinforcer is broken.
The most efficient means of breaking this association is to stop reinforcing
the operant response – to no longer present food when the bar is pressed,
for example.
Since responses are graphed cumulatively, the line never goes down – when a
response is extinguished, the line flattens.
:: Slide 25 ::
:: Slide 26 ::
A schedule of reinforcement determines which occurrences of a specific
response result in presentation of a reinforcer. The simplest schedule is
continuous reinforcement.
A fixed-ratio schedule entails giving a reinforcer after a fixed number of non-reinforced responses.
Continuous reinforcement occurs when every instance of a designated
response is reinforced.
A fixed-ratio schedule generally produces a rapid response rate, indicated by
the steep slope of the curve.
Intermittent reinforcement, or partial reinforcement, occurs when a designated
response is reinforced only some of the time.
:: Slide 27 ::
:: Slide 28 ::
Example: A student who gets money for every fifth “A” she receives on a test
is an example of a fixed-ratio schedule of reinforcement.
A variable-ratio schedule entails giving a reinforcer after a variable number
of non-reinforced responses.
Variable-ratio schedules, like fixed-ratio schedules, tend to produce a rapid
response rate.
:: Slide 29 ::
:: Slide 30 ::
Example: Playing slot machines is based on variable-ratio reinforcement
as the number of non-winning responses varies greatly before each time
the machine pays out.
Interval schedules require a time period to pass between the presentation
of reinforcers.
A fixed-interval schedule entails reinforcing the first response that occurs
after a fixed time interval has elapsed – reinforcement is not given before the
interval has elapsed.
:: Slide 31 ::
:: Slide 32 ::
Example: Once a year after Thanksgiving, retail stores have large discounts in
preparation for Christmas, on a day called “Black Friday.” Shopping before the
sale does not provide the reinforcer (getting cheaper items).
A variable-interval schedule entails giving the reinforcer for the first response
after a variable time interval has elapsed.
:: Slide 33 ::
:: Slide 34 ::
Example: Constantly checking a cell phone to see if there are text messages
waiting is reinforced by the periodic receipt of text messages.
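
A rough way to keep the four intermittent schedules straight is to write each
one as a decision rule for whether a given response earns a reinforcer. The
Python sketch below is only illustrative and not from the lecture; the ratios,
intervals, and the one-response-per-second demo use made-up values.

import random

def reinforced_fixed_ratio(response_count, ratio=5):
    # Fixed ratio: reinforce every `ratio`-th response
    # (e.g., money for every fifth "A" on a test).
    return response_count % ratio == 0

def reinforced_variable_ratio(p=1 / 5):
    # Variable ratio: each response pays off with probability p, so the number
    # of non-reinforced responses before a payoff varies (a slot machine).
    return random.random() < p

def reinforced_fixed_interval(time_since_last, interval=10):
    # Fixed interval: only the first response after `interval` seconds have
    # elapsed is reinforced (a once-a-year sale).
    return time_since_last >= interval

def reinforced_variable_interval(time_since_last, required_wait):
    # Variable interval: same idea, but the required wait is re-drawn at
    # random after each reinforcer (periodically arriving text messages).
    return time_since_last >= required_wait

# Tiny demo: 20 responses, one per second, under each schedule.
random.seed(0)
wait = random.uniform(5, 15)       # current variable-interval requirement
last_fi = last_vi = 0              # time of the most recent reinforcement
for n in range(1, 21):
    t = n
    fr = reinforced_fixed_ratio(n)
    vr = reinforced_variable_ratio()
    fi = reinforced_fixed_interval(t - last_fi)
    vi = reinforced_variable_interval(t - last_vi, wait)
    if fi:
        last_fi = t
    if vi:
        last_vi, wait = t, random.uniform(5, 15)
    print(f"response {n:2d}: FR={fr}  VR={vr}  FI={fi}  VI={vi}")

On the ratio schedules, reinforcement depends only on how many responses have
been made; on the interval schedules, it depends on how much time has passed.
This difference is why the ratio schedules described above encourage rapid
responding.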
Responses can be strengthened either by presenting positive reinforcers or by
removing negative reinforcers.
Positive reinforcement occurs when a response is strengthened because it is
followed by the presentation of a rewarding stimulus.
:: Slide 35 ::
:: Slide 36 ::
Something is negatively reinforcing if individuals can avoid or escape from an
aversive situation. The rat in this example presses the lever in order to remove
the unpleasant effects of an electric shock.
Have students guess the answer.
:: Slide 37 ::
:: Slide 38 ::
Aversive conditioning is an aspect of operant conditioning that deals with
unpleasant stimuli and how we learn to stay away from them.
Avoidance is a bit more complicated. It involves a signal, usually a light
or bell, that precedes the aversive stimulus; by responding when the signal
appears, the organism prevents the aversive stimulus from occurring at all.
Escape is defined as performing an operant response to cause an aversive
stimulus that is already present to cease (e.g., running to the other side of
a shuttle box to get away from shock).
:: Slide 39 ::
:: Slide 40 ::
Taste aversion is a special instance of conditioning because it breaks two of
the cardinal rules of the process – it occurs after only one pairing of CS-US,
and the presentation of the US (illness) and CS (taste) can be separated by
as much as 24 hours.
People tend to develop phobias to species that used to be genuine threats to
our ancient ancestors. According to Martin Seligman, evolutionary forces
gradually wired the human brain to acquire conditioned fears of these stimuli
rapidly and easily.
Taste aversion also supports the behavior-systems approach because it is
clearly a crucial response to poisoning. In this example, the taste of the
berries is the CS and the resulting illness is the US.
This clip shows a woman with a phobia of snakes. Watch as the phobia
gradually becomes extinguished.
:: Slide 41 ::
:: Slide 42 ::
Observational learning occurs when an organism’s response is influenced by
the observation of others, who are called models.
Left blank
Albert Bandura investigated observational learning extensively, and identified
four key processes in observational learning: attention, retention, reproduction,
and motivation.