Learning Algorithms and Neural Networks
MTR607- Spring 2012
Egypt-Japan University
Dr. Alaa Sagheer
[email protected]
MTR 607
Textbook: Simon Haykin, “Neural Networks: A Comprehensive
Foundation,” 2nd Ed., 1999
Lecturer: Dr. Alaa Sagheer
Place: Seminar Room, E-JUST
Grading: Class participation (10%),
Assignments and reports (20%),
Midterm test (30%),
Final exam (40%)
Course Overview
Introduction to Artificial Neural Networks,
Artificial and human neurons (Biological Inspiration)
The learning process,
Supervised and unsupervised learning,
Reinforcement learning,
Applications Development and Portfolio
The McCulloch-Pitts Model of Neuron,
Simple network layers, Multilayer networks
Perceptron,
Backpropagation algorithm,
Recurrent networks,
Associative memory,
Self-Organizing Maps,
Support Vector Machine and PCA,
Applications to speech, vision and control problems.
ANN’s Resources
Main text books:
“Neural Networks: A Comprehensive Foundation”, S. Haykin (very good, theoretical)
“Neural Networks for Pattern Recognition”, C. Bishop (very good, more accessible)
“Neural Network Design”, Hagan, Demuth and Beale (introductory)
Books emphasizing the practical aspects:
“Neural Smithing”, Reed and Marks
“Practical Neural Network Recipes in C++”, T. Masters
Seminal Paper:
“Parallel Distributed Processing”, Rumelhart, McClelland et al.
Other:
“Neural and Adaptive Systems”, J. Principe, N. Euliano, C. Lefebvre
ANN’s Resources
Review Articles:
R. P. Lippmann, “An Introduction to Computing with Neural Nets”, IEEE ASSP Magazine, pp. 4-22, April 1987.
T. Kohonen, “An Introduction to Neural Computing”, Neural Networks, 1, pp. 3-16, 1988.
A. K. Jain, J. Mao, K. M. Mohiuddin, “Artificial Neural Networks: A Tutorial”, IEEE Computer, March 1996, pp. 31-44.
Introduction to Artificial Neural Networks
Part I:
1. Artificial Neural Networks
2. Artificial and human neurons (Biological Inspiration)
3. Tasks & Applications of ANNs
Part II:
1. Learning in Biological Systems
2. Learning with Artificial Neural Networks
ANNs vs. Computers
Digital Computers:
• Require an analysis and explicit description of the problem to be solved.
• Deductive reasoning: known rules are applied to input data to produce output.
• Computation is centralized, synchronous, and serial.
• Not fault tolerant: one transistor goes and it no longer works.
• Static connectivity.
• Applicable when well-defined rules and precise input data are available.
Artificial Neural Networks:
• No explicit description of the problem is required.
• Inductive reasoning: given input and output data (training examples), the rules are constructed.
• Computation is collective, asynchronous, and parallel.
• Fault tolerant, with sharing of responsibilities.
• Dynamic connectivity.
• Applicable when rules are unknown or complicated, or when data are noisy or partial.
Artificial Neural Networks (1)
What is ANN?
ANN is a branch of "Artificial Intelligence". It is a system modeled on the human brain.
ANN goes by many names, such as connectionism, parallel distributed processing, neurocomputing, machine learning algorithms, and, finally, artificial neural networks.
The development of ANNs dates back to the early 1940s. They gained wide popularity in the late 1980s, as a result of the discovery of new techniques and developments in PCs.
Some ANNs are models of biological neural networks and some are not.
ANN is a processing device (an algorithm or actual hardware) whose design was motivated by the design and functioning of the human brain.
Inside ANN:
ANN's design is what distinguishes neural networks from other mathematical techniques.
ANN is a network of many simple processors ("units" or "neurons"); each unit has a small amount of local memory.
The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data.
The units operate only on their local data and on the inputs they receive via the connections.
Artificial Neural Networks (2)
ANNs Operation
ANNs normally have great potential for parallelism (a multiprocessor-friendly architecture), since the computations of the units are independent of each other, much like biological neural networks.
Most neural networks have some kind of "training" rule whereby the weights of connections are adjusted on the basis of presented patterns.
In other words, neural networks "learn" from examples, just like children, and exhibit some structural capability for generalization.
Artificial Neural Networks (3)
ANNs are a powerful technique (Black Box) to solve many real world
problems. They have the ability to learn from experience in order to
improve their performance and to adapt themselves to changes in the
environment.
In addition, they are able to deal with incomplete information or
noisy data and can be very effective especially in situations where it is
not possible to define the rules or steps that lead to the solution of a
problem.
Once trained, the ANN is able to recognize similarities when
presented with a new input pattern, resulting in a predicted output
pattern.
What can an ANN do?
Compute a known function
Approximate an unknown function
Pattern Recognition
Signal Processing
…….
Learn to do any of the above
Introduction to Artificial Neural Networks
Part I:
1. Artificial Neural Networks (ANNs)
2. Artificial and human neurons (Biological Inspiration)
3. Tasks & Applications of ANNs
Part II:
1. Learning in Biological Systems
2. Learning with Artificial Neural Networks
Biological Inspiration
Biological Neural Networks (BNN) are much more
complicated in their elementary structures than the
mathematical models we use for ANNs
Animals are able to react adaptively to changes in their external
and internal environment, and they use their nervous system to
perform these behaviours.
An appropriate model/simulation of the nervous system should
be able to produce similar responses and behaviours in artificial
systems.
The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution!
ANN as a Brain-Like Computer
Brain:
The human brain is still not well understood, and indeed its behavior is very complex!
There are about 10-11 billion neurons in the human cortex, each connected, on average, to 10,000 others, giving some 60 trillion synaptic connections in total.
The brain is a highly complex, nonlinear and parallel computer (information-processing system).
ANN as a model of a brain-like computer:
An artificial neural network (ANN) is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. This means that:
- Knowledge is acquired by the network through a learning (training) process;
- The strength of the interconnections between neurons is implemented by means of the synaptic weights used to store the knowledge.
The learning process is a procedure for adapting the weights with a learning algorithm in order to capture the knowledge. More mathematically, the aim of the learning process is to map a given relation between the inputs and the output of the network.
Principles of Brain Processing
How does our brain manipulate patterns? The brain's processes of pattern recognition and pattern manipulation are based on:
Massive parallelism:
The brain, viewed as an information or signal processing system, is composed of a large number of simple processing elements, called neurons. These neurons are interconnected by numerous direct links, called connections, and cooperate with each other to perform parallel distributed processing (PDP) in order to solve a desired computational task.
Connectionism:
The brain is a highly interconnected system of neurons, such that the state of one neuron affects the potential of the large number of other neurons to which it is connected according to weights or strengths. The key idea of this principle is that the functional capacity of biological neural nets is determined mostly not by a single neuron but by its connections.
Associative distributed memory:
Storage of information in the brain is supposed to be concentrated in the synaptic connections of the brain's neural network, or, more precisely, in the pattern of these connections and the strengths (weights) of the synaptic connections.
Biological Neuron
The biological neuron: the simple "arithmetic computing" element.
Biological Neuron (2)
Cell structures
Cell body
Dendrites
Axon
Synaptic terminals
Biological Neurons (3)
[Figure: a biological neuron, with its axon, dendrites and synapses labeled.]
The information transmission happens at the synapses; i.e., synaptic connection strengths among neurons are used to store the acquired knowledge.
In a biological system, learning involves adjustments to the synaptic connections between neurons.
Biological Neurons (4)
1. Soma, or cell body: a large, round central body in which almost all the logical functions of the neuron are realized (i.e., the processing unit).
2. The axon (output): a nerve fibre attached to the soma which serves as the final output channel of the neuron. An axon is usually highly branched.
3. The dendrites (inputs): a highly branching tree of fibres. These long, irregularly shaped nerve fibres (processes) are attached to the soma and carry electrical signals to the cell.
4. Synapses: the points of contact between the axon of one cell and the dendrite of another, regulating a chemical connection whose strength affects the input to the cell.
[Figure: the schematic model of a biological neuron, showing the soma, axon and dendrites, with synapses to the axon and dendrite of other neurons.]
Properties of ANNs
Learning from examples
labeled or unlabeled
Adaptivity
changing the connection strengths to learn things
Non-linearity
the non-linear activation functions are essential
Fault tolerance
if one of the neurons or connections is damaged, the
whole network still works quite well
Introduction to Artificial Neural Networks
Part I:
1. Artificial Neural Networks (ANNs)
2. Artificial and human neurons (Biological Inspiration)
3. Tasks & Applications of ANNs
Part II:
1. Learning in Biological Systems
2. Learning with Artificial Neural Networks
Applications of ANNs
Classification
In marketing: consumer spending pattern classification
In defence: radar and sonar image classification
In agriculture & fishing: fruit, fish and catch grading
In medicine: ultrasound and electrocardiogram image classification, EEGs, medical diagnosis
Recognition and Identification
In general computing and telecommunications: speech, vision and handwriting recognition
In finance: signature verification and bank note verification
Assessment
In engineering: product inspection monitoring and control
In defence: target tracking
In security: motion detection, surveillance image analysis and fingerprint matching
Forecasting and Prediction
In finance: foreign exchange rate and stock market forecasting
In agriculture: crop yield forecasting, deciding the category of potential food items (e.g., edible or non-edible)
In marketing: sales forecasting
In meteorology: weather prediction
Who are the Men of ANNs?!
Computer scientists want to find out about the properties of non-symbolic
information processing with neural nets and about learning systems in
general.
Statisticians use neural nets as flexible, nonlinear regression and
classification models.
Engineers of many kinds exploit the capabilities of neural networks in many
areas, such as signal processing and automatic control.
Cognitive scientists view neural networks as a possible apparatus to describe
models of thinking and consciousness (High-level brain function).
Neuro-physiologists use neural networks to describe and explore medium-level brain function (e.g., memory, sensory systems, motor control).
Physicists use neural networks to model phenomena in statistical mechanics
and for a lot of other tasks.
Biologists use Neural Networks to interpret nucleotide sequences.
Philosophers and some other people may also be interested in Neural
Networks for various reasons
Operation of Biological Neuron
The spikes travelling along the axon of the pre-synaptic neuron
trigger the release of neurotransmitter substances at the
synapse.
The neurotransmitters cause excitation or inhibition in the
dendrite of the post-synaptic neuron.
The integration of the excitatory and inhibitory signals may
produce spikes in the post-synaptic neuron.
The contribution of the signals depends on the strength of the
synaptic connection.
• Excitation means positive product between the incoming
spike rate and the corresponding synaptic weight;
• Inhibition means negative product between the incoming
spike rate and the corresponding synaptic weight;
ANN Architecture
[Figure: inputs feeding into an interconnected network of neurons that produces an output.]
An artificial neural network is composed of many
artificial neurons that are linked together according
to a specific network architecture. The objective of
the neural network is to transform the inputs into
meaningful outputs.
ANN Architecture (2)
Neurons are arranged in layers. Neurons work by processing information; they receive and provide information in the form of spikes.
The artificial neuron receives one or more inputs (representing the one or more dendrites).
At each neuron, every input has an associated weight which modifies the strength of that input; the neuron sums the weighted inputs together.
The sum is passed through a function known as an activation function or transfer function in order to produce an output (representing a biological neuron's axon).
ANN Architecture (3)
[Figure: a single artificial neuron with inputs x1, x2, ..., xn, weights w1, w2, ..., wn, and output y.]
\( z = \sum_{i=1}^{n} w_i x_i, \qquad y = H(z) \)
Each neuron takes one or more inputs and produces an output. At each neuron, every input has an associated weight which modifies the strength of that input. The neuron simply adds together all the weighted inputs and calculates an output to be passed on.
Models of A Neuron
Models of A Neuron (2)
[Figure: the biological analogy of the artificial model. Dendrites carry the inputs x1, x2, x3, ..., xn, each multiplied by a weight w1, w2, w3, ..., wn; a summation unit Σ combines them, and the result travels along the axon to its terminal branches.]
Models of A Neuron (3)
Three elements:
1. A set of synapses, or connection links: each is characterized by a weight or strength of its own, wkj. Specifically, a signal xj at the input of synapse 'j' connected to neuron 'k' is multiplied by the synaptic weight wkj.
2. An adder: sums the input signals, weighted by the respective synaptic strengths of the neuron, in a linear operation.
3. Activation function: limits the amplitude of the neuron's output to a finite range. The activation function is also referred to as a squashing (i.e., limiting) function {interval [0,1] or, alternatively, [-1,1]}.
Bias
The bias has the effect of increasing or lowering the net input of the activation function, depending on whether it is positive or negative:
\( y_k = \varphi(v_k) = \varphi(u_k + b_k) = \varphi\Big(\sum_j w_{kj} x_j + b_k\Big) \)
An artificial neuron:
- computes the weighted sum of its inputs (called its net input),
- adds its bias (the bias applies an affine transformation to the linear combiner output uk, giving vk),
- passes this value through an activation function.
We say that the neuron "fires" (i.e., becomes active) if its output is above zero.
This extra free variable (the bias) makes the neuron more powerful.
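To make the preceding slides concrete, here is a minimal Python/NumPy sketch of this neuron model. It is not part of the original slides; the logistic activation and the example numbers are arbitrary illustrations.

```python
import numpy as np

def neuron_output(x, w, b, phi=lambda v: 1.0 / (1.0 + np.exp(-v))):
    """One artificial neuron: weighted sum of the inputs, plus the bias,
    passed through the activation function phi."""
    v = np.dot(w, x) + b          # induced local field v_k = sum_j w_kj * x_j + b_k
    return phi(v)                 # y_k = phi(v_k)

# Example with three inputs and a logistic activation (illustrative values)
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, -0.2])
print(neuron_output(x, w, b=0.1))
```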
Activation Function φ(vk)
It defines the output of the neuron given an input or set of inputs. A standard computer chip circuit can be seen as a digital network of activation functions that can be "ON" (1) or "OFF" (0), depending on the input.
Non-linear activation functions are generally preferred: linear functions are limited because the output is simply proportional to the input.
Three basic types of activation function:
1. Threshold function,
2. Piecewise linear function,
3. Sigmoid function.
Activation functions (2)
Threshold (Step) function
The output yk of this activation function is binary, depending on
whether the input meets a specified threshold. The "signal" is sent,
i.e. the output is set to one, if the activation meets the threshold.
McCulloch-Pitts Model
Threshold Logic Unit
(TLU), since 1943
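The slide's formula is not included in the transcript; the standard threshold (Heaviside) form, consistent with the description above, is:

\[
\varphi(v) = \begin{cases} 1, & v \ge 0 \\ 0, & v < 0 \end{cases}
\qquad y_k = \varphi(v_k), \quad v_k = \sum_j w_{kj} x_j + b_k
\]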
Activation functions (3)
Piecewise Linear Function
- The amplification factor inside the linear region of operation is assumed to be unity.
- This form may be viewed as an approximation to a nonlinear amplifier.
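The formula itself is missing from the transcript; one common form, assuming unity gain in the linear region as noted above, is:

\[
\varphi(v) = \begin{cases} 1, & v \ge +\tfrac{1}{2} \\ v + \tfrac{1}{2}, & -\tfrac{1}{2} < v < +\tfrac{1}{2} \\ 0, & v \le -\tfrac{1}{2} \end{cases}
\]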
Activation functions (4)
Sigmoid function
- A fairly simple non-linear function, such as the logistic function.
- As the slope parameter approaches infinity, the sigmoid function becomes a threshold function.
Here "a" is the slope parameter of the sigmoid function.
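The formula image is not in the transcript; the logistic sigmoid with slope parameter a is:

\[
\varphi(v) = \frac{1}{1 + \exp(-a v)}
\]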
Artificial Neural Networks
Early ANN Models:
McCulloch-Pitts, Perceptron, ADALINE,
Hopfield Network,
Current Models:
Multilayer feed forward networks (multilayer perceptrons with backpropagation)
Radial Basis Function networks
Self-Organizing Networks
...
Feedback
Feedback exists in a dynamic system whenever the output of an element in the system influences, in part, the input to that element; it occurs in almost every part of the nervous system.
Feedback gives rise to one or more closed paths for the transmission of signals around the system.
It plays an important role in the study of a special class of neural networks known as recurrent networks.
Feedback (2)
The system is assumed to be linear, with a forward path (A) and a feedback path (B).
The output of the forward path is fed back through the feedback path, thereby partly determining its own input.
Feedback (3)
For example, consider the case where A is a fixed weight w and B is the unit-delay operator z⁻¹.
Feedback (4)
Then, we may express yk(n) as an infinite weighted summation of present and past samples of the input signal xj(n), as shown below.
The behaviour of the feedback system is therefore controlled by the weight w.
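The transcript omits the accompanying formula. A reconstruction of the expansion under the stated assumptions (forward operator A = w, feedback operator B = z⁻¹, closed-loop operator A/(1 − AB)) is:

\[
y_k(n) = \frac{w}{1 - w z^{-1}}\,\big[x_j(n)\big]
       = w \sum_{l=0}^{\infty} \big(w z^{-1}\big)^{l}\,\big[x_j(n)\big]
       = \sum_{l=0}^{\infty} w^{\,l+1}\, x_j(n-l)
\]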
Feedback (5)
Feedback systems are controlled by the weight w:
1. For |w| < 1, the output yk(n) converges: the system is stable.
2. For |w| ≥ 1, the output yk(n) diverges: the system is unstable (the divergence is linear for w = 1 and exponential for w > 1).
Network Architectures
Three different classes of network architectures:
1. Single-layer feed forward networks,
2. Multilayer feed forward networks,
3. Recurrent networks.
Single-layer feed forward network
- An input layer of source nodes projects directly onto an output layer of neurons.
- "Single-layer" refers to the output layer of computation nodes (neurons).
Multilayer feed forward network
It contains one or more hidden layers (hidden neurons).
"Hidden" refers to the part of the neural network that is not seen directly from either the input or the output of the network.
The function of the hidden neurons is to intervene between the input and the output.
By adding one or more hidden layers, the network is able to extract higher-order statistics from its input (see the sketch below).
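A minimal Python/NumPy sketch of a forward pass through such a network. The 3-4-2 layer sizes, the tanh activation, and the random weights are illustrative assumptions, not values from the slides.

```python
import numpy as np

def layer(x, W, b, phi=np.tanh):
    """One layer: weighted sums of the previous layer's outputs,
    followed by a nonlinear activation."""
    return phi(W @ x + b)

# Hypothetical 3-4-2 network: 3 inputs, one hidden layer of 4 neurons, 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input  -> hidden
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden -> output

x = np.array([0.5, -0.2, 1.0])
hidden = layer(x, W1, b1)         # hidden neurons compute intermediate features
output = layer(hidden, W2, b2)    # output layer produces the network response
print(output)
```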
Recurrent Networks
A recurrent network differs from a feed forward neural network in that it has at least one feedback loop.
A recurrent network may consist of a single layer of neurons, with each neuron feeding its output signal back to the inputs of all the other neurons (note: there is no self-feedback in this case).
Feedback loops have a profound impact on learning and on overall performance.
How to Decide on a Network Topology?
What transfer function should be used?
How many inputs does the network need?
How many hidden layers does the network need?
How many hidden neurons per hidden layer?
How many outputs should the network have?
There is no standard methodology to determine these values. Even though there are some heuristic guidelines, the final values are determined by a trial-and-error procedure.
Knowledge Representation
Knowledge refers to the stored information or models used by a person or machine to interpret, predict and appropriately respond to the outside world.
A good solution depends on a good representation of knowledge.
The main characteristics of knowledge representation are two-fold:
1) What information is actually made explicit?
2) How is the information physically encoded for subsequent use?
Knowledge Representation (2)
There are two kinds of Knowledge:
1) The known world states, or facts (prior knowledge),
2) Observations (measurements) of the world, obtained by sensors that probe the environment.
These observations represent the pool of information from which the examples used to train the NN are drawn.
Knowledge Representation (3)
These examples can be labeled or unlabeled.
Labeled examples:
- Each example representing an input signal is paired with a corresponding desired response,
- Labeled examples may be expensive to collect, as they require the availability of a "teacher" to provide a desired response for each labeled example.
Unlabeled examples:
- Unlabeled examples are usually abundant, as there is no need for supervision.
Knowledge Representation (4)
The design of a neural network may proceed as follows:
First, an appropriate architecture is chosen for the neural network, with an input layer consisting of source nodes equal in number to the pixels of an input image, and the network is trained with examples; this phase of the network design is called learning.
Then, the recognition performance of the trained network is tested with data not seen before (testing).
Rules of Knowledge Representation
There are four rules for knowledge representation:
Rule 1:
Similar inputs (i.e., patterns) drawn from similar classes should usually produce similar representations inside the network, and should therefore be classified as belonging to the same class.
There is a plethora of (i.e., many) measures for determining the similarity between inputs.
Rules of Knowledge Representation (2)
A commonly used measure of similarity is the Euclidean distance.
Let xi denote an m-by-1 vector; the Euclidean distance between a pair of such vectors xi and xj is given by Eq. (1) below.
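The formula itself is missing from the transcript; the standard definition, referred to as Eq. (1) in what follows, is:

\[
d(x_i, x_j) = \lVert x_i - x_j \rVert
            = \left[ \sum_{k=1}^{m} \big(x_{ik} - x_{jk}\big)^2 \right]^{1/2}
\tag{1}
\]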
Rules of Knowledge Representation (3)
Another measure is the dot product, or inner product.
Given a pair of vectors xi and xj of the same dimension, their inner product (the projection of vector xi onto vector xj) is given below.
(Note the relationship between the two measures, developed on the next slide.)
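The inner-product formula is not in the transcript; in standard notation it is:

\[
x_i^{\mathsf{T}} x_j = \sum_{k=1}^{m} x_{ik}\, x_{jk}
\]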
Rules of Knowledge Representation (4)
The smaller the Euclidean distance ||xi − xj|| (i.e., the more similar the vectors xi and xj are), the larger the inner product xiᵀxj will be.
To formalize this relationship, we normalize the vectors xi and xj to have unit length, i.e., ||xi|| = ||xj|| = 1, and use Eq. (1), as shown below.
The minimization of the Euclidean distance d(xi, xj) then corresponds to maximization of the inner product xiᵀxj and, therefore, of the similarity between the vectors xi and xj.
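The intermediate step is missing from the transcript; with the unit-length assumption, Eq. (1) gives:

\[
d^{2}(x_i, x_j) = (x_i - x_j)^{\mathsf{T}} (x_i - x_j)
               = \lVert x_i \rVert^2 - 2\, x_i^{\mathsf{T}} x_j + \lVert x_j \rVert^2
               = 2 - 2\, x_i^{\mathsf{T}} x_j
\]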
Rules of Knowledge Representation (5)
If the vectors xi and xj are stochastic (drawn from two different populations of data), a suitable measure is given below, where C⁻¹ is the inverse of the covariance matrix C; it is assumed that the covariance matrix is the same for both populations.
For a prescribed C, the smaller the distance d is, the more similar the vectors xi and xj will be.
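The formula is missing from the transcript; a reconstruction based on the Mahalanobis distance, assuming μi and μj denote the mean vectors of the two populations, is:

\[
d^{2}(x_i, x_j) = (x_i - \mu_i)^{\mathsf{T}}\, C^{-1}\, (x_j - \mu_j)
\]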
Rules of Knowledge Representation (6)
Rule 2:
Items to be categorized as separate classes should be given widely different representations in the network.
Rule 3:
If a particular feature is important, then there should be a large number of neurons involved in the representation of that item in the network.
Rule 4:
Prior information and invariances should be built into the design of a neural network whenever they are available, so as to simplify the network design by relieving it of having to learn them.
Rule 4 is particularly important and highly desirable.
Rules of Knowledge Representation (7)
Rule 4 is particularly important and highly desirable because it results in an NN with a Specialized Structure (SS):
1) Biological visual and auditory networks are known to be very specialized.
2) An NN with an SS has a smaller number of free parameters available for adjustment than a fully connected network; it therefore needs a smaller training data set, learns faster, and often generalizes better.
3) The rate of information transmission through a specialized network is faster.
4) The cost of building a specialized network is lower, due to its smaller size.
How to build prior information into NN design?
There are currently no well-defined rules for doing this, but there are some ad-hoc procedures that are known to yield useful results. In particular, we may use a combination of two techniques:
1. Restricting the network architecture (using local connections),
2. Constraining the choice of synaptic weights (using weight sharing).
The latter technique is especially important because it significantly reduces the number of free parameters (see the sketch below).
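A minimal Python/NumPy illustration of local connections combined with weight sharing. The 1-D example, the input size, and the kernel values are hypothetical and only serve to show how sharing shrinks the parameter count.

```python
import numpy as np

def locally_connected_shared(x, kernel):
    """Local connections + weight sharing: every output unit applies the
    same small kernel to its own neighbourhood of the input (a 1-D
    convolution), so only len(kernel) free parameters are needed."""
    k = len(kernel)
    return np.array([np.dot(kernel, x[i:i + k]) for i in range(len(x) - k + 1)])

x = np.random.default_rng(0).normal(size=16)   # 16 input nodes
kernel = np.array([0.25, 0.5, 0.25])           # 3 shared weights
y = locally_connected_shared(x, kernel)        # 14 output units, still only 3 weights
print("fully connected would need", len(x) * len(y), "weights; sharing needs", len(kernel))
```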
How to build invariance into NN design?
Consider any of the following:
1) When an object rotates, the image perceived by an observer changes as well.
2) The utterance of a speaker may be soft or loud, slower or quicker.
3) …..
A classifier should be invariant to such transformations. In other words, a class estimate represented by an output of the classifier must not be affected by transformations of the observed signal applied to the classifier input.
There are three techniques for rendering classifier-type NNs invariant to transformations:
1. Invariance by structure,
2. Invariance by training,
3. Invariance by feature space.
Learning in
Biological Systems
Learning in Biological Systems
Learning approach based on modeling adaptation in
biological neural systems
Learning = learning by adaptation
The young animal learns that the green fruits are sour,
while the yellowish/reddish ones are sweet. The
learning happens by adapting the fruit picking
behaviour
Learning in Biological Systems (2)
From experience: examples / training data
Learning happens by changing of the synaptic
strengths,
Synapses change size and strength with experience (or
examples or training data),
Strength of connection between the neurons is stored
as a weight-value for the specific connection,
Learning the solution to a problem = changing the
connection weights
Learning in Biological Systems (3)
Hebbian Learning
When two connected neurons are firing at the same
time, the strength of the synapse between them
increases,
“Neurons that fire together, wire together”
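A minimal Python/NumPy sketch of Hebb's rule. The learning rate and the toy numbers are illustrative assumptions, not values from the course.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.01):
    """Hebb's rule: a synapse is strengthened when pre-synaptic activity x
    and post-synaptic activity y occur together (delta_w = eta * x * y)."""
    return w + eta * x * y

w = np.array([0.2, 0.1, 0.3])          # current synaptic weights
x = np.array([1.0, 0.0, 1.0])          # pre-synaptic activity
y = float(np.dot(w, x))                # post-synaptic activity
w = hebbian_update(w, x, y)
print(w)                               # weights on the co-active inputs have grown
```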
Learning in ANN
We may categorize the learning processes used by neural networks as follows:
1. Learning with a teacher,
- Supervised Learning
2. Learning without a teacher,
- Unsupervised Learning
- Reinforcement Learning
Supervised Learning
In supervised learning, both the
inputs and the outputs are
provided. The network then
processes the inputs and compares
its resulting outputs against the
desired outputs
Errors are then calculated, causing
the system to adjust the weights
which control the network. This
process occurs over and over as the
weights are continually improved.
The supervised learning process constitutes a closed-loop feedback system, but the unknown environment is outside the loop.
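A minimal Python/NumPy sketch of this closed loop, using the LMS/delta rule purely as an illustration. The learning rate, the linear output, and the toy examples are assumptions, not the course's specific algorithm.

```python
import numpy as np

def train_step(w, b, x, target, eta=0.1):
    """One supervised update: compare the network output with the desired
    output and correct the weights in proportion to the error (LMS/delta rule)."""
    y = np.dot(w, x) + b          # network output for this input
    error = target - y            # desired output minus actual output
    w = w + eta * error * x       # error-driven weight adjustment
    b = b + eta * error
    return w, b

# Repeated presentation of labeled examples gradually reduces the error
w, b = np.zeros(2), 0.0
examples = [(np.array([0.0, 1.0]), 1.0), (np.array([1.0, 0.0]), -1.0)]
for _ in range(50):
    for x, t in examples:
        w, b = train_step(w, b, x, t)
print(w, b)
```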
Supervised Learning (2)
It is based on a labeled training set.
The class of each piece of data in the training set is known.
Class labels are pre-determined and provided in the training phase.
[Figure: a scatter of training examples, each labeled as Class A or Class B.]
Understanding Supervised Learning
[Figure: a set of points labeled A and B that the network must learn to separate.]
Two Possible Solutions…
[Figure: the same labeled A and B points, separated in two different possible ways.]
How to solve a given problem of supervised learning?
Various steps have to be considered:
1. Determine the type of training examples,
2. Gather a training data set that satisfactorily describes the given problem,
3. After the training process, test the performance of the learned artificial neural network with a test (validation) data set,
4. The test data set consists of data that has not been introduced to the artificial neural network during learning.
Reinforcement Learning
The learning of an input-output mapping is performed through continued interaction with the environment in order to minimize a scalar index of performance.
Or:
A machine learning technique that sets the parameters of an artificial neural network, where data is usually not given but generated by interactions with the environment.
Reinforcement Learning (2)
Reinforcement learning is built around a critic that converts a primary reinforcement signal received from the environment into a higher-quality reinforcement signal.
Unsupervised Learning
No help from the outside,
No information available on the desired output,
Input: set of patterns P, from n-dimensional space S, but little /
no information about their classification, evaluation, interesting
features, etc.
It must learn these by itself!
Learning by doing
Tasks: Used to pick out structure in the input
Clustering - Group patterns based on similarity,
Vector Quantization - Fully divide up S into a small set of
regions (defined by codebook vectors) that also helps cluster P,
Feature Extraction - Reduce dimensionality of S by removing
unimportant features (i.e. those that do not help in clustering P)
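As a concrete illustration of the vector-quantization task, here is a minimal Python/NumPy sketch of an online codebook update. The codebook size, learning rate, and Gaussian input are arbitrary assumptions used only for the example.

```python
import numpy as np

def vq_update(codebook, x, eta=0.05):
    """Online vector quantization: move the codebook vector nearest to the
    presented pattern a little toward it (no labels are needed)."""
    k = np.argmin(np.linalg.norm(codebook - x, axis=1))   # winning region
    codebook[k] += eta * (x - codebook[k])
    return codebook

rng = np.random.default_rng(1)
codebook = rng.normal(size=(4, 2))     # 4 regions of a 2-dimensional input space S
for _ in range(200):
    x = rng.normal(size=2)             # an unlabeled pattern drawn from S
    codebook = vq_update(codebook, x)
print(codebook)
```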
Supervised vs. Unsupervised
Supervised:
• Task performed: Classification, Pattern Recognition
• NN models: Perceptron, Feed-Forward NN
Unsupervised:
• Task performed: Clustering, Pattern Recognition, Feature Extraction, VQ
• NN models: Self-Organizing Maps, ART