NEURAL NETWORKS (JST) LECTURE II:
BASIC CONCEPTS
Amer Sharif, S.Si, M.Kom
INTRODUCTION REVIEW

Neural Network definition:
- A massively parallel distributed processor made up of simple processing units (neurons)
- Stores experiential knowledge and makes it available for use
- Knowledge is acquired from the environment through a learning process
- Knowledge is stored as interneuron connection strengths (synaptic weights)
INTRODUCTION REVIEW

Benefits:
- Nonlinearity
- Input-Output Mapping
- Adaptivity
- Evidential Response
- Contextual Information
- Fault Tolerance/Graceful Degradation
- VLSI Implementability
- Uniform Analysis & Design
NEURON MODELLING

Basic elements of a neuron:
- A set of synapses or connecting links
  - Each synapse is characterized by its weight
  - Signal x_j at synapse j connected to neuron k is multiplied by synaptic weight w_kj
  - The bias is b_k
- An adder for summing the input signals
- An activation function for limiting the output amplitude of the neuron
NEURON MODELLING

Block diagram of a nonlinear neuron:

[Figure: input signals x_1, x_2, ..., x_m are scaled by synaptic weights w_k1, w_k2, ..., w_km and combined at a summing junction together with the bias b_k; the result v_k passes through the activation function φ(·) to give the output y_k]

u_k = \sum_{j=1}^{m} w_{kj} x_j

y_k = \varphi(u_k + b_k)
NEURON MODELLING

Note:
- x_1, x_2, ..., x_m are the input signals
- w_k1, w_k2, ..., w_km are the synaptic weights of neuron k
- u_k is the linear combiner output
- b_k is the bias
- φ(·) is the activation function
- y_k is the output signal of the neuron
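To make the notation concrete, here is a minimal sketch of the neuron model above; NumPy and the logistic activation are my own choices, not part of the lecture:

```python
import numpy as np

def neuron_output(x, w, b, phi):
    """Compute y_k = phi(u_k + b_k) for a single neuron k."""
    u = np.dot(w, x)       # linear combiner output: u_k = sum_j w_kj * x_j
    return phi(u + b)      # activation applied to v_k = u_k + b_k

logistic = lambda v: 1.0 / (1.0 + np.exp(-v))

x = np.array([0.5, -1.0, 2.0])    # input signals x_1..x_m
w = np.array([0.4, 0.3, -0.1])    # synaptic weights w_k1..w_km
b = 0.2                           # bias b_k
print(neuron_output(x, w, b, logistic))
```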
NEURON MODELLING

If

v_k = u_k + b_k

and the bias is substituted for a synapse with fixed input x_0 = +1 and weight w_{k0} = b_k, then

v_k = \sum_{j=0}^{m} w_{kj} x_j

and

y_k = \varphi(v_k)
NEURON MODELLING

Modified block diagram of a nonlinear neuron:

[Figure: a fixed input x_0 = +1 with weight w_{k0} = b_k (bias) joins the input signals x_1, x_2, ..., x_m with synaptic weights w_k1, w_k2, ..., w_km at the summing junction; its output v_k passes through the activation function φ(·) to give y_k]

v_k = \sum_{j=0}^{m} w_{kj} x_j

y_k = \varphi(v_k)
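As a quick numerical check of this substitution (illustrative values, not from the slides), augmenting the input with x_0 = +1 and the weights with w_k0 = b_k reproduces the same v_k:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, -0.1])
b = 0.2

v_explicit = np.dot(w, x) + b          # v_k = u_k + b_k

x_aug = np.concatenate(([1.0], x))     # fixed input x_0 = +1
w_aug = np.concatenate(([b], w))       # weight w_k0 = b_k
v_augmented = np.dot(w_aug, x_aug)     # v_k = sum_{j=0}^{m} w_kj x_j

assert np.isclose(v_explicit, v_augmented)
```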
ACTIVATION FUNCTIONS

Activation Function types:

- Threshold Function

y_k = \begin{cases} 1, & v_k \ge 0 \\ 0, & v_k < 0 \end{cases}

where

v_k = \sum_{j=1}^{m} w_{kj} x_j + b_k

[Plot: threshold function φ(v), a unit step from 0 to 1 at v = 0, shown for v from -2 to 2]

Also known as the McCulloch-Pitts model.
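A classic way to see the McCulloch-Pitts model at work is a logic gate. The weights and bias below are my own illustrative choice; they make the thresholded neuron compute AND:

```python
def threshold_neuron(x, w, b):
    """McCulloch-Pitts neuron: fires (1) iff v_k = sum_j w_kj*x_j + b_k >= 0."""
    v = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if v >= 0 else 0

# AND gate: only x = (1, 1) brings v_k up to >= 0
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, threshold_neuron(x, w=(1, 1), b=-1.5))
```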
ACTIVATION FUNCTIONS

- Piecewise-Linear Function

\varphi(v) = \begin{cases} 1, & v \ge \tfrac{1}{2} \\ v + \tfrac{1}{2}, & -\tfrac{1}{2} < v < \tfrac{1}{2} \\ 0, & v \le -\tfrac{1}{2} \end{cases}

[Plot: piecewise-linear function φ(v), rising from 0 to 1 between v = -1/2 and v = +1/2, shown for v from -2 to 2]
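Reading the definition above, the middle segment is just v + 1/2 clamped to [0, 1], so a one-line NumPy version (an illustration, not from the slides) is:

```python
import numpy as np

def piecewise_linear(v):
    # 0 for v <= -1/2, v + 1/2 in between, 1 for v >= 1/2
    return np.clip(v + 0.5, 0.0, 1.0)

print(piecewise_linear(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))  # [0. 0. 0.5 1. 1.]
```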
ACTIVATION FUNCTIONS

- Sigmoid Function
  - S-shaped
  - Sample logistic function:

\varphi(v) = \frac{1}{1 + \exp(-av)}

[Plot: logistic function φ(v) for v from -10 to 10, with the curve steepening as a increases]

  - a is the slope parameter: the larger a, the steeper the function
  - Differentiable everywhere
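A short sketch (NumPy assumed) of the slope parameter's effect; the derivative in the comment uses the standard logistic identity φ'(v) = a φ(v)(1 - φ(v)), which is why the function is differentiable everywhere:

```python
import numpy as np

def logistic(v, a=1.0):
    return 1.0 / (1.0 + np.exp(-a * v))

v = 0.5
for a in (0.5, 1.0, 4.0):
    phi = logistic(v, a)
    # slope at v: d(phi)/dv = a * phi * (1 - phi) -> larger a, steeper curve
    print(a, phi, a * phi * (1.0 - phi))
```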
NEURAL NETWORKS AS DIRECTED GRAPHS

Neural networks may be represented as directed graphs:
- Synaptic links (linear I/O): y_k = w_kj x_j
- Activation links (nonlinear I/O): y_k = φ(x_j)
- Synaptic convergence (summing of incoming signals): y_k = y_i + y_j
- Synaptic divergence (fan-out): the same signal x_j is transmitted along several outgoing links
NEURAL NETWORKS AS DIRECTED GRAPHS

Architectural graph: a partially complete directed graph

[Figure: fixed input x_0 = +1 and input signals x_1, x_2, ..., x_m feed a single neuron that produces the output y_k]
FEEDBACK

- The output of a system influences some of the input applied to the system
- One or more closed paths of signal transmission around the system
- Feedback plays an important role in recurrent networks
FEEDBACK

Sample single-loop feedback system:

[Figure: input x_j(n) and a delayed copy of the output are summed to give x'_j(n), which is scaled by the weight w to produce the output y_k(n); the feedback path applies the unit-delay operator z^{-1}]

y_k(n) = \sum_{l=0}^{\infty} w^{l+1} x_j(n-l)

- w is a fixed weight
- z^{-1} is the unit-delay operator
- x_j(n-l) is a sample of the input signal delayed by l time units
- The output signal y_k(n) is an infinite weighted summation of present and past samples of the input signal x_j(n)
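Unrolling the diagram gives the recurrence y_k(n) = w(x_j(n) + y_k(n-1)), which the closed-form sum above should match. A small sketch (impulse input, finite history, my own illustration) compares the two:

```python
# Recurrence read off the diagram: y(n) = w * (x(n) + y(n-1)), with y(-1) = 0
def simulate(x, w):
    y, out = 0.0, []
    for xn in x:
        y = w * (xn + y)
        out.append(y)
    return out

# Closed form, truncated to the finite past: y(n) = sum_{l=0}^{n} w**(l+1) * x(n-l)
def closed_form(x, w):
    return [sum(w ** (l + 1) * x[n - l] for l in range(n + 1)) for n in range(len(x))]

x = [1.0, 0.0, 0.0, 0.0, 0.0]   # unit impulse at n = 0
print(simulate(x, 0.5))          # decays geometrically: convergent for w < 1
print(closed_form(x, 0.5))       # matches the simulated loop
```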
FEEDBACK

Dynamic system behavior is determined by the weight w.

w < 1:
- The system is exponentially convergent/stable
- The system possesses infinite memory: the output depends on input samples extending into the infinite past
- Memory is fading: the influence of past samples is reduced exponentially with time n

[Plot: y_k(n) versus n for w < 1, decaying exponentially from wx_j(0)]
FEEDBACK

w = 1:
- The system is linearly divergent

[Plot: y_k(n) versus n for w = 1, growing linearly from wx_j(0)]

w > 1:
- The system is exponentially divergent

[Plot: y_k(n) versus n for w > 1, growing exponentially from wx_j(0)]
NETWORK ARCHITECTURES

Single-Layer Feedforward Networks
- Neurons are organized in layers
- "Single-layer" refers to the output neurons
- Source nodes supply to the output neurons but not vice versa
- The network is feedforward, or acyclic

[Figure: an input layer of source nodes fully connected to an output layer of neurons]
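In matrix form, a single layer performs one weighted sum per output neuron; here is a minimal sketch of the forward pass y = φ(Wx + b), with NumPy and illustrative sizes of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 4, 3                        # 4 source nodes, 3 output neurons

W = rng.normal(size=(k, m))        # one row of synaptic weights per output neuron
b = rng.normal(size=k)             # one bias per output neuron
x = rng.normal(size=m)             # signals from the input layer of source nodes

y = 1.0 / (1.0 + np.exp(-(W @ x + b)))   # logistic activation on each neuron
print(y.shape)                            # (3,)
```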
NETWORK ARCHITECTURES

Multilayer Feedforward Networks
- One or more hidden layers
- Hidden neurons enable the extraction of higher-order statistics
- The network acquires a global perspective due to the extra set of synaptic connections and neural interactions

[Figure: a 7-4-2 fully connected network: an input layer of 7 source nodes, a layer of 4 hidden neurons, and a layer of 2 output neurons]
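For the 7-4-2 example, the same forward pass is applied twice in sequence. The sketch below uses random weights purely to check that the layer shapes chain correctly:

```python
import numpy as np

rng = np.random.default_rng(1)
phi = lambda v: 1.0 / (1.0 + np.exp(-v))   # logistic activation

W1, b1 = rng.normal(size=(4, 7)), rng.normal(size=4)   # input (7) -> hidden (4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)   # hidden (4) -> output (2)

x = rng.normal(size=7)      # 7 source nodes
h = phi(W1 @ x + b1)        # 4 hidden neurons extract higher-order statistics
y = phi(W2 @ h + b2)        # 2 output neurons
print(h.shape, y.shape)     # (4,) (2,)
```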
NETWORK ARCHITECTURES

Recurrent Networks
- At least one feedback loop
- Feedback loops affect the learning capability and the performance of the network

[Figure: a layer of neurons whose outputs are fed back to the inputs through z^{-1} unit-delay operators]
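One feedback loop in code, as a generic sketch rather than a specific architecture from the slides: the outputs pass through the unit delays and re-enter the summation alongside the fresh inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
phi = np.tanh                    # any nonlinear activation serves here

Wx = rng.normal(size=(3, 5))     # input-to-neuron weights
Wy = rng.normal(size=(3, 3))     # feedback weights through the z^-1 delays

y = np.zeros(3)                  # delayed outputs start at zero
for n in range(4):
    x = rng.normal(size=5)       # input at time n
    y = phi(Wx @ x + Wy @ y)     # y(n) depends on x(n) and on y(n-1)
    print(n, y)
```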
KNOWLEDGE REPRESENTATION

Definition of Knowledge:
Knowledge refers to stored information or models used by a person or a machine to interpret, predict, and appropriately respond to the outside world.

Issues:
- What information is actually made explicit
- How information is physically encoded for subsequent use

Knowledge representation is goal-directed:
- A good solution depends on a good representation of knowledge
KNOWLEDGE REPRESENTATION

Challenges faced by Neural Networks:
- Learn the model of the world/environment
- Maintain the model consistent with the real world so as to achieve the desired goals

Neural Networks may learn from a set of observation data in the form of input-output pairs (training data/training sample):
- The input is an input signal and the output is the corresponding desired response
KNOWLEDGE REPRESENTATION

Handwritten digit recognition problem:
- Input signal: one of 10 images of digits
- Goal: to identify the image presented to the network as input
- Design steps (see the sketch after this list):
  - Select the appropriate architecture
  - Train the network with a subset of examples (learning phase)
  - Test the network by presenting data/digit images not seen before, then compare the response of the network with the actual identity of the digit image presented (generalization phase)
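A minimal end-to-end sketch of these design steps. The use of scikit-learn and its bundled 8x8 digits dataset is my assumption; the lecture does not prescribe a toolkit:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()   # 10 digit classes, 8x8 images flattened to 64 features

# Hold out digit images the network never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Architecture selection, then the learning phase on the training subset
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Generalization phase: compare responses with the actual digit identities
print("accuracy on unseen digits:", clf.score(X_test, y_test))
```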
KNOWLEDGE REPRESENTATION

Difference from a classical pattern classifier:
- Classical pattern-classifier design steps:
  - Formulate a mathematical model of the problem
  - Validate the model with real data
  - Build based on the model
- Neural Network design is:
  - Based on real-life data
  - Data may "speak for itself"
  - The neural network not only provides a model of the environment but also processes the information
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS

AI systems must be able to:
- Store knowledge
- Use stored knowledge to solve problems
- Acquire new knowledge through experience

AI components:
- Representation
  - Knowledge is represented in a language of symbolic structures
  - Symbolic representation makes knowledge relatively easy for human users to understand
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS

- Reasoning
  - Able to express and solve a broad range of problems
  - Able to make explicit and implicit information known to it
  - Has a control mechanism that determines which operation to apply to a particular problem, when a solution has been obtained, and when further work on the problem must be terminated

- Rules, Data, and Control:
  - Rules operate on Data
  - Control operates on Rules

- The Travelling Salesman Problem (see the sketch after this list):
  - Data: possible tours and their costs
  - Rules: ways to go from city to city
  - Control: which Rules to apply and when
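The Data/Rules/Control split can be mirrored directly in code. The tiny instance and the exhaustive control strategy below are my own illustration, not from the lecture:

```python
import itertools

# Data: city-to-city travel costs (symmetric) over the possible tours
cost = {("A", "B"): 2, ("A", "C"): 5, ("B", "C"): 3}
cities = ["A", "B", "C"]

def leg_cost(a, b):
    return cost.get((a, b)) or cost[(b, a)]

# Rules: how a tour is assembled from city-to-city moves, and what it costs
def tour_cost(order):
    legs = zip(order, order[1:] + order[:1])   # close the loop back to the start
    return sum(leg_cost(a, b) for a, b in legs)

# Control: which rule applications to try and when to stop
# (here: exhaustively evaluate every tour that starts at "A")
tours = [("A",) + p for p in itertools.permutations(cities[1:])]
best = min(tours, key=tour_cost)
print(best, tour_cost(best))
```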
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS

- Learning

[Diagram: the Environment supplies information to a Learning element, which updates the Knowledge Base used by a Performance element]

- Inductive learning: determine rules from raw data and experience
- Deductive learning: use rules to determine specific facts
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS

Comparison by parameter:

Level of Explanation
- Artificial Intelligence: symbolic representation with sequential processing
- Neural Networks: parallel distributed processing (PDP)

Processing Style
- Artificial Intelligence: sequential
- Neural Networks: parallel

Representational Structure
- Artificial Intelligence: quasi-linguistic structure
- Neural Networks: poor

Summary
- Artificial Intelligence: formal manipulation of algorithms and data representations in a top-down fashion
- Neural Networks: parallel distributed processing with a natural ability to learn in a bottom-up fashion