Lecture 2: Basics and definitions

Introduction to Neural Networks
Neural Nets slides mostly from: Andy Philippides, University of Sussex
Uses of NNs
Neural Networks Are For:
Applications:
• Character recognition
• Optimization
• Financial prediction
• Automatic driving
• ..............................
Science:
• Neuroscience
• Mathematics, statistics
• Physics
• Computer science
• Psychology
• ...........................
What are biological NNs?
• UNITs: nerve cells called neurons; there are many different types, and they are extremely complex
• around 10^11 neurons in the brain (depending on the counting technique), each with about 10^3 connections
• INTERACTIONs: the signal is conveyed by action potentials; interactions at the synapse can be chemical (release and reception of neurotransmitters) or electrical
• STRUCTUREs: feedforward, feedback, and self-activating (recurrent)
“The nerve fibre is clearly a signalling mechanism of limited scope.
It can only transmit a succession of brief explosive waves, and the
message can only be varied by changes in the frequency and in the
total number of these waves. … But this limitation is really a small
matter, for in the body the nervous units do not act in isolation as
they do in our experiments. A sensory stimulus will usually affect a
number of receptor organs, and its result will depend on the
composite message in many nerve fibres.” Lord Adrian, Nobel
Acceptance Speech, 1932.
We now know it’s not quite that simple
• Single neurons are highly complex
electrochemical devices
• Synaptically connected networks are only
part of the story
• Many forms of interneuron communication
now known – acting over many different
spatial and temporal scales
The complexity of a
neuronal system can be
partly seen from a picture
in a book on computational
neuroscience
edited by Jianfeng
How do we go from real neurons to artificial ones?
[Figure: a neuron with its inputs (dendrites), the axon hillock, and its output (axon)]
Single neuron activity
• Membrane potential is the voltage difference between a neuron and its surroundings (taken as 0 mV)
[Figure: electrodes recording the membrane potential of cells relative to the surrounding medium at 0 mV]
Single neuron activity
• If you measure the membrane potential of a neuron and plot it over time, it looks like this:
[Figure: membrane-potential trace with a spike marked]
Single neuron activity
• A spike is generated when the membrane potential exceeds its threshold
Abstraction
• So we can forget all sub-threshold activity and concentrate on spikes (action potentials), which are the signals sent to other neurons
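The abstraction above can be sketched in code: treat a sampled membrane-potential trace as a list of values, ignore all sub-threshold activity, and keep only the upward threshold crossings. This is an illustrative sketch; the function name, the threshold value, and the trace are made up.

```python
# Hypothetical sketch: detect spikes as upward threshold crossings in a
# sampled membrane-potential trace (values in mV; all numbers illustrative).
def detect_spikes(trace, threshold=-50.0):
    """Return the indices where the potential crosses the threshold upward."""
    spikes = []
    for i in range(1, len(trace)):
        if trace[i - 1] < threshold <= trace[i]:
            spikes.append(i)
    return spikes

trace = [-70, -68, -55, -20, 30, -60, -70, -52, -10, 40, -65]
print(detect_spikes(trace))  # → [3, 8]: two upward crossings
```

Everything below threshold is discarded; only the spike times survive, which is exactly the abstraction the slide describes.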
Spikes
• Only spikes are important, since they are the signals other neurons receive
• Neurons communicate with spikes
• Information is coded by spikes
So if we can manage to measure the spiking times, we can decipher how the brain works …
Again, it's not quite that simple
• Spiking times in the cortex are random: with identical input to the identical neuron, the spike patterns are similar, but not identical
Recording from a real neuron: membrane potential
A single spike time is meaningless. To extract useful information, we have to average:
• for a group of neurons in a local circuit, where each neuron codes the same information
• over a time window
to obtain the firing rate r:

r = (number of spikes in the local circuit) / (time window) = 6 spikes / 1 sec = 6 Hz
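The averaging step can be sketched as follows: count the spikes recorded from the local circuit inside the time window and divide by the window length. The spike times below are illustrative; the 1-second window and 6 Hz result mirror the example above.

```python
# Sketch of the averaging step: the firing rate r is the spike count
# within a time window divided by the window length (names illustrative).
def firing_rate(spike_times, window=1.0):
    """Spikes per second within [0, window)."""
    return sum(1 for t in spike_times if 0.0 <= t < window) / window

spike_times = [0.05, 0.21, 0.38, 0.52, 0.77, 0.93]  # 6 spikes in 1 s
print(firing_rate(spike_times))  # → 6.0 (Hz)
```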
Hence we have the firing rate of a group of neurons, and we can build a network of these local groups.
r_i is the firing rate of the i-th input local circuit, and w_i (the weight, or synaptic strength) measures the strength of the interaction between circuits. The neurons at the output local circuit receive signals in the form

Σ_{i=1..N} w_i r_i

The output firing rate R of the output local circuit is then given by

R = f( Σ_{i=1..N} w_i r_i )

where f is the activation function, generally a sigmoidal function of some sort.
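A minimal sketch of this input-output relation, using the logistic function as an example of a sigmoidal f (the weight and rate values are made up):

```python
import math

# Sketch of the output rate R = f( sum_i w_i r_i ) with a sigmoidal f.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def output_rate(weights, rates):
    total = sum(w * r for w, r in zip(weights, rates))  # weighted input
    return sigmoid(total)

# Illustrative: 0.5*1.0 - 0.3*2.0 + 0.2*3.0 = 0.5, so R = sigmoid(0.5)
print(output_rate([0.5, -0.3, 0.2], [1.0, 2.0, 3.0]))  # → ~0.6225
```

The sigmoid keeps the output rate bounded, loosely mimicking the saturation of a real firing rate.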
Artificial Neural Networks
• Single neurons send out spikes
• Local circuits average these to obtain firing rates
Artificial Neural Networks (ANNs)
A network with interactions; an attempt to mimic the brain
• UNITs: artificial neurons (linear or nonlinear input-output units); small numbers, typically fewer than a few hundred
• INTERACTIONs: encoded by weights, which measure how strongly one neuron affects others
• STRUCTUREs: can be feedforward, feedback, or recurrent
It is still far too naïve as a brain model and as an information-processing system.
Four-layer networks
[Figure: a feedforward network with inputs x_1, x_2, …, x_n ("visual input"), two hidden layers, and an output layer ("motor output")]
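The four-layer idea can be sketched as a forward pass through successive layers, each unit applying a sigmoidal function to its weighted input. All sizes, weights, and biases below are invented for illustration.

```python
import math

# Illustrative sketch of a four-layer feedforward pass:
# input -> two hidden layers -> output, each unit computing f(sum w*x + b).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each row of `weights` feeds one output unit."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                   # "visual input"
h1 = layer(x, [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.1])
h2 = layer(h1, [[0.3, -0.2], [0.8, 0.8]], [0.0, -0.1])
y = layer(h2, [[1.0, -1.0]], [0.0])               # "motor output"
print(y)
```

Each call to `layer` corresponds to one layer in the diagram; chaining them gives the input-to-output mapping of the whole network.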
The general artificial neuron model has five components, shown in the following list. (The subscript i indicates the i-th neuron; the subscript j indicates the j-th input or weight.)
1. A set of inputs, x_j.
2. A set of weights, w_ij.
3. A bias, b_i.
4. An activation function, f.
5. The neuron output, y_i.
Thus the key to understanding ANNs is to understand/generate the local input-output relationship

y_i = f( Σ_{j=1..m} w_ij x_j + b_i )
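A minimal sketch of this five-component neuron, with the logistic function standing in for f (the input, weight, and bias values are made up):

```python
import math

# Minimal sketch of the five-component neuron: inputs x_j, weights w_j,
# bias b, activation f, and output y = f( sum_j w_j x_j + b ).
def neuron_output(x, w, b, f=lambda s: 1.0 / (1.0 + math.exp(-s))):
    s = sum(wj * xj for wj, xj in zip(w, x)) + b  # weighted sum plus bias
    return f(s)

# Illustrative: 0.4*1.0 + 0.7*0.0 + 0.3*(-1.0) + 0.2 = 0.3
print(neuron_output([1.0, 0.0, -1.0], [0.4, 0.7, 0.3], 0.2))  # → ~0.5744
```

Passing a different callable as `f` swaps in another activation function without changing the rest of the unit.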