A neuron receives input from other neurons
N. Laskaris
Motivation
‘‘We must develop, as quickly as possible,
technology in the direction
that will allow direct communication
between the human brain and computers.
Intelligent machines should
contribute to human intelligence
and not compete with it’’
--- Stephen Hawking
Grand Vision
Philosophical Inquiries
What tasks are machines good at doing that humans are not?
What tasks are humans good at doing that machines are not?
What tasks are both good at?
What does it mean to learn?
How is learning related to intelligence?
What does it mean to be intelligent?
Do you believe a machine will ever be built that exhibits intelligence?
Have the above definitions changed over time?
If a computer were intelligent, how would you know?
What does it mean to be conscious?
Can one be intelligent and not conscious or vice versa?
Although it may appear to be no more than a mass of
tissue weighing a mere three pounds (about 1.35 kg),
the brain is a remarkable organ.
It is critical to our survival.
Without the brain, vital functions such as heart rhythms,
breathing, and digestion would be impossible.
But the brain does so much more!
Did you enjoy your last meal?
Do you remember the last movie you saw?
Did it make you feel happy or sad?
Can you trace all the steps you take to get
from your house to the nearest convenience store?
The brain is what enables us to have sensations,
make decisions, and take action.
It allows us to learn a whole lifetime of experiences,
and gives us the ability for language and abstract thought.
In short, the brain makes possible the human condition.
How does it do it?
Neuroscience
Functional Organization
The brain contains over 100 billion specialized cells called neurons.
Neurons can be thought of as the basic processing units of the brain and convey
signals by passing electrical impulses called action potentials
from one end of themselves to another.
Action potentials are generated by the opening and closing of microscopic gates
in the cell membrane. At the end of the axon, information is transferred to the
next neuron via a specialized structure, called the synapse.
The synapse is where individual neurons
transmit information to other neurons.
Neurons come very close to each other at the synapse,
but do not actually touch.
Communication between neurons happens when chemical signals
are transmitted across a gap known as the synaptic cleft.
The Brain as an Information Processing System
The human brain contains about 10 billion nerve cells,
or neurons.
On average, each neuron is connected to other neurons through
about 10 000 synapses. (The actual figures vary greatly, depending
on the local neuroanatomy.)
The brain's network of neurons forms
a massively parallel information processing system.
This contrasts with conventional computers, in which a single
processor executes a single series of instructions.
In particular:
neurons typically operate at a maximum rate of about 100 Hz,
while a conventional CPU carries out several hundred million
machine level operations per second.
Despite being built from very slow hardware,
the brain has quite remarkable capabilities:
its performance tends to degrade gracefully under partial damage.
it can learn (reorganize itself) from experience.
this means that partial recovery from damage is possible if healthy units can learn to
take over the functions previously carried out by the damaged areas.
it performs massively parallel computations extremely efficiently.
For example, complex visual perception occurs within less than
100 ms, that is, 10 processing steps!
it supports our intelligence and self-awareness.
(Nobody knows yet how this occurs.)
Neural Networks in the Brain
The brain is not homogeneous.
At the largest anatomical scale,
we distinguish cortex, midbrain,
brainstem, and cerebellum.
Each of these can be hierarchically subdivided into many regions,
and areas within each region, either according to the anatomical
structure of the neural networks within it,
or according to the function performed by them.
The overall pattern of projections (bundles of neural connections)
between areas is extremely complex, and only partially known.
The best mapped (and largest) system in the human brain is the visual system,
where the first 10 or 11 processing stages have been identified.
We distinguish feedforward projections
that go from earlier processing stages (near the sensory input) to later ones (near
the motor output),
from feedback connections that go in the opposite direction.
In addition to these long-range connections, neurons also link up with
many thousands of their neighbours. In this way they form
very dense, complex local networks.
Neurons & Synapses
The basic computational unit in the nervous system
is the nerve cell, or
neuron.
A neuron has:
- Dendrites (inputs)
- Cell body
- Axon (output)
- A neuron receives input from other neurons (typically many thousands).
-> Inputs sum (approximately).
-> Once input exceeds a critical level, the neuron discharges a spike
an electrical pulse that travels from the body,
down the axon, to the next neuron(s).
This spiking event is also called depolarization, and is followed by
a refractory period, during which the neuron is unable to fire.
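The summation-and-threshold behaviour described above can be sketched as a toy simulation. This is a minimal sketch only: the threshold value, the refractory length, and the input sequence are all illustrative assumptions, not figures from the slides.

```python
# Toy sketch of the spiking behaviour described above: inputs sum,
# a spike fires once the sum exceeds a critical level, and a short
# refractory period follows. All numbers are illustrative assumptions.

def simulate_neuron(inputs, threshold=1.0, refractory_steps=2):
    """Return a 0/1 spike train, one entry per time step."""
    spikes = []
    refractory = 0
    for total_input in inputs:         # summed input arriving at this step
        if refractory > 0:             # cannot fire during the refractory period
            spikes.append(0)
            refractory -= 1
        elif total_input > threshold:  # input exceeds the critical level -> spike
            spikes.append(1)
            refractory = refractory_steps
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.5, 1.2, 1.5, 1.5, 0.3]))  # [0, 1, 0, 0, 0]
```

Note how the two strong inputs after the spike produce no output: the refractory period suppresses them.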
The axon endings (Output Zone) almost touch the dendrites or cell
body of the next neuron.
Transmission of an electrical signal from one neuron to the next
is effected by neurotransmitters, chemicals which are released
from the first neuron and which bind to receptors in the second.
This link is called a synapse.
The extent to which the signal from one neuron is passed on to the
next depends on many factors, e.g. the amount of neurotransmitter
available, the number and arrangement of receptors,
the amount of neurotransmitter reabsorbed, etc.
Variety of neurons:
From
Structure
to
Function
The following properties of nervous systems
will be of particular interest in our neurally-inspired models:
parallel, distributed information processing
high degree of connectivity among basic units
connections are modifiable based on experience
learning is a constant process, and usually unsupervised
learning is based only on local information
performance degrades gracefully if some units are removed
etc.
Artificial Neuron Models
Computational neurobiologists have constructed
very elaborate computer models of neurons
in order to run detailed simulations of particular circuits in the brain.
As Computer Scientists, we are more interested in the general properties
of neural networks, independent of how they are actually "implemented"
in the brain.
This means that we can use much simpler, abstract "neurons",
which (hopefully) capture the essence of neural computation
even if they leave out much of the details of how biological neurons work.
People have implemented model neurons in hardware
as electronic circuits, often integrated on VLSI chips.
A Simple Artificial Neuron
Our basic computational element (model neuron)
is often called a node or unit.
It receives input from
some other units,
or perhaps from
an external source.
Each input has an associated weight w,
which can be modified so as to model synaptic learning.
The unit computes some function f of the weighted sum of its inputs
Its output, in turn, can serve as input to other units.
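The unit just described can be written in a few lines. In this sketch the logistic sigmoid chosen for f, and the example weights, are assumptions for illustration; the slides leave f abstract.

```python
import math

def unit_output(inputs, weights, bias=0.0):
    """One model neuron: apply f to the weighted sum of the inputs.
    Here f is assumed to be the logistic sigmoid."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Two inputs; the weights play the role of modifiable synaptic strengths.
print(unit_output([1.0, 0.5], [0.8, -0.4]))  # sigmoid(0.6) ~ 0.646
```

Because the output lies between 0 and 1, it can directly serve as an input to further units of the same kind.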
Fitting a Model to Data
Actual automobile data: Miles per Gallon (MPG) vs Weight, for 74 cars.
The scatter plot captures the general trend:
‘‘the heavier the car, the greater the consumption’’
Question: How can we ‘‘learn’’ something from this data?
By using linear regression,
we can draw the best-fit line y = w1*x + w0
This is a simple model that can be used in the future
for prediction purposes.
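A least-squares fit of y = w1*x + w0 can be sketched in a few lines. The five (weight, MPG) pairs below are invented for illustration and are not the actual 74-car data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for the line y = w1*x + w0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    w0 = my - w1 * mx
    return w1, w0

# Invented data following the trend ''the heavier the car,
# the greater the consumption'' (weight in 1000 lb vs MPG).
car_weight = [2.0, 2.5, 3.0, 3.5, 4.0]
mpg = [30.0, 26.0, 22.0, 18.0, 14.0]
w1, w0 = fit_line(car_weight, mpg)
print(w1, w0)  # -8.0 46.0 : MPG falls as weight rises
```

The fitted pair (w1, w0) is the stored ‘‘knowledge’’: predicting the MPG of an unseen car is just evaluating w1*x + w0.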
By comparing the best-fit linear model,
y = -0.6*x + 0.1,
with the function implemented by the following neural net,
y2 = w21*y1 + w20*1.0,
we realize that this naïve artificial neural net
can, in principle, ‘‘learn’’ the same automobile data.
- By taking advantage of
nonlinearities in neural activations,
we can improve the modeling task
- By taking advantage of the multivariate nature
of artificial NNets,
we can enhance the information used in the modeling task
1. mpg: continuous
2. cylinders: multi-valued discrete
3. displacement: continuous
4. horsepower: continuous
5. weight: continuous
6. acceleration: continuous
7. model year: multi-valued discrete
8. origin: multi-valued discrete
9. car name: string (unique for each instance)
Prediction using many variates
Multivariate prediction
- Learning concept
- Storage of Knowledge Acquired
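The multivariate idea above can be sketched by fitting a linear model to several attributes at once via the normal equations. The attribute values and MPG targets below are invented for illustration, and the tiny solver is a generic Gaussian-elimination routine, not anything prescribed by the slides.

```python
# Multivariate linear prediction: mpg ~ w0 + w1*weight + w2*(horsepower/100).
# All data values are invented for illustration.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]                       # pivot for stability
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                    # back-substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit(X, y):
    """Least squares via the normal equations (X^T X) w = X^T y."""
    Xd = [[1.0] + row for row in X]                   # prepend a bias column
    n = len(Xd[0])
    XtX = [[sum(r[i] * r[j] for r in Xd) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * t for r, t in zip(Xd, y)) for i in range(n)]
    return solve(XtX, Xty)

X = [[2.0, 0.60], [3.0, 0.95], [4.0, 1.50]]           # (weight, horsepower/100)
y = [30.0, 22.0, 13.0]                                # mpg targets (invented)
w = fit(X, y)
predictions = [w[0] + w[1] * a + w[2] * b for a, b in X]
print([round(p, 3) for p in predictions])             # reproduces the targets
```

Here the acquired knowledge is stored entirely in the weight vector w; adding further attributes from the list above only lengthens that vector.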