DEEP LEARNING WITH DIFFERENT TYPES OF NEURONS

RAFI TALEB AND WALTER KOSTERS
LEIDEN INSTITUTE OF ADVANCED COMPUTER SCIENCE
[email protected]
INTRODUCTION

DEEP LEARNING hypothesizes that in order to learn high-level representations of data, a hierarchy of intermediate representations is needed. In the vision case the first level of representation could be Gabor-like filters, the second level could be line and corner detectors, and higher-level representations could be objects and concepts. Deep learning networks can be trained for both supervised and unsupervised learning tasks. The architecture of a deep learning network is similar to that of an ordinary neural network, but it has more hidden layers.

RESEARCH QUESTION

In this project we will try to improve the accuracy of deep learning by using different types of neurons in the network.
METHOD

In this project we will implement flexible code to construct deep learning networks. We will use different kinds of neurons and connect them using different kinds of architectures; e.g., we vary the number of layers. In particular, we use different activation functions. The networks will be trained with a data set of handwritten digits, and perhaps characters from the English alphabet. Finally, the networks will be compared according to the accuracy of their results and their performance, also examining speed as well as time and space complexity.

NETWORK AND ITS ELEMENTS

An artificial neuron tries to simulate the behavior of a biological neuron.

Neuron

output = 0  if w · x + b ≤ 0
output = 1  if w · x + b > 0

Here b ≡ −threshold.
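As a minimal Python sketch of this step rule (the function name and the AND-gate weights below are our own illustration, not part of the project code):

    import numpy as np

    def perceptron_output(w, x, b):
        # Step rule: fire (output 1) only when w . x + b > 0.
        return 1 if np.dot(w, x) + b > 0 else 0

    # Illustration: weights (1, 1) with threshold 1.5 (so b = -1.5)
    # give a neuron that behaves like an AND gate.
    w, b = np.array([1.0, 1.0]), -1.5
    print(perceptron_output(w, np.array([1.0, 1.0]), b))  # prints 1
    print(perceptron_output(w, np.array([1.0, 0.0]), b))  # prints 0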
The architecture of neural networks

[Figure: a feedforward network with an input layer of four neurons (Input #1 to Input #4), one hidden layer, and an output layer with a single neuron (Output).]
The leftmost layer in this network is called the input layer, and the neurons within it are called input neurons. The rightmost or output layer contains the output neurons. The middle layer is called a hidden layer, since the neurons in this layer are neither inputs nor outputs.
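A hedged Python sketch of a forward pass through such a layered network; the hidden-layer size and the interface (a list of weight matrices plus a pluggable activation function, in the spirit of the Method section) are our own assumptions:

    import numpy as np

    def forward(x, weights, biases, activation):
        # Propagate the input through each layer in turn,
        # applying the chosen activation function to every layer's output.
        a = x
        for W, b in zip(weights, biases):
            a = activation(W @ a + b)
        return a

    # Layer sizes loosely matching the figure: 4 inputs -> 5 hidden -> 1 output.
    rng = np.random.default_rng(seed=0)
    weights = [rng.normal(size=(5, 4)), rng.normal(size=(1, 5))]
    biases = [np.zeros(5), np.zeros(1)]
    step = lambda z: (z > 0).astype(float)  # perceptron-style step activation
    print(forward(np.array([1.0, 0.0, 1.0, 1.0]), weights, biases, step))

Swapping in a different activation function here is exactly the kind of variation the Method section proposes to study.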
Activation Function

If we use a step function, a small change in the weights or bias of any single perceptron in the network can sometimes cause the output of that perceptron to flip completely. That flip may then cause the behaviour of the rest of the network to change in some very complicated way. What we would like is for a small change in a weight to cause only a small corresponding change in the output from the network. This can be achieved by introducing a new type of artificial neuron called a sigmoid neuron. Sigmoid neurons are similar to perceptrons, but modified so that small changes in their weights and bias cause only a small change in their output.
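A small Python sketch of the contrast (the sigmoid σ(z) = 1/(1 + e^(−z)) is the standard choice; the specific numbers are our own illustration): the same tiny weight change flips the step neuron's output completely, while the sigmoid neuron's output barely moves:

    import numpy as np

    def step(z):
        return 1.0 if z > 0 else 0.0

    def sigmoid(z):
        # Standard sigmoid: sigma(z) = 1 / (1 + e^(-z)).
        return 1.0 / (1.0 + np.exp(-z))

    x, b = 1.0, 0.0
    for w in (-0.001, 0.001):  # a tiny change in the weight
        z = w * x + b
        print(f"w = {w:+.3f}   step: {step(z):.3f}   sigmoid: {sigmoid(z):.5f}")
    # The step output flips from 0 to 1, while the sigmoid output
    # moves only from about 0.49975 to 0.50025.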
APPLICATIONS

DEEP LEARNING currently provides the best solutions to the following problems:

• IMAGE PROCESSING: image recognition, object detection in photographs, colorization of black-and-white images, and image caption generation.

• TRANSLATION: machine translation, i.e., automatic translation of text and automatic translation of images.

• SPEECH: speech recognition and natural language processing.

• HANDWRITING: handwriting recognition and automatic handwriting generation.
RELATED WORK

See: I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016.
See: M. Nielsen, Neural Networks and Deep Learning, neuralnetworksanddeeplearning.com/, Jan 2017.