A Low-Cost Fault-Tolerant Approach for Hardware Implementation of
Artificial Neural Networks (ANNs)
A. Ahmadi, M. H. Sargolzaie, S. M. Fakhraie, C. Lucas, and Sh. Vakili
Silicon Intelligence and VLSI Signal Processing Laboratory,
School of Electrical and Computer Engineering,
University of Tehran, Tehran, Iran.
International Conference on Computer Engineering and Technology 2009 (ICCET 2009), January 23, 2009
Outline
• Introduction to ANNs
• Faults in ANNs
• Conventional Fault-Tolerant Techniques for ANNs
• Proposed Method and Simulation Results
Introduction to ANNs
What are (everyday) computer systems good at... and not so good at?

Good at:
• Rule-based systems: doing what the programmer wants them to do

Not so good at:
• Dealing with noisy data
• Dealing with unknown environment data
• Massive parallelism
• Fault tolerance
• Adapting to circumstances
Introduction to ANNs
• Neural network: information processing paradigm
inspired by biological nervous systems, such as our
brain
• Structure: large number of highly interconnected
processing elements (neurons) working together
• Like people, they learn from experience (by example)
Introduction to ANNs (Applications)
• Prediction: learning from past experience
– pick the best stocks in the market
– predict weather
– identify people with cancer risk
• Classification
– Image processing
– Predict bankruptcy for credit card companies
– Risk assessment
Introduction to ANNs (Applications)
• Recognition
– Pattern recognition: SNOOPE (bomb detector in U.S.
airports)
– Character recognition
– Handwriting: processing checks
• Data association
– Not only identify the scanned characters but also detect when the scanner is not working properly
Introduction to ANNs (Applications)
• Data Conceptualization
– infer grouping relationships
e.g. extract from a database the names of those most likely
to buy a particular product.
• Data Filtering
e.g. take the noise out of a telephone signal, signal smoothing
• Planning
– Unknown environments
– Sensor data is noisy
– Fairly new approach to planning
Introduction to ANNs
(Mathematical Representation of an Artificial Neuron)
The neuron calculates a weighted sum of its inputs and passes it through an activation function:

y = f(w1·x1 + w2·x2 + … + wn·xn) = f(Σi wi·xi)

[Figure: inputs x1 … xn are weighted by w1 … wn, summed (Σ), and fed through the activation function f() to produce the output y.]
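As an aside (not on the original slide), a minimal Python sketch of this computation, using tanh as the activation function since that is what the hardware model later in the talk implements as a LUT:

```python
import math

def neuron(inputs, weights, activation=math.tanh):
    """Compute y = f(sum_i w_i * x_i) for a single artificial neuron."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return activation(weighted_sum)

# A hypothetical 3-input neuron
print(neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.4]))
```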
Introduction to ANNs
[Figure: a layered network mapping inputs to an output.]
An artificial neural network is composed of many artificial
neurons that are linked together according to a specific network
architecture. The objective of the neural network is to transform
the inputs into meaningful outputs.
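Building on the single-neuron sketch above, a feed-forward pass through a small network is just neurons applied layer by layer. The weights below are arbitrary illustrations, not values from the talk:

```python
import math

def neuron(inputs, weights):
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

def forward(x, hidden_weights, output_weights):
    """One feed-forward pass: a layer of hidden neurons, then an output neuron."""
    hidden = [neuron(x, w) for w in hidden_weights]
    return neuron(hidden, output_weights)

# Hypothetical 2-2-1 network
print(forward([1.0, 0.5], [[0.3, -0.2], [0.7, 0.1]], [0.5, -0.6]))
```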
Outline
• Introduction to ANNs
• Faults in ANNs
• Conventional Fault-Tolerant Techniques for ANNs
• Proposed Method and Simulation Results
Faults in ANNs
[Figure: the functional unit of a neuron; defects can occur in any of its three components.]
Faults in ANNs
• ANNs are inherently fault-tolerant due to:
– The non-linearity of the network
– The distributed manner of information storage
– The number of neurons in the network
– The difference between training and operational error margins
Outline
• Introduction to ANNs
• Faults in ANNs
• Conventional Fault-Tolerant Techniques for ANNs
• Proposed Method and Simulation Results
Conventional Fault-Tolerant Techniques for ANNs
• Fault-tolerance training techniques:
These techniques use new learning algorithms, or modify popular ones, so that the trained network tolerates faults.
• Redundancy techniques:
These use additional hardware and computation time for fault detection and correction.
Conventional Fault-Tolerant Techniques
(Fault-Tolerance Training Techniques)
• In [6, 7], Chun and McNamee proposed a method that models the effects of faults in an ANN as deviations in weight values after the network has been trained.
• Sequin and Clay [5] use a stuck-at fault model to describe the effects of faults in ANNs.
• Chiu et al. [8] use a procedure that injects different types of faults into a neural network during the training process.
• Another form of fault injection is training with noisy inputs; this noise is similar to having faults in the input layer of an ANN [5, 9]. Minnix [9] analyzed the effects of training with noisy inputs.
• Horita et al. [2, 3] proposed a technique for multilayer neural networks based on modifying the learning algorithm.
Limitations of Fault-Tolerance Training Techniques
• They protect against the effects of only a limited number of faults.
• If a fault occurs in operational mode (i.e., after training), the network is not able to detect the occurrence or location of the fault.
Conventional Fault-Tolerant Techniques
(Redundancy Techniques)
• Emmerson and Damper [10] proposed the augmentation technique, in which the neurons in each hidden layer are replicated by a factor n and the weights of the augmented layers are then divided by n.
[Figure: a standard network next to its augmented version with replicated hidden-layer neurons.]
• Phatak and Koren [11] developed a procedure to build fault-tolerant ANNs by grouping the hidden layers of a feed-forward neural network and replicating this group by a factor of n.
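To make the augmentation idea concrete, here is a minimal NumPy sketch (my illustration, not the authors' code): replicating each hidden neuron n times and dividing the outgoing weights by n leaves the network function unchanged, while any single replica now carries only 1/n of the signal.

```python
import numpy as np

def augment_hidden_layer(W_in, W_out, n):
    """Replicate each hidden neuron n times and divide the outgoing
    weights by n, preserving the overall network function.

    W_in:  (hidden, inputs)  -- weights into the hidden layer
    W_out: (outputs, hidden) -- weights out of the hidden layer
    """
    W_in_aug = np.repeat(W_in, n, axis=0)        # replicas share incoming weights
    W_out_aug = np.repeat(W_out, n, axis=1) / n  # each replica carries 1/n of the output
    return W_in_aug, W_out_aug
```

If one replica fails, the output error is bounded by roughly 1/n of that neuron's original contribution, which is where the added robustness comes from.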
Outline
• Introduction to ANNs
• Faults in ANNs
• Conventional Fault-Tolerant Techniques for ANNs
• Proposed Method and Simulation Results
Proposed Method and Simulation Results
Different components of an artificial neuron:
[Figure: a hardware neuron composed of a weights ROM, a register, a MAC (multiply-accumulate) unit, and a tanh() activation LUT.]
Proposed Method and Simulation Results
• ANN’s properties
– Neurons in each layer are independent of the others.
– Calculation of each input pattern is independent of other
patterns.
Proposed Method and Simulation Results
1. Add a spare neuron to the hidden and output layers.
2. Modify the neuron structure.
Combining these two steps yields the proposed fault-tolerant method for ANNs.
Proposed Method and Simulation Results
A simple ANN:
[Figure: a small two-input network (inputs in1, in2; hidden neurons 0, 1, 2; output out), shown before and after adding a spare neuron s to the hidden layer.]
Proposed Method and Simulation Results
• The hidden layer has (nH + 1) neurons but only nH calculations are needed.
• For each input pattern, (nH − 1) neurons perform their own calculations, while the two remaining neurons both perform the one remaining calculation.
• By comparing the two duplicated results, one neuron can therefore be tested on every input pattern.
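A behavioral Python sketch of this rotation (my illustration; the fault model and helper names are assumptions, not the authors' hardware description). For simplicity the duplicate is always computed by "the spare", whereas in the real design the pairing rotates over all nH + 1 physical neurons:

```python
import math

def neuron_output(weights, x, fault=None):
    """Weighted sum through tanh; `fault` optionally corrupts the result
    to emulate a defective unit (illustration only)."""
    y = math.tanh(sum(w * xi for w, xi in zip(weights, x)))
    return fault(y) if fault else y

def hidden_layer_with_spare(x, weight_rows, pattern_index, faults):
    """nH logical computations on nH + 1 physical neurons.
    For each pattern, the computation `pattern_index % nH` is done by
    both its own neuron and the spare; a comparator flags any mismatch."""
    n_h = len(weight_rows)
    under_test = pattern_index % n_h
    outputs, mismatch = [], False
    for c, w in enumerate(weight_rows):
        y = neuron_output(w, x, faults.get(c))
        if c == under_test:
            y_spare = neuron_output(w, x, faults.get("spare"))
            mismatch = (y != y_spare)   # comparator output
        outputs.append(y)
    return outputs, mismatch

# Example: neuron 1 is stuck at zero and is caught when its turn comes
faults = {1: lambda y: 0.0}
_, flag = hidden_layer_with_spare([0.3, -0.7],
                                  [[0.5, 0.1], [0.2, -0.9], [1.0, 0.4]],
                                  pattern_index=1, faults=faults)
print(flag)  # True: the duplicated results disagree on this pattern
```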
Proposed Method and Simulation Results
For each input pattern, one of the hidden neurons is tested.
[Figure: the example network over successive patterns; the pair performing the duplicated calculation rotates among hidden neurons 0, 1, 2 and the spare.]
Proposed Method and Simulation Results
• The conventional neuron structure cannot be used in this method:
– each neuron must be able to perform both its own calculation and that of another neuron.
• A new neuron structure is therefore essential.
Proposed Method and Simulation Results
Proposed neuron structure:
[Figure: the modified neuron holds two weight ROMs (Weights ROM1 and Weights ROM2: its own weights and those of another neuron); a selector chooses which ROM feeds the MAC and its register, and the accumulated sum passes through the tanh() LUT to the output.]
Proposed Method (Hardware Requirements)
• A comparator to compare the duplicated results
• A multiplexer to select the proper output
• A simple controller
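Putting the modified neuron together with this glue logic, a hedged behavioral model (illustrative; the class and signal names are my own, not from the paper):

```python
import math

class FaultTolerantNeuron:
    """Behavioral model of the proposed neuron: two weight ROMs (its own
    weights plus those of the neuron it can stand in for), a sel input
    choosing the ROM, a MAC loop, and the tanh() activation (a LUT in
    the real hardware)."""
    def __init__(self, own_weights, partner_weights):
        self.rom = [own_weights, partner_weights]  # ROM1, ROM2

    def compute(self, x, sel=0):
        acc = 0.0                       # MAC register
        for w, xi in zip(self.rom[sel], x):
            acc += w * xi               # multiply-accumulate
        return math.tanh(acc)           # activation LUT

def comparator(y_primary, y_duplicate):
    """Comparator: a mismatch between duplicated results signals a fault;
    the controller then decides which path the output MUX forwards."""
    return y_primary != y_duplicate
```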
Proposed Method and Simulation Results
(Fault-Tolerant Architecture)
[Figure: the complete fault-tolerant network. Each neuron, including the spare s0, has sel and en control inputs; comparators (Comp) check pairs of duplicated outputs and raise a fault flag (F); a controller sequences the test rotation and drives the output multiplexers.]
Proposed Method and Simulation Results
(Simulation Results)

Benchmark              Network size  Input wordlength  Weight wordlength  Normal area  FT area  Overhead
gene                   120-6-3       2                 10                 6533         12427    90%
mushroom               125-32-2      3                 10                 37260        61789    66%
soybean                82-24-19      3                 10                 27139        43347    60%
building               14-16-3       5                 10                 5040         7843     56%
character recognition  65-15-10      8                 10                 18240        27119    48%
card                   51-32-2       12                12                 36465        50465    39%
thyroid                21-24-3       10                10                 6460         7771     21%
• Advantages
– Tolerates all single faults
– Low area overhead
– Scalability
• Disadvantages
– Fault-detection latency, especially for large ANNs
• Solution: use clustering
References
[1] D.S. Phatak and I. Koren, "Complete and partial fault tolerance of feedforward neural nets," IEEE Trans. Neural Networks, vol. 6, no. 2, pp. 446-456, 1995.
[2] T. Horita et al., "Learning algorithms which make multilayer neural networks multiple-weight-and-neuron-fault tolerant," IEICE Trans. Inf. & Syst., vol. E91-D, no. 4, Apr. 2008.
[3] T. Horita et al., "A multiple weight and neuron fault tolerant digital multilayer neural network," Proc. 21st IEEE Int. Symp. on Defect and Fault-Tolerance in VLSI Systems (DFT'06).
[4] N.C. Hammadi and H. Ito, "A learning algorithm for fault tolerant feedforward neural networks," IEICE Trans. Inf. & Syst., vol. E80-D, no. 1, pp. 21-26, Jan. 1997.
[5] C.H. Sequin and R.D. Clay, "Fault tolerance in feed-forward artificial neural networks," in Neural Networks: Concepts, Applications and Implementations, vol. 4, ch. 4, P. Antognetti and V. Milutinovic, eds., New Jersey: Prentice-Hall, 1991, pp. 111-141.
[6] R.K. Chun, "Fault tolerance characteristics of neural networks," Ph.D. thesis, University of California, Los Angeles, CA, 1989.
[7] R.K. Chun and L.P. McNamee, "Immunization of neural networks against hardware faults," IEEE Int. Symp. on Circuits and Systems, New Orleans, LA, USA, 1990, pp. 714-718.
[8] C.-T. Chiu, K. Mehrotra, C.K. Mohan, and S. Ranka, "Modifying training algorithms for improved fault tolerance," IEEE Int. Conf. Neural Networks, Orlando, FL, USA, 1994, pp. 333-338.
[9] J.I. Minnix, "Fault tolerance of backpropagation neural network trained on noisy inputs," Int. Joint Conf. Neural Networks, Baltimore, MD, USA, 1992, vol. I, pp. 847-852.
[10] M.D. Emmerson and R.I. Damper, "Determining and improving the fault tolerance of multilayer perceptrons in a pattern-recognition application," IEEE Trans. Neural Networks, vol. 4, no. 5, pp. 788-793, Sept. 1993.
[11] D.S. Phatak and I. Koren, "Complete and partial fault tolerance of feed-forward neural nets," IEEE Trans. Neural Networks, vol. 6, no. 2, pp. 446-456, 1995.
Thanks for your attention
Questions?