Noise Corrupted Pattern Recognition
Ruchi Varshney
Assistant Professor, Department of Electronics & Communication Engineering,
MIT, Moradabad, UP, India
Abstract— In this paper a Hopfield Neural Network (HNN) is used for noise-corrupted pattern recognition. The patterns considered are the numerals 0, 1, 2, 3, 4, 6, and 9 and one character, namely a block. First, the HNN is trained with healthy (uncorrupted) patterns; after this supervised training, the HNN is able to recognize noise-corrupted input patterns. Second, the performance of the HNN is compared with a conventional centroid method, showing that the HNN provides accurate results even when highly noise-corrupted patterns are presented.
Keywords— pattern recognition, Hopfield neural network, centroid method
1. INTRODUCTION
Pattern recognition is the art of how machines can examine their surroundings, learn to distinguish patterns of interest from the environment, and make sound decisions to classify those patterns. In spite of many years of research, the design and implementation of a general pattern recognizer remains an elusive goal [20]. The best pattern recognizers in most instances are humans, yet how humans recognize patterns is still not understood. The human brain has been the basic motivation in the endeavor to build intelligent machines in the field of artificial intelligence [7]. Neurocomputing adopted its models from biological neuron systems [3]; this artificial neural computing applies biological concepts to machines for recognizing patterns [4-6]. Artificial intelligence, neural computing, and pattern recognition share a common body of knowledge spanning multiple disciplines. Fault tolerance is a significant feature where neural networks (NNs) come in, trying to overcome the mismatch between conventional computer processing and the working of the human brain [8].
The idea of creating a network of such neurons got a boost when McCulloch and Pitts presented their model of the artificial neuron, laying the foundations, and Hebb introduced the concept of learning. Much work was then done on simulating such networks on computers. The situation changed drastically when the book by Minsky and Papert cast a shadow on the computational ability of NNs. An interesting model, the 'Amari-Hopfield model', studied by Amari and formally introduced by Hopfield in 1982, has a wide range of applications. The HNN is a single-layer, non-linear, auto-associative [18], discrete- or continuous-time network that is easy to implement in hardware [1]. HNNs [2] are typically used for classification problems with binary pattern vectors. An HNN typically has the following interesting features: distributed representation, distributed asynchronous control, content-addressable memory, and fault tolerance [3]. Its strengths include total recall from partial or incomplete data, stability under asynchronous operation, and fault tolerance. Studies of the HNN have focused on network dynamics, memory capacity [1], [9], higher-order networks, and error correction. Hopfield proposed a method for improving the storage capacity through 'unlearning' of information [10]. Young et al. discussed a multi-layer variant of the HNN for pattern and object recognition, which converges to the single-layer model [11].
The rest of the paper is organized as follows. Section 2 introduces the basic structure of the HNN, the weight and contribution matrices, and the centroid method. Simulations and results are discussed in Section 3, and conclusions are drawn in the last section.
2. HNN
Hopfield showed that adding feedback connections to a feed-forward network yields a topology with interesting behaviors; in particular, the HNN can have memories [12]. A feed-forward network with feedback connections is shown in Figure 1.
Fig. 1 Feed Forward network with feedback connections
Networks having feedback connections are known as recurrent networks. These networks operate in a similar way to a feed-forward network and the neurons perform the same function. Referring to Fig. 1, inputs are applied at nodes x1, x2, and x3, and the outputs are determined, as in a feed-forward network, at nodes y1, y2, and y3. The difference is that once the outputs are obtained they are fed back into the inputs: output y1 is fed back into input x1, output y2 into input x2, and so on. This gives a new set of inputs (the previous outputs) and the process is repeated. The process continues until the outputs no longer change, i.e. they remain constant. At this point the network is said to have relaxed. A Hopfield network can reconstruct a pattern from a corrupted original, which means the network is capable of storing the correct (uncorrupted) pattern. Because of this, these networks are sometimes called associative memories or Hopfield memories [2]. In this case, the weights of the connections between the neurons have to be set such that the states of the system corresponding to the patterns to be stored in the network are stable. These states can be seen as 'dips' in energy space [13]. When the network is cued with a noisy or incomplete test pattern, it will recover the incorrect or missing data by iterating to a stable state which is, in some sense, 'near' to the cued pattern [19].
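The 'dips' are minima of the network's energy function. The paper does not write this function out, but for a discrete HNN with bipolar states $s_i \in \{-1,+1\}$ and symmetric weights $w_{ij}$ the standard form is

$$E = -\frac{1}{2}\sum_{i}\sum_{j \ne i} w_{ij}\, s_i\, s_j$$

Each asynchronous update can only decrease E or leave it unchanged, so the network always settles into a local minimum, and the training rule places the stored patterns at (or near) such minima.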
A. Weight and contribution matrix
In this paper the HNN is used to recognize eight (08) patterns. Each pattern is represented by a matrix of 12 × 10 bits. In an HNN every neuron can potentially be connected to every other neuron, so a two-dimensional array is used. The connection weights placed into this array form the weight matrix, which allows the HNN to recall patterns. Since an HNN has no self-feedback connections, a connection from neuron Ni to itself is not allowed. One starts with a blank 120 × 120 connection weight matrix and then trains the network to accept a pattern P, where P is a column vector of 120 × 1. To do this, P's contribution matrix is required. The contribution matrix is then added to the actual connection weight matrix. As additional contribution matrices are added to the connection weight matrix, the network is said to learn each of the new patterns. Determining the contribution matrix of P involves three steps:
I. Determine the bipolar equivalent of P by representing the binary string with -1's and 1's rather than 0's and 1's.
II. Multiply the bipolar equivalent of P by its own transpose (the outer product, which yields a 120 × 120 matrix).
III. To ensure that the HNN has no self-feedback connections, set all elements of the main diagonal to zero.
If the diagonal elements were not zero, the network would tend to reproduce the input vector rather than a stored vector. This contribution matrix can now be added to the connection weight matrix we already had. If the network is to recognize only pattern P, this contribution matrix becomes the connection weight matrix. If it should also recognize a pattern P1, then both contribution matrices are calculated and added element-wise, and the combined matrix becomes the final connection weight matrix [9].
B. Recalling patterns
To recall a pattern, pattern P is presented to each input neuron. Each neuron activates based upon the input pattern: when a neuron is presented with pattern P, its activation is the sum of all weights coming from positions that have a 1 in the input pattern; only those values are summed. Likewise, the activations of all 120 neurons can be calculated. The output neurons, which are also the input neurons, report these activations. These values are meaningless without a threshold method. A threshold method determines what range of values will cause the neuron, in this case the output neuron, to fire. The threshold usually used for an HNN is any value greater than zero [14-15].
All neurons that fired are assigned a binary 1, and all neurons that did not fire a binary 0 [16]. The final binary output from the Hopfield network would then be P, the same as the input pattern. An auto-associative neural network, such as a Hopfield network, echoes a pattern back if the pattern is recognized. Note that any HNN created to recognize a binary pattern also recognizes the inverse of that bit pattern.
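Continuing the same sketch, recall might look as follows. The paper emphasizes asynchronous operation; for brevity this version updates all neurons synchronously in each iteration, which is a simplification rather than the authors' exact procedure:

```python
import numpy as np

def recall(w, pattern, max_iter=100):
    """Iterate the network on an input pattern until the output stops changing."""
    state = np.asarray(pattern, dtype=int)
    for _ in range(max_iter):
        activation = w @ state                     # sums weights from neurons that are 1
        new_state = (activation > 0).astype(int)   # threshold: fire only if > 0
        if np.array_equal(new_state, state):       # outputs constant: network has relaxed
            return new_state
        state = new_state
    return state                                   # fallback if no fixed point is reached
```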
C. Centroid (center of mass) method
The centroid of a pattern is calculated as the average of the coordinates of its set bits. This method involves three steps:
I. Determine the centroid of the corrupted pattern.
II. Determine the distance between the centroid of the noise-corrupted pattern and the centroid of each of the original eight (08) patterns.
III. Declare the result on the basis of the minimum Euclidean distance, which corresponds to the original pattern [17].
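A possible NumPy sketch of this classifier, under the same flattened-pattern assumption and with illustrative names:

```python
import numpy as np

def centroid(pattern, rows=12, cols=10):
    """Centre of mass of the set bits of a 12 x 10 pattern (step I)."""
    grid = np.asarray(pattern).reshape(rows, cols)
    ys, xs = np.nonzero(grid)                      # coordinates of the 1-bits
    return np.array([ys.mean(), xs.mean()])        # average coordinates

def classify_by_centroid(corrupted, stored):
    """Index of the stored pattern with the nearest centroid (steps II-III)."""
    c = centroid(corrupted)
    dists = [np.linalg.norm(c - centroid(p)) for p in stored]
    return int(np.argmin(dists))                   # minimum Euclidean distance
```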
3. SIMULATION, RESULTS AND DISCUSSION
The generalized Hebb rule is implemented to train the network. To retrieve the patterns, a 120-neuron HNN has been used. Eight (08) patterns are stored: 'zero', 'one', 'two', 'three', 'four', 'six', 'nine', and 'block'. The 120 × 120 weight matrix of the Hopfield network has been calculated and stored; this constitutes the learning phase of the HNN. Noise is introduced into the patterns by randomly flipping between 0 and 100% of the bits. Each pattern is of order 12 × 10. The simulation study has been carried out for a number of examples, but the results of 3 cases are reported here.
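A sketch of this bit-flipping noise model, with the noise level given as a fraction (e.g. 0.15 flips 18 of the 120 bits, as in Case 1 below):

```python
import numpy as np

def corrupt(pattern, noise_level, rng=None):
    """Randomly flip the given fraction of a binary pattern's bits."""
    rng = rng or np.random.default_rng()
    p = np.asarray(pattern, dtype=int).copy()
    n_flip = round(noise_level * p.size)           # e.g. 0.15 * 120 = 18 bits
    idx = rng.choice(p.size, size=n_flip, replace=False)
    p[idx] ^= 1                                    # flip 0 <-> 1
    return p
```

Combined with the earlier sketches, a Case-1-style trial would then be recall(train(patterns), corrupt(six, 0.15)), where six is the stored numeral 'six'.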
Case 1.
All the patterns are shown in Fig. 2. In the first case, numeral 6 has been chosen for testing; Fig. 3 shows the chosen input. A noise level of 15% means that 18 out of 120 bits are randomly flipped; the corrupted pattern is shown in Fig. 4. Iteration continues until the original pattern is recalled. From Fig. 5 it is clearly observed that after 21 iterations the original pattern is recalled by the Hopfield network.
Fig. 2 Input Patterns
Fig. 3 Selected pattern number 6
Fig. 4 Pattern with 15% noise probability
Fig. 5 Pattern retrieval at 21st iteration
Case 2. Pattern no. 4, i.e. the numeral 'three', is considered in case 2. The selected pattern is shown in Fig. 6.
Fig. 6 Selected pattern number 4
Fig. 7 Pattern with 25 % noise probability
The noise level introduced in this pattern is 25%, i.e. 30 bits are randomly flipped; the corrupted pattern is shown in Fig. 7. Iteration stopped after the 41st iteration (Fig. 8) and the original pattern is recalled.
Fig. 8 Pattern at 41st iteration
Fig. 9 Selected pattern number 3
Case 3. In this case input pattern no. 2, i.e. the numeral 'two', has been selected for testing; it is shown in Fig. 9. 90% of the bits, i.e. 108 bits, have been flipped randomly to create the corrupted pattern. It is observed from Fig. 10 that the HNN is capable of recalling the noise-free pattern in 21 iterations, and it is clear from the result that the recalled pattern is the inverse of the chosen input pattern.
Fig. 10 Pattern at 15th iteration
From the above three cases it is observed that the HNN is capable of extracting original patterns from noise-corrupted patterns and can also recall the inverses of these patterns. A comparison has also been made between recalling patterns with the HNN and with the centroid method. Fifty observations were taken at each 5% step of the bit-flipping probability, with the patterns chosen randomly. A graph has been plotted of average error against the probability of flipping bits (Fig. 11). From the graph, the following conclusions are made:
1. Up to approximately 25% probability of flipping bits in a pattern, the HNN is 100% successful at recalling patterns, whereas with the centroid method patterns are recalled successfully only up to about 5%.
2. Beyond approximately 60% probability of flipping bits, the HNN again recalls patterns successfully, and the average error decreases drastically in comparison to the centroid method, because the HNN is capable of reproducing the inverse image of the input pattern.
3. Because bits are flipped randomly, the centre of mass of a pattern changes even for the same flipping probability, so the centroid method is not capable of recalling a pattern every time.
4. In the range of approximately 40-60% probability of flipping bits, the HNN produces a higher average error in pattern recognition than the centroid method.
Fig. 11 Comparison graph
4. CONCLUSIONS
A simple HNN-based corrupted pattern recognition scheme has been illustrated, implemented, and tested. From the simulation results it is observed that the HNN is capable of recognizing patterns corrupted by noise levels from 1% to 100%. In addition, on the basis of the simulation results it is concluded that the HNN can also recognize the inverse bit patterns of the applied input patterns.
5. REFERENCES
[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", Proceedings of the National Academy of Sciences, Vol. 79, 1982, pp. 2554-2558.
[2] B. Jessye, "Hopfield Networks", IEEE IJCNN '99, Vol. 6, 1999, pp. 4435-4437.
[3] Y. H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley Publishing Company Inc., New York, 1989.
[4] T. Yoshida and S. Omatu, "Pattern Recognition with Neural Networks", IEEE Geoscience and Remote Sensing Symposium Proceedings (IGARSS), Vol. 2, 2000, pp. 699-701.
[5] D. Le and M. Makoto, "A Pattern Recognition Neural Network Using Many Sets of Weights and Biases", Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, Jacksonville, FL, USA, 2007, pp. 285-290.
[6] S. Osowski and D. D. Nghia, "Neural networks for classification of 2-D patterns", Proceedings of ICSP2000, pp. 1568-1571.
[7] E. Rich and K. Knight, Artificial Intelligence, McGraw-Hill, New York, 1991.
[8] P. P. Patavardhan, D. H. Rao and G. Anita Deshpande, "Fault Tolerance Analysis of Neural Networks for Pattern Recognition", International Conference on Computational Intelligence and Multimedia Applications, 2007, pp. 222-226.
[9] Y. S. Abu-Mostafa and J. St. Jacques, "Information Capacity of the Hopfield Model", IEEE Transactions on Information Theory, Vol. IT-31, No. 4, 1985, pp. 461-464.
[10] J. J. Hopfield, D. I. Feinstein and R. G. Palmer, "'Unlearning' has a stabilizing effect in collective memories", Nature, Vol. 304, 1983.
[11] S. S. Young, P. D. Scott and N. M. Nasrabadi, "Object Recognition using Multi-layer Hopfield Neural Network", IEEE Transactions on Image Processing, Vol. 6, No. 3, 1997, pp. 357-372.
[12] S. Haykin, Neural Networks—A Comprehensive Foundation, 2nd edition, Pearson Education, Singapore, 1999.
[13] N. M. Kussay and I. A. K. Imad, "Gray image recognition using Hopfield neural network with multi bit plane and multi connect architecture", ICCGIV, 2006.
[14] K. S. Humayun and Y. Zhang, "Hopfield Neural Networks—A Survey", Proceedings of the 6th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering and Data Bases, Corfu Island, Greece, 2007, pp. 125-130.
[15] M. S. Kamel, "Neural networks: the state of the art", IEEE International Conference on Microelectronics (ICM), 1999, pp. 5-9.
[16] S. Bhartikar and J. M. Mendel, "The Hysteretic Hopfield Neural Network", IEEE Transactions on Neural Networks, Vol. 11, No. 4, 2000, pp. 879-888.
[17] L. da F. Costa and R. M. Cesar, Shape Analysis and Classification: Theory and Practice, CRC Press, 2000, pp. 425-430.
[18] Y.-P. Huang, "Noise corrupted pattern retrieval capability for sparsely encoded associative memory", IEEE International Conference on Neural Networks, Vol. 2, 1993, pp. 1046-1050.
[19] S. K. Chaudhari and G. A. Kulkarni, "ANN Implementation for Reconstruction of Noisy Numeral Corrupted By Speckle Noise", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 8, August 2012, pp. 316-323.
[20] K.-S. Fu, "Pattern Recognition and Image Processing", IEEE Transactions on Computers, Vol. C-25, Issue 12, pp. 1336-1346.