Network Scan Visualization Using Associative Memory
ABSTRACT
Network scan patterns have been studied to monitor network activity in order to detect potential
attacks. A variety of tools currently exist for network scan pattern visualization, comparison, and
clustering. Most of them use the raw captured network scan dataset. However, the raw scan
data may contain noise and distortion that make pattern comparison difficult to interpret accurately.
Machine learning methods, associative memory in particular, have proven convenient and efficient
for pattern cognition and reconstruction. Therefore, in this article we demonstrate a network scan
pattern cognition and reconstruction system using the associative memory model.
Keywords: information visualization, security visualization, network scans, associative memory,
pattern cognition, pattern reconstruction
1. INTRODUCTION
Network attack incidents have increased rapidly in recent years. More and more researchers are putting
effort into developing systems for monitoring, analyzing, and capturing network attacks. Network scanning
has become a common incident: scanning a network is the very first step in a network attack attempt. A
network receives millions of hostile probes every day, and most of these probes perform network scans. In
order to find a possible target (a computer on the network), the attacker sends connection
requests to every network address or IP address and listens for replies. There are a variety of scanning
methods, such as ping sweeping, port knocking, OS fingerprinting, and firewalking. By analyzing
source IP addresses and packet arrival timing patterns retrieved from the network scans, we may
be able to identify the attacker's choice of tools, physical platform, and/or network location. Therefore,
identifying network scan data patterns would help in detecting malicious activities and enhancing
network security.
Because most network scans yield raw captured data that are noisy and distorted, direct
comparison of network scan data patterns can be difficult and inaccurate. A way to remove the
noise and restore the original scan activity pattern is needed. In this article, a pattern cognition and
reconstruction system using associative memory is proposed.
2. PREVIOUS WORK
Research on network security has been popular for the last decade. Many systems have been
developed to visualize and compare network scan patterns in order to find potential attacks.
Muelder's paper [1] presents a means of facilitating the characterization process by
using visualization and statistical techniques to analyze the patterns found in the timing of network
scans. The system allows large numbers of network scans to be rapidly compared and subsequently
identified. The work in [2] uses a parallel coordinates system to display scan details and characterize attacks.
Other network activity visualization tools include Mirage [3], PortVis [4], NVisionIP [5], SeeNet
[6], and the Spinning Cube of Potential Doom [7]. However, all of these systems use the original scan
data, which contain noise and distortion.
Machine learning methods, associative memory models in particular, have been widely applied in the
pattern recognition and classification area. Tavan et al. [8] extended the neural concepts of topological
feature maps towards self-organization of auto-associative memory and hierarchical pattern
classification in 1990. In [9], the authors proposed a technique based on a neural network
model for performing information retrieval in a pictorial information system. The neural network
provides auto-associative memory operation and allows the retrieval of stored symbolic images using
erroneous or incomplete information as input. In [10], Kuldarni and Yazdanpanahi developed a
software simulation of the generalized bidirectional associative memory (BAM), and used it to store
and retrieve sets of images, using partial or noisy images as the stimulus vectors. In the papers of Y.
Dai et al. [11][12][13], an associative memory model utilizing the facial action feature rate of
occurrence for happiness, easiness, uneasiness, disgust, suffering, and surprise is proposed.
3. NETWORK SCAN DATA VISUALIZATION
The scans are of a fixed-size network consisting of 65,536 consecutive IP addresses. The visualization
of a single scan uses the same technique as in [1]. The scan is displayed
in a 256×256 grid, where the x and y axes are the third and fourth bytes of the destination IP addresses.
The raw scan data are composed of arbitrary pairs of destination addresses and times. Various
transformations were performed to create a set of modes that visualize different aspects of the
data in detail. For example, mode 20 is used to visualize the number of connections per unique address and
mode 21 visualizes the time span between the first connection attempt and the last connection
attempt to each address.
Some select modes are listed below:
mode-20: f(a) = N(v), the number of visits per unique address
mode-21: f(a) = tFirst - tLast, the revisit-span for each address
mode-22: f(a) = tFirst - E(tFirst), time deviance for first probes
mode-23: f(a) = tLast - E(tLast), time deviance for last probes
mode-24: f(a) = d(tFirst), time delta on sequential addresses, first probe
mode-25: f(a) = d(tLast), time delta on sequential addresses, last probe
Mode 22 is used in this article. The data is binary-coded. Two bits are used for each IP address. If the
captured time of the first probe is earlier than the expected time, we give the neuron value “11”; if it is
later than the expected time we give the value “00”; otherwise it is assigned “10”.
Figure 1. Sample network scan data patterns of mode 22
Figure 1 gives an example of two network scan patterns of mode 22. The blue pixels are neurons with
value “11”, red pixels are neurons with value “00”, and black pixels are neurons with value “10”.
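A minimal sketch of this two-bit coding is given below. It assumes the first-probe times and the expected times are already available as length-65536 arrays indexed by the last two bytes of the destination address; the array layout and function name are ours, not from the paper.

```python
import numpy as np

def encode_mode22(t_first, t_expected):
    """Two-bit mode-22 coding: 11 = first probe earlier than expected,
    00 = later than expected, 10 = on time (a sketch; t_first and
    t_expected are hypothetical length-65536 arrays)."""
    bits = np.empty((65536, 2), dtype=np.uint8)
    early = t_first < t_expected
    late = t_first > t_expected
    bits[early] = (1, 1)
    bits[late] = (0, 0)
    bits[~early & ~late] = (1, 0)
    # Flatten to a 131072-bit vector, or reshape to (256, 256, 2) for display.
    return bits.reshape(-1)
```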
Figure 2. The data flow of the system: the controlled patterns (training data) are encoded by associative memory learning into a weight matrix; the raw data (deviant patterns) are then passed through associative memory reconstruction, which uses that weight matrix to produce the reconstructed scan data pattern
Figure 3. The system structure
The data flow of the system is given in Figure 2 and the structure of the system is shown in
Figure 3. The system has two stages: an encoding stage and a decoding stage. In the encoding stage, we
input all the controlled patterns and encode them into one weight matrix. In the decoding stage, we
input the deviant patterns and the system returns the reconstructed patterns using the weight
matrix created in the encoding stage.
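As a rough sketch of this two-stage flow (the function names here are placeholders, not from the paper; concrete versions of both stages are sketched in Section 5):

```python
# Hypothetical driver mirroring Figures 2 and 3: the encoding stage folds all
# controlled patterns into one weight matrix, and the decoding stage uses that
# matrix to reconstruct each deviant pattern.
def run_system(controlled_patterns, deviant_patterns, encode, reconstruct):
    W = encode(controlled_patterns)                       # encoding stage
    return [reconstruct(W, p) for p in deviant_patterns]  # decoding stage
```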
4. THE ASSOCIATIVE MEMORY
Let us start with how the brain works and how the memory system works. An example is given by Jeff
Hawkins’s book On Intelligence. “How do you catch the ball using memory?” Your brain has a stored
memory of the muscle commands required to catch a ball. When a ball is thrown, three things happen.
First, the appropriate memory is automatically recalled by the sight of the ball. Second, the memory
actually recalls a temporal sequence of muscle commands. And third, the retrieved memory is adjusted
as it is recalled to accommodate the particulars of the moment, such as the ball’s actual path and the
position of your body. The memory of how to catch a ball was not programmed into your brain; it was
learned over years of repetitive practice, and it is stored, not calculated, in your neurons.
Having gained some sense of how memory works, it is easier to understand the associative
memory model. The concept of associative memory was first proposed by T. Kohonen in his 1977 book,
Associative Memory: A System-Theoretic Approach. Later, in the 1980s, J.J. Hopfield's studies of
collective computation in neural networks introduced the concept of content-addressable memory [14].
Hopfield's memory models played an important role in the resurgence of interest in artificial neural
networks, and Hopfield networks are probably the most prominent of the associative neural network
memories. The nature of associative memory is much like human memory. For example, when you are
given a person's name, "Raymond", you may immediately recall many characteristics of that person,
such as "he has a big nose", "brown eyes", "his girlfriend is Carina", "he is a student at UC Davis".
When you are given some stimulus information, possibly with mistakes, such as "Raymond; blue eyes;
a student of UCD", you would be able to respond "No, his eyes are not blue, they are brown."
The associative memory model is attractive in many applications, for example pattern cognition and
reconstruction, image processing, face, character, and voice recognition, databases, control systems,
and robotics. Associative memories are often able to produce correct response patterns even though
the stimulus patterns are distorted or incomplete.
A variety of associative memory models have been studied over the last two
decades. By way of taxonomy, neural associative memories may be either auto-associative or
hetero-associative. For an auto-associative memory, the training input and the target output
are identical; the network may thus be thought of as memorizing a pattern by associating that pattern
with itself. The simple single-layer linear associative memory (LAM) and Hopfield networks are
auto-associative memories. We can obtain a matrix auto-associative memory by forming the outer
product of a pattern vector with itself; by superimposing many such matrices, we can design a memory
that stores several patterns and provides auto-associative recall.
By contrast, a hetero-associative memory maps between different and distinct patterns.
The auto-associative memory can therefore be considered a special case of the hetero-associative
memory in which the input pattern is the same as the output pattern. The bidirectional associative memory
(BAM), proposed by Kosko in 1987, is a good example of a hetero-associative memory.
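As a minimal sketch of the distinction, assuming bipolar {+1, -1} pattern vectors (the function names below are illustrative, not from the paper):

```python
import numpy as np

def auto_associative_matrix(patterns):
    # Auto-association: each pattern is linked to itself via an outer product,
    # and the outer products for all stored patterns are superimposed.
    return sum(np.outer(p, p) for p in patterns)

def hetero_associative_matrix(pairs):
    # Hetero-association: each stimulus x is linked to a distinct response y.
    return sum(np.outer(y, x) for x, y in pairs)

def recall(W, probe):
    # In both cases, recall is one matrix-vector product and a threshold.
    return np.sign(W @ probe)
```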
Figure 4. Auto-associative and hetero-associative memory: in auto-associative memory the input pattern and the output pattern are identical, while in hetero-associative memory they are different
Alternatively, networks can be classified as feed-forward or feed-back (recurrent). In feed-forward
networks, information flows only from input to output; the LAM model is one of the feed-forward
memories. Feed-back networks contain connections among the neurons that enable recurrent
operation. They are therefore iterative and converge to a final pattern with the minimum energy
corresponding to the desired association. Hopfield networks and the BAM are both recurrent memories.
Figure 5. A feed-forward network and a feed-back network (layer 1, layer 2, and layer 3 neurons, with the direction of data flow shown)
The classification of the associative memories and their corresponding examples is given in the table below.

Associative Memory     Feed-Forward    Feed-Back
Auto-Associative       LAM             Hopfield Networks
Hetero-Associative     -               BAM
In such auto-associative memories a number of different patterns can be stored such that, if any one of
them is presented (i.e., the memory is set in one of the stored states), it will remain stable in that state.
When a distorted version of a stored pattern is presented, the memory will evolve from that state to the
stable stored state. The convergence properties and storage capacity of Hopfield networks have been
examined in the literature. The recurrent models, unlike the feed-forward models, require many
iterations before retrieving a final pattern.
5. NETWORK SCAN PATTERN COGNITION AND RECONSTRUCTION
USING HOPFIELD NETWORKS
5.1 Hopfield Networks
Our first application in the system uses Hopfield networks, so let us take a deeper look at them here. A
Hopfield network stores a set of fundamental memories; when given a piece of deviant memory, it
returns the correct memory, that is, one of the fundamental ones.
Figure 6. An object at an arbitrary position will always go to the closest stable state
As exhibited by Figure 6, a graph of peaks and valleys, the black dots in the valleys are the stable
states, i.e., the fundamental memories. When an object, indicated by the red dot, is placed at an arbitrary
position on the curve, it will fall into the nearest valley: the stable state with the minimum energy. To
illustrate further, the stable states may be thought of as certain positions on a panel (see Figure 7). Each
stable state, shown as a black dot, covers its surrounding area. At any position within its coverage area,
an object has the minimum Hamming distance to the corresponding stable point. So, given an
arbitrary point on the panel, the system is able to converge to the nearest stable state.
Figure 7. The stable states and their coverage areas
5.2 The Hopfield Networks algorithm and the energy function
The associative memory structure is commonly built on a neural network. Our system uses an
associative memory model designed to map stimulus vectors {X1, X2, ..., XN} to response vectors
{Y1, Y2, ..., YN}. The stimulus vectors and the response vectors are memorized, or associated together,
using a weight matrix W. So we have:
[Y1, Y2, ..., YN] = W [X1, X2, ..., XN].
A four-neuron interconnected neural network is given in Figure 8. The circles represent neurons and the
directed curves represent the direction of information flow through the corresponding weight Wij. Each
neuron has a value of either 1 or 0, and there is a weight between each pair of neurons. A higher
weight Wij indicates a higher probability that neuron j will fire (value[j] = 1) when
neuron i is firing (value[i] = 1). The weight matrix is usually symmetric, such that Wij = Wji and
Wii = 0.
Figure 8. An interconnected neural network of four neurons, with a weight Wij on each directed connection
The response vectors are usually the locally stable points of the system. In our system, the deviant
patterns are the stimulus vectors and the controlled patterns are the response vectors. Each pattern has
65,536 (256×256) pixels, and the value of each pixel is binary as described before. We create an
associative network of 65,536 neurons, each of which has two states: 1 (firing) or 0 (not firing).
Figure 9. The node connection graph of the Hopfield network: there is a weight between any pair of nodes in the same pattern
There is a weight between any pair of the neurons. For example, let Vi be the value of the i-th neuron
and Vj the value of the j-th neuron in controlled pattern k. The weight between node i and node j is
calculated as

$$W_{ij}^{k} = \begin{cases} (2V_i - 1)(2V_j - 1), & \text{if } i \neq j \\ 0, & \text{if } i = j \end{cases}$$
The final weight matrix is calculated by

$$W_{ij} = \sum_{k=1}^{K} W_{ij}^{k}$$

Here, k indexes the k-th controlled pattern and K is the total number of controlled patterns.
The sum is done over all the weights in each of the controlled patterns. So the correlation weight matrix
W superimposes the information of all the controlled patterns on the same memory. There is no
correlation between these controlled patterns.
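A sketch of this encoding stage, assuming the controlled patterns are supplied as binary {0, 1} row vectors of length n (the function and variable names are ours):

```python
import numpy as np

def encode(controlled_patterns):
    """Superimpose the outer products (2V-1)(2V-1) of all controlled
    patterns into one weight matrix with a zero diagonal."""
    n = controlled_patterns.shape[1]
    W = np.zeros((n, n))
    for v in controlled_patterns:
        bipolar = 2 * v - 1          # map {0, 1} -> {-1, +1}
        W += np.outer(bipolar, bipolar)
    np.fill_diagonal(W, 0)           # W_ii = 0; W stays symmetric (W_ij = W_ji)
    return W
```

Superimposing the outer products in this way is exactly the summation over k given above.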
In the decoding stage, the system is given an unknown pattern, which is a distorted or incomplete
controlled pattern. The system will decode the deviant pattern and output the original controlled
pattern.
When it goes through the network, each neuron in the network is updated by

$$V_i = \begin{cases} 1, & \text{if } \sum_{j=1}^{n} W_{ij} V_j \geq \theta \\ 0, & \text{if } \sum_{j=1}^{n} W_{ij} V_j < \theta \end{cases}$$
Here, θ is a fixed threshold; n is the number of neurons. The threshold could be set by the user.
Hopfield's contribution received considerable attention because he presented the memory in terms of an
energy function and incorporated asynchronous processing at the individual processing elements
(neurons).
The stable state of the network is the vector composed of the activity levels, or states, of the ordered
processing elements. Stable states have an associated energy (Lyapunov) function given by
$$E = -\frac{1}{2} \sum_{j} \sum_{i \neq j} W_{ji} V_j V_i$$

where Wij is the weight from neuron i to neuron j and Vi is the value of the i-th neuron in the network.
As an iterative network, the neurons keep updating, one at a time, until convergence occurs. When the
network has converged, it has reached a minimum of the energy function, and no individual neuron is
motivated to change when evaluated. Hopfield networks always converge to a state of minimum energy
because the energy function is a Lyapunov function. A Lyapunov function is a function that decreases under
the dynamical evolution of a system and that is bounded below. If a system has a Lyapunov function,
then its dynamics are bound to settle down to a fixed point, which is a local minimum of the Lyapunov
function, or to a limit cycle, along which the Lyapunov function is constant. Chaotic behavior is not
possible for a system with a Lyapunov function. If a system has a Lyapunov function, then its state
space can be divided into basins of attraction, one basin associated with each attractor.
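A sketch of the decoding stage that follows from the update rule and energy function above: neurons are updated asynchronously until no neuron changes, i.e., until a local minimum of the energy is reached. Binary {0, 1} values and a user-set threshold θ are assumed; the helper names are ours.

```python
import numpy as np

def energy(W, v):
    # E = -1/2 * sum over i != j of W_ij * v_i * v_j  (W has a zero diagonal)
    return -0.5 * v @ W @ v

def decode(W, deviant, theta=0.0, max_sweeps=100):
    v = deviant.astype(float).copy()
    for _ in range(max_sweeps):
        changed = False
        for i in np.random.permutation(len(v)):      # asynchronous updates
            new_vi = 1.0 if W[i] @ v >= theta else 0.0
            if new_vi != v[i]:
                v[i] = new_vi
                changed = True
        if not changed:     # converged: no neuron is motivated to change
            break
    return v
```

Running decode on a noisy copy of a stored pattern would be expected to return the nearest stored pattern, as in the scenarios of Section 5.3.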
5.3 Classification of network scan pattern based on the associative
memory model
To simply demonstrate how the system works, let us take only four controlled patterns as input. Figure
10 gives the visualization of the four patterns. The weight matrix is calculated using the input
controlled patterns.
Figure 10. The four input patterns in our sample
Scenario 1: Input one of the controlled patterns
When the system is given one of the controlled patterns as input, it will recall the same controlled
pattern saved in the system.
Figure 11 shows the input and output patterns. It is obvious that the controlled pattern is recalled
exactly as it should be.
Figure 11. The system recalls the same pattern when we input one of the control patterns
Scenario 2: Input a deviant pattern
In this scenario, the system is given a deviant pattern. It might be one of the controlled patterns with
noise. After reconstruction, it will recall the original controlled pattern.
Figure 12 shows the result. We input the pattern to the left. The system returns the right pattern, which
is the second controlled pattern.
Figure 12. The system reconstructs the pattern when given a distorted pattern
Scenario 3: Given an incomplete controlled pattern
When the system is given an incomplete pattern whose original is stored in the system as a
controlled pattern, the system assigns the missing pixels a default value of "0" and then returns the
corresponding complete pattern.
Figure 13 shows the input pattern, which contains only half of the first controlled pattern, and the
output: the complete pattern.
Figure 13. The system restores the pattern when given an incomplete pattern
5.4 Some drawbacks of Hopfield Networks
The Hopfield network model has had a profound, stimulating effect on the scientific community in the
field of neural network models, and has been successfully applied to optimization problems in neural
computation such as the traveling salesman problem (TSP). However, it has some drawbacks when
applied to the network scan pattern classification problem.
• The model requires huge memory space
The model requires a large space to store the weight matrix. Since we have 64K neurons in the
whole pattern, the weight matrix has 64K × 64K entries; at one byte per weight this is about 4 GB of
space, and the requirement grows as O(n²) in the number of neurons. This is not a practical memory
requirement. Moreover, the weight matrix is symmetric about the diagonal, so allocating the full
matrix wastes roughly half of that space.
• The training process is long
The training process stores all the control patterns in the associative memory weight matrix, so
for each pattern all the weights in the matrix need to be updated. The time consumed is O(k·n²),
where n is the number of neurons and k is the number of control patterns. However, the weight
matrix is reusable: if new control patterns are added, the system only needs the extra time to add
the weights contributed by the new patterns.
• The memory can fail in various ways
The system can fail in several ways: individual bits in some memories might be corrupted;
entire memories might be absent from the list of attractors of the network; and spurious additional
memories, unrelated to the desired memories or derived from the desired memories by operations
such as mixing and inversion, may also be present.
The most common failure observed is that the system creates additional memories by mixing and
inversion. The reason is that in Hopfield networks the inversion of a control pattern is also an attractor
with its own basin of attraction. For example, if we store the left pattern of Figure 14, the pattern on
the right will be treated as a control pattern too.
Figure 14. Example of inverted patterns
The last failure mode might in some contexts actually be viewed as beneficial. For example, if a
network is required to memorize examples of valid sentences such as ‘Ray loves Carina’ and
‘Raymond gets cake’, we might be happy to find that ‘Raymond loves cake’ was also a stable state of
the network. We might call this behavior a generalization.
6. NETWORK SCAN PATTERN COGNITION AND RECONSTRUCTION
USING BIDIRECTIONAL ASSOCIATIVE MEMORY
6.1 Bidirectional Associative Memory
As discussed above, the application of the Hopfield network to network scan pattern visualization has
several drawbacks. Therefore, we apply a second type of associative memory, the bidirectional
associative memory, in our system. Again, let us first take a look at the bidirectional associative
memory itself.
6.2 The BAM algorithm and the energy function
The BAM is a hetero-associative memory, so the X layer and the Y layer of the network may have
distinct dimensions.
Figure 15. Neuron structure of the BAM model: layer X (neurons 1 to 131072) is fully connected to layer Y (neurons 1 to 10) through the weight matrix W[131072][10]
The BAM structure is also built on a neural network. The BAM model maps stimulus
vectors to response vectors {(X1, Y1), ..., (Xi, Yi), ..., (XN, YN)}. We use bipolar coding for the two states
of the neurons: firing is 1 and not firing is -1. Therefore, Xi ∈ {1, -1}^n and Yi ∈ {1, -1}^m. There is a weight
between each pair of neurons in layer X and layer Y, and no weight between neurons within the
same layer. The memory has two-way retrieval capability: Xi ↔ Yi.
In layer X, each pattern has 65,536 (256×256) pixels and we use two bits for each pixel: 11 for an
early probe, 00 for a late probe, and 10 for an on-time probe. In total, there are 131,072 neurons. In
layer Y, the values of the neurons encode the index of the corresponding pattern in layer
X; we use 10 neurons for layer Y. For example, for the first control pattern the index is 1, so the
values of the layer Y neurons are (-1,-1,-1,-1,-1,-1,-1,-1,-1,1).
The two patterns in layer X and layer Y are stored as a pair; some example pairs (a layer X pattern
alongside its layer Y code) are shown in the figure below.
(Figure: example pattern pairs in layer X and layer Y)
There is a weight between each neuron in layer X and each neuron in layer Y. For example, let xi be the
value of the i-th neuron in layer X and yj the value of the j-th neuron in layer Y for the k-th pair of
controlled patterns. The weight between the two nodes is calculated as:
$$W_{ij}^{k} = x_i^{k} \, y_j^{k}$$
The final weight matrix is calculated by

$$W_{ij} = \sum_{k=1}^{K} W_{ij}^{k}$$

or, in weighted form with per-pair coefficients $q_k$,

$$W_{ij} = \sum_{k=1}^{K} q_k W_{ij}^{k}$$

Here, k indexes the k-th pair of controlled patterns.
In the decoding stage, the system is given an unknown pattern, which is a distorted or incomplete
controlled pattern. The system will decode the deviant pattern and output the original controlled
pattern.
When it goes through the network, each neuron in the network is updated by

$$y_i = \begin{cases} 1, & \text{if } \sum_{j=1}^{131072} W_{ij}\, x_j > \theta \\ \text{previous } y_i, & \text{if } \sum_{j=1}^{131072} W_{ij}\, x_j = \theta \\ -1, & \text{if } \sum_{j=1}^{131072} W_{ij}\, x_j < \theta \end{cases}$$

Here, θ is a fixed threshold that can be set by the user.

$$x_j = \begin{cases} 1, & \text{if } \sum_{i=1}^{10} W^{T}_{ij}\, y_i > \theta \\ \text{previous } x_j, & \text{if } \sum_{i=1}^{10} W^{T}_{ij}\, y_i = \theta \\ -1, & \text{if } \sum_{i=1}^{10} W^{T}_{ij}\, y_i < \theta \end{cases}$$
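A sketch of the BAM encoding and its two-way retrieval under the update rules above, assuming bipolar layer X vectors and 10-neuron layer Y codes (the function names are ours, not from the paper):

```python
import numpy as np

def bam_encode(x_patterns, y_patterns):
    # W has shape (10, 131072): one outer product y x^T per stored pair.
    return sum(np.outer(y, x) for x, y in zip(x_patterns, y_patterns))

def _threshold(net, previous, theta):
    # Keep the previous state when the net input equals the threshold.
    out = previous.copy()
    out[net > theta] = 1
    out[net < theta] = -1
    return out

def bam_recall(W, x, theta=0.0, max_sweeps=20):
    # First X -> Y pass; ties fall back to an arbitrary all-ones start state.
    y = _threshold(W @ x, np.ones(W.shape[0], dtype=int), theta)
    for _ in range(max_sweeps):
        new_x = _threshold(W.T @ y, x, theta)    # Y -> X pass
        new_y = _threshold(W @ new_x, y, theta)  # X -> Y pass
        if np.array_equal(new_x, x) and np.array_equal(new_y, y):
            break                                # stable pair reached
        x, y = new_x, new_y
    return x, y
```

In this sketch the layer Y result acts as the classification (which control pattern the input matches), and the layer X result is the reconstructed pattern.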
6.3 Classification of network scan pattern based on the BAM model
To demonstrate simply how the BAM model works, let us again take only four controlled patterns as
input. Figure 16 shows the visualization of the four patterns. The weight matrix is calculated using
the input controlled patterns.
Figure 16. The four control patterns
The corresponding layer Y patterns are:

Index    Bipolar Code
1        (-1,-1,-1,-1,-1,-1,-1,-1,-1,-1)
2        (-1,-1,-1,-1,-1,-1,-1,-1,-1,1)
3        (-1,-1,-1,-1,-1,-1,-1,-1,1,-1)
4        (-1,-1,-1,-1,-1,-1,-1,-1,1,1)
The results are as follows. The first four pairs of patterns show that the system successfully recalls the
same pattern when given one of the control patterns.
(Figure: input and output pattern pairs for the four control patterns)
We then randomly pick some patterns as input; the first three return the first control pattern and the
last one returns the fourth control pattern. The ratio of similarity between the input and the output is
given beside each pair.
(Figure: input and output pattern pairs for deviant inputs, with similarity ratios)
6.4 More discussions on BAM model
Compared with the Hopfield model, the BAM model requires less memory space and a shorter training
time. It takes O(mn) memory space, where m and n are the numbers of neurons in layer X and layer Y,
respectively, and the running time is O(kmn), where k is the number of control patterns. The table below
contains more detailed information on the running of the two systems.
Model               Total Pixels in Pattern    Memory Requirement    Training Time
Hopfield Network    16K                        256M                  1-2 mins
Hopfield Network    64K                        4G                    20 mins
BAM                 16K                        512K                  <1 sec
BAM                 64K                        2M                    10-30 sec
7. FUTURE WORK
There are limitations on the number of patterns the network can correctly store and recall. In
Hopfield's paper, he indicates that when the number of patterns stored is at most 5% of the
number of neurons, the system can recall 100% of the patterns. However, when the number of
patterns stored is 10% of the number of neurons, the system recalls a pattern without errors only
50% of the time. McEliece's paper [15] showed that the capacity of the Hopfield associative memory is
n/(2logn). In the future, applying different associative memory algorithms to increase the recall rate
would be worth investigating.
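For orientation only: if the n/(2logn) bound is read with the natural logarithm (an assumption on our part; [15] states several related bounds), then for a 65,536-neuron Hopfield network the estimated capacity would be roughly

$$\frac{n}{2\ln n} = \frac{65536}{2\ln 65536} = \frac{65536}{32\ln 2} \approx 2955 \text{ patterns.}$$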
8. CONCLUSION
In our network scan pattern visualization system, a machine learning method, associative memory, has
been used for network scan pattern reconstruction. Given a set of controlled scan patterns and
a noisy or incomplete pattern, the system removes the noise and successfully returns the complete
pattern. The restored patterns are much more convenient for further studies of network scan
patterns, such as pattern comparison or pattern clustering to detect malicious network activities.
The results therefore demonstrate the feasibility of applying associative memory models to network scan
pattern cognition and reconstruction, and further research is recommended. Furthermore, in the system
demonstrated in this article, we built two models based on associative memory: the Hopfield
network and the BAM. The results show that both models are able to reconstruct and classify the input
patterns. However, taking into consideration the requirements on system memory space and processing
time, the BAM model is superior in this application.
REFERENCES
[1] C. Muelder, K.-L. Ma, and T. Bartoletti. "A Visualization Methodology for Characterization of
Network Scans."
[2]Gregory Conti and Kulsoom Abdullah. “Passive visual fingerprinting of network attack tools.” In
VizSEC/DMSEC ’04: Proceedings of the 2004 ACM workshop on Visualization and data mining for
computer security, pages 45–54, New York, NY, USA, 2004. ACM Press.
[3]Tin Kam Ho. “Mirage: A tool for interactive pattern recognition from multimedia data.” In Proc. of
Astronomical Data Analysis Software and Systems XII, 2002.
[4] J. McPherson, K.-L. Ma, P. Krystosk, T. Bartoletti, and M. Christensen. "PortVis: A tool for
port-based detection of security events." In ACM VizSEC 2004 Workshop, pages 73-81, 2004.
[5]Kiran Lakkaraju, Ratna Bearavolu, and William Yurcik. NVisionIP—a traffic visualization tool for
security analysis of large and complex networks. In International Multiconference on Measurement,
Modelling, and Evaluation of Computer-Communications Systems (Performance TOOLS), 2003.
[6] Richard A. Becker, Stephen G. Eick, and Allan R. Wilks. Visualizing network data. IEEE
Transactions on Visualization and Computer Graphics, 1(1):16–28, 1995.
[7] Stephen Lau. The spinning cube of potential doom. Communications of the ACM, 47(6):25–26,
2004.
[8] Watta, P., Wang, and Hassoun, M. "Recurrent Neural Nets as Dynamical Boolean Systems
with Application to Associative Memory." IEEE Transactions on Neural Networks, 8(6), 1997.
[9] Andreas Stafylopatis , Aristidis Likas, “A Pictorial Information Retrieval Using the Random
Neural Network”, IEEE Transactions on Software Engineering, v.18 n.7, p.590-600, July 1992
[10] "Generalized bidirectional associative memories for image processing." In Proceedings of the
1993 ACM/SIGAPP Symposium on Applied Computing, 1993.
[11] Y. Dai et al. “Recognition of facial expressions based on the Hopfield memory model”,
Proceedings of IEEE ICMCS‘99, Vol.2, pp.133-137(1999), Italy.
[12] Y. Dai et al. “Facial expression recognition of person without language ability based on the optical
flow histogram”, IEEE Proc. of ICSP‘2000, pp.1209-1212(2000), China.
[13] Y. Dai, Y. Shibata, T. Ishii, K. Hashimoto, K. Katamachi, K. Noguchi, N. Kakizaki, and D. Cai.
"An associate memory model of facial expressions and its application in facial expression recognition
of patients on bed." IEEE International Conference on Multimedia and Expo, 2001.
[14] J.J. Hopfield, “Neural networks and physical systems with emergent collective computational
abilities”, Proc. Natl. Acad. Sci. USA 79, 2554-2558 (1982).
[15] R.J. McEliece, E.C. Posner, E.R. Rodemich and S.S. Venkatesh, “The capacity of the Hopfield
associative memory”, IEEE Trans. Inform. Theory 33, 461-482 (1987).