Some Computational Approaches for Situation
Assessment and Impact Assessment
Michael L. Hinman
Air Force Research Laboratory /IFEA
32 Brooks Rd
Rome, NY USA
[email protected]
Abstract - This paper provides an overview of several research efforts in the area of Information Fusion being conducted at the Fusion Technology Branch, Air Force Research Laboratory. It describes a series of innovative applications of traditional fusion algorithms and heuristic reasoning techniques to improve situation assessment and threat prediction. Approaches discussed include Bayesian techniques, Knowledge Based approaches, Artificial Neural Systems (Neural Networks), Fuzzy Logic, and Genetic Algorithms.

Keywords: Information Fusion, Situation Assessment, Impact Assessment, Threat Assessment, Threat Prediction, Bayesian Analysis, Fuzzy Logic, Genetic Algorithms, Neural Networks.

1 Introduction

The Joint Directors of Laboratories (JDL) Subpanel on Data Fusion originally defined Data Fusion as: a process dealing with the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates, and complete and timely assessments of situations and threats, and their significance. The process is characterized by continuous refinements of its estimates and assessments, and the evaluation of the need for additional sources, or modification of the process itself, to achieve improved results [1].

A more concise definition was later proposed by Steinberg, et al. [2]: data fusion is the process of combining data to refine state estimates and predictions.

[Figure 1 depicts the data fusion domain: Level 0 (Sub-Object Data Assessment), Level 1 (Object Assessment), Level 2 (Situation Assessment), and Level 3 (Impact Assessment) processing, supported by Level 4 (Process Refinement) processing, Human Computer Interaction, and a Data Base Management System with support and fusion databases.]

Figure 1. The JDL Data Fusion Functional Model

For the purpose of this paper, Information Fusion will be interpreted as encompassing both Level 2 (Situation Assessment) and Level 3 (Impact Assessment). Figure 1 depicts the data fusion functional model as revised in [2], which further elaborates on the composition of each of the levels as follows:

Level 0 − Sub-Object Data Assessment: estimation and prediction of signal/object observable states on the basis of pixel/signal level data association and characterization;

Level 1 − Object Assessment: estimation and prediction of entity states on the basis of observation-to-track association, continuous state estimation (e.g. kinematics) and discrete state estimation (e.g. target type and ID);

Level 2 − Situation Assessment: estimation and prediction of relations among entities, to include force structure and cross force relations, communications and perceptual influences, physical context, etc.;

Level 3 − Impact Assessment: estimation and prediction of effects on situations of planned or estimated/predicted actions by the participants, to include interactions between action plans of multiple players (e.g. assessing susceptibilities and vulnerabilities to estimated/predicted threat actions given one's own planned actions);

Level 4 − Process Refinement (an element of Resource Management): adaptive data acquisition and processing to support mission objectives.
2 Background
Browsing through the various conference proceedings,
journals, and books pertaining to data fusion, it becomes
clear that the majority of research and research
applications to date have focused primarily on Level 1
fusion. The main reason for the abundance of Level 1
activities is that the research community understands well
how to extract relevant data about physical objects. For
example, if your goal is to identify an object such as a
fruit, the physical properties that would be used to describe it would be its shape, color, texture, etc. These
are physical properties that one can easily measure and
comprehend. Similarly, if your goal is to identify an
automobile or a tank, then again the physical attributes
might include length, width, number of wheels, number of
tracks, etc. However, when one addresses the higher
levels of data fusion, the emphasis is no longer on
physical objects, but on the relationships amongst the
objects. And those relationships, particularly for impact
assessment, are poorly understood.
The following approaches to Information Fusion have recently been investigated at the Fusion Technology Branch of the Air Force Research Laboratory.
3 A Bayesian Approach
Bayesian approaches to accumulating evidence are
founded on Bayes Rule. Bayes Rule allows for the
computation of the posterior probability p(H|E,C) given
the prior probabilities p(H|C) and the class-conditional
probabilities or likelihoods p(E|H,C). H is the hypothesis,
E is the evidence, and C is the context.
This Bayesian approach has been implemented to
perform arbitrary military unit identification [5].
Another Bayesian approach to Information Fusion
relies on Bayesian Belief Networks. Bayesian Belief
Networks, which are sometimes referred to as
Probabilistic Expert Systems, uses a consistent Bayesian
framework, while overcoming some of the limitations of
rule based systems. Belief Networks are directed acyclic
graphs that use concepts of conditional independence and
dependence. Bayesian Belief networks are currently
being investigated at the Air Force Research Laboratory
for applicability to Situation Assessment.
Bayes Rule is stated as:
p(H|E,C) = p(H|C) * p(E|H,C) / p(E|C)        (1)
where the posterior probability, p(H|E,C), provides
the probability of the hypothesis H after taking into
account the evidence E in context C. The prior probability
of H given C is the belief in H before the evidence E is
even considered. The class-conditional probabilities or likelihoods give the probability of the evidence assuming that the hypothesis H and context C are correct. The term 1/p(E|C) is
independent of H, and is for normalization. A very
important aspect of probability theory is that the set of
hypotheses must be mutually exclusive and collectively
exhaustive [3,4]. If the intersection of two events is the
null set, then the events are said to be mutually exclusive.
Similarly, if the union of a set of events is the universal event,
then the events are said to be collectively exhaustive.
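As a simple illustration of Eq. (1) applied recursively over a set of mutually exclusive, collectively exhaustive hypotheses, the following Python sketch updates unit-type beliefs as evidence arrives. The hypothesis names, priors, and likelihood values are hypothetical placeholders, not parameters of the systems described below.

```python
# Minimal sketch of Bayes Rule (Eq. 1) applied to competing unit-type
# hypotheses. The hypotheses, priors, and likelihood tables below are
# illustrative placeholders, not values from the systems described here.

def bayes_update(priors, likelihoods):
    """Return posteriors p(H|E,C) from priors p(H|C) and likelihoods p(E|H,C)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    evidence_prob = sum(unnormalized.values())          # p(E|C), the normalizer
    return {h: v / evidence_prob for h, v in unnormalized.items()}

# Mutually exclusive, collectively exhaustive hypotheses (hypothetical).
posterior = {"artillery_battery": 0.3, "tank_battalion": 0.4, "hq_unit": 0.3}

# Each observation contributes a likelihood p(E|H,C) per hypothesis (hypothetical).
observations = [
    {"artillery_battery": 0.7, "tank_battalion": 0.2, "hq_unit": 0.1},
    {"artillery_battery": 0.6, "tank_battalion": 0.3, "hq_unit": 0.1},
]

for obs in observations:
    posterior = bayes_update(posterior, obs)   # posterior becomes the new prior

print(posterior)
```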
Bayesian techniques have been utilized for the successful implementation of a force aggregation capability that permits the identification of military units. Military unit templates can readily be developed from multiple databases that describe the characteristics of known military force structure. As an example, a Corps could be partitioned into sub-components (Corps Headquarters, Divisions, Regiments, Brigades, Batteries, etc.). Each of the sub-components could then either be partitioned further, or described in terms of individual elements (vehicles, radars, radios, etc.). Eventually, it is desirable to have individual templates that describe each military unit in terms of all possible combinations of sub-components and individual elements. Obviously, this can lead to a large number of plausible templates for each military unit. Each of these templates can then be compared directly to the observed data.

Data is then collected on the battlefield and processed by Level 1 (Object Assessment) processes to provide the identification and location attributes of the individual elements. With the knowledge of where individual elements or objects are on the battlefield, a clustering process is utilized to aggregate the individual elements and then compare the aggregated observations, using Bayesian techniques, against the template information of known characteristics. If the output of the Bayesian classifier is sufficiently close to a known military unit template, then the observed unit is identified as that particular military unit. This Bayesian approach has been implemented to perform arbitrary military unit identification [5].

Another Bayesian approach to Information Fusion relies on Bayesian Belief Networks. Bayesian Belief Networks, which are sometimes referred to as Probabilistic Expert Systems, use a consistent Bayesian framework while overcoming some of the limitations of rule based systems. Belief Networks are directed acyclic graphs that use concepts of conditional independence and dependence. Bayesian Belief Networks are currently being investigated at the Air Force Research Laboratory for applicability to Situation Assessment.

4 A Knowledge Based Approach
Knowledge Based approaches are being utilized to
identify vehicles based primarily on vehicle movement
information. Leveraging work that has been performed
under the Defense Advanced Research Projects Agency
(DARPA) High Performance Knowledge Bases (HPKB)
program, this new approach combines probabilistic
techniques to represent uncertainty with knowledge base
representations of the battlefield to detect patterns of
behavior for specific vehicles of interest.
The key battlefield products represented in the knowledge base to detect and identify patterns from sensor data include terrain data, road networks, and military doctrine for sequences of operation and spatial deployment.
Knowledge Based approaches typically start with an encoding of the battlefield into a knowledge base. This can be performed via knowledge engineering techniques in which a knowledge engineer sits down with several domain experts. The domain experts provide the knowledge concerning the battlefield in terms that they understand, and the knowledge engineer then transforms that knowledge into a language that the computer can understand. Together, they develop the knowledge base for the area of interest.
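As a minimal, hypothetical illustration of such an encoding (not the HPKB representation itself), the following Python sketch stores terrain, road-network, and doctrinal-sequence facts in a small fact base and shows two simple queries over it.

```python
# Minimal sketch of encoding battlefield knowledge (terrain, road network,
# doctrinal sequences) as a simple fact base that later processing can query.
# The schema and all facts are hypothetical, not the HPKB representations.

knowledge_base = {
    "terrain": {"grid_12": "restricted", "grid_13": "open", "grid_14": "open"},
    "roads": [("grid_12", "grid_13"), ("grid_13", "grid_14")],
    "doctrine": {
        # expected sequence of operations for a unit type of interest
        "artillery_battalion": ["road_march", "occupy_firing_position", "fire_mission"],
    },
}

def trafficable_neighbors(grid):
    """Return road-connected grids that are not restricted terrain."""
    neighbors = [b for a, b in knowledge_base["roads"] if a == grid]
    neighbors += [a for a, b in knowledge_base["roads"] if b == grid]
    return [g for g in neighbors if knowledge_base["terrain"].get(g) != "restricted"]

def matches_doctrine(unit_type, observed_activities):
    """Check whether observed activities occur in doctrinal order (as a subsequence)."""
    expected = iter(knowledge_base["doctrine"].get(unit_type, []))
    return all(any(act == e for e in expected) for act in observed_activities)

print(trafficable_neighbors("grid_13"))
print(matches_doctrine("artillery_battalion", ["road_march", "fire_mission"]))
```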
For the motion analysis case, the next step in the knowledge based approach is to compile the information into a form that can be compared with the data. This process generates a set of probabilistic models, Hidden Markov Models (HMMs), that capture both the doctrinal information from the battlefield as well as the uncertainty factors [6].

Hidden Markov Models consist of States, Initial Probabilities, and Transition Probabilities [7]:

• States: one state for each vehicle state we wish to model
• Initial probabilities for each state: the probability that an observation sequence will start at that state
• Transition probabilities: the probability of transitioning from one state to another

The final step in this knowledge based approach to motion pattern analysis is to perform the pattern matching between the Hidden Markov Models and a set of observation hypotheses that are generated from the input data to identify the ongoing activity. The end goal is the identification of specific vehicles inferred by the patterns of movement of those vehicles.
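A standard discrete-observation HMM additionally associates observation (emission) probabilities with each state. The following sketch, with hypothetical states, probabilities, and observation symbols, uses the forward algorithm to score an observation sequence against two candidate activity models and pick the better match; it is an illustration of the matching idea, not the doctrinal models of [6, 7].

```python
# Minimal sketch of HMM-based pattern matching (forward algorithm).
# States, probabilities, and observation symbols are hypothetical placeholders.

def sequence_likelihood(obs, states, start_p, trans_p, emit_p):
    """p(observation sequence | model) via the forward algorithm."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for symbol in obs[1:]:
        alpha = {
            s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][symbol]
            for s in states
        }
    return sum(alpha.values())

states = ["move", "halt"]
start_p = {"move": 0.6, "halt": 0.4}
trans_p = {"move": {"move": 0.7, "halt": 0.3}, "halt": {"move": 0.4, "halt": 0.6}}

# Two candidate activity models that differ only in how states emit observations.
emit_road_march = {"move": {"on_road": 0.8, "off_road": 0.2},
                   "halt": {"on_road": 0.5, "off_road": 0.5}}
emit_deployment = {"move": {"on_road": 0.3, "off_road": 0.7},
                   "halt": {"on_road": 0.4, "off_road": 0.6}}

observed = ["on_road", "on_road", "off_road", "on_road"]
scores = {
    "road_march": sequence_likelihood(observed, states, start_p, trans_p, emit_road_march),
    "deployment": sequence_likelihood(observed, states, start_p, trans_p, emit_deployment),
}
print(max(scores, key=scores.get), scores)
```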
5 Artificial Neural Systems Approach
There has been a recent resurgence of research interest in the multi-disciplinary field of artificial neural networks.
Artificial neural networks, originally
inspired by the computational capabilities of the human
brain, refer to a variety of computing architectures that
consist of massively parallel interconnections of simple
processing elements.
Artificial Neural Systems (Neural Networks) are being utilized in a couple of different applications. The first application utilizes a multi-layer network that has been trained using Back Propagation to identify pairwise preferences of analysts to support situation assessment. The second application is a prototype implementation based on Learning Vector Quantization (LVQ) and Ellipsoidal Basis Functions that postulates threat (Attack, Retreat, Feint, or Hold).

The first application is a connectionist approach to multiattribute decision making under uncertainty. This approach can be divided into two phases [8]. The first phase is the interpretation phase, which consists of the construction of the various decision alternatives, the selection of appropriate attributes (both qualitative and quantitative), and the prediction of expected values of each attribute for each alternative.

The second phase is the reasoning phase, which typically includes the determination of a preference-based utility function on the attributes, the evaluation of competing alternatives, and the selection of the alternative corresponding to the optimal choice.
Extracting domain specific knowledge from a domain expert is a tedious and often difficult task for a knowledge engineer. The knowledge engineer's task of identifying the attributes that are used by the domain expert for his/her decision making process is ripe with difficulties. Some of the difficulties that the knowledge engineer encounters are:

a) often there is no single alternative that is superior with respect to each attribute;
b) the number of alternatives and attributes may be significant, and therefore may complicate the process of knowledge elicitation from the domain expert;
c) the importance assigned by a decision maker to various attributes is usually different;
d) the information presented to the expert is noisy and incomplete;
e) quantitative attributes versus qualitative attributes.
Figure 2 illustrates the overall architecture developed for the connectionist approach to multiattribute decision making under uncertainty [8].

[Figure 2: an expert provides subjective judgments about the factors characterizing qualitative attributes; a quantification process for qualitative attributes, together with the quantitative attributes and an informational database for training and testing, feeds neural network-based expert pairwise preference modeling, whose quantified pairwise preferences are passed to an evidential reasoning model.]

Figure 2. Connectionist Approach to Multiattribute Decision Making
A connectionist approach is utilized to represent the
expert’s qualitative preference of a single alternative as
compared to another alternative. Specifically, a three-layer multilayer perceptron network is trained via a standard backpropagation algorithm [9] to represent a measure of
confidence from the domain expert of their most
preferable alternative given a pair of alternatives. In
theory, if these relationships do not conflict, then the most
preferable alternative could be found using heuristic
search. But since the information presented to the domain
expert is influenced by the difficulties described above,
the resulting set of preference relationships that are
obtained may also be conflicting [8]. Therefore, the
Dempster-Shafer based Theory of Evidence [10] is used
to combine the outputs from the neural network to provide
a decision about the most preferable alternative.
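To illustrate how possibly conflicting pairwise-preference evidence can be combined, the following sketch implements Dempster's rule of combination [10] over a small frame of three alternatives. The alternative names and mass values are hypothetical, not outputs of the trained network.

```python
# Minimal sketch of Dempster's rule of combination [10], used here to fuse
# (possibly conflicting) pairwise-preference evidence about which alternative
# is most preferable. The alternatives and mass values are hypothetical.

from itertools import product

def combine(m1, m2):
    """Combine two basic probability assignments (dicts of frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

frame = frozenset({"alt_A", "alt_B", "alt_C"})

# Evidence from one pairwise comparison: A preferred over B with confidence 0.7.
m_ab = {frozenset({"alt_A"}): 0.7, frame: 0.3}
# Evidence from another comparison: B preferred over A with confidence 0.4 (conflicting).
m_ba = {frozenset({"alt_B"}): 0.4, frame: 0.6}

fused = combine(m_ab, m_ba)
for subset, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(subset), round(mass, 3))
```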
[Figure 3: three Group Analyzers, each producing weighted hypotheses p(Attack), p(Retreat), p(Feint), and p(Hold) at successive time steps (times 1 through 4).]

Figure 3. Neural Networks for Threat Prediction
As stated, Artificial Neural Systems (Neural Networks) are being utilized in a couple of different applications. The first application, described above, uses a multi-layer network trained using Back Propagation to identify pairwise preferences of analysts to support situation assessment. The second application is a prototype implementation based on Learning Vector Quantization (LVQ) and Ellipsoidal Basis Functions that postulates threat (Attack, Retreat, Feint, or Hold).
Threat prediction is yet another application of neural networks to information fusion. The Artificial Neural System (ANS) Fusion system uses neural networks and rule based technology to perform threat assessment by the prediction of enemy intent. The end result is a set of four weighted hypotheses concerning the likelihood p(A) of Attack, p(R) of Retreat, p(F) of Feint, and p(H) of Hold.

The ANS Fusion system consists of four computational modules [11]. These four components cooperatively analyze troop movements over time and make tactical estimates of enemy courses of action (ENCOA) by integrating information through multi-hypothesis fusion at Fusion Levels II and III for battlefield situation assessment. The four computational modules are the Simulation Generator module, the Troop Deployment Analysis module, the Temporal Fusion Analysis module, and the Tactical Situation Analysis module. The Simulation Generator generates battlefield deployments of tracked vehicle and equipment observations at discrete time steps. The Troop Deployment Analysis module performs hierarchical constrained clustering on the unorganized battle map, transforming it into an organizational layout called the Battlefield Cluster Map, which is a representation of the Level 2 Situation Assessment fusion results. The resulting temporally-buffered Battlefield Cluster Maps are then subject to analysis by the Tactical Situation Analysis module, which employs both rule-based and neural network technology to perform threat assessment through prediction of enemy intent. An Ellipsoidal Basis Function (EBF) neural network was utilized for the classification process. Figure 3 portrays the outputs of the system: a set of four weighted hypotheses concerning the likelihood p(A) of Attack, p(R) of Retreat, p(F) of Feint, and p(H) of Hold.
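As a rough illustration of how an ellipsoidal (Gaussian) basis function layer can map cluster features to the four weighted hypotheses, consider the sketch below. The feature vector, prototypes, and widths are hypothetical; this is not the trained ANS Fusion classifier.

```python
# Minimal sketch of an ellipsoidal (Gaussian) basis function classifier that
# maps a cluster feature vector to weighted threat hypotheses. The features,
# prototypes, and widths are hypothetical, not the trained ANS Fusion model.

import math

PROTOTYPES = {
    # hypothesis: (prototype feature vector, per-dimension widths)
    "Attack":  ([0.9, 0.8, 0.7], [0.3, 0.3, 0.3]),
    "Retreat": ([0.1, 0.2, 0.3], [0.3, 0.3, 0.3]),
    "Feint":   ([0.7, 0.3, 0.5], [0.3, 0.3, 0.3]),
    "Hold":    ([0.4, 0.5, 0.4], [0.3, 0.3, 0.3]),
}

def ebf_activation(x, prototype, widths):
    """Ellipsoidal basis function: Gaussian with an axis-aligned (diagonal) shape."""
    d2 = sum(((xi - ci) / wi) ** 2 for xi, ci, wi in zip(x, prototype, widths))
    return math.exp(-0.5 * d2)

def threat_hypotheses(features):
    """Return normalized weights p(A), p(R), p(F), p(H) for one cluster."""
    raw = {h: ebf_activation(features, c, w) for h, (c, w) in PROTOTYPES.items()}
    total = sum(raw.values())
    return {h: v / total for h, v in raw.items()}

# Hypothetical cluster features, e.g. (advance rate, mass of forces, proximity).
print(threat_hypotheses([0.8, 0.7, 0.6]))
```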
6 A Fuzzy Logic Approach
Zadeh [12] is credited as the pioneer for his research into
Fuzzy Sets. Fuzzy Logic techniques are being evaluated
for the development of a fuzzy logic event detector that
performs a fuzzy logic-based analysis of predicted courses
of action to infer enemy intent and objectives [13]. Figure
4 portrays a basic fuzzy system.
[Figure 4: a basic fuzzy logic system consisting of a fuzzifier, decision logic driven by linguistic rules, and a defuzzifier.]

Figure 4. A Basic Fuzzy Logic system
The fuzzifier acts on the system measurements and
performs the mapping of deterministic numerical data (a
crisp set) into fuzzy sets. Given a measurement value x,
the fuzzifier interprets it as a fuzzy set A with
membership function µ(x), where µ(x) ∈ [0,1]. In short,
the fuzzification process involves the following steps: 1)
determine the range of values of the measurement
variables; 2) perform a transformation that maps the
ranges of values of the measurements into the
corresponding universes of discourse; and 3) perform the
fuzzification function by transforming the measurement
value into a suitable linguistic value, which is viewed as the label of a fuzzy set.
After the fuzzification procedure, the fuzzy sets are
processed via some decision logic consisting of linguistic
rules. These rules are structured as follows:
IF X is A, THEN Y is B        (2)
where the antecedent X is a measurement and the consequence Y is the output. These rules thus relate the input measurements to the outputs. The mapping from the fuzzy set A into the fuzzy set B is called a fuzzy relation, and can be implemented via simple forward chaining algorithms.

The final element of the functional block diagram for fuzzy reasoning is the defuzzifier, which acts on the decision logic output variables or fuzzy control actions and performs the mapping to the corresponding deterministic numerical data (a crisp set). That is, it converts the output from a fuzzy set to, in this case, a detected high-level event relating to intent/objectives or enemy capabilities/vulnerabilities. In short, the defuzzification process involves determining the range of values of the output variables and performing the transformation that maps the fuzzified control action into a corresponding non-fuzzy event. Defuzzification is required because, in most practical circumstances, a crisp output declaration (of an event's existence or non-existence) is necessary.

Fuzzy Logic can be employed for high-level event detection, gathering evidence for enemy intent/objectives and capabilities/vulnerabilities assessment. The approach utilized in this application to Information Fusion is to map the generated Course of Action (COA) state vector into a measure of evidence to deny or confirm that the intent is to seize some piece of terrain. The COA state vector, consisting of unit types, unit dispositions, deployment pattern, missions (e.g. attack, defend), roles (main, supporting, reserve), and unit state (committed or not committed to the engagement) throughout the COA, is fed to a COA Decomposition module. The COA Decomposition module processes and aggregates this information. It then computes the distance of nearest approach of highly capable enemy mechanized units (e.g. a tank battalion) to the objective terrain, and returns a multi-valued estimate of the intent. That estimate could then be posted as evidence to a node in a belief network or to command and control decision-aiding and planning systems. This initial fuzzy mapping scheme can be generalized to take other terrain, tactical, and doctrinal parameters into consideration, e.g., possible enemy objectives, order of battle information, terrain and weather constraints, etc. This fuzzy logic approach minimizes the semantic gap between human event detection behavior and its computational representation and provides us with the needed "crisp" event cues.

To summarize, Fuzzy Logic techniques are being evaluated for the development of a fuzzy logic event detector that performs a fuzzy logic-based analysis of predicted courses of action to infer enemy intent and objectives.
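To make the fuzzification, rule evaluation, and defuzzification steps just described concrete, the following sketch maps a crisp distance-of-nearest-approach measurement to a crisp measure of evidence for a seize-the-objective intent. The membership functions, rule consequents, and numbers are hypothetical.

```python
# Minimal sketch of the fuzzify -> rule evaluation -> defuzzify pipeline for a
# single event cue ("enemy intends to seize the objective terrain"). The
# membership functions, rule, and thresholds are hypothetical.

def triangular(x, left, peak, right):
    """Triangular membership function returning a degree in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzify_distance(km):
    """Map a crisp distance-of-nearest-approach (km) to linguistic labels."""
    return {
        "close":  triangular(km, -1.0, 0.0, 10.0),
        "medium": triangular(km, 5.0, 15.0, 25.0),
        "far":    triangular(km, 20.0, 40.0, 60.0),
    }

def seize_intent_evidence(km):
    """Rules of the form: IF distance is close THEN intent-to-seize is high."""
    mu = fuzzify_distance(km)
    # Weighted-average defuzzification over the rule consequents.
    consequents = {"close": 0.9, "medium": 0.5, "far": 0.1}   # crisp output levels
    num = sum(mu[label] * consequents[label] for label in mu)
    den = sum(mu.values()) or 1.0
    return num / den

print(round(seize_intent_evidence(4.0), 2))   # near the objective -> strong evidence
print(round(seize_intent_evidence(35.0), 2))  # far away -> weak evidence
```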
7 A Genetic Algorithm Approach
Genetic Algorithms have been around since Holland’s
[14] initial research activities in the early 1970’s. Genetic
Algorithms, according to [15], were invented to mimic
some of the processes observed in natural evolution.
Genetic algorithms are different from normal search
methods encountered in engineering optimization in the
following ways [16]:
• Genetic Algorithms work with a coding of the
parameter set, not the parameters themselves
• Genetic Algorithms search from a population of
points, not a single point
• Genetic Algorithms use probabilistic transition rules,
not deterministic transition rules
Genetic Algorithms require the natural parameter set
of the optimization problem to be coded as a finite-length
string. This string could consist of binary or real numbers.
Genetic Algorithms work iteration-by-iteration,
generating and testing a population of strings. This
population-by-population approach is similar to a natural
population of biological organisms where each generation
successively evolves into the next by being born and
raised until it is ready to reproduce. Optimal strings are
found through population reproduction via selection,
crossover, and mutation.
Selection is a process where an old string is carried
through into a new population depending on its
performance index (or fitness function) value: strings with
above average fitness values receive larger numbers of
copies in the next generation. This strategy emphasizes
the Genetic Algorithm’s survival of the fittest concept.
A simple crossover follows selection in three steps.
First, the newly selected strings are paired together at
random. Second, an integer position n along every pair of
strings is selected uniformly at random. Finally, based on
a probability of crossover, the paired strings undergo
crossover at the integer position n along the string. This
results in new pairs of strings that are created by
swapping all the characters between characters 1 and n
inclusively.
Mutation is simply an occasional random alteration
of a string position (based on probability of mutation). In
a binary code, this involves changing a 1 to a 0 and vice
versa. The mutation operator helps to keep genetic
diversity by avoiding the possibility of mistaking a local
minimum for a global minimum. When mutation is used
sparingly (about one mutation per thousand bit transfers)
with selection and crossover, it improves the global nature
of the Genetic Algorithm search.
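The selection, crossover, and mutation operators described above can be combined into a simple generational loop, sketched below with a toy bit-counting fitness function. The encoding, parameters, and fitness are illustrative only and are not the FOX wargaming configuration.

```python
# Minimal sketch of a generational Genetic Algorithm with fitness-proportionate
# selection, single-point crossover, and bit-flip mutation. The bit-string
# encoding and fitness function are illustrative, not the FOX wargaming setup.

import random

STRING_LEN, POP_SIZE, GENERATIONS = 20, 30, 50
P_CROSSOVER, P_MUTATION = 0.7, 0.001

def fitness(bits):
    return sum(bits)                      # toy objective: maximize the number of 1s

def select(population):
    """Fitness-proportionate (roulette-wheel) selection of one parent."""
    return random.choices(population, weights=[fitness(s) + 1 for s in population])[0]

def crossover(a, b):
    if random.random() < P_CROSSOVER:
        n = random.randint(1, STRING_LEN - 1)     # crossover point
        return a[:n] + b[n:], b[:n] + a[n:]
    return a[:], b[:]

def mutate(bits):
    return [1 - b if random.random() < P_MUTATION else b for b in bits]

population = [[random.randint(0, 1) for _ in range(STRING_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    next_gen = []
    while len(next_gen) < POP_SIZE:
        child1, child2 = crossover(select(population), select(population))
        next_gen.extend([mutate(child1), mutate(child2)])
    population = next_gen[:POP_SIZE]

print(max(fitness(s) for s in population))
```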
Unlike many common optimization techniques,
Genetic Algorithms require no gradient information and
have a built-in global search mechanism. Hence, Genetic
Algorithms are well suited for many problems where
gradient information is either unavailable or excessively
complex, such as in optimal Course of Action generation.
Genetic Algorithm techniques are being utilized in a
couple of different applications relative to situation
assessment. The application that will be discussed in this
paper involves using Genetic Algorithms for determining
plausible courses of action.
FOX [17] is a Genetic Algorithm-based planning support tool for assisting military intelligence and maneuver battlestaff in rapidly generating and assessing battlefield courses of action (COAs). FOX's efficiency in generating large numbers of potential COAs stems from its high-level (abstract) representation of the battlespace and forces. Wargaming at a high representational level enables a rapid search through the COA-space for generally desirable solutions. In FOX's current configuration, these high-level candidate COAs are then presented to human analysts for the more in-depth analysis and detailed planning effort needed to fully define the COAs. However, this COA planning and analysis process could be further accelerated by a decision support tool to help generate detailed operations plans. The three major components of FOX are: 1) the Genetic Algorithm optimization software itself; 2) a Wargamer, which plays out Genetic Algorithm generated enemy COAs against a representative set of friendly COAs; and 3) a Performance Evaluator (or fitness function, in Genetic Algorithm parlance) that assigns a score or fitness to a given string or solution.

The FOX system is just one example of using Genetic Algorithms to support Information Fusion by determining plausible courses of action.

8 Conclusion

This paper has described a series of innovative applications of traditional fusion algorithms and heuristic reasoning techniques to significantly improve situation assessment and threat prediction. A brief synopsis of an application of each technique to information fusion follows:

• Bayesian techniques have been utilized for the successful implementation of a force aggregation capability that permits the identification of military units.
• Knowledge Based approaches are being utilized to identify vehicles based primarily on vehicle movement information.
• Artificial Neural Systems (Neural Networks) are being utilized in a couple of different applications. The first application utilizes a multi-layer network that has been trained using Back Propagation to identify pairwise preferences of analysts to support situation assessment. The second application is a prototype implementation based on Learning Vector Quantization (LVQ) and Ellipsoidal Basis Functions that postulates threat (Attack, Retreat, Feint, or Hold).
• Fuzzy Logic techniques are being evaluated for the development of a fuzzy logic event detector that performs a fuzzy logic-based analysis of predicted courses of action to infer enemy intent and objectives.
• Genetic Algorithm techniques are being utilized in a couple of different applications relative to situation assessment. The application discussed in this paper involves using Genetic Algorithms for determining plausible courses of action.
9 Acknowledgements

This work was supported by the Air Force Research Laboratory under Contracts F30602-94-C-0054, F30602-97-C-0180, F30602-97-C-0208, F30602-99-C-0083, F30602-99-C-0115, and F30602-99-C-0101. Special thanks to Paul Gonsalves for providing the inputs for the Fuzzy Logic and Genetic Algorithm sections.

References

[1] F. White, Data Fusion Lexicon, Joint Directors of Laboratories, Technical Panel for C3, Data Fusion Sub-Panel, Naval Ocean Systems Center, San Diego, 1987.
[2] A. Steinberg, C. Bowman, and F. White, "Revisions to the JDL Data Fusion Model", Proc. of the SPIE Sensor Fusion: Architectures, Algorithms, and Applications III, pp. 430-441, 1999.
[3] E. Waltz and J. Llinas, Multisensor Data Fusion, Artech House, 1990.
[4] R. Antony, Principles of Data Fusion Automation, Artech House, 1995.
[5] M. Hinman and J. Marcinkowski, "Final Results on Enhanced All Source Fusion", Proc. of the SPIE Sensor Fusion: Architectures, Algorithms, and Applications IV, pp. 389-396, 2000.
[6] C. Burns, "A Knowledge Based Approach to Information Fusion", presentation to the Air Force Scientific Advisory Board, 2000.
[7] K. Ross and R. Chaney, "Hidden Markov Models for Threat Prediction Fusion", Proc. of the SPIE Sensor Fusion: Architectures, Algorithms, and Applications IV, pp. 300-311, 2000.
[8] G. Rogova, P. Losiewicz, and J. Choi, "Connectionist Approach to Multiattribute Decision Making Under Uncertainty", AFRL-IF-RS-TR-1999-12, 2000.
[9] Y. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley Publishing Co., 1989.
[10] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, 1976.
[11] W. Wright, "Artificial Neural Systems (ANS) Fusion Prototype", AFRL-IF-RS-TR-1998-126, 1998.
[12] L. Zadeh, "Fuzzy Sets", Information and Control, Volume 8, pp. 338-353, 1965.
[13] P. Gonsalves, G. Rinkus, S. Das, and N. Ton, "A Hybrid Artificial Intelligence Architecture for Battlefield Information Fusion", Proc. of the Second International Conference on Information Fusion, pp. 463-468, 1999.
[14] J. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor: The University of Michigan Press, 1975.
[15] L. Davis (ed.), Handbook of Genetic Algorithms, Van Nostrand Reinhold, 1991.
[16] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley Publishing Company, 1989.
[17] J. Schlabach, C. Hayes, and D. Goldberg, "SHAKA-GA: A Genetic Algorithm for Generating and Analyzing Battlefield Courses of Action (White Paper)", 1997.