Flight Simulator Artificial Player
(First proposal)
ANDRIAMBOLOLONA, Mahery
LEFEUVRE, Pascal
February 18th 2003
1 INTRODUCTION
1.1 FLIGHT SIMULATOR ARTIFICIAL PILOT
1.2 OVERVIEW
1.3 REQUIREMENTS
2 THE BOT ARCHITECTURE
2.1 ARCHITECTURE DIAGRAM
2.2 LAYERS DESCRIPTION
3 INTERESTING WORK TO BE DONE
4 AI LAYER REQUIREMENTS
4.1 OVERVIEW
4.2 FUNCTIONALITIES
5 CONCLUSION
6 GLOSSARY
7 BIBLIOGRAPHY
1 Introduction
1.1 Flight Simulator Artificial Pilot
This project aims to implement a bot for a flight simulator. By bot, we mean an intelligent virtual player that emulates a human player in the game environment. The bot must be able to understand the rules of the game, and it also needs other capabilities, such as flying through the game environment and handling aircraft failures.
1.2 Overview
The goal of this document is to provide a first proposal for the subject “designing a Flight Simulator artificial player”. First, it presents a basic analysis of the problem, that is, how the bot behavior has been modeled as several layers; this is the point of the first chapter. The second chapter explains why only the Artificial Intelligence layer will be developed. The third chapter specifies the AI layer in more detail.
It is crucial to keep in mind that the analysis of the bot behavior is still in progress. Consequently, these specifications aim to delimit the project, but not in a definitive way.
1.3 Requirements
Functional requirements:
The first requirement of a bot is to act as closely as possible to a human player, that is to say:
1. Visualizing the environment (aircraft parameters, environment stimuli, pilot physiological effects)
2. Interpreting environment information and making decisions (choosing one tactical fight approach over another, deciding to retreat)
3. Performing a maneuver
The bot should take the following parameters into account:

Aircraft instrumentation: The AI Layer should pay attention to all the information provided by the aircraft, including the instrument panel and the Head-Up Display.

Environment stimuli: The AI Layer has to deal with all the details that a human pilot can collect through the senses of sight and hearing. For instance, the pilot should scan the sky in order to reduce the possibility of mid-air collisions. Smoke, haze, dust, rain, and flying towards the sun can also greatly reduce the ability to detect targets.

Pilot physiological effects: During a flight, a pilot can be affected by G-forces. He may experience discomfort and even unconsciousness. Consequently, as soon as he feels such effects, he should decide to reduce rapid pull-up or pushover maneuvers.
Resource requirements:
Deliberative agents usually spend a lot of time in their reasoning processes, whereas in a flight simulator a bot should react in real time to any environment stimulus. Consequently, a balance should be found between its ability to plan actions ahead and the simplicity of behavior it needs to react spontaneously when required.
2 The Bot architecture
2.1 Architecture Diagram
The diagram below shows the bot's layered architecture as currently defined. Work on this architecture is still in progress.
Figure 1: architecture diagram
2.2 Layers description
Each layer is described below.
Formatting data layer
The aim of that layer is to translate data from the upper layers to the flight simulator
and conversely.
 The Awareness System provides the bot with all the information about the current state of the world. It gets data directly from the flight simulator and processes all the information gathered to make it easier for the bot to use.
INPUT: Data from the flight simulator. They are the same as the outputs shown to a human player (sounds, cockpit view, instrument indications, etc.).
OUTPUT: Formatted data for the upper layers (see the glossary).
 The Basic Action Generator converts macro actions from the upper layers (e.g. go from A to B, with information on the route, the speed, the altitude, etc.) into outputs that can be interpreted by the flight simulator engine. These outputs are comparable to the inputs coming from a human player (keyboard, mouse, joystick).
INPUT: Macro actions from the upper layers (see the glossary).
OUTPUT: Basic actions. For instance, “press the key ‘b’” for a human player (see the glossary).
Note: Questions remain on how to interpret 3D-data describing the outside
environment, and how to format these data.
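As an illustration of the kind of expansion the Basic Action Generator could perform, here is a minimal sketch. Every name and format below is hypothetical, since the proposal leaves both the macro-action and the basic-action formats undefined:

```python
from dataclasses import dataclass

# Hypothetical macro-action format; the real one is not yet defined.
@dataclass
class MacroAction:
    kind: str        # e.g. "goto"
    speed: float     # target speed (knots)
    altitude: float  # target altitude (feet)

def expand(macro: MacroAction) -> list:
    """Translate one macro action into basic actions the simulator could read."""
    if macro.kind == "goto":
        return [f"set_altitude {macro.altitude:.0f}",
                f"set_speed {macro.speed:.0f}"]
    return []

print(expand(MacroAction("goto", speed=350, altitude=10000)))
# ['set_altitude 10000', 'set_speed 350']
```

In the real layer, the strings would be replaced by whatever input events the flight simulator engine accepts (the equivalent of key presses or joystick movements).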
Know-how layer
This layer contains all the bot’s know-how skills. These skills are used in two
different ways:
1. To convert the commands from the AI Layer into macro actions.
2. To generate events from the information provided by the Formatting Data
Layer through a recognizing process. Events characterize what is happening.
Therefore the layer is divided into two functional blocks:
 The Recognizer takes the data coming from the Formatting Data Layer into account to assess what is happening.
INPUT: Formatted data from the Formatting Data Layer.
OUTPUT: Events (see the glossary).
 The Expander generates the macro actions required to process the commands from the AI Layer.
INPUT: Commands (see the glossary).
OUTPUT: Macro actions.
To illustrate these functionalities, here is how the navigation issue is processed:
 The Feedback Controller is a Recognizer component. It gets information from the Awareness System about the current position of the aircraft and compares it with the expected position. If they differ, and the current position is not too far from the expected one, the system suggests corrections to make. If the current position is really far from the expected one, the Feedback Controller sends events to the AI Layer, which must find a solution.
INPUT: Formatted Data from the Awareness System on the current position
and features of the aircraft.
OUTPUT: data that define the difference between the expected position and
the current position. These data are intended either for the AI Layer or for the
Path Calculation system.
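This thresholding behavior can be sketched as follows. The limits and the data layout are placeholders invented here, since the Formatted Data format is not yet defined:

```python
# Hypothetical Feedback Controller: compare current and expected positions
# and route the result either to the Path Calculation system (small drift)
# or to the AI Layer as an event (large drift). Thresholds are placeholders.
def feedback(current, expected, correct_limit=1.0, alarm_limit=5.0):
    delta = tuple(e - c for c, e in zip(current, expected))
    drift = max(abs(d) for d in delta)
    if drift <= correct_limit:
        return ("on_track", delta)          # nothing to correct
    if drift <= alarm_limit:
        return ("correction", delta)        # for the Path Calculation system
    return ("event", delta)                 # the AI Layer must find a solution

print(feedback((10.0, 4.0), (12.0, 4.5)))  # ('correction', (2.0, 0.5))
```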
 The Path Calculation system is an Expander component. It gives the Basic Action Generator the route that the aircraft should follow (equations and flying parameters), corresponding to what the AI Layer has planned.
INPUT: A command defining the type of maneuver that the AI Layer has decided to execute (e.g. a specific kind of loop).
OUTPUT: For instance, a macro action that specifies the equation of a curved track, the speed, etc.
AI Layer
It represents the bot's brain, where decisions are taken. It gets events that describe the situation, then computes the most suitable response to face this situation. To reach this goal, a state machine and fuzzy functions can be used as tools:
 The State Machine represents the states in which a pilot can stand during a flight (for instance “taking off”, “landing”, “normal flight”, etc.). It also describes what is needed to go from one state to another. To express how strongly it wants to reach a given state, the bot uses the Fuzzy Engine.
 The Fuzzy Engine contains all the fuzzy logic rules needed to take a decision from the information coming from the lower layers.
INPUT: Events from the Know-how Layer Recognizer.
OUTPUT: Commands.
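As an illustration of the fuzzy-function idea, here is a toy membership function. The actual rules, variables, and thresholds of the Fuzzy Engine are not yet defined, so everything below is invented:

```python
def mu_close(distance_nm: float) -> float:
    """Degree to which an enemy counts as 'close': 1.0 below 10 nm,
    0.0 beyond 40 nm, linear in between. Thresholds are invented."""
    if distance_nm <= 10:
        return 1.0
    if distance_nm >= 40:
        return 0.0
    return (40 - distance_nm) / 30

def engage_desire(distance_nm: float, missiles_left: int) -> float:
    """Combine fuzzy truths with min(), a common choice for fuzzy AND."""
    mu_armed = min(missiles_left / 4.0, 1.0)  # fully 'armed' with 4 missiles
    return min(mu_close(distance_nm), mu_armed)

print(engage_desire(25, 2))  # 0.5: the enemy is half 'close', the bot half 'armed'
```

A real Fuzzy Engine would aggregate many such rules before choosing a command, but the principle of graded truth values is the same.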
3 Interesting work to be done
When the architecture is fully defined, the two highest layers will correspond to the bot's brain. The Formatting Data Layer is actually the only layer that depends on the flight simulator's specificities. It does not include any sort of intelligence, whereas the two highest layers deal with artificial intelligence. Thus it seems more interesting to work on the highest layers. Moreover, no jet-fighter models are currently implemented in Flight Gear or Flight Simulator 2002, so implementing the Formatting Data Layer does not seem possible.
As said previously, the Know-how Layer deals with skills that a pilot acquires with practice, for example assessing a landing trajectory or a refueling track. All these skills make up a knowledge base that has to be generated. It could be built by encoding pilot knowledge, but this kind of information is currently quite hard to obtain. A different approach consists of training the bot as a pilot would be trained: the bot learns which track is best through experimental tests. This layer will not be developed during this project because it relates more to expert experience than to the reasoning process itself.
Finally, the AI Layer can be the subject of the next five-month project. This part is specified in the following chapter.
4 AI LAYER Requirements
4.1 Overview
All the tasks that consist of taking decisions are assigned to this layer. These decisions are more deliberative than reactive. The AI Layer should be able to take several parameters into account, such as the available weapons, the remaining fuel, etc. Many choices may be made: engaging the fight, taking evasive action, etc. It is important to keep in mind that this layer is a decision-making system only.
4.2 Functionalities
Figure 2: Input / output AI Engine
The bot intelligence is implemented in this layer. In this document, intelligence means the ability to take decisions: processing several kinds of data and adopting the most suitable behavior. Events are analyzed as inputs, and commands are generated as outputs.
Providing commands
Commands: Taking decisions determines commands that have to be executed. These commands are high-level actions that do not require any further decisions; they need only know-how skills to be processed. For instance, “engage the enemy” is not a command, because you still need to decide which kind of maneuvers to perform, which weapon to select, etc.; that requires an intelligent process of analyzing facts and making critical decisions. On the other hand, “join the tanker” is definitely a command, because the only skill you need is how to fly the refueling track.
The format of commands is not yet defined.
Figure 3: example of events
Getting information
Events: The AI Layer gets information in the form of events. Events are high-level data that can be used directly by the AI Layer to generate decisions; they do not require any further processing by a situation recognizer.
For instance, “a MiG-29 is near enough to engage” and “mission target destroyed” are events: you can decide to engage an enemy that threatens you once your first-priority target has been destroyed, which is a decision process. On the contrary, “radar warning” and “aircraft position” are not considered events, because before deciding to switch to dogfight mode, it is necessary to recognize whether the contact is near enough to be dangerous and whether it is an enemy. “Radar warning, enemy position < 40 nm”, however, can be relevant enough. That is a recognizing process.
Events can be divided into two categories:
o Unexpected events, in four groups:
Enemy attack: an enemy has locked you and has launched a missile.
Aircraft failure: your aircraft has been seriously damaged by the enemy.
Physiological accident: you are experiencing physiological stresses due to G-forces.
Unexpected environmental event: turbulence is occurring.
o Expected events: all events related to the flight plan (target destroyed, waypoint reached, runway vacated, etc.).
The format of events is not yet defined.
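Since the event format is explicitly left open, here is one possible representation, sketched only for discussion; the field names are assumptions, not a decided design:

```python
from dataclasses import dataclass, field

# Hypothetical event representation; the real format is not yet defined.
@dataclass
class Event:
    name: str        # e.g. "enemy_in_range", "waypoint_reached"
    expected: bool   # expected (flight-plan) event or unexpected one
    data: dict = field(default_factory=dict)

unexpected = Event("enemy_in_range", expected=False,
                   data={"type": "mig-29", "range_nm": 35})
planned = Event("waypoint_reached", expected=True, data={"waypoint": 3})
print(unexpected.name, unexpected.expected)  # enemy_in_range False
```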
Memorizing the current situation
The bot must know the current context to take decisions. Like a human pilot, a bot needs to know whether it is currently fighting, taking off, or landing, so that it knows which actions should be performed. Basically, each state corresponds to a short-term goal. For instance, when you dogfight, you aim to shoot the enemy down; in the “taxiing from runway” mode, your goal is to park your aircraft.
The bot can switch from one state to another. During a dogfight, it can decide to retreat because it has no missiles left; the bot will then go to “fight retreat” mode. A state diagram like the one in chapter 3.2 of document [1] can model all these modes: each mode is a node, and you can switch from one node to another if the nodes are connected.
Here are the states:
Intermission: The bot is not in the game; it may have crashed or may simply not have joined the game yet. It will use this mode at the end of a mission.
Stand: This mode is used when the bot is typing chat messages during the game.
Starting up: The bot has switched on the main power. This is the state that follows the observer state.
Taxiing to runway: The bot is taxiing the airplane to the runway.
Taking off: The airplane is climbing and has exceeded the maximum taxiing speed.
Normal flight: A standard situation where the bot is flying over friendly territory.
Dogfight: A fight with an enemy aircraft has been engaged. The bot's skills are tested.
Fight retreat: The bot is retreating.
Taking evasive actions: The bot is taking actions to escape from a missile attack.
A-G attack: The bot is bombing a ground target.
Air refueling: The bot is joining the tanker to refuel.
Stall: A lack of energy of the aircraft.
Landing: The bot is bringing its aircraft back to the ground.
Aborting the landing: The bot has started the landing procedure and aborted it.
Flame-out landing: An emergency landing with no engines.
Taxiing from runway: The bot is taxiing its aircraft to the parking area.
Shutting down: The bot has switched off the main power.
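The mode graph described above could be sketched as an adjacency dictionary. Only a handful of transitions are shown, and which nodes are actually connected remains to be defined:

```python
# Hypothetical subset of the mode graph; the real set of connections is open.
TRANSITIONS = {
    "normal flight": {"dogfight", "air refueling", "landing"},
    "dogfight": {"fight retreat", "normal flight"},
    "fight retreat": {"normal flight"},
    "landing": {"aborting the landing", "taxiing from runway"},
}

def switch(state: str, target: str) -> str:
    """Switch modes only if the two nodes are connected."""
    if target in TRANSITIONS.get(state, set()):
        return target
    raise ValueError(f"no transition from {state!r} to {target!r}")

print(switch("dogfight", "fight retreat"))  # e.g. no missiles left, so retreat
```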
Decision process
The methods that should be used to take decisions are not yet defined. They could be:
o Fuzzy functions
o Neural networks
o Production rules
o Genetic algorithms
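To make one of these candidates concrete, here is a toy production-rule sketch. The rules themselves are invented for illustration; no decision method has been chosen yet:

```python
# Each rule is a (condition, command) pair; the first matching rule fires.
RULES = [
    (lambda f: f["in_fight"] and f["missiles"] == 0, "retreat"),
    (lambda f: f["fuel_nm"] < f["distance_home_nm"], "join the tanker"),
    (lambda f: True, "continue mission"),  # default rule, always matches
]

def decide(facts: dict) -> str:
    """Fire the first production rule whose condition matches the facts."""
    for condition, command in RULES:
        if condition(facts):
            return command

facts = {"in_fight": True, "missiles": 0, "fuel_nm": 120, "distance_home_nm": 60}
print(decide(facts))  # retreat
```

Real rule engines add conflict resolution and chaining, but the analyze-facts-then-emit-command loop is the same shape the AI Layer would need.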
5 Conclusion
The context in which the AI Layer operates is illustrated in the diagram below. One must be careful of the fact that this architecture is not definitive; its goal is only to delimit as precisely as possible the work that will be done. This document deals more with what will be done than with how it will be done; consequently, it cannot yet be considered a full proposal. Some skills are currently missing to design how the AI engine will work. That is why the next step will be to study commonly used knowledge representation and reasoning systems, such as production systems, fuzzy systems, and neural networks, so that a choice can be made.
Figure 4: conclusion diagram
6 Glossary
Commands: AI Layer OUTPUT / Know-how Layer INPUT.
Taking decisions determines commands that have to be executed. These commands are high-level actions that do not require any further decisions; they need only know-how skills to be processed.
Macro actions: Know-how Layer OUTPUT / Formatting Data Layer INPUT
A set of basic actions that should be performed in chronological order. The main feature of macro actions is that they require neither decisions nor know-how skills to be performed. For example, the action “use the following path defined by a sequence of (position, speed vector, altitude)” is a macro action: the Basic Action Generator will automatically provide the Flight Gear engine inputs (the same as a joystick and a mouse would). On the other hand, the action “start landing procedure” is not a macro action because, although some steps are compulsory and systematic, such as lowering the landing gear and decreasing the aircraft speed, some know-how skills are required, such as assessing the jet trajectory.
Basic actions: Formatting Data Layer OUTPUT / Flight simulator INPUT
Atomic instructions that correspond to the same input as a mouse, a keyboard or a
joystick can provide.
Flight Simulator Data: Flight simulator OUTPUT / Formatting Data Layer INPUT
Data directly provided and formatted by the Flight Simulator engine: for instance all the
aircraft instruments values.
Formatted Data: Formatting Data Layer OUTPUT / Know-how Layer INPUT.
To choose the appropriate behavior, the bot needs to base its reasoning on facts. The question is how to represent these facts about the world. The sources are diverse: pilot health, aircraft parameters, and environment stimuli. The bot will have its own world representation: FS engine information is processed by the Awareness System so that the bot can interpret it. It could be, for instance, a polygonal representation of space so that the bot can assess a trajectory avoiding collisions. The exact form of these formatted data is not yet defined.
Events: Know-how Layer OUTPUT / AI Layer INPUT.
The AI Layer gets information in the form of events. Events are high-level data that can be used directly by the AI Layer to generate decisions; they do not need to be processed by any situation recognizer.
7 Bibliography
[1] Mouthaan, Quint. Flying an F16: A knowledge base describing the situations an F16 pilot might encounter. Version 1.0, September 2002.
[2] Ehlert, Patrick; Rothkrantz, Leon. Intelligent Agents in an Adaptive Cockpit Environment (preliminary report). Version 0.2, October 2001.