Integrating Intelligent Computer Generated Forces in Distributed Simulations:
TacAir-Soar in STOW-97
John E. Laird, Karen J. Coulter, Randolph M. Jones,
Patrick G. Kenny, Frank Koss, Paul E. Nielsen
University of Michigan
1101 Beal Ave.
Ann Arbor, MI 48109-2110
734-647-1761
[email protected], [email protected], [email protected],
[email protected], [email protected], [email protected]
Keywords:
Behavior representation, computer generated forces, artificial intelligence, STOW-97
ABSTRACT: This paper is based on our experiences in integrating a real-time high-fidelity model of human behavior in advanced
distributed battlefield simulations. Our experiences are drawn from our participation in the Synthetic Theater of War-1997 (STOW-97) in which we developed behaviors for all U.S. Navy, Air Force, and Marine fixed wing aircraft missions. We discovered many
interoperability issues that related to integrating our software with the existing military practices and procedures, as well as with the
underlying simulation infrastructure. This paper describes the software systems we developed and the lessons we learned from their
integration.
1. Introduction
On October 29-31, 1997, the Synthetic Theater of War, 1997 (STOW-97) was held. Our research group developed software systems
that flew all missions commonly flown by U.S. Navy, Air Force, and Marine planes, which included defensive and offensive air-to-air missions, air-to-ground missions, intelligence gathering, and command and control. Our challenge was to demonstrate that it is possible to
build autonomous AI entities that could fly these missions, responding to the dynamics of the battlefield using appropriate doctrine
and tactics, without human intervention.
The emphasis of this paper is not on the artificial intelligence techniques and domain knowledge that were required to achieve autonomous high-fidelity behavior. We plan to cover that topic in a separate paper prepared for the computer generated forces community. In this paper
we concentrate on the integration of autonomous AI entities and the surrounding software infrastructure. TacAir-Soar [4, 8] is the
system we developed, and we begin with a review of the requirements that arose from injecting TacAir-Soar into the distributed
simulation environment. We then discuss in detail our responses to these requirements. We conclude with a summary of our most
important lessons learned for the future.
2. Integration Requirements
The initial emphasis of this project was on developing technology for generating intelligent behavior, which included the following
requirements:
1. Behavior Integration. The behaviors are written in Soar [5, 6], a parallel rule-based AI architecture. These behaviors had to
be integrated with the sensors, weapons, and flight dynamics of the vehicles as provided by the underlying simulation system,
ModSAF [1].
2. Communication Integration. As part of the CFOR project [3], all communications had to use a standard format that could be
processed by other entities, which for STOW-97 was CCSIL [7].
As STOW-97 approached, our development focused on integrating the intelligent behaviors into the overall simulation, specifically
to support an operational training exercise involving active duty personnel. We discovered we had to respond to operational
requirements in the following three areas:
1. Mission Specification. Our system needed to accept mission specifications from the tools used by the Air Operations Center (AOC). All air missions are contained in Air Tasking Orders (ATOs), which are created using a program called CTAPS.
2. Mission Execution. During a mission, humans needed to be able to communicate with the air entities via simulated radios for command and control.
3. Mission Reporting. Summaries of the missions for each base needed to be reported back to the AOC.
In each of the following sections, we describe the software we built in response to the above requirements. The sections are ordered
based on the areas of operational requirements, with the addition of a discussion of the integration issues for behavior development
within Mission Execution. For each area, we describe the software we developed, the results of using that software in STOW-97, and
the lessons we learned from it.
3. Mission Specification and Dissemination
Before a mission can be started, the computer-generated forces must receive their missions from their commanders. Our goal was to
make this process transparent to the training audience so that they would use their existing tools (CTAPS) without modification.
Figure 1. Diagram of Mission Specification Software
The flow of data for mission specification is shown in Figure 1. Military personnel manning the Air Operations Center enter the missions in CTAPS. CTAPS generates the Air Tasking Order (ATO) and Air Coordination Order (ACO), which together contain data
on all of the flights for the next day of air operations.
The ATO and ACO are translated into CCSIL messages by MRCI, which was written by SAIC. Following this point, the ATO is
processed by software we developed. The first step is the Automated Wing Operations Center (AWOC), which parses the CCSIL
ATO and breaks out the missions by air base and according to the times they will be flown during the day. The AWOC further
translates the data into a form that can be read by the Exercise Editor. This requires extensive reorganization of the data.
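As an illustration of the kind of reorganization the AWOC performs, the following is a minimal sketch in Python. The record fields and function names (air_base, launch_time, and so on) are hypothetical stand-ins; the actual CCSIL structures and the AWOC implementation are not shown here.

    # Minimal sketch of the AWOC's grouping step, assuming each parsed
    # CCSIL ATO entry has been reduced to a simple record. All field and
    # function names are hypothetical illustrations.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class MissionEntry:
        mission_id: str
        air_base: str        # base the flight launches from
        launch_time: int     # minutes after the start of the air-tasking day
        mission_type: str    # e.g., "DCA", "CAS", "SEAD"

    def group_missions(entries):
        """Break out missions by air base, ordered by launch time within the day."""
        by_base = defaultdict(list)
        for e in entries:
            by_base[e.air_base].append(e)
        for base, missions in by_base.items():
            missions.sort(key=lambda e: e.launch_time)
        return dict(by_base)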
The Exercise Editor (EE) [2] provides a graphical user interface for creating and reviewing all of the mission data required by the
Soar/IFOR aircraft. In STOW-97, much of those data came directly from the ATO and ACO; however the ATO and ACO do not
include all of the mission data required by both humans and TacAir-Soar to fly a mission. For example, they do not include the
detailed flight plans with specific intermediate waypoints. For STOW-97, approximately 90% of the mission data were provided via
the ATO and ACO.
The Exercise Editor is structured hierarchically so that information shared by many missions is only entered once (such as waypoints,
and rules of engagement). It checks the correctness of much of the data entered, which is critical for complex missions where typing
errors are easy to make. During STOW-97, the Exercise Editor was used to review and complete all of the FWA missions. The
Exercise Editor is written in Tcl, TclMotif, and C, and uses several ModSAF libraries.
The final step in getting the missions into the computer-generated forces is mission dissemination. Mission data for all of the planes
"flying" on the same computer are grouped together so that they can be loaded into TacAir-Soar (TAS) when it is time for the planes
to fly their missions. The Exercise Editor groups the mission information, while software developed by BMH aids in the distribution
and loading of the mission at run time.
3.1 STOW-97 Mission Specification Results
Our goal for mission specification was achieved in STOW-97. All missions came directly from the ATO, and all missions in the ATO
were flown. We did need to use manual intervention at different stages; however the amount of manual intervention greatly decreased
during the exercise.
Table 1 shows the total hours of effort required to process the three STOW-97 ATOs, including computer time (AWOC processing) and manual effort (Reconfig, EE Entry). The times are total effort in hours. Some activities could be overlapped, and three to four people worked simultaneously on EE Entry, so the elapsed time was less than the sum of the individual times.
Table 1. Effort required to process the STOW-97 ATOs.

ATO   AWOC      Reconfig   EE Entry   Missions
1     1.5 hrs   8.5 hrs    16 hrs     285 (12 hrs worth)
2     0.5 hrs   7 hrs      14 hrs     319 (24 hrs worth)
3     0.2 hrs   1.5 hrs    5 hrs      118 (12 hrs worth)
As is clear from the data, there was still a need for manual intervention; however, that need decreased during STOW as we adapted ourselves and our software in three ways:
1. Changes were made to the AWOC so that less manual processing of the ATO was required.
2. We created templates for exercises, events, waypoints, and radios that sped up manual processing and made EE input more complete for SME review.
3. EE Entry improved as we learned which preset data could be assumed to be correct and which data needed to be entered.
These speed-ups did not occur earlier because the first STOW ATO was the first complete ATO the AWOC ever received from the
MRCI, and it was received at 1700 the day before STOW. Based on our experience with STOW-97, there are additional
modifications that can be made to the AWOC and Exercise Editor to further decrease the need for human intervention.
3.2 Mission Specification Lessons Learned
The primary lesson learned was that automated tools are necessary for specifying air missions for high-fidelity, autonomous behavior. The more data that could be filled in automatically from the ATO, the less manual intervention was required.
A second lesson that we learned was that even with written specifications of the CCSIL messages, there were still many
misunderstandings between developers on details of the formats. CCSIL did provide a well-defined interface; however, it took
multiple iterations to get the interactions correct.
A third lesson was that it is necessary for a developer to create "stubs" that emulate the behavior of software developed by other
groups. For the AWOC, we developed a very simple CCSIL ATO generator that allowed us to test the AWOC before MRCI was
available.
4. Mission Execution
Figure 2 shows the basic organization of the systems used during mission execution. On each machine "flying" entities, there is a
single non-GUI ModSAF, which has multiple TacAir-Soar (TAS) entities running in it. The Soar-ModSAF-Interface (SMI) mediates
between each TacAir-Soar entity and its sensors, weapons, and vehicle controls as provided by ModSAF. Also on the network is the Ordnance Server (OS), which simulates all missiles once they are fired from a plane. CommandTalk provides direct speech input and output for controllers and runs on a separate computer. The Communication Panel (CP) is a graphical user interface that lets controllers manually compose messages, and it can be created dynamically on any TacAir-Soar entity.
Figure 2: Diagram of Mission Execution Software
Mission execution was provided by TacAir-Soar. TacAir-Soar consists of over 5,200 rules that are executed by the Soar architecture.
TacAir-Soar includes rules for all of the missions flown in STOW. It was developed by interviewing subject matter experts (SME),
and encoding the appropriate doctrine and tactics in terms of the actions taken by pilots, such as deciding to commit, turning the
aircraft, or sending a radio message.
The underlying Soar architecture is a parallel rule-based system. In contrast to most rule-based systems, Soar uses rules to support the explicit selection, application, and termination of "operators". Operators include the primitive actions being taken in the task, such as selecting a missile or firing a missile. Operators also include more abstract actions, such as intercepting an opponent or using a specific tactic in an air-to-air engagement. These abstract actions are proposed and selected by rules, but are implemented by casting them as
goals, which are in turn achieved by dynamically decomposing them into sequences of operators, which in turn are selected by more
rules. The rules for selection can be sensitive to the current situation delivered by the sensors, specifications of the current missions,
and the set of goals the entity has dynamically created. This organization provides the reactivity of rule-based systems, together with
the goal-directedness of traditional operator-based systems.
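The following is a minimal sketch of this operator organization, not of Soar itself: rule-like proposal functions nominate operators for the current situation, primitive operators act directly, and abstract operators become goals that are dynamically decomposed into further operators. All names and structures here are illustrative assumptions.

    # Sketch of rule-driven operator selection with goal decomposition.
    # This illustrates the organization described above, not the Soar
    # architecture; all names are hypothetical.

    def propose_operators(situation, goal):
        """Rule-like proposals: each test of the situation may nominate an operator."""
        proposals = []
        if goal == "fly-mission" and situation.get("bogey-detected"):
            proposals.append(("intercept", "abstract"))
        if goal == "intercept" and situation.get("bogey-in-range"):
            proposals.append(("select-missile", "primitive"))
        if goal == "intercept" and situation.get("missile-selected"):
            proposals.append(("fire-missile", "primitive"))
        return proposals

    def pursue(goal, situation, apply_primitive):
        """Select one proposed operator; abstract operators become subgoals."""
        proposals = propose_operators(situation, goal)
        if not proposals:
            return
        name, kind = proposals[0]          # a real system would rank by preferences
        if kind == "primitive":
            apply_primitive(name, situation)              # e.g., command the vehicle
        else:
            pursue(name, situation, apply_primitive)      # decompose into suboperators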
Together with ModSAF, the SMI hides all of the nitty-gritty details of the underlying networking and terrain formats from the
behaviors in Soar. The behaviors only need to know about the specific sensors (vision, radar, and radio), weapons systems, and
vehicle controls of the vehicles they fly. As a result, the behaviors did not have to change as new versions of the real-time networking
infrastructure were released.
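As a rough sketch of the kind of mediation this implies, the following per-tick loop copies sensor readings in and control outputs back, so the behaviors never touch the networking or terrain layers directly. The interfaces shown are hypothetical stand-ins, not the actual SMI or ModSAF APIs.

    # Rough sketch of per-tick mediation between a behavior engine and the
    # simulation's sensors and controls (hypothetical interfaces).
    def run_entity_tick(sim_entity, behavior_agent):
        # 1. Translate simulation state into the symbols the behaviors expect.
        percepts = {
            "radar": sim_entity.radar_contacts(),
            "radio": sim_entity.radio_messages(),
            "ownship": sim_entity.flight_state(),
        }
        behavior_agent.update_input(percepts)

        # 2. Let the agent run one decision cycle over its rules and goals.
        commands = behavior_agent.decide()

        # 3. Translate the agent's abstract commands into vehicle/weapon controls.
        for cmd in commands:
            if cmd.kind == "turn":
                sim_entity.set_heading(cmd.heading)
            elif cmd.kind == "fire":
                sim_entity.fire_weapon(cmd.weapon)
            elif cmd.kind == "radio":
                sim_entity.send_radio(cmd.message)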
Two important design decisions in the SMI increased the modularity of TacAir-Soar. The first was to separate out behaviors from the
operations of the sensors and weapons as much as possible so that in part, the interface was similar to the interface provided to human
pilots. The original ModSAF often had the detailed operations of weapons or sensors embedded within the ModSAF behavior
representations (tasks and task frames).
The second design decision was to create interfaces to general classes of sensors or weapons, so that the behaviors could interact with
all sensors of the same type in the same way.
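A minimal sketch of that second decision, using a hypothetical abstract base class: behaviors are written against a generic radar-like interface, so any sensor of that class can be substituted without changing the behaviors. The class and method names are assumptions for illustration only.

    # Sketch of a generic sensor-class interface (hypothetical names):
    # behaviors depend only on the abstract class, not on a particular model.
    from abc import ABC, abstractmethod

    class RadarSensor(ABC):
        @abstractmethod
        def contacts(self):
            """Return the current list of radar contacts."""

        @abstractmethod
        def set_mode(self, mode):
            """Switch radar mode, e.g., search vs. track."""

    class SimRadar(RadarSensor):
        """One concrete sensor bound to a simulation handle."""
        def __init__(self, sim_handle):
            self.sim_handle = sim_handle

        def contacts(self):
            return self.sim_handle.read_radar_contacts()

        def set_mode(self, mode):
            self.sim_handle.command_radar(mode)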
4.1 Interoperability Impact on Mission Behaviors
Although we tried to insulate the behaviors from the details of the underlying simulation structure, there were a few ways in which we
were unsuccessful.
First, there were minor changes made to the behaviors when the GCS coordinate system was added; however, after those changes, the
behaviors are now completely independent of the underlying coordinate system.
Second, one problem that plagued us when running on the wide-area network was that the behaviors would become swamped in processing visual sightings of ground entities. We first attempted to solve this by using dynamic subscriptions within the RTI. Specifically, we subscribed only to ground entities in the vicinity of air-to-ground targets. This definitely decreased both the behavioral and infrastructure processing for our planes; however, this alone was insufficient. The reason was that RTI subscriptions are based on cells over the terrain: any entity in any cell that overlapped the area we subscribed to would be included. We further restricted this by having the SMI filter out entities that were outside of the geographical area surrounding a target, as well as entities that were not of the type our planes were attempting to hit.
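A minimal sketch of that SMI-side filter follows; the names, fields, and radius are hypothetical, and distances are assumed to be in meters. Only ground entities close to the current air-to-ground target and of a targeted type are passed up to the behaviors.

    # Sketch of the SMI-side filtering of ground-entity sightings
    # (hypothetical names; distances in meters).
    import math

    def filter_ground_entities(entities, target_position, target_types, radius_m=10000.0):
        """Keep only entities near the target area and of a relevant type."""
        kept = []
        for e in entities:
            dx = e.x - target_position[0]
            dy = e.y - target_position[1]
            if math.hypot(dx, dy) <= radius_m and e.entity_type in target_types:
                kept.append(e)
        return kept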
Third, during the last year of preparations, the capability was added to ModSAF to sense cultural features on the ground, such as
buildings or bridges. Unfortunately, the interfaces, software, and associated documentation became available too late for us to develop
the SMI interfaces and TacAir-Soar behaviors in time for STOW.
4.2 Mission Execution Results
In STOW-97, we flew all of the missions in the ATO. Over the 48 hours (7am ET October 29 to 7am ET October 31, 1997) we
received 3 ATO's of 12, 24, and 12 hours each. There were a total of 722 FWA scheduled flights, including defensive-counter air,
close-air support, suppression of enemy air defense (SEAD), strategic attack (air-to-ground attacks), escorts, airborne early warning,
Recon/Intell, and tankers. The missions varied in length from 90 minutes to 8 hours, with the median being three hours. At any one
time, there were from 30 to 100 planes airborne on up to 28 Pentium Pros. We flew over 15 different types of Navy and Air Force
aircraft.
The planes carried out their missions without human intervention, flying according to doctrine and accepted tactics. The support
personnel were kept busy loading missions to fly, working on the ATO with the Exercise Editor, monitoring the planes, initiating on-call close-air support missions, and talking to visiting VIPs. However, the support personnel were never overwhelmed. Many times
during the night shift, there was little to do but watch the planes fly their missions. At most times during the exercise, there were only
one or two people devoted to monitoring the behavior of up to 100 Soar entities.
4.3 Mission Execution Lessons Learned
From our successes, the primary lesson learned was that AI technology is mature enough to be used to create high-fidelity, computer-generated forces. More specifically, a symbolic rule-based architecture is sufficient, and additional capabilities, such as case-based
reasoning, fuzzy logic, or real-time planning are not required.
An important lesson for the future is to develop interfaces between sensors or weapons and behaviors that are independent of the
behavioral representation formalism. If this could be done for all services and all vehicles, it would make behavior development much
easier.
Although most of our missions flew without any problems, we did have some missions that had to be aborted or manually corrected.
The majority of these problems were caused by human operator errors, where incorrect mission data were specified using the Exercise Editor or incorrect communications were sent to the agents while they were on their missions. This implies we need to enhance our
tools to reduce the likelihood of operator error and to enhance the agent behaviors to flexibly handle incorrectly specified mission
information.
Additional errors arose because of interoperability problems:
1. We did not fly any EA-6B missions after the first group was launched, and we replaced all EA-6Bs with FA-18s in the remaining flights. The problem was that the EA-6B included the ESM sensor added for the Recon/Intell missions flown by Sytronics. Because of the enormous number of ESM emissions in the synthetic environment, and the limited time for building autonomous Recon/Intell behaviors, these sensors and their processing overwhelmed the workstations. If we had realized this was a problem earlier, we would have disabled the ESM sensor for our missions.
2. The Ordnance Server crashed multiple times and caused many of our air-to-ground missions to be ineffective, because we lost the ability to employ Maverick missiles against ground targets.
3. A U.S. Patriot battery shot down one of our planes because a network glitch temporarily disabled the IFF of our planes.
4. The most serious problem was that the planes were unable to target and destroy SAM sites using HARM missiles. This problem was caused by shortcomings in the radar model of the SAM sites and in the radar warning receivers of our planes. Operationally, this meant that our strike packages took unacceptably high losses during ground attacks.
This final problem illustrates a more general point. The behavior of our entities is greatly constrained by the quality of their sensors and
weapons. Without sensors and weapons that provide the capabilities of planes in the real world, our simulations will always fall short,
no matter how good the underlying behaviors are.
5. Mission Command and Control
Although our entities are autonomous, they had to interact with the established military command and control during a mission. For
STOW-97, command and control for air was provided by humans acting as controllers. For example, during a defensive counter air
mission, a flight of two aircraft might be flying in a racetrack defending an area. They would be in communication with a human
controller on an AWACS. The controller provides them with situational awareness (information on radar contacts), and some
directives, such as to change the position of their racetrack. Similarly, an on-call close air support mission would fly to a waypoint and
contact a controller who would further direct the mission to a specific target. This type of command and control was critical for
involving our training audience. From the behavioral side, we needed to support communication, as well as the dynamic modification
of missions.
TacAir-Soar communicated with other entities via simulated radios, which were provided by ModSAF. The messages were formatted
in CCSIL. Overall we supported approximately 100 different types of messages which covered communications between TacAir-Soar
planes (for maintaining their command groups) and between TacAir-Soar planes and human controllers.
To include human controllers directly we had two different approaches. The "low-technology" approach was the Communication
Panel, which allows a human to easily compose CCSIL messages using a graphical user interface. The Communication Panel can be
created dynamically on any TacAir-Soar entity, and once created it can be used to send CCSIL messages from any entity to any other
entity. Its strengths are that it is easy to learn and use, and we can use it directly with any of our entities, bypassing the radio system and
CCSIL if necessary. In addition, it does not require any additional hardware, and it can be used from any workstation. This flexibility
was important in many tests where there were technical problems with radios and CCSIL. The one disadvantage with the
Communication Panel is that the human is not doing exactly the same task as they would in the real world – they are not speaking
their commands.
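As an illustration of what the Communication Panel does on behalf of the user, the following sketch composes and checks a structured control message before it would be sent over a simulated radio. The message types and field names are hypothetical and do not reflect the actual CCSIL message formats.

    # Sketch of composing and validating a structured control message
    # (hypothetical message types and fields; not CCSIL).
    from dataclasses import dataclass, field

    @dataclass
    class ControlMessage:
        sender: str
        receiver: str
        message_type: str              # e.g., "vector", "cas-request"
        fields: dict = field(default_factory=dict)

    REQUIRED_FIELDS = {
        "cas-request": ["target", "ingress_point", "egress_point"],
        "vector": ["heading", "speed"],
    }

    def compose(sender, receiver, message_type, **fields):
        """Build a message, rejecting it if required fields are missing."""
        missing = [f for f in REQUIRED_FIELDS.get(message_type, []) if f not in fields]
        if missing:
            raise ValueError(f"missing required fields: {missing}")
        return ControlMessage(sender, receiver, message_type, fields)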
CommandTalk, developed by SRI & MITRE, provides direct speech input and output. During STOW-97, a human controller spoke
directly into CommandTalk, which translated the speech into CCSIL messages that were sent via simulated radio to TacAir-Soar.
CommandTalk is an emerging technology, and as such had a few disadvantages compared to the Communication Panel. It required
significant dedicated computational resources and there were significant delays in producing speech from CCSIL messages. It also
was more difficult to configure for a new exercise, requiring off-line preparation of all callsigns.
5.1 Mission Command and Control Results
All air communication during STOW-97 was done using CCSIL and simulated radios. The underlying network, radio, and CCSIL
infrastructure were very robust during STOW-97. CommandTalk was used during two 12-hour periods by a single controller. All
other control was performed using the Communication Panel. Command and control interactions included changing CAP stations,
directing fighters to intercept enemy planes, vectoring aircraft along various desired routes, and running numerous on-call close-air
support missions against a variety of targets (including ground vehicles, buildings, runways, and ships).
5.2 Mission Command and Control Lessons Learned
In developing our entities, we continually stressed their autonomy. However, during the final system test and the dress rehearsal, we needed to improve the control humans could have over the missions. We scrambled to increase the range of messages that could be received by our aircraft, and achieved a sufficient level for STOW-97 to be successful. An important lesson was that more interaction was necessary – although there is a fine line between more control and over-control. Controllers, by their nature, like to control, and there was some evidence that they will quickly transition to unrealistic micro-control of automated forces if given the chance.
A second lesson was that communication was more difficult than it might have been because of a few features of CCSIL.
New messages could not be defined locally and required a new release of the software. Thus, we were required to specify all
messages we were going to be using months before STOW-97. Although that does not seem like much of a hardship, we did not
discover many of the communications we needed to support the training audience until we had them use the system, which was
possible only one month before the final exercise.
The third lesson is the same lesson we learned before – in interacting with other systems, we must do error checking and create
behaviors that are robust in the face of errors. In command and control, the messages sent by controllers could include errors that
caused inappropriate behavior. For example, in one close-air support mission request, the operator did not include an egress point.
The egress point was required by TacAir-Soar to correctly fly the mission, but the Communication Panel made it easy to omit the egress point. In response, we changed the Communication Panel; however, further changes should be made to ensure that only correct messages can be generated. This is extremely difficult to do completely, though. For example, one problem that
arose was that controllers would direct planes along a specific vector, and then forget about them. The Communication Panel should
allow this type of message, but the automated force should be "smart" enough to contact the controller if it doesn’t hear back after
some period of time.
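A minimal sketch of the kind of check we have in mind follows; the class, method names, and timeout value are assumptions for illustration. After accepting a vector, the entity queries the controller if no follow-up message arrives within some period.

    # Sketch of a "forgotten vector" watchdog (hypothetical names; times in seconds).
    class VectorWatchdog:
        def __init__(self, timeout_s=300.0):
            self.timeout_s = timeout_s
            self.vector_accepted_at = None

        def on_vector_accepted(self, now):
            self.vector_accepted_at = now

        def on_controller_message(self, now):
            self.vector_accepted_at = None      # controller is still engaged

        def check(self, now, send_radio):
            """Call periodically; queries the controller if it has gone silent."""
            if self.vector_accepted_at is not None and now - self.vector_accepted_at > self.timeout_s:
                send_radio("Request further instructions; still on assigned vector.")
                self.vector_accepted_at = now   # avoid repeating the query every tick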
6. Mission Reporting
While developing behaviors, it is easy to think that once the planes perform their missions, such as attacking a ground target or
intercepting an enemy plane, our work as developers is done. In contrast, the operational community thinks the behavior is a mere
precursor to the really important activity, which is providing the reports back to the training audience. To support the mission
reporting, or BackTEL, our planes generated mission reports throughout their missions. These included reports on take-offs, landings,
and the results of missions. All of these messages were sent as CCSIL messages over the simulated radios to the AWOC. The AWOC
would collate these messages and send back summary reports to the Air Operations Center.
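As an illustration of the AWOC's collation step, the sketch below groups incoming reports by mission and reduces them to one summary per mission. The report fields are hypothetical and do not reflect the actual CCSIL report formats.

    # Sketch of collating per-aircraft mission reports into per-mission summaries
    # (hypothetical field names; not the actual CCSIL report formats).
    from collections import defaultdict

    def collate_reports(reports):
        """Group reports by mission id and build one summary per mission."""
        by_mission = defaultdict(list)
        for r in reports:
            by_mission[r["mission_id"]].append(r)

        summaries = []
        for mission_id, items in by_mission.items():
            summaries.append({
                "mission_id": mission_id,
                "takeoff": any(r["kind"] == "takeoff" for r in items),
                "landing": any(r["kind"] == "landing" for r in items),
                "bda": [r["detail"] for r in items if r["kind"] == "bda"],
            })
        return summaries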
6.1 Mission Reporting Results
The aircraft successfully sent over 1,500 mission reports back to the AWOC, which covered all aspects of their missions. The AWOC
relayed approximately 700 reports back to CTAPS. Although the number of reports is high, many of the most important reports
dealing with battle damage assessment were incomplete or never sent. One reason was that the aircraft had limited ability to see the effects of the damage from their air-to-ground attacks. Another factor was their inability to sense cultural targets, discussed earlier. A
third factor was that the planes were sometimes out of visual range when the damage occurred. There may have been other factors
that we were unable to determine during STOW-97. Needless to say, this is an area that needs more work.
6.2 Mission Reporting Lessons Learned
The main lesson here was that it is important to get the training audience involved earlier in the definition of requirements. We did
not consider this a critical aspect of our development until near the end, which led to unsatisfactory performance.
7. Conclusion
Interoperability had two dimensions for us: a vertical dimension where we built upon other software to provide complex high-fidelity
behavior, and a horizontal dimension where software systems had to interact with each other like people working together as a team.
Much of our success in STOW-97 can be attributed to the vertical system design, where abstractions hid the nitty-gritty details of lower levels – the inner workings of the networking, sensors, weapons, flight dynamics, and radio communication systems could be
modified and improved with minimal to no change in behaviors. Sometimes these lower levels peeked through, providing excuses for
phone calls, meetings, and friction between groups, but for the most part, they remained hidden.
The horizontal dimension provided us with more trials and lessons about interoperability. Many of the lessons we learned are
painfully obvious in hindsight: the importance of well-defined, flexible interfaces and communication languages; the need for error
checking of all data received from outside of the application be it entered by a human or sent from another application; and the need to
develop test programs to substitute for other programs. Even though these are obvious, they are worth repeating, because we are sure
we will be reminded of them many times in the future.
8. Acknowledgments
Our work on simulated pilot behaviors was a small part of the overall air development effort and was supported by many other
organizations. The ones we worked with most closely are listed here. The Soar/IFOR group at USC/ISI worked with us on the initial
implementations of automated fixed wing pilots and continued to be collaborators with us on IFOR development throughout this
project as they developed rotary wing pilots [9]. BMH Associates provided our subject matter experts and did all the on-site
development and support of the Oceana hardware and software installation. Lockheed Martin Information Systems (originally a part
of BBN, and then Loral) developed the underlying software to support the simulation of vehicles, weapons, and sensors, and did significant work on the integration of the overall simulation software. MITRE developed CCSIL, the language for communicating
between vehicles. Sytronics developed behaviors for intelligence missions and the weapons editor component of the Exercise Editor.
SRI and MITRE developed CommandTalk, a speech-to-text system for command and control. SAIC developed MRCI, which
translates an ATO into CCSIL. The Electronic Systems Command at Hanscom Air Force Base managed the overall effort. Finally, we
give special thanks to Capt. Dennis McBride, who had the faith and vision to initiate this project, and Cmdr. Peggy Feldmann who
had the faith and patience to see this project through to STOW-97.
9. References
[1] R. Calder, J. Smith, A. Courtemanche, J. Mar, & A. Ceranowicz, "ModSAF Behavior Simulation and Control."
Proceedings of the Third Conference on Computer Generated Forces and Behavioral Representation, 1993.
[2] K. J. Coulter, & J. E. Laird, "A Briefing-based Graphical Interface for Exercise Specification." Proceedings of the
Sixth Conference on Computer Generated Forces and Behavioral Representation. pp. 113-117, Orlando, FL, July 1996.
[3] S. M. Hartzog, M. R. Salisbury, "Command Forces (CFOR) Program Status Report." Proceedings of the Sixth
Conference on Computer Generated Forces and Behavioral Representation. pp. 11-17, Orlando, FL, July 1996.
[4] J. E. Laird, W. L. Johnson, R. M. Jones, F. Koss, J. F. Lehman, P. E. Nielsen, P. S. Rosenbloom, R. Rubinoff, M.
Tambe, J. Van Dyke, M. van Lent, and R. E. Wray, "Simulated Intelligent Forces for Air: The Soar/IFOR Project
1995." Proceedings of the Fifth Conferences on Computer Generated Forces and Behavioral Representation, pp. 27-36,
May, 1995.
[5] J. E. Laird, A. Newell, and P. S. Rosenbloom, "Soar: An Architecture for General Intelligence." Artificial
Intelligence, 47 (1-3), pp. 289-325, 1991.
[6] P. S. Rosenbloom, J. E. Laird, A. Newell, The Soar Papers: Research on Integrated Intelligence, MIT Press, 1993.
[7] M. R. Salisbury, "Command and Control Simulation Interface Language (CCSIL): Status Update." Proceedings of
the Twelfth Workshop on Standards for Interoperability of Defense Simulations, pp. 639-649, Orlando, FL, 1995.
[8] M. Tambe, W. L. Johnson, R. M. Jones, F. Koss, J. E. Laird, P. S. Rosenbloom, and K. Schwamb, "Intelligent
agents for interactive simulation environments." AI Magazine, 16(1), 1995.
[9] M. Tambe, K. Schwamb, and P.S. Rosenbloom, "Building intelligent pilots for simulated rotary wing aircraft."
Proceedings of the Fifth Conference on Computer Generated Forces and Behavioral Representation, pp. 39-44, May,
1995.
Author Biographies
JOHN E. LAIRD is an Associate Professor in the Department of Electrical Engineering and Computer Science at the University of
Michigan. He is Director of the Artificial Intelligence Laboratory. He is the principal investigator and manager of the Soar/IFOR
FWA project and a co-developer of TacAir-Soar.
KAREN J. COULTER is a Systems Research Programmer in the Department of Electrical Engineering and Computer Science at the
University of Michigan. She is the developer of the Exercise Editor.
RANDOLPH M. JONES is an Assistant Research Scientist in the Department of Electrical Engineering and Computer Science at the
University of Michigan. He is a co-developer of TacAir-Soar and is the developer of the Communications Panel.
PATRICK G. KENNY is a Systems Research Programmer in the Department of Electrical Engineering and Computer Science at the
University of Michigan. He is the developer of the AWOC.
FRANK KOSS is a Systems Research Programmer in the Department of Electrical Engineering and Computer Science at the
University of Michigan. He is the developer of the Soar-ModSAF-Interface.
PAUL E. NIELSEN is an Assistant Research Scientist in the Department of Electrical Engineering and Computer Science at the
University of Michigan. He is a co-developer of TacAir-Soar.