The Nature of the Social Agent
Kathleen Carley
Department of Social and Decision Sciences
Allen Newell
Departments of Computer Science and Psychology
Carnegie Mellon University
Draft of 30 April 90 23:55 — Do not quote or distribute.
This work was supported in part by DARPA, Basic 87-90
The Nature of the Social Agent
Abstract
The social sciences assume man to be an inherently social agent without specifying the basis for such
"socialness" One result is a jumble of views as to the nature of the social agent. What is needed is a single,
adequate scientific theory of the human as a social agent that permits understanding this variety of views.
We present a framework for defining the social agent. We aim for a conception rich enough to do justice to
the diversity of views, but precise enough to support a rigorous science of social systems. We develop this
framework by answering the question: What is necessary in order to build an adequate artificial social
agent? Current theories of cognition provide an analytical tool to ask critically what more is needed to
attain social behavior — to peel away what is understood about individual cognition so as to reveal
wherein lies what is social. We fractionate the total set of agent characteristics in order to produce the
Model Social Agent, which is a set of increasingly inclusive models, each one more adequate to being the
social agent required by social science. The fractionation reflects both limits to the agent's information
processing capabilities and enrichments of the content of the information available to the agent that
together enable it to be social. The resulting Model Social Agent can be used for analytic purposes. We
use it to examine two social theories, Festinger's Social Comparison Theory and Turner's Social
Interaction Theory, to determine how social such theories are and from where they get their action.

Keywords: Social agent, Society, Self, Cognition
The Nature of the Social Agent
The social sciences assume man to be an inherently social agent1 and the assumption permeates all
their works. But what exactly this means and wherein lies the inherent "socialness" remains unclear.
Consider only the numerous characterizations of social man that have been put forward at one time or
another: Rousseau's noble savage, homo economicus, Skinner's contingently reinforced man, rational man,
boundedly rational man, the imperfect statistician, Mead's symbolic man, Blau's man as a bundle of
parameters, man as a social position, homo faber (man the toolmaker), homo ludens (playful man), and no
doubt others, especially as more ideology is permitted (e.g., the one dimensional man of Marcuse).
This variety might be viewed with equanimity simply as what arises naturally from the particular
purposes of authors and intellectual movements; certainly human nature is complex enough to admit of
multiple views. But this takes humans to be understandable only in rhetorical ways. What is needed,
instead, is a single, adequate scientific theory of the human as a social agent that would permit
understanding the variety of views above, laying bare their relations to each other and showing how man
can simultaneously have so many different "fundamental" natures.
The problem of the jumbled views of man is just one way to introduce the need for an adequate
theory of the social agent. The basic need is both more prosaic and more pervasive. A science of social
systems is a science of humans acting in relation to each other, within a physical and cultural environment
now largely of their own making. Almost everything in such systems depends on the nature of human
nature. Replace the humans with rocks, spiders, gophers, dogs, chimpanzees, or even computers, and the
resulting "social" systems are radically different. This fundamental point gainsays not at all that the social
level brings constraints to bear, so as to bring forth social regularities that appear autonomous and do not
depend on the nature of man. Withal, the nature of man as a social agent brings all this into being, even if
implicitly and in complex ways. Replace man with rock, etc., and these regularities go away.
In this paper we present a framework for defining the social agent. We aim for a conception that is
rich enough to do justice to the diversity of views above, but precise enough to support a rigorous science
of social systems. We develop this framework by answering the question: What is necessary in order to
build an adequate artificial social agent?

1Here and throughout, "man" refers to the generic individual of the species homo sapiens, not to the male of the species.

2Citations for these familiar terms should not really be necessary, but respectively they may be found in (Rousseau; Pareto; Skinner, 1953; Rapoport, 1961; Simon, 1957a; Kahneman, Slovic and Tversky, 1982; Mead, 1962; Blau, 1977; homo faber, Harre?; March, 1979).

To some social scientists this may seem an approach bound to
become trapped in the simplicities of machine-like systems. Exactly the contrary seems the case to us.
Current theories of cognition, based on information processing notions, are in many ways (though not all)
rich rather than impoverished. Moreover, these notions provide an analytical tool to ask critically what
more is needed to attain social behavior — to peel away what is understood about individual cognition,
narrowly conceived, so as to reveal wherein lies what is social.
There are other advantages to approaching the nature of the social agent through an artificial
embodiment. It allows us to imagine an ideal stringent test of a set of hypotheses about the social agent
that we refer to as the Social Turing Test.
The Social Turing Test: Construct a collection of artificial social agents according to the hypotheses and put
them in a social situation, as defined by the hypotheses. Then recognizably social behavior should result.
Aspects not specified by the hypotheses, of which there will be many, can be determined at will. The
behavior of the system can vary widely with such specification, but it should remain recognizably social.
The Social Turing Test3 is of course a sufficiency test; i.e., it is a test of the form if the agent has properties
x, y, and z, then it behaves socially. It takes for granted that humans will recognize social behavior in all of
its forms. Even though actually carrying out such a test is well beyond the art, it is useful to keep it in
mind, since this test expresses clearly in what image a model of the social agent should be constructed and
criticized.
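The logic of such a sufficiency test can be sketched in a few lines of code. The sketch below is illustrative only and assumes hypothetical builder and judge functions (build_agents, build_situation, recognizably_social) that a real test would have to supply; it makes no claim about how such a test would actually be implemented.

```python
from typing import Callable, Sequence

# Hypothetical types: a set of hypotheses builds agents and a situation;
# a human-like judge decides whether the resulting behavior reads as social.
Agent = Callable[[dict], str]      # maps an observed situation to an action
Hypotheses = dict                  # whatever the theory specifies
BehaviorTrace = list               # sequence of (agent, action) events


def social_turing_test(build_agents: Callable[[Hypotheses], Sequence[Agent]],
                       build_situation: Callable[[Hypotheses], dict],
                       recognizably_social: Callable[[BehaviorTrace], bool],
                       hypotheses: Hypotheses,
                       steps: int = 100) -> bool:
    """Sufficiency test: if agents built from the hypotheses, placed in the
    stipulated situation, produce behavior a judge recognizes as social,
    the hypotheses pass.  Unspecified details may be filled in at will."""
    agents = build_agents(hypotheses)
    situation = build_situation(hypotheses)
    trace: BehaviorTrace = []
    for _ in range(steps):
        for agent in agents:
            action = agent(situation)
            trace.append((agent, action))
            situation = {**situation, "last_action": action}  # situation evolves
    return recognizably_social(trace)
```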
Articulating a model of the human to be used within a scientific domain is a useful conceptual move.
It operates as a repository of essential properties. As important, by being a definite abstraction it permits
definitive analysis, so that arguments do not always have the character of indefinite appeal to an
indefinitely rich notion of humanity. It lets everyone use the same notion of the human, thus helping to
factor cleanly what are issues of sociology from what are issues of psychology. It abets the accumulation
of the important first-order implications of being human for social systems, because these implications
accrue to enriching the model of the social agent itself, while still maintaining its integral character and
availability for analysis. As an example of this last point, we note that augmenting a model of the agent to
include the ability to respond to interrupts and a learning mechanism (as was done in Soar), while increasing
the number of social and problem-solving behaviors the agent is capable of, did not cause the agent to
behave emotionally as was previously hypothesized (Simon %%%).
3The Social Turing Test is both weaker and stronger than the classic Turing Test. It is weaker because it doesn't require a
confusion between man and machine. It is stronger because one can plug in many variables for those not specified.
In enumerating all these good things, it is not our purpose to oversell the notion. It is, after all, only a
conceptual device. It will evoke its share of scholarly disagreement and be characterizable only within a
certain margin of fuzziness. However, a similar device has been used with some success in the area of
human-computer interaction (Card, Moran & Newell, 1983 Ch. 2), called the Model Human Processor
(MHP). This is an attempt to provide a theoretical model of the individual human user that designers of
computer interfaces can use to think about the interface and the interaction. It is based almost entirely on
cognitive psychology and attempts to integrate into a single framework large numbers of experimental
results on reactions, uncertainty, memory, learning, etc. A Model Human Processor for human-computer
interaction is, of course, much simpler than a Model Social Agent for all of the social sciences. But that
does not gainsay the benefits to be gained from attempting the latter.
Let us foreshadow our analysis of the Model Social Agent, so the reader can see where we are going.
We describe the social agent along two dimensions: processing capabilities and content (i.e., knowledge).
The end result is a set of models, each one more adequate to being the social agent required by social
science. We will fractionate the total set of agent characteristics. The fractionation will reflect both
limitations to the agent's capabilities that enable it to be social and conditions of its environment so that
social behavior will arise. We find that the closer we move to the social agent required by the social
sciences, the more limited the agent's capabilities become and the more detailed and complex the agent's
knowledge becomes. The resulting Model Social Agent is not simply the end point of the
two sequences (capability and content), but the two entire sequences with their fractionation of
characteristics and the interplay between the two dimensions. For analytic purposes, the sequences along
each dimension may be as important as the final social agent, as they tell us how to proceed in order to
create a more adequate model of the social agent.
We do not claim perfection for this analysis. Important aspects remain on the agenda for further
research. This analysis, however, lays the groundwork for more discriminative incorporation of results
from artificial intelligence and cognitive science, by showing that many characteristics of social agents also
hold for more general agents. Further, in its present form, the Model Social Agent provides interesting
results and perspectives. It reveals that many patently social theories are largely theories of nonsocial
behavior, which is not to disparage them, but only to show the source of their power (sections 3.1 and
3.2). It permits seeing clearly the relations between the jumble of views social scientists have espoused
regarding the nature of man (section 3.3). None of this constitutes the ultimate yield, which should be to
provide the model of the social agent that enters into theories at the social level. However, this last requires
sustained positive theory construction, whereas all we can provide here is introduction, analysis, and
perspective.
We present the fractionation of agent characteristics in a straightforward descriptive fashion. We
give this sharp analytical form by grounding it in a theory of the human cognitive agent that is embodied in
a specific information processing system called Soar. Then we turn to some applications of the Model
Social Agent that illustrate its usefulness and power. We end by using the Model Social Agent to discuss
how models of the agent extant in artificial intelligence can enter into social theory construction and what is
required to make them effective in doing so.
1. What Makes the Social Agent Social
A reaction to all of the views of social man that have ever been put forward is that they are simply all true.
Man is incredibly diverse, and any attempt to deal with this diversity ends with a scientific version of "let
me count the ways": an endless if pleasing enumeration. The requirement is to give useful scientific
form to this diversity, while still encompassing it all. Our attempt is outlined in Tables 1 and 2. It is an
analysis that consists of two sequences. In the first sequence, we start with a maximally capable agent and
successively restrict its architecture4, i.e., its information processing capabilities, and so successively obtain
a more and more realistic theory of the human being (see table 1). We note that as the capabilities of the agent
are decreased task performance may degrade but different, and sometimes even more complex, behaviors
emerge. This is because a more fully capable agent may not have need of certain behaviors. In the second
sequence, we successively add to the richness of the situation in which the agent must be able to function
and so cumulatively obtain a more and more realistic theory of the environment (see table 2). Although
expressed as situations, we may think of this second sequence as defining further agents, namely, those
with the internal knowledge and organization to be capable of operating in the stipulated environments.
Thus, increasingly rich situations correspond to increasing the complexity and detail of the content of the
information available to the agent. Such augmentations to the content of the information available to the
agent increase the agent's repertoire of actions, as certain actions require certain knowledge. As we move
out on either dimension we give abbreviations to each of the items, so as to be able to refer to them in later
analyses. Before laying out these analyses we note that an agent exists in an environment (that is external
to that agent) and that this agent has an internal architecture, which controls its ability to handle information,
and a set of knowledge or information, which is to an extent dictated by the external environment. Thus,
4The fixed information processing structure of the agent is called the architecture after the use of the term for the fixed structure of
a computer.
the agent's behavior is determined jointly by its architecture and the external environment in which the
agent is situated.
Table 1: Increasingly limited processing capabilities
1. The Omniscient Agent (OA)
Task-environment + goals => behavior
2. The Rational Agent (RA)
OA + knowledge => behavior
3. The Boundedly Rational Agent (BRA)
RA + processing => behavior
4. The Cognitive Agent (CA)
BRA + interrupt mechanism => behavior
5. The Emotional Cognitive Agent (ECA)
CA + Emotions => behavior
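Read as a cumulative series of refinements, Table 1 lends itself to a simple rendering as a type hierarchy. The sketch below is purely illustrative; the class and attribute names are invented here and are not part of the framework.

```python
from dataclasses import dataclass, field


@dataclass
class OmniscientAgent:                          # OA: task environment + goals => behavior
    task_environment: dict
    goals: list


@dataclass
class RationalAgent(OmniscientAgent):           # RA: OA + the agent's own (partial) knowledge
    knowledge: dict = field(default_factory=dict)


@dataclass
class BoundedlyRationalAgent(RationalAgent):    # BRA: RA + limited processing
    stm_capacity_chunks: int = 7                # e.g. a short-term memory limit
    decision_time_seconds: float = 1.0          # e.g. immediate actions take about a second


@dataclass
class CognitiveAgent(BoundedlyRationalAgent):   # CA: BRA + an interrupt mechanism
    interruptible: bool = True


@dataclass
class EmotionalCognitiveAgent(CognitiveAgent):  # ECA: CA + emotions
    emotions: list = field(default_factory=list)
```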
In table 1 we briefly describe the basis for each successive agent's behavior. This description takes
the form of defining what types of things are present that limit the agent's ability and so determine
behavior. Additional details on these agents are provided in subsection 2.1. Each successive agent refines
the previous agent and has yet further limitations. In table 2 we describe the type of information content
available to each successive agent and assume the agent has access to information of the previous type.
This content is situation dependent and each situation is cumulative and builds on the previous situation.
For example, one cannot assume a social structural situation without previously assuming that there are
multiple agents. Thus we list under each situation the characteristics of the situation that are "new" at this
point in the sequence. Details are provided in subsection 2.2.
As we move out on either dimension, the model of the agent is becoming theoretically richer. It is
becoming a closer approximation to a model of a social human being. The point simultaneously at the end
of both dimensions is a useful candidate approximation to a social human being which we will term simply
the social agent. We refer to the entire framework, the two dimensions and associated fractionation
schemes, as the Model Social Agent and so emphasize that it remains an abstraction and approximation to a
real human agent.
Table 2: Increasingly rich situations each with related knowledge
1. Nonspecific Task Situation (NTS)
• Task knowledge determines behavior
2. Multiple Agent Situation (MAS)
• Real or implicit other exists
3. Real Interaction Situation (RIS)
• Interactions actually happen
• Timing makes a difference
• Physical environment makes a difference
4. Social Structural Situation (SSS)
• Groups exist
• Constraints on group membership exist
5. Multiple Goal Situation (MGS)
• There are competing and conflicting goals within agent at three levels
• Specification of the goal matters
• The three types of goals are task, individual, and social
6. Cultural Historical Situation (CHS)
• Behavior depends on socio-constructed meanings
• In the long term socio-constructed meanings can change
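Because each situation in Table 2 builds on the one before it, the knowledge content available to an agent in a given situation is the union of the new knowledge at that point and everything earlier in the sequence. A small illustrative sketch follows; the labels are paraphrases of the table, not an exhaustive inventory.

```python
# Each situation bundles the "new" knowledge listed in Table 2 together with
# everything inherited from the previous situations.  Labels are illustrative only.
SITUATION_KNOWLEDGE = {
    "NTS": {"task knowledge"},
    "MAS": {"real or implicit others"},
    "RIS": {"actual interactions", "timing", "physical environment"},
    "SSS": {"groups", "constraints on group membership"},
    "MGS": {"task goals", "individual goals", "social goals", "goal specification"},
    "CHS": {"socio-constructed meanings", "long-term change of meanings"},
}

ORDER = ["NTS", "MAS", "RIS", "SSS", "MGS", "CHS"]


def knowledge_available(situation: str) -> set:
    """The content available in a situation is cumulative: that situation's
    new knowledge plus all the knowledge of the earlier situations."""
    idx = ORDER.index(situation)
    content: set = set()
    for name in ORDER[: idx + 1]:
        content |= SITUATION_KNOWLEDGE[name]
    return content


# Example: an agent in the Social Structural Situation also has multiple-agent
# and interaction knowledge, but not yet multiple-goal or cultural knowledge.
print(knowledge_available("SSS"))
```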
In describing these dimensions we imagine the agent as though in a standard context. In the standard
context the agent is encountering a new situation with a given set of external capabilities for interacting
with the environment. This new situation is familiar to the social agent in being a situation of the sort that
the agent normally deals with; standard situations that have been studied include going to a restaurant
(FararoSkvoretz %%%), reading children's stories (Cicourel %%%), or going to the doctor's (Heise %%%).
We can think of these external capabilities as the typical human sensory and motor devices — hands, feet,
mouth, face, etc., to affect the situation and eyes, ears, skin, etc. to sense the environment. We are
concerned with the nature of the "inner system"5 that makes the interaction social, not with how the nature
of the interfaces (i.e. external capabilities) affects social behavior. The latter might be an interesting
question, but it is not ours.6
As we move out on these dimensions the issue is not to contrast the resultant agent with the human,
but to treat each agent as a successive approximation to the human. Implicit in this approach is the
argument that only a certain, and often minimal, amount of the architecture is needed to produce certain
behaviors. Similarly, often only a certain amount of the knowledge is needed to produce certain behaviors.
In other words, not everything is needed for a useful model; however, everything is needed for a model
capable of explicating all social behaviors described in table 1.
1.1. Increasingly Limited Processing Capabilities
The set of behaviors the agent can engage in are defined by the agent's cognitive architecture and its
knowledge. By iterating through agents, as we are about to do, we are altering the architecture but keeping
the knowledge fixed. In other words, each "agent" we describe can be thought of as a content-independent
description of the agent, as an abstraction away from task domain and information content. As we move
along this dimension we specify what further limitations there are on the agent's information processing
capabilities. In doing so, we are describing the model of the agent's architecture and not what behaviors
the agent can and cannot engage in.
1.1.1. Omniscient Agent (OA)
Our starting point is an agent who is omniscient, i.e., who knows all there is to know about the task
environment in which it is embedded. Thus, the agent's behavior is determined by the nature of the actual
task environment plus the goals of the agent. That is, it will take whatever actions are possible and
necessary to attain its goals.
This starting point seems about as far away from a social agent as possible. But that is not quite true.
In particular, it preserves an almost unlimited diversity of behavior. The omniscient agent will generate
those passionless mechanical behaviors that the environment calls for as long as its interface permits. It is a
much better starting point for defining a social agent than starting with some limited notion of physical or
biological system, and then asking what properties should be added in order to obtain human-like behavior.
5The inner system goes beyond cognition and includes emotion. It does not include the external capabilities.
6As to this latter question, it may be that social behavior requires sufficiently high communication bandwidths or the availability
of multi-media communication opportunities. In this paper, we will simply assume that the agent has an appropriately rich interface
that admits social behavior.
The omniscient agent is, of course, the original homo economicus. Its viability as an approximation
for some purposes of economics is indicated by its recent extension to task environments that are uncertain,
where it has become the rational expectations model (rational-expectations-XX). Characterizing the
environment as objectively uncertain does not change the omniscient status of the agent; the agent still
knows all there is to know.
1.1.2. Rational Agent (RA)
The first restriction comes from the recognition that an agent does not know everything about its task
environment. Thus, an agent has its own body of knowledge and it behaves rationally7 with respect to that
knowledge. The agent takes those actions that its knowledge indicates will attain its goals. The task
environment is largely mediated by this knowledge, but still influences the actual actions that can occur.
We leave the task environment in the formulas in Table 1 to indicate that the agent's knowledge is about the
task environment; hence an analyst's knowledge of the task environment offers an approximation to what
the agent knows.
This model of an agent is called a knowledge-level system in computer science (Newell, 1982).8 It is
an abstraction of an information processing system that has sufficient time and adequate methods to exploit
all of the knowledge it has acquired about its environment. The knowledge level system abstracts from the
way the knowledge is represented by the omniscient agent and it abstracts from the processing that is
required to extract from this representation the knowledge about how to act. Besides restricting the
performance expected of an omniscient agent, the rational agent adds an essential activity missing from the
latter, namely, the acquisition of knowledge. No such operation makes sense for the omniscient agent,
which in effect already knows all there is to know. The definition of a rational agent requires giving
perceptual or knowledge-input actions for interacting with the environment as well as effector actions for
changing the environment. From a more sociological perspective, rational agents exhibit the locality so
dear to organizational theorists. That is, the bodies of knowledge they have are conditioned by the places
they occupy in the organization (or in society). Despite having limited capabilities, the knowledge level
system or rational agent is still very much an idealization.
7Add a note on what is rationality. %%%
8Following economics, the omniscient agent might be taken to define rational man. Then this model of man might be called the
knowledge-level agent. But rationality does not have to do with what is available about the environment, but with the response to
whatever is available. So this seems a better choice to identify as the rational agent.
1.1.3. Boundedly Rational Agent (BRA)
The concept of bounded rationality was introduced by Herbert Simon (Simon, 1957b; Simon, 1979;
Simon, 1983; March and Simon, 1958) and has become familiar throughout the social sciences.
Dialectically, it emerged in reaction to the limitations of omniscient man as a model for the agent in
administrative organizations. But scientifically, it is simply an empirical fact that human organizational
agents are not even closely approximated by assumptions of omniscience. Indeed, even the assumptions of
complete and appropriate use of acquired knowledge (our rational agent) are often wide of the mark. As
always in science, the issue is not to appreciate such facts, but to find some adequate formulation to deal
with them. In our terms, the issue is to define an agent more limited than the rational agent. Limited
computational processing power is the key.
The boundedly rational agent is one that has limited abilities to process its knowledge about its task
environment. It remains procedurally rational, in Simon's phrase (SimonPR). It attempts to deploy its
processing capabilities so as to attain its goals. But its attempts to do this are themselves limited by its
limited abilities and knowledge. The developments in computer science, artificial intelligence, and
information processing psychology (now cognitive science) have provided the substantive underpinnings
which now typically characterize the boundedly rational agent.
The stipulation of a computationally-limited agent does not dictate the form or nature of the limits.
Early on, the concepts drawn from computation were general and qualitative, such as programmed vs
unprogrammed behavior (March and Simon, 1958). But as the study of cognition continued, a basically
simple picture emerged of the nature of the limitations. These are given in Table 3. The only structures
required are a long-term memory for permanent knowledge and a short-term memory for the immediate
situation. Given this, there is a unit of memory organization, the chunk, which permits rough quantitative
statements. The human, of course, is much more complicated than this.
1.1.4. Cognitive Agent (CA)
It is possible to delimit a more elaborated view of the human cognitive agent than the picture
sketched above. Thirty years of intense investigation in cognitive science has provided substantial
refinement. The cognitive agent, however, is not simply a refined version of the boundedly rational agent.
The cognitive agent is a finer grained model of the world in which it is possible to distinguish intention and
action, deliberate action and reaction, and in which articulated goals emerge from an unarticulated
background. And, the cognitive agent goes beyond the boundedly rational agent in that it automatically
responds to interrupts — changes in its environment.
Table 3: Tenets of the human boundedly rational agent.
1. Tasks are decomposed into goals and subgoals.
2. Task environments are encoded in internal representations.
3. Searches are conducted to find information or solve problems.
4. Immediate actions take about a second.
5. Short-term memory is of limited size (7 +/- 2 chunks).
6. Long-term memory is associative and hierarchically organized by chunks.
7. Chunks are built every couple of seconds (long-term memory acquisition) in an
automatic rather than deliberate fashion.
The cognitive agent we describe here flows from the Soar cognitive architecture (Laird, Newell, and
Rosenbloom, 1987), which is an ongoing effort to integrate what has been found out in cognitive science
into a unified theory of cognition (Newe89).9 Table 4 lists the key properties of the cognitive agent as
embodied in Soar. Table 4 refines Table 3 by putting the properties stated there into a more complete
operational context. Figure 2 gives a picture of how the components of the cognitive agent fit together.
Much detail of Soar remains suppressed. However, even in the background Soar plays an important role.
It demonstrates that all the properties set out in table 4 are embodied in a single operational system that can
problem solve, plan, learn, and interact with the world. This provides an important sufficiency test. And
the full Soar system has been demonstrated to have many of the characteristics of human cognition
(LewinNP89; Newe89; PolkN88; PolkNL89; RuizN89).
These characteristics are mostly not directly of
interest as aspects of a social agent (e.g., how strategy change occurs in the Tower of Hanoi puzzle), but
they serve as important evidence that Soar as a general model of human cognition is right.
*************** place figure 2 About Here **********
Structurally, the cognitive agent consists of an architecture, which holds its acquired experience in
symbolic coded form and enables the knowledge in these codes to be brought to bear on whatever current
tasks the agent is working on. An architecture is a fixed system of mechanisms, which by itself cannot
9Soar plays other roles as well. It is being used to form a second iteration of the Model Human Processor, the effort mentioned
above for the field of human-computer interaction. The Soar system is also being used for artificial intelligence research into learning
systems and expert systems. These roles are mutually compatible. We note them here because we will be extracting from Soar only
what is pertinent to the Model Social Agent.
Table 4: Key properties of the cognitive agent (CA).
1. Goals
• No structure on directed goals
• Goals are weak (they direct action but are not "functional")
• Goal stack is short and ephemeral
2. Components of the architecture
• Long-term memory
• Working memory
• Decision procedure
• Learning mechanism (chunking)
• Perception
• Action
3. Performance
• Provided by a continuous sequence of micro decisions (several per second)
• Salient knowledge continuously and automatically becomes available
• Candidate decisions and preferences about these decisions accumulate
• Agent decides according to whatever preferences are available
• If the agent cannot decide, it sets up a subgoal to obtain knowledge to be able to
decide
4. Awareness
• Goals are not articulated
• Engages in both reactive and deliberate actions
5. Learning
• Experiential learning occurs continuously every few seconds
• The agent automatically records the knowledge that resolved an impasse with the
pattern to re-evoke it (a chunk)
• Such chunks become a permanent addition to long-term memory
6. Interaction with the world
• Perception and action systems run independently of cognitive system
• Perception continuously adds coded knowledge to the working memory
• Cognition provides the intention for action by deciding on what commands to
make to the action system
• Action system modulates itself to accommodate commands
• Interrupts occur by new recognitions from long-term memory
perform tasks. The agent performs tasks — whether evaluating its position in the group, determining
whether to take part in a social movement, interacting with others, playing games, reading stories, or
whatever — by using the knowledge it has accumulated through experience. This includes its experience
of itself as a physical and biological organism.10
In our simple model this structure has only six components. There are two memories: a long-term
memory for permanent knowledge and a working memory for temporary knowledge about the current task
situation. There are four mechanisms: a decision process for taking the immediately available knowledge
and deciding what to do; a learning mechanism (i.e. the chunker) for creating permanent knowledge from
momentary experience; a perceptual system for initial encoding of the external world; and an action system
for taking actions (including speech and writing) in the external world from coded commands. The
mechanisms, and also the working memory, are fixed devices. The long-term memory on the other hand is
indefinitely expandable to accommodate the indefinitely large body of knowledge that cumulates from
experience.
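A skeletal rendering of these six components may help fix the picture; it is only a sketch, with invented names, and is not Soar's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class LongTermMemoryElement:
    """A pattern coupled with coded knowledge (see the next paragraph)."""
    pattern: frozenset        # elements that must be present in working memory
    knowledge: frozenset      # coded knowledge delivered when the pattern matches


@dataclass
class CognitiveAgentStructure:
    # The two memories: long-term memory is indefinitely expandable,
    # working memory holds only the current task situation.
    long_term_memory: list = field(default_factory=list)
    working_memory: set = field(default_factory=set)

    # The four fixed mechanisms (stubs only; each is elaborated in the text).
    def decide(self):
        """Decision process: act on the immediately available knowledge."""

    def chunk(self, experience):
        """Learning mechanism: turn momentary experience into permanent knowledge."""

    def perceive(self, world):
        """Perceptual system: initial encoding of the external world."""

    def act(self, command):
        """Action system: execute coded commands (including speech and writing)."""
```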
The long-term memory is active and associative. It consists of a large number of patterns coupled
with coded knowledge. Each pattern actively detects coded knowledge in the working memory that
exhibits its pattern. When it does, the coded knowledge is added to the working memory. This recognition
process operates continuously, so that whatever knowledge the agent has in its long-term memory that is
salient is continuously and automatically being added to working memory. Saliency is determined by
simply matching information in long-term memory with information current in working memory11 and not
by any deeper reasoning process, thus generating stereotyped behavior. Deeper reasoning can occur by
using the knowledge that is brought into working memory by the associative, recognition-like process.
Recognition and retrieval from long-term memory is very fast, occurring many times a second. It is the
fastest thing that happens in the cognitive agent.
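The recognition process just described can be sketched as a simple fixed-point loop: every stored pattern that matches the current contents of working memory deposits its coupled knowledge there, and the pass repeats until nothing new is added. This is an illustrative toy, not Soar's production matcher; matching here is the shallow set containment the text mentions.

```python
def recognize(long_term_memory, working_memory):
    """long_term_memory: list of (pattern, knowledge) pairs of sets.
    Shallow matching: a pattern fires when all its elements are already in
    working memory; its coupled knowledge is then added.  Repeat until quiescent."""
    changed = True
    while changed:
        changed = False
        for pattern, knowledge in long_term_memory:
            if pattern <= working_memory and not knowledge <= working_memory:
                working_memory |= knowledge
                changed = True
    return working_memory


# Toy example: a greeting in working memory evokes the knowledge that a reply is expected.
ltm = [({"other-says-hello"}, {"greeting-received", "reply-expected"})]
print(recognize(ltm, {"other-says-hello"}))
```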
When the agent engages in a task it conceives that task and its potential behavior in terms of a limited
conceptual arena.12 An arena consists of a mental model of the task situation, which includes the symbolic
representations of the actions that can be taken to affect the situation. These actions implicitly delimit the
possible situations that could arise by transforming the situation. This arena is a symbolic construction by
10Nothing in this is meant to choose sides on various nativist-empiricist controversies. The cognitive mechanisms are purely
biological, but the knowledge encoded in the system may come from either biological or experiential sources.
11Such a procedure is referred to as shallow pattern matching in computer science.
12These are called problem spaces in AI.
the agent to deal with some task. Thus, it is neither neutral nor necessarily objective (even over and above
what is perceived). The agent selects the arena in which it will attempt to solve a problem or perform a
task, and in doing so it implicitly accepts the definition of the situation implied by the arena.
The cognitive agent is continuously making decisions about what to do next. These are micro
decisions that affect the internal course of thought. They occur a few times a second. This may seem fast,
but think of how fast a person detects the point of a joke, or judges someone to be lying by a slight turn of
phrase and hesitation. These decisions are highly varied — to move on to another task, to reconceive its
model of the world, to examine the consequences of an action. The decision process extracts from all the
knowledge that has accumulated in the working memory the relevant knowledge that bears on what to do
next. These will be indications to do an action, or not do one, or to prefer one action over another, etc.
Then the decision is taken that is indicated by the accumulated preferences. The decision process does not
search long-term memory, rather, the latter continuously delivers salient information as the task situation
evolves over time. Likewise, the decision process does not determine the relevance of the salient
information. At the point of making a micro decision any knowledge that is not already in the form of an
explicit preference for an action is irrelevant. All the decision process does is gather up the preferences and determine what
action they jointly assert. There is much processing of coded knowledge to see relations, draw inferences,
make considered judgements, and determine new preferences. But these larger cognitive operations are the
resultants of many micro decisions; the micro decision itself simply works on the current cognitive state to
determine what to think in the next instant (half a second).
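In outline, then, a micro decision does nothing but collect whatever explicit preferences have accumulated and take the action they jointly indicate. The sketch below is illustrative only; representing the available knowledge as a set of preference tuples is an expository simplification, not Soar's preference scheme.

```python
def micro_decide(preferences):
    """preferences: tuples such as ("acceptable", action), ("reject", action),
    ("better", a, b) meaning a is preferred over b.  Returns the single action
    the preferences jointly indicate, or None when they are incomplete or
    conflicting (an impasse).  Knowledge not already cast as a preference is
    simply irrelevant at this point."""
    acceptable = {p[1] for p in preferences if p[0] == "acceptable"}
    rejected = {p[1] for p in preferences if p[0] == "reject"}
    viable = acceptable - rejected
    # Drop any candidate that some other viable candidate is preferred over.
    dominated = {b for tag, a, b in (p for p in preferences if p[0] == "better")
                 if a in viable}
    viable -= dominated
    return viable.pop() if len(viable) == 1 else None


# Toy example: two acceptable actions, one preferred over the other.
prefs = {("acceptable", "greet"), ("acceptable", "ignore"), ("better", "greet", "ignore")}
print(micro_decide(prefs))   # -> greet
```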
Cognitive life is thus a continuous flow of micro decisions supported by a continuous flow of
knowledge from long-term memory. This all happens automatically. The micro decisions constitute the
process of awareness — they are the finest-grained points at which the agent can be deliberate. Thus the
processes that make them up, the inflow from long-term memory and the decision process itself, are
therefore beneath awareness. Likewise, all the available coded knowledge that has poured into working
memory but has not been attended to by the decision process is outside awareness. Perhaps better
expressed, it is the subject of fleeting and uncontrolled awareness, such as all the details of a visually
apprehended scene. It is not too far-fetched to think of the long-term recognition memory as the inner eye
of the mind, with working memory as what is available to inner view — providing it is not taken as a
separate homunculus, but is seen as the agency of this continuous flow of recognition and micro decisions.
When one specific micro decision is indicated, the agent simply takes it and moves on to the next
micro decision a moment later. But the relevant knowledge does not always lead the way. It can be
incomplete or conflicting in many different ways. These prevent a micro decision from occurring — an
impasse occurs. The response of the architecture is to set up a subgoal to resolve the impasse, by adding
coded knowledge to working memory. This new knowledge changes the flow of cognition because
patterns in long-term memory recognize it and add other knowledge to the working memory which initiates
a new arena to address the problem, with knowledge about how to proceed. Further impasses can (and
often do) occur while resolving an impasse, so that an entire stack of subgoals can build up in the attempt to
respond to the incompleteness or inconsistency of knowledge in making micro decisions.
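The architecture's response to an impasse has a recursive shape that the following toy sketch tries to convey: if a micro decision cannot be made, a subgoal is set up whose only job is to add knowledge to working memory, and that attempt can itself impasse, building up a stack of subgoals. The function names and the depth bound are illustrative inventions.

```python
def decide_with_subgoals(working_memory, micro_decide, pursue_subgoal, depth=0, limit=10):
    """Try to decide; on an impasse (None), set up a subgoal that returns new
    knowledge for working memory, then try again.  The recursion depth plays
    the role of the (short, ephemeral) stack of automatic subgoals."""
    choice = micro_decide(working_memory)
    if choice is not None or depth >= limit:
        return choice
    working_memory |= pursue_subgoal(working_memory)   # may open a new arena
    return decide_with_subgoals(working_memory, micro_decide, pursue_subgoal,
                                depth + 1, limit)
```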
All this happens at the level of micro decision, hence automatically and beneath awareness. The
larger deliberate goals for the agent arise from coded knowledge in long-term memory. But the moment by
moment flow of cognition is governed by the automatic subgoals continuously arising and getting resolved
in the attempt to make micro decisions — which of course are themselves occurring in the attempt to
decide on external actions or arrive at judgments based on extended consideration, etc. Because of its
automatic nature, the agent cannot necessarily articulate these subgoals. Indeed, articulation is an external
action which occurs only if the knowledge can be brought to bear to find the right things in working
memory and issue the appropriate action commands, all of which requires an appropriate sequence of micro
decisions.
The flow of cognition is in the service of interaction with the external world. There is a flow of
coded perceptual knowledge into working memory to join with the flow from long-term memory to make
up the total available salient knowledge. Under normal conditions, the agent can be described as attending
to the external situation, with long-term memory automatically elaborating, embellishing and interpreting
the scene. On the action side, codes for commanding external action flow from long-term memory into
working memory as actions-to-be-considered. Micro decisions provide the intention to take a specific
external action. This releases these command-like codes to affect the motor system, which then moves in a
new way. Motor action takes on the order of seconds, which is longer than micro decisions, and it may
take much longer if the commands entrain self-sustaining motor action systems such as holding a scream.
Thus, there is a separation between intention and behavior, which opens the door for all the conundrums of
responsibility and attribution of personal causality. It should also be evident that the above structure
permits the separation of thought and action — that the agent can actually say "hello", or only think of
saying hello.
The structure of the agent, with its continuous automatic access to all of long-term memory and its
continuous sequence of micro decisions, makes it an interrupt-driven system. At any moment, no matter
what the agent has been doing, a decision can be taken that is radically different from the prior course of
processing. New knowledge from the external world, or new knowledge from long-term memory, can
provide the sufficient knowledge so that the next micro decision is to abandon the prior path and engage
upon a new one. Thus, in many respects the cognitive agent is distractible, and its problem is to keep itself
focussed enough to accomplish extended considerations. The other side of this coin is that no external
inputs necessarily can gain control if the knowledge that becomes available internally keeps winning all the
micro decisions. What does force the agent to attend is an impasse — but this is not quite like an
interrupt, because it is a condition where the agent cannot proceed on its own in any event.
We have been describing the process of cognitive performance, treating long-term memory as if it
were fixed. In fact, new knowledge is being added continuously to long-term memory in the form of new
patterns and coupled codes (since that is what long-term memory consists of). All new permanent
knowledge arises from experience — but it is the experience of resolving impasses to micro decisions.
Whenever the knowledge is developed to resolve an impasse (by cognitive performance), new patterns are
created to recognize the (internal) impasse situation in which this knowledge was needed, and the new
element of long-term memory is fashioned to deliver this coded knowledge whenever that pattern is
detected again. This will short-circuit the problem solving process of having to discover this knowledge
again (the process that has just been gone through). Thus, the experience accumulates in a form that
continuously permits the agent to proceed more effectively in its cognitions — based of course on the
presumption that past situations that caused difficulties (impasses) will occur again. There is nothing
forward looking or predictive about this learning, except this built-in assumption that tomorrow is like today.
It is a way of continuously formulating past experience for storage.13
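The learning mechanism just described, recording the knowledge that resolved an impasse together with the pattern that will re-evoke it, reduces to building new pattern-knowledge pairs of the kind the recognition sketch above used. Again this is only an illustration; Soar's actual chunking works over the traces of its productions.

```python
def build_chunk(impasse_situation, resolving_knowledge):
    """A chunk: the pattern is the (internal) situation in which the knowledge
    was needed, the coupled knowledge is whatever resolved the impasse.  When
    the pattern is detected again the knowledge is delivered directly,
    short-circuiting the problem solving that originally produced it."""
    return (frozenset(impasse_situation), frozenset(resolving_knowledge))


# Learning is automatic: the chunk is added to long-term memory without the
# agent deciding to store it, and without it knowing explicitly what was stored.
long_term_memory = []
long_term_memory.append(build_chunk({"other-says-hello", "no-preference-for-reply"},
                                    {"acceptable: greet"}))
```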
The encoding of experience in terms of the context of the micro decisions and the use of current
knowledge to resolve impasses, does not prevent knowledge from being encoded in terms of the experience
on the larger scale of minutes and hours in the external world. What it does do is pass that larger experience
through the filter of the way the agent is casting its own activities in terms of arenas of actions and
demands for problem solving within those arenas. Similarly, that the encoded experience is fashioned to
reproduce from memory the knowledge just found to solve a specific difficulty (e.g., incomplete knowledge
to prefer one action to another), does not prevent the pattern in long-term memory from detecting matching
13This description is highly abbreviated and may be far away from the reader's experience of computational mechanisms. It is thus
important to re-emphasize that Soar has such a learning mechanism, that it operates essentially as described, and that it demonstrates
the feasibility of such a scheme. Beyond Soar itself, the field of machine learning provides a substantial body of theory and
experimentation about such learning mechanisms that makes clear why this scheme works.
situations in apparently quite different circumstances and delivering its coupled coded knowledge as
salient. Thus the experience gathered in one context can be evoked in other contexts. All of this learning,
by the way, occurs automatically. The agent does not decide to store away various bits of knowledge, and
it does not know explicitly what has been stored away.
Let us summarize the cognitive agent. It operates by continuous recognition in an internal field of
salient knowledge, making micro decisions several times a second, which shape the flow of its
considerations. This internal cognitive world operates in a somewhat loosely coupled fashion with the
bodily systems of perception and action. Cognition must operate with the mixture of knowledge provided
by perception and elicited experience. But it can engage in planning and imaginings that are decoupled.
Similarly, perception and action proceed at their own timescales. And cognition arrives at intentions which
then have to modulate the ongoing activity. It may be useful to think of the cognitive part of the agent as
having supervisory control over the perceptual-motor system.
We have emphasized the temporally microscopic character of the agent, because that is, in fact, the
scale at which the fixed architectural mechanisms operate. The scale at which social behavior occurs seems
to be much longer — hours, days, weeks, etc. But the flexibility of the cognitive agent arises because the
time grain is so fine that it can engage in extended considerations in a way that takes into account
situational knowledge mixed with its own accumulated knowledge. Consonant with this is that the socially
significant act, e.g., the nod of acceptance or the curt turning away, can occur almost instantly on the basis
of immediately presented knowledge.
We have also emphasized that the agent proceeds in many respects automatically and without
awareness. This is because the fixed architectural mechanisms are exactly the locus of such automatic
behavior. But it is important that a model of the cognitive agent provide an account of awareness, and not
take it as primitively given. Many of the ways in which the cognitive agent is more limited than the
rational agent lie in its inability to get complete access to all the knowledge that is encoded in its long-term
memory and in the fact that its processes proceed in some ways automatically and not in accord with what a god's-eye
view reveals could really solve its problems.
The above paragraph focussed on limitations. The other side of the coin should also be stressed. The
cognitive agent is capable of rational considerations in the pursuit of its own ends, with the capability of
responding rapidly to changing circumstances and knowledge, and throughout learning from its
experiences. It is thus a generally adaptive system.
1.1.5. Emotional-Cognitive Agent (ECA)
It is not difficult to spot a major lacuna in the cognitive agent. No matter how complete it is in
cognitive matters, it remains devoid of emotion. A cursory examination of the literature of social science,
not to speak of the world's literatures and the social world itself, establishes that emotions, affect and
feelings14 are an essential dimension of human social behavior. To attain an adequate Model Social Agent
we need to enrich the cognitive agent to have emotions.
Unfortunately, this is not just a presentation issue of a convenient fractionation into a sequence of
agents. A major conceptual scandal of the cognitive revolution is its failure to develop any notion of emotion
that fits in with the collection of cognitive mechanisms, such as those that comprise our cognitive agent.
There has always been a substantial stream of work on emotion (see (Izard, 1972; Izard, 1977; Kemper,
1987) for general overviews). But there was very little mutual contact with cognition, at least up to the
early 1980s (Norm80).
The issue is not simply one of getting communication between two hitherto distanced areas of
science — a matter of the sociology of science, so to speak. The cognitive revolution ushered in a major
shift in the style of theorizing: from variable-oriented to mechanism-oriented theories. These two styles of
theories are basically incompatible. Variable-oriented theories, which are still the norm in most of social
psychology and sociology, including the study of emotions, posit variables that characterize the persons or
situations. Experiments are run under varied conditions with predictions of the relative effects on the
variables and their interactions. Alternatively, qualitative and/or quantified data are collected from
individuals in "real situations" to test various predictions and examine the impact of the relative effects of
the various variables. Mechanism-oriented theories, which have come to dominate cognitive psychology,
characterize the mind by a set of the mechanisms that process coded information held in memories.
Experiments are run to identify these mechanisms and memories; prediction and explanation come from
inferring the behavior these mechanisms would produce. Our cognitive agent provides an exemplar. The
incompatibility is that no variable can be part of a system of mechanisms without being identified as to how
it arises from these mechanisms. Bluntly stated: By now, to bring emotion into cognition requires
hypothesizing what emotions are in terms of information processing mechanisms and memory
organizations. If, per contra, emotion is fundamentally different (if the mechanisms and codes are all cool,
so to speak), then the mechanisms of emotion and what they process must be posited — and also how they
14The term emotion will be used to cover the domain of emotions, affect and feelings. No single term does the job adequately and
no theory-neutral distinctions support separate subdomains for emotions, affects and feelings.
interact mechanistically with the cognitive mechanisms. Without this last, the theory effort fails, for the
two systems remain theoretically separate. Needless to say, either path seems incredibly difficult.
In fact, in the last decade there has been a major increase in work in the joint area of emotion and
cognition (Frijda, 1987; Ortony, Clore and Foss, 1987; Ortony, Clore and Collins, 1988), even extending to
establishing a new journal, Cognition and Emotion. By now there is no dearth of activity. The upshot so
far can be captured in two propositions. First, the cognitive function of much emotion is control — to
interrupt ongoing behavior and switch attention (Simon, 1967; Johnson-Laird). In the literature this
proposition often occurs in diluted form in taking emotion to be essentially concerned with appraisal,
without dwelling on the cognitive functions of appraisal. Second, the knowledge used to interpret and
respond to emotional situations and people can be explicated in fairly operational form. Thus, systems of
rules can be constructed that can interpret appropriately (very simple) stories that involve actors exhibiting
emotion (Dyer, 1983; Lehnert, 1981; Lehnert, 1987; Heise, 1977; Heise, 1978; Heise, 1979). These two
propositions represent important attainments. So far, however, they bypass the dilemma posed above.
When they speak mechanistically (e.g., in terms of interrupts and control), they seem no different than
existing information processing mechanisms. For instance, our cognitive agent has interrupt mechanisms,
which are evoked routinely in many tasks, but it does not thereby seem to have emotions. When they move
to characterizing the knowledge of the emotional world, they can (and do) avoid entirely saying in what
emotion consists.
In either case, the crux that seems necessary to describe an emotional-cognitive agent is missing.
There is no sense in just identifying emotion with an interrupt mechanism; our cognitive agent already has a
non-emotional interrupt mechanism. There might be an emotional interrupt mechanism in addition to a
cognitive one, because we are vertebrates and mammals, but then the nature of such a mechanism needs to
be specified. Likewise, our cognitive agent is surely capable of having the knowledge to interpret emotion
in others (though we might ask how it acquired it); but this doesn't say how our cognitive agent is in itself
an emotional being.
This paper is not the place to put forth a detailed theory of emotion in cognitive agents, even if we
had one to offer. It is essential, however, to define an emotional-cognitive agent. Our fractionation of the
total social agent provides us with some leverage. It lets us ask: How does emotion modify and limit the
behavior of the cognitive agent? This shift in perspective is not minor in its effects. Modern attempts to
characterize emotions (Frijda, 1987) find themselves assimilating to the emotional system immense
amounts of cognitive apparatus. That is, when one simply takes the phenomena of emotion sui generis as
the starting point, there arises an overwhelming need to bring in appraisal, action tendencies, behavior
control, etc., all categories that are central to cognition per se.
Table 5 lists the major phenomena of emotion that change the picture of the agent obtained just from
cognitive considerations, as embodied in the cognitive agent. We start (E1) by taking an emotion to
determine an orientation toward some events, agents or objects — angry at X, gratified over Y, fearful of
Z, warm and cozy with respect to A, etc. Events, agents and objects is the taxonomy used by Ortony, Clore
& Collins (1988) in their recent treatment of the cognitive structure of emotions; for short, we refer to any
of these as the target of the emotion. We hedge (necessarily, given the discussion above) about what it
actually is in the agent that produces this orientation. Having an orientation towards a target implies that
something is identified and delimited to be the target. Such capabilities are exactly what cognition provides
and are not posited anew for emotions. However, making that move — emotions are with respect to targets
as determined by cognition — already constrains what emotion (or an emotion system) can be. We hedge
again in the use of the term orientation. Operationally, an action, intent or appraisal concerning a target is
consonant or not with respect to an orientation, and to varying degrees. For example, hitting is consonant
with anger, hugging not (ceteris paribus). The point here is not whether observers can be accurate in their
judgments about consonance in actual situations, where all the complexity of intentions and hidden goals
enter in (the hug is used to break the ribs); the point is that an orientation provides the framework for
attempting such judgments. Table 5 adopts this framework.
The key feature is E2: emotions take hold of their agents independent of the cognitive state, given
that the targets are identified. Emotions are outside cognition, which neither bids them come nor banishes
them at will (so to speak). This is what makes the emotional-cognitive agent different from the pure
cognitive agent — there are independent sources of influence at work in the inner chamber of information
processing.
What actually differs behaviorally, of course, depends on how emotions interact with
cognition. The frame of reference for that interaction is the array of targets that cognition presents to
emotion.15 Given an array of targets, emotions arise. Each defines a notion of consonance with the
emotion; i.e., it produces an orientation against which aspects of cognition are more or less consonant. Any
aspect of cognition that permits such a relation is subject to being influenced (E3); chief among them are the
parts of the environment on which the agent is spending its time (on the target or not on the target,
depending on the emotion); appraisal (does the appraised entity increase or decrease the consonance) and
15Thus, emotion is completely dependent on cognition. This holds only to a point, because if emotion affects cognition, then it
influences the landscape of targets that cognition presents to emotion. Thus, the influences are mutual and potentially intricate.
Table 5: The phenomena of emotion.
E1. ORIENTATION. An emotion is oriented to a target determined by cognition.
E2. POSSESSION. An emotion takes hold of its possessor.
E3. COGNITIVE INFLUENCE. Any component of cognition that can vary in
consonance with an emotion will be affected:
E3.1. ATTENTION. What aspects of the world are attended to.
E3.2. APPRAISAL. How these aspects are evaluated.
E3.3. ACTION. What actions are selected and how they are performed.
E4. INTENSITY. The greater the intensity the more consonant are the effects.
E5. DIVERSITY. Emotions may have any targets determinable by cognition.
E6. PERSISTENCE. Targets continue to evoke their emotions.
E7. HABITUATION. Repetition of the occurrence of an emotion decreases its intensity.
action (again, does the action increase or decrease the consonance). The direction of influence is always
toward more consonance, and the intensity of an emotion is the extent of that influence (E4). Further, given
an array of targets (presented by cognition) the effect of the emotion may be different (love manifests itself
differently on different objects) (E5). Finally, emotions continue to be evoked as long as their targets exist
(E6), although there is habituation, i.e., the more often an emotion is manifest the lower its intensity (E7).
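One purely illustrative way to render E1 through E7, without taking any position on what emotions are, is to give each emotion a target, a consonance function over cognitive options, and an intensity that habituates with repetition; the emotion then biases, rather than directly produces, behavior. All names below are invented for exposition.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Emotion:
    target: str                           # E1: a target determined by cognition
    consonance: Callable[[str], float]    # E3: how consonant an option is with the orientation
    intensity: float = 1.0                # E4: the strength of the pull

    def habituate(self, rate: float = 0.9):
        """E7: each re-occurrence lowers the intensity."""
        self.intensity *= rate


def emotional_bias(options, emotions):
    """E2/E3: emotions act on cognition, not on behavior directly, nudging
    attention, appraisal, and action toward greater consonance, each in
    proportion to its intensity."""
    return max(options, key=lambda o: sum(e.intensity * e.consonance(o) for e in emotions))


# Toy example: anger at X makes confronting X more consonant than avoiding X.
anger = Emotion(target="X", consonance=lambda o: 1.0 if o == "confront X" else -1.0)
print(emotional_bias(["confront X", "avoid X"], [anger]))   # -> confront X
```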
Some essentials are missing from this picture. As described, it appears to be like a continuous field,
in which the actual cognitive processing is the resultant of all the emotions that tug on it in their own
direction of consonance and with their own intensity. We have not shown how such a continuous field
actually works; presumably this is intimately related to what son of things emotions are within the
cognitive system. Also, we haven't described the exact nature of an orientation. Is it a symbolic structure
of some kind, separate from or intermingled with the symbol structures that represent the the target? Is it a
set of operators for processing targets to yield consonance symbol structures? The picture certainly doesn't
settle many issues about emotions (is there a primitive set of emotions?). However, it does provide a quite
explicit statement of how emotion relates to cognition. And it some strong implications, e.g., emotions
affect cognition, not directly behavior, and cognition affects emotion in part by determining the target
array. Besides the highly dynamic effects of mutual influence, this latter opens the door to the elaboration
of emotions by learning.
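To make the field metaphor concrete, the sketch below is a minimal illustration of the kind of mechanism E1 through E7 describe. It is ours, not part of the framework: the names (Emotion, consonance, emotional_bias), the toy consonance table, and the weighted-vote rule are all hypothetical choices.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    """An emotion oriented to a target (E1), with an intensity (E4)."""
    target: str
    kind: str          # e.g., "anger", "love"
    intensity: float   # habituation (E7) would lower this over repetitions

def consonance(emotion: Emotion, option: str) -> float:
    """Degree to which a candidate action is consonant with the emotion's
    orientation; here a toy lookup, since what an orientation actually is
    remains the open question discussed in the text."""
    table = {("anger", "hit"): 1.0, ("anger", "hug"): -1.0,
             ("love", "hug"): 1.0, ("love", "hit"): -1.0}
    return table.get((emotion.kind, option), 0.0)

def emotional_bias(options: list[str], emotions: list[Emotion]) -> str:
    """Cognitive influence (E3): every emotion tugs on the choice toward its
    own consonance, weighted by its own intensity; the resultant wins."""
    def pull(option: str) -> float:
        return sum(e.intensity * consonance(e, option) for e in emotions)
    return max(options, key=pull)

# Cognition presents an array of targets; the evoked emotions then bias it.
emotions = [Emotion(target="rival", kind="anger", intensity=0.8),
            Emotion(target="friend", kind="love", intensity=0.4)]
print(emotional_bias(["hit", "hug"], emotions))  # anger dominates here
```

The sketch captures only the direction-of-consonance and intensity bookkeeping; how such a field is actually realized within the cognitive system is exactly the question left open above.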
The emotional-cognitive agent provides an appropriate terminal point for the sequence of agents
formed by constricting its processing powers, starting from the super-rationality embodied in the
omniscient agent. There are further limitations, analogous to emotion, i.e., aspects of the human that lie outside cognition and affect it. The human is a biological organism and as such is subject to fatigue, disease, and other biological effects. These are not emotions, but they do (or may; the evidence is not always clear) affect the very nature of the agent and how it operates. These considerations do not seem central. Hence, stopping with the emotional-cognitive agent will permit us to begin elaborating the
requirements for situation and knowledge. There may in fact be more "agents" between the cognitive agent
and the emotional-cognitive agent; however, future research on agent capabilities is needed to determine
whether this is the case.
1.2. Increasingly Rich Situations
As previously noted, the set of behaviors the agent can engage in is defined by the agent's cognitive
architecture and its knowledge. By iterating through "situations", as we are about to do, we are altering the
content of the information known by the agent and keeping the architecture fixed. As we move along this
dimension we specify what further enrichments there are to the agent's available information. Regardless
of its capabilities, if the agent is placed in too simple a situation it will not behave as a social agent. We
will demonstrate this point when defining the nonspecific task situation. The situation plays a somewhat
different role with different agents in our sequence. For the omniscient and rational agents, most scientific
investigations such as those done by economists have focused on understanding how the behavior of the
agent changes as the environment is varied. For instance, how does the behavior of homo economicus change as the distribution of social welfare in a market of a given structure is varied? The rational
agent, as opposed to the omniscient one, permits an additional degree of realism into such analyses, by
limiting the knowledge the agent has of the environment. But the knowledge and goals ascribed to the
agent are still dictated by the external, and often previous, situations. For instance, production, accounting
and sales people can be distinguished largely on the basis of their acquired expertise and knowledge of their
own organizations.
For the more refined agents — boundedly rational, cognitive, and emotional — the focus shifts to a
more internal view. There is a correspondence between the external situation and the internal capabilities
of the agent. An agent cannot deal with a situation without having acquired the knowledge to do so. The
architecture by itself is only a framework for processing knowledge about the environment, and although
emotions reflect intimately the nature of the biological organism and its requirements, they too do not
provide of themselves the capabilities to deal with external situations of various sorts.
Thus, we will proceed through a sequence of external situations that become increasingly richer in what they specify. The requirement is that the Model Social Agent be able to deal with such a situation. This is a requirement on the nature of the agent, namely, that it have the appropriate body of knowledge. Thus this sequence of situations is equivalent to a sequence of model agents of increasing capability. We describe each of these situations in a cumulative fashion, as each embodies the richness of the previous situation and indeed cannot exist without presupposing it. In addition, associated with each level of richness or complexity in the situation is a body of knowledge. The knowledge associated with a particular level is somewhat independent of the blocks of knowledge associated with other levels. Further, these blocks of knowledge are additive; i.e., an agent can have any block of knowledge without necessarily having the complete blocks of knowledge associated with the preceding environments. Historically, most discussions of social man place the agent in an external situation with a specific richness level, but do not treat the knowledge as cumulative, nor do they completely specify the knowledge for the situational level of interest. Hence, such agents, while theoretically in social situations of sufficient richness, are curiously devoid of the knowledge that a social agent in such a situation would actually have. Implicit in our sequence is that social man requires not only a sufficiently rich environment but all of the prerequisite knowledge.
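A minimal sketch of this cumulativity, under our own naming (the enum, its ordering, and the helper are illustrative only, not a specification from the paper):

```python
from enum import IntEnum

class Situation(IntEnum):
    """The sequence of increasingly rich situations."""
    NTS = 1   # nonspecific task
    MAS = 2   # multiple agents
    RIS = 3   # real-time interaction
    SSS = 4   # social structural
    MGS = 5   # multiple goals
    CHS = 6   # cultural-historical

def knowledge_blocks(level: Situation) -> list[Situation]:
    """Each situation presupposes the previous ones, so a model agent at a
    given level needs the cumulative stack of knowledge blocks."""
    return [s for s in Situation if s <= level]

# A model placed in the social structural situation but given only the SSS
# block would be "curiously devoid" of the prerequisite knowledge:
print(knowledge_blocks(Situation.SSS))  # the NTS, MAS, RIS and SSS blocks
```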
It is important to remember that we are specifying models of agents and not the agents themselves.
An actual human being in the world will face a concrete situation and deal with it with all the knowledge it
has, which will certainly include all the types of knowledge we will be specifying. The model agent abstracts away from substantial parts of the total knowledge, either by ignoring it altogether or by specifying it only in outline or generic form. Any predictions or explanations made with a given level of model agent can only be based on what the model explicitly specifies as in that agent. If we posit of an agent that it "has values" rather than that "it values socializing with others", then much less can be predicted with it. Thus, the question is: What knowledge of the environment must be included in a model of an agent for it to be a model of a social agent?
1.2.1. Nonspecific Task Situations (NTS)
Our starting point is what we will call a nonspecific task situation. We do not mean that the task itself is not specific, only that the knowledge that we specify as available in the agent does not give us specific details about the task, so that any analysis made using the agent can only make generic inferences. Thus, this is a model of an agent in a very specific task situation; we, however, just do not know what that task is. Further, associated with this specific task is a set of task knowledge. In order to make specific predictions about the agent's behavior we must provide it with this knowledge.
An interesting way to think about this is as an actual agent that comes to a task situation with only
some highly generic knowledge about situations in its long-term memory. It then must acquire the task and
deal with it only from this generic base, even though the task itself is fully concrete. We can illustrate this
by taking a quick look at some different types of artificial intelligence programs (see Table 6). Imagine these
programs as agents, arriving in some new situation where they will then do their thing, so that we can think
of the situation as being the input to the system. A theorem prover in some logical calculus knows only
that its input will arrive as a collection of theorems to assume and a theorem to prove. These may be about
anything at all. The theorem prover has a collection of methods to apply, but it must use the same methods
on every task. It may, of course, examine the task to see what methods might work (it will do that with
general methods, needless to say), but that is part of its behavior in the situation. The system is quite
general, but it has no knowledge at all about the situation (save the form of its input). We may take this as
a paradigm of the nonspecific task situation. Such an agent would not begin to behave as a social agent.
Next consider Soar, which is an AI system that is the basis of our cognitive agent. It is different from the theorem prover in that its architecture is like that of the human, rather than simply an applier of rules of
inference to logic expressions to derive new true expressions. But it is the same as the theorem prover in
that, in facing a new situation, it must be given a collection of problem spaces, including the operators that
comprise them. It too has no pre-existing body of knowledge to draw upon. It might learn something by
working in the present situation, but that is still radically different from the preparation that adult social
Table 6: Knowledge of the situation in AI systems.

                           Knowledge        Knowledge      Range
                           about methods    about tasks    of tasks
Logic theorem prover       some             no             one
Soar (cognitive agent)     lots             no             many
Chess program              limited          yes            one
Expert system              limited          yes            one
agents bring to a new situation. And certainly a nonspecific task situation is not the situation in which a
newborn human child is found.
To provide a better understanding of the agent in a nonspecific task situation we now contrast such
an agent with an agent in a highly specific task situation. The chess program and the expert system are
different from the theorem prover and the cognitive agent in that they are specific to a task. They come
with very definite bodies of knowledge about their task environments. With the chess program, much is
built right into the architecture — the board representation, the legal moves, etc. But knowledge of chess is
also built into its heuristics, e.g., its evaluation function, though there may not be a lot of it. With the
expert system there is also the complete preadaptation to the task and there is likely to be a relatively large
amount of specific knowledge about the domain in its rules. The chess program and expert systems differ
in their scope. The chess program and expert systems are extremely narrow with no capabilities for
moving outside their domains; the theorem prover and cognitive agent are quite general in scope. What is
missing for the latter is how much knowledge and of what kinds they should have as they enter a new
situation. It is doubtful that they can have all the knowledge they will use. By its very nature, a new
situation implies that an agent must acquire substantial new knowledge about it, in order to function.
These AI systems show that the agent is defined by both abilities and knowledge. Regardless of its
abilities, if the agent is placed in a nonspecific task situation it does not act like a social agent. In part, this
is because the situation itself — because it is non-task specific — does not provide constraints on behavior
or opportunities for action (there is nothing to do). And in part, this is because the agent has no knowledge
which enables it to function. That is, knowledge about a task (such as in the chess system) enables the
agent to act by constraining choices. But an agent with task knowledge, even an agent with knowledge about a wide range of tasks, will not behave as a social agent. In Weberian terms it will operate in a purely instrumental-rational or affectual manner (depending on whether it is a cognitive or an emotional-cognitive agent) but will not behave in a traditional manner.
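A toy illustration of the claim that task knowledge enables action by constraining choices (the sets and the filter below are ours, loosely echoing the chess example):

```python
# Without task knowledge every action is equally (un)available and there is,
# in effect, nothing to do; task knowledge prunes the possibilities.
all_actions = {"advance_pawn", "castle", "resign", "hug_opponent", "sing"}

def constrained_choices(actions: set[str], task_knowledge: set[str]) -> set[str]:
    """Task knowledge (e.g., the legal moves of chess) narrows the action set."""
    return actions & task_knowledge

chess_knowledge = {"advance_pawn", "castle", "resign"}
print(constrained_choices(all_actions, chess_knowledge))  # a workable choice set
print(constrained_choices(all_actions, set()))            # nonspecific task: empty
```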
1.2.2. Multiple Agent Situations (MAS)
From a social point of view, the most basic property of an environment is the existence of other
agents whether physically or symbolically. The mere existence of other agents helps to define the external
situation by noting that there is something out there "like the agent". The existence of such agents limits
the range of permissible actions in which the agent can engage much as the rules of a game define the set of
possible actions one can take and the physical layout of the room defines the possible walkways. From the
agent's standpoint, the assumptions that he makes about others narrow the range of interpretation, narrow the number of possible actions for all agents, including self, and narrow the range of expected reactions from others to his actions. Consequently, through this narrowing process, such assumptions admit planning. In other words, there are specific others, the agent has specific knowledge about others, and without such knowledge specific predictions of the agent's behavior cannot be made.
Simply assuming the presence of multiple agents brings us a long way toward the social agent. We now have agents whose range of possible actions is limited, who act in response to others, who can reason about others' behavior, and so on. Yet, the presence of others tells us nothing about the timing of behavior or which aspect of the social good to attend to. For example, simply knowing there are others out there and that they also like public television does not enable the agent to discriminate between the benefits of giving a donation today and giving one a year from now. While the presence of others may bring to an individual agent's attention the need for a clean environment and the need for cost-effective energy, the mere presence of others does not give the agent a way of acting in accordance with these multiple needs. Thus, timely and political behavior, which are certainly characteristics of the social agent, do not arise in the multiple agent situation. Further, such an agent (as long as it is not omniscient) would act in a rational manner but not in a traditional manner. The difference here is the difference between mechanically pouring and serving tea and performing a Japanese tea ceremony.
1.2.3. Real-time Interactive Situations (RIS)
The real-time interactive situation is characterized by the fact that interactions actually occur. This means that other agents, not under the control of the given agent, react to its actions and produce responses that are perceived by the given agent. This moves beyond the multiple-agent (game theoretic) situation, because these interactions interact with the process of cognition itself. Responses must be made within definite time limits, and the agent must not only spend less time interpreting the other agent's actions and pondering its own, but must manage its own mental resources and consider the benefits and losses of thinking versus acting. To be struck dumb when asked an embarrassing question often has consequences in and of itself.
Having an agent with real-time interactive knowledge thus gives us an agent who is more aware of the environment and can respond to it in a timely fashion. Since interactions actually occur, the differences between expectations and reality become factors which can affect future behavior. The agent in the real-time interactive situation can now engage in play and fantasy, as the agent now knows the difference between the imagined and the real. Having a RIS also means that the agent can now lose opportunities for action by not acting fast enough. It gives us an agent who can act rashly, who by acting quickly can garner more power, who can be overwhelmed by the deluge of information during a crisis, and so on. Despite these gains, however, we do not have a complete social agent. Specifically, the agent in the real-time interactive situation does not engage in political behavior, does not engage in rituals, does not exhibit beliefs or table manners, and acts in an amoral fashion.
1.2.4. Social Structural Situation (SSS)
The social structural situation is characterized by the presence of groups, rules for membership,
maintenance, and dissolution. The existence of groups and knowledge about particular classifications of
people, or particular roles, such as are embodied in their physiological, demographic, social, political, and
cultural characteristics (such as sex, age, and religion) enable mass analysis of their actions (expected or
actual). Such social structural knowledge increases the ability of the agent to plan, over and above that of
simply knowing there are other agents, as now the agent can make better guesses about the other given the
other's position in the social structure. The point, then, is that there is an underlying social structure, i.e., there are groups, there are entry requirements to join these groups, and members of groups may have particular characteristics and/or access to specific information. Further, without providing the agent with specific knowledge about the social structure, specific predictions of agent behavior cannot be made.
An agent in a social structural situation (with all the corresponding knowledge) begins to take on the characteristics of a social agent. We now have an agent who identifies itself and others by group membership and whose specific knowledge is constrained by social position (whom he or she knows and interacts with). At the macro level we now get such social behaviors as social mobility, class differences, and class identification. Yet, agents in such a situation will not react normatively to situations, they will not act both for their own good and for the good of the group, and they will not exhibit culture shock if placed in a completely different environment. Further, such agents will still be amoral.
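A small sketch, with invented numbers, of how structural knowledge sharpens the agent's guesses about another, compared with merely knowing that the other exists:

```python
# Hypothetical group profiles; knowing another's position in the social
# structure lets the agent replace an uninformative guess with a better one.
group_profiles = {
    "surgeons":      {"knows_anatomy": 0.99, "age_over_30": 0.9},
    "third_graders": {"knows_anatomy": 0.05, "age_over_30": 0.0},
}

def guess(attribute: str, group: str | None) -> float:
    if group is None:            # multiple-agent knowledge only
        return 0.5
    return group_profiles[group][attribute]   # social structural knowledge

print(guess("knows_anatomy", None))         # 0.5
print(guess("knows_anatomy", "surgeons"))   # 0.99
```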
1.2.5. Multiple Goal Situation (MGS)
A basic feature of the social agent is that its action is not oriented toward a single solitary goal that,
once achieved, causes all action to stop. For example, as noted by Marx, man continues to produce even
after there is no need. Rather, the social agent is beset by numerous goals which interact and may increase
or decrease in their saliency over time. An example of an agent in a multiple goal situation is the agent
who as a diplomat is trying to develop a treaty, achieve personal fame, and enable a safer world. Further,
the social agent is characterized not only by having multiple goals (that is purely a product of the
architecture and not the situation) but by simultaneously having particular types of goals. In other words,
the specification of the goals matters as much as the fact that there are multiple goals. In particular, we
identify three types of goals that the agent must have to be a social agent: (1) task related goals, (2) self
maintenance goals, and (3) social maintenance goals. These types of goals are not peculiar to our analysis but have also been identified in other research areas as characteristic of the social agent. Research on social dilemmas clearly brings into focus the presence of, and indeed the potential conflict between, self and social goals, and by implication task related goals (Rapoport, 1961; Bonacich%%%). Similarly, Weber's (1978, p. 23) typology of ways in which action can be oriented can be mapped onto this typology of goals: instrumental-rational (task), value-rational (individual or social), affectual (individual), and traditional (social). And these goals can be seen as occurring in various ways in Turner's (1988, ch. 5) model of motivation; for example, Turner's "need for facticity" is a task level goal, the "need to sustain self-conception" and the "need for symbolic/material gratification" are individual level goals, and the "need for a sense of group inclusion" is a social level goal. Thus the agent has goals at the three different levels (task,
individual, and social) that may compete and conflict with each other, as well as multiple goals at the same
level.
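As a rough sketch of what simultaneously holding task, individual, and social goals buys the agent, consider the diplomat example above; the weights, options, and the satisficing rule below are our own illustration, not a claim about any particular theory:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    level: str        # "task", "individual", or "social"
    salience: float   # may rise or fall over time

def satisficing_choice(options: dict[str, dict[str, float]],
                       goals: list[Goal],
                       aspiration: float) -> str | None:
    """Accept the first option whose salience-weighted satisfaction of all
    goals clears an aspiration level, rather than optimizing any single goal."""
    for name, satisfaction in options.items():
        score = sum(g.salience * satisfaction.get(g.name, 0.0) for g in goals)
        if score >= aspiration:
            return name
    return None

goals = [Goal("treaty", "task", 0.5),          # task goal
         Goal("fame", "individual", 0.2),      # self maintenance goal
         Goal("safer_world", "social", 0.3)]   # social maintenance goal
options = {"hardline":   {"treaty": 0.1, "fame": 0.9, "safer_world": 0.2},
           "compromise": {"treaty": 0.8, "fame": 0.4, "safer_world": 0.7}}
print(satisficing_choice(options, goals, aspiration=0.6))  # "compromise"
```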
An agent in a multiple goal situation has the wherewithal to make tradeoffs. Thus, we get delayed gratification, satisficing behavior when faced with multiple goals and ambiguous information, idealism, suppression of self for the good of others, comprehension of the social good, hidden intentions, and lying, all of which are elements of social man. Further, social identification plus the knowledge of multiple goals gives us political behavior, which is absolutely endemic to the social agent. And yet, something is missing for the agent in the multiple goal situation. Such an agent still knows nothing about the norms, values, and morals governing behavior in that particular society, is not constrained by the actions of history (beyond his personal history), and will blithely ignore customs.
1.2.6. Cultural-Historical Situations (CHS)
The final situation we shall discuss is the cultural-historical situation. The point here is that the agent exists not only within a society that is structured but at a particular point in time, at a particular place, with particular institutions, and so on, all of which are themselves affected by the historical process which led to their current state. Thus, the agent can be, and is, socialized to a particular way of doing things, to specific beliefs, norms, and values. We now have an agent who not only knows the appropriate response for a particular situation, but knows what sanctions to expect to be invoked should such a response occur or not occur, and who will suffer confusion should the appropriate sanction not occur. Thus, we get the agent
who in this culture eats with a fork but is confused when faced with a pair of chopsticks, the agent who
reaches out to shake hands when introduced only to be embarrassed if the other agent simply nods, the
agent who assumes that the man will pay for the dinner, the agent who can suffer with a sense of moral
outrage, and so on.
The agent in the cultural-historical situation provides the appropriate terminal point for the sequence
of agents formed by enriching the situation. Starting from the blank-slate of the agent in the nonspecific
task situation we have increasingly enriched the type of situation in which the social agent is embedded and
the type of knowledge to which the social agent has access. We see these types of knowledge as enhancing
the ability of the agent to plan by narrowing the range of all possible actions to just those that are
acceptable in that situation.
1.3. The Model Social Agent (MSA)
Thus we now have a candidate Model Social Agent. We have reached this model: (1) by increasingly restricting the capabilities of the agent until we have reached an agent with just those capabilities exhibited by man; (2) by increasingly enriching the situation until we have reached an agent in situations with all of the right characteristics; and (3) by providing the agent, as an adult, with all of the knowledge that social agents have at their disposal given that they have grown up in such a rich environment. This is exhibited graphically in figure 3.
*************** Place Figure 3 About Here ***************
2. Applying the Model of a Social Agent
We have arrived at a candidate Model Social Agent. We certainly do not know whether it is
adequate yet to pass the stringent test we put forth at the beginning, namely, a collection of such agents,
having these properties guaranteed and no others, put in a social situation (as stipulated), would produce
recognizable social behavior. Such a test requires synthetic agents and is still well beyond the simulation
art. We do know that many of the important aspects of being human are included, even though some of
them, in particular emotion, have had to remain only sketchy.
Actual artificial social agents would lend themselves to many interesting simulation uses in addition
to the test. These are all still beyond reach. However, the main uses of the Model Social Agent are
conceptual and analytical. Importantly, the Model Social Agent is not monolithic, but fractionates the
characteristics involved into a whole series of capabilities and enabling situations. It can be used to shed
light on much current social science by permitting an analysis of what aspects of the Model Social Agent
are being assumed (or ignored) by a given piece of research. This is still not the main theoretical use one
hopes for from the Model Social Agent, which is to place the model within some larger theoretical
enterprise. But such analyses of current work will still permit some assessment of the fruitfulness of the
Model Social Agent.
2.1. Festinger's Social Comparison Theory
As our first exercise, we address the extent to which an old and familiar piece of social psychology, called Social Comparison Theory (SCT), set forth more than thirty years ago by Leon Festinger (1954), embodies a model of "social man". SCT addresses what sort of decisions a person makes in a social situation; in particular, it posits that the person bases his or her decision on a comparison with analogous decisions made by other persons. It remains a live theory, which continues to be used in analyzing many social situations and indeed continues to be developed theoretically (Suls & Miller, 1977). The question we wish to address is what role is played by the different aspects of the social agent we have teased out. We begin by noting that this theory, when placed within the context of the Model Social Agent, would lie in the cell — emotional-cognitive agent, multiple agent situation — (see figure 5). As such, SCT cannot be thought of as a full model of the social agent. The question that now comes to the fore is: does having a Model Social Agent help, i.e., are we doing anything more than simply placing this theory on a grid?
A nice thing about Festinger's style of theorizing is his tendency to write down an explicit set of axioms. Thus, we can classify the axioms for SCT with respect to the type of agent being posited — which axioms refer simply to the agent being an omniscient agent [OA], which to the rational agent [RA], and so on — and by the type of situation being posited — which axioms refer to the task [NTS], which to the multi-agent situation [MAS], and so on. Table 7 lists the hypotheses, suitably classified. It is not necessary to go through the hypotheses in detail. But let's take a couple, just for illustration. Hypothesis 1 says there is a drive for everyone to evaluate. That is certainly a basic tenet of general cognitive behavior in support of rational behavior. Soar evaluates all the time. Soar, of course, does it under the impress of a goal to be attained, and SCT ascribes it to a basic drive. But a review of the way Festinger uses the theory shows that this is a distinction without much difference — the effect of hypothesis 1 is to permit the analyst to posit an evaluation anywhere he finds it expedient. Further, hypothesis 1 is a statement only about the agent and there are no specifications on the situation. Thus, we classify hypothesis 1 (as well as 4 and 5) as CA,NTS.
Hypothesis 2 specifies that if objective evaluation is not possible, then evaluation proceeds through
comparison with others. This is essentially a principle of cognitive operation in a multi-agent situation. If
evaluation is necessary, then others' evaluations are a source of relevant knowledge. Thus, this hypothesis (as well as 3 and 8) is classified as CA,MAS. There is no specific social or cultural content involved and no sense of group qua group. We note that even though Festinger uses the nomenclature of group, most of the time he is really talking about dyadic exchange and the mere fact that multiple agents are present, not special group-level knowledge. In contrast, hypothesis 7 specifies the existence of groups as entities with varying importance to the individual, and hypothesis 9 specifies that the position of the individual in the group (i.e., the individual's structural position) affects behavior. These two hypotheses (7 and 9), with their emphasis on group qua group and not just a collection of independent agents, can be classified as CA,SSS. Of
these two hypotheses 9 is particularly interesting as it specifies particular knowledge that the agent has
because he or she is a member of a group, knowledge that is peculiar to one's position in the group.
Each of the hypotheses can be classified in this way. An interesting result is that only one, hypothesis 6, is not CA. This is the hypothesis that hostility and derogation will accompany the act of ceasing to compare. If an agent has been using someone as the basis of comparison, and then ceases to do so, then the agent dumps on the other person. That does not follow from any obvious rational considerations, even in the multi-agent situation, and it is clearly emotional behavior, so it is labeled [ECA]. Although the SCT agent is "cognition + emotion", the notion of emotional response is quite underdeveloped. For example, SCT relies only on a single dimension of emotional behavior (positive and negative), and yet other studies have indicated the presence of multiple dimensions (Ortony, Clore and Foss, 1987; Heise, 1977; Heise, 1978; Heise, 1979; Kemper, 1987). A consequence of this underdevelopment is that a wide range of social
behaviors, largely those related to emotional reaction, do not arise in the SCT agent. As an example, the SCT agent's language comprehension is not affected by emotion, as is human children's (Ridgeway, Waters & Kuczaj, 1985), and the cognitive capabilities, action tendencies, and physiological activity of the SCT agent, unlike the full ECA agent, are not compromised or enhanced by its emotional state (Ax, 1953; Frijda, 1987).

Table 7: Tenets of social comparison theory

1. "There exists, in the human organism, a drive to evaluate his opinions and his abilities." [CA,NTS]

2. "To the extent that objective, non-social means are not available, people evaluate their opinions and abilities by comparison respectively with the opinions and abilities of others." [CA,MAS]

3. "The tendency to compare oneself with some other specific person decreases as the difference between his opinion or ability and one's own increases." [CA,MAS]

4. "There is a unidirectional drive upward in the case of abilities which is largely absent in opinions." [CA,NTS]

5. "There are non-social restraints which make it difficult or even impossible to change one's ability. These non-social restraints are largely absent for opinions." [CA,NTS]

6. "The cessation of comparison with others is accompanied by hostility or derogation to the extent that continued comparison with those persons implies unpleasant consequences." [ECA,MAS]

7. "Any factors which increase the importance of some particular group as a comparison group for some particular opinion or ability will increase the pressure toward uniformity concerning that ability or opinion within that group." [CA,SSS]

8. "If persons who are very divergent from one's own opinion or ability are perceived as different from oneself on attributes consistent with the divergence, the tendency to narrow the range of comparability becomes stronger." [CA,MAS]

9. "When there is a range of opinion or ability in a group, the relative strength of the three manifestations of pressures toward uniformity will be different for those who are close to the mode of the group than for those who are distant from the mode. ..." [CA,SSS]
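As a compact illustration of this bookkeeping (the mapping and the tally are ours, not Festinger's), the classification derived above can be held in a simple table and summarized:

```python
from collections import Counter

# Festinger's nine hypotheses, classified by (agent, situation) as in the text.
sct = {1: ("CA", "NTS"), 2: ("CA", "MAS"), 3: ("CA", "MAS"),
       4: ("CA", "NTS"), 5: ("CA", "NTS"), 6: ("ECA", "MAS"),
       7: ("CA", "SSS"), 8: ("CA", "MAS"), 9: ("CA", "SSS")}

print(Counter(sct.values()))
# Only hypothesis 6 demands the emotional-cognitive agent, and no hypothesis
# demands a situation richer than the social structural one.
```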
An interesting conclusion from this brief analysis is that, to get some theoretical action in the domain of social comparison, Festinger had to provide an entire structure that was mostly nonsocial. As is graphically illustrated in figure 5, he had to spend most of his theoretical degrees of freedom building up the basic apparatus of choice behavior. The elements of a social situation that are necessary for SCT are primarily the existence of other social actors, not any strong posits about the contents of their beliefs. Many of the inferences of SCT do not follow from strongly social or cultural aspects. Mostly, they follow just from the objective character of the task environment, such as the reliability of decision criteria, and the mere extension to the existence of other rational agents. Thus, most of the knowledge possessed by the SCT agent is task knowledge. For example, let us contrast the expected behavior of a ballerina in a national company and a third grader taking ballet classes who are both asked whether they are good dancers. According to SCT they will locate their referent groups (professional ballerinas versus other third graders) and will then respond (adequate and very good) based on these groups. Making this judgment requires knowledge of the task (dancing), the ability to distinguish a referent group, and knowledge of the group members' abilities. In other words, task knowledge and general cognition are all that is required. To the extent that Festinger did have a "social theory", it centered on the assumption that there was an entity called a group that individuals could reason about, and that individuals possessed knowledge about their own and others' relative positions in the group and acted on the basis of their relative position.
*************** Place Figure 5 About Here ***************
Possibly, with a general theory of social man, such as that proposed herein, more substantial progress
can occur on the enterprise of which SCT is one part. In fact, we expect that by finally getting an
appropriate theory of the individual agent at the cognitive level (such as Soar) — and not just at the
knowledge level, which is what game theory and its psychological derivative, decision theory, basically
provide — this can become common ground for all such efforts. The pattern of pervasive limits to
rationality revealed by behavioral decision theory already indicates something of the complexity of the total
picture (Kahneman, Slovic & Tversky, 1982; Pitz & Sachs, 1984), though it does not integrate these results
into a unified architecture. We further expect that by finally getting an appropriate theory of the individual
agent at the cognitive level it will become possible to generate a model of the agent in the social structural
situation that goes beyond claims that recognizing group and position in group affect behavior. In
particular, we expect that given a collection of such agents it will be possible to examine the development
of such structural phenomena as socially shared cognitions and distributed problem solving.
2.2. Turner's Social Interaction Theory
As our second exercise, we address the extent to which a recent general theory, Social Interaction Theory (SIT) as set forth by Jonathan Turner (1988), embodies a model of "social man". SIT is concerned with providing an integrated theory of interaction which has motivation as its driving function and which encompasses the development of different forms of interaction and the social/cultural consequences that derive from them. As in the previous section, we wish to address what role is played by the different aspects of the social agent we have teased out.
Unlike Festinger, Turner does not provide us with a discrete set of axioms; rather, he postulates models diagrammatically and then provides some of the propositions that relate to those maps. These propositions are divided, as is the theory, into three areas — motivation, interaction, and structuring. We can classify these propositions by the type of agent being postulated, and do so in table 8. The propositions listed are those identified by Turner as key for motivation (ch. 5), interaction (ch. 8), and structuring (ch. 11). We identify the section in which the proposition appeared by using the letters M, I, and S.

Key propositions from Turner's theory of motivation during interaction (ch. 5) are by necessity in the multi-agent situation, as Turner is concerned with interactions between agents. In these propositions, we note that Turner, like Festinger, uses an emotion (anxiety/hostility) as part of the mechanism for regulating behavior. In contrast, most of the key propositions regarding interaction or structuring require a cultural-historical situation. In these propositions Turner is not specifying a specific environment but is specifying meta-knowledge that an agent in such an environment would have, and the method by which the agent will use cultural-historical knowledge.
An interesting conclusion from this brief analysis is that Turner is relying on a very powerful agent. As is graphically illustrated in figure 6, Turner spends most of his theoretical degrees of freedom building up meta-knowledge in the cultural-historical situation, which only limits the agent in terms of what knowledge it has. Thus, if actually implemented, Turner's social agent would exhibit pure rationality and would be able to process vast quantities of information with no apparent side effects, such as the deluges that occur during crises.
Table 8: Tenets of social interaction theory

M1 "The overall level of motivational energy of an individual during interaction is a steep s-function of the level of diffuse anxiety experienced by that individual." [ECA,MAS]

M2 "The overall level of diffuse anxiety of an individual during interaction is an inverse and additive function of the extent to which needs for group inclusion, trust, ontological security, and confirmation/affirmation of self are being met." [ECA,MAS]

M3 "The degree to which an individual will seek to maintain an interaction, or to renew and reproduce it at subsequent points in time, is an additive function of the extent to which needs for group inclusion, trust, ontological security, self confirmation/affirmation, gratification, and facticity are being met." [RA,MAS]

I1 "The degree of interaction between two or more actors is an additive function of their level of signaling and interpreting." [RA,MAS]

I2 "The level of role-making in an interaction is a primary function of the degree of effort to affirm self-references and a secondary function of the degree of effort to impose frames." [RA,CHS]

I3 "The level of role-taking in an interaction is a primary function of the degree of visibility in the ritual-making and stage-making gestures of others and a secondary function of the level of ability in using stocks of knowledge to understand the frame-making gestures of others." [RA,CHS]

I4 "The level of frame-making in an interaction is a primary function of the level of ability in using appropriate stocks of knowledge to make claims and construct accounts and a secondary function of the degree of intensity in role-making." [RA,CHS]

I5 "The level of frame-taking in an interaction is a primary function of the degree of visibility in the claim-making and claim-taking gestures of others, and a secondary function of the degree of intensity in role-taking with others." [RA,CHS]

I6 "The level of stage-making/taking in an interaction is an additive function of the level of role-making/taking and ritual-making/taking." [RA,CHS]

I7 "The level of ritual-making/taking in an interaction is an additive function of the level of role-making/taking and stage-making/taking." [RA,CHS]

I8 "The level of account-making/taking in an interaction is an additive function of the level of claim-making/taking and frame-making/taking." [RA,CHS]

I9 "The level of claim-making/taking in an interaction is an additive function of the level of account-making/taking and frame-making/taking." [RA,CHS]

S1 "The degree to which individuals reveal consensual agreements about the level of intimacy, ceremony, and socializing required in a situation (categorize) is a positive and additive function of the extent to which they regionalize and normatize." [RA,CHS]

S2 "The degree to which individuals share knowledge about the meaning of the objects, physical divisions, and distributions of people in space (regionalize) is a positive and additive function of the extent to which they routinize and categorize." [RA,SSS]

S3 "The degree to which individuals construct agreements about the rights, duties, and interpersonal schemata appropriate to a situation (normatize) is a positive and additive function of the extent to which they categorize, ritualize, and stabilize resource transfers." [RA,CHS]

S4 "The degree to which individuals agree upon the opening, closing, forming, totemizing, and repairing behavioral sequences relevant to a situation (ritualize) is a positive and additive function of the degree to which they regionalize, categorize, normatize, and stabilize resource transfers." [RA,CHS]

S5 "The degree to which individuals develop compatible as well as habitual behavioral and interpersonal responses to a situation (routinize) is a positive and additive function of the extent to which they regionalize and stabilize resource transfers." [RA,CHS]

S6 "The degree to which individuals accept as appropriate a given ratio of resource transfers in a situation (stabilize) is a positive and additive function of the extent to which they normatize, ritualize, and routinize." [RA,CHS]
2.3. The Models of Man
As our third and final exercise, we address the jumble of views of man extant in the social sciences. We do this by positioning many theories of social man in the space defined by our framework (see figure 4). This exercise serves a variety of purposes. It shows the generality of the Model Social Agent, i.e., we are able to use it to make sense of a large number of theories and not just Festinger's social comparison theory and Turner's social interaction theory. It shows the limits of current theories of the social agent; specifically, by comparing figures 1 and 4 one can determine the social behaviors that a theory of social man with particular capabilities and content can be expected to explain. It illustrates that social studies of man have indeed been cumulative. But it also illustrates where there are gaps in extant models of the social agent and thereby provides direction for how current models should be extended.

In placing current theories in this framework, we make the following qualifications. First, although we have placed these views, in terms of knowledge, in the position that characterizes the dominant type of knowledge posited by that perspective, it is generally the case that these perspectives do not take explicit account of all of the types of knowledge to the left of this point. Second, in doing this exercise we have identified two additional "special purpose" agents, which from our perspective might be thought of as dead ends, as their model of cognition is so limited that they cannot perform all tasks one expects the cognitive agent to perform. These are the stimulus-response agent [SRA] and the purely emotional agent [EA]. With respect to the SRA, as noted by Turner (1988, p. 27),
even extreme behaviorism implicitly invoked cognitive processes, since it assumed that responses that brought gratification would be retained.
Yet behaviorism tends to ignore that which is distinctly human — complex, but limited, cognitive capacities. The same argument can be made of the purely emotional agent. And third, in doing this exercise we find that most agents as implicitly or explicitly described in various theories are not functional, i.e., they exist as descriptions and not as implemented mechanisms. Thus, we list in italics only those that are functional.
*************** Place Figure 4 About Here ***************
Huizinga %%% use as illustration of how we formed this figure. As noted by Huizinga (1950), "There is a third function, however, applicable to both human and animal life, and just as important as reasoning and making — namely, playing." The main argument here is for additional capabilities. That is, the homo ludens point of view is not to deny that the agent can reason, indeed much play involves reasoning, but rather to argue for the additional creative, affective, imaginative capabilities that go beyond pure reason, no matter how limited. The second argument is for the ability to create and act within a particular environment, one which is make-believe and often an abstraction of reality. In a sense this is simply an argument that the agent has the ability to act symbolically and to engage in task related behavior but with a particular emotional state. Thus, agents which have the ability to play and to reason, and which can create and act on the basis of symbols, should exhibit all social behavior.
3. How Close Are We to the Artificial Social Agent?
It is instructive to contrast the conception of the social agent put forward in this paper (see figure 3) with the cognitive agent — Soar. The first thing one notices is that to convert Soar, which is an actual working artificial cognitive agent, into a social agent one must do more than simply add the right knowledge. Imagine that we were to treat Soar as an infant and socialize it by giving it the entire rich knowledge we were previously describing. That is, what if we gave it knowledge about multiple agent situations (perhaps a referent group evaluation function for actions), knowledge about real-time interactions (perhaps rules about how to distinguish real-time from non-real-time interactions and how to alter one's response in the two cases), knowledge about multiple goals (perhaps giving it individual level goals such as hunger and thirst, group level goals such as maintenance of group identity and size, and social goals such as survival of the race), knowledge about social structure (perhaps by giving it information that it has a particular age, sex, religion, and job, and the number of others in the same or different positions), and cultural-historical knowledge (perhaps by giving it knowledge about the norms, beliefs, expected actions, and expected responses in a set of situations given the various social positions). The result would be an agent that would face new situations with equanimity, impartially observing and evaluating the situation, applying its knowledge and determining a course of action; an agent who, when faced with the same situation, would respond identically regardless of its health or well-being; an agent who, when attacked, would respond as its socio-culture dictated but without the hairs on the back of its neck rising; and an agent who, when given a classification or categorization task, would respond only on the basis of this social knowledge. A collection of such agents would seem to have little inventive capability and would quickly come, through interaction, to act in a completely ritualized fashion. Moreover, there would be no room for individual differences other than those dictated by the socio-cultural situation. Such an agent, however "social" in knowledge, would certainly be devoid of the humanism one normally attributes to the true social agent. Such humanism, in our framework, derives in large measure from the increased limitations present in an agent with emotional-cognitive capabilities.
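A minimal sketch of the contrast, with entirely invented names and rules (this is not Soar code): the purely knowledge-driven agent's response is a function of the situation alone, so the same situation always yields the same response, whereas the emotional-cognitive variant lets internal state enter the choice.

```python
# Culturally prescribed responses, as a lookup over situations.
cultural_rules = {"greeting": "shake_hands", "insult": "file_complaint"}

def knowledge_only_agent(situation: str) -> str:
    """Responds identically regardless of health, well-being, or arousal."""
    return cultural_rules.get(situation, "observe_and_evaluate")

def emotional_cognitive_agent(situation: str, arousal: float) -> str:
    """Internal state can override or color the prescribed response."""
    prescribed = cultural_rules.get(situation, "observe_and_evaluate")
    if situation == "insult" and arousal > 0.7:
        return "respond_angrily"   # the hairs on the back of the neck rise
    return prescribed

print(knowledge_only_agent("insult"), knowledge_only_agent("insult"))
print(emotional_cognitive_agent("insult", arousal=0.9))
```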
Now, admittedly, giving such knowledge to Soar is no trivial task; but in a real sense it is straightforward, involving situation description and the enumeration of appropriate rules. Altering the cognitive agent into an emotional-cognitive agent is, however, an even more difficult task (at least from the design standpoint). Preliminary investigation has led us to conclude that such a transformation requires the agent to have motor and sensory capabilities in addition to cognitive abilities.
The value of this enterprise lies only in part in the use of this framework to determine how far we have come and where to go next in the development of an adequate theory of the social agent. To this end, we have introduced Soar, a model of the human cognitive agent, and have noted where and how Soar must be augmented to construct a social agent. Additionally, part of the value of this enterprise lies in the proposed framework for laying out the nature of social man. This framework makes it possible not only to contrast, and hence locate the strong and weak points of, seemingly disparate theoretical approaches, but also to determine wherein lies the socialness. As the foregoing discussion exposed, social theories are for the most part very non-social; i.e., they gain most of their leverage not from social but from cognitive assumptions. And finally, part of the value of this enterprise comes from a new perspective on the inherent socialness of man. Socialness arises not from capabilities but from limitations. Socialness is a response to environmental complexity. That is, social man arises not in isolation, or simply as a member of a dyad, but in response to the presence of multiple others. Similarly, to survive in his environment social man cannot base his actions on a single goal, or a single sequence of goals, but rather must balance and seek to satisfice a set of goals simultaneously. The expectation is that this perspective, of socialness as dependent on limited capabilities in a complex environment, and the implied framework, detailing the nature of the limitations and complexities, will engender a more complete understanding of the nature of social man.
REFERENCES
Ax, A. (1953). The physiological differentiation between fear and anger in humans. Psychosomatic Medicine, 15, 433-442.
Blau, P. M. (1977). Inequality and Heterogeneity. New York, NY: The Free Press of Macmillan Co.
Card, S., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Erlbaum.
Dyer, M. (1983). The Role of Affect in Narratives. Cognitive Science, 7, 211-242.
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7, 117-140.
Frijda, N. (1987). The Emotions. New York, NY: Cambridge University Press.
Heise, D. (1977). Social Action as the Control of Affect. Behavioral Science, 22, 163-177.
Heise, D. (1978). Computer-Assisted Analysis of Social Action (Tech. Rep.). Chapel Hill, NC: Institute for Research in Social Science.
Heise, D. (1979). Understanding Events: Affect and the Construction of Social Action. New York, NY: Cambridge University Press.
Izard, C. (1972). Patterns of Emotions: A New Analysis of Anxiety and Depression. New York, NY: Academic Press.
Izard, C. (1977). Human Emotions. New York, NY: Plenum.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge, England: Cambridge University Press.
Kemper, T. (1987). How Many Emotions are There? Wedding the Social and the Autonomic Components. American Journal of Sociology, 93, 263-289.
Laird, J., Newell, A., & Rosenbloom, P. (1987). Soar: An architecture for general intelligence. Artificial Intelligence, 33, 1-64.
Lehnert, W. (1981). Plot Units and Narrative Summarization. Cognitive Science, 5, 293-331.
Lehnert, W., & Vine, E. (1987). The Role of Affect in Narrative Structure. Cognition and Emotion, 1, 299-322.
March, J. G., & Simon, H. A. (1958). Organizations. New York, NY: Wiley.
Mead, G. H. (1934/1962). Mind, Self, and Society. Chicago, IL: University of Chicago Press.
Newell, A. (1982). The knowledge level. Artificial Intelligence, 18(1), 87-127.
Ortony, A., Clore, G. L., & Collins, A. (1988). Principia pathematica: The cognitive structure of emotions. New York, NY: Cambridge University Press.
Ortony, A., Clore, G. L., & Foss, M. A. (1987). The Referential Structure of the Affective Lexicon. Cognitive Science, 11, 341-364.
Pitz, G. F., & Sachs, N. J. (1984). Judgment and decisions: Theory and application. Annual Review of Psychology, 35, 139-163.
Rapoport, A. (1961). Fights, Games, and Debates. Ann Arbor, MI: University of Michigan Press.
Simon, H. A. (1957). Models of Man. New York, NY: Wiley.
Simon, H. A. (1957). Administrative Behavior. New York, NY: Macmillan.
Simon, H. A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74, 29-39.
Simon, H. A. (1979). Models of Thought. New Haven, CT: Yale University Press.
Simon, H. A. (1983). Models of Bounded Rationality. Cambridge, MA: MIT Press.
Skinner, B. F. (1953). Science and Human Behavior. New York, NY: Macmillan.
Suls, J. M., & Miller, R. L. (Eds.). (1977). Social Comparison Processes: Theoretical and Empirical Perspectives. Washington, DC: Hemisphere.
Turner, J. (1988). A Theory of Social Interaction. Stanford, CA: Stanford University Press.
Figure 1: Applying the Framework
Social behaviors are listed by name in the cell which corresponds to the capabilities and
content minimally necessary to produce this behavior.
Figure 2: The Structure of the Cognitive Agent
The cognitive agent's working memory changes as the individual makes decisions and gathers information through perception. Cognition acts in a supervisory fashion to define what actions the agent takes. The learning mechanism (the chunker) creates new rules for the agent, which are then stored as part of long-term memory.
Figure 3: The Social Agent
The social agent is characterized by limited capabilities, an enriched situation, and all of the knowledge corresponding to each situational level.
[Figure 4 appears here. Axes: increasingly rich situation (non-task specific, multiple agents, real interaction, social structural, multiple goals, cultural-historical) by increasingly limited capabilities (omniscient, rational, boundedly rational, cognitive, emotional-cognitive, and stimulus-response agents). Cells place, among others: homo faber, economic theory, game theory, functionalism, structuralism, identity theory, network theories, structural symbolic interactionism, interactionism, behavioral decision theory, information processing theory, Administrative Man, Soar, ethnomethodology, cultural materialism, social dilemmas, symbolic man, symbolic anthropology, social interaction theory, conflict theory, constructivism, social comparison theory, cognitive dissonance theory, balance theory, homo ludens, and psychological functionalism.]
Figure 4: Applying the Framework
Theories are listed by name in the cell which characterizes their predominant assumption.
[Figure 5 appears here: the framework grid (increasingly rich situation by increasingly limited capabilities), with the SCT tenets and related papers (e.g., Festinger and Back, 1950; Lawrence, 1954) placed in the cells.]
Figure 5: Festinger's Social Comparison Theory
The tenets of social comparison theory, and the work done by Festinger in this area as represented in the article (Festinger, 1954), are laid out relative to the Model Social Agent that we have put forward. Each of the hypotheses (H), corollaries (C), and derivations (D) is listed, by the number under which it occurs in (Festinger, 1954), in the cell which characterizes the agent/situation that it presumes. In addition, we have taken 9 other papers by Festinger and colleagues that are associated with SCT and have placed these by author in the appropriate cell. It is readily apparent that the bulk of the theory is in the cell — cognitive agent, multi-agent situation.
[Figure 6 appears here: the framework grid (increasingly rich situation by increasingly limited capabilities), with Turner's propositions placed in the cells.]
Figure 6: Turner's Social Interaction Theory
The tenets of social interaction theory as set forth by Turner (1988) are laid out relative to the Model Social Agent that we have put forward. Each of the key propositions in chapters 5, 8, 11, and 13 is listed, by the number under which it occurs in (Turner, 1988), in the cell which characterizes the agent/situation that it presumes. We append a letter to the front of these numbers in order to indicate the section they are from in Turner's book. Thus M are the motivation propositions (ch. 5), I are the interaction propositions (ch. 8), S are the structuring propositions (ch. 11), IS are the impact of self propositions (ch. 13), IFI are the impact of feeling involved propositions (ch. 13), and IFR are the impact of feeling right propositions (ch. 13). In addition, we have taken the various categorization and cognization schemes laid out in these chapters and mapped them onto our framework. It is readily apparent that the bulk of the theory is in the cell — cognitive agent, cultural-historical situation.