Ahmed M. H. Abdel-Fattah & Kai-Uwe Kühnberger (eds.)
Proceedings of the Workshop
“Formalizing Mechanisms for Artificial General
Intelligence and Cognition (Formal MAGiC)”
PICS
Publications of the Institute of Cognitive Science
Volume 1-2013
ISSN:
1610-5389
Series title:
PICS
Publications of the Institute of Cognitive Science
Volume:
1-2013
Place of publication:
Osnabrück, Germany
Date:
July 2013
Editors:
Kai-Uwe Kühnberger
Peter König
Sven Walter
Cover design:
Thorsten Hinrichs
© Institute of Cognitive Science
Ahmed M. H. Abdel-Fattah
Kai-Uwe Kühnberger (Eds.)
Formalizing Mechanisms for Artificial
General Intelligence and Cognition
(Formal MAGiC)
1st International Workshop, FormalMAGiC @ AGI 2013,
Beijing, China, July 31, 2013
Proceedings
Volume Editors
Ahmed M. H. Abdel-Fattah
Institute of Cognitive Science
University of Osnabrück
Kai-Uwe Kühnberger
Institute of Cognitive Science
University of Osnabrück
This volume contains the proceedings of the workshop “Formalizing Mechanisms for
Artificial General Intelligence and Cognition (FormalMAGiC)” at AGI-2013.
The final version of this volume will appear online in the “Publication Series of the Institute
of Cognitive Science” (PICS, ISSN 1610-5389).
PICS can be accessed at: http://ikw.uni-osnabrueck.de/de/ikw/pics
1st International Workshop, FormalMAGiC @ AGI 2013,
Beijing, China, July 31, 2013
Program Committee
Committee Co-Chairs
! Ahmed M. H. Abdel-Fattah, University of Osnabrück
! Kai-Uwe Kühnberger, University of Osnabrück
Committee Members
! Joscha Bach, Humboldt University of Berlin
! Tarek R. Besold, University of Osnabrück
! Stefano Borgo, ISTC-CNR
! Selmer Bringsjord, RPI
! Ross Gayler, Senior R&D Consultant, Melbourne
! Markus Guhe, University of Edinburgh
! Helmar Gust, University of Osnabrück
! Jerry R. Hobbs, USC/ISI
! Haythem O. Ismail, Cairo University
! Arne Jönsson, Linköping University
! Ulf Krumnack, University of Osnabrück
! Oliver Kutz, University of Bremen
! Mark Lee, University of Birmingham
! Maricarmen Martínez Baldares, University of the Andes, Bogotá
! Ekaterina Ovchinnikova, USC/ISI
! Ute Schmid, University of Bamberg
! Kristinn Thórisson, Reykjavik University & Icelandic Institute for Intelligent Machines
! Pei Wang, Temple University, Philadelphia
Table of Contents
Keynotes:
! Zhongzhi Shi: “Computational Model of Memory in CAM”
! Stuart C. Shapiro: “Specifying Modalities in the MGLAIR Architecture”
Accepted papers:
! Daniele Porello, Roberta Ferrario, Cinzia Giorgetta: “Ontological modeling of
emotion-based decisions”
! Naoya Arakawa: “Information Binding with Dynamic Associative Representations”
! Kristinn Thórisson: “Reductio ad Absurdum: Oversimplification in Computer Science
and its Devastating Effect on Artificial Intelligence Research”
! Naveen Sundar Govindarajulu, Selmer Bringsjord, John Licato: “On Deep
Computational Formalization of Natural Language”
! Ulf Krumnack, Ahmed Abdel-Fattah, and Kai-Uwe Kühnberger: “Formal Magic for
Analogies”
! Pei Wang: “Formal Models in AGI Research”
! Xiaolong Wan: “On Special Theory of Relativity of Function – an Interpretation to
‘the Failure of Equivalent Substitution Principle’”
Computational Model of Memory in CAM
Zhongzhi Shi
Key Laboratory of Intelligent Information Processing
Institute of Computing Technology, Chinese Academy of Sciences
[email protected]
Abstract
The goal of artificial general intelligence (AGI) is the development and
demonstration of systems that exhibit the broad range of general intelligence found
in humans. The key issue underlying the achievement of AGI is the effective
modeling of the mind, which may be considered the focus of an interdisciplinary
subject of “intelligence science”. First I outline an attempt at a comprehensive
mind model, the Consciousness And Memory model (CAM). Then I talk about the
computational model of memory in CAM, including working memory, semantic
memory, episodic memory and procedural memory. All of these models are
described by dynamic description logic (DDL), which is a formal logic with the
capability for description and reasoning regarding dynamic application domains
characterized by actions. Directions for further research on CAM will be pointed
out and discussed.
Bio-Sketch: Zhongzhi Shi is a professor at the Institute of Computing Technology,
Chinese Academy of Sciences, leading the Intelligence Science Laboratory. His
research interests include intelligence science, cognitive science, machine learning,
multi-agent systems, semantic Web and service computing. Professor Shi has
published 14 monographs, 15 books and more than 450 research papers in journals
and conferences. He won a 2nd-Grade National Award for Science and Technology Progress of China in 2002, and two 2nd-Grade Awards for Science and Technology Progress of the Chinese Academy of Sciences, in 1998 and 2001 respectively. He is a fellow of CCF and CAAI, a senior member of IEEE, a member of AAAI and ACM, and Chair of IFIP WG 12.2. He serves as Editor-in-Chief of the Series on Intelligence Science.
Acknowledgement: This work is supported by the National Basic Research Priorities Programme (No. 2013CB329502), the National Science Foundation of China (Nos. 61035003, 60933004), and the National High-tech R&D Program of China (No. 2012AA011003).
Specifying Modalities in the MGLAIR
Architecture
Jonathan P. Bona and Stuart C. Shapiro
Department of Computer Science and Engineering
University at Buffalo, The State University of New York
{jpbona,shapiro}@buffalo.edu
Abstract. The MGLAIR cognitive agent architecture includes a general
model of modality and support for concurrent multimodal perception and
action. An MGLAIR agent has as part of its implementation multiple
modalities, each defined by a set of properties that govern its use and its
integration with reasoning and acting. This paper presents MGLAIR’s
model of modality and key mechanisms supporting their use as parts of
computational cognitive agents.
Keywords: multi-modality, cognitive architectures, agent architectures,
embodied agents
1 Introduction
The MGLAIR (Multimodal Grounded Layered Architecture with Integrated
Reasoning) cognitive agent architecture extends the GLAIR architecture [1] to
include a model of concurrent multimodal perception and action. Of central
importance to MGLAIR is its treatment of afferent and efferent modalities as
instantiable objects that are part of agent implementations. The architecture
specifies how modalities are defined and managed, what properties they possess,
and how their use is integrated with the rest of the system. Agents using these
modalities deal independently with sense data and acts that correspond to distinct capabilities.
MGLAIR is divided into three major layers, illustrated in Figure 1. The Knowledge Layer (KL) and its subsystems perform conscious reasoning, planning, and acting. A gradation of abstractions across the layers of the architecture terminates in symbolic knowledge at the KL. An MGLAIR agent becomes consciously aware of percepts when they are added to its KL. The Sensori-Actuator Layer (SAL) is embodiment-specific and includes low-level controls for the agent's sensori-motor capabilities. The Perceptuo-Motor Layer (PML) connects the mind (KL) to the body (SAL), grounding conscious symbolic representations through perceptual structures. The PML is further stratified into sub-layers. The highest PML sub-layer, comprised of PMLa and PMLs, grounds KL symbols for actions and percepts in subconscious actions and perceptual structures respectively. The lowest PML sub-layer, the PMLc, directly abstracts the sensors and effectors at the SAL into the basic behavioral repertoire of the robot body. The middle PML layer, the PMLb, handles translation and communication between the PMLa and the PMLc. The inward-pointing and outward-pointing arrows spanning the layers in Figure 1 represent afferent and efferent modalities respectively.
Fig. 1. MGLAIR
MGLAIR’s KL is implemented in SNePS, a logic-based Knowledge Representation and Reasoning system [2] [3]. SNeRE (the SNePS Rational Engine), the
SNePS subsystem that handles planning and acting [4], is a key component of
MGLAIR. SNeRE connects agents’ logic-based reasoning with acting. The plans
formed and actions taken by an agent at any time depend in part on the agent’s
beliefs (including beliefs about the world based on its perceptions) at that time.
2 Modality
A modality corresponds to a single afferent or efferent capability of an agent:
a limited resource capable of implementing only a limited number of related
activities simultaneously. Each agent’s embodiment determines which modalities
are available to it.
An MGLAIR modality possesses a directional data channel that connects
the mind with the body. The modality itself handles the transmission and transformation of data in the channel. Within an afferent modality raw sensory data
originating at the SAL is passed up to the PML and converted to perceptual
structures that are aligned with and used as the basis for conscious symbolic
percepts at the KL. Any action an agent consciously performs is available as
an abstract symbolic representation in the knowledge layer, corresponding to
low-level motor control commands in the SAL, which it is connected to through
alignment with the intermediate PML structures. The flow of distinct types
of sensory and motor impulses between the agent's mind and body occurs independently, each in its own modality. MGLAIR's PML structures constitute
unconscious multi-modal representations. Though they are inaccessible to the
agent for conscious reflection, they play a crucial role in its cognition.
2.1 Modality Properties
An MGLAIR modality specification is a 9-tuple of modality properties, which
determine its behavior:
⟨name, type, pred, chan, access, focus, conflict, desc, rels⟩:
These are: a unique name for the modality; a type (a subtype of afferent or efferent); KL predicates used for percepts or acts in the modality; the modality's data channel; a flag granting/denying the agent conscious access to the modality; a specification for modality focus (see §3.5); a conflict handler for when multiple acts or multiple sensations try to use the modality simultaneously (see §3.2, §3.4); a description; and relations to other modalities.
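To make the shape of such a specification concrete, here is a minimal sketch of how the nine properties could be held as a data structure. The Python class, field types and example values are illustrative assumptions only; MGLAIR itself is built on SNePS and does not prescribe this representation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ModalitySpec:
    """Illustrative container for the nine MGLAIR modality properties."""
    name: str        # unique name for the modality
    type: str        # subtype of "afferent" or "efferent"
    pred: List[str]  # KL predicates used for percepts or acts in the modality
    chan: str        # identifier of the modality's data channel
    access: bool     # whether the agent has conscious access to the modality
    focus: int       # default focus level (see Sec. 3.5)
    conflict: str    # conflict-handling policy (see Sec. 3.2 and 3.4)
    desc: str        # human-readable description
    rels: List[str] = field(default_factory=list)  # relations to other modalities

# Hypothetical example: an afferent modality for an ultrasonic range finder.
range_finder = ModalitySpec(
    name="range", type="afferent", pred=["DistanceIs"], chan="sonar-0",
    access=False, focus=1, conflict="replace-oldest",
    desc="ultrasonic range finder")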
3 Key Modality Structures and Mechanisms
3.1 Perceptual Functions
Each afferent modality has a function specific to it that is applied to perceptual structures in its PMLs to convert them into symbolic knowledge and assert
them to the KL. The nature of the perceptual function depends on the type of
perceptual structures it handles, and on the nature of the sensor and modality.
For instance, a perceptual function for a visual modality might take as input a
structure representing different visual features (shapes, colors, etc), and produce
KL terms and propositions classifying and identifying objects in the field of vision. All perceptual functions take as input a timestamped PML structure from
the modality’s perceptual buffer and produce as output KL terms that represent the percept using predicates associated with the modality. Each modality’s
perceptual function also links the resulting terms to the modality in which they
originated by embedding a reference to the modality within the SNePS data
structure that represents each term.
3.2 Perceptual Buffers
Modality buffers for each perceptual modality in the PMLs queue up perceptual
structures to be processed and consciously perceived by the agent. Perceptual
buffers may have a fixed capacity that limits the number of elements the buffer
can hold. Otherwise, buffers with unlimited capacity must have an expiration
interval, an amount of time after which data in the buffer expires and is discarded
without being perceived by the agent.
Modality buffers serve to smooth out perception and minimize the loss of
sensory data that the agent would otherwise fail to perceive because of temporary
disparities between the speed with which the sensor generates data and the
time it takes to process and perceive that data. By adjusting a buffer’s size
or expiration interval, and by specifying how full buffers are handled, an agent
designer may achieve a range of possible effects suitable for different types of
agents and modalities.
Timestamped sensory structures assembled at the PMLb are added to the
modality buffer as they are created. If the buffer is full when a structure is
ready to be added, for instance because the buffer has a fixed capacity and the
expiration interval is too long, then an attempt to add a new structure to the
buffer will be handled either by blocking and discarding the new data rather
than adding it to the buffer - in which case it will never be perceived by the
agent - or by making space for the new data by deleting, unperceived, the oldest
unprocessed structure in the buffer even if it has not expired.
Each modality buffer has a buffer management process that repeatedly removes the oldest non-expired structure from the buffer and applies the modality’s
main perceptual function to it. These processes are affected by changes in the
modality’s focus, discussed more in §3.5.
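As a rough sketch of the buffer behavior described above (a fixed capacity, an expiration interval, and the two full-buffer policies), the following Python fragment is one possible realization; the class and method names are assumptions made here for illustration, not part of MGLAIR.

import time
from collections import deque

class PerceptualBuffer:
    """Illustrative percept buffer with optional capacity and expiration."""
    def __init__(self, capacity=None, expiration=None, drop_oldest=True):
        self.items = deque()            # (timestamp, structure) pairs, oldest first
        self.capacity = capacity        # None means unlimited (an expiration is then required)
        self.expiration = expiration    # seconds after which an entry is discarded
        self.drop_oldest = drop_oldest  # full-buffer policy

    def add(self, structure):
        self._expire()
        if self.capacity is not None and len(self.items) >= self.capacity:
            if self.drop_oldest:
                self.items.popleft()    # make room: the oldest entry is never perceived
            else:
                return                  # block: discard the new structure instead
        self.items.append((time.time(), structure))

    def take(self):
        """Remove and return the oldest non-expired structure, or None."""
        self._expire()
        return self.items.popleft()[1] if self.items else None

    def _expire(self):
        if self.expiration is None:
            return
        now = time.time()
        while self.items and now - self.items[0][0] > self.expiration:
            self.items.popleft()        # expired data is never perceived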
3.3 Act impulses in efferent modalities
At the knowledge layer, SNeRE connects the agent’s reasoning and acting capabilities through the management of policies and plans. An example policy (stated
in English from the agent’s perspective) is, “Whenever there is an obstacle close
in front of me, I should move back, then turn, then resume operating.” An example plan is, “To pick up an object, I first open my grasping effector, then position
it over the object, then I lower it, then I grasp with it, then I raise it.” Each
act plan consists of a complex act, which is being defined by the act plan, and a
sequence of other acts that comprise the complex act. These may be themselves
complex acts or primitive acts. All such plans must eventually bottom out in
primitive acts – acts that the agent cannot introspect on or further divide into
their composite parts, but which it may simply perform.
Each efferent modality contains within its PML an action buffer that stores
act impulses resulting from conscious actions the agent has performed, but that
have not yet been executed at the lower levels (i.e. they have not yet caused the
relevant effectors to have their effects on the world).
Plans for complex acts may be comprised of acts that use different modalities.
For instance, depending on the agent’s embodiment, moving a grasper and using
it may be independent of each other, and therefore use separate modalities. Since
these separate modalities operate independently, actions may occur in them
simultaneously. For instance, moving the grasper arm to a position and opening
the grasper, if they use separate motors, could very well be performed at the
same time without affecting each other's operation.
3.4 Efferent Modality Buffers
Like percept buffers, action buffers are located in the PMLb. Act impulses are
added to the buffer as a result of primitive acts that are performed at the PMLa,
and are removed and processed at the PMLc, where they are further decomposed
into low-level commands suitable for use by the SAL. For instance, a primitive
action for a grasping effector might be to move the effector some number of
units in a particular direction. When the PML function attached to this action is
called, it places a structure representing this impulse and its parameters into the
modality’s action buffer. As soon as the modality is available (immediately, if it
was not already carrying out some action when the move action was performed),
the structure is removed from the buffer and processed at the PMLc, where it is
converted into commands the SAL can execute (e.g. apply a certain amount of
voltage to a particular motor for a duration).
When an efferent modality’s act buffer is empty, an act impulse added to the
buffer as a result of the agent’s consciously performing an act will be immediately
removed and sent to the SAL for execution. When an action is performed using
an efferent modality whose buffer is not empty, the default behavior is to add
the act impulse to the buffer, from which it is removed and executed when
the modality is available. For example, a speech modality might be configured
to buffer parts of utterances and realize them in order as the relevant resource
becomes available. Another option is for the new impulse to clear the buffer
of any existing act impulses. For instance, an agent with a locomotive modality
might be configured to discard outstanding impulses and navigate to the location
it has most recently selected. The manner in which such conflicts are resolved is
specified as part of the modality specification.
Action buffers in efferent modalities have similar properties to the percept
buffers in afferent modalities: their capacities are configurable as are their expiration intervals. One difference is that MGLAIR’s model of modality focus
currently applies only to perception and does not account for focusing on actions in efferent modalities.
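A minimal sketch of the two conflict-resolution options just described, queuing new act impulses (as a speech modality might) versus clearing outstanding impulses (as a locomotive modality might); the class name and the policy strings are illustrative assumptions.

from collections import deque

class ActBuffer:
    """Illustrative efferent act buffer with a configurable conflict policy."""
    def __init__(self, policy="queue"):
        self.impulses = deque()
        self.policy = policy           # "queue" or "clear" (discard outstanding impulses)

    def add(self, impulse):
        if self.policy == "clear":
            self.impulses.clear()      # e.g. a locomotive modality keeps only the newest goal
        self.impulses.append(impulse)  # e.g. a speech modality realizes utterances in order

    def next(self):
        return self.impulses.popleft() if self.impulses else None

speech = ActBuffer(policy="queue")
locomotion = ActBuffer(policy="clear")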
3.5 Regulating Modality Focus
MGLAIR’s model of modality focus allows agents to selectively dedicate more
or less of their resources to the task of perceiving within a particular afferent
modality. An agent designer may specify a default focus level for a modality and
either permit or forbid the agent to adjust its focus levels on its own modalities
– including the possibility of ignoring a modality altogether. Altering a modality’s focus increases or decreases the frequency with which its internal processes
run. This allows an agent to prioritize processing of percepts from a particular
modality relative to the others depending on its current task and priorities.
Because each agent has only a limited amount of computational resources it is
possible for a modality to interfere with the operation of others by monopolizing
those resources, even though each modality is operating independently with its
own processes and structures. This happens when a modality has to work harder
than others to convert its sensory data into perceptual knowledge, or when one
modality is producing much more, or much richer, sensory data than others.
Without a mechanism to selectively allocate focus to the modalities that are most relevant to its situation and task, an agent would expend most of its processing power on more demanding modalities, even when they are less urgent or less relevant to the agent's task than some less demanding modalities.
Altering a modality’s focus adjusts the priority of the modality’s buffer management and perceptual function processes. For an agent with just two modalities
with nearly all of the same properties connected to two identical sensors in the
same environment, but with different levels of focus, the modality with the lower
focus will remove and process PML structures from its buffer less frequently, and
will therefore be more likely to reach its capacity or to have elements reach their
expiration times before they are processed and perceived. That is, an agent
that is not focused on some modality may fail to perceive some things that the
modality is capable of sensing.
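One way to picture the effect of focus on a modality's internal processes is as a scheduling weight. In the sketch below, which is only an assumption about how the described behavior could be realized, a higher focus value makes the buffer-management loop run more often, and a focus of zero ignores the modality.

import time

def run_modality(buffer, perceive, focus, base_period=0.1, steps=100):
    """Illustrative buffer-management loop: higher focus means more frequent processing.

    `buffer` is assumed to offer take() (cf. the sketch in Sec. 3.2), `perceive`
    is the modality's perceptual function, and `focus` a non-negative weight.
    """
    for _ in range(steps):
        if focus <= 0:
            time.sleep(base_period)      # the modality is ignored altogether
            continue
        structure = buffer.take()
        if structure is not None:
            perceive(structure)          # assert the resulting terms at the KL
        time.sleep(base_period / focus)  # lower focus: longer waits, more expirations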
4 Example Modality
As an example, consider an agent embodied in a small robot with - among other
sensors - a unidirectional ultrasonic range finder that the agent uses to avoid
bumping into obstacles as it moves around its environment. The agent may
be equipped with other sensors and effectors as well – a camera based visual
modality, motorized wheels, a grasping effector.
To make use of this sensor, the agent must have a way of representing the
information that the sensor conveys and of becoming aware of (perceiving) it. It
becomes aware of percepts via an afferent modality that connects the sensor to
the agent’s mind, with mechanisms to convert raw sensory data into the agent’s
perceptual representation. The range finder operates by sending out sound waves
and then detecting how long it takes for the echo to bounce off of the nearest
object and return to the sensor. At the lowest level - the SAL - the data produced by this sensor is in the form of voltage levels generated by a piezoelectric
receiver. These values are converted into an integer value between 0 and 255,
which corresponds to the distance to the nearest object in front of the sensor.
To use this sense to avoid banging into walls and other obstacles, it may suffice
for the agent to perceive that the next thing in front of it is very close, very
far, or in-between, rather than a precise value. This is achieved by having the
PML convert values like 34, 148, etc, into structures at the granularity we want
the agent to perceive: close, medium, and so on. These are removed from the
modality’s sensory buffer and processed by the modality’s perceptual function,
which produces symbolic representations that are then asserted at the KL, e.g.
the term DistanceIs(far), which represents the proposition that the distance
to the nearest object in front of the agent is far. These percepts (beliefs) affect
the agent's behavior if it holds policies like “when the nearest thing in front of
me is very close, stop any forward motion and turn before resuming it.” Figure
2 shows the layers and examples of information present at each.
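The conversion just described can be pictured with a small sketch. The cut-off values below are assumptions chosen only for illustration; the paper fixes neither the thresholds nor the exact set of categories.

def pml_convert(raw):
    """Map a raw 0-255 range reading onto the granularity the agent perceives.

    The cut-offs are illustrative assumptions; the point is only that precise
    readings such as 34 or 148 collapse into categories like close or medium.
    """
    if raw < 64:
        return "close"
    elif raw < 192:
        return "medium"
    return "far"

def perceptual_function(structure):
    """Produce a KL term such as DistanceIs(far) from a buffered PML structure."""
    return "DistanceIs({})".format(pml_convert(structure))

print(perceptual_function(34))   # DistanceIs(close)
print(perceptual_function(148))  # DistanceIs(medium)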
Fig. 2. View of a single afferent modality for an ultrasonic sensor
5 Conclusions
The MGLAIR architecture provides a model of afferent and efferent modalities for computational cognitive agents. MGLAIR agents that instantiate these
modality objects can use them to sense and act - and make inferences and plans
about percepts and actions - concurrently in different modalities.
By dividing agents’ capabilities into modular modalities, each with its own
properties governing its use, MGLAIR allows agents to sense and act simultaneously using different resources with minimal interference, and to consciously
decide which resources to focus on for particular tasks. The specific properties
that determine how each modality functions are defined as part of a modality
specification that is shared among the layers of the architecture.
References
1. Shapiro, S.C., Bona, J.P.: The GLAIR Cognitive Architecture. International Journal
of Machine Consciousness 2(2) (2010) 307–332
2. Shapiro, S.C., Rapaport, W.J.: The SNePS family. Computers & Mathematics with
Applications 23(2–5) (1992) 243–275
3. Shapiro, S.C., The SNePS Implementation Group: SNePS 2.7 User’s Manual.
Department of Computer Science and Engineering, University at Buffalo, The
State University of New York, Buffalo, NY. (2007) Available as http://www.cse.
buffalo.edu/sneps/Manuals/manual27.pdf.
4. Kumar, D.: A unified model of acting and inference. In Nunamaker, Jr., J.F.,
Sprague, Jr., R.H., eds.: Proceedings of the Twenty-Sixth Hawaii International Conference on System Sciences. Volume 3. IEEE Computer Society Press, Los Alamitos,
CA (1993) 483–492
Sensor diagram © Parallax Inc., available under the Creative Commons Attribution-Share Alike 3.0 license at http://learn.parallax.com/KickStart/28015
Ontological modeling of emotion-based decisions
Daniele Porello1 , Roberta Ferrario1 , Cinzia Giorgetta1
Institute of Cognitive Sciences and Technologies, CNR {daniele.porello,
roberta.ferrario, cinzia.giorgetta}@loa.istc.cnr.it
Abstract. In this paper, we discuss a number of elements for developing an ontological model for representing decision making of human agents. In particular,
our aim is to connect the results in cognitive science that show how emotions
affect decisions with agent systems and knowledge representation. We focus in
particular on the case of regret.
1 Introduction
Modeling real agents’ decisions in real-world scenarios is a challenging problem for
multiagent systems, normative systems and knowledge representation studies. When
designing formal systems that are intended to provide tools to assist the organization
of human agents’ interaction and decision making (e.g. socio-technical systems, organizations, law, [2]), it is important to account for what a real agent would do in order to
evaluate the system or to design efficient prescriptions. For example, imagine a security
officer in an airport (a typical case of socio-technical system, in which processes are
carried out partly by humans and partly by artificial agents), who has to decide whether
to check a customer's belongings or not. This is a decision to be taken under uncertainty and risk conditions, as the officer has to judge, without much information, whether the customer could be a suspect. Moreover, we can associate payoffs with the officer's decision: the officer's utility increases if (s)he checks a customer that turns out to be a suspect; it may decrease in case (s)he loses time checking regular customers; it may decrease significantly in case (s)he misses a dangerous suspect by not checking him/her. When designing tools to aid or to evaluate decisions in such a scenario, it is important to understand and to represent how real agents face this kind of situation. Decision theory and expected utility theory have been widely studied in economics and they provide prescriptions that represent what an abstract rational agent would do in risky situations. Real agents, however, often exhibit behavior that diverges significantly from the prescriptions of expected utility theory [3]. Since the work of Kahneman and Tversky [9], models
that are able to represent cognitively biased decisions under uncertainty and risk have
been developed. Moreover, several results in the decision making field and in behavioral game theory show how emotions affect decisions under uncertainty and risk (e.g.
[11],[7],[5],[4],[12], [8]).
For the sake of example, we shall focus on investigations in neuroeconomics that
show how emotions like disappointment or regret play an important role in the way
in which agents evaluate risk. The distinction between disappointment and regret is
significant in order to model in a principled way expectations of what agents would do
in a given scenario. Both emotions are reactions to an unsatisfactory outcome and both
arise from counterfactual thinking. In particular, following the analysis in [14], regret
is based on behaviour-focused counterfactuals, whereas disappointment is based on
situation-focused counterfactuals. The difference is that regret entails that the agent
directly chose a course of actions with an unsatisfactory outcome and a better choice
would have been available, whereas disappointment entails that the agent bears no direct
responsibility for the choice and its outcome. Or, better, both depend on what the agent
believes about his/her responsibility with respect to the negative outcome [17]. Results
in neuroeconomics show for example that regret leads to riskier choices [8], whereas,
in the case of disappointment, there is no relevant influence on the level of risk of the
subsequent choices. In our airport example, suppose the officer checks nearly every
passenger and the procedure takes so long that the flight must be delayed. Suppose also
that the airline officially complains to the security company. Disappointment may be
caused in case an external decision, like a supervisor’s order, has forced the officer to
check all passengers. By contrast, regret may be a consequence of the officer’s decision
of checking all passengers. The results that we have mentioned state that it is likely
that, after (s)he has felt regret, the officer’s behavior on his/her successive choices is
going to be riskier, for instance leading him/her to lower security standards. In case of
disappointment, his/her behavior does not vary considerably with respect to subsequent
choices. The aim of this paper is to introduce a number of important elements in order
to develop a model that is capable of representing and accounting for the effects of
emotions in decision making. Moreover, the model has to interface such information
with the normative specifications of the system the agent is living and acting in.
In order to achieve our goal, we start by developing an ontological analysis of the elements that constitute agents' decisions. The motivations behind our choice to adopt
an ontological approach are manifold. First of all, we are interested in applications in
socio-technical systems, and in particular in designed systems where multiple artificial agents interact with humans. A well-founded ontological model should, on the one
hand, allow humans to better understand the system they are living and acting in and,
on the other hand, once embedded in an artificial agent, should enable the latter to automatically reason about the system, and to exchange information with other (human
and artificial) agents in a sound and interoperable way. In particular, including the representation of emotions and of their influence on decisions into the ontological model,
should provide the artificial agents with means for interpreting, evaluating, and possibly
foreseeing humans’ decisions.
We shall use the approach to ontology provided by DOLCE [10] as it is capable of
specifying some of the essential elements that we need for our analysis; in particular, we
will focus on the notions of agent (inspired by the belief-desire-intentions (BDI) model),
of course of actions, decision, outcome, and notions describing the emotions that may
affect the decision. The remainder of this paper is organized as follows. In Section 2,
we discuss the elements of the ontological model we are interested in developing and
we present our discussion of emotion-based decisions. In Section 3, some final remarks
are drawn.
2 Ontological analysis: DOLCE
We present some features of DOLCE [10], the ground ontology, in order to place the elements of our analysis within the general context of a foundational ontology1 . The ontology partitions the objects of discourse into the following basic categories: endurants
(intuitively, objects) ED, perdurants (intuitively, events) PD, individual qualities Q, and
abstracts ABS. Individual qualities are entities that agents can perceive or measure or
evaluate and that inhere in a particular, for example, “the color of my car”. They are partitioned into quality kinds Qi, such as color, weight, and length. Examples of
abstracts are provided by the categories of spaces. Spaces intuitively represent values
for quality kinds. The relationship between quality kinds and spaces is modeled by the
location relation loc(qx, sx) that means, for example, that the redness of John's car is
located at a certain point in the space of colors.
Figure 1 shows the categories we are interested in. We restrict our presentation to
the case of endurants, namely we focus on the objects of emotions or decisions.
Fig. 1. An excerpt of the DOLCE taxonomy focused on mental objects
1 We present the descriptive features of DOLCE. For implementability issues, see [10].
2.1 Plans and decisions
We follow the analysis of plans and norms provided by [1], which introduces a new basic category of situations SIT. Situations may be viewed as complex perdurants and they may also be considered a type of (possibly partial) event. Agents refer to or think about situations by means of S-Descriptions (descriptions of situations). Plans are thus a subcategory of S-Descriptions: they are the way in which agents think
about situations. Plans are complex objects as they may be composed of a number of
tasks, they have preconditions, post-conditions, and so on [1]. An agent may think about
several plans (i.e. several courses of action). Following the BDI model [15], we represent an agent that is willing to bring about a particular plan p by means of the relation
has BDI on which holds between an agentive object i and a plan p when the agent
has the relevant beliefs, desires, intentions to bring about the plan: has BDI on(i, p).2
In order to evaluate the consequences of a plan for an agent i, we consider the situation that is the outcome of the plan: Out(s, p). We introduce a preference relation on situations Pref(s, s′, i) meaning that the agent i prefers situation s to situation s′. Moreover, we introduce a preference relation on plans: an agent i prefers the plan p to plan p′, pref(p′, p, i), iff the outcome of p is preferred to the outcome of p′ by i. In our model, a decision that an agent takes is a choice between plans. According to expected utility theory, we denote by ui(s) the utility for agent i of situation s, and the expected utility of s as uei(s) = u(s) · π(s), where π(s) is the probability of s being the case.
In order to represent uncertainty in our ontological model, we introduce a quality kind QL for the likelihood of situations, which is used to express the
extent to which the agent views the situation as likely. Moreover, we assume a quality
space SL to represent probability values. By modeling the likelihood of a situation as
an individual quality, we are assuming that human agents can measure or evaluate the
likelihood of situations and this is compatible with the psychological literature, see for
example [9]. We define the location relation loc(qs, ls, i) that associates the likelihood quality qs of the situation s with the probability ls according to agent i's view. The
relationship with decision theory is the following: we can view situations as the space
of events and the quality as the mapping of values of a probability distribution that i
defines on events. Note that, since a plan describes a complex situation, the probability
of the execution of the plan has to be computed taking into account conditional dependencies of sub-situations that occur in the plan. We are not going to go into the details of
computing the probability and we abstractly represent the likelihood of a plan by means
of the likelihood of its outcome: Out(p, s) ∧ loc(qs, ls, i). According to expected utility theory, the abstract rational agent would choose the plan p that maximizes the expected utility: has BDI on(i, p) ↔ ¬∃p′ pref(p, p′, i), where i's preferences are defined as: pref(p, p′, i) iff the expected utility of p′ is greater than the expected utility of p. However, as our discussion is centered on human agents, there may be several reasons that prevent this from being the case (besides not knowing what the best plan is), as recent studies
2
Agentive objects allow for distinguishing decisions taken by agentive physical objects, e.g. a
person, and agentive social objects, e.g. a person in the role of an officer.
in behavioral decision making [18] have already pointed out. As we have seen, in the context of risky choices, regret is one of these reasons.
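To make the expected-utility baseline of Section 2.1 concrete, the following sketch computes the plan an abstract rational agent would adopt; the dictionaries, function names and the airport payoffs are assumptions introduced here for illustration only.

def expected_utility(plan, agent):
    """ue_i(s) = u(s) * pi(s) for the situation s that is the plan's outcome."""
    s = plan["outcome"]
    return agent["utility"](s) * agent["likelihood"](s)

def rational_choice(plans, agent):
    """The abstract rational agent adopts (has BDI on) a plan no other plan dominates."""
    return max(plans, key=lambda p: expected_utility(p, agent))

# Hypothetical airport example: checking every passenger vs. spot checks.
agent = {"utility": lambda s: {"delayed": -5, "on-time": 2}[s],
         "likelihood": lambda s: {"delayed": 0.8, "on-time": 0.9}[s]}
plans = [{"name": "check-all", "outcome": "delayed"},
         {"name": "spot-check", "outcome": "on-time"}]
print(rational_choice(plans, agent)["name"])  # spot-check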
2.2 Representing emotion-based decisions under uncertainty
An ontological analysis of the BDI model and of emotions has been developed in [13]
and [6]. In particular, feelings like disappointment and regret can be categorized as
particular complex feelings3 , which are in turn particular types of complex percepts.
Complex feelings depend on primary feelings (PEL) as well as on beliefs. Our analysis of the distinction between disappointment and regret is then the following. For
reasons of space, we omit the subtleties of the analysis in [13] and we define regret
and disappointment with reference to a situation that is the outcome of a plan. We introduce
a dependence relation of a mental object x on a situation s, Dep(x, s). At this level of
analysis, the relation Dep is very abstract and it may be open to several interpretations:
a situation may cause disappointment because of some of the objects that participate in
the situation (e.g. the outcome of the plan is an asset for which the agent gets a very low
payoff) or because of some property of the situation itself (e.g. the outcome of the plan
is an event that entails waiting in line in a long queue). Regret is caused by the negative
outcome of a plan that is chosen by agent i in spite of the existence of an alternative plan
p′ that the agent might have chosen with a better outcome4. Disappointment is caused
by a plan whose outcome is dominated by another situation, but i is not responsible.
REGO(x, i) ≡ APO(i) ∧ PLN(p) ∧ Dep(x, s) ∧ Out(p, s) ∧ has BDI on(i, p) ∧ ∃p′ pref(p, p′, i)   (1)
DISO(x, i) ≡ APO(i) ∧ PLN(p) ∧ Dep(x, s) ∧ Out(p, s) ∧ ∃p′ pref(p, p′, i)   (2)
The condition APO(i) specifies that the emotions apply to agentive physical objects and the condition PLN(p) specifies that both emotions are dependent on the
outcome of a plan. The distinction shows that, in case of regret, the actualization of
the outcome is caused by the agent’s choice, whereas disappointment is caused by the
negative outcome deriving from a course of actions triggered by someone else’s choice,
namely by another plan. Note that the definition of disappointment is narrower than its
intuitive meaning, as it makes the disappointing situation depend on the course of action described by a plan, i.e. decided by someone else. However, our definitions avoid
unintuitive cases such as being disappointed by any possible disfavored event. A more
precise definition can be formulated by introducing hope or expectations [13]. We can
now present our preliminary analysis of how regret caused by a particular choice
3 Disappointment and regret can be more precisely defined as cognitive emotions [16]; here we use the locution “complex feelings” following [13].
4 The responsibility of a choice of a plan is defined by has BDI on. A closer examination shall single out the role of intentions in taking responsibility. We leave this point to future work.
of p may affect subsequent choices of plans. We define it as a reason to choose a plan,
namely we restate the definition of has BDI on. We present it in a semi-formal way
in order to avoid introducing the relevant elements of temporal analysis and of expected
utility theory.
Definition 1. An agent i has BDI on a plan p at time t iff either p is its most preferred implementable plan; or p is dominated by p′′, p is more risky than p′′, p has a higher utility value, and there is a preceding time t′ such that the agent has chosen p′ and p′ has caused regret in i.
3 Conclusion
We have presented a number of elements that are important to develop an ontological
analysis of the role of emotions in decision making. In particular, we have sketched how
to integrate in DOLCE decisions as choices between plans, emotions, and risk attitudes.
Future work shall provide a tighter connection between the cognitive and the normative
modules of DOLCE and the analysis of decisions under uncertainty.
There are several possible extensions of the present work, which is centered on the influence of regret on risk-seeking behavior as far as a single agent is concerned. However, in environments populated by several agents, as for instance multiagent systems, understanding and modeling how beliefs, emotions and expectations mutually influence each other becomes particularly important. In such systems agents, while planning, should try to foresee what the preferences of the other agents and their propensity to risk would be, given also the previous history. It is worth noting that, in the specific case
of regret, when other agents are involved, the agent who was in charge of the choice
may feel regret for him/herself and guilt with respect to the others, who in their turn
can feel disappointment for the negative outcome. Furthermore, we could also think
about the emotional reactions following collective decisions. In the more specific case
of socio-technical systems, decisions can also depend on the performance of technical devices, and thus not directly on agents; but there may be agents who are responsible for the functioning of such devices (sometimes their designer may at the same time be a participant in the system) and who may in turn feel regret. It is exactly in such complex
scenarios that the use of the ontological approach is especially useful. Finally, so far we
have focused just on a specific emotion, regret (and marginally disappointment), and a
specific attitude, propensity to risk, but cognitive science studies have dealt with other
emotions, like fear, sadness, happiness, anger, guilt, etc., and also with other mental attitudes, like for instance the propensity to cooperate, which can affect each other and then influence (individual and/or social) decisions. There are then many possible extensions to this preliminary model that, once complete, could also be implemented in artificial
agents, thanks to the use of ontologies.
Acknowledgements
Daniele Porello, Roberta Ferrario and Cinzia Giorgetta are supported by the VISCOSO project, which the present paper is part of. VISCOSO is financed by the Autonomous Province of Trento through the “Team 2011” funding programme.
Bibliography
[1] G. Boella, L. Lesmo, and R. Damiano. On the ontological status of plans and
norms. Artif. Intell. Law, 12(4):317–357, 2004.
[2] G. Boella and L. van der Torre. Introduction to normative multiagent systems. Computational and Mathematical Organization Theory, 12:71–79, 2006.
[3] C. Camerer. Behavioral Game Theory: Experiments in Strategic Interaction. The
Roundtable Series in Behavioral Economics. Princeton University Press, 2003.
[4] H. Chua, R. Gonzalez, S. Taylor, R. Welsh, and I. Liberzon. Decision-related loss:
regret and disappointment. Neuroimage, 47:2031–2040, 2009.
[5] G. Coricelli, H. Critchley, M. Joffily, J. O’Doherty, A. Sirigu, and R. Dolan. Regret and its avoidance: A neuroimaging study of choice behaviour. Nature Neuroscience, 8:1255–1262, 2005.
[6] R. Ferrario and A. Oltramari. Towards a computational ontology of mind.
[7] W. Gehring and A. Willoughby. The medial frontal cortex and the rapid processing
of monetary gains and losses. Science, 295:2279–2282, 2002.
[8] C. Giorgetta, A. Grecucci, N. Bonini, G. Coricelli, G. Demarchi, C. Braun, and
A. G. Sanfey. Waves of regret: A MEG study of emotion and decision-making.
Neuropsychologia, 2012.
[9] D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under
risk. Econometrica, 47(2):263–91, March 1979.
[10] C. Masolo, S. Borgo, A. Gangemi, N. Guarino, and A. Oltramari. Wonderweb
deliverable d18. Technical report, CNR, 2003.
[11] B. Mellers, A. Schwartz, and I. Ritov. Emotion-based choice. Journal of Experimental Psychology: General, 128:332–345, 1999.
[12] A. Nicolle, D. Bach, J. Driver, and R. Dolan. A role for the striatum in regret-related choice repetition. Journal of Cognitive Neuroscience, 23:1–12, 2010.
[13] A. Oltramari. Hybridism in cognitive science and technology. Foundational and
implementational issues. PhD Thesis, Università degli Studi di Trento. Scienze
della Cognizione e della Formazione, 2006.
[14] W. W. van Dijk, M. Zeelenberg, and J. van der Pligt. Blessed are those who expect
nothing: Lowering expectations as a way of avoiding disappointment. Journal of
Economic Psychology, 24(4):505–516, 2003.
[15] M. Wooldridge. Introduction to Multiagent Systems. John Wiley & Sons, Inc., New
York, NY, USA, 2008.
[16] M. Zeelenberg. Anticipated regret, expected feedback and behavioral decision making. Journal of Behavioral Decision Making, 12:93–106, 1999.
[17] M. Zeelenberg, W. van Dijk, A. Manstead, and J. van der Pligt. The experience of
regret and disappointment. Cognition and Emotion, 12:221–230, 1998.
[18] M. Zeelenberg, W. van Dijk, A. Manstead, and J. van der Pligt. On bad decisions and disconfirmed expectancies: The psychology of regret and disappointment. Cognition and Emotion, 14:521–541, 2000.
Information Binding with
Dynamic Associative Representations
ARAKAWA, Naoya
Imaging Science and Engineering Laboratory, Tokyo Institute of Technology
Abstract. This paper proposes an explanation of human cognitive functions in terms of association. In this proposal, representation is not static
but is achieved by dynamic retrieval of patterns by means of association. With this dynamic representation, information binding in cognitive
functions such as physical world recognition, planning and language understanding is illustrated.
Keywords: association, generativity, dynamic representation, the binding problem
1 Introduction
This paper addresses the issue of representation in human cognitive functions
in terms of association and attempts to show the benefit of seeing cognitive
representation as dynamic associative processes with examples of perception,
planning and language understanding. In this paper, association designates the
process of obtaining a pattern from another through learning. As discussed in
[4], patterns here can be sub-symbolic vector representations formed by non-supervised learning.
The reason why association is taken to be the basis for explaining cognitive functions here is two-fold: 1) cognitive functions are often accounted for
with sub-symbolic information processing and association is considered to be
its abstraction; 2) while sub-symbolic information processing is formalized with
models such as neural networks and Bayesian models, its technical details could
obscure the essence of over-all functions in certain discussions. Having said that
association is an abstraction of sub-symbolic information processing, the following are reasons why sub-symbolic information processing is important for
explaining cognitive functions. First, for models of brain functions such as neural networks are normally assumed to be sub-symbolic, their incorporation would
be essential for biologically natural or realistic cognitive modeling. Second, while
learning is an essential cognitive function, most modern-day learning models are
sub-symbolic. Finally, sub-symbolic information processing would evade thorny
issues in cognitive science such as the issue of classical categories brought forth by
cognitive linguists such as Lakoff [7] and the frame problem. Associative memory
could evade the issue of classical categories, since sub-symbolic associative models (e.g., neural models) can represent prototypical and fuzzy categories. Certain
frame problems for intelligent agents to find relevant information could also be
evaded, as sub-symbolic associative memory could retrieve most relevant information (association) for given patterns first (by means of a competitive process
among ‘neurons’, for example) (see [5] for a debate).
However, there are numerous problems with using association with cognitive
modeling. First of all, it is not clear if association alone is enough for explaining
dynamic cognitive functions such as planning or language understanding, as
association is nothing more than mapping between patterns. Secondly, as it was
argued in the computationalism vs. connectionism debate (see [2]), it is not
clear how connectionist/associationist models account for generativity of, say,
linguistic structure. Moreover, associationism also bears the so-called binding
problem for explaining the integration of various pieces of information (see [12]
and 2.1 below).
Given the aforementioned pros and cons, my motivation is to explain human
cognitive functions as associative processes by addressing the problems raised
above.
2 The Proposal
The basic idea of the proposal is that associative representation is better conceived dynamically as traversal of association paths, if we are to address the
issues of information binding and generativity. Note that the issues are not
problematic in the case of classical symbolic models (e.g., [9]) or symbolic/subsymbolic hybrid cognitive architectures (e.g., [1] [10]), in which the composition/binding of information can be done symbolically by means of variable binding. However, when we regard the human brain as a sub-symbolic associative
machine, the apparent lack of binding mechanism becomes problematic for explaining human cognitive functions. Therefore, a (sub-) motivation here is to
explain cognitive functions with an associative mechanism without bringing in
variable binding. While the motivation is mainly explanatory, the proposal may
have technological merit: a cognitive system may be constructed with sub-symbolic learning mechanisms without generating symbolic rules for the sake
of simplicity, although whether such a system has a practical advantage would
depend on the case.
In the sections to follow, the proposed method will be explained in more
detail by using concrete examples. First, physical world (scene) recognition is
discussed as the basic case of the binding problem. Then, discussions of planning
and language understanding will follow.
2.1 Physical world recognition
Recognizing the physical environment involves recognizing objects, their locations and features such as motion. When a cognitive system has to perceive
multiple objects at once, the binding problem occurs: the system has to extract
features from each entity and ascribe them to the proper item. The problem is
acute with parallel information processing presumably performed in our brain;
how can pieces of information represented in parallel be integrated in a coherent representation? For example, seeing a blue circle and a red rectangle, how
does a brain integrate the representations of blue, red, circle and rectangle as
those of a blue circle and a red rectangle in a particular relative location? In the
current binding problem literature, synchronicity is often mentioned as a possible solution to the problem ([12][3], for example). However, synchronicity alone
does not solve the problem. In the first place, it does not answer the question
of how different objects are represented in the brain. Secondly, considering that
visual scenes are perceived with saccades, there remains a suspicion that visual
perception is a dynamic rather than a synchronic process.
An associative system could cope with the (perceptual) binding problem in the following manner. Suppose that the system obtains a feature pattern of a physical object as well as a pattern representing the coordinates1 of the object, and that it obtains the information of one object at a time. In this situation, the system shifts ‘attention’ among objects to recognize more than one object2. The associative retrieval of object representations could be driven by cues representing orientations (such as 15 degrees left) and relative distances (Fig. 1). In this scheme, a scene (containing more than one object) may not be simultaneously represented, but object representations may be retrieved one-by-one by means of association in short-term memory or working memory and bound together as a coherent scene representation. What assures the information binding here is not synchronicity but the potential or mechanism to bring all relevant representations together.
Fig. 1. Association among figure representations. Each figure represents feature patterns for the figure. The numbers represent cue patterns for relative degrees and distances.
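A toy sketch of this idea: object representations are retrieved one at a time from an associative store, driven by cue patterns for relative orientation and distance, and accumulated into a scene in working memory. The dictionary-based associative store and the specific patterns are assumptions made for illustration.

# Associative memory as a mapping from (current pattern, cue) to the next pattern.
associations = {
    ("blue-circle", ("15-deg-left", "near")): "red-rectangle",
    ("red-rectangle", ("15-deg-right", "near")): "blue-circle",
}

def build_scene(start, cues):
    """Traverse association paths, binding retrieved objects into one scene."""
    scene = [start]                # working memory accumulates the percepts
    current = start
    for cue in cues:
        nxt = associations.get((current, cue))
        if nxt is None:
            break
        scene.append((cue, nxt))   # the cue records the spatial relation
        current = nxt
    return scene

print(build_scene("blue-circle", [("15-deg-left", "near")]))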
2.2 Planning
A planning mechanism by means of association could be conceived in a rather
straightforward way as described by the following (pseudo) algorithm:
1 Information on the location of physical objects can be obtained via integrating various modalities such as vision, touch, vestibular sense and the sense of eye rotation.
2 The idea is in line with the Feature Integration Theory [14] but formulated at a more abstract level.
if the pattern representing the current situation is not desirable according to certain criteria, then
    associate the current situation pattern (CSP) with an action pattern (APT), which represents either a directly executable action or a plan.
    if CSP and APT are not associated with (do not predict) a pattern representing a satisfactory situation, then
        associate another APT with CSP by suppressing the current APT.
    end
    if APT represents a learned plan (PL), then
        LP: associate APT with a sequence of APTs (APT1, APT2, …).
        if the situation pattern associated with (predicted from) CSP and a sequence consisting of APTi (i = 1, 2, …) is evaluated as unsatisfactory, then
            abort and suppress PL.
        end
        if APTi is a learned plan, then
            retrieve a sub-sequence for APTi recursively (⇒ LP:).
        end
        if all the sub-sequences consist of directly executable actions and the patterns of situations associated (predicted) with the entire APTs and CSP are satisfactory, then
            the entire action (a new plan) is ready to be executed.
        end
    end
end
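Read alongside the pseudo-algorithm, the following compact sketch shows the recursive expansion step (LP) in runnable form: learned plans are looked up associatively and replaced by their sub-sequences until only directly executable actions remain. Every name, table and example here is an illustrative assumption, not the author's implementation, and the evaluation of predicted outcomes is left out.

# Associative look-ups assumed for illustration: situation -> candidate actions/plans,
# and learned plan -> its sub-sequence of actions.
action_for = {"hungry": ["make-sandwich"]}
plan_for = {"make-sandwich": ["get-bread", "add-filling", "eat"]}

def expand(act):
    """Recursively replace learned plans by their sub-sequences (the LP step)."""
    if act in plan_for:
        return [step for sub in plan_for[act] for step in expand(sub)]
    return [act]                       # a directly executable (primitive) action

def plan(situation):
    for candidate in action_for.get(situation, []):
        steps = expand(candidate)
        if steps:                      # in the paper, predicted outcomes are also evaluated
            return steps
    return []

print(plan("hungry"))  # ['get-bread', 'add-filling', 'eat']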
While such an algorithm is a variation of classical planning algorithms, some notes can be given on how it could be realized associatively. In planning, the output sequence could have any length and the plan hierarchy could have any depth. In other words, a plan is a cognitive representation having a combinatory/generative nature. A combinatory representation can be represented with an associative network, which actually is a dynamic process of recalling (associating) patterns. In the case of a plan, the entire plan can be represented by retrieving its parts one by one by means of association. A representation of an action in a plan may be retrieved from a representation of the preceding action with a cue pattern representing the temporal relation succeeding. Here, note that in the sample algorithm above, a plan is formed (information is bound) with association but without having recourse to rules with variables.
Fig. 2. Planning by means of association. Ovals are patterns representing the indicated content. Blue solid lines represent associative relations.
A plan hierarchy (Fig. 2) would be represented by associating a pattern
representing a learned plan with patterns representing actions or sub-plans3 .
Constructing hierarchical plans requires back-tracking, which, in turn, requires
a mechanism that suppresses an unwanted pattern and retrieves another relevant
pattern (by means of association).

Fig. 2 Planning by means of association. Ovals are patterns representing the indicated content. Blue solid lines represent associative relations.
2.3 Language understanding
Understanding language involves syntactic parsing and associating syntactic
structures with semantic representations. As a sentence in a human language
can be indefinitely long, the corresponding associative syntactic/semantic representation can be indefinitely large. Again, such a representation has a combinatory/generative nature. While an associative network may not be able to maintain simultaneous activation of patterns for a large representation, the entire
representation could be retrieved piece by piece.
As for semantic representation, computational models known as semantic networks (e.g., [11]) are considered to be associative. A node of a semantic network represents an individual or a type of physical object, event, situation, time or location.4 Semantic networks normally have labeled edges to represent relations between nodes. A labeled node may be realized in an associative network as a pattern representing a node and a cue pattern representing the type of an edge or relation. For example, the edge or relation representing bigger-than may be realized as association from a pattern representing an individual and a (cue) pattern representing bigger-than to the representation of another individual.
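As a minimal illustration of this cue-pattern scheme, the following hypothetical Python sketch (ours, not part of the proposal) realizes a labeled relation such as bigger-than as a lookup from a node pattern plus a cue pattern to the associated node pattern:

# Hypothetical sketch: labeled edges of a semantic network realized as
# cue-driven associations between node patterns.
edges = {
    ("elephant", "bigger-than"): "mouse",
    ("mouse", "bigger-than"): "ant",
}

def retrieve(node, cue):
    # associative retrieval: node pattern + cue pattern -> related node pattern
    return edges.get((node, cue))

print(retrieve("elephant", "bigger-than"))  # -> mouse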
With regard to parsing, the syntax of human language has recursive structure, i.e., a structure may be embedded in another structure. Such recursive
3 [13] can be an exemplary work in this line.
4 A node in an associative network may accrue meaning by means of unsupervised learning so that it can be interpreted as a node of a ‘semantic’ network. In fact, a semantic network alone does not give the semantics to its components. See the discussion in [6].
structure would be constructed by a mechanism similar to the one discussed in the planning section above, where the construction of sentential structure would be the goal. The system could traverse a syntactic tree by associating daughter constituents with the parent node and associating one node with its sibling nodes by means of cue patterns representing relations such as left and right (Fig. 3).

Fig. 3 A regular phrase structure. Blue solid lines can be associative relations, where ovals represent patterns for syntactic categories.
Finally, mapping from syntactic structure to semantic structure must be taken into consideration. A pattern representing a syntactic constituent such as a clause, a verb or a noun may be associated with a pattern representing a situation, an event or an object, respectively. Here, syntactic relations may be associated with patterns representing semantic relations. For example, an English verb phrase consisting of a verb and a following noun (having the accusative relation) would be associated with the representation of an event, an object and the theme relation between them (Fig. 4).

Fig. 4 Syntactic and Semantic Patterns
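The following hypothetical Python sketch (ours, not the author's) illustrates this syntax-to-semantics association, with syntactic cue patterns mapped to semantic cue patterns:

# Hypothetical sketch: syntactic patterns cue semantic patterns, and the
# syntactic 'accusative' relation cues the semantic 'theme' relation.
syntax_to_semantics = {
    "clause": "situation",
    "verb": "event",
    "noun": "object",
    "accusative": "theme",
}

def interpret_vp(verb_word, noun_word):
    event = (syntax_to_semantics["verb"], verb_word)    # e.g. ('event', 'eat')
    obj = (syntax_to_semantics["noun"], noun_word)      # e.g. ('object', 'apple')
    return (syntax_to_semantics["accusative"], event, obj)

print(interpret_vp("eat", "apple"))
# -> ('theme', ('event', 'eat'), ('object', 'apple'))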
3 Conclusion
The associative representations proposed herein have shared characteristics. In
the three cases, representation is not static but is achieved by dynamically retrieving patterns by association. The associative retrieval is accompanied by cue
patterns such as those indicating spatial and temporal directions and semantic/syntactic relations. The proposed representations also cope with the issues
of dynamicity and generativity and the binding problem. As for dynamicity,
planning and language understanding discussed above are dynamic and the representations proposed here are all dynamic. As for generativity, representations
in planning and language understanding are generative and those in physical
world recognition can also be generative, as a scene is composed of many objects.
The issue of binding without variables was addressed in physical world recognition as well as in language understanding and planning, as these processes must also bind pieces of information together into coherent representations.
The proposal here clearly requires empirical studies. In particular, if the illustrated mechanisms are to serve particular functions, association must be properly controlled. So, while the author plans to implement experimental systems to corroborate the proposal, the issue of (executive) control [8] will have to be addressed seriously.
Acknowledgements
I would like to express my deep gratitude for the comments from the referees of this paper and from Quentin Quarles, without which the paper would not have taken its current shape.
References
[1] ACT-R: http://act-r.psy.cmu.edu
[2] Fodor, J.A., Pylyshyn, Z.W.: Connectionism and cognitive architecture: A critical
analysis. Cognition 28(1), 3–71 (1988)
[3] Fuster, J.M.: Cortex and mind: Unifying cognition. Oxford University Press (2005)
[4] Gärdenfors, P.: Conceptual spaces: The geometry of thought. MIT press (2004)
[5] Haselager, W., Van Rappard, J.: Connectionism, systematicity, and the frame
problem. Minds and Machines 8(2), 161–179 (1998)
[6] Johnson-Laird, P.N., Herrmann, D.J., Chaffin, R.: Only connections: A critique
of semantic networks. Psychological Bulletin 96(2), 292–315 (1984)
[7] Lakoff, G.: Women, fire, and dangerous things: What categories reveal about the
mind. University Of Chicago Press (1990)
[8] Miyake, A., Shah, P.: Models of working memory: Mechanisms of active maintenance and executive control. Cambridge University Press (1999)
[9] Newell, A.: Unified theories of cognition. Harvard University Press (1994)
[10] OpenCog: http://wiki.opencog.org/w/The Open Cognition Project
[11] Quillian, M.R.: Semantic memory. In: Minsky, M. (ed.) Semantic information
processing, pp. 227–259. MIT Press (1968)
[12] Revonsuo, A.: Binding and the phenomenal unity of consciousness. Consciousness
and cognition 8(2), 173–185 (1999)
[13] Subagdja, B., Tan, A.H.: A self-organizing neural network architecture for intentional planning agents. In: Proceedings of The 8th International Conference on
Autonomous Agents and Multiagent Systems-Volume 2. pp. 1081–1088. International Foundation for Autonomous Agents and Multiagent Systems (2009)
[14] Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cognitive
psychology 12(1), 97–136 (1980)
Reductio ad Absurdum:
On Oversimplification in Computer Science and its
Pernicious Effect on Artificial Intelligence Research
Kristinn R. Thórisson1,2
1 Center for Analysis and Design of Intelligent Agents / School of Computer Science, Reykjavik University, Venus, Menntavegur 1, Reykjavik, Iceland
2 Icelandic Institute for Intelligent Machines, 2.h. Uranus, Menntavegur 1, Reykjavik, Iceland
[email protected]
Abstract. The Turing Machine model of computation captures only one of computation's fundamental tenets – the manipulation of symbols. Through this simplification
it has relegated two important aspects of computation – time and energy – to the
sidelines of computer science. This is unfortunate, because time and energy
harbor the largest challenges to life as we know it, and are therefore key reasons
why intelligence exists. As a result, time and energy must be an integral part of
any serious analysis and theory of mind and thought. Following Turing's
tradition, in a misguided effort to strengthen computer science as a science, an
overemphasis on mathematical formalization continued as an accepted
approach, eventually becoming the norm. The side effects include artificial
intelligence research largely losing its focus, and a significant slowdown in
progress towards understanding intelligence as a phenomenon. In this position
paper I briefly present the arguments behind these claims.
1 Introduction
A common conception is that the field of computer science provides an obvious and
close to ideal foundation for research in artificial intelligence (AI). Unfortunately
some fundamental derailments prevent computer science – as practiced today – from
providing the perfectly fertile ground necessary for the two to have the happy
marriage everybody is hoping for. Here we will look at two major such derailments.
2 Derailment #1: The Turing Machine
Alan Turing's characterization of computation – as the sequential reading and
writing of symbols by a simple device we now know as a Turing Machine (Turing
1948) – is generally considered to be a cornerstone of computer science. Turing's
influential paper Computing Machinery and Intelligence (Turing 1950) took
notable steps towards considering intelligence as a computational system. The
foundation of what came to be called artificial intelligence – the quest for making
machines capable of what we commonly refer to as "thought" and "intelligent action"
– was laid a few years after his papers were written. The main inspiration for the field
came of course from nature – this is where we still find the best (some might say only)
examples of intelligence. The original idea of artificial intelligence, exploration of
which already started with cybernetics (cf. Heylighen & Joslyn 2001), was to apply the
tools of modern science and engineering to the creation of generally intelligent
machines that could be assigned any task: from washing dishes and skyscraper windows to writing research reports and discovering new laws of nature; machines that could invent new things and solve difficult problems requiring imagination and creativity.
Before we go further on this historical path, let's look at two natural phenomena
that play a large role in science and engineering: time and energy. Time and energy
are directly relevant to the nature of intelligence on two levels. First, because every
computation must take place in a medium, and every medium requires some amount
of time and energy to act, there are limits on the number of computations that can be
produced in a given timeframe. This level of detail is important to AI because
intelligence must be judged in the context of the world – including the computing
medium – in which it occurs: If the mind of an intelligent agent cannot support a
sufficient computation speed for it to act and adapt appropriately in its environment,
we would hardly say that the agent is "dumb" because it would be physically
incapable of acting intelligently. The physical properties of environments present time
and energy constraints; the "hardware" of a thinking agent must meet some minimum
specification to support thought at sufficient speeds for survival. Unless we study the
role time and energy play at this level of the cognitive computing medium we are
neither likely to understand the origins of intelligence nor its operating principles.
At the cognitive level time must in fact occupy part of the content of any intelligent
mind: Every real-world intelligent agent must be able to understand and think about
time, because everything they do happens in time. The situation is similar with respect
to energy at this level (although in the case of humans it used to be more relevant in the
past than it is now, as foraging and farming occupied more time in our ancestors'
minds than ours). In either case, a key role of intelligence from moment to moment
remains in large part to help us handle the ticking of a real-world clock by thinking
about time: To make better use of time, to be able to meet deadlines and understand
the implications of missing them, to shorten the path from a present state to a new
state, to speed up decision time by using past experiences and decision aids, and so
on. Having unbounded time means that any problem can be solved by a complete
search of all possibilities and outcomes. But if this is the case, intelligence is
essentially not needed: Disregarding time renders intelligence essentially irrelevant.
And so the very subject of our study has been removed. Therein lies the rub: Unlike
the paths taken (so far) in some of the subdomains of computer science, the field of
AI is fundamentally dependent on time and energy – these are two of its main raison
d'être – and therefore must be an integral part of its theoretical foundation.
Fast forward to the present. The field we know as 'computer science' has been
going strong for decades. But it gives time and energy short shrift as subjects of
importance. To be sure, progress continues on these topics, e.g. in distributed systems
theory and concurrency theory, among others. But it is a far cry from what is needed,
and does not change historical facts: Few if any programming languages exist where
time is a first-class citizen. Programming tools and theories that can deal properly
with time are sorely lacking, and few if any real methods exist to build systems for
realtime performance without resorting to hardware construction. Good support for
the design and implementation of energy-constrained and temporally-dependent
systems (read: all software systems) is largely relegated to the field of "embedded
systems" (cf. Sifakis 2011) – a field that limits its focus to systems vastly simpler than
any intelligent system and most biological processes found in nature, thus bringing little additional value to AI. As a result, much of the work of computer science practitioners – operating systems, databases, programming tools, desktop applications, mathematics – is rendered irrelevant to a serious study of intelligence.
What caused this path to be taken, over the numerous other possibilities suggested
by cybernetics, psychology, engineering, or neurology? Finding an explanation takes
us back to Turing's simplified model of computation: When he proposed his definition
of computation Turing branched off from computer engineering through a dirty trick:
His model of computation is completely mute on the aspects of time and energy. Yet
mind exists in living bodies because time is a complicating factor in a world where
energy is scarce. These are not some take-it-or-leave-it variables that we are free to
include or exclude in our scientific models, these are inseparable aspects of reality.
As an incremental improvement on past treatments, some might counter, Turing's
ideas were an acceptable next step, in a similar way that Newton's contributions in
physics were before Einstein (they were not as thoroughly temporally grounded). But
if time and energy are not needed in our theories of computation we are saying that
they are irrelevant in the study of computation, implying that it does not matter
whether the computations we are studying take no time or infinite time: The two
extremes would be equivalent. Such reductio ad absurdum, in the literal meaning of
the phrase, might possibly be true in some fields of computer science – as they
happen to have evolved so far – but it certainly is not true for AI. If thinking is
computation we have in this case rendered time irrelevant to the study of thought.
Which is obviously wrong.
An oversimplification such as this would hardly have been tolerated in
engineering, which builds its foundations on physics. Physicists take pride in making
their theories actually match reality; would a theory that ignores significant parts of
reality have been made a cornerstone of the field? Would the theory of relativity have
received the attention it did had Einstein not grounded it with a reference to the speed
of light? Somehow E = m is not so impressive. The situation in computer science is
even worse, in fact, because with Turing's oversimplification – assuming infinite time
and energy – nothing in Einstein's equation would remain.
4 Derailment #2: Premature Formalization
The inventors of the flying machine did not sit around and wait for the theory of
aerodynamics to mature. Had the Wright brothers waited for the "right mathematics",
or focused on some isolated part of the problem simply because the available
mathematics could address it, they would certainly not be listed in history as the
pioneers of aviation. Numerous other major discoveries and inventions – electricity,
wireless communications, genetics – tell a similar story, providing equally strong
examples of how scientific progress is made without any requirement for strict formal
description or analysis.
In addition to relegating time and energy to a status of little importance in AI,
rubbing shoulders with computer science for virtually all of its 60-year existence has
brought with it a general disinterest in natural phenomena and a pernicious obsession
with formalization. Some say this shows that AI suffers from physics envy – envy of
the beauty and simplicity found in many physics equations – and the hope of finding
something equivalent for intelligence. I would call it a propensity for premature
formalization. One manifestation of this is researchers limiting themselves to
questions that have a clear hope of being addressed with today's mathematics –
putting the tools in the driver's seat. Defining research topics in that way – by
exclusion, through the limitations of current tools – is a sure way to lose touch with
the important aspects of an unexplained natural phenomenon.
Mathematical formalization does not work without clear definitions of terms.
Definition requires specifics. Such specification, should the mathematics invented to
date not be good for expressing the full breadth of the phenomena to be defined
(which for complex systems is invariably the case), can only be achieved through
simplification of the concepts involved. There is nothing wrong with simplification in
and of itself – it is after all a principle of science. But it matters how such
simplification is done. Complex systems implement intricate causal chains, with
multiple negative and positive feedback loops, at many levels of detail. Such systems
are highly sensitive to changes in topology. Early simplifications are highly likely to
leave out key aspects of the phenomena to be defined. The effects can be highly
unpredictable; the act will likely result in devastating oversimplification.
General intelligence is capable of learning new tasks and adapting to novel
environments. The field of AI has, for the most part, lost its ambition towards this
general part of the intelligence spectrum, and focused instead on the making of
specialized machines that only slightly push the boundaries of what traditional
computer science tackles every day. Part of the explanation is an over-reliance on
Turing's model of computation, to the exclusion of alternatives, and a trust in the
power of formalization that borders on the irrational. As concepts get simplified to fit
available tools, their correspondence with the real world is reduced, and the value of
subsequent work is diminished. In the quest for a stronger scientific foundation for
computer science, by threading research through the narrow eye of formalization,
exactly the opposite of what was intended has been achieved: The field has been
made less scientific.
5 What Must Be Done
In science the questions are in the driver's seat: A good question comes first,
everything else follows. Letting the tools decide which research questions to pursue is
not the right way to do science. We should study more deeply the many principles of cognition that are difficult to express in today's formalisms – for instance, system architectures implementing multiple feedback loops at many levels of detail; only this
way can we simultaneously address the self-organizing hierarchical complexity and
networked nature of intelligent systems. Temporal latency is of course of central
importance in feedback loops and information dissemination in a large system. All
this calls for greater levels of system understanding than achieved to date (cf. Sifakis
2011), and an understanding of how time and energy affect operational semantics.
The very nature of AI – and especially artificial general intelligence (AGI) – calls
for a study of systems. But systems theory is immature (cf. Lee 2006) and computer
science textbooks typically give system architecture short shrift. The rift between
computer science and artificial intelligence is not a problem in principle – computer
science could easily encompass the numerous key subjects typically shunned in AI
today, such as non-axiomatic reasoning, existential autonomy, fault-tolerance,
graceful degradation, automatic prioritization of tasks and goals, and deep handling of
time, to name some basic ones. Creativity, insight and intuition, curiosity, perceptual
sophistication, and inventiveness are examples of more exotic, but no less important,
candidates that are currently being ignored. Studying these with current formalisms is
a sure bet on slow or no progress. We don't primarily need formalizations of cognitive
functions per se, first and foremost we need more powerful tools: New formalisms
that don't leave out key aspects of the real world; methods that can address its
dynamic complexity head-on, and be used for representing, analyzing, and ultimately
understanding, the operation of large complex systems in toto.
Acknowledgments. Thanks to Eric Nivel, Pei Wang, Hrafn Th. Thórisson, Helgi Páll
Helgason, Luca Aceto, Jacky Mallett, and the anonymous reviewers for comments on the paper.
This work has been supported in part by the EU-funded project HUMANOBS: Humanoids
That Learn Socio-Communicative Skills by Observation, contract no. FP7-STReP-231453
(www.humanobs.org), by a Centres of Excellence project grant (www.iiim.is) from the
Icelandic Council for Science and Technology Policy, and by grants from Rannís, Iceland.
References
Heylighen, F. & C. Joslyn (2001). Cybernetics and Second-Order Cybernetics.
Encyclopedia of Physical Science & Technology, 3rd ed. New York: Academic Press.
Lee, E. A. (2006). Cyber-Physical Systems - Are Computing Foundations
Adequate? Position Paper for NSF Workshop On Cyber-Physical Systems: Research
Motivation, Techniques and Roadmap.
Sifakis, J. (2011). A Vision for Computer Science – the System Perspective. Cent.
Eur. J. Comp. Sci., 1:1, 108-116.
Turing, A. (1948). Intelligent Machinery. Reprinted in C. R. Evans and A. D. J.
Robertson (eds.), Cybernetics: Key Papers, Baltimore: University Park Press, 1968.
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59:236, 433–460.
On Deep Computational Formalization of Natural Language
Naveen Sundar Govindarajulu is a PhD candidate in Computer Science at RPI. His thesis
work is on designing and implementing the first uncomputable games and has been partly
funded by the International Fulbright Science and Technology Award. Naveen holds a dual
degree in Physics and Electronics & Electrical engineering from BITS-Pilani, obtained with help
provided by the GE Foundation's Scholar-Leader award. His prior experience includes research
at Hewlett Packard Labs, the Tata Institute of Fundamental Research, and the Indian Space
Research Organization’s Mission Control Center.
Dr. Selmer Bringsjord is Professor of Logic, Computer Science, Cognitive Science, Management
& Technology, and is the Chair of the Department of Cognitive Science at RPI. Prof. Bringsjord
is also the Director of the Rensselaer AI & Reasoning (RAIR) Lab. Bringsjord's focus is on logic-based AI, from both an engineering and foundations point of view. He is the author of
numerous papers and books, and routinely lectures and demos across the globe.
John Licato is a computer science Ph.D. student at Rensselaer Polytechnic Institute, working
with the Rensselaer AI and Reasoning lab. His research interests include Analogico-Deductive
Reasoning (ADR) which combines analogical and deductive reasoning techniques to solve
problems, the computational modeling and philosophical basis of analogy, robotics, and the
modeling of higher cognition.
On Deep Computational Formalization of Natural
Language
Naveen Sundar Govindarajulu1 • Selmer Bringsjord2 • John Licato3
Rensselaer AI & Reasoning (RAIR) Lab
Cognitive Science2 , Computer Science1,2,3 & Lally School of Management & Technology2
Rensselaer Polytechnic Institute (RPI), Troy NY 12180 USA.
{govinn,selmer,licatj}@rpi.edu
Keywords: Deontic Cognitive Event Calculus, formalization of representation and reasoning, Montague semantics, Discourse Representation Theory.
1 Introduction
Current AI and NLP operate on fragments of natural language and within specific application domains. Even the most successful AI/NLP system in existence today, IBM’s
Watson, is highly limited when it comes to simple language processing just beyond its
ken. In order to paralyze (at least the Jeopardy!-winning version of) Watson, one has
only to ask it questions that have never been asked before, or questions to which answers
have never been recorded before. Such queries are easy to formulate; for example:
– “If I have 4 foos and 5 bars, and if foos are not the same as bars, how many foos will I have
if I get 3 bazes which just happen to be foos?”; or
– “What was IBM’s Sharpe ratio in the last 60 days of trading?”1
We contend that one of the major reasons for this lack of generality in AI/NLP
systems is the absence of a wide-ranging formalization of natural language that is both
fully formal and rigorously computational. We herein elaborate on the requirements that
such a theory should meet, and very briefly evaluate the two most prominent projects
in formal semantics: the Montagovian approach and Kamp’s DRT approach (laid out
under our rubric). We then encapsulate our own approach, which is rooted in formal
computational logic; this approach fares markedly better than either the Montagovian
or the DRT tack.
2 Requirements for a Computational Formalization of Natural Language
Formalizing language can quickly become a philosophically troubled project, but we are humble, in that we want to formalize language just to the extent that it allows us to do certain meaningful things computationally. The requirements that drive our efforts are spelled out below.

1 The approach presented in this paper will be used in a collaboration with IBM, and with RPI's Jim Hendler, in order to enable subsequent versions of Watson to answer such questions on the strength of robust problem-solving.
All language formalization approaches treat language as an isolated phenomenon
separate from cognitive processes that use language. From a purely scientific perspective, this view, while convenient, is incomplete. The existing general approach in formal
semantics (or even pragmatics) is that sentences (or linguistic phenomena) are considered in isolation, and various formal structures are posited for their meaning. Pragmatics tries to rectify this, but there is no convincing unified framework for formal pragmatics comparable to, for example, model theory for formal semantics.
Our requirements below stem from the observation that any account of language
needs to include a full account of how and where it is used. Let us denote the set of
all natural-language sentences and expressions in some language by ‘L ,’ and the set
of all formal expressions that we can computationally handle by ‘F .’ Though some
will be philosophically inclined to reject the idea, it is not unreasonable to hold that
objects in F represent meaning, or, a bit more precisely, are what sentences mean. On
this foundation, any formalization of language would naturally attempt to define F and
give us a “meaning mapping” µ such that ∀s ∈ L ∃m ∈ F µ(s, m). With this background,
we can distill our requirements into the following pair.
Formalization of Extensional and Intensional Representation: The formalization should specify both F and some general class of µ. The class F should be syntactically rich enough to
handle not just extensional sentences of natural language such as “All apples are red.” but
also challenging intensional sentences such as “Jack believes that Jane believes that all
apples are red.” (Roughly put, extensional concepts pertain to those of the physical kind
and intensional concepts to cognitive concepts.) The requirement to formalize both extensional and intensional sentences rules out simple extensional languages such as those at the
heart of the Semantic Web (e.g., description logics, covered e.g. in Baader, Calvanese &
McGuinness 2007), or those that are the target of simple domain-specific semantic parsing
techniques (Kwiatkowski, Zettlemoyer, Goldwater & Steedman 2010). For a more extensive
defense of the position that extensional languages and logics are not adequate to capture intensional concepts, please consult (Bringsjord & Govindarajulu 2012), in which we look at
three different ways of formalizing knowledge within first-order logic. All three approaches
fail by either introducing an inconsistency or by enabling unsound inferences to be drawn.
Formalization of Reasoning: The formalization should also have scope for including all the different kinds of reasoning processes that can be carried out with the aid of natural language. Such wide-ranging reasoning of course goes beyond just simple classical deduction. In general, given a set ΓL of sentences in natural language from which we can deduce/produce/infer another set of sentences Γ′L via some reasoning process θ, we should have formalized θ computationally as Θ (or at least be able to). That is, we should have ΓF →Θ Γ′F (a toy illustration follows below).
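As a toy illustration of this second requirement (our own example; the constant macoun, naming a particular apple, is not drawn from any of the systems discussed below), take ΓL = {“All apples are red.”, “Macoun is an apple.”} and let θ be classical deduction. Its computational counterpart Θ is then ordinary first-order derivation:

ΓF = {∀x (Apple(x) → Red(x)), Apple(macoun)}  ⊢  Γ′F = {Red(macoun)},

which recovers Γ′L = {“Macoun is red.”}.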
3 Current Approaches to Formalizing Natural Language
Current approaches to formalizing language can be broadly divided into the two aforementioned camps: the Montagovian approach based on Montague’s work, and the Discourse Representation Theory (DRT) framework. DRT can be considered an offshoot
of the Montagovian approach and aims to incorporate pragmatics fully into its own
account.
3.1 Montague's Framework
Based on the simple ontology in our requirements, we can say that this approach tries to
give a formal account of the possible meanings, the set F , of different natural-language
expressions and how they relate to the syntax of natural-language expressions, L , with
a general theory for µ. The main shortcoming of this approach is that it fails to give a
unified account, either informally or formally, of the various kinds of cognitive processes that language takes part in. Another shortcoming of this approach is that there
seems to be a general absence of proof-theoretic methods. Formal Montagovian semantics leans heavily on model-theoretic methods. While model-theoretic semantics has
found success in mathematical logic, the absence of proof-theoretic methods makes it
hard to build computational methods.2 From a purely extensional standpoint, model-theoretic semantics would seem to suffice. However, using model theory to account for
cognitive concepts such as knowledge becomes more problematic, a fact long reflected
in the invention of intensional logics in order to model knowledge, belief, obligation,
and so on (a survey of such logicist modeling is provided in Bringsjord 2008). A good
overview of the general approach and a brief history of Montague’s approach can be
found in (Montague 1974, Dowty, Wall & Peters 1981).3
3.2 Discourse Representation Theory
DRT (Kamp & Reyle 1993) arose to address one of the main perceived shortfalls of
Montague’s approach: assigning meaning to linguistic expressions in a sentence taking into account other sentences. While DRT has been successful in modeling quite a
bit of pragmatics, it lacks a unified formal framework like that of Montagovian formal
semantics. The lack of this framework renders it incomplete for our purposes. Another
shortcoming of DRT is that there is no support for intensional concepts and tense. There
are some theories within the DRT family that try to mould intensionalities into extensional predicates. We show in detail in (Bringsjord & Govindarajulu 2012) how this
moulding can be problematic in general.
4 A More Rigorous Framework
We suggest an approach based on a formal logic with an explicit proof theory rooted in
computational methods. We implement one incarnation of this approach via the Deontic
Cognitive Event Calculus (DCEC∗). DCEC∗ is a quantified modal logic that builds upon the first-order Event Calculus (EC). EC has been used quite successfully in
modelling a wide range of phenomena, from those that are purely physical to narratives
expressed in natural-language stories (Mueller 2006).4
2 See (Francez & Dyckhoff 2010) for more discussion on the computational advantages of a proof-theoretic semantics for natural language.
3 Janssen's article (Janssen 2012) on Montague's approach is laconic and lucid, but readers who want more can consult the original sources.
4 A nice overview of EC is provided in the well-known “AIMA” textbook (Russell & Norvig 2009).
EC is also a natural platform to capture natural-language semantics, especially that
of tense; see (Van Lambalgen & Hamm 2005). EC has a shortcoming: it is fully extensional and hence, as explained above, has no support for capturing intensional concepts
such as knowledge and belief without introducing unsoundness or inconsistencies. For
example, consider the possibility of modeling changing beliefs with fluents. We can
posit a “belief” fluent belief(a, f) which says whether an agent a believes another fluent
f. This approach quickly leads to serious problems, as one can substitute co-referring
terms into the belief term, which leads to either unsoundness or an inconsistency (see
Figure 3 in (Bringsjord & Govindarajulu 2012)). One can try to overcome this using
more complex schemes of belief encoding in FOL, but they all seem to fail. A more
detailed discussion of such schemes and how they fail can be found in the analysis in
(Bringsjord & Govindarajulu 2012).
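As a toy illustration of the problem (our own example, in the spirit of the analysis cited above but not reproduced from it): suppose holds(belief(jack, rich(author_of(waverley))), t) and author_of(waverley) = scott, where the identity is unknown to Jack. Since belief is here an ordinary extensional fluent, substitution of equals licenses holds(belief(jack, rich(scott)), t), a conclusion that need not correspond to any belief Jack actually has; blocking such substitutions while remaining within FOL is exactly what proves difficult.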
5 More Detailed Discussion

5.1 Formalization of Representation
DCEC∗ (deontic cognitive event calculus) is a multi-sorted quantified modal logic (for coverage of multi-sorted logic, see Manzano 1996) that has a formal, recursively defined syntax and a proof calculus.5 The DCEC∗ syntax includes a system of sorts S, a
signature f , a grammar for terms t, and a grammar for sentences φ. The proof calculus
is based on natural deduction (Jaśkowski 1934), and includes all the introduction and
elimination rules for first-order logic, as well as rules for the intensional operators. The
formal semantics for DCEC∗ is still under development; a semantic account of the
wide array of cognitive and epistemic constructs found in the logic is no simple task
— especially because of two self-imposed constraints: resisting fallback to the standard ammunition of possible-worlds semantics (which for reasons beyond the scope
of the present paper we find manifestly implausible as a technique for formalizing the
meaning of epistemic operators), and resisting the piggybacking of deontic operators
on pre-established logics not expressly created and refined for the purpose of doing
justice to moral reasoning in the human realm. For an introduction, see (Arkoudas &
Bringsjord 2009).
5.2 Formalization of Reasoning
DCEC∗ supports three different modes of reasoning. We briefly touch upon them, thereby illustrating that DCEC∗ (or, in fairness, any similar calculus) can be used to
model a diverse array of reasoning processes.
5 The syntax of the language of DCEC∗ and the rules of inference for its proof calculus are described in detail in the formal specification available at http://www.cs.rpi.edu/~govinn/dcec.pdf. For prior work describing the system, please refer to (Arkoudas & Bringsjord 2009, Bringsjord & Govindarajulu 2013). We are grateful to the referees for their comments and observations on a partial version of this specification, which we had included in a prior version of the present paper.
5.3 Deductive Reasoning
DCEC∗ includes a deductive proof calculus Θd. This calculus enables one to reason
not only about purely extensional concepts like events, fluents, and moments, which is
reasoning enabled by the pure Event Calculus, but in addition enables reasoning about
intensional concepts, such as the beliefs, knowledge, desires, intentions, communications, etc. of agents. This calculus has for instance been used to analyze the false-belief
task computationally. While it is possible in principle to have a monolithic proof finder to answer whether Γ ⊢Θd γ, proof search in this calculus can be broken down into cognitively plausible reusable procedures called λµ-methods. For a set of methods relevant
to the false-belief task, see (Arkoudas & Bringsjord 2009). This deductive calculus has
also been used to solve the generalized wise-men puzzle (for any number of wise men);
see (Arkoudas & Bringsjord 2004).
5.4 De Se Statements
We have proposed certain syntactic constructs in (Bringsjord & Govindarajulu 2013)
by which one can:
1. distinguish de se statements from de dicto and de re statements; and
2. differentiate first- and third-person de se beliefs.
The former can be achieved by introducing an operator ∗ which can let us know if
agents are referring to themselves in an irreducible manner. This construct, stemming
from Castañeda’s work (e.g., Castañeda 1999), is insufficient alone, and most work in
formal semantics of self-reference stops with this mechanism. This mechanism fails
when we want to distinguish between third-person de se statements and first-person de
se statements. What is needed is some way of recording in the reasoning process Θ a
symbol for the agent doing the reasoning. This syntactic construct goes beyond what
can be talked about in classical formal semantics. Table 1 illustrates representations for
different de se statements in DCEC∗.6
5.5 Extensional Analogical Reasoning
Another focus of some of the present authors is Analogico-Deductive Reasoning (ADR)
(Licato, Bringsjord & Hummel 2012), a hybrid of analogical and hypothetico-deductive
reasoning that attempts to automate the generation (through analogical mapping and
inference) of hypotheses, which are then subjected to deductive reasoning techniques.
The use of ADR has been modeled in psychological experiments (Licato et al. 2012,
Bringsjord & Licato 2012); and most recently, in Licato et al. (2013), an ADR system,
Modifiable Engine for Tree-based Analogical Reasoning (META-R), was applied to the
domain of theorem- and proof-discovering in cutting-edge mathematical logic. Because
the surface syntax of the extensional subset of DC EC ∗ can be represented using tree
structures, it lends itself nicely to the algorithms implemented by META-R.
6 We capture the indexicals {agent, time, place, . . .}, relevant for any inference, by the notation Γ ⊢{agent,time,place,...} φ. For example, irreducible first-person inferences can be partially written as Γ ⊢I φ.
Table 1: De Se Belief in DCEC∗

NL Sentence | DCEC∗ | Type
Jack believes that the person named “Jack” is rich. | B(jack, now, ∃x : Agent (named(x, “Jack”) ∧ rich(x))) | De Dicto
Jack believes of the person named “Jack” that he is rich. | ∃x : Agent (named(x, “Jack”) ∧ B(jack, now, rich(x))) | De Re
Jack believes that he himself is rich. | B(jack, now, rich(jack∗)) | Third-person De Se
I believe that I myself am rich. | ⊢I B(I, now, rich(I∗)) | First-person De Se
6 Conclusion & Next Steps
Our future work falls into two categories: work aimed at increasing the expressive
power of DCEC∗, and work aimed at capturing more cognitive processes.
6.1 Representation: Scoped Terms and Natural-Language Connectives
The DCEC∗ syntax presented in this paper lacks a way to represent general noun phrases and quantifiers. All the connectives in the current syntax derive from mathematical logic; for example, the conjunctions in “Jack fell down and Jill came tumbling after” and “x ≥ 3 and x ≤ 5” do not have the same meaning. Future work will incorporate such constructs and refine the proof theory to accommodate these new elements more cleanly.
6.2 Reasoning: Planning Integrated with Intensional Concepts
Another thread of research is to focus on designing a planning framework (for an agent)
that integrates all the intensional operators with planning in an event calculus-based
planning formalism. This work is well underway, and some demonstrations should soon
be possible.
References
Arkoudas, K. & Bringsjord, S. (2004), Metareasoning for Multi-agent Epistemic Logics, in ‘Proceedings of the Fifth International Conference on Computational Logic In Multi-Agent Systems (CLIMA 2004)’, Lisbon, Portugal, pp. 50–65.
URL: http://kryten.mm.rpi.edu/arkoudas.bringsjord.clima.crc.pdf
Arkoudas, K. & Bringsjord, S. (2009), ‘Propositional Attitudes and Causation’, International
Journal of Software and Informatics 3(1), 47–65.
URL: http://kryten.mm.rpi.edu/PRICAI w sequentcalc 041709.pdf
Baader, F., Calvanese, D. & McGuinness, D., eds (2007), The Description Logic Handbook:
Theory, Implementation (Second Edition), Cambridge University Press, Cambridge, UK.
Bringsjord, S. (2008), Declarative/Logic-Based Cognitive Modeling, in R. Sun, ed., ‘The Handbook of Computational Psychology’, Cambridge University Press, Cambridge, UK, pp. 127–
169.
URL: http://kryten.mm.rpi.edu/sb lccm ab-toc 031607.pdf
Bringsjord, S. & Govindarajulu, N. (2013), Toward a Modern Geography of Minds, Machines,
and Math, in V. C. Müller, ed., ‘Philosophy and Theory of Artificial Intelligence’, Vol. 5 of
Studies in Applied Philosophy, Epistemology and Rational Ethics, Springer Berlin Heidelberg, pp. 151–165.
URL: http://www.springerlink.com/content/hg712w4l23523xw5
Bringsjord, S. & Govindarajulu, N. S. (2012), ‘Given the web, what is intelligence, really?’,
Metaphilosophy 43(4), 464–479.
URL: http://dx.doi.org/10.1111/j.1467-9973.2012.01760.x
Bringsjord, S. & Licato, J. (2012), Psychometric Artificial General Intelligence: The PiagetMacGyver Room, in P. Wang & B. Goertzel, eds, ‘Theoretical Foundations of Artificial
General Intelligence’, Atlantis Press.
URL: http://kryten.mm.rpi.edu/Bringsjord Licato PAGI 071512.pdf
Castañeda, H.-N. (1999), ‘He’: A Study in the Logic of Self-Consciousness, in J. G. Hart &
T. Kapitan, eds, ‘The Phenomeno-Logic of the I: Essays on Self-Consciousness’, Indiana
University Press, 601 North Morton Street, Bloomington, Indiana 4704-3797 USA.
Dowty, D., Wall, R. & Peters, S. (1981), Introduction to Montague Semantics, D. Reidel, Dordrecht, The Netherlands.
Francez, N. & Dyckhoff, R. (2010), ‘Proof-theoretic Semantics for a Natural Language Fragment’, Linguistics and philosophy 33(6), 447–477.
Janssen, T. M. V. (2012), Montague semantics, in E. N. Zalta, ed., ‘The Stanford Encyclopedia
of Philosophy’, winter 2012 edn.
Jaśkowski, S. (1934), ‘On the Rules of Suppositions in Formal Logic’, Studia Logica 1, 5–32.
Kamp, H. & Reyle, U. (1993), From Discourse to Logic: Introduction to Model-theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory, 1 edn,
Springer.
Kwiatkowski, T., Zettlemoyer, L., Goldwater, S. & Steedman, M. (2010), Inducing Probabilistic CCG Grammars from Logical form with Higher-order Unification, in ‘Proceedings of
the 2010 conference on empirical methods in natural language processing’, Association for
Computational Linguistics, pp. 1223–1233.
Licato, J., Bringsjord, S. & Hummel, J. E. (2012), Exploring the Role of Analogico-Deductive
Reasoning in the Balance-Beam Task, in ‘Rethinking Cognitive Development: Proceedings
of the 42nd Annual Meeting of the Jean Piaget Society’, Toronto, Canada.
URL: https://docs.google.com/open?id=0B1S661sacQp6NDJ0YzVXajJMWVU
Licato, J., Govindarajulu, N. S., Bringsjord, S., Pomeranz, M. & Gittelson, L. (2013), ‘Analogicodeductive Generation of Gödel’s First Incompleteness Theorem from the Liar Paradox’, Proceedings of the 23rd Annual International Joint Conference on Artificial Intelligence (IJCAI–
13) .
Manzano, M. (1996), Extensions of First Order Logic, Cambridge University Press, Cambridge,
UK.
Montague, R. (1974), Formal Philosophy: Selected Papers of Richard Montague, Yale University
Press, New Haven, CT.
Mueller, E. (2006), Commonsense Reasoning, Morgan Kaufmann, San Francisco, CA.
Russell, S. & Norvig, P. (2009), Artificial Intelligence: A Modern Approach, Prentice Hall, Upper
Saddle River, NJ. Third edition.
Van Lambalgen, M. & Hamm, F. (2005), The Proper Treatment of Events, Vol. 6, Blackwell
Publishing.
Formal Magic for Analogies
Ulf Krumnack, Ahmed Abdel-Fattah, and Kai-Uwe Kühnberger
Institute of Cognitive Science, University of Osnabrück, Germany
Abstract. During the last decades, a number of different approaches to model analogies and analogical reasoning have been proposed that apply different knowledge representation and mapping strategies. Nevertheless, analogies still seem to be hard to grasp from a formal perspective, with no known treatment in the literature of their formal semantics. In this paper we present a universal framework that allows one to analyze the syntax and the semantics of analogies in a logical setting without committing ourselves to a specific type of logic. We then apply these ideas by considering an analogy model that is based on classical first-order logic.
1 Introduction
There is no uncontroversial theory of the semantics of analogies (although several models for ‘computing’ analogies have been proposed). Even worse, for all established
frameworks, not even an attempt to find a semantics of the underlying computations can be found. As a consequence of these deficiencies, the desired generalization capabilities of appropriate AI systems are currently far from being reachable. From an AI
perspective, it would be desirable to have at least a model that could be combined with
standard mechanisms of knowledge representation and reasoning. In order to bridge the
gap between algorithmic approaches for analogical reasoning and the denotational semantics underlying these algorithms, the usage of analogy mechanisms for AI systems
is proposed, in this paper, in an abstract way. Semantic issues of analogical relations are
investigated and a model theory of analogical transfers is specified.
2 Modeling Analogies: The Setting
Our approach aims at achieving an abstract syntactic and semantic representation of
modeling analogies in an arbitrary logical framework. The approach is loosely discussed in this section and formalized in the next. By an arbitrary logical framework
we refer to a general framework that represents knowledge using a logic language (e.g., first-order predicate logic).
There seems to be a generally non-controversial core interpretation of analogies in
the literature: Analogical relations can be established between a well-known domain,
the source, and a formerly unknown domain, the target, without taking much input data
(examples) into account. It is rather the case that a conceptualization of the source domain is sufficient to generate knowledge about the target domain, which can be achieved
by associating attributes and relations of the source and target domains. New “conceptual entities” can be productively introduced in the target domain by the projection (or
the transfer) of attributes and relations from the source to the target, which allow the
performance of reasoning processes to take place in the target domain.
That is, analogical relations between the entities in the input domains result from
analogical transfer, which identifies their common (structural) aspects, and exports
some of these aspects from the source to the target, in order for the treatment of the
target (as being similar to the source) to consistently come about. The common aspects
of the input domains are identified, and made explicit, in a generalized domain that captures the common parts of the input domains. This generalized domain can be thought
of as being the mutual “generalization” sub-domain of both input domains. Figure 1 depicts this overall idea using S and T as source and target domains, respectively, where
m represents an analogical relation between them, and the generalization, G, represents
the common parts of S and T .
Fig. 1. An overall view of creating analogies: the generalization G is obtained by abstraction from the source S and the target T, which are related by the analogical relation m; analogical transfer proceeds from the source to the target.
It is worth noting that the amount of “coverage” in analogy-making plays a role,
since it basically reflects the limit to which parts of the domains may or may not be
included in the generalization. Moreover, not only some parts of the domains may be
irrelevant to each other, of course, but also the domains can be pragmatically irrelevant
for a human reasoner. These issues are not discussed in this paper in further detail.
Notice that the abstraction process results in a generalization of source and target. The inverse operation, namely constructing the input domains from the generalization, can be modeled by substitutions and therefore results in specializations of the generalization.
3 A Formal Framework
We will present our ideas using the theory of institutions by briefly recalling the central
notions (the interested reader is referred to [1, 2]). An institution formalizes the intuitive
notion of a logical system into a mathematical object. A particular system is constituted
by a signature Σ, which gives rise to a syntax, formalized as a set of sentences Sen(Σ),
and a semantics, formalized by a category of models Mod(Σ). Sentences and models
are related by satisfaction |=Σ expressing which sentences hold in which models. A
simple example of an institution can be given in well-known terms of first-order logic
(FOL). In this case, the set Sen(Σ) of Σ-sentences corresponds to the set of all FOL formulas that can be built using symbols from a signature Σ. For each signature Σ the
collection Mod(Σ) of all Σ-models corresponds in FOL to the collection of all possible interpretations of symbols from Σ. The Σ-models and Σ-sentences are related by
the relation of Σ-satisfaction, which corresponds to the classical model-theoretic satisfaction relation in FOL. A change in notation should not alter the truth of formulae. This is
formalized by assuming functoriality of the mappings Sen and Mod in the following
way: given two signatures Σ1 and Σ2 , and a signature morphism, i.e. a structure preserving mapping between signatures, σ : Σ1 → Σ2 , then this will induce a mapping
Sen(σ) : Sen(Σ1 ) → Sen(Σ2 ) on the syntactic level and a contravariant mapping
Mod(σ) : Mod(Σ2 ) → Mod(Σ1 ) on the semantic level, such that satisfaction is
preserved:
Mod(σ)(m) |=Σ1 ϕ   iff   m |=Σ2 Sen(σ)(ϕ)   for every ϕ ∈ Sen(Σ1) and every m ∈ Mod(Σ2).   (1)
For our exposition, the notion of a signature morphism does not suffice, so we will make
use of a more general concept.1
Definition 1. For any signature Σ of an institution, and any signature morphisms χ1 : Σ → Σ1 and χ2 : Σ → Σ2, a general Σ-substitution ψχ1:χ2 from Σ1 to Σ2 (over Σ) consists of a pair ⟨Sen(ψ), Mod(ψ)⟩, where
– Sen(ψ) : Sen(Σ1 ) → Sen(Σ2 ) is a function
– Mod(ψ) : Mod(Σ2 ) → Mod(Σ1 ) is a functor
such that both of them preserve Σ, i.e. the following diagrams commute:

Sen(ψ) ∘ Sen(χ1) = Sen(χ2)   and   Mod(χ1) ∘ Mod(ψ) = Mod(χ2),
and such that the following satisfaction condition holds:
Mod(ψ)(m2 ) |= ρ1 if and only if m2 |= Sen(ψ)(ρ1 )
for each Σ2 -model m2 and each Σ1 -sentence ρ1 .
General Σ-substitutions extend the idea of a signature morphism. Although in general there need not be a mapping on the level of signatures between Σ1 and Σ2, most general Σ-substitutions considered in practice are induced by some form of signature mapping. Every signature morphism can be seen as a general Σ-substitution, and many other mappings, like classical first-order substitutions, second-order substitutions (for FOL), and derived signature morphisms, give rise to a general Σ-substitution.
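A minimal first-order illustration (our own example): let Σ1 contain a constant c and a unary predicate P, let Σ2 contain c and a unary predicate Q, and let the signature morphism σ : Σ1 → Σ2 map c to c and P to Q. Then Sen(σ)(P(c)) = Q(c), and for any Σ2-model m2 the reduct Mod(σ)(m2) is the Σ1-model that interprets P by the set m2 assigns to Q, so Mod(σ)(m2) |= P(c) iff m2 |= Q(c), exactly as the satisfaction condition requires.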
We now turn to modeling analogies in this framework. Analogy-making can be
broadly characterized by the result of finding an analogical relation linking salient ‘conceptual entities’ of two (structured) domains to each other. The first of these domains
is designated as the “source” domain (the one which is standardly considered to be the
richer domain including more available and accessible knowledge) and the other as the
“target”. Analogical reasoning is then seen as the ability to treat the target as being similar, in one aspect or another, to the source, depending on their shared commonalities
in relational structure or appearance. Once such an analogical mapping is discovered it can give rise to a transfer of knowledge between the two domains.
The basic idea in our approach is to view analogy-making as a “generalization process”: the association of corresponding ‘conceptual entities’ in the input domains gives
1 Based on [1, section 5.3].
rise to a generalization capturing the commonalities. In other words, the two given input
domains, source and target, have a common core that is given by specific instantiations
for the generalization as depicted in figure 1. We assume that a source domain S and
a target domain T are given using a logical formalism, i.e. within a common institution I. The signatures for these domains are denoted by ΣS and ΣT respectively. We
will further allow the use of common symbols from a background signature Σ. To spell
out the above-mentioned scheme of analogy by generalization, we introduce a further
signature ΣG that provides the generalized symbols:
Definition 2. Given two signatures ΣS and ΣT over a common signature Σ, a generalization ג is defined to be a triple ⟨ΣG, σ, τ⟩, consisting of a signature ΣG and general Σ-substitutions σ : ΣG → ΣS and τ : ΣG → ΣT (all over the common signature Σ).
As a Σ-substitution is defined as a pair of mappings on the sentence and the model level, every generalization gives rise, on the syntactic level, to mappings Sen(σ) : Sen(ΣG) → Sen(ΣS) and Sen(τ) : Sen(ΣG) → Sen(ΣT), and, on the semantic level, to functors Mod(σ) : Mod(ΣS) → Mod(ΣG) and Mod(τ) : Mod(ΣT) → Mod(ΣG), all compatible with the mappings induced by the common signature Σ.
Furthermore, for every ΣG -sentence ρ, every ΣS -model mS and every ΣT -model mT ,
the following satisfiability conditions hold:
mS |=ΣS Sen(σ)(ρ)   iff   Mod(σ)(mS) |=ΣG ρ,   and   mT |=ΣT Sen(τ)(ρ)   iff   Mod(τ)(mT) |=ΣG ρ.
In this setting, we can introduce an analogical relation on the level of sentences as well
as on the level of models.
Definition 3. An analogy ℵ is defined as a pair ⟨∼ℵSen, ∼ℵMod⟩ of relations such that for every pair of sentences s ∈ Sen(ΣS), t ∈ Sen(ΣT), and every pair of models mS ∈ Mod(ΣS), mT ∈ Mod(ΣT) with s ∼ℵSen t and mS ∼ℵMod mT it holds that

mS |= s   iff   mT |= t.
A direct consequence of this definition is

Fact 1. Every generalization ג gives rise to an analogy ℵ in the following way:
– On the level of sentences, for every s ∈ Sen(ΣS) and t ∈ Sen(ΣT) the relation s ∼ℵSen t holds iff there exists a g ∈ Sen(ΣG) such that Sen(σ)(g) = s and Sen(τ)(g) = t.
– On the semantic level, for every mS ∈ Mod(ΣS) and mT ∈ Mod(ΣT) the relation mS ∼ℵMod mT holds iff Mod(σ)(mS) = Mod(τ)(mT).
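To illustrate Fact 1 with a familiar (and here purely illustrative) example, consider a Rutherford-style analogy: let ΣG contain a relation symbol revolves_around and generalized constants a and b, with Sen(σ) sending a, b to planet, sun ∈ ΣS and Sen(τ) sending them to electron, nucleus ∈ ΣT. For g = revolves_around(a, b) we obtain Sen(σ)(g) = revolves_around(planet, sun) and Sen(τ)(g) = revolves_around(electron, nucleus), so these two sentences are analogically related via ∼ℵSen.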
Based on these notions, we now consider the domain theories, i.e. sets of sentences
ThS ⊆ Sen(ΣS ) and ThT ⊆ Sen(ΣT ), used to model the source and target domain
respectively. For a given generalization ג we say that a set of ΣG-sentences Th is a ג-generalization of ThS and ThT , if Sen(σ)(Th) ⊆ ThS and Sen(τ)(Th) ⊆ ThT.
Fact 2. For every generalization ג and every pair of theories ThS, ThT there exists a maximal (with respect to set inclusion) ג-generalization ThG.
We call that maximal ThG the ℵ-generalization of ThS and ThT for the analogy ℵ induced by ג. The sentences Sen(σ)(ThG) and Sen(τ)(ThG) are the parts of the domains that are covered by the analogy. Coverage comprises the idea of a “degree of
association” of sub-theories between theories ThS and ThT and plays an important role
in current approaches for analogy-making.
Fact 3. For every sentence s ∈ Sen(ΣS) that is covered by an analogy ℵ, there exists a sentence t ∈ Sen(ΣT) such that s ∼ℵSen t, and vice versa.
4 An Example
Section 3 proposes a general framework applicable to arbitrary analogy-making approaches which are based on a broad range of underlying logical systems. As long
as common expressions in the source and target domain are associated via the analogy ℵ, a generalization for both domains is computed, and source and target can be
recovered from the generalization using general Σ-substitutions, an analogy ℵ can be
formally described on the syntactic and semantic level. Heuristic-Driven Theory Projection (HDTP) [3] is an example of an analogy engine that instantiates this general
formal framework described in Section 3 specifically for first-order logic (FOL).
HDTP provides an explicit generalization of two domains specified as theories in
(many-sorted) FOL as a by-product of establishing an analogy.2 HDTP proceeds in
two phases: in the mapping phase, the source and target domains are compared to find
structural commonalities, and a generalized description is created, which subsumes the
matching parts of both domains. In the transfer phase, unmatched knowledge in the
source domain can be mapped to the target domain to establish new hypotheses. The
overall idea of establishing an analogy between two domains in the HDTP framework
is nicely covered in Figure 1.
For our current purposes, we will only consider the mapping mechanism in more
detail. The mapping is achieved via a generalization process, in which pairs of formulas from the source and target domain are anti-unified resulting in a generalized theory
that reflects common aspects of the two domains. Formulas that are generalized to the
same element in the generalized theory are considered to be analogically related. The
generalized theory can be projected into the original domains by substitutions which
are computed during anti-unification. We say that a domain formula is covered by the
analogy, if it is within the image of this projection, otherwise it is uncovered. In analogy making, the analogical relation is used in the transfer phase to translate additional
uncovered knowledge from the source to the target domain.
2
To improve readability we omit the sortal specifications of terms in this paper.
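To give a flavour of the mapping phase, the following minimal sketch computes a least general generalization of two first-order terms. It covers only the first-order core of the process described above (generalization variables of arity 0), and the domain terms and variable names are illustrative choices of ours, not taken from HDTP.

```python
from itertools import count

def lgg(s, t, pairs=None, fresh=None):
    """Least general generalization (anti-unification) of two first-order terms.
    A term is either a string (constant) or a tuple ("functor", arg1, ..., argN)."""
    if pairs is None:
        pairs, fresh = {}, count()
    if s == t:
        # identical subterms are kept as they are
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # same functor and arity: generalize argument-wise
        return (s[0],) + tuple(lgg(a, b, pairs, fresh) for a, b in zip(s[1:], t[1:]))
    # mismatch: introduce one generalization variable per distinct (source, target)
    # pair, so that the substitutions recovering s and t stay consistent
    if (s, t) not in pairs:
        pairs[(s, t)] = "X" + str(next(fresh))
    return pairs[(s, t)]

source = ("revolves_around", "planet", "sun")
target = ("revolves_around", "electron", "nucleus")
print(lgg(source, target))   # ('revolves_around', 'X0', 'X1')
```

The dictionary of mismatched pairs records the two substitutions that project the generalized term back onto the source and the target, which is exactly the information used to decide which formulas are covered by the analogy.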
Technically, HDTP is based on restricted higher-order anti-unification [3], which is defined on four basic types of substitutions.³ In order to allow a mild form of higher-order anti-unification, we extend classical FOL terms by introducing variables that can take arguments: for every natural number n we assume an infinite set Vn of variables with arity n. Here we explicitly allow the case n = 0, with V0 being the set of FOL variables. In this setting, a term is either a first-order or a higher-order term, i.e. an expression of the form F(t1, . . . , tn) with F ∈ Vn and terms t1, . . . , tn.
Definition 4. We define the following set of basic substitutions:

1. A renaming ρ^{F,F′} replaces a variable F ∈ Vn by another variable F′ ∈ Vn of the same argument structure:
   F(t1, . . . , tn) → F′(t1, . . . , tn).

2. A fixation φ^F_f replaces a variable F ∈ Vn by a function symbol f of the same argument structure:
   F(t1, . . . , tn) → f(t1, . . . , tn).

3. An argument insertion ι^{F,F′}_{G,i} with 0 ≤ i ≤ n, F ∈ Vn, G ∈ Vk with k ≤ n − i, and F′ ∈ V_{n−k+1} is defined by
   F(t1, . . . , tn) → F′(t1, . . . , ti, G(t_{i+1}, . . . , t_{i+k}), t_{i+k+1}, . . . , tn).

4. A permutation π^{F,F′}_α with F, F′ ∈ Vn and bijective α : {1, . . . , n} → {1, . . . , n} rearranges the arguments of a term:
   F(t1, . . . , tn) → F′(t_{α(1)}, . . . , t_{α(n)}).
For generalizing complex terms, we can successively apply several substitutions. To obtain a non-ambiguous set of substitutions we apply the basic substitutions in the order renaming, argument insertion, permutation, and finally fixation. We will call any composition of basic substitutions a (higher-order) substitution and write t → t′ if there exists a sequence of basic substitutions that transforms t into t′. We will call t′ a (higher-order) instance of t, and t a (higher-order) anti-instance of t′.
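As a rough illustration of Definition 4 (not HDTP code), the following sketch applies the four basic substitutions to terms represented as nested tuples; the term and the symbol names are made up for the example.

```python
# Terms are tuples ("Head", t1, ..., tn); the head may be a variable or a symbol.

def rename(term, old, new):
    """Renaming rho^{F,F'}: replace the head variable F by F' of the same arity."""
    head, *args = term
    return (new if head == old else head, *args)

def fixate(term, var, func):
    """Fixation phi^F_f: replace the head variable F by a function symbol f."""
    return rename(term, var, func)

def insert_argument(term, new_head, inner_head, i, k):
    """Argument insertion iota^{F,F'}_{G,i}: wrap k arguments starting at
    position i into a new subterm G(...), reducing the outer arity."""
    head, *args = term
    wrapped = (inner_head, *args[i:i + k])
    return (new_head, *args[:i], wrapped, *args[i + k:])

def permute(term, new_head, alpha):
    """Permutation pi^{F,F'}_alpha: rearrange the arguments according to alpha."""
    head, *args = term
    return (new_head, *(args[alpha[j]] for j in range(len(args))))

t = ("F", "a", "b", "c")
print(rename(t, "F", "F1"))                  # ('F1', 'a', 'b', 'c')
print(fixate(t, "F", "f"))                   # ('f', 'a', 'b', 'c')
print(insert_argument(t, "F2", "G", 1, 2))   # ('F2', 'a', ('G', 'b', 'c'))
print(permute(t, "F3", {0: 2, 1: 0, 2: 1}))  # ('F3', 'c', 'a', 'b')
```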
HDTP can now be interpreted using the concepts described in Section 3. The underlying institution I is the institution FOL, i.e. the category of signatures Sign is the
category of logical first-order signatures, the collection of Σ-sentences is the class of
all first-order formulas, and the collection Mod(Σ) of all Σ-models is the collection
of all possible interpretations of symbols from Σ.
The basic substitutions from Definition 4 give rise to mappings on the syntactic and the semantic level.⁴ On the syntactic level, a basic substitution replaces function and predicate symbols in the way described above, inducing a function on the set of FOL formulas. For example, given a FOL signature Σ = ⟨Φ, Π⟩ with function symbols Φ and predicate symbols Π, a renaming ρ^{F,F′} induces a function

Sen(ρ^{F,F′}) : Sen(⟨Φ ∪ {F}, Π⟩) → Sen(⟨Φ ∪ {F′}, Π⟩).

On the semantic level, we get a mapping in the opposite direction: given a model for ⟨Φ ∪ {F′}, Π⟩, we can interpret ⟨Φ ∪ {F}, Π⟩-formulas by first translating them via ρ^{F,F′} and then using the given model. Hence ρ^{F,F′} induces a mapping of models

Mod(ρ^{F,F′}) : Mod(⟨Φ ∪ {F′}, Π⟩) → Mod(⟨Φ ∪ {F}, Π⟩).

³ Restricted higher-order anti-unification resembles to a certain extent strategies proposed in the context of ontology repair plans [4].
⁴ In fact, the basic substitutions are special cases of second-order substitutions in first-order logic, which can also be described as derived signature morphisms, cf. [1, p. 101].
It is easy to see that the satisfiability condition is fulfilled:

m |=⟨Φ∪{F′},Π⟩ Sen(ρ^{F,F′})(s)   iff   Mod(ρ^{F,F′})(m) |=⟨Φ∪{F},Π⟩ s
The same holds for the other basic substitutions, as can easily be verified. Hence every HDTP substitution t → t′, as a composition of basic substitutions resulting in an anti-instance t of t′, gives rise to a general substitution in the sense of Definition 1. Therefore the operations HDTP performs in computing an analogical relation between a given source and target domain fit into the framework presented in Section 3. Furthermore, a syntactic and semantic interpretation of the HDTP computation is provided by the presented approach. This allows us to adopt the notions of analogical relation and coverage introduced there.
5 Conclusion
Establishing an analogy between two input domains by an abstraction process not only agrees with how humans employ their cognitive capabilities, but can also be given a sensible interpretation on the semantic level and be implemented in logic-based AI systems. As the paper shows, the generalized theory and the analogical relation (established on a purely syntactic basis) can be connected to model-theoretic relations on the semantic level in a coherent manner. We think that the analysis of the model-theoretic semantics of analogies will be helpful for developing and improving computational models for (or based on) analogical reasoning. A better understanding of this type of representation can help in modeling certain types of creativity as well as in understanding and explaining a broad range of cognitive capacities of humans, as recently discussed in e.g. [5].
On the other hand, several interesting open questions remain. First, it seems straightforward to apply our approach to a logic-based system like Heuristic-Driven Theory Projection [3], but the framework should also be applied to other symbolic analogy models, such as, for example, SME [6]. Second, several approaches in the fields of theorem proving and ontology repair include substitutions resulting in a change of the underlying signature of the respective theory [4]. Last but not least, the present paper sketches only the specification of the syntax and semantics of certain non-classical forms of reasoning in a FOL setting. A thorough examination of extensions of reasoning with respect to alternative logical systems (like higher-order logic, description logic, modal logic, equational logic) remains a topic for the future.
References
1. Diaconescu, R.: Institution-independent Model Theory. Studies in Universal Logic. Birkhäuser, Basel (2008)
2. Goguen, J.A., Burstall, R.M.: Institutions: abstract model theory for specification and programming. Journal of the ACM 39(1) (January 1992) 95–146
3. Schwering, A., Krumnack, U., Kühnberger, K.U., Gust, H.: Syntactic principles of Heuristic-Driven Theory Projection. Journal of Cognitive Systems Research 10(3) (2009) 251–269 (Special Issue on Analogies - Integrating Cognitive Abilities)
4. McNeill, F., Bundy, A.: Dynamic, Automatic, First-Order Ontology Repair by Diagnosis of
Failed Plan Execution. International Journal of Semantic Web and Information Systems 3
(2007) 1–35
5. Abdel-Fattah, A., Besold, T., Kühnberger, K.U.: Creativity, cognitive mechanisms, and logic.
In Bach, J., Goertzel, B., Iklé, M., eds.: Proc. of the 5th Conference on AGI, Oxford. Volume
7716 of Lecture Notes in Computer Science. (2012) 1–10
6. Falkenhainer, B., Forbus, K., Gentner, D.: The Structure-Mapping Engine: Algorithm and
Example. Artificial Intelligence 41 (1989) 1–63
Authors:
Ulf Krumnack
Ahmed Abdel-Fattah
Kai-Uwe Kühnberger
Artificial Intelligence Research Group,
Institute of Cognitive Science, University of Osnabrück,
Albrechtstr. 28, 49076 Osnabrück,
Germany.
Formal Models in AGI Research
Pei Wang
Temple University, Philadelphia PA 19122, USA
http://www.cis.temple.edu/∼pwang/
Abstract. Formal models are necessary for AGI systems, though this does
not mean that any formal model is suitable. This position paper argues
that the dominant formal models in the field, namely logical models
and computational models, can be misleading. What AGI really needs
are formal models that are based on realistic assumptions about the capacity
of the system and the nature of its working environment.
1 The Power and Limit of Formal Models
The need for formal models for AGI research is not a novel topic. For example,
AGI-09 had a workshop titled “Toward a Serious Computational Science of Intelligence” [1]. In [2], I proposed the opinion that a complete A(G)I work should
consist of (1) a theory of intelligence, expressed in a natural language, (2) a formal model of the theory, expressed in a symbolic language, and (3) a computer
implementation of the model, expressed in a programming language. Though
the necessity of (1) and (3) is obvious, there are many AGI projects
without a clearly specified formal model. Such projects are often described and
carried out according to the common practice of software engineering.
If an AGI system is eventually built as a computer system with software
and hardware, why bother to have a formal model as an intermediate step between the conceptual design and the physical implementation? As I argued in
[3], formalization improves a theoretical model by disambiguating (though not
completely) its notions and statements. In particular for AGI, a formal model
tends to be domain independent, with its notions applicable to various domains
by giving the symbols different interpretations. Though it is possible to skip
formalization, such a practice often mixes the conceptual issues and the implementational issues, thus increasing the complexity of a system’s design and
development.
However, to overemphasize the importance of formalization for AGI may
lead to the other extreme, that is, to evaluate a formal model for its own sake,
without considering its empirical justification as a model of intelligence, or its
feasibility of being implemented in a computer system. Though the rigor and
elegance of a model are highly desired, they are still secondary when compared
with the correctness and applicability of the fundamental assumptions of the
model. A mathematical theory may have many nice properties and may solve
many practical problems in various fields, but this does not necessarily mean
that it will be equally useful for AGI. Actually it is my conjecture that a major
reason for the lack of rapid progress in this field is the dominance of the wrong
formal models, in particular, those based on mathematical logic, the theory of
computation, and probability theory. In this paper, I summarize my arguments
against certain logical models and computational models in AGI.
2 Logical Models and AGI
As I argued in [4, 2], mathematical logic was established to provide a logical
foundation for mathematics, by formalizing the valid inference patterns in theorem proving. However, “theorem proving” is very different from commonsense
reasoning, and this conclusion has been reached by many logicians and AI researchers. Consequently, various non-classical logics and reasoning models have
been proposed, by revising or extending traditional mathematical logic [5, 6].
Even so, the following fundamental assumptions in classical logic are still often
taken for granted:
Correspondence theory of truth: The truth-value of a statement indicates
the extent to which the statement corresponds to an objective fact.
Validity as truth-preserving: An inference rule is valid if and only if it derives true conclusions from true premises.
My own AGI project NARS is a reasoning system that rejects both of the above
assumptions. Instead, they are replaced by two new assumptions:
Empirical theory of truth: The truth-value of a statement indicates the extent to which the statement agrees with the system’s experience.
Validity as evidence-preserving: An inference rule is valid if and only if its
conclusion is supported by the evidence provided by its premises.
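To make the contrast with the classical assumptions concrete, here is a small sketch of an experience-grounded truth-value: a statement accumulates positive and total evidence, and frequency and confidence are derived from these amounts along the lines of the NARS literature [7, 2, 8]. The class layout and the evidential horizon k below are illustrative assumptions for the example, not the system's actual code.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    positive: float = 0.0   # amount of positive evidence collected so far
    total: float = 0.0      # total amount of evidence (positive + negative)

    def add_evidence(self, supports: bool, amount: float = 1.0) -> None:
        """New experience revises the belief instead of falsifying it outright."""
        self.total += amount
        if supports:
            self.positive += amount

    def frequency(self) -> float:
        """Extent to which experience so far agrees with the statement."""
        return self.positive / self.total if self.total > 0 else 0.5

    def confidence(self, k: float = 1.0) -> float:
        """How stable the frequency is against future evidence (horizon k)."""
        return self.total / (self.total + k)

b = Belief()
for outcome in [True, True, False, True]:
    b.add_evidence(outcome)
print(round(b.frequency(), 2), round(b.confidence(), 2))   # 0.75 0.8
```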
NARS is designed on the basis of the above assumptions, together with the assumption that an intelligent system should be adaptive and able to work with insufficient knowledge and resources, and it implements a formal logic [7, 2, 8]. NARS fundamentally differs from mathematical logic, since it is designed to work in realistic situations, while the latter is made for idealized situations.
NARS consistently handles many issues addressed in non-classical logics:
Uncertainty: NARS represents several types of uncertainty, including randomness, fuzziness, ignorance, inconsistency, etc., altogether as the effects of various forms of negative or future evidence.
Ampliativity: Besides deduction, NARS also carries out various types of non-deductive inference, such as induction, abduction, analogy, and other types of inference that produce "ampliative" conclusions.
Openness: NARS is always open to new evidence, which may challenge the
previous beliefs of the system, and therefore lead to belief revisions and
conceptual changes.
Relevance: The inference rules not only demand truth-value relationships between the premises and the conclusions, but also semantic relationships, that
is, their contents must be related.
Both in logic and in AI, the above issues are usually addressed separately,
and a new logic is typically built by extending or revising a single aspect of
classical logic, while leaving the other aspects unchanged [6, 9, 10]. NARS takes
a different approach, by treating the issues as coming from a common root, that
is, the assumption on the insufficiency of knowledge and resources [4, 2].
3 Computational Models and AGI
Since an AGI will eventually be implemented in a computer system, it is often
taken for granted that all processes in the system should be designed and analyzed according to the theory of computation. Concretely, it means the problem
the system needs to solve will be defined as a computation, and its solution as
an algorithm that can be implemented in a computer [11].
I have argued previously that such a conceptual framework is not suitable
for AI at the problem-solving level [7, 2]. Like mathematical logic, the theory of
computation also came from the study of problem solving in mathematics, where
the “problems” are abstracted from their empirical originals, and the “solutions”
are expected to be conclusively correct (i.e., cannot be refuted or revised later),
context independent (i.e., having nothing to do with where and when the problem
appears), and expense irrelevant (i.e., having nothing to do with how much time
has been spent on producing it).
Therefore, problem solving in mathematics can be considered as “time-free”
and repeatable. When dealing with abstract problems, such an attitude is justifiable and even preferable: mathematical solutions should be universally applicable in different places and at different times.
However, the problem-solving processes in intelligence and cognition are different, where neither the problems nor the solutions are time-free or accurately
repeatable. In practical situations, most problems are directly or indirectly related to predictions of future events, and therefore have time requirements attached. In other words, "solving time" is part of the problem, and a "solution"
coming too late will not qualify as a solution at all. On the other hand, the
solutions for a problem usually depend on the system’s history and the current
context. This dependency comes from the adaptive nature of the system and
the real-time requirement. By definition, in an adaptive system the occurrences
of the same problem get different solutions, which are supposed to be better
and better in quality. Therefore, each occurrence of a problem is unique, if the
system’s state is taken into consideration as a factor. For an adaptive system,
usually in its lifetime its internal states never repeat, and nor does its external
environment. Consequently, its problem-solving processes cannot be accurately
repeatable.
It is still possible to focus on the relatively stable aspects of an intelligent system, so as to specify its stimulus–response relationship as a function, in the sense that the same (immediate) input always leads to the same (immediate) output. However, such a treatment excludes some of the prominent features of intelligence, such as its adaptivity, originality, and flexibility.
For instance, from the very beginning of AI, “learning” has been recognized
by many researchers as a central aspect of intelligence. However, the mainstream
“machine learning” research has been carried out in the framework of computation:
The objective of learning is to get a function. At the object level, though during the learning process the same problem instance gets multiple solutions with improving quality, it is usually expected that the problem–solution relation will eventually converge to a function.
The learning process follows an algorithm. At the meta-level, each type
of learning is usually defined as a computation, and follows an algorithm,
with the training data as input, and the learned function as output.
The human learning process does not fit into this framework, because it is usually
open-ended, and does not necessarily converge into a stable function that maps
problems into solutions. Even after intensive training in a certain domain, an
expert can still keep the flexibility and adaptivity when solving problems. Also,
human learning processes do not follow fixed procedures, because such processes
are usually not accurately predictable or repeatable.
The above conclusions do not mean that intelligence has nothing to do with
computation. At a certain level of description, the activities of an intelligent
system can be analyzed as consisting of many "basic steps", each of which is
repeatable and can be specified as a computation following a fixed algorithm. It
is just that a problem-solving process typically consists of many such steps, and
its composition depends on many factors that are usually not repeated during
the system’s life cycle.
Such a formal model is provided in NARS [7, 2, 8]. As a reasoning system,
NARS solves each problem by a sequence of inference steps, where each step
is a simple computation, but since the steps are linked together at run time
according to many ever-changing factors, the problem-solving process as a whole is not repeatable and therefore cannot be considered a computation.
Here I want to argue that “intelligence” should be formalized differently from
“computation”. As far as time is concerned, the differences are:
The time-dependency of problems. In computation, a problem-solving process can take an arbitrarily long time, as long as it is finite. In intelligence, a problem-solving process is always under time pressure, though the time
requirement is not necessarily represented as a hard deadline. In general, the
utility value of a solution decreases over time, and a solution may lose most
of its utility if it is found too late. When time-pressure changes, the problem
is also more or less changed.
The time-dependency of solution. In computation, the correctness of a solution has nothing to do with when the problem appears, but in intelligence,
it does. Whether a solution is reasonable should be judged not only according to the problem, but also the available knowledge and resources at the
moment. A reasonable solution obtained by a student in an emergency may
not be reasonable when provided by an expert after a long deliberation.
To implement such a model, NARS is designed in the following way:
– Each inference task has a priority-value attached to reflect the (relatively defined) urgency for it to be processed. Similarly, each belief and concept in the
system has a priority-value attached to reflect its importance at the moment.
All these values are adjusted by the system according to its experience.
– The selection of inference rules is data-driven, decided by the task and belief
winning the system’s attention at the moment. Since the selection of task
and belief is context-sensitive, so is the overall inference process.
In this way, it is possible to implement a non-computable process using computable steps [2].
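A minimal sketch of this control regime (our illustration, not the NARS implementation) looks as follows: tasks carry priorities, each cycle pops the most urgent task, lets the most relevant belief act on it, re-inserts the derived task with a decayed priority, and an anytime loop reports the best answer available when the time budget runs out. The data structures and the decay factor are assumptions made only for the example.

```python
import heapq, itertools, time

class Reasoner:
    """Each cycle is a small, repeatable computation; which cycle runs next is
    decided by ever-changing priority values, so the overall process is not a
    fixed algorithm applied to the problem."""

    def __init__(self):
        self._tiebreak = itertools.count()
        self._tasks = []                       # max-heap via negated priority

    def add_task(self, task, priority):
        heapq.heappush(self._tasks, (-priority, next(self._tiebreak), task))

    def step(self, beliefs):
        """Pop the most urgent task, let the most relevant belief act on it,
        and re-insert the derived task with a decayed priority."""
        if not self._tasks:
            return None
        neg_p, _, task = heapq.heappop(self._tasks)
        belief = max(beliefs, key=lambda b: b["relevance"](task))
        derived = belief["apply"](task)
        if derived is not None:
            self.add_task(derived, priority=-neg_p * 0.9)
        return derived

    def solve(self, beliefs, time_budget):
        """Anytime loop: report the best answer available when time runs out."""
        deadline, best = time.monotonic() + time_budget, None
        while time.monotonic() < deadline and self._tasks:
            best = self.step(beliefs) or best
        return best

beliefs = [
    {"relevance": lambda t: 1.0 if "birds" in t else 0.0,
     "apply": lambda t: t.replace("birds", "animals")},
    {"relevance": lambda t: 0.1, "apply": lambda t: None},
]
r = Reasoner()
r.add_task("ravens are birds", priority=0.8)
print(r.solve(beliefs, time_budget=0.01))      # 'ravens are animals'
```

Run twice with a different time budget or a different belief population, the same task can yield a different answer, which is exactly the context- and resource-dependence described above.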
4 Summary
AGI research needs formal models of intelligence and cognition. Since all the
existing models were designed for other purposes, they should not be directly
applied without fundamental revision. Here the key issue is not in the specific
features of a model, but in the basic assumptions behind it. More effort should be put into developing new formal models that satisfy the requirements
of AGI, though the task is difficult and the result will not be perfect soon.
References
1. Bringsjord, S., Sundar G, N.: Toward a serious computational science of intelligence
(2009) Call for Papers for an AGI 2010 Workshop.
2. Wang, P.: Rigid Flexibility: The Logic of Intelligence. Springer, Dordrecht (2006)
3. Wang, P.: Theories of artificial intelligence – meta-theoretical considerations. In
Wang, P., Goertzel, B., eds.: Theoretical Foundations of Artificial General Intelligence. Atlantis Press, Paris (2012) 305–323
4. Wang, P.: Cognitive logic versus mathematical logic. In: Lecture notes of the
Third International Seminar on Logic and Cognition, Guangzhou (2004) Full text
available online.
5. McCarthy, J.: Artificial intelligence, logic and formalizing common sense. In
Thomason, R.H., ed.: Philosophical Logic and Artificial Intelligence. Kluwer, Dordrecht (1989) 161–190
6. Haack, S.: Deviant Logic, Fuzzy Logic: Beyond the Formalism. University of Chicago Press, Chicago (1996)
7. Wang, P.: Non-Axiomatic Reasoning System: Exploring the Essence of Intelligence.
PhD thesis, Indiana University (1995)
8. Wang, P.: Non-Axiomatic Logic: A Model of Intelligent Reasoning. World Scientific, Singapore (2013).
9. Gabbay, D.M.: Logic for Artificial Intelligence and Information Technology. College
Publications, London (2007)
10. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. 3rd edn. Prentice Hall, Upper Saddle River, New Jersey (2010)
11. Marr, D.: Vision: A Computational Investigation into the Human Representation
and Processing of Visual Information. W. H. Freeman & Co., San Francisco (1982)
!"# $%&'()*# +,&-./# -0# 1&*)2(3(2/# -0# 45"'2(-"6)"# 7"2&.%.&2)2(-"# 2-# 82,&# 4)(*5.&#
-0#9:5(3)*&"2#$5;<2(252(-"#=.("'(%*&>?#
#$%&'&()!*%(!
!
@;<2.)'2A#+,!-./!0/'%-$1$-,!&2!23(4-$&( 56/4$%'!7./&0,!&2!8/'%-$1$-,!&2!93(4-$&(!:5789;! ! $<!&(',!
3</23'!2&0!-./!-03-.=23(4-$&(!2&0>/?!@,!23(4-$&(!@$(%0,!4&((/4-$1/<!$(!4'%<<$4!60&6&<$-$&(%'!'&)$4!
/A3$1%'/(-',! -0%(<2&0>/?! &(/! -&! &(/! @,! -B&=1%'3/?! (&(=-03-.=23(4-$&(! 2&0>/?! @,! (&(=23(4-$&(!
3(%0,! &6/0%-&0<C! D&0/&1/0E! @,! -./! 5789E! F-./! 2%$'30/! &2! -./! /A3$1%'/(-! <3@<-$-3-$&(! 60$(4$6'/G!
B.$4.!%66/%0<!$(!6.$'&<&6.$4%'!'&)$4E!%(%',-$4!6.$'&<&6., %(?!$(2&0>%-$&(!<4$/(4/!-30(<!-&!@/!%!
(%-30/!&2!4'%<<$4%'!'&)$4C! !
B&/#C-.D<E!HA3$1%'/(4/!53@<-$-3-$&(E!I&(=703-.=J%'3/?=93(4-$&(E!703-.=93(4-$&(E!5789C!
!
1. Introduction

Modern mathematical logic has become a common theoretical foundation of information science and modern analytic philosophy. However, the classical part of modern logic does not seem to deal very well with "the failure of the equivalent substitution principle", which often appears in natural language, traditional philosophy, quantum theory and computer languages. For example:

"Although I get a high salary, I am not very rich. I pay a tax and donate much money. It is obligatory that I pay a tax. It is not obligatory that I donate much money."

The logical semantics of the last few sentences are: "I pay a tax" is true; "I donate much money" is true; "It is obligatory that I pay a tax" is true; "It is obligatory that I donate much money" is false. Therefore there is a formal problem Q: two complex sentences built with the same connective, whose directly corresponding clauses have the same truth-values, nevertheless have different truth-values.

Problem Q long puzzled Frege, the founder of modern logic. Frege and his followers thought that intensional logic had to be used to solve it. Modern modal logic, as one of the most typical and mature kinds of intensional logic among the non-classical logics, not only has a large and complex family of systems in its foundations, but is also widely applied in computer science, modern analytic philosophy, the interpretation of quantum mechanics and even many social sciences and humanities.

Quine's argument [4], like the problem above, questioned the validity of early modern modal logic, but it stimulated the development of systematic theories of modal logic, such as the addition of the necessitation rule to the syntax and the invention of possible world semantics. However, possible world semantics does not make every modal formula correspond one by one to a first-order formula, and vice versa [1]. Moreover, some generalized modal logics, such as deontic logic, produce logical paradoxes which are difficult to solve. Many scholars in information science and artificial intelligence worry about the disunity between modal logic and classical logic. First of all, "necessarily" in modal logic is a non-truth-functional operator, whereas classical logic is a truth-functional logic only.
This article follows a top-down approach: starting from the universality of classical propositional logic and its close fit with natural language, it looks for a uniform method or comprehensive theory that relates non-classical logic and classical logic. Since classical propositional logic is the most mature and the simplest modern logic, and the modal operator is a unary operator, we consider the relations between the truth-functional connectives of classical propositional logic and the two-valued non-truth-functional operators of modal propositional logic. The theory about this relationship is the Special Theory of Relativity of Function (STRF).

It is of course not realistic to completely unify the nature of modal logic and of classical logic in a single article, but starting from "the failure of the equivalent substitution principle" we can further clarify that two-valued modal propositional logic is best regarded as an idiom for a "fragment" of classical propositional logic.
2. Overview of STRF

A few papers are dedicated to systematically investigating the nature of non-truth-functions or of non-truth-functional connectives [5, 7, 2]. Even though they study in detail the differences between non-truth-functions and truth-functions, or between non-truth-functional and truth-functional connectives [3, 6, 8], none of them recognizes the equivalent transformation between the two, still less gives such a transformation a tight and clear definition.

2.1 Non-Strictly Defined Non-Truth-Function

A function is a corresponding relation between two variables (or two sets, or two formulas). For a variable x and a dependent variable y, both over their value domains, if every value of x corresponds to one and only one value of y, then y is a function of x. Conversely, when y is not a function of x, y is at least a function of ¬y; that is the relativity of non-function. A truth-function is a function in which the domains of the variables and of the dependent variable are all the truth-value domain only. A non-truth-function is not such a function; it covers both non-truth-valued functions and truth-valued non-functions. That is to say, the non-truth-function we study is actually the two-truth-valued non-function; however, to stay in line with the philosophical-logic literature, we still call it a non-truth-valued function.

Generally, any two variables p and p1 that are independent of each other are two basic truth-functions. However, when p is true, p1 does not correspond to a single truth-value, either true or false, but to the two truth-values true and false. So p1 is not a truth-function of p. This is not to say that when p is true then p1 is both true and false at the same time, but that p and p1 stand in a non-truth-functional relation which holds both when p is true and p1 is true and when p is true and p1 is false. The situation is much like the general mathematical relation x = y² between two variables: the relation x = y² holds both when x = 1 and y = 1 and when x = 1 and y = -1. Of course, when we say that p1 is a non-truth-function of p, we only use the formal apparatus of modern mathematical logic; the claim is independent of what the exact meaning of "true" is.
2.2 A Formal Definition of Non-Truth-Function

Like modal logic, STRF only adds one symbol to the language of the classical propositional logic system CP; but unlike modal logic, the added symbol can be entirely defined by the symbols already in CP. Therefore the new system SCP is actually still CP.

In CP the basic symbols are:
1-1 the propositional variables p, p1, p2, ..., pn, ..., n ∈ N;
1-2 the classical connectives;
1-3 the auxiliary symbols ( , ).

In SCP the basic symbol added is:
1-4 H.

Definition 1. Basic unary operator H:
Hp =def p1.    (1)

Corollary 1-1. The other 15 unary operators can be recursively defined from H together with the connectives of CP, as listed in Table 1.
7%@'/!"M!! 703-.!7%@'/!&2!+%<$4!](%0,!^6/0%-&0<!
6!
6 "!
[ "! [ e! [ _! [ X! [ g! [ d! [ a! [ h!
[ p!
[! [! [
"`!
""!
"e!
[
"_!
[
"X!
[
"g!
[
"d!
"!
"!
"!
"!
"!
"!
`!
"!
"!
"!
`!
`!
`!
"!
`!
`!
`!
`!
"!
`!
"!
"!
"!
`!
"!
"!
`!
`!
"!
"!
`!
`!
"!
`!
`!
`!
`!
"!
"!
"!
`!
"!
"!
`!
"!
`!
"!
`!
"!
`!
`!
"!
`!
`!
`!
`!
"!
`!
"!
"!
"!
`!
`!
"!
`!
"!
"!
`!
`!
`!
"!
`!
6 6"k! ! 6
k! Ka6
K! k!
K6! 6!
d
6!
6
6
6!
K!
6!
6
6
! 6
!
K!
6!
6
!
K!
6!
K!
6!
6
6! K! 6!
! 6!
K!
6
!
K!
6!
K!
6!
6
\!
6
K!
6!
6
!
K!
6!
K!
6!
6!
K! K! K!
K!
K!
K!
K!
"`
""
"e
"_
"X
"g
"d
6!
6!
6!
6!
6! ! 6!
6!
K!
6!
6
K!
K!
6 ! 6 !
!
!
K"
6!
Ke
6!
K_
6!
KX
6!
!
M-.-**)./FJGE!KK K6k?/2!6(E!(
!!
(!
!
!
Kg
6!
Kd
6!
Ka
6!
Kh
6!
Kp
6!
nIoC! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
Corollary 1-3. The set of six symbols consisting of H, p, two classical connectives and the parentheses ( , ) can define any proposition and any formula of SCP, since every variable can be defined by H and p, and any formula of CP is formed from variables, connectives and parentheses.

For convenience, several auxiliary symbols are used in SCP: subscripted operators H1p, H2p, ... stand for general non-truth-functions of p, each of which can be equivalently transformed into a composed truth-valued function, and vice versa; H1A, H1B, H2A, H2B, ... stand for general non-truth-functions of arbitrary formulas, each of which can be equivalently transformed both into a composed truth-valued function and into a general non-truth-function of p (for example, HiA = di(A, A1) and HiB = di(B, B1)); finally, a symbol is reserved for a non-truth-function of p that is equivalent to a truth-function arbitrarily selected from the set of all truth-functions of CP, and likewise for an arbitrary general non-truth-function of a formula A in SCP. We regard the truth-valued functions as a special kind of non-truth-valued functions.

Definition 2. Semantic representation.
STRF keeps the semantics of CP unchanged; only the rows of a truth table are equivalently rewritten as a single-row truth form, in which "|" represents the first dichotomy between true and not true (false), "||" represents the second dichotomy, and so on.

For example, if the truth-value of p is assigned to be "1|0", then the truth-value of Hp is assigned to be "1||0|1||0", and the truth-value of HHp is assigned to be "1|||0||1|||0|1|||0||1|||0".

Corollary 2-1. Negation. The negation of any formula is the negation of each separate truth-value. For example, if the semantic formula of A is 0|1, then the negation of A is ¬(0|1) = ¬0|¬1 = 1|0.

Corollary 2-2. Disjunction. For any two formulas A and B, A∨B is the disjunction taken for every partition between A and B. E.g. if A = 0|1||0 and B = 1||0|1, then A∨B = 0|1||0 ∨ 1||0|1 = (0∨1||0)|(1||0∨1) = ((0∨1)||(0∨0))|((1∨1)||(0∨1)) = 1||0|1||1.
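To make the semantic representation of Definition 2 and the two corollaries concrete, the following sketch (our illustration, with names of our own choosing) encodes a single-row truth form as a nested tuple, so that "1|0" is (1, 0) and "1||0|1||0" is ((1, 0), (1, 0)); H splits every value into a new 1/0 dichotomy, negation is taken on each separate value, and disjunction is taken partition by partition.

```python
def H(form):
    """Basic unary operator H: every truth value splits into a new 1/0 dichotomy."""
    if isinstance(form, tuple):
        return tuple(H(part) for part in form)
    return (1, 0)

def neg(form):
    """Corollary 2-1: negation is taken on each separate truth value."""
    if isinstance(form, tuple):
        return tuple(neg(part) for part in form)
    return 1 - form

def disj(a, b):
    """Corollary 2-2: disjunction partition by partition, broadcasting over finer splits."""
    if isinstance(a, tuple) and isinstance(b, tuple):
        return tuple(disj(x, y) for x, y in zip(a, b))
    if isinstance(a, tuple):
        return tuple(disj(x, b) for x in a)
    if isinstance(b, tuple):
        return tuple(disj(a, y) for y in b)
    return max(a, b)

p = (1, 0)                       # "1|0"
print(H(p))                      # ((1, 0), (1, 0))      i.e. "1||0|1||0"
print(neg((0, 1)))               # (1, 0)                i.e. the negation of 0|1
A = (0, (1, 0))                  # "0|1||0"
B = ((1, 0), 1)                  # "1||0|1"
print(disj(A, B))                # ((1, 0), (1, 1))      i.e. "1||0|1||1"
```

The last call reproduces the worked example of Corollary 2-2 above.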
2.3 Explanation

We are familiar with the certainty of "the uncertainty" in quantum mechanics: quantum mechanics cannot precisely foresee the result of a single quantum measurement, but it can foresee its probability, and the probability it foresees is completely definite. Take Hp as an example again. First of all, when p has the definite truth-value 0, the truth-value of Hp is not definitely 1 or 0 but carries the uncertainty of the truth-value 1|0 (it can be true and can be false, but cannot be both true and false at the same time); yet the fact that the truth-value of Hp is 1|0 is itself completely definite.

Perhaps someone may think that, in order to define the set of non-truth-valued functions completely, we have to let every truth-function formed from one or several variables and the classical connectives correspond one by one to a unique non-truth-valued function, for example by defining each of them separately as p∨p1, p∨p2, and so on. Obviously, however, such a definition does not fit natural language. For example, even if we believe that "obligatory" is a non-truth-functional connective, we never think that there are two different connectives in "It is obligatory that I pay a tax" and "It is obligatory that I donate money": the two occurrences of "obligatory" are just the same!
3. Interpretations of the Failure of the Equivalent Substitution Principle

SCP is just another form of CP, but it yields a few important derived rules that are not easy to detect in the original CP system.

3.1 Important Properties of Classical Propositional Logic Revealed by STRF

Rule 1 (differentiating operators with superposition).
E.g. HHp = Hp1 = p2 ≠ p1 = Hp.

Rule 2 (non-complete substitutability).
When a non-truth-function of A is replaced by the corresponding non-truth-function of B, the replacement in general follows K-2 rather than K-1:

K-1: if the non-truth-function of A is d(A, A1, ...), then the corresponding non-truth-function of B is d(B, A1, ...).    (3)
K-2: if the non-truth-function of A is d(A, A1, ...), then the corresponding non-truth-function of B is d(B, B1, ...).    (4)

In fact, when the operator reduces to a truth-valued function, K-2 reduces to K-1; and if the last variables occurring in A and in B are the same, K-2 also reduces to K-1.

3.2 The Failure of the Equivalent Substitution Principle in the Classical View

According to STRF, no matter what p1 exactly is, for a non-truth-valued function of p with Hp = p1 we have: if p = 1 then Hp = 1|0, and if p = 0 then Hp = 1|0. So we obtain a recursive definition for HHp: if Hp = 1 then HHp = H(Hp) = 1|0; if Hp = 0 then HHp = H(Hp) = 1|0. In addition we obtain: if p = 1 then HHp = 1||0|1||0; if p = 0 then HHp = 1||0|1||0.

Consider the two formulas A = p and B = Hp. Obviously A and B are different formulas. But when p = 1, A = p = 1, B = Hp = HA = 1|0 and HB = HHp = 1||0|1||0. This shows that the result can be such that when A = B = 1, HA = 1 while HB = 0. To accommodate this result, according to STRF, HHp has to be p2 in the syntax and cannot be p1.

To sum up, we obtain "a failure of the equivalent substitution principle" inside the classical logic system SCP itself: when p = 1, p1 = 1 and p2 = 0, we have A = p = p1 = B = 1, but HA = Hp = p1 = 1 while HB = p2 = 0. That is to say, there is a truth assignment under which A and B are equivalent, but when we use B to substitute for A in HA, then HA and HB are not equivalent.

Of course, "a failure of the equivalent substitution principle" here is truly "non-classical", since "H" is not a classical logical connective but a non-truth-valued functional operator; yet we need neither intensional logic nor any other non-classical logic to interpret it. What was once regarded as a puzzling "non-classical" characteristic of natural or scientific language turns out to be a "classical" one in fact.
3.3 Interpretation of Problem Q

3.3.1 Overview of modal propositional logic

To keep things simple and precise, we consider atomic modal propositions only in this article. Of course "□" is the operator "necessarily", but from a purely formal point of view "□" is just a symbol called "box"; we do not need the detailed meaning of "□" in natural language, nor the long history of "necessary" in the philosophy of logic. It is well known that □p is a bivalent unary non-truth-function of p, so we can use the equivalent transformation above to analyse every occurrence of □p in any modal schema. Take the T schema □p→p as an example. We find that:

Firstly, the semantic relation between H13p and p is the same as the semantic relation between □p and p, and according to STRF any bivalent unary non-truth-function can always be equivalently transformed into a truth-valued function; so H13p can be used to represent □p in the T schema.

Secondly, "□p" in the T schema, which actually stands for a set of theorems of classical propositional logic, represents a set of (not a single) truth-functions, each of which makes the T schema valid iff it has the form of a conjunction with p as one conjunct:
□p =def p ∧ A.    (5)

Thirdly, according to "possible world semantics", the truth of □p in the actual world in the T system means that p is true in all accessible possible worlds, accessibility being reflexive on the actual world. Fortunately, a truth-function of the form p ∧ A (such as p, p ∧ p1, and so on) makes the T schema valid if and only if it is a truth-function of CP that is reflexive to p in the set-theoretic sense [9]; a truth-function makes the D schema valid if and only if it is serial to p in that sense; a truth-function makes the 4 schema valid if and only if it is Euclidean to p in that sense; and so on for any other schema.

Fourthly, what about "□p is true" in a possible world other than the actual one? Formally, no matter how complex the inner structure of □p is, the semantic relation between □p and p admits exactly three possibilities: 1,1; 0,1; 0,0. That is to say, if □p is true, its formal semantics is simply "true", no matter whether □p is true in this possible world or in any other; if □p is false, its formal semantics is simply "false", no matter in which possible world it is false.

Fifthly, in CP, with □p =def p ∧ A, □p is equivalent to a truth-function arbitrarily selected from the set of truth-functions of CP. In natural language, □p is "anyway, p", or more strongly, "it is always false that any premise infers the negation of p".

Sixthly, when p has the definite truth-value 1, the truth-value of □p is not a definite 1 or 0 but carries the uncertainty of the truth-value 1|0 (it can be true and can be false, but not both at the same time); yet that the truth-value of □p is 1|0 is completely definite. Moreover, when p has the definite truth-value 0, although the semantics of □p, with its uncertain truth-value 1|0, is completely definite, the logical syntax of □p remains undetermined, for we still cannot regard □p as the conjunction of p with one definite non-truth-function of p. What is totally certain is that □p can in logical syntax only be defined as a conjunction of the variable p with some non-truth-function of p in CP, that is, as a non-truth-function of p equivalent to some truth-function reflexive to p in the set-theoretic sense; and □p cannot be defined as such a conjunction with one particular non-truth-function of p and, at the same time, as another conjunction with a different one.

Finally, what about the "necessitation rule"? It is of course quite complex! But the necessitation rule is not used here, since we only consider atomic modal propositions, so that a premise A = p is never a theorem; the "necessitation rule" is "if ⊢ A, then ⊢ □A".
3.3.2 A brief introduction to deontic logic

Since deontic paradoxes almost inevitably arise when the deontic operator "obligatory" ("ought to") is directly interpreted as the modal operator "necessarily" in the D system of modal propositional logic, we try instead to interpret the deontic operator as a superposition of non-truth-valued functions, and we define atomic deontic propositions as follows. Deontic logic includes, and only includes, the logic of morality and the logic of obligation [10].

A. Logic of morality
"It is permitted that p" is "it is possible that p is moral":
Pp =def ◇Mp.    (6)
"It ought to be that p" is "it is not possible that p is not moral":
Op =def ¬◇¬Mp.    (7)
Obviously Op → Mp, but Mp → Pp is not valid.

B. Logic of obligation
"It is permitted that p" is "it is possible that p is normative":
Pp =def ◇Np.    (8)
"It is obligatory that p" is "it is not possible that not-p is normative":
Op =def ¬◇N(¬p).    (9)
Obviously Fp = O(¬p) = ¬◇N(¬¬p) = ¬◇Np = ¬Pp, where F is "forbidden"; Op → Pp, but Op → Np is not valid, Op → p is not valid, and p → Pp is not valid.

In both A and B, "◇" is the "possibility" of some system of modal propositional logic; by STRF, N(p) is Hp, and so is M(p). That is to say, if P were just "◇", then the logic of obligation, rather than the logic of morality, would have to be regarded as the D system. Of course, in the logic of obligation Pp is not "possibly p" but "possibly p is normative".

Moreover, in natural situations "p is normative" and "not-p is normative" cannot both be false. That is to say, when the former Np is 1, the latter N(¬p) is 1|0; when the former is 0, the latter is 1, so that when Np = Hp, N(¬p) = H4Hp.    (10)
3.3.3 A preliminary interpretation of problem Q

We now return to the propositions at the beginning of this article: "I pay a tax" as p and "I donate much money" as pk, where "k" indicates a k-fold superposition of "H"; "It is obligatory that I pay a tax" as Op and "It is obligatory that I donate much money" as Opk, with k ≥ 2 and k ∈ N. In this situation:

1: Op = F(¬p) = ¬◇N(¬p);
2: Np = Hp = p1;
3: Op = ¬◇N(¬p) = ¬◇H4Hp, which reduces to a truth-function of p1 and p2;
4: Opk = ¬◇N(¬pk);
5: Npk = Hpk = pk+1;
6: Opk = ¬◇N(¬pk) = ¬◇H4Hpk, which reduces to a truth-function of pk+1 and pk+2.

It is easy to see that when p = pk = 1, Op and Opk can take different truth-values, since p, p1, p2, pk, pk+1 and pk+2 are independent of each other, no matter whether "□" is read in the D system or in the T system.
4. Conclusions

The core of the STRF theory is the discovery of, and a rigorous classical definition for, the equivalent transformation between truth-functions and non-truth-functions in two-valued propositional logic. Firstly, we recognize the relativity of truth-functionality. Secondly, we use only six symbols (H, p, two classical connectives and the parentheses) to define all truth-functions, each of which can be equivalent to a non-truth-function in CP. Thirdly, we find that "the failure of the equivalent substitution principle" can be generated automatically from SCP in the classical view. Fourthly, we obtain that any schema of propositional modal logic must correspond to a group of theorems of CP. Fifthly, even though we have not thoroughly reduced "possible world semantics", we know that many single non-truth-functional operators of natural language turn out to be superpositions of several non-truth-functional operators in SCP. These results not only show that the natural language system is actually safe and sound, but also show a promising way forward for the many scientists in information science and computer technology who worry about the disunity of non-classical logic and classical logic.

Moreover, according to STRF in propositional logic, any truth-function can definitely be equivalently transformed into a non-truth-function, but whether any non-truth-function can definitely be equivalently transformed into a truth-function is still an open question.
Acknowledgements. This work is supported by the National Social Science Foundation of China and by the Humanities Research Funds of Huazhong University of Science and Technology.
References
1. Blackburn, P., van Benthem, J., Wolter, F.: Handbook of Modal Logic. Elsevier (2007)
2. Hill, D.J., McLeod, S.K.: On Truth-Functionality. Review of Symbolic Logic 3(4), 628-632 (2010)
3. Marcos, J.: What is a Non-Truth-Functional Logic? Studia Logica 92(2), 215-240 (2009)
4. Quine, W.V.: The Problem of Interpreting Modal Logic. Journal of Symbolic Logic 12, 43-48 (1947)
5. Quine, W.V.: Methods of Logic, pp. 1-10. Harvard University Press, Cambridge, MA (1982)
6. Rescher, N.: The Concept of Randomness. Theoria 27(1), 1-11 (1961)
7. Schnieder, B.: Truth-Functionality. Review of Symbolic Logic 1(1), 64-72 (2008)
8. Read, S.: Conditionals are not Truth-Functional: An Argument from Peirce. Analysis 52, 5-12 (1992)
9. Wan, X.L., Li, F.Y., Tian, X.: The Second Exploration of Unary Operators: Deontic Logic and Deontic Paradox. Journal of Anhui University (Social Science edition, in Chinese) 18(3), 38-47 (2012)
10. Wan, X.L.: The Third Exploration of Unary Operators: On Modern Modal Logic from STRF. Journal of Huazhong University of Science and Technology (Social Science edition, in Chinese) 26(3), 33-39 (2012)