Principles of Synthetic Intelligence
Joscha Bach, University of Osnabrück, Cognitive Science
AGI 08
What is Artificial General Intelligence up to?
Perception, and what depends on it, is inexplicable in a mechanical way, that is, using figures and motions. Suppose there were a machine, so arranged as to bring forth thoughts, experiences and perceptions; it would then certainly be possible to imagine it proportionally enlarged, in such a way as to allow entering it, like into a mill. This presupposed, one will not find anything upon its examination besides individual parts, pushing each other, and never anything by which a perception could be explained.
(Gottfried Wilhelm Leibniz, 1714)
March 1st, 2008
2
AGI 08
AI Scepticism: G. W. Leibniz
Perception, and what depends on it, is inexplicable in a mechanical way, that is, using figures and motions.
AI Scepticism: Roger Penrose
The quality of understanding and feeling possessed by human beings is not something that can be simulated computationally.
Perception, and what depends on it, is inexplicable in a mechanical way, that is, using figures and motions. (Leibniz)
AI Scepticism: John R. Searle
Syntax by itself is neither constitutive of nor sufficient for semantics. Computers only do syntax, so they can never understand anything.
The quality of understanding and feeling possessed by human beings is not something that can be simulated computationally. (Penrose)
Perception, and what depends on it, is inexplicable in a mechanical way, that is, using figures and motions. (Leibniz)
AI Scepticism: Joseph Weizenbaum
Human experience is not transferable. (…)
Computers can not be creative.
Syntax by itself is neither constitutive of nor sufficient for semantics. Computers only do syntax, so they can never understand anything. (Searle)
The quality of understanding and feeling possessed by human beings is not something that can be simulated computationally. (Penrose)
Perception, and what depends on it, is inexplicable in a mechanical way, that is, using figures and motions. (Leibniz)
AI Scepticism: General Consensus…
Computers can not, because they should not.
The “Winter of AI” is far from over.
(Overlaid: the quotes by Weizenbaum, Searle, Penrose and Leibniz.)
AI is not only trapped by cultural opposition
AI suffers from
- paradigmatic fog
- methodologism
- lack of unified architectures
- too much ungrounded, symbolic modeling
- too much non-intelligent, robotic programming
- lack of integration of motivation and
representation
- lack of conviction
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
(Figure: (infrared) imaging of a combustion engine)
Requirement:
Dissection of the system into parts and the relationships between them
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method –
not vice versa!
AI's specialized sub-disciplines will not be re-integrated into a whole.
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
Conceptual Analysis: HCogAff (Sloman 2001)
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems –
but do not get entangled in the “Symbol Grounding Problem”.
The meaning of a concept is equivalent to an adequate encoding over environmental patterns.
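This notion of meaning as an encoding over environmental patterns can be sketched minimally: a concept defined as a region around a sensory prototype. All names and values below are hypothetical illustrations, not MicroPsi code.

```python
# Minimal sketch: a concept as an encoding over environmental patterns.
# Function names and the "reddish" example are illustrative only.

def make_concept(prototype, threshold=1.0):
    """A concept grounded as a region around a sensory prototype."""
    def matches(pattern):
        # Euclidean distance between the observed pattern and the prototype
        dist = sum((a - b) ** 2 for a, b in zip(pattern, prototype)) ** 0.5
        return dist <= threshold
    return matches

# "reddish" as an encoding over RGB patterns
reddish = make_concept(prototype=(1.0, 0.0, 0.0), threshold=0.5)
print(reddish((0.9, 0.1, 0.1)))  # near-red pattern -> True
print(reddish((0.0, 1.0, 0.0)))  # green pattern -> False
```

On this view the concept is exhausted by its classification behavior over sensory input; no separate "symbol" needs external grounding.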
Modal vs. amodal representation (Barsalou 99)
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment –
Robotic embodiment is costly, but not necessarily
more “real” than virtual embodiment.
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems
Intelligence is an answer to the problem of serving polythematic goals, by unspecified means, in an open environment.
→ Integrate motivation and emotion into the model.
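Serving polythematic goals can be sketched as an agent that selects its current goal by whichever drive is most pressing. This is a minimal illustration under assumed names and numbers, not MicroPsi code.

```python
# Illustrative sketch (not MicroPsi code): an autonomous agent serving
# several drives at once, picking its goal by current urgency.

demands = {"energy": 0.4, "affiliation": 0.9, "certainty": 0.7}  # target level = 1.0

def urgency(demand_level, target=1.0):
    # Urge strength grows with the deviation of a demand from its target.
    return max(0.0, target - demand_level)

def select_goal(demands):
    # The current goal is whatever satisfies the most urgent demand.
    return max(demands, key=lambda d: urgency(demands[d]))

print(select_goal(demands))  # -> 'energy' (largest deviation from target)
```

The point is architectural: goal selection is not a separate planning add-on but falls out of the motivational state.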
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems
7. Intelligence is not going to simply “emerge”:
Sociality, personhood, experience, consciousness, emotion,
motivation will have to be conceptually decomposed and
their components and functional mechanisms realized.
Taking the Lessons: MicroPsi
• Integrated architecture, based on a theory originating in
psychology
• Unified neuro-symbolic representation (hierarchical
spreading activation networks)
• Functional modeling of emotion:
– Emotion as cognitive configuration
– Emotional moderators
• Functional modeling of motivation:
– Modeling autonomous behavior
– Cognitive and physiological drives
– Integrating motivational relevance with perception/memory
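The unified neuro-symbolic representation above rests on spreading activation over a node net; a minimal sketch follows. It is illustrative only: MicroPsi's node nets are far richer (typed links, gates and slots), and the node names and weights here are made up.

```python
# Sketch of spreading activation over a small weighted node net
# (illustrative only; node names and weights are hypothetical).

links = {                                   # directed, weighted links
    "sensor":  [("percept", 0.8)],
    "percept": [("concept", 0.6), ("motor", 0.3)],
    "concept": [("motor", 0.9)],
}

def spread(activation, steps=2):
    """Propagate activation along weighted links for a few steps."""
    for _ in range(steps):
        new = dict(activation)
        for src, targets in links.items():
            for dst, weight in targets:
                # each target accumulates the source's previous activation
                new[dst] = new.get(dst, 0.0) + activation.get(src, 0.0) * weight
        activation = new
    return activation

result = spread({"sensor": 1.0})
```

After two steps, activation injected at the sensor node has reached the concept and motor nodes, which is the mechanism by which perceptual input can recruit symbolic and behavioral structure in one representation.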
Implementation: MicroPsi (Bach 03, 04, 05, 06)
(Architecture diagram, built up over several slides:)
• Eclipse Environment
• Node Net Editor
• World Editor
• Net Simulator / Agent Execution
• World Simulator
• Monitoring
• Console Application
• 3D Display Server / 3D Display Client
Highlighted in turn: low-level perception, control and simulation, multi-agent interaction, robot control.
Foundation of MicroPsi: PSI theory (Dörner 99, 02)
How can the different
aspects of cognition be
realized?
(PSI theory architecture diagrams, shown in stages)
Motivation in PSI/MicroPsi
Integrated representation
Goal of MicroPsi: broad model of cognition
Aims:
• a perceptual symbol system approach
• integrated goal-setting
• the motivational and emotional system as an integral part of mental representation
• physiological, physical and social demands and affordances
• modulation/moderation of cognition
Lessons for Synthesizing Intelligence
1. Build whole, functionalist architectures
2. Let the question define the method
3. Aim for the Big Picture, not narrow solutions
4. Build grounded systems
5. Do not wait for robots to provide embodiment
6. Build autonomous systems
7. Intelligence is not going to simply “emerge”
Website:
www.cognitive-agents.org
• Publications,
• Download of Agent,
• Information for Developers
… and this is where it starts.
Thank you!
Website: www.cognitive-agents.org
Many thanks to…
- the Institute for Cognitive Science at the University of
Osnabrück and the AI department at Humboldt-University
of Berlin for making this work possible
- Ronnie Vuine, David Salz, Matthias Füssel, Daniel Küstner,
Colin Bauer, Julia Böttcher, Markus Dietzsch, Caryn Hein,
Priska Herger, Stan James, Mario Negrello, Svetlana
Polushkina, Stefan Schneider, Frank Schumann, Nora
Toussaint, Cliodhna Quigley, Hagen Zahn, Henning Zahn and
Yufan Zhao for contributions
Modulation in PSI/MicroPsi
Motivation in PSI/MicroPsi
Urges/drives:
– Finite set of primary, pre-defined urges (drives)
– All goals of the system are associated with the satisfaction of an urge, including abstract problem solving, aesthetics, social relationships and altruistic behavior
– Urges reflect demands
– Categories:
  • physiological urges (food, water, integrity)
  • social urges (affiliation, internal legitimacy)
  • cognitive urges (reduction of uncertainty, competence)
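The relation between demands and urges can be sketched as simple homeostasis: an urge is the deviation of a demand level from its setpoint, and consummatory behavior weakens it. Names, categories in the dict, and numbers below are illustrative, not MicroPsi code.

```python
# Illustrative sketch (not MicroPsi code): urges as deviations of demand
# levels from their setpoints; satisfying a demand weakens its urge.

setpoints = {"food": 1.0, "water": 1.0, "affiliation": 1.0, "competence": 1.0}
levels    = {"food": 0.25, "water": 0.75, "affiliation": 0.5, "competence": 1.0}

def urge(demand):
    # Urge strength = how far the demand level has fallen below its setpoint.
    return max(0.0, setpoints[demand] - levels[demand])

def satisfy(demand, amount):
    # Consummatory behavior raises the demand level (capped at the setpoint).
    levels[demand] = min(setpoints[demand], levels[demand] + amount)

print(urge("food"))   # 0.75: currently the strongest urge
satisfy("food", 0.5)
print(urge("food"))   # 0.25: eating has weakened the urge
```

Because all goals are tied to some urge, abstract activities (problem solving, socializing) fit the same scheme once cognitive and social demands are included.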
Emotion in PSI/MicroPsi
Lower emotional level (affects):
– Not an independent sub-system, but an aspect of cognition
– Emotions are an emergent property of the modulation of perception, behavior and cognitive processing
– Phenomenal qualities of emotion are due to:
  • the effect of modulatory settings on perception and cognitive functioning
  • the experience of accompanying physical sensations
(Higher-level) emotions:
– Directed affects
– Objects of affects are given by the motivational system
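"Emotion as cognitive configuration" can be sketched as modulator parameters derived from the current state, loosely following the PSI-theory modulators (arousal, resolution level, selection threshold). The coefficients and function names are invented for illustration.

```python
# Sketch of emotion as a cognitive configuration (illustrative only;
# coefficients are made up, the modulators loosely follow PSI theory).

def configure(arousal):
    """Derive a cognitive configuration from the current arousal level."""
    return {
        "resolution_level": 1.0 - 0.5 * arousal,     # high arousal -> coarser, faster processing
        "selection_threshold": 0.2 + 0.6 * arousal,  # high arousal -> harder to switch intentions
        "action_readiness": arousal,                 # high arousal -> more disposed to act
    }

calm, startled = configure(0.1), configure(0.9)
# The "startled" configuration processes more coarsely and switches less easily.
```

On this account an emotion is not a module that outputs a label; it is the configuration itself, as it shapes perception and processing.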