Learning from Inconsistencies in an Integrated Cognitive Architecture
Kai-Uwe Kühnberger (with Peter Geibel, Helmar Gust, Ulf Krumnack, Ekaterina Ovchinnikova, Angela Schwering, Tonio Wandmacher)
Universität Osnabrück
The First Conference on Artificial General Intelligence (AGI-08), Memphis, March 1st, 2008
Overview
• Introduction
  - Learning in Cognitive Systems
• The I-Cog Architecture
  - General Overview of the System
• Learning from Inconsistencies
  - General Remarks
  - Learning from Inconsistencies in Analogy Making and the Overall System
• Conclusions
Introduction
Learning in Cognitive Systems
Learning
• Usually, cognitive architectures are based on a number of different modules.
  - Example: a hybrid system
• Coherence problems and consistency clashes can obviously occur, in particular in hybrid systems.
• In hybrid architectures, two main questions can be asked:
  - On which level should learning be implemented?
  - What are plausible strategies for resolving inconsistencies?
• Idea of this talk: use occurring inconsistencies as a mechanism (trigger) for learning.
The I-Cog Architecture
General Overview
A Proposal: I-Cog
• I-Cog is a modular system consisting of three main modules:
  - Analogy Engine (AE): Claim: AE is able to cover a variety of different reasoning abilities.
  - Ontology Rewriting Device (ORD): Claim: ontological background knowledge needs to be implemented in such a way that dynamic updates are possible.
  - Neuro-Symbolic Learning Device (NSLD): Claim: the neuro-symbolic learning device enables robust learning of symbolic theories from noisy data.
• Finally, these three modules interact in a non-trivial way and are governed by a heuristic-driven Control Device (CD).

Kühnberger, K.-U. et al. (2007): I-Cog: A Computational Framework for Integrated Cognition of Higher Cognitive Abilities, in Proceedings of MICAI 2007, LNAI 4827, pp. 203-214, Springer.
The Overall I-Cog Architecture (architecture diagram)
Learning in I-Cog
• Learning is based on occurring inconsistencies.
  - In the case of ORD, rewriting algorithms make sure that inconsistencies are resolved (where this is possible).
    Ovchinnikova, E. & Kühnberger, K.-U. (2007). Debugging Automatically Extended Ontologies, GLDV-Journal for Computational Linguistics and Language Technology, 23(2):19-33.
  - NSLD is a learning device where weights are adjusted based on backpropagation of errors.
    Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory, in P. Hitzler & B. Hammer (eds.): Perspectives of Neural-Symbolic Integration, Series "Computational Intelligence", Springer, pp. 209-240.
  - In the case of AE, it is possible to reduce many adaptation processes to occurring inconsistencies.
• Claim 1: Learning is distributed over the whole system.
• Claim 2: Learning takes place because occurring errors / inconsistencies trigger an adaptation process (see the sketch below).
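To make the trigger mechanism of Claim 2 concrete, here is a minimal sketch in Python. The class and method names (ControlDevice, repair, adapt, backpropagate) and the inconsistency kinds are illustrative assumptions, not the actual I-Cog interfaces; the sketch only shows the pattern of routing each detected inconsistency to the module that adapts in response.

# Illustrative sketch only: routing inconsistencies to modules as learning triggers.
class Inconsistency:
    def __init__(self, kind, payload):
        self.kind = kind          # e.g. "ontology_clash", "analogy_clash", "prediction_error"
        self.payload = payload    # the conflicting pieces of information

class OntologyRewritingDevice:
    def repair(self, inc):
        # rewrite or weaken axioms until the ontology is consistent again (where possible)
        print("ORD: rewriting axioms to resolve", inc.payload)

class AnalogyEngine:
    def adapt(self, inc):
        # adapt the analogical mapping (e.g. re-generalize) to resolve the clash
        print("AE: adapting the generalization for", inc.payload)

class NeuroSymbolicLearningDevice:
    def backpropagate(self, inc):
        # treat the inconsistency as an error signal and adjust the weights
        print("NSLD: adjusting weights for", inc.payload)

class ControlDevice:
    """Heuristic-driven dispatcher: every inconsistency triggers adaptation somewhere."""
    def __init__(self):
        self.ord = OntologyRewritingDevice()
        self.ae = AnalogyEngine()
        self.nsld = NeuroSymbolicLearningDevice()

    def handle(self, inc):
        if inc.kind == "ontology_clash":
            self.ord.repair(inc)
        elif inc.kind == "analogy_clash":
            self.ae.adapt(inc)
        else:
            self.nsld.backpropagate(inc)

cd = ControlDevice()
cd.handle(Inconsistency("ontology_clash", ("Whale is-a Fish", "Whale is-a Mammal")))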
Learning from Inconsistencies
The Example of Analogical Reasoning
General Remarks
• Inconsistencies are classically connected to logic:
  - If, for a set of axioms Γ (relative to a language L), a formula φ can be entailed and ¬φ can be entailed, then Γ is inconsistent (a brute-force propositional check is sketched below).
• We use the term "inconsistency" rather loosely and do not restrict this concept to logic. Here are some examples:
  - Every analogy establishes a relation that resolves a clash of concepts, information, interpretations, etc.
    Gust, H. & Kühnberger, K.-U. (2006). Explaining Effective Learning by Analogical Reasoning, 28th Annual Conference of the Cognitive Science Society, pp. 1417-1422.
  - Ontology generation / learning
    Ovchinnikova, E., Wandmacher, T. & Kühnberger, K.-U. (2007). Solving Terminological Inconsistency Problems in Ontology Design, IBIS 4:65-80.
    Ovchinnikova, E. & Kühnberger, K.-U. (2006). Adaptive ALE-TBox for Extending Terminological Knowledge, in Proceedings of AI'06, LNAI 4304, Springer, pp. 1111-1115.
  - Non-monotonicity effects in reasoning.
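For the classical, logical notion, a toy Python sketch: a set of propositional formulas is inconsistent iff no truth assignment satisfies all of them (so some formula and its negation are both entailed). The formula encoding and function names are assumptions made purely for illustration.

from itertools import product

def evaluate(formula, assignment):
    """Formulas: a variable name, ('not', f), ('and', f, g), or ('or', f, g)."""
    if isinstance(formula, str):
        return assignment[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    raise ValueError(op)

def variables(formula, acc):
    # collect all propositional variables occurring in a formula
    if isinstance(formula, str):
        acc.add(formula)
    else:
        for sub in formula[1:]:
            variables(sub, acc)
    return acc

def inconsistent(axioms):
    vs = sorted(set().union(*(variables(a, set()) for a in axioms)))
    # inconsistent iff no assignment makes all axioms true
    return not any(all(evaluate(a, dict(zip(vs, vals))) for a in axioms)
                   for vals in product([False, True], repeat=len(vs)))

print(inconsistent(["p", ("not", "p")]))               # True: p and ¬p clash
print(inconsistent(["p", ("or", ("not", "p"), "q")]))  # False: satisfiable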
The Analogy Engine
• The Analogy Engine is based on Heuristic-Driven Theory Projection (HDTP).
  - HDTP is a mathematically sound theory of computing analogies.
  - It is based on anti-unification of a source theory ThS and a target theory ThT.
  - It has been applied to various domains like naïve physics, metaphors, geometric figures, etc.
• Some features (a toy anti-unification sketch follows below):
  - Complex formulas can be anti-unified.
  - A theorem prover allows the re-representation of formulas.
  - Whole theories can be generalized.
  - The involved processes are governed by heuristics.

Gust, H., Kühnberger, K.-U. & Schmid, U. (2006). Metaphors and Heuristic-Driven Theory Projection (HDTP), Theoretical Computer Science, 354:98-117.
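As a rough illustration of the anti-unification step, the toy Python sketch below computes a common generalization of two terms. Unlike classical first-order anti-unification it also abstracts differing function symbols, loosely mimicking HDTP's restricted higher-order anti-unification; the term encoding and all names are assumptions, not the HDTP implementation.

from itertools import count

def _gen_var(s, t, subst, fresh, prefix):
    # reuse an existing generalization variable for the same (s, t) pair
    for var, pair in subst.items():
        if pair == (s, t):
            return var
    var = f"{prefix}{next(fresh)}"
    subst[var] = (s, t)
    return var

def anti_unify(s, t, subst, fresh):
    """Terms are strings (constants/variables) or tuples ('f', arg1, ...)."""
    if s == t:
        return s
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        # same arity: keep a shared head, otherwise introduce a function variable
        head = s[0] if s[0] == t[0] else _gen_var(s[0], t[0], subst, fresh, "Op")
        args = tuple(anti_unify(a, b, subst, fresh) for a, b in zip(s[1:], t[1:]))
        return (head,) + args
    # structurally different subterms are abstracted into a fresh variable
    return _gen_var(s, t, subst, fresh, "E")

subst, fresh = {}, count()
source = ("add", "x", "0")             # left-hand side of  add(x,0) = x
target = ("mult", "x", ("s", "0"))     # left-hand side of  mult(x,s(0)) = x
print(anti_unify(source, target, subst, fresh))   # ('Op0', 'x', 'E1')
print(subst)   # {'Op0': ('add', 'mult'), 'E1': ('0', ('s', '0'))}

The recorded substitution maps Op0 to the pair (add, mult) and E1 to (0, s(0)), which mirrors the substitutions established in the recursion example on the next slide.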
Recursion Example I

Source ThS: Addition
1: ∀x: add(x,0) = x
2: ∀x∀y: add(x,s(y)) = s(add(x,y))

Target ThT: Multiplication
1: ∀x: mult(x,s(0)) = x
2: ∀x∀y: mult(x,s(y)) = add(x,mult(x,y))

Generalized Theory ThG:
1: ∀x: Op1(x,E) = x
2: ∀x∀y: Op1(x,s(y)) = Op2(Op1(x,y))

For the generalized theory, the following substitutions need to be established (a quick check follows below):
1: E → 0, Op1 → add, Op2 → s
2: E → s(0), Op1 → mult, Op2 → λz.add(x,z)
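As a quick sanity check (hypothetical Python; the term encoding and the helper apply_subst are assumptions), applying the two substitutions to the left-hand side of the first generalized axiom recovers the corresponding source and target axioms. The λ-substitution for Op2 in the second axiom would need genuine higher-order substitution and is not covered by this simple sketch.

def apply_subst(term, subst):
    """Replace generalization symbols in a term (string or nested tuple)."""
    if isinstance(term, tuple):
        return tuple(apply_subst(t, subst) for t in term)
    return subst.get(term, term)

generalized_lhs = ("Op1", "x", "E")             # Op1(x,E) from  Op1(x,E) = x

to_source = {"Op1": "add",  "E": "0"}
to_target = {"Op1": "mult", "E": ("s", "0")}

print(apply_subst(generalized_lhs, to_source))  # ('add', 'x', '0')        i.e. add(x,0)
print(apply_subst(generalized_lhs, to_target))  # ('mult', 'x', ('s','0')) i.e. mult(x,s(0))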
Recursion Example II

Source ThS: Addition
1: ∀x: add(0,x) = x
2: ∀x∀y: add(s(y),x) = add(y,s(x))

Target ThT: Multiplication
1: ∀x: mult(0,x) = 0
2: ∀x∀y: mult(s(y),x) = add(x,mult(y,x))

Generalized Theory ThG:
1: ∀x: Op(E,x) = x

Trying to anti-unify axiom 1 of the source with axiom 1 of the target directly is not possible. But by using axioms 1 and 2 we can derive
mult(s(0),x) = add(x,mult(0,x)) = add(x,0) = … = add(0,x)
Hence we can derive: 3: ∀x: mult(s(0),x) = x (checked on concrete numerals in the sketch below).

For the generalized theory, the following substitutions can be established:
1: E → 0, Op → add   and   2: E → s(0), Op → mult
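In HDTP this re-representation is carried out by a theorem prover. As a lightweight check only, the Python sketch below encodes the axioms of this example as recursive functions on Peano numerals and confirms the derived axiom 3, ∀x: mult(s(0),x) = x, on a few concrete numerals; the encoding is an illustrative assumption, not the prover.

def s(n):                # successor
    return ("s", n)

def add(a, b):           # 1: add(0,x) = x,  2: add(s(y),x) = add(y,s(x))
    if a == "0":
        return b
    return add(a[1], s(b))

def mult(a, b):          # 1: mult(0,x) = 0,  2: mult(s(y),x) = add(x,mult(y,x))
    if a == "0":
        return "0"
    return add(b, mult(a[1], b))

one = s("0")
for x in ("0", s("0"), s(s("0"))):
    assert mult(one, x) == x     # the derived axiom 3 holds on these numerals
print("mult(s(0),x) = x checked on sample numerals")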
Conclusion
• Main claims:
  - In cognitive architectures, "inconsistencies" (as used in the broad sense here) should be considered a trigger for learning and adaptation.
  - These adaptation processes can be relevant for:
    - Adapting background knowledge,
    - Reasoning processes of various types,
    - Neuro-based learning approaches.
  - Learning in the system is therefore distributed and continuously realized.
Thank you very much!!
Questions?
References
• Analogical Reasoning (Selection)
  - Gust, H., Kühnberger, K.-U. & Schmid, U. (2006). Metaphors and Heuristic-Driven Theory Projection (HDTP), Theoretical Computer Science, 354:98-117.
  - Gust, H. & Kühnberger, K.-U. (2006). Explaining Effective Learning by Analogical Reasoning, in: R. Sun & N. Miyake (eds.): 28th Annual Conference of the Cognitive Science Society, Lawrence Erlbaum, pp. 1417-1422.
  - Gust, H., Krumnack, U., Kühnberger, K.-U. & Schwering, A. (2007). An Approach to the Semantics of Analogical Relations, in S. Vosniadou et al. (eds.): Proceedings of EuroCogSci 2007, Lawrence Erlbaum, pp. 640-645.
  - Krumnack, U., Schwering, A., Gust, H. & Kühnberger, K.-U. (2007). Restricted Higher-Order Anti-Unification for Analogy Making, to appear in Proceedings of AI'07, Springer.
  - Gust, H., Krumnack, U., Kühnberger, K.-U. & Schwering, A. (2008). Analogical Reasoning: A Core of Cognition, to appear in Künstliche Intelligenz 1/2008.
References
• Neuro-Symbolic Integration (Selection)
  - Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning and Memorizing Models of Logical Theories in a Hybrid Learning Device, to appear in Proceedings of ICONIP 2007, Springer.
  - Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory, in P. Hitzler & B. Hammer (eds.): Perspectives of Neural-Symbolic Integration, Series "Computational Intelligence", Springer, pp. 209-240.
• Ontology Rewriting (Selection)
  - Ovchinnikova, E. & Kühnberger, K.-U. (2007). Debugging Automatically Extended Ontologies, GLDV-Journal for Computational Linguistics and Language Technology, 23(2):19-33.
  - Ovchinnikova, E., Wandmacher, T. & Kühnberger, K.-U. (2007). Solving Terminological Inconsistency Problems in Ontology Design, International Journal of Interoperability in Business Information Systems, 4:65-80.
  - Ovchinnikova, E. & Kühnberger, K.-U. (2006). Adaptive ALE-TBox for Extending Terminological Knowledge, in A. Sattar & B. H. Kang (eds.): Proceedings of AI'06, LNAI 4304, Springer, pp. 1111-1115.
References
• I-Cog
  - Kühnberger, K.-U., Geibel, P., Gust, H., Krumnack, U., Ovchinnikova, E., Schwering, A. & Wandmacher, T. (2008): Learning from Inconsistencies in an Integrated Cognitive Architecture, to appear in Proceedings of AGI 2008, IOS Press.
  - Kühnberger, K.-U. (2007): Principles for the Foundation of Integrated Higher Cognition (Abstract), in D. S. McNamara & J. G. Trafton (eds.): Proceedings of CogSci 2007, p. 1796, Austin, TX: Cognitive Science Society.
  - Kühnberger, K.-U., Wandmacher, T., Schwering, A., Ovchinnikova, E., Krumnack, U., Gust, H. & Geibel, P. (2007): I-Cog: A Computational Framework for Integrated Cognition of Higher Cognitive Abilities, in Proceedings of MICAI 2007, LNAI 4827, pp. 203-214, Springer.
  - Kühnberger, K.-U., Wandmacher, T., Schwering, A., Ovchinnikova, E., Krumnack, U., Gust, H. & Geibel, P. (2007): Modeling Human-Level Intelligence by Integrated Cognition in a Hybrid Architecture, in P. Hitzler, T. Roth-Berghofer, S. Rudolph (eds.): FAInt-07, Workshop at KI 2007, CEUR-WS, vol. 277, pp. 1-15.
Members of the AI group
• Peter Geibel
• Karl Gerhards
• Helmar Gust
• Ulf Krumnack
• Kai-Uwe Kühnberger
• Jens Michaelis
• Ekaterina Ovchinnikova
• Angela Schwering
• Konstantin Todorov
• Ulas Türkmen
• Tonio Wandmacher