Graduiertenkolleg
Adaptivity in Hybrid Cognitive Systems
Institute of Cognitive Science, University of Osnabrück
Artificial Intelligence
Contact:
Prof. Dr. Kai-Uwe Kühnberger (http://www.cogsci.uos.de/~kkuehnbe)
PD Dr.-Ing. Helmar Gust (http://www.cogsci.uos.de/~hgust)
Dissertation Projects
AI 1: Development of a Module for the Modeling of Logical Theories with Neural Networks
The starting point of this dissertation project is the translation of logical terms and formulas into a variable-free logic whose elements can be regarded as arrows in a topos. By interpreting arrows as atomic entities, relations between these arrows can be represented by equations; a logical theory given by a set of axioms therefore corresponds to a system of equations. For theories that can be represented by a finite set of equations, neural networks can be used to learn representations of models of these equations: the networks employed in this approach map the symbols of a logical theory into a representation space, i.e. an n-dimensional vector space. The hypothesis is that similarity in the behavior of symbols is mirrored by topological nearness in the representation space. In this dissertation, a module is to be developed that translates logical descriptions of planning-based behavior into a neural system (co-adviser: Joachim Hertzberg). Furthermore, the project will cooperate with dissertation project C of the Neurobiopsychology group, in which the relation between neural and symbolic representations is examined.
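To make the idea concrete, the following is a minimal, hypothetical sketch (in Python/PyTorch; the toy theory, the equation format, and all names are illustrative assumptions, not part of the project): constant symbols are embedded into an n-dimensional space, function symbols ("arrows") are realized as learnable maps on that space, and the axiom equations are turned into a training objective.

```python
import torch

DIM = 8
constants = ["a", "b", "c"]
functions = ["f", "g"]
# Toy theory in equational, variable-free form: f(a) = b, g(b) = c, f(c) = a.
equations = [("f", "a", "b"), ("g", "b", "c"), ("f", "c", "a")]

c_idx = {s: i for i, s in enumerate(constants)}
const_emb = torch.nn.Embedding(len(constants), DIM)            # constant symbol -> R^DIM
func_maps = {f: torch.nn.Linear(DIM, DIM) for f in functions}  # function symbols as maps on R^DIM

params = list(const_emb.parameters())
for m in func_maps.values():
    params += list(m.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = torch.zeros(())
    for f, arg, val in equations:
        lhs = func_maps[f](const_emb(torch.tensor(c_idx[arg])))  # representation of f(arg)
        rhs = const_emb(torch.tensor(c_idx[val]))                # representation of val
        loss = loss + torch.nn.functional.mse_loss(lhs, rhs)     # each equation as a soft constraint
    loss.backward()
    opt.step()

# After training, symbols that behave similarly in the theory should lie close together in the
# representation space. A real module would need additional constraints (e.g. keeping distinct
# symbols apart), since collapsing all embeddings onto one point trivially satisfies the equations.
```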
AI 2: The Modeling of Non-Classical Logical Theories with Neural Networks
Inconsistent axiom systems are not an appropriate domain for classical logical approaches. Nevertheless, there are non-classical theories that model, at least to a certain extent, non-monotonic and paraconsistent reasoning. If a neural network is trained with an inconsistent axiom system, it can be expected that the effects of the inconsistencies remain locally constrained, i.e. over a broad range of the domain the network should still be able to draw plausible inferences. From another perspective, this approach also offers the possibility of examining strongly underdetermined representations. There is a natural connection between this dissertation project and dissertation project A of the Computational Linguistics group, in which underdetermined mental lexical representations are examined from a linguistic perspective. Furthermore, there are relations to philosophical work on the symbol-grounding problem and the question of how model-based theories can exhibit emergent behavior.
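The following is a hypothetical, self-contained sketch of the diagnostic question behind this project (the data, the network, and all names are illustrative assumptions, not the project's method): a small network is trained on "axioms" given as input/target pairs, exactly one of which is contradictory, and the per-axiom residuals after training indicate whether the effect of the inconsistency stays locally constrained.

```python
import torch

torch.manual_seed(0)
# "Axioms" as input -> target pairs; the input x = 2 carries two incompatible targets.
xs = torch.tensor([[0.], [1.], [2.], [2.], [3.], [4.]])
ys = torch.tensor([[0.], [1.], [2.], [-2.], [3.], [4.]])

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(3000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(xs), ys)
    loss.backward()
    opt.step()

with torch.no_grad():
    residuals = (net(xs) - ys).abs().squeeze(1)
for x, r in zip(xs.squeeze(1).tolist(), residuals.tolist()):
    print(f"input {x:.0f}: residual {r:.3f}")
# In the spirit of the project, the hope is that large residuals cluster around the contradictory
# input (x = 2) while the network still interpolates plausibly elsewhere.
```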
Theoretical Background of the Dissertation Projects
Whereas symbolic approaches have a long history in artificial intelligence and have been applied successfully to a wide range of problems, it was not until the early 1990s that AI research also began to examine biologically inspired frameworks for AI applications, paradigmatically represented by artificial neural networks. Symbolic theories are prototypically used to model higher cognitive abilities such as planning, reasoning, and search, but have obvious problems in modeling lower cognitive abilities such as motor control or image recognition. The situation is reversed for neural networks; the two approaches thus have complementary strengths and weaknesses. This obvious gap between the two types of modeling has not been bridged so far. These projects focus on adaptation processes and interactions between the two levels of description, in particular on constraints that one level imposes on the other.
The interaction between and the adaptation of the different levels of description can be
examined in different ways:
1. One can translate symbolic systems into artificial neural networks and vice versa:
a. Symbols and their relations are mapped to the network topology.
b. The network topology and the weights of the nets are directly translated into (fuzzy) symbolic systems (often as a procedure inverse to 1a).
2. Based on a fixed network topology (determined by external principles or a class of
topologies) one can examine the following two alternatives:
a. Translate composition or inference principles of symbolic systems into learning
tasks for the network.
b. Translate trained neural networks into symbolic systems by analyzing the network topology and the weight distribution (perhaps taking into account the network dynamics determined by the learning mechanism); a toy sketch of this direction follows the list.
3. One can simulate the behavior of the system:
a. Simulate a symbolic rule system by a neural network.
b. Simulate a trained neural network by a symbolic rule system.
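As a toy illustration of alternative 2b (in the spirit of rule-extraction work such as Darbari, 2000, and Shavlik & Towell, 1994, but not their actual algorithms), the following hypothetical sketch trains a single threshold unit on a Boolean concept and reads an M-of-N style rule off the learned weights.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Target concept: "at least 2 of the 3 Boolean inputs are true".
X = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
y = (X.sum(axis=1) >= 2).astype(float)

# Train a single threshold unit with the classical perceptron rule.
w = rng.normal(scale=0.1, size=3)
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += yi - pred

# Extraction step: inputs with positive weight become rule antecedents, and the number m of
# them that must be active is the smallest count whose (average-weight) activation pushes the
# unit over its threshold.
antecedents = [f"x{i}" for i, wi in enumerate(w) if wi > 0]
avg_w = float(np.mean([wi for wi in w if wi > 0]))
m = next(k for k in range(len(antecedents) + 1) if k * avg_w + b > 0)
print(f"extracted rule: at least {m} of {antecedents} must be true")
# Expected output for this toy concept: at least 2 of ['x0', 'x1', 'x2'] must be true.
```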
In the dissertation projects, the second of the three alternatives will be examined. On the one hand, inference processes can be modeled by neural networks (Hitzler et al., 2004); on the other hand, the structural set-up of a logical system can be approximated by a network. In these projects the second option is pursued in such a way that not inference processes but models are computed. This reflects the fact that humans argue very well with models, but considerably less well with logical deductions. The sketched interaction between symbolic and neural systems should be considered in the context of further dissertation projects, in particular those of Peter König (Emergence and Adaptation of Symbolic Representations), Joachim Hertzberg (Scenic Interpretations from Sensory Data), and Martin Riedmiller (Integration of Symbolic Control Knowledge in RL Methods).
Previous Work
Whereas there is plenty of work on the first and the third way of addressing the problem of interaction and adaptation between the different levels (examples are Darbari, 2000; Shavlik & Towell, 1994; Nauck et al., 1996; Funahashi, 1989), comparatively little research effort has been devoted to the second (an overview can be found in Bader et al., 2004). In Hitzler et al. (2004), a deduction operator T_P of a logic program is approximated by a neural network. In Healy & Caudell (2004), an approach is described that uses category-theoretic methods to assign logical theories (concepts) to neural constructions.
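For the propositional case, the flavour of such a construction can be sketched as follows (a minimal illustration in the spirit of this line of work, not the construction of Hitzler et al. (2004) itself, which handles first-order programs; the program and all names are assumptions): the operator T_P, which derives all atoms whose clause bodies are true in a given interpretation, is computed by a two-layer threshold network with one hidden unit per clause and one output unit per atom.

```python
from typing import Dict, List, Tuple

Clause = Tuple[str, List[str]]  # (head atom, list of body atoms); no negation, for simplicity

def tp_network(program: List[Clause], atoms: List[str]):
    """Build threshold-network weights that compute T_P on interpretations."""
    idx = {a: i for i, a in enumerate(atoms)}
    # Hidden layer: clause unit j fires iff all body atoms of clause j are true.
    W_hidden = [[0] * len(atoms) for _ in program]
    thresholds = []
    for j, (_, body) in enumerate(program):
        for a in body:
            W_hidden[j][idx[a]] = 1
        thresholds.append(len(body))          # fires only if the whole body is active
    # Output layer: atom unit fires iff at least one clause with that head fired.
    W_out = [[0] * len(program) for _ in atoms]
    for j, (head, _) in enumerate(program):
        W_out[idx[head]][j] = 1
    return W_hidden, thresholds, W_out

def apply_tp(program: List[Clause], atoms: List[str],
             interpretation: Dict[str, bool]) -> Dict[str, bool]:
    """One application of T_P, computed by propagating through the threshold network."""
    W_h, t_h, W_o = tp_network(program, atoms)
    x = [1 if interpretation.get(a, False) else 0 for a in atoms]
    hidden = [1 if sum(w * xi for w, xi in zip(row, x)) >= t else 0
              for row, t in zip(W_h, t_h)]
    return {atoms[i]: sum(w * h for w, h in zip(W_o[i], hidden)) >= 1
            for i in range(len(atoms))}

# Example program: p <- .   q <- p.   r <- q.
program = [("p", []), ("q", ["p"]), ("r", ["q"])]
atoms = ["p", "q", "r"]
print(apply_tp(program, atoms, {}))               # {'p': True, 'q': False, 'r': False}
print(apply_tp(program, atoms, {"p": True}))      # {'p': True, 'q': True, 'r': False}
```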
In recent years, the Artificial Intelligence working group has published a series of papers with results related to these dissertation projects. Only two recent examples are mentioned here: in Gust, Kühnberger & Geibel (2007), a theory for the approximation of models of first-order logic was developed, and Kühnberger et al. (2007) embed this theory into the overall context of a cognitive architecture in order to obtain an approach to integrated cognition.
References
Bader, S., Hitzler, P. & Hölldobler, S. (2004). The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence, http://www.aifb.uni-karlsruhe.de/WBS/phi/pub/inf04.pdf.
Darbari, A. (2000). Rule Extraction from Trained ANN: A Survey. Technical Report WV-2000-03, Knowledge Representation and Reasoning Group, Department of Computer Science, TU Dresden.
Funahashi, K.-I. (1989). On the approximate realization of continuous mappings by neural networks. Neural
Networks, 2:183-192.
Gust, H., Kühnberger, K.-U. & Geibel, P. (2007). Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory. To appear in P. Hitzler & B. Hammer (eds.): Perspectives of Neural-Symbolic Integration, Computational Intelligence series, Springer, pp. 209-240.
Kühnberger, K.-U., Wandmacher, T., Schwering, A., Ovchinnikova, E., Krumnack, U., Gust, H. & Geibel, P. (2007). I-Cog: A Computational Framework for Integrated Cognition of Higher Cognitive Abilities. To appear in Proceedings of the 6th Mexican International Conference on Artificial Intelligence (MICAI 2007), Springer.
Healy, M. & Caudell, T. (2004). Neural Networks, Knowledge and Cognition: A Mathematical Semantic Model
Based upon Category Theory, University of New Mexico, EECE-TR-04-020.
Hitzler, P., Hölldobler, S. & Seda, A. (2004). Logic programs and connectionist networks. Journal of Applied Logic, 2:245-272.
Nauck, D., Klawonn, F. & Kruse, R. (1996). Neuronale Netze und Fuzzy-Systeme. Computational Intelligence.
Vieweg, Braunschweig.
Shavlik, J. & Towell, G. (1994). Knowledge-based artificial neural networks. Artificial Intelligence, 70:119-165.