Download NVKI-Nieuwsbrief December 1998 162 A NEW TAKE

Survey
yes no Was this document useful for you?
   Thank you for your participation!

* Your assessment is very important for improving the workof artificial intelligence, which forms the content of this project

Document related concepts
no text concepts found
Transcript
A NEW TAKE-OFF FOR AI?
Editor-in-Chief
Most scientists would agree that AI has not achieved his goal of creating an intelligent machine. The original
high hopes fueled by the extravagant claims of early AI researchers and by the impressive performances of the
first computers, soon turned into a general feeling of disappointment. It became clear that there is more to
intelligence than fast flawless computation. Nowadays, artificial intelligence still has many critics stating that
intelligent machines cannot be constructed and that the search for artificial intelligence is deemed to fail.
Nevertheless, artificial intelligence research is prospering. Apparently, the goal of artificial intelligence is still
inspiring a large part of the scientific community. But what is this goal, anyway?
In the winter 1998 special issue of Scientific American on Exploring Intelligence, Patrick Hayes and Kenneth
Ford rethink the goal of artificial intelligence. According to Ford and Hayes the traditional goal of AI is “to
create a machine that can successfully imitate human behavior”. This goal is directly reflected in the Turing
test in which an intelligent machine can fool the judge by imitating a human being. Ford and Hayes argue that
the traditional goal is too limited for present-day AI research and should be replaced by a more contemporary
goal: “to provide a computational account of intelligence”. To support their argument, they draw an analogy
between the efforts to build a flying machine with those of building an intelligent machine. Originally, the
developers of artificial flying machines took birds as an example. The reader may recall the soundless blackand-white movies showing the desperate attempts of early artificial-flight researchers to fly in machines
equipped with flapping wings. In their contribution, Ford and Hayes show that the quest for a bird-like
machine hampered rather than helped the development of flying machines. In a similar vein, the quest for
humanoid machines may hamper the development of intelligent machines. Therefore, the goal of AI has to be
broadened to encompass mental abilities of both humans and non-humans. Ford and Hayes appear to say that
by focussing on the intelligence (flying) rather than on the human behavior (flapping wings),
artificial intelligence is ready to take off.
The current goal of AI research is certainly much broader than creating a machine that can pass
the Turing test. It might even be something like “providing a computational account of
intelligence”. What bothers me is the parallel that Ford and Hayes draw between a flying machine and an
intelligent machine. The point is that a bird knows what flying is and a human being knows what intelligence
is. To a bird, an artificial flying machine is a kind of brute-force approach to flying. It can bring you anywhere
in a short time, but it makes a lot of noise, it smells, and its flight pattern does not attract females. To a human
being, an artificial intelligent machine, such as Deep Blue, is a brute-force approach to playing chess, it can
bring you a victory, but it makes noise, and you can not intimidate it.
Many AI researchers have reached the insight that intelligence is closely related to embodiment. The physical
make-up of an intelligent being is closely related to its way of perceiving the world, interacting with the world
and thinking about the world. As a result, human intelligence, bird intelligence, and cockroach intelligence are
all very different. So, if your goal is to “provide a computational account of intelligence” in one of these (or any
other) species, mimicking is allowed and even required.
This newsletter contains (amongst others) reviews on the sessions held during the NAIC'98 at the CWI in
Amsterdam. Bart de Boer was awarded with the best-paper award at the NAIC and as a result became a new
editor for the newsletter. I would like to congratulate Bart with his prize and welcome him to our editorial
board. In addition, I am happy to welcome Antal van den Bosch who is also strengthening our editorial board.
Fotoos on page ?????are made by Eric Postma.
De uitgave van de NVKI-Nieuwsbrief wordt in 1998 mede mogelijk gemaakt door de Stichting Informatica
Onderzoek in Nederland (SION).
Sponsoring AHOLD
NVKI-Nieuwsbrief
162
December 1998
TABLE OF CONTENTS
(Editor-in-Chief).................................................................................................................................................162
Table of contents ....................................................................................................................................................1
NVKI-Board News (Joost Kok)........................................................................................................................... 1
The 10th NAIC in Amsterdam ..............................................................................................................................1
Session Invited Speaker (Joke Hellemons en Eric Postma)........................................................................ 1
Session Language and Linguistics 1(Ton Weijters) .......................................................................................1
Session Learning (Ida Sprinkhuizen-Kuyper)..................................................................................................
Session Programming (Maurice Bruynooghe) .................................................................................................
Session Logics 1 (Luc de Raedt) ........................................................................................................................
Session Knowledge Engineering 1 (Cees Witteveen) .......................................................................................
Session Decision Networks 1 (Maarten van Someren).....................................................................................
Session Robotics (Frans Groen) ........................................................................................................................
Session Search (Cor Bioch) ................................................................................................................................
Session Language and Linguistics 2 (Edwin de Jong) .....................................................................................
Session Agent Technology 1 (Peter Braspenning)............................................................................................
Business Session on Electronic Commerce 1 (Han La Poutré) .......................................................................
Session Evolutionary Algorithms (Dirk Thierens)...........................................................................................
Session Agent Technology 2 (J.-J. Meyer) ........................................................................................................
Business Session on Electronic Commerce 2 (Frank van Harmelen).............................................................
Session Decision Networks 2 (Joost Kok) .........................................................................................................
Session Multi-Agent Demonstrations 1 (Erica van de Stadt) ..........................................................................
Session Electronic Commerce (Gert-Jan Beijer) .............................................................................................
Session Knowledge Engineering 2 (Pierre Yves Schobbens)...........................................................................
Session Multi-Agent Demonstrations 2 (Robert van Liere) ............................................................................
Session Neural Networks (Eric Postma) ...........................................................................................................
Session Logics 2 (Yao-Hua Tan)........................................................................................................................
Machine Learning of Phonotactics (Antal van den Bosch)............................................................................... 1
The Minimum Description Length Principle and Reasoning Under Uncertainty (Ronald de Wolf)................
Taalkundige Analyse van Zakelijke Conversaties (Hans Weigand) ....................................................................
?? (Jaap van den Herik).........................................................................................................................................1
BENELOG 1998 (Sandro Etalle) ............................................................................................................................
SIKS ..........................................................................................................................................................................
ANTS’98: From Ant Colonies to Artificial Ants (Katja Verbeek) .......................................................................
Artificial Intelligence Research at CNTS (Walter Daelemans and Steven Gillis)...............................................
Artificial Intelligence and Beyond (..).....................................................................................................................
AI Abroad: A Year at the University of Calgary (Niek Wijngaards) ................................................................1
Section Knowledge Systems in Law and Computer Science (Section-editor Radboud Winkels) ....................1
PROSA- een Computerprogramma als instructieomgeving (Raf van Kuyck and Stijn Debaene)................. 1
Rechtsinformatica en Hard Cases (Ronald van den Hoogen) ..............................................................................
Power - Programma Ondersteuning Wet en Regelgeving (Arno Lodder) ........................................................1
Conferenties, Symposia, Workshops ................................................................................................................... 1
Email-adressen bestuursleden / Redactie NVKI-Nieuwsbrief / Hoe word ik lid? / Kopij /
Oude nummers/Advertenties / Adreswijzigingen ............................................................................................... 1
Joost Kok
Chairman
NVKI-BOARD NEWS
NVKI-Nieuwsbrief
163
December 1998
ASSEMBLY
The NAIC’98 was a succesful conference: 150
participants from Belgium and the Netherlands. One
participant even had to take the nightbus from Oxford
to be able to attend the event. It was a lively event,
which was well organized by the Evolutionary
Computing group of Han La Poutré at the CWI in
Amsterdam. During the conference, many interesting
papers and demo's were presented. Also, the
conference dinner was a great succes. Especially, the
Italian wine was very nice.
November18, 1998
Frank van Harmelen, Secretary
1. Agenda
2. Yearly Report
3. Financial Report
4. Selection of New Board
5. Merge with Belgian AI Society
6. NAIC Conferences
The chairman opens the meeting at 13.30h.
Yearly Report.
The chairman mentions the following points as main
activities of the board in the past year:
- Acquisition of sponsorship by Bolesian (major
sponsor) and AHOLD
- Recruited Belgian Newsletter editor (Edwin de
Jong)
- Publicity activities aimed at new Belgian
members (poster action, Web-site)
- Invited Gert-Jan Beijer from Bolesian B.V. as
informal board-member to replace Henk Venema
who has left Bolesian.
- Supervised organisation of NAIC'98
- Financial Report
At the general assembly meeting a new board has
been elected. Besides myself, the members of the nes
board are: Rineke Verbrugge, Wiebe van der Hoek,
Gert-Jan Beijer, Walter Daelemans, Eric Postma and
Yao-Hua Tan. Last year the board spent quite some
time on the Belgian-Dutch integration. The result is
reflected in the society’s new name: BelgischNederlandse
Vereniging Voor Kunstmatige
Intelligentie/ Association pour Intelligence Artificielle
Belgique-Neerlandais (BNVKI/AIABN).
Ida Sprinkhuizen-Kuyper, Henk Venema, Bernard
Manderick and Frank van Harmelen have left the
board. I would like to thank them for all the work they
have been doing. Frank van Harmelen will assist the
board (especially the new members) in the next year.
The expenses over this financial year were
somewhat lower than expected, mainly due to
underspending on subsidising workshops and
tutorials. At the same time, the income was higher
than expected, due to the acquisition of sponsorship.
This has led to a much smaller negative balance in
the budget than foreseen.
A discussion on a new format for the BNAIC was
initiated in the last newsletter and also during the
general assembly meeting at the NAIC in Amsterdam.
Many useful remarks and suggestions were made by
the participants.
In the coming year the board of the BNVKI/AIABN
will make an effort to further improve the format of
the BNAIC and to recruit new members from AIrelated fields.
The accounts committee has inspected the books,
has given some recommendations to the treasurer
and was satisfied that the books were in order. By
general acclamation the meeting accepts the
recommendation of the accounts committee to
discharge the treasurer of her duties.
The treasurer presents the new budget to the
meeting. A small negative balance is foreseen, but
this is deemed acceptable in the light of the available
reserves. The meeting accepts the proposed budget
by general acclamation.
foto
MINUTES OF THE NVKI GENERAL
(Inserted after announcement of the chairman during
the conference closing session). The chairman
proposes that the new accounts committee consists of
Catholijn Jonker, Walter Thoen en Edwin de Jong.
Berndard Manderick all step down from the board.
Henk Venema has already resigned during the past
year. The board proposes Gert-Jan Beijer (Bolesian
BV), Rineke Verbrugge (Universiteit Groningen),
Wiebe van der Hoek (Universiteit Utrecht) and
Walter Daelemans (Universiteit Antwerp/KUB) as
new members. Joost Kok is proposed as chair of the
Election of new board
Frank van Harmelen, Ida Sprinkhuizen-Kuyper and
NVKI-Nieuwsbrief
164
December 1998
board. This proposal is accepted by the meeting by
general acclamation.
buy the various publications on offer from the
BNVKI/AIABN.
The chairman closes the meeting at 14.45 h.
Merge with Belgian AI Society
The board reports the new and succesful
Dutch-Belgian collaboration in the past year, the
increase in the number of Belgian members, and the
significant Belgian participation in NAIC'98. The
board proposes to go ahead with the proposed
extension of the NVKI to include the Belgian as well
as the Dutch AI community. Some discussion follows
concerning the new name of the association. The
board proposes BNVKI/AIABN, standing for:
Belgisch-Nederlandse Vereniging voor Kunstmatige
Intelligentie/Association pour Intelligence Artificielle
Belgique-Neerlandais. In favour of this proposal is the
continuity with the current name. Some discussion
follows regarding the length of the acronym. After
some discussion, the meeting accepts the proposal of
the board with 12 in favour, 5 against and 4
abstentions.
SESSION INVITED SPEAKER
foto Luc Steels
NAIC'98
The NAIC'98 has succeeded in attracting 130
participants plus 20 student participants. There were
65 submissions of which 50 were accepted. Some 40
of the 65 were new publications not submitted
elsewhere. The new business-track on E-commerce
has attracted significant interest from commercial
partners. The meeting is very grateful to Han la Poutré
and Jaap van den Herik for their efforts in organising
the NAIC'98.
Future of the NAIC
Some discussion follows concerning the future format
of our annual conference. Some of the aims of the
conference are mentioned: presentation of scientific
results, a forum for young researchers, a meeting
place for the Belgian-Dutch AI community, and a
meeting place for academia and industry. Possible
improvements to the current format are suggested:
aiming for a special issue of a journal for a selected
set of NAIC contributions, re-integration with the
applied AI conference, the additional tutorials, and
aiming for internationally published proceedings. No
consensus was reached on the merits of the various
proposals. The board will ensure that these and other
proposals are taken into account when organising next
year's conference.
BNAIC'99
The 1999 conference will be called BNAIC'99, and
will be organised in Maastricht.
Any other business
Jaap van den Herik encourages all those present to
NVKI-Nieuwsbrief
165
December 1998
SESSION LANGUAGE AND LINGUISTICS 1
Ton Weijters
Eindhoven University of Technology
Emergence of sound systems through selforganization
B. de Boer
Vrije Universiteit Brussel
The first parallel session on Language and Linguistics
was an elaboration of invited lecture by Luc Steels:
Bootstrapping Cognition Through Language. This is
not surprising because both Luc Steels and all
speakers in this session are members of the Artificial
Intelligence Laboratory of the Vrije Universiteit
Brussel.
The third and last paper was nominated for two
NAIC’98 awards: the best paper award and the best
Ph.D. student-paper award (ultimately Bart appears
the winner of the best-paper award). His paper
describes a model for explaining the emergence and
the universal structural tendencies of vowel systems.
Both are considered as the result of self-organization
in a population of language users. The language
users try to imitate each other and to learn each
other's vowel systems. The speaker showed through
computer simulations that coherent and natural
(similar to human) sound systems can indeed emerge
in populations of artificial agents. The important
parameters in these simulation were only human
speech production characteristics, perception
characteristics and noise. Really a very interesting a
very readable paper that deserved to be awarded as
the best paper award.
The evolution of a lexicon and meaning in
robotic agents through self-organization
P. Vogt
Vrije Universiteit Brussel
The first paper discusses interdisciplinary
experiments, combining robotics and evolutionary
computational linguistics. The goal of the experiments
is to investigate if robotic agents can originate the
lexicon of language (naming objects). The lexicon is
propagated through social interactions of the
individual agent with its environment including other
agents, and a natural selection-like ontogenetic
development between the agents.
The Development of a Lexicon Based Behavior
E. de Jong
Vrije Universiteit Brussel
foto
The subject of the second presentation was strongly
related to the subject of the preceding presentation.
However, this paper investigates whether a group of
agents may develop a common lexicon; relating words
to situations (not objects). Each agent independently
decides which situations are useful to distinguish,
based on experience with the environment. The
question is of whether a process of self-organization
results in the development of a shared lexicon. The
experimental results presented by the speaker showed
that this question can be answered positively.
SESSION LEARNING
Ida Sprinkhuizen-Kuyper
Universiteit Leiden
Relational Reinforcement Learning
S. D_eroski, L. De Raedt, and H. Blokeel
Katholieke Universiteit Leuven
The goal of the first talk was to show that a
combination of reinforcement learning and relational
learning can handle new learning tasks by using a
more expressive representation language. Planning
in the blocks world served as an example.
It is not always clear for me whether the reported
experiments are frustrated or stimulated by the use of
real robots. Problems with sensing capabilities and
radio link reliability seems to take a lot of research
effort. On the other hand, the videos displayed during
the presentations were entertaining and illustrative.
Goal-driven Learning for Knowledge
Acquisition
M. Van Someren
Universiteit van Amsterdam
In the second talk Maarten van Someren gave an
overview of an architecture to generate and select
NVKI-Nieuwsbrief
knowledge acquisition operators from existing
knowledge systems and sources of knowledge (e.g.,
166
December 1998
human experts). It was quite a large system to
describe in a twenty-minute talk, but Maarten was
able to give an impression of a useful architecture for
generating knowledge systems.
Katholieke Universiteit Leuven and D.A. de Waal,
Potchefstroom University, South Africa
Some programs explore an infinite search space (e.g.
in planning) and run forever when the query is such
that the problem has no solution. The absence of a
solution can be proven by showing a model of the
program in which the query is false. This work
develops methods for searching such models and
compares the methods with program-analysis
methods which can, - as a byproduct - show failure
of queries and with some model generation methods
used in theorem proving.
Unsupervised Learning of Subcategorisation
Information and its Application in a Parsing
Subtask
S. Buchholz
Katholieke Universiteit Tilburg
The last talk of this session was about
subcategorization, i.e., a lexical property of (mainly)
verbs. Sabine showed that unsupervised learning of
subcategorization information can improve the
complement-adjunct distinction task by 1%, which is
2/3 of the improvement obtained by using this
information from tree-bank annotations, for which
much more work is necessary. The accuracy
improvement of 1% corresponds to an error reduction
of 15%.
The session on learning was an interesting opening
session of the NAIC'98.
A Lazy Logic programming Language
S. Etalle , University of Maastricht, and
F. van Raamsdonk, CWI
Lazyness - employing a call-by-need evaluation
mechanism - is a major feature in many functional
programming languages (e.g., Haskell). The notion
does not fit standard logic programming where an
answer is only returned when there are no more
subgoals to resolve. This paper proposes a lazy logic
language. It is obtained by adding two kind of
annotations to standard logic programming: requests
and strictness, and by replacing the notion of
successful derivation by the notion of adequate
derivation.
SESSION LOGIC PROGRAMMING
Maurice Bruynooghe
Katholieke Universiteit Leuven
A Framework for Bottom-up Specialisation
of Logic Programs
W. Vanhoof, D. De Schreye, and B. Martens
Katholieke Universiteit Leuven
SESSION LOGICS 1
Luc de Raedt
Katholieke Universiteit Leuven
SESSION KNOWLEDGE ENGINEERING 1
Cees Witteveeen
Technische Universiteit Delft
Traditionally, partial deduction of logic programs --- a
specialisation technique --- is performed top-down
and specialises programs by exploiting part of the
program input (query) which is already available at
specialisation time. This work proposes a framework
for bottom-up partial deduction. It is argued that such
an approach is simpler and gives better results in
situations where no partial input is available as a part
of the query, but rather as a set of predicate
definitions. For example, as the predicates defining an
abstract data type, or as the object program to be
manipulated by a meta-interpreter.
Three
rather
diverging
contributions
characterized the first session on knowledge
engineering, thereby illustrating the potential
broad scope of this field and showing the possible
benefits particular approaches in AI can have for
other areas both within and outside AI.
Characterizing approximate problem-solving
by partially fulfilled pre- and postconditions
F. van Harmelen and A. Ten Tije
Vrije Universiteit Amsterdam
Detecting Unsolvable Queries
for Definite Logic Programs
M. Bruynooghe, H. Vandecasteele, M. Denecker
The first contribution was the presentation of
Frank van Harmelen and Annette ten Tije. Their
work is an intriguing attempt to apply
approximation methods developed in AI to a
rather well-established software engineering
NVKI-Nieuwsbrief
pre-and postconditions paradigm. Where
traditionally the precondition has to be satisfied
completely in order to evaluate the fulfillment of
the postcondition, these authors proposed a more
refined framework where also a relationship can
167
December 1998
be established between partially
preconditions and postconditions.
fulfilled
Donkers, Uiterwijk and Van den Herik presented a
method for finding the best decisions in a Markov
Decision Network. This is a representation for
describing the relations between tests, actions and
states. The idea is to represent a "groups" of states as
one node with a description that abstracts from some
of the state variables. The networks are
"markovian". Donkers et al. adapt a method for
finding the optimal action or test for a POMDP
(Partially Observed Markov Decision Process) to
linear Markov Decision Networks. In this context
the term “Linear” means that the value of an episode
is a linear function of costs and rewards of
individual steps. This allows for an efficient method
to find the optimal action or test. Their example was
quite timely: optimising the control of dikes to
prevent floods.
Version Space Retraction with Integrated
Instance/Concept-Based Boundary Sets
E. Smirnov and P. Braspenning
Universiteit Maastricht
The second talk was presented by the first author and
was based on their award-winning paper at the
ECAI'98. In the paper they show that revision
methods can be incorporated in the classical version
space concept-identification framework in both an
effective and efficient way. The importance of such an
approach cannot be overlooked easily: it opens a
possible way to merge the hitherto largely isolated
(AI) paradigms of theory revision and inductive
learning.
Decision Trees: Equivalence and
Propositional Operations
H. Zantema
Universiteit Utrecht
Specification of Dynamics for Knowledge-Based
Systems
P. van Eck, J. Engelfriet, D. Fensel, F. van
Harmelen, Y. Venema and M. Willems
Vrije Universiteit Amsterdam
Hans Zantema presented results on combining
decision trees into new decision trees. These trees
are a notation for boolean functions and this raises
(by analogy) questions about detecting equivalence
between trees and combining trees efficiently into
new trees. Zantema gives an algorithm for the first
point. This turns out to be NP-complete as for
arbitrary boolean functions but the complexity is
approximately in the order of the product of sizes of
the trees. He then shows that a tree, that represents
the conjunction or disjunction of two given trees,
cannot be represented efficiently (much more
compact) than by simply adding trees. These results
are useful for the design of algorithms for refinement
or revision of decision trees.
The last talk was a final illustration of the usefulness
of, not only merging the ideas of several authors, but
also thereby comparing different knowledge
specification formalisms. In this case the problem was
to specify the dynamic reasoning behaviour of a
knowledge-based system. The method followed was a
detailed comparison of the resulting formalizations of
a specific example. The comparison was made on two
dimensions: the first one dealing with the kind of
concepts used to analyze the example, the second one
dealing with the way in which these concepts were
represented.
SESSION DECISION NETWORKS 1
Maarten van Someren
Universiteit van Amsterdam
An Algorithm for Generating Quasi-Monotone
Decision Trees for Ordial Classification
Problems
R. Pothartst and J.C. Bioch
Erasmus Universiteit Rotterdam
Solving Markov Decision Networks using
Incremental Pruning
H.H.L.M. Donkers, J.W.H.M. Uiterwijk,
and H.J. van den Herik
Universiteit Maastricht
Potharst and Bioch presented a method for learning
quasi-monotonic ordinal decision trees. In some
problems variables have ordered values and one can
assume that the values have a monotonic relation with
the criterion: at some point on the ordered scale the
criterion flips but then it will not flip back. The
method that the decision tree learner C4.5 uses cannot
be forced to construct decision trees that satisfy this
NVKI-Nieuwsbrief
constraint. Potharst and Bioch give a method that
enforces this constraint during learning. For domains
that actually have this "quasi-monotonic" ordered
structure, this gives better results than C4.5.
SESSION ROBOTICS
168
December 1998
Frans Groen
Universiteit van Amsterdam
The last paper discussed the tracking of objects
using an active camera. The presentation gave a
very nice overview of the different approaches to
visually track moving objects. In active vision the
direction in which the camera looks is
dynamically adapted, so that the moving object
stays in the focus of the camera. Active vision is
based on reactive dynamic closed-loop control
and only fast algorithms can be used for that.
Therefore, the method applied in the described
experiment is a very simple and fast one. It
calculates the difference between two successive
images to find the moving objects of which the
centre of gravity is calculated. That is used to
control the pan and tilt motor of the camera.
Results showed that a coke bottle on a string
could be followed with 20 frames a second.
Planning Strategies for Decision-Theoretic
Robotic Surveillance
N.A. Massios and F. Voorbraak
Universiteit van Amsterdam
The first paper in this session discussed a decisiontheoretic approach to planning strategies for
robotic surveillance. When a robot is employed in
a surveillance task, it has to travel the environment
to detect relevant events, such as a fire, using its
sensor that has a certain range. Based on a formal
model of the environment, different surveillance
strategies are presented and illustrated by
examples. The strategies were also implemented in
a simulator, of which results were shown. The
conclusion is that the minimum expected cost
policy behaves well in situations where the
probabilities and costs matter and early detection
is important.
SESSION SEARCH
Cor Bioch
Erasmus Universiteit Rotterdam
Solving Job Shop problems with Critical Block
Neighbourhood Search
P. van Bael and M. Rijkaert
Katholieke Universiteit Leuven
AIACS: A Robotic Soccer Team Using the
Priority/Confidence Model
J. Lubbers, R.R. Spaans, E.P.M. Corten,
and F.C.A. Groen
Universiteit van Amsterdam
In this lecture Patrick described a neighbourhood
search scheduling algorithm to solve the Job-Shop
Scheduling well-known in Operations Research. The
general problem is NP-hard and therefore many
different heuristics/algorithms has been developed
in the literature each with its own advantages and
disadvantages. The most popular algorithms are
iterative and use local search. The most important
part of these algorithms is the way neighbourhood
solutions are created and how to move to one of
these solutions. The main contribution of the authors
is the use of a neighbourhood structure based on
socalled critical blocks and simulated annealing
combined with a new iterative improvement
algorithm. This algorithm is tested on the famous 10
* 10 problem of Mute and Johnson. The experiments
show that the iterative algorithm gives very good
results and that the neighbourhood structure
performs better than the older one. Patrick also
discussed the importance of the algorithm for
industrial applications.
The second paper discussed the architecture of the
robot soccer teams of the UvA, which participated
in the Robocup'98 world championships in Paris.
It is a 3-layer structure consisting of the basic
behavior layer, the skilled behavior layer and an
action manager. The two UvA teams differ mainly
in the action manager. The AIACS team has an
action manager based on a priority/confidence
model. Decisions are made according to a
confidence measure, which is based on the
importance of action and the satisfaction of preconditions. The AIACS team became ninth in the
Robocup championship. The AIAACS team is
compared with the other UvA team: the Windmill
Wanderers, which has an action layer based on a
decision tree, and reached the third place in the
Robocup championship.
Tracking Objects Using an Active Camera
T. Belpaeme
Vrije Universiteit Brussel
Simulated Annealing with estimated
temperature: a new efficient temperature
schedule based on the notion of acceptance
E. Poupaert and Y. Deville
Université Catholique Louvain
NVKI-Nieuwsbrief
Simulated annealing is a well-known local search
optimisation algorithm. This algorithm proposed by
Metropolis simulates the behaviour of a system at a
given temperature. The neighbour of the current
169
December 1998
solution is generated randomly at each iteration. If it
is better than it is accepted as the new current
solution, otherwise it is accepted with a probabiltiy
depending on the energy difference between the two
solutions and the temperature. The most important
part of the simulated annealing algorithm is the
temperature schedule used to decrease the
temperature. The aim of this schedule is to reach
thermal equilibrium. A good schedule must be
efficient, general and robust. In his lecture Eric
Poupaert discussed several annealing algoritms and
proposed a new algorithm based on the idea on
maintaining an evolving target acceptation probability
throughout the optimisation process. To compare this
algorithm with the classical ones the authors have
implemented a platform for general local search
optimisation problems. They compared their
algorithm with three classical ones. The algorithms
were tested on two classes of problems: the
geographic travelling salesman problem and real
function optimisation. The new algorithm outperforms
the classical algorithms and compares well with the
best algorithm known of date (Saef). So the idea of an
evolving target acception during optimisation is very
promising. In the discussion generalisations to other
local search algorithms were mentioned. For example
tuning of natural selection pressure in genetic
algorithms.
Antal van de Bosch presented this paper which is
to appear in the prestigious Machine Learning
Journal. Several thorough experiments have been
performed, all of which support the claim stated
in the informative title. Instead of classical treebased methods, memory-based learning methods
were applied using Tilburg's own publicly
available TiMBL package. The experiments
included grapheme-to-phoneme conversion (GS),
part of speech tagging (POS), base noun phrase
chunking (NP), and prepositional-phrase
attachment (PP).
As an example of POS, the word 'man' in 'the old
man the boats' should be identified as a verb
phrase in this context, whereas probably in most
contexts it functions as a noun phrase. In NP, the
difference in function of the latter parts of 'she
ate pizza with a fork' and 'she ate pizza with
anchovis' should be recognized. A first question
that was investigated concerned 'editing', the
practice of removing outliers to improve
generalization, which is common in supervised
learning. This involved a measure of typicality
(from Zhang) and an indicator for class
prediction strength (from Salzberg). The
conclusion was that editing is always harmful.
Moreover, and this is the main thesis of the
paper, it turned out that removing exceptional
instances (instances with low typicality or
prediction strength) is much more harmful than
removing typical instances.
SESSION LANGUAGE AND LINGUISTICS 2
Edwin de Jong
Vrije Universiteit Brussel
Lexical Cohesion and Authorship Attribution
H. Paijmans
Katholieke Universiteit Tilburg
Furthermore, the benefits of memory based
learning in the domain of language learning were
discussed. IB1-IG, a memory-based method
based on the information gain criterion, was
found to perform better than eager learning.
Disjunctiveness, a characteristic property of
language data, explains the importance of
exceptional instances.
Unfortunately this paper was not presented. In the
paper, text-cohesion methods are applied to
authorship attribution, i.e. determining the author
of a text. Even though this was not a primary goal
in Paijmans' research, it yielded interesting results,
and sheds new light on the question of who wrote
the 'Federalist papers', a historical collection of
writings.
Argue! - An Implemented System for
Computer-Mediated Defeasible Argumentation
B. Verheij
Universiteit Maastricht
Forgetting Exceptions is Harmful in
Language Learning
W. Daelemans, A. Van den Bosch, and J. Zavrel
Katholieke Universiteit Brabant
This paper was presented by Bart Verheij.
Defeasible argumentation consists of inference
(drawing conclusions from premisses), justification
(giving reasons for a premiss) and attack (giving
counterarguments to an argument). Since no
available formalism is generally agreed upon,
NVKI-Nieuwsbrief
Verheij uses his own formalism called cumulA,
which specifies argumentation as a tree of
arguments. The speaker showed exemplary
prudence by presenting a weakness of his own
formalism, in that counterarguments constitute a
rather general class. The question with defeaters
170
December 1998
is: what is a counterargument to what.
Experiments with a graphical notation, involving
boxes drawn around the elements of an argument,
demonstrated that this issue is more difficult than
it may seem at first sight.
uitgevoerd. Een voorbeeld is een architect,
waarvan de expertise zich betrekt op het
ontwerpen van gebouwen. Algemeen gesproken
genereert
een
design
agent
een
design-object-beschrijving op basis van de
informatie die van andere agenten wordt
verkregen, en stelt de design-agent ook zelf
wederom resultaten ter beschikking aan andere
agenten.
Finally, the pros and cons of the Argue! system
were discussed. The positive aspects are that it
provides a graphical representation of attack, and
that the argumentation is free and not bounded by
the system. The less desirable features include the
lack of rules and a not so intuitive user interface.
The author concluded with the consideration that
because the field of defeasible argumentation is
still young, experiments such as these are useful
since they provide a testbed or showcase, and can
also be a practical aid. Future work may include
constructing a template-based interface, as in Tom
Gordon's Zeno system.
De bijdrage bediscussieerde hoe een generieke
design-architectuur kan worden geintroduceerd
door op geeigende wijze een generiek
agent-model (gebaseerd op DESIRE) samen te
brengen met een (verfijning van een) generiek
design model. Als zodanig verenigt het resultaten
uit het veld van Multi-Agent Systemen met
resultaten uit het veld van AI & Design. De
auteurs beogen onder andere deze design
architectuur toe te passen binnen het domein van
Electronic Commerce.
SESSION AGENT TECHNOLOGY 1
Peter J. Braspenning
Universiteit Maastricht
Meer bepaald heeft men de ontwikkeling van een
multi-agent 'broker'-architectuur op het oog,
waarin zowel 'broker'-agenten als PersonalAssistant-agenten en andere taak-specifieke
agenten huizen. Iedere 'broker'-agent kan dan
dynamische reconfiguratie van agenten uitvoeren
alsmede de realisatie/introductie van nieuwe
agenten of de modificatie van reeds bestaande
agenten. Het is duidelijk dat de daartoe
benodigde conceptuele design zeer goed gebruik
kan maken van de eerder aangeduide generieke
design agent. Aanbevelingswaardig is om op zoek
te gaan naar de originele paper wanneer men
preciezer kennis wil nemen van de synergie
tussen MAS en AI & Design.
Deze eerste sessie gewijd aan Agent Technologie
(AT) omvatte bijdragen van de Vrije Universiteit,
de Universiteit Utrecht en een gezamenlijke
bijdrage van de Universiteiten van Warschau en
Groningen. Hieronder bespreek en plaats ik elk
van deze bijdragen in dit turbulente subdomein
van de Artificiële Intelligentie.
Compositional Design of a
Generic Design Agent
F.M.T. Brazier, C.M. Jonker, J. Treur,
en N.J.E. Wijngaards
Vrije Universiteit Amsterdam
Deze presentatie door Catholijn Jonker was
gebaseerd op een abstract welke refereert naar een
uitgebreide bijdrage van dezelfde auteurs in de
‘Proceedings of the AAAI Workshop on Artificial
Intelligence and Manufacturing’ van 1998. Het
hoofdthema is "ontwerpen"; een taak die vaak
door meerdere gespecialiseerde agenten wordt
Communicatie binnen Multi-Agent Systemen
neemt de vorm aan van uitwisseling van informatie
tussen agenten. Echter, verschillende agenten
kunnen verschillende conceptualisaties van de
omgeving hanteren en dus ook een ander
vocabulair gebruiken om hun informationele
attitudes te representeren. Dat heeft tot gevolg dat
de betekenis die verzendende agenten aan hun
gegevens hechten kan verschillen van die welke
ontvangende agenten daaraan hechten. Om
NVKI-Nieuwsbrief
Constructing Translations Between Individual
Vocabularies in Multi-Agent Systems
R.M. van Eijk, F.S. de Boer, W. van der Hoek,
en J.-J. Ch. Meyer
Universiteit Utrecht
werkelijk begrepen te worden dienen beide typen
van agenten dus te beschikken over de gebruikte
semantiek van de ander, oftewel beide typen van
agenten dienen te beschikken over de
interpretaties van de gebruikte constanten en
relatie-tekens.
Er werd een logisch raamwerk gepresenteerd, dat
gebaseerd is op de mogelijke-werelden semantiek,
waarin het mogelijk is om de communicatie van
171
December 1998
agenten te modelleren die een verschillend
vocabulair gebruiken om hun informatie te
representeren. Agenten kunnen zo hun eigen
vocabulair uitbreiden met termen uit een
"vreemd" vocabulair als ook hun 'belief'
expanderen met nieuw verworven informatie.
Aldus kunnen bruggen worden geslagen tussen
individuele lexicons.
De gebruikte architectuur kan helpen bij het
verhelderen van de onderlinge afhankelijkheden
van de betrokken agenten, die, bijvoorbeeld,
bezig zijn met individuele probleemoplossing of
meer verantwoordelijk zijn voor de geëigende
organisatie
van
de
cooperatieve
probleemoplossing. De vier stadia, 1) potentiële
herkenning [van een planningsprobleem], 2)
teamvorming, 3) plan-vorming, en 4) team-actie,
van de architectuur hebben allemaal een inherent
dynamisch karakter en vereisen daarom ook
daarop toegesneden methoden. De auteurs
abstraheren van deze methoden, maar definiëren
de stadia door middel van de te bereiken
resultaten en associeren deze met de eerder
aangeduide motivationele attitudes.
Het is ook mogelijk om de 'beliefs' van andere
agenten te vergelijken met hun eigen 'belief' om zo
uit te maken of de interpretaties van de symbolen
in het vreemde vocabulair corresponderen met de
interpretatie van een van de tekens in hun eigen
vocabulair. De abstract in de NAIC Proceedings
verwijst door naar de aanzienlijk meer uitgebreide
bijdrage in de Proceedings van de 8e AIMSA
conferentie (AIMSA'98) die dit jaar is uitgegeven
door Springer Verlag.
Opgemerkt wordt, dat het thema collectieve
team-actie relatief schaars wordt bediscussieerd
in de Multi-Agent Systeem (en AI) literatuur in
tegenstelling tot de drie daaraan voorafgaande
stadia. Wanneer tijdens planexecutie een
collectieve intentie wordt onderhouden, is het
cruciaal dat agenten op gepaste en efficiente wijze
her-plannen wanneer enkele leden (agenten) de
hen toebedeelde subtaken niet vervullen. De
hoofdbijdrage is dan ook het noodzakelijke
reconfiguratie-algoritme dat wordt geformuleerd
in termen van de abstracte stadia (van de
Wooldridge/Jennings-architectuur) en hun
complexe wisselwerking. Echter, ook om andere
redenen die te maken hebben met een nadere
kennismaking met het thema motivationele
attitudes is de paper zeer de moeite waard.
A Methodology for Maintaining Collective
Motivational Attitudes during Teamwork
B. Dunin-Keplicz en R. Verbrugge
Rijksuniversiteit Groningen
Deze bijdrage addresseert het gepaste onderhoud
van individuele, sociale en collectieve motivationele
attitudes binnen een groep van heterogene agenten.
Meer speciaal gaat het om een methodologie van
teamwerk (van agenten) gericht op cooperatieve
probleemoplossing in een dynamische, constant
veranderende omgeving. In feite wensen de auteurs
met name de notie van collectieve 'commitment' te
onderzoeken, waaraan ze achtereenvolgens onderscheiden: de constructie, het onderhoud en de
realisatie ervan. Deze hoofdfases van cooperatieve
probleemoplossing worden toegewezen aan een
abstracte architectuur bestaande uit vier stadia
(afkomstig van Wooldridge & Jennings) voor
collectieve probleemoplossing. Vervolgens worden
deze vier generieke stadia en hun onderlingen
afstemming/samenwerking als uitgangspunt
gebruikt voor het opstellen van een flexibel
re-configuratie algoritme teneinde een initiële
planning voor het bereiken van een 'overall'
groepsdoel te kunnen bijstellen.
BUSINESS SESSION ON
ELECTRONIC COMMERCE 1
Gert-Jan Beijer
Bolesian
A formal specification of automated auditing of
trustworthy trade procedures for open
electronic commerce
R.W.H. Bons, F. Dignum, R.M. Lee,
and Y.H. Tan
Erasmus Universiteit Rotterdam
The paper presented by Yao-Hua Tan consisted of
two innovative ingredients: the application of
existing trade procedures in an electronic
commerce (EC) environment and the use of
different kind of logics to prove the
trustworthiness of the first. The first part of the
presentation focussed on current trade procedures
and the different ways companies handle risk while
NVKI-Nieuwsbrief
trading. In an EC environment, with trusted
third parties and EDI as important cornerstones,
companies will only trade goods and services if
the same level of security can be offered. For
example, current e-mail services do not offer any
of the following basic required functionality for
trushworthiness: guaranteed, on time, one-time
sending (no more, no less) that is traceable and
172
December 1998
management. The solution had to take the following
into consideration:
1. an employee is fully responsible for his or her
own agenda,
2. an employee can change or refuse changes to
his or her agenda, and
3. changes in the agenda should reflect different
individual settings, like working hours.
archived.
The application of existing procedures requires the
following:
1. formal representations of trade principles and
procedures, and
2. formal representations of (automated) audit
principles and procedures
In the architecture different agents are discriminated:
1. client,
2. call centre agent,
3. work manager (for managing all agenda’s, one
work manager for every local office),
4. personal assistent (one for managing the agenda
of every employee), and
5. employee.
The techniques that were used to realise this consisted
of different kind of logics:
Epistemic, dynamic, deontic, and temporal logic
in which primitives can be found to represent:
a) directed obligations,
b) general obligations,
c) dynamic logic,
d) action performance, and
e) temporal to-do obligations.
The advantage of multi-agent technology lies in the
fact that in this case, scheduling is a distributed
effort. The prototype was developed based on
principled design at the conceptual level, using a
compositional development method called DESIRE.
The specification method has the appeal of a generic
method for proving the trustworthiness of existing and
new (still to be discovered) trade procedures. In
addition, it visualised where the aspect of trust is
allocated.
A logical model of directed obligations and
permissions to support electronic contracting
Y.H. Tan and W. Thoen
Erasmus Universiteit Rotterdam
Distributed scheduling to support a call center: a
co-operative multi-agent approach
F.M.T. Brazier, C.M. Jonker, F.J. Jungen,
and J. Treur
Vrije Universiteit Amsterdam
In this lecture Y.H. Tan elaborated his ideas on
representing directed obligations in a logical model.
It can be seen as a theoretical basis for the first
lecture summarised in this track. It was the most
theoretical lecture from the three.
In the paper and in the presentation, a prototype
system is described that was developed on behalf of,
and in cooperation with, the Rabobank. During their
search to add value to customers, the Rabobank
decided to make their Call Center available 24 hours
per day. Customers should always be able to make
appointments, whether employees are there or not.
Processing a request implies deciding on a procedure
to follow and schedule this procedure and the
necessary resources. Given the decentralised structure
of the Rabobank, this implicated the support of some
sort of distributed information technology. The type
of support they decided to develop was based on
intelligent (multi) agent technology that takes over
business after regular office hours, especially agenda
Logical preliminaries to represent such models come
from standard deontic logic and action logic from
Santos and Carmo. The best attempt from to represent
this model comes from Herrestad and Krogh, but even
their approach has some disadvantages. An alternative
could be found in the so-called institutional power
operator from Jones and Sergot. These observations
have lead to the conclusion that for logically
representing models for electronic contracts some new
primitives should be used, for instance, on behalf of
NVKI-Nieuwsbrief
A formal model of a contract for electronic
commerce and transactions is useful and necessary
for automatic processing and auditing of such
transactions. Just like in the real world, next to the
prize, the payment and delivery conditions are
important in electronic trade. Directed obligations
play an important role herein. An simple example is:
a seller’s obligation to deliver goods is directed from
the seller to the buyer.
the notion of directed permissions. Permissions, on
the other hand, can have more than one meaning:
1. being allowed to without formal approval,
2. being allowed to after formal approval, or
3. involuntary consequence (if one does not bring
you your stuff, you have to get it yourself).
More research into this area is definitely needed.
SESSION EVOLUTIONARY ALGORITHMS
173
December 1998
Dirk Thierens
Universiteit Utrecht
Solving 3-SAT using Adaptive Sampling
M.B. de Jong and W.A. Kosters
Universiteit Leiden
Building Block Filtering and Mixing
Cees van Kemenade
CWI
In the last paper presented, the authors claim that it
is beneficial to study Evolutionary Computing and
Neurocomputing in a new unifying framework
called Adaptive Sampling. Unfortunately they spend
little time at discussing this framework, but instead
immediately propose two algorithms - inspired by
the Adaptive Sampling view - for solving 3-SAT
problems. Experimental results indicate that one of
the algorithms outperformes the SAW-ing
evolutionary algorithm (also discussed in the
previous talk) that was the best incomplete 3-SAT
method the authors could find in the literature.
The first lecture was given by the organizational chair
of this 10th NAIC conference, Cees van Kemenade
(CWI). Building blocks are groups of non-linear
interacting bits that have to be present all together in
order to find optimal solutions. When these bits are
not closely positioned on the representation string and
no substrings give correct partial information about
the value of the building block, then standard genetic
algorithms will not be able to find the optimal
solution. In this paper a non-standard genetic
algorithm is outlined that deals with this problem.
First, building blocks are explicitly identified, and
next they are juxtaposed - or mixed - to create optimal
or near-optimal solutions. Experiments are conducted
on artificial problems specifically designed to test the
limits of standard GAs, and results show the potential
of the proposed algorithm.
BUSINESS SESSION ON ELECTRONIC
COMMERCE 2
Frank van Harmelen
Vrije Universiteit Amsterdam
The second session on E-commerce consisted of two
presentations.
Solving Binary Constraint Satisfaction Problems
using Evolutionary Algorithms with an Adaptive
Fitness Function
A.E. Eiben, J.I. van Hemert, and E. Marchiori
Universiteit Leiden and A.G. Steenbeek, CWI
The POWER project: ProgrammaOndersteuning in Wet en Regelgeving
T.M. van Engers
Belastingdienst
The second paper presented joint work of the
Universiteit Leiden and CWI and was given by Jano
van Hemert. The paper experimentally compares three
evolutionary algorithms (COE, SAW, and MID) on a
test suite of randomly generated binary Constraint
Satisfaction Problems with finite domains. All three
algorithms adapt the fitness (penalty) function during
the search process. While the performance of the
Co-Evolutionary (COE) approach is rather
unsatisfactory, the other two algorithms seem to trade
off success rate versus computational effort. The
Microgenetic Iterative Descent (MID) performs best
with respect to success rate, but the Stepwise
Adaptation of Weights (SAW) is only slightly worse
on harder problems, and achieves this with less fitness
evaluations.
- to help in checking the consistency of legislation,
- to simulate the effects of introducing new
legislation, and
- to make explicit existing internal knowledge
within the revenue service.
The first was by Mr. van Engers from the Dutch tax
office. He presented the POWER project. This
project tackles the following problems:
- tax regulations are often difficult to understand
for citizens ,
- expert knowledge on tax legislation is sometimes
scarce, even within the revenue service, and
- it is often hard to predict the results of
introducing new legislation.
To tackle these problems, the POWER project has
the following aims:
- to make legislation more transparent,
- to enable a uniform interpretation of legislation,
following block-diagram summarises the steps that
must be taken in the POWER project:
The POWER project is a collaboration between the
University of Tilburg, the University of Amsterdam,
the Ministry of Justice and the Treasury. The
NVKI-Nieuwsbrief
174
December 1998
set of attribute-value pairs plus possibly relations
constraining the possible values. Van Rijn presented
an on-line demonstration of some of the software
developed at Data Distilleries for analysing
credit-risks in banking and in cross-selling products
to customers (cross-selling is a nice word for selling
customers a product they didn't ask for). After the
presentation, a very interesting and lively debate
followed on the legal and moral implications of
these techniques. Is it allowed to use data for
purposes for which the data was not originally
volunteered? What are the moral and legal
implications of judging individuals on the basis of
statistical profiles? In this reporter's humble opinion
these serious issues were not satisfactorily dealt with
by the speaker.
Plaatje
Van Engers finished his interesting presentation by
discussing a number of the hurdles that must be taken
by the POWER project:
- the translation between the source legislation and
a more formal representation must be both
bi-directional and transparent for all parties
involved,
- the programme will have to operate in a culture
that is not very IT oriented,
- the programme will have to be integrated with
other areas of legislation, even though these areas
often use very different conceptualisations (e.g.,
civil law vs. criminal law), and
- results of the programme will have to be
integrated in a traditional IT environment .
SESSION DECISION NETWORKS 2
Joost Kok
Universiteit Leiden
Customized E-Commerce by Data Mining
F. van Rijn
Data Distilleries
Variational Belief Networks for
Approximate Inference
W. Wiegerink and D. Barber
Universiteit Nijmegen
The second presentation was by Mr. van Rijn from
Data Distilleries, one of the many recent CWI spin-off
companies. Data Distilleries was described as a hightech market leader in the data-mining area. Van Rijn
described how many companies these days have large
and fast growing data-bases at their disposal. The
main task of data-miners is to exploit this information.
Traditionally, statistical techniques were used for this
task. The main drawback of these techniques is that
they can only be used to confirm or deny an already
formulated hypothesis. They do not help in
formulating the hypothesis itself. Data-mining
techniques, on the other hand do present new (and
often unexpected) hypotheses to the user. These
techniques often generate and test many thousands of
potential hypotheses, of which only the most
interesting ones are ultimately presented to the user.
In this context, a hypothesis often has the form of a
relationship between a particular customer profile and
a customer's behaviour, where a customer profile is a
The first paper in the session on Decision
networks was presented by Wim Wiegerinck of
the Foundation for Neural Networks, University
of Nijmegen. Approximate techniques are needed
for belief networks with larger clique sizes. The
paper proposes to use mean-field theory. At first
sight, mean field theory is not suited because only
completely factorized models can be used. The
paper shows that it is also applicable to a larger
class of models. One of the advantages is that a
lower bound on the likelyhood is obtained.
Variational Approximations in a Broad and
Detailed Probabilistic Model for Medical
Diagnosis
W. Wiegerinck, E. Ter Braak, W. Ter Burg, M.
Nijman, Y. O, J. Neijt, and H. Kappen
Utrecht University Hospital
networks. The second part of the presentation
consisted of a demo of the system. It was
interesting to see that such a large network could
be manipulated on-line.
The second paper was co-presented by Wiegerinck
and Ter Braak. Ter Braak is affiliated with the
department of internal medicine of the Utrecht
University Hospital. A larger belief network for
medical diagnosis was presented. It consisted of
two parts: a parent network and a child network
conditioned on the state of the parent network. The
nodes in the child network are independent.
Similar techniques to the ones used in the first
paper in the session were used to attack such
NVKI-Nieuwsbrief
Representation and Learning Capabilities of
Additive Fuzzy Systems
D. Ettes and J. van den Berg
Erasmus Universiteit Rotterdam
The third paper was presented by Ettis of the
175
December 1998
via the World Wide Web. People interested in
performances and the theater's program, can have a
preview of the Hall and information about the
performances at there own homes.
Erasmus University. An interesting comparison
between Feedforward Neural Networks (FFN) and
Additive Fuzzy Systems (AFS) was given, and
some of the theory about representation and
generalization for FFN was lifted to AFS. The
representation capabilities of several AFS were
discussed and a fuzzy decision tree algorithm was
proposed. A distinction in the fuzzy rule base is
made by distinguishing the more important fuzzy
rules. It still has to be tested in practice whether it
can be used as a tool for data mining.
The demonstration showed how users can exploit
the Music Hall and experience the view on the stage
from particular seats in the hall. One can imagine
that this kind of visual information indeed adds to
the information that can be acquired by a telephone
call to the Theater. On the other hand, it is
questionable whether dragging the mouse to
navigate though the entrance and up the stairs of the
building is the obvious interaction model. From the
discussion after the demonstration became clear that
perfectionizing this particular interaction had not
been the primary research goal of the project.
Instead, the emphasis of the research had been on
the interactions with Karin. The demonstrations
showed some examples of more or less complex
natural language interactions with Karin. Using
natural language a user can ask information on
different performances and make reservations.
SESSION MULTI-AGENT
DEMONSTRATIONS 1
Erica C. van de Stadt
WizWise Technology
To promote the feeling with real applications, last
NAIC's program included System Demonstrations in
addition to paper contributions.
This short discussion reports on two systems
demonstrations: A Virtual Reality Environment for
Information and Transaction authored by Hendir
Handorp and Anton Nijholt, both from the University
of Twente. And a second one entitled A Model for
Distributed Multi-Agent Traffic Control from
Christine Bel, Wim van Stokkum and Rob van der
Ouderaa, all authors work at Kenniscentrum CIBIT.
The second demonstration showed a graphical
representation of a railway traffic simulation. The
underlying system uses distributed real time traffic
control and is based on multi-agent concepts. The
agents in the model are sticked to the trains and the
elements of infrastructures (crossings). In case
where trains arrive at the same crossing at the same
time, trains and elements of infrastructure negotiate
to form a plan for passing the railway crossing. The
planning is based on quality criteria formulated in
terms of the interest (satisfaction) of the passengers.
The model of control is based on local optimization
between arriving trains and an element of
infrastructure. The demonstration showed a
prototype implementation of the system for the
railway section Arnhem-Utrecht.
The research underlying the first demonstration was
rather on interaction modalities than on virtual reality
modeling. Research on interaction modalities includes
natural language interactions with computerized
agents (by keyboard or speech). In the demonstrated
system the agent was not merely a computer module
but instead, this agent was personified into a woman
called Karin. Karin (modeled by VRML
specifications) is situated in a virtual version of the
Muziek centrum of Enschede. (Which I happen to
know and indeed could recognize in de moving
pictures presented.) The application can be accessed
Compared to a more traditional central control model,
the results of the simulation show that on the
passengers satisfaction criteria, the prototype-systems
scores increase. Unfortunately, the demonstration did
not address scalability issues. In effect, the conditions
for or the situations in which the proposed local
optimization strategy will be successful did not
became clear.
editions of the NAIC will continue to reserve
program space for demonstrations and applications.
SESSION NEURAL NETWORKS
Eric Postma
Universiteit Maastricht
Environment learning and localization
in Sensor Space
B. Kröse
Universiteit van Amsterdam
Having attended both system demonstration sessions
my personal opinion is that system demonstrations
certainly fit in the overall program of NAIC. AI
techniques applied in real situations make the
theoretical concepts concrete and I hope that future
NVKI-Nieuwsbrief
The first contribution to the Neural Networks
176
December 1998
session focussed on the visual navigation of a robot
vehicle through a virtual room. Traditional approaches
to this problem represent the environment in some
manner. Often, a geometric model of the world is
constructed and the location of the vehicle is tracked
by keeping a record of the number of revolutions of
the wheels. An estimate of the position in the world is
then obtained from the distance traversed and the
number and directions of turns taken from some
starting point. However, measuring the distance in this
way is not very accurate. Therefore, an additional or
alternative source of information is required for
estimating the position. After some initial experiments
with range sensors feeding their outputs into a
Kohonen map, Kröse turned to appearance modelling
of the environment. Appearance modelling is inspired
by the work of Murase and Nayar, who projected 2D
views of 3D objects onto a low-dimensional shape
space. They showed that novel views of an object can
be recognised by matching them to their nearest
neighbours in shape space. Appearance modelling in
navigation proceeds in a similar fashion: from a large
number of positions, snapshots of the environment are
taken and stored along with their location.
Subsequently, the vehicle is placed in an arbitrary
position. Again, snapshots are taken and compared to
the stored snapshots. An estimation of the position is
obtained, for instance, by interpolating between the
positions of nearest-neighbouring stored snapshots.
a vehicle was placed at a few hundred different
positions. From each position, snapshots were taken
and stored as images. To reduce the dimensionality
of the images, they were mapped onto their principal
components. A subset of the components served as a
(50-dimensional) subspace on which the images
were projected. In the test phase, the vehicle was
placed at 100 random positions. At each position, a
snapshot was taken and the probability of being in a
certain location was computed. If the probability
was below some threshold, the camera was turned to
a suitable position to take a new snapshot taken. The
results proved the feasibility of the approach. When
allowing multiple snapshots, in about 90% of the
cases a correct localisation was obtained.
The approach of Kröse appears to be a viable
alternative to traditional model-based approaches. In
the near future experiments will be performed using
a real robot placed in a real-world environment.
Given the presence of uncontrollable factors such as
variations in lighting conditions and inaccuracies in
the positions of the camera and vehicle, the move
from the virtual to the real world will not be without
difficulties.
Maximum Likelihood Weights for a Linear
Ensemble of Regression Neural Networks
M. van Wezel, W. Kosters, and J. Kok
Universiteit Leiden
Experimental studies were performed in a virtual
replica of Kröse’s lab. Within this virtual environment
By combining the outputs of multiple networks
trained on the same task, the overall generalisation
performance may be enhanced considerably. The
generalisation performance of a collection of neural
networks, a so-called ensemble, can be optimised by
appropriately weighting their outputs. Michiel van
Wezel presented work that attempted to find an
appropriate set of weights for the linear combination
of networks in an ensemble. He discussed three
general ways of defining the weights, i.e., ‘bagging’,
‘bumping’, and ‘balancing’. In bagging, all weights
have equal values whereas in bumping there is exactly
one non-zero weight. Balancing is the approach in
which an optimal set of weights is determined by a
quadratic programming technique. Van Wezel
presented an alternative approach based on the
maximum likelihood principle. After some elaborate
derivations, he arrived at what he claimed to be an
effective formula for determining the ensemble
weights. The determination required the use of a
conjugate gradient technique, the Bates-Granger
technique, or the simulated-annealing technique.
presentation corroborated the claim of effectiveness.
Maximum likelihood weighting outperformed the
other methods on sets of marketing data, stock
exchange data, and a wave-height data. For large
ensembles, the proposed method largely outperforms
the bagging method. Bumping and balancing
performed better than bagging, but worse than the
maximum likelihood weighting.
Interpreting Knowledge Representations in
BP-SOM
T. Weijters and A. van den Bosch
Technische Universiteit Eindhoven, Katholieke
Universiteit Tilburg
One of the disadvantages of multilayer perceptrons
(MPs) is that they are like black boxes. It is very
hard to interpret the internal representations of a
trained network. The reason is that, unlike
rule-based systems, the representations are
distributed patterns of weights, rather than sets of
rules. In the final talk of the Neural Networks
session, Ton Weijters presented his approach to
opening the black box of his BP-SOM network.
The empirical evidence presented at the end of the
NVKI-Nieuwsbrief
177
December 1998
help of some simple examples, Weijters showed that
the SOM representation can be translated into a set
of understandable IF-THEN rules. As a
consequence, the black box is opened and the hidden
strategies used by the trained MP can be uncovered.
BP-SOM is a hybrid network composed of a
multilayer perceptron and a Kohonen Self-Organizing
feature Map (SOM). The network integrates the
error-based learning of the MP with the
similarity-based learning of the SOM. The integration
is achieved by connecting a SOM to the hidden layer
of the MP. During training, the SOM receives the
activation patterns of the hidden layer as inputs. In
addition, the SOM influences the formation of the
hidden patterns through feedback connections to the
hidden layer. The overall effect of the SOM is that it
reduces the dimensionality of the hidden
representations by using information from the hidden
representations themselves. Previous studies have
shown BP-SOM to outperform both standard MP and
MP with weight decay on a variety of tasks.
Although the three contributions to the Neural
Networks session were entirely different in scope
and presentation, they provided an interesting
overview of neural-network research. Now that the
hype is over, theoretical and application-oriented
neural network research form an indispensable part
of artificial intelligence. The distribution of
neural-network research over disparate domains
such as statistics, robotics, and information retrieval
provides a case in point.
SESSION LOGICS 2
Yao-Hua Tan
Erasmus Universiteit Rotterdam
In his presentation Weijters concentrated on
interpreting the representations arising in BP-SOM
during learning. Earlier observations suggested that
the BP-SOM representations are highly structured. In
particular, the SOM-part appears to organise the
examples in the data set into clusters of similar
representations belonging to the same class. With the
The paper was presented by André Bos. Many
problems in AI are intractable. One way to deal with
this problem is so-called Knowledge Compilation
(KC). The basic idea of knowledge compilation is that
a distinction is made between the fixed part of a
problem and a varying query part. The fixed part can
then be computed off-line and reused for specific
queries, hence reducing the actual run-time
complexity. Bos gave a nice example of knowledge
compilation. In expert systems for the process
industry like oil refinery or a chemical plant the actual
model of the process is usually quite complicated.
Instead of reasoning with the full model each time a
query is computed a table can be computed off-line
with input-output behaviour of the process. When
querying the process this table is used instead of
recomputing every time the whole model. In strict
knowledge compilation there are specific constraints
such as (1) that the result of the off-line computed
fixed part should be polynomial-space bounded, and
(2) that the on-line query reasoning can be done in
polynomial time. The problem with these constraints
is that they are so strict that they exclude some of the
most obvious applications of KC. For example,
neither clausal inference nor logical abduction satisfy
these constraints. In the presentation was explained
how the first constraint can be relaxed to obtain a new
type of knowledge compilation, the so-called nonstrict knowledge compilation (NKC) such that clausal
inference as well as abduction can be compiled in
NKC. However, relaxing the constraint does not solve
NVKI-Nieuwsbrief
Non-Strict Knowledge Compilation
A. Bos and C. Witteveen
Technische Universiteit Delft
all practical problems. It was discussed that in NKC
problem compilation critically depend on the
construction of compilation algorithms that can
handle the size of the set of prime implicates of a
real-life problem, which is known to be hard to
compute. This will be the topic of further research
carried out by the authors.
Generated Preferred Models and Extensions of
Nonmonotonic Systems
J. Engelfriet and H. Herre
Vrije Universiteit Amsterdam
The paper was presented by Joeri Engelfriet. Stable
model semantics is a well-known semantics for
normal logic programming and it can also be used to
provide a semantics for normal default logic via a
simple translation from default logic into logic
programming. Normal logic programming and
default logic cannot be used to represent disjunctive
information. Special extensions of logic
programming and default logic, the so-called
disjunctive logic programming and disjunctive
default logic, have been developed to represent
disjunctive information. Herre and Wagner
introduced a special type of stable models, the socalled stable generated models that can be used to
provide a semantics for these disjunctive nonmonotonic formalisms. In the presentation it was
explained that the stable generated models
correspond with the temporal models, which were
178
December 1998
introduced by Engelfriet and Treur to provide a
semantics for default logic. In particular, it was shown
that the temporal models can be used to model
disjunctive default logic in a similar way as was done
with the stable generated models. Also the relation
was explained between temporal models and semiconstructive extensions in default logic. In particular,
it was shown that temporal models model very
precisely the step-wise construction of semiconstructive extensions. Engelfriet presented the
paper in an original way. First, he presented the
conclusions, and then he reasoned backwards to the
introduction of the paper, explaining for each step
what was a necessary prerequisite to arrive at this
The paper was presented by Shan-Hwei
Nienhuys-Cheng. Inductive logic programming (ILP)
is a type of machine learning that is based on
so-called refinement operators. The basic idea of
downward refinement operators is that you start with
the most general theory, namely a tautology that
implies every sentence, and you gradually refine the
most general theory by specializing it until it fits with
the examples that have to be learned. Technically, this
specialization is the result of a substitution applied to
a formula. An example of such a specialization is that
you refine a universally quantified sentence like
∀xB(x), which means 'x is a block', to B(a) if it is
known that the object a is a block. Since ILP has
usually been implemented in Prolog it has been
developed mainly for the underlying logic of Prolog,
namely Horn clause logic. Horn clause logic is a
fragment of first-order predicate logic. One of the
restrictions is that Horn clauses are universally
quantified formula that do not contain existential
quantifiers. In this presentation it was explained how
refinement operators can be generalized to Predicate
Calculus in conjunctive normal form, which is
equivalent to full first-order predicate logic. One of
the typical problems that had to be solved is that
substitutions on existentially quantified sentences
sometimes lead to generalizations instead of
specializations. For example, if we substitute the
variable y for x in the matrix of the formula ∀x∃yB(x,
y), then the result of the substitution ∀xB(x, x) is
more general than the first formula. The authors
adapted the definition of substitution in such a way
that when applied to an arbitrary PCNF formula the
result is always a specialization. Some of the
adaptations are quite complicated, and the authors
found some very ingenious solutions for this
adaptation. By redefining downwards refinement to
PCNF they showed the way how the rich tradition of
ILP machine learning can be applied to full first-order
predicate logic. The talk was presented by Shan-Hwei
Nienhuys-Cheng in a very lively way. We are very
NVKI-Nieuwsbrief
step. Probably this is the best way to present
technical papers, because you do not have to work
your way through technical details before you can
grasp the conceptual ideas.
Substitutions and Refinement operators
for PCNF
S.-H. Nienhuys-Cheng, W. van Laer,
and L. de Raedt
pleased to see such an active and creative AI
researcher back at the NAIC after her illness of last
year.
SESSION AGENT TECHNOLOGY 2
J.-J. Ch. Meyer
Universiteit Utrecht
A Formal Embedding of AgentSpeak(L) in
3APL
K. Hindriks, F. de Boer, W. van der Hoek,
and J.-J. Meyer
Universiteit Utrecht
The varied session started with a presentation of
Koen Hindriks on A Formal Embedding of
AgentSpeak(L) in 3APL. After giving a short
overview of the agent language AgentSpeak(L)
developed by Anand Rao and the agent language
3APL (triple-A-P-L) that we have proposed
ourselves, Koen showed (or rather made
plausible in a rather audience-friendly way,
leaving the technical details to the reader of the
paper) that it is possible to embed the rather
involved language AgentSpeak(L) into a subset of
the 'much simpler' language 3APL. In order to
make precise in what sense this embedding /
simulation works, a notion of bisimulation
(originally stemming from both concurrency
theory and modal logic) was introduced, ensuring
that the embedding is faithful with respect to
observable steps of computation. (Without this
important restriction the embedding would be
rather trivial since both languages are Turingcomplete.) As a consequence it appeared that
AgentSpeak(L) could be simplified without
sacrificing expressive power. In reply to a
question of Joeri Engelfriet about the
involvement of the simulation, i.e., a terribly
complicated code as a result of the translation,
Koen answered that in fact the resulting 3APL
179
December 1998
code is generally much simpler than the original
code in AgentSpeak(L), since in 3APL various
implementation details (regarding e.g., stacks of
plans) are suppressed.
was clear that the presenter did not want to
adhere to any 'strong' mentalistic interpretation
of the resulting agent, I myself thought that (a
posteriori) it was perhaps still possible to describe
the behaviour of the agents in a BDI-like way, the
agent's agenda providing already for an obvious
I(ntention) aspect. At least it would be interesting
to look at the agent in this way, since it may yield
a neat way to reason about its behaviour. During
the discussion afterwards Frank Dignum raised
the question whether the work reported was
really that specific for "a corporate
environment" as was suggested by the title of the
talk. Unfortunately this interesting discussion had
to be ended by the chairman for reasons of time.
Agent-Based Information Gathering in a
Corporate Environment
A. Wan, F. Wiesman and P. Braspenning
Universiteit Maastricht
The second paper was presented by Fred Wan.
The corporate environment mentioned in the title
of the talk pertained to KPN, and the paper
described an agent system for data-mining and
knowledge discovery for this company, where a
'weak agent' (i.e. non-mentalistic) position was
taken in the design of the agent system. Fred
described the architecture of the system in some
detail. In this architecture the agent's autonomy is
realised by manipulation of an agenda according to
priority ratings on relevance and through a sparse
need of user interaction. Pro-activeness is realized
by means of various retrieval and filtering actions
not explicitly instigated by the user. Although it
The last paper of the Agent Technology 2 session
was written by a "whole bunch" (as I somewhat
disrespectfully announced, but no disrespect
intended!) of authors, viz., a mix from the VUA
and Karlskrona/Ronneby University. It was
presented by Catholijn Jonker who had quite a
busy day that Thursday (I witnessed at least 3
excellent presentations by her that day). Catholijn
illustrated a method of compositional designing
multi-agent systems in DESIRE developed at the
VU by a very nice real-life example, viz. that of
negotiation in the context of electricity companies
dealing with the balancing of the load of the
network. Here the company is willing to cut down
electricity prices if customers are willing to avoid
use of electricity during certain peak hours. The
negotiation that is performed between company
and customers can be done automatically by the
multi-agent system designed, and I believe it is
actually running in Sweden. An important feature
of the design & verification method presented is
that the system can be viewed at several levels of
process abstraction, and that properties at
different levels of process abstraction are involved
in the verification process.
Compositional Design and Verification of a
Multi-Agent System for One-to-Many
Negotiation
F. Brazier, F. Cornelissen, R. Gustavsson, C.
Jonker, O. Lindeberg, B. Polak, and J. Treur
Vrije Universiteit Amsterdam and
Karlskrona/Ronneby University
Promotor: Prof.dr.ir. J.A. Nerbonne
review by Antal van den Bosch
Tilburg University
On October 19, 1998, Erik Tjong Kim Sang
successfully defended his thesis "Machine Learning
of Phonotactics" in Groningen. It concluded an
eight-year period of research which Erik spent partly
as a researcher in Groningen, and partly as a
researcher/lecturer at Uppsala University in Sweden.
Prof. John Nerbonne (Groningen University) was the
promotor, and the thesis committee consisted of
Anton Nijholt, Ger de Haan, and Nicolay Petkov. At
the defence, the promotion committee was
furthermore strengthened by Gertjan van Noord,
Ronan Reilly, Walter Daelemans, and Ben
Spaanenburg.
Erik's thesis concerns the application of a variety of
machine learning techniques to the problem of
modelling the phonotactics of a language. The
phonotactics of a language is the set of constraints
on what constitutes a valid syllable (or a valid word,
made up of one or more syllables) in that language.
The phonotactics of English allow "pand" to be a
possible English word, and would reject "padn". The
phonotactics of languages differ; "mloda" would not
be an acceptable Dutch or English word, but it is in
Polish.
MACHINE LEARNING
OF PHONOTACTICS
Ph.D. dissertation of Erik F. Tjong Kim Sang
NVKI-Nieuwsbrief
180
December 1998
How does one model the phonotactics of a language?
One possibility is to sit down and think up all the
rules and constraints that could discriminate between
allowed and disallowed syllables and words. This has
been done for many languages by linguists, which has
resulted in many good generic descriptions of
language-dependent but also some language-universal
phonotactics. The approach taken in this thesis differs
from the traditional linguistic approach in that it
describes methods to derive phonotactical models
automatically from examples of existing words. The
methods used in the thesis are categorised under the
Because it is obvious that such choices may affect
learning and the success of it, and because one would
like to have a grip on their effect, the thesis covers a
matrix of combinations of representation choices. It
comes as no surprise that this matrix dictates the
structure of the body of the thesis in which the
methods are applied to the problem. The first
dimension in the matrix is formed by the three
learning methods (HMM, SRN, and ILP), described
respectively in chapters 2, 3, and 4. The second
dimension concerns the lowest-level representation of
the data, which is alternated between spelling and
phonology. While spelling is of course a common
format for words, phonology (pronunciation) is
commonly and generally considered to be the actual
level at which phonotactics operate (its spelling
counterpart being referred to sometimes as
graphotactics). Spelling is not phonology; in Dutch
and English it has grown to be a distorted reflection of
it because it has developed at a different pace, and
because it also represents phenomena that are
unrelated to (and sometimes even defy) pronunciation:
morphology (e.g., the "-dt" and the conjunction "-n-"
in Dutch), etymology (the Dutch word "synthese"
mirrors the non-Dutch spelling of old non-Dutch
lemmas; it is not written as "sintese"), and historical
word-image conservatism (the old "-isch" and "-lijk"
at the end of Dutch words). Although I would take
these aspects of spelling to be a reason to discard it
from the study and only focus on phonology, Erik
simply continues attempting both. In the concluding
chapter, he is "surprised" to find that learning
phonotactics from phonology was easier than learning
it from spelling. He shouldn't have been. A general
finding in the literature (cf. Selkirk, 1984; Blevins,
1995) is that almost all syllables in languages like
English and Dutch adhere to the sonority principle.
When people speak, their vocal tract opens and closes
to the rhythm of syllables, which is pretty directly
correlated to the sonority of the uttered phonemes.
Sonority peaks at vowels, and is lowest around
syllable boundaries (with only the /s/ as the occasional
odd one out). This simple regularity already governs a
major part of phonotactics; "padn" is not valid
NVKI-Nieuwsbrief
umbrella term "Machine Learning", and include
Hidden Markov Modelling (HMM), Simple
Recurrent Networks (SRN) and Inductive Logic
Programming (ILP). Basically, all three methods are
given examples of Dutch monosyllabic words to
derive their model of Dutch phonotactics from. How
these "examples" are represented depends not only
on the method, but also on several choices the
experimenter can make; for example, whether to
include some linguistic abstraction, estimated by
linguists to be useful in a phonotactical model.
because the "d" has a lower sonority than "n" while
in a syllable's coda sonority is only allowed to
decrease gradually (which is why the two-syllable
"padna" would be OK again - the "n" is the onset of
the second syllable and the sonority rhythm is not
violated). Of course, this "simple" general principle
assumes a non-trivial abstraction - sonority.
The third dimension is whether the low-level
representation is augmented with linguistic expert
knowledge. Learning may be facilitated and the
result of learning may be more successful when at
the onset of learning some general abstract
knowledge about phonotactics is available to the
learner, e.g., by representing the example data in a
format that focuses on the aspects of the data
linguists claim to be relevant. The knowledge that is
thus included in half of the experiments is a general
model of the phonological structure of the syllable.
Of course, it would have been a give-away to
explicitly tell the learner about real phonotactic
constraints before it starts learning phonotactics; the
chosen syllable model cleverly represents
knowledge that is just below that level of specificity.
It does not tell the learner what phonotactics is;
however, it brings the learner quite close to the point
at which it would be daft to be unable to see the
solution.
The thesis describes empirical work, and everything
follows from the evaluation of the experimental
results. Each experiment, in which one learner is
trained on 5577 Dutch monosyllabic words, results
in some way in an automatically learned model of
Dutch phonotactics. To evaluate each experiment,
Erik measures (1) the model's acceptance rate (in
percentages) of 600 Dutch words that were not in the
learning material, and (2) its rejection rate on 600
implausible words such as "ywua" and "odhnf",
generated by randomly sticking letters to each other
(with probabilities derived from the real-word list).
Of course, any good model of phonotactics should
be able to reject alien words or syllables. Reference
is made to the very classic discussion on the
181
December 1998
possibility of learning language only on the basis of
positive examples, without negative examples: Gold
said in 1967 that it was impossible to build a perfect
model for a general (i.e., not necessarily natural)
language which can generate infinite numbers of
strings (e.g., words) by only looking at positive
examples of the data; it can be expected to perform
badly on rejecting alien strings. Erik's learners are
learning only from positive data, so the fate predicted
by Gold is imminent. However, it is encouraging that
in real life, children pretty much lack and ignore
negative examples, while they all tend to succeed in
learning quite a bit of language. Innate knowledge,
Erik postulates, may be their key aid, and the
inclusion of linguistic background knowledge, one of
the experimental dimensions described above, may be
seen as a model of hardwiring useful innate (initial)
knowledge into the learner.
enforced by the specific biases of the learners
themselves. First, the HMM method is found to
work better with letter or phoneme bigrams than
with unigrams; the latter is more content with "pajn"
than with "pijn" because it prefers "a" over "i" at that
specific position while it does not know that "aj" is
rather unlikely in Dutch. It is not surprising that the
minimal amount of context one could ask for, viz.
one neighbouring letter as in bigrams, helps to learn
phonotactics dramatically. In the end, bigram
HMMs are found to be rather accurate (around 99%)
in accepting positive data, trained on any
representation; rejection of negative data is almost as
accurate, at least on phonological data. Rejection of
negative data in the spelling representation is worse:
91.0% without, and 94.5% with built-in linguistic
knowledge. Erik concludes that the learners found it
easier to discover regularities in the phonological
data as compared to the spelling data. Again, this
comes as no surprise: spelling is not phonology, and
phonology at the syllable level is known to behave
quite nicely.
Returning to the first dimension, the three learning
methods, it becomes clear when reading the chapters
2, 3, and 4 that some serious choices of data
representation that the experimenter has to make are
Second, the SRNs turn out to perform quite
disappointingly. They are trained indirectly on
phontactics by having them learn to predict the next
phoneme in a sequence of phonemes. SRNs learn their
own memory, and the hope was that this memory,
which is quite small and typically grows to represent
general aspects of the learning data rather than
example-specific information, will represent the
underlying grammar of the sequences, viz. the
phonotactics. Although this leads to networks
accepting valid words at a high accuracy, deriving
rejection of alien words from clues in the network's
output turns out infeasible; most alien strings are
simply accepted. The SRN's classification task,
redicting a phoneme given its predecessing sequence,
could have been one by any classification learning
method, including decision trees and memory-based
learners. Although Erik corrected this claim lateron,
the thesis mentions that decision trees and memorybased learners cannot be trained on positive examples
only - but they can be trained straightforwardly on the
SRN's task. Moreover, Erik discards them from is
study because they do not generate rules such as the
ILP approach - but neither do the SRNs or the HMMs.
prefix and suffix hypotheses: when a specific
sequence of letters is valid, and some specific letter
gets attached to the left or right of this string, the
new string is also valid. Fed with all these
hypotheses and the data themselves, the ILP systems
tested need a few training cycles to arrive at what
they estimate to be a good set of hypotheses about
the phonotactics of words and some selected prefix
and suffix hypotheses. While the acceptance rate is
of HMM-quality, ILP's rejection rate is considerably
lower (63.2% for the spelling data, and 86.2% for
the phonetic data). For instance, it accepts "kwrpn",
due to blind prefixing of consonants left of the "n"
which on its own is a valid word according to the
data. The problem disappears when strange words
such as "n", "t", "sst", and "pst" are removed from
the data by hand. Apparently, the ILP approach is
very sensitive to these noisy instances in the data,
more so than the HMM approach of which the
statistical smoothing behaviour is able, apparently,
to ignore this level of noise. When augmented with
the mentioned syllable structure model, the ILP
learners are roughly as accurate in acceptance and
rejection as the HMM learners.
The general discussion focuses on the question
whether the HMM or the ILP method yields the best
models of Dutch phonotactics, SRNs having left the
backdoor. Erik recommends ILP because (1) it can
do a good job when it is equipped with innate
linguistic knowledge, (2) it generates rules which
can be interpreted by humans, and (3) it is faster
than its HMM counterparts. The results that fill the
The third learning method, ILP, is a strong contestant.
It induces rules from data, building on a background
knowledge model. Of course, this background model
correlates with the third dimension in the global
matrix: no linguistic knowledge versus some relevant
innate linguistic knowledge. The "no linguistic
knowledge" variant tells the ILP explicitly what the
HMM represents implicitly, namely a collection of
NVKI-Nieuwsbrief
182
December 1998
global matrix support claim (1), and claims (2) and (3)
should be taken as important considerations for
anyone interested in inducing an interpretable set of
constraints (e.g., a grammar) from raw sequences of
language data (or, for that matter, gene sequences or
stock exchange time series). The main strength of the
thesis is that it has done what was asked at the onset;
it has come up with recommendations for successful
automatic learning of phonotactics, and has done that
in a process that very explicitly introduces a wide and
interesting variety of techniques along the way. The
thesis can be read as a primer in HMMs, SRNs, and
ILP.
reduced" data. It seems that the learning data
covered enough of Dutch to contain almost every
possible "Dutch" bigram; words not in the training
set appear to be composed of these bigrams, while
alien words perhaps have at least one or two
unknown bigrams too many. Having a good bigram
memory is an adequate basis for doing phonotactics.
More than just a bigram context may do just a bit
better, but that is part of future research, and fellow
researchers of Erik in Groningen have already taken
up the challenge.
Above all, I have found the thesis to be an enjoyable
piece of work. It is a clearly written study in
machine learning of natural language, a blooming
field to which this thesis is definitely a valuable
contribution. I should also note that the style of
writing contributes to this positive impression. Erik's
prose is direct, gets to the essentials without
deviations, and represents a clear flow of
argumentation.
The raw fact that phonotactics can be learned is less
surprising, given the regularity of the domain,
especially on the phonological level. Moreover,
giving the learner clues about syllable structure is
almost a give-away. On the other hand, this criticism
is easily outweighed by the big and pleasant surprise
of the reported rejection rates of the learners which
had no negative examples and no linguistic
background: the HMMs and the ILPs with "noiseFortunately, the field of machine learning of natural
language has not lost Erik upon finishing his thesis.
Recently, Erik started as a postdoc researcher at the
Centrum voor Nederlandse Taal en Spraak (Centre for
Dutch Language and Speech), at the Universitaire
Instelling Antwerpen (Antwerp University). His
research project is part of the European "Learning
Computational Grammars" network project, financed
by the EC as a part of the TMR programme (Training
and Mobility of Researchers). Erik is now focussing
on tasks concerning the recognition of the structure of
noun phrases in texts.
AND REASONING UNDER UNCERTAINTY
Dissertatie van Peter Grünwald, CWI
Promotor: Prof.dr. P.M.B. Vitányi
Bespreking door Ronald de Wolf
UvA en CWI
INLEIDING
Op 8 oktober van dit jaar verdedigde Peter
Grünwald, OIO aan het Centrum voor Wiskunde en
Informatica (CWI), zijn dissertatie in de Oude
Lutherse Kerk van de Universiteit van Amsterdam.
De meeste dissertaties in de exacte wetenschappen
zijn tegenwoordig niet veel meer dan bundelingen
van gepubliceerde artikelen, vooraf gegaan door een
snel geschreven inleiding. Zo niet hier. Grünwalds
dissertatie is een flink boekwerk van zo'n 300
pagina's. Hoewel sommige delen snel en onder hoge
druk zijn geschreven, maakt het toch de indruk van
een doorwrocht geheel, dat meer is dan de som van
de gepubliceerde artikelen waarop sommige
hoofdstukken gebaseerd zijn.
References
Blevins, J. (1995). The syllable in phonological
theory. The handbook of phonological theory (ed. J.
A. Goldsmith), pp. 206-244. Cambridge, MA.
Blackwell.
Selkirk, E. O. (1984). On the major class features and
syllable theory. Language sound structure (eds. M.
Aronoff and R. T. Oehrle), pp. 107-136. Cambridge,
MA. The MIT Press.
Tjong Kim Sang, E. F. (1998). Machine learning of
phonotactics. Ph.D. thesis, Rijksuniversiteit
Groningen. Groningen Dissertations in Linguistics,
number 26.
In brede zin gaat het proefschrift over het
modelleren van gegeven data via zogenaamde
Minimum Description Length (MDL) principes, en
over de inductie en generalisatie die daarvan het
gevolg zijn. De gebruikte methoden zijn grotendeels
statistisch,
coderingstheoretisch
en
informatietheoretisch. Wanneer we de dissertatie
binnen het kader van de kunstmatige intelligentie
THE MINIMUM DESCRIPTION LENGTH
PRINCIPLE
NVKI-Nieuwsbrief
183
December 1998
willen plaatsen, is de kortste omschrijving van het
onderwerp waarschijnlijk `statistische Machine
Learning'. Machine Learning (ML) is het deelgebied
van de kunstmatige intelligentie dat zich bezighoudt
met automatisch leren, dat wil zeggen met het
algoritmisch ontdekken van verbanden en
regelmatigheden in gegeven data. Er zijn
verschillende motivaties te geven voor Machine
Learning. Ten eerste is menselijk leergedrag bij
uitstek een gebied waar intelligentie ten toon gespreid
wordt, en dus een voor de hand liggend object van
studie voor AI-onderzoekers en cognitief
psychologen. Een tweede, meer praktische reden is
dat goede leeralgoritmes kennis genereren: de
regelmatigheden die leeralgoritmes vinden kunnen
gebruikt worden zowel voor voorspelling van
toekomstige gebeurtenissen, als voor verklaring van
vroegere gebeurtenissen. Bij de bouw van expertsystemen en andere AI-toepassingen blijkt keer op
keer hoe moeilijk het vaak is voor mensen om hun
kennis "op te schrijven". Wanneer we dus een
kennissysteem willen bouwen (bijvoorbeeld een
medisch expert-systeem dat verbanden legt tussen
symptomen en ziektes), is machinaal-geleerde kennis
vaak het enige goede alternatief voor onvoldoende
beschikbare menselijk-gegenereerde kennis.
Binnen de ML bestaan verschillende "frameworks"
of "paradigma's", die zich met name onderscheiden
door de structuren die gebruikt worden om kennis te
representeren. Zo heb je bijvoorbeeld leren in
neurale netwerken, in beslissingsbomen, en in logica
(inductive logic programming). Een aanpak die
minder afhangt van de kennis-representatie en
daardoor algemener is, is het Minimum Description
Length principe. Dit principe ontstond in de jaren 60
in een geïdealiseerde vorm in het werk van Ray
Solomonoff. De meer praktische uitwerking kwam
in de jaren 70 en 80, met name in het werk van
Jorma Rissanen. Het grootste deel van Grünwalds
proefschrift beslaat theoretisch en experimenteel
onderzoek naar dit MDL principe.
De dissertatie valt uiteen in drie delen, die we
hieronder zullen bespreken.
DEEL I. THEORY OF MDL
bits codeert. Data met een hoge kans krijgt dus een
kort codewoord, data met een kleine kans een lang
codewoord. Iedere H staat nu een beschrijving van
D toe met lengte L(H)+L(D|H). Deze beschrijving
codeert eerst het model H, en codeert vervolgens de
data D met behulp van het model H. Vaak zal deze
beschrijving korter zijn dan de oorspronkelijke
lengte van D. MDL zegt nu dat we het model H
moeten kiezen dat L(H)+L(D|H) minimaliseert. Er
zijn hier twee uitersten. Als we voor H een "lege"
hypothese kiezen zal L(H) laag zijn maar L(D|H)
hoog, en zal er weinig geleerd zijn. Omgekeerd, als
we voor H een hypothese kiezen die D heel precies
beschrijft, dan zal L(H) hoog zijn en L(D|H) laag, en
er zal nu "te veel" geleerd zijn (overfitting). Het
kiezen van een H die de som van L(H) en L(D|H)
minimaliseert, zoekt de gulden middenweg. De
uiteindelijk gekozen H staat de meeste compressie
van de data toe, en zal dus waarschijnlijk de meeste
regelmatigheden in de data bevatten. Hierin
herkennen we het bekende principe van de 14e
eeuwse filosoof Occam: als er verschillende
hypotheses of verklaringen consistent zijn met je
data, kies dan de simpelste ("Occam's razor").
Zoals gezegd, Machine Learning gaat om het vinden
van regelmatigheden in een gegeven set data. MDL
baseert zich op de volgende cruciale observatie:
Iedere regelmatigheid in de data kan gebruikt worden
om de data te comprimeren (korter te representeren).
Simpel voorbeeld: stel de data is de bitstring
011011011011011011011011011011.
De
regelmatigheid is duidelijk: het patroon 011 wordt
steeds herhaald. Maar het herkennen van dit patroon
staat ons ook toe om de data korter te beschrijven,
namelijk als "10 keer 011". Gegeven een klasse van
mogelijke modellen, stelt het MDL principe dat dat
model moet worden gekozen dat de data het meeste
kan comprimeren. Gegeven bovenstaande observatie,
zal dit ook het model zijn dat de meeste
regelmatigheden in de data gevonden heeft, en dat dus
de beste generalisaties van de data geeft.
Hoe kan een model H de data D comprimeren? Het
eenvoudigst is dit uit te leggen aan de hand van Twopart MDL. Stel dat ieder model H ("hypothese") een
kansverdeling is op de mogelijke data, dus P(D|H) is
de kans op data D als H "waar" zou zijn. Stel dat L(H)
de lengte van een beschrijving van H is in een of
ander coderings-schema voor de hypotheses. De
beroemde Shannon-Fano code uit de informatieheorie
vertelt ons dat we kansen kunnen omzetten in
codewoorden: gegeven de kansverdeling P(D|H),
kunnen we een coderings-schema ontwerpen dat data
D met een codewoord van lengte L(D|H)=-log P(D|H)
NVKI-Nieuwsbrief
Welke klasse van mogelijke modellen H moeten we
nemen? In het ideale geval zouden we de klasse van
alle "berekenbare" modellen nemen, en de lengte
L(H) van een model uit die klasse zou gegeven
worden door de zogenaamde Kolmogorov
complexiteit K(H) van H (= de lengte van het kortste
programma dat H "berekent"). K(H) is een
184
December 1998
objectieve maat voor de complexiteit van H.
Onoverkomelijk nadeel hiervan: K(H) is niet
berekenbaar, dus de ideale versie van MDL kan niet
in een algoritme geïmplementeerd worden. Als we
echter onze ambities wat laten zakken, dan kunnen we
een beperktere modelklasse kiezen, toegespitst op het
gebied wat we willen modeleren, en daarvoor een
(efficiënt) berekenbaar coderings-schema maken. Er
allerlei goede formele en informele redenen om de H
uit je modelklasse te kiezen die L(H)+L(D|H)
minimaliseert. Deze beperkte, praktische versie van
MDL (relatief aan een keuze van modelklasse en
coderings-schema) is Rissanens MDL en is het
onderwerp van de dissertatie.
baseert op een prior distributie P(H) op de mogelijke
hypotheses. Inderdaad functioneert P(H) bij de
Bayesianen formeel gezien op ongeveer dezelfde
wijze als de codelengte L(H) bij MDL. Interpretatief
is er echter een wereld van verschil tussen beide.
Terwijl prior probabilities een dubieuze status
hebben (eigenlijk zijn het bijna metafysische
aannames over wat waarschijnlijk is in de wereld,
voorafgaand aan het zien van data uit die wereld),
zijn MDL's codelengtes uitermate concreet en
onafhankelijk van aannames over hoe de wereld zich
gedraagt.
MDL's
interpretatie
van
waarschijnlijkheden als codelengtes (via ShannonFano's L(D|H)=-log P(D|H)) leidt tot een heel andere
kijk op waarschijnlijkheden in het algemeen, en op
statistische inferentie in het bijzonder.
MDL wordt vaak vergeleken met de zogenaamde
Bayesiaanse inferentie, die zich bij model-selectie
Een verwant leerprincipe is het Maximum Entropy
(ME) principe. Dit zegt dat je uit verschillende
consistente modellen het beste degene met maximale
entropie (= grootste mate van uniformiteit) kunt
kiezen. Dit principe is verbonden met de naam van de
fysicus Jaynes en is altijd tamelijk omstreden geweest,
hoewel het in de praktijk vaak succesvol is toegepast.
Een oud resultaat zegt dat ME beschouwd kan worden
als een speciaal geval van MDL, en Grünwald laat
zien hoe. Aan de hand hiervan valt theoretisch beter in
te zien wat de sterke en zwakke punten van ME zijn.
model te simpel en dus strikt genomen "onwaar" is.
DEEL II. EXPERIMENTS WITH MDL
Het tweede deel van de dissertatie beschrijft
experimentele vergelijkingen van het MDL principe
met ander leermethodes uit de klassieke en
Bayesiaanse statistiek. De meeste gevonden
experimentele verschillen kunnen uit de bestaande
theorie verklaard worden. De hoofdconclusie van
Deel II is dat geavanceerde vormen van MDL en
Bayesiaanse methodes vaak verrassend goede
resultaten leveren wanneer slechts kleine data-sets
gegeven zijn. Bij grote data-sets werken alle
methodes ongeveer even goed, maar dat is weinig
verrassend wegens de wetten van de grote getallen
(bij veel data gaan gemiddeldes van data bijna altijd
naar hun verwachting toe). Opvallend is dat MDL
bij kleine data-sets ook iets beter lijkt te werken dan
het zeer-verwante-maar-subtiel-verschillende MML
(Minimum Message Length) principe.
DEEL III. REASONING UNDER
UNCERTAINTY
De belangrijkste nieuwe bijdrage van de dissertatie
aan de theorie van MDL en ME is het onderscheid
tussen "safe" en "risky" toepassingen van statistische
inferentie. Een van data geleerd model zal bijna altijd
een (grove) simplificatie zijn van het gemodelleerde
domein. Toch kunnen we met zo'n simpel model vaak
goede voorspellingen doen, beter dan met een zeer
complex model. Hoe kan dat? Grünwald geeft een
aanzet tot een theoretische verklaring. Stel dat we een
model van bepaalde data geleerd hebben, en daarmee
sommige dingen willen voorspellen. Als de data
waaruit we het model geleerd hebben op een bepaalde
manier representatief was voor wat we willen
voorspellen
(dus
voldoende
relevante
regelmatigheden bevat), dan zal het geleerde model
meestal redelijk goede voorspellingen doen. Maar het
is ook mogelijk dat het geleerde model gebruikt wordt
om voorspellingen te doen over dingen die niet in de
data terug te vinden waren. Het eerste geval is een
"safe" gebruik van het geleerde model, het tweede is
"risky". Grünwald analyseert voor een model dat op
de MDL/ME manier geleerd is (en dus relatief simpel
zal zijn), welke soorten voorspellingen "safe" zijn en
welke "risky". Door je te beperken tot "safe"
voorspellingen krijg je wiskundige garanties dat je
voorspellingen betrouwbaar zijn en er waarschijnlijk
niet te ver naast zullen zitten, ondanks het feit dat je
NVKI-Nieuwsbrief
Het derde deel van de dissertatie valt onder het
logicistische paradigma van de AI en ontwikkelt een
formele theorie van common-sense redeneringen
over gebeurtenissen en veranderingen. Dit deel heeft
minder direct met MDL en statistiek te maken.
Desondanks is er een duidelijke link met delen I en
II van de dissertatie, aangezien zowel leren-van-data
als common-sense redeneren gaan over de vraag:
wat is het beste om te doen in een situatie waarin we
slechts onvolledige informatie hebben? Opnieuw is
het antwoord op deze vraag statistisch geïnspireerd.
Grünwald baseert zijn logisch systeem op een
formalisering van de notie van "causaliteit" en op het
Beginsel van de Voldoende Oorzaak (sufficient
185
December 1998
cause principle). Deze noties zijn afkomstig van
statistisch werk van Judea Pearl. (De epiloog van Deel
III geeft verdere connecties tussen systemen voor nietmonotone logica en statistische redeneersystemen.)
principe ons toestaat om een groot deel van zowel de
successen als de mislukkingen van bestaande
redeneersystemen te verklaren.
CONCLUSIE
Het blijkt dat het voorgestelde systeem een
generalisering is van verschillende bestaande
systemen voor common-sense redeneren, zoals die
van McCain & Turner, Lin, en Baral, Gelfond &
Provetti. Deze bestaande systemen maken vaak
impliciet gebruik van het sufficient cause principle,
maar worden problematisch wanneer ze daarvan
afwijken. Grünwald gebruikt zijn formalisering van
common-sense reasoning om het aloude Yaleshooting probleem nog maar eens te lijf te gaan en een
(hopelijk definitieve) oplossing daarvoor te geven.
Hoofdconclusie van Deel III is dat het sufficient cause
TAALKUNDIGE ANALYSE VAN
ZAKELIJKE CONVERSATIES
Dissertatie van Ans Steuten, TUD
Promotores: J. Dietz en P. Hengeveld
Een indrukwekkend proefschrift: zo'n 300 pagina's,
variërend tussen theorie en praktijk, tussen statistiek,
informatietheorie, informatica, machine learning,
logica en AI. Wegens haast nog wat slordig aan de
randjes (zoals ook toegegeven), maar desondanks
precies en tamelijk leesbaar. Grünwald is nu voor
een jaar als post-doc verbonden aan Stanford
University, waar hij met name de theoretische
innovaties van Deel I van zijn proefschrift verder uit
zal werken.
bevestigen. Onderzoek vanuit LAP houdt zich
bezig met de vraag welke taalhandelingen
onderscheiden moeten worden, hoe deze
samenhangen en waarom de ene interactie
“beter” is gestructureerd dan een ander. Dit alles
tegen de achtergrond van de mogelijke invoering
van informatiesystemen. Vandaar dat men zich
doorgaans beperkt tot zakelijke communicatie, in
een bedrijf of in de handel tussen bedrijven.
Bespreking door Hans Weigand
Katholieke Universiteit Brabant
Wat heeft een informaticus die bezig is
informatiesystemen in te voeren in bedrijven van
doen met taalkunde? Meestal niets. Wanneer
echter die informatiesystemen tot doel hebben de
communicatie in de organisatie te ondersteunen, is
een
analyse
van
de
bestaande
communicatiepatronen nodig alvorens we iets
kunnen zeggen over hoe de situatie, al of niet met
behulp van informatietechnologie, kan worden
verbeterd. Om een dergelijke analyse uit te voeren
maakt de informaticus doorgaans gebruik van
modellen. Een modelleermethode is de methode
DEMO die is ontwikkeld door prof. Jan Dietz van
de Technische Universiteit Delft.
PROEFSCHRIFT
In juni is door Ans Steuten van de TUD een
proefschrift verdedigd (Steuten, 1998) waarin
geprobeerd wordt een brug te slaan tussen de
taalkundige analyse van zakelijke conversaties en
de modelleermethode DEMO. Centrale vraag is
of een taalkundige analyse - eventueel uitgevoerd
door een computerprogramma - een informaticus
kan helpen bij het opsporen van de in het
betreffende domein relevante taalhandelingen.
Als sprekers zich zouden houden aan de
performatieve zinnen zoals die door Searle
worden gehanteerd - “ik beloof hierbij ...” - dan
zou dit een triviale zaak zijn, maar in de praktijk
is dit natuurlijk niet zo. De praktijk is dat
taalhandelingen veelal indirect zijn en ook vaak
impliciet, dus niet eens uitgesproken. Voor de
taalkundige analyse heeft Steuten gebruik
gemaakt van de Functionele Grammatica (Dik,
1989) en de Conversatie Analyse. Het empirisch
materiaal is afkomstig uit telefoongesprekken in
een hotel en bij een arbeidsbureau.
DEMO is gebaseerd op het zogenaamd
Language/Action perspectief (LAP), een stroming
in de informatica die als zodanig is geïnitieerd door
Winograd en Flores (1986), en waarin de
taalhandelingentheorie van Searle een belangrijke
plaats inneemt. Volledigheidshalve is het goed te
vermelden dat behalve in Delft ook LAPonderzoek wordt verricht in Tilburg en Eindhoven
(zie bijvoorbeeld het proefschrift van Verharen,
1997).Waar de meeste benaderingen in de
informatica geneigd zijn communicatie te
beschouwen als de overdracht van gegevens, gaat
het volgens het LAP om wat mensen in
communicatie doen. Bijvoorbeeld, een kamer
reserveren, een opdracht geven, of een opdracht
NVKI-Nieuwsbrief
BRUG TUSSEN DEMO EN FG/CA
Met het door Steuten opgestelde geïntegreerde
model is de brug tussen DEMO en FG/CA gelegd.
186
December 1998
Dit is het goede nieuws. Het slechte nieuws is
echter dat het nog lang zal duren voordat
taalkundige analyse met enige betrouwbaarheid
uitspraken kan doen over illocuties, laat staan
automatisch de taalhandelingen opleveren zoals
DEMO die zou willen zien. Gegeven een DEMO
model, is het wel mogelijk de verschillende stappen
in de loop van de conversatie aan te wijzen. Maar
om het model uit de conversatie te induceren is een
brug te ver.
momenteel zijn binnen de FG gemeenschap over
discourse representaties. Omgekeerd kan het
LAP de inbreng van de taalkunde goed
gebruiken. Men baseert zich daarin meestal op
het klassieke model van de filosoof Searle,
aangevuld of gecorrigeerd door filosofen als
Grice en Habermas. Taalkundigen hebben in het
verleden nogal wat kritiek geuit op Searle. Het is
dan ook zeker nuttig de taalkunde te laten
meepraten over de verdere ontwikkeling van het
LAP.
Ondanks dit negatieve resultaat heeft het
proefschrift zeker zijn waarde. Het onderzoek
levert een bijdrage aan de discussies die er
In het eerste hoofdstuk geeft Steuten aan wat het
hoofddoel van het onderzoek is: een linguïstische
onderbouwing van zakelijke conversaties vanuit
het LAP perspectief en met het oog op
automatische analyse van deze conversaties.
Gegeven deze doelstelling is het eigenlijk jammer
dat slechts twee cases zijn bestudeerd, waarvan
vervolgens slechts één echt uitgewerkt is. Met
behulp van een grotere verzameling (verschillende
transacties, verschillende talen, ...) was het wellicht
mogelijk geweest terugkerende patronen dan wel
interessante onderscheidingen te vinden. Er wordt
gesteld dat een gedegen begrip van zakelijke
conversaties helpt bij het formuleren van
richtlijnen voor een informatieanalIst die een
communicatiemodel wil opstellen van een
organisatie. Naar mijn mening wordt dit in het
proefschrift onvoldoende onderbouwd. Uiteindelijk
is een lexicon van werkwoorden nodig om de
tekstanalyse tot resultaten te laten komen.
Daarmee wordt het probleem dus eigenlijk
verschoven. Hiermee wil ik niet zeggen dat een
corpus van zakelijke conversaties niet een rol kan
spelen in het analysetrajekt, maar het lijkt erop
dat dit beter gebruikt kan worden ter validatie van
een model dat op andere wijze verkregen is.
Aangezien de kern van het communicatiemodel
gevormd wordt door de zogenaamde essentiële
acties, zou je kunnen beginnen met een actiemodel
te maken voor de organisatie (of het domein in
kwestie).
OVERZICHT
Grammatica, waarmee individuele uitingen
kunnen
worden
gerepresenteerd
(naar
syntactische, semantische en pragmatische
inhoud). Aan de andere kant zijn dat de
conversatie-analyse en discourse analyse
waarmee de samenhang van uitingen in een
discourse of dialoog kan worden weergegeven.
Het gaat dan om zaken als turn-taking, feedback,
opening en afsluiting, etc.
HIERARCHISCH MODEL
De verschillende theorieën worden geïntegreerd
in een zogenaamd hiërarchisch model van
zakelijke conversaties. Van boven naar beneden
gelezen ziet dit model er als volgt uit. Het hoogste
niveau is dat van de business transactie. Die
bestaat uit een aantal fasen, en omvat ook het
niet-linguïstische deel, de uitvoering van de acties
waar het om gaat. Een business transactie bestaat
uit een aantal exchanges (niveau 2). Een exchange
is in het eenvoudige geval een combinatie van een
actie van de spreker gevolgd door een reactie van
de hoorder. De onderdelen van de exchange
worden interactionele acties genoemd. Dat zijn
per definitie de kleinste betekenisvolle eenheden
van een conversatie. Voorbeelden zijn directieven
(verzoek) en acceptaties.
Interactionele daden worden gerealiseerd door
illocutionaire daden. Daarbij gaat het dus om de
talige uiting als zodanig. De relatie interactionele
daad - illoctionaire daad is niet één-op-één,
hoewel er prototypische verbanden zijn
(bijvoorbeeld: directief - imperatieve zin). Het is
ook geen deel-geheel relatie, maar zoals gezegd
een doel-middel. In hoofdstuk zes van het
proefschrift
wordt
aangegeven
welke
combinatiemogelijkheden er allemaal zijn. Zo
kan een vraag als “Kunt u ..?” niet alleen maar
een question, maar ook een directief realiseren.
In hoofdstuk twee wordt het LAP perspectief
geschetst en in het bijzonder de DEMO
benadering. DEMO beschouwt de speech acts niet
in isolatie, maar als onderdelen van transacties c.q.
conversaties, zoals de “conversation for action”
(actagene conversatie). In hoofdstuk drie en vier
worden dan de taalkundige theorieën ingevoerd
die nodig zijn voor de taalkundige analyse. Aan de
ene kant is dat de theorie van de Functionele
NVKI-Nieuwsbrief
187
December 1998
Maar aangezien er zoveel mogelijkheden zijn, is
het heel moeilijk om hier richtlijnen uit af te
leiden. Dit wordt nog eens verergerd doordat het
model soms wat teveel detailleringen wenst (zoals
het onderscheid tussen een informatieve en
actagene exchange).
Wat verder nog lijkt te ontbreken als het gaat om
de context is de juridische en organisatorische
achtergrond. Zo maakt het om bij de case van de
hotelkamerreservering te blijven wel uit of de
reservering al tot stand komt in het
telefoongesprek, of pas met de daaropvolgende
fax. Wat ook een rol kan spelen zijn de instructies
die telefonistes ontvangen ten aanzien van de
vragen die ze moeten stellen. Het is duidelijk dat
dergelijke context niet uit de conversatietekst zelf
te halen is. Overigens zou het ook niet zo
eenvoudig zijn om die context in DEMO te
specificeren, het lijkt erop dat daarvoor de
modellen (nog) ontbreken.
Aangezien de grammaticale structuur dus geen
uitsluitsel biedt, wordt in het laatste hoofdstuk de
hulp ingeroepen van, zoals gezegd, het lexicon, en
de context. De laatste wordt in de vorm van scripts
weergegeven.
Nu
zal
ongetwijfeld
de
ervaringskennis van menselijke subjecten, al of
niet in de vorm van scripts, het hen een stuk
gemakkelijker maken om illocutionaire acties te
duiden. Het is echter onduidelijk wat de rol van
scripts is in de methode DEMO en in het beoogde
automatische analyseproces.
Al met al een aardig proefschrift. Het bevat een
hoop bruikbare linguïstische hulpmiddelen. De
waarde ervan ligt denk ik vooral in de link naar de
empirie. Helaas is dat iets waar informatici zich in
het algemeen niet zoveel aan gelegen laten liggen.
Zij zijn meer geïnteresseerd in de gebruikswaarde.
Die lijkt vooralsnog laag.
EVALUATIE
Odijk (12-01), A. Bos (26-01), J. van den Akker (3003), F. Wiesman (7-05), E. Oskamp (13-05), A.
Lodder (5-06), F. Ygge (6-06), A. Steuten (22-06),
L. Combrink-Kuiters (10-09), A-.W. Dutler (22-09),
G.-J. Zwenne (29-09), P. Grünwald (8-10), W. de
Waard (9-10), D. Breuker (16-10), E.F. Tjong-KimSang (19-10), A. Hoekstra (20-10), H. Blockeel (1812), L. Dehaspe (21-12).
Bibliografie
Dik, S.C. (1989). The Theory of Functional
Grammar, Part 1: The Structure of the Clause.
Foris, Dordrecht.
20 x waarvan 2 in 1997.
Hendrik Blockeel (December 18, 1998). Top-down
Induction of First Order Logical Decision Trees.
Katholieke Universiteit Leuven. Promotor:
Prof.dr. M. Bruynooghe, co-promotor: Dr. L. de
Raedt.
Steuten, A. (1998). A contribution to the linguistic
analysis of business conversations, within the
Language/Action
perspective.
Proefschrift,
Technische Universiteit Delft, Delft .
Luc Dehaspe (December 21, 1998). Frequent
Pattern Discovery in First-order Logic. Katholieke
Universiteit Leuven. Promotor: Prof.dr. M.
Bruynooghe, co-promotor: Dr. L. de Raedt.
Verharen, E. (1997). A Language-Action
perspective on the design of Cooperative
Information Agents. Proefschrift, Katholieke
Universiteit Brabant, Tilburg.
Ronald Leenes (January 7, 1999). Hercules of
Karneades; Hard cases in Recht en Rechtsinformatica. Universiteit Twente. Promotor:
Prof.mr. D.W.P. Ruiter, co-promotor: Dr. J.C.
Hage.
Winograd, T, F. Flores (1986). Understanding
Computers and Cognition - a New Foundation for
Design. Ablex, Norwood NJ.
?
?
Ina Enting (January 14, 1999). Zovex, a
Knowledge Based System to Analyse Factors
Associated with Pig-health Disorders. Universiteit
Utrecht. Promotor: Prof.dr.ir. M. Tielen.
Jaap van den Herik
Universiteit Maastricht
Joeri Engelfriet (February 4, 1999). The
Dynamics of Reasoning. Vrije Universiteit
Amsterdam. Promotor: Prof.dr. J. Treur.
H. Jurjus (3-12-97), K. van Belleghem (19-12-97), M.
NVKI-Nieuwsbrief
188
December 1998
Katholieke Universiteit Tilburg. Promotores:
Prof.dr. J.E.J. Prins and Prof.dr. P.M.E. de Bra,
co-promotor: dr. W.J.M. Voermans.
Marco Wiering (February 17, 1999). Explorations
in Efficient Reinforcement Learning. Universiteit
van Amsterdam. Promotor: Prof.dr.ir. F.C.A.
Groen, co-promotor: Dr. H.J.H. Schmidhuber.
BENELOG 1998
Marnix Weusten (March 10, 1999). De Bouw van
Juridische Kennissystemen. KRT: methodologie en
gereedschap. Promotores: Prof.dr. A. Koers and
Prof.dr. H.J. van den Herik.
Sandro Etalle
Universiteit Maastricht
Benelog 1998 was the Tenth Benelux Workshop
on Logic Programming. It took place on Friday
November 20, 1998 at the CWI - the Center for
Mathematics and Computer Science - in
Amsterdam, on the day after the closing of the
NAIC'98. The editors and local organisers were
Krzysztof Apt and Femke van Raamsdonk.
Pierre van de Laar (March 12, 1999). Selection in
Neural Information Processing. Katholieke
Universiteit Nijmegen. Promotor: Prof.dr. S.
Gielen, co-promotor: Dr. T. Heskes.
Luuk Matthijssen (April 9, 1999). Interfacing
Between Lawyers and Computers: An Architecture
for Knowledge-Based Interfaces to Legal Databases.
First-time visitors to the Benelog may have had the
impression that it was the 100th edition, rather
than the 10th. People entered the conference as if
they had been there the day before, much like
students entering their lecture hall. Before the
start, everyone took a coffee and chatted a bit with
everyone else. The atmosphere was very informal
and charming.
Curry and to answer to the questions of myself
and others.
Every group gave one or two presentations
showing their very latest research results. Also,
for each group, some time was reserved for
presenting an overview of the research carried
out in the past year. In this way it was possible to
have - within one day - a good insight into the
hot topics while still retaining a global overview
of the research done in the various institutions.
There were interesting lectures (the possible
exception: my own presentation, where I
misspelled "append" twice; a crime punishable
with life-long exile from the net), always followed
by interesting debates.
I should mention that Krzysztof (I do have a macro
for writing his name) is known in the research
circuit not only for his outstanding scientific
results, but also for his concise and crystal-clear
writing style. His welcome oration, precisely on
time, reflected these virtues: "It is nine thirty, shall
we start?".
And the presentations - all
presentations - magically started exactly on time,
as if it was the most normal thing in the world.
The tenth Benelog was a very inspiring and
interesting experience. What is the secret? Maybe
it is the simplicity of the formula or maybe it is
just the participation of good groups with good
leaders. I don't really know, but I wished that
more workshops and symposia were like it. I also
wish that next year's edition, that will take place
in Maastricht on November the fifth, will be able
to replicate the success of this year. I am looking
forward to meeting you at the 11th Benelog!
Benelog is actually more a discussion forum than
anything else. This year it attracted the logic
programming groups of six outstanding research
centers. Besides the already mentioned CWI, the
University of Leuven was represented by the
groups of Maurice Bruynooghe, Bart Demoen and
Danny de Schreye. There were the groups of Yves
Deville, of the University of Louvain-La-Neuve and
the ones of Jean-Marie Jacquet and Baudouin
Lecharlier, of the University of Namur.
Luxembourg was well represented as well by
Raymond Bisdorff, of the Centre Universitaire of
Luxembourg. The invited speaker was Michael
Hanus, from the Technical University of Aachen.
He gave a talk on Multi-Paradigm Declarative
programming in Curry. Hanus remained for quite
some time after the closing of the workshop to
patiently show how to solve some problems in
o
g
o
l
E
I
T
C
E
S
S
K
I
S
NVKI-Nieuwsbrief
Readers interested in the program or in the
papers that have been presented at the 10th
Benelog may consult http://www.cwi.nl/~femke/
benelog98/ benelog98. html. Preliminary
information about the 11th Benelog can be found
at http://www.cs.unimaas.nl/ ~etalle/benelog99/
benelog99.html.
189
December 1998
SYSTEMS
Nico Roos
Universiteit Maastricht
From November 30 till December 4, 1998, the
SIKS course on Interactive Systems and Multi
Agent Systems was organized. This course
consisted of two separate courses: two days on
Interactive Systems and three days on Multi
Agent Systems. Both courses will be discussed
below.
SIKS COURSE ON INTERACTIVE
SYSTEMS AND MULTI AGENT
INTERACTIVE SYSTEMS
The course on Interactive Systems was given by
Gerrit van der Veer and Charles van der Mast.
They presented a road from a current working
situation, via a desired working situation, to a new
situation in which an interactive system has been
implemented and introduced. The first step in this
process is analyzing the current situation. Gerrit
van der Veer presented several analytical methods
from psychology as well as from ethnography to
analyze the current situation. The psychological
methods aim at determining the explicit knowledge
of human experts, while the ethnographical
methods aim at determining the implicit
knowledge of a group. Using these methods, we get
a picture of what people are doing in a situation,
how they work together, and what knowledge they
are using.
interactions with the user. An other important
consideration in the design of an interactive
system is the choice of a consistent metaphor. A
wrong or inconsistent metaphor can lead to
confusion. Finally the choice of the interaction
style, for example, whether or not to use direct
manipulation, and the interactions themselves are
important.
To get the above mentioned issues right,
evaluation is important. Design guidelines can
help in this stage of the development process.
Another aspect to be evaluated is the cognitive
complexity of the interactive system. Several
techniques that can be used, were briefly
discussed. The importance of this stage was
underlined with a video in which an airplane
crash was reconstructed. The presentation of
information in the newly designed cockpit, played
an important role in the crash.
After analyzing the current situation it is time to
look at the desired new situation. One has to
envision the new desired situation. To determine
whether one’s vision makes any sense, it is
important to evaluate it. Playing the new situation
in several scenario’s is a technique that can be
used in an early stage is. At the SIKS course a
video from 1989 was shown in which HP showed
the office of 1995. An interesting thing in this video
was that it presented personal agents (the subject
of the second part of the course). Other forms of
evaluation, such as prototyping, as well as the
different ways of using a prototype were also
addressed.
The last aspect presented in this course
concerned the management of the development of
an interactive system. Charles van der Mast
emphasized that the traditional waterfall method
is not suited for the design of an interactive
system. Because of the importance of regular
evaluations during the whole development of an
interactive system, several stages have to be
repeated. The revision of earlier stages becomes
difficult if the people who carried out that stage
are no longer available.
Charles van der Mast discussed the design of the
interactive system that is to be used in the desired
situation. An important issue in such a design is
the fact that humans see a computer, the screen
and keyboard, as a social actor. This implies that
the interactive system must be polite in its
NVKI-Nieuwsbrief
MULTI AGENT SYSTEMS
The course on Multi Agent Systems was given by
Catholijn Jonker, Peter Braspenning, John-Jules
Meyer, Han la Poutré, Hans Weigand, Floris
Wiesman and Etiënne Mathijsen. They gave an
190
December 1998
overview of the new developments in the field of
multi agent systems. Since Multi Agent Systems is
a relatively new field, there is yet little consensus
on what an agent is.
intentions, commitments, awareness, and so on.
Peter Braspenning discussed these agents in more
detail but did not formally define them. Instead
he pointed out that there is no consensus and that
the best thing that we can do is looking at
examples.
Catholijn Jonker presented the notion of a weak
agent. Such an agent should exhibit four types of
behavior; autonomous behavior, social behavior,
reactive behavior and pro-active behavior. All
these behaviors must be present in the eyes of an
external observer. What it means for an agent to
be autonomous or pro-active is an issue that
caused much debate. A strange thing was that
these properties were no longer considered
necessary properties for designing an agent.
John-Jules Meyer discussed the BDI (Beliefs,
Desires and Intentions) model, which gives a
formal characterization of a strong agent based
on modal logic. The idea behind this
characterization is that an agent has beliefs about
the world, it has wishes it desires to fulfill, and it
has the intention to actually fulfill some selected
subset of the wishes. The wishes the agent desires
to fulfill, need not form an consistent set. Based
on the current beliefs some of these wishes are
selected to become intention, the agent tries to
realize.
Beside the notion of a weak agent, there is also the
notion of a strong agent. According to Catholijn
Jonker, a strong agent is a weak agent with
additional properties such as beliefs, desires,
When an agent tries to realize its intentions, it
should not re-evaluate its intentions each time it
tries to perform an action. Re-evaluating the
actions requires computational resources. As a
consequence the agent might spend more time on
re-evaluating its intentions than on performing
actions that realize these intentions. So in the
context of limited computational resources, it is
irrational if an agent re-evaluates its intentions too
often. This requirement is denoted by the term
commitment. Peter Braspenning discussed the
importance of commitment and the various ways
in which it is formalized in the literature.
agent that performs a task by manipulating the
objects through their operations. He also sees it
as a way to encapsulate existing Information
Systems, and in this way enable them to
collaborate.
Besides this diversity of agent definitions, there
was also little disagreement on the meaning of
concepts such as autonomy, beliefs, desires,
intentions, commitment, and so on. Especially,
participants working on multi agents made
objections. Some even pointed out that one should
not use common sense psychological concepts to
characterize properties of agents. This would not
justify the proper meaning of these concepts. I
personally see no objection against using terms in
the context of agents. One must, however, keep in
mind that in the context of agents these terms
have a technical meaning, which has some
correspondence with their meaning in the term in
the real world.
Han la Poutré presented a different picture of an
agent. In his view an agent is a module that models
a part of the behavior of real agents such as
humans or companies. He uses these agents to
make predictions about the behavior of humans
and of economic systems. An important aspect of
these agents is that they evolve in time. Genetic
algorithms are used for this purpose.
So what is an agent? In my opinion it is a bad
thing not to have a proper definition of an agent.
Since agent technology is a popular topic, without
a proper definition one can easily get wrong
expectations,
which
may
result
in
disappointments. Therefore, I will try to
formulate what I think an agent is, based on the
views presented at the SIKS course. To those of
you that think that I am partially or completely
wrong, I propose that you send your view to the
NVKI newsletter.
Yet another view was presented by Hans Weigand.
In his view an agent presents a new level of
modeling. In a relational database, relations
between objects are described. Object oriented
modeling goes one step further by describing an
object by its relation with other objects and by
adding operations that can be performed on the
object. To perform a task, in objects oriented
modeling one often has to introduce a problem
object with a do-it operation that performs the task
by activating operations of other objects. Hans
Weigand argued that it is more natural to have an
NVKI-Nieuwsbrief
An agent is a modeling concept. It is used to
191
December 1998
describe an object that can make perceptions, that
can perform actions, that has knowledge about the
environment and that tries to fulfill goals in a
rational way. Again these are properties seen by an
outside observer. Knowledge and goals need not be
represented in an explicit way in the agent. Notice
that this definition is similar to Newell’s
Knowledge Level. This definition of an agent seems
consistent with a weak agent presented by
Catholijn Jonker and with Hans Weigand’s
description of an agent. Properties, such as
autonomy, can be present in different grades.
Autonomy, for example, can vary between a standalone system that does not require external
interference to fulfill its goals if the environment is
changing, and on the other hand agents that learn
from experience. The BDI model is a refinement of
an agent as a modeling concept. Beliefs are an
explicit representation of knowledge, and desires
and intentions are a refinement of goals.
Multi Agent Systems are systems consisting of a set
of agents that interact by observing the (results of)
actions of other agents. A special form of
In order to let agents communicate, a common
language is required. John-Jules Meyer and Hans
Weigand discussed several agent languages. Hans
Weigand pointed out that communication should
be put in a larger setting, called a scenario. A
scenario is a sequence of transactions in which
communi-cation is one of the components.
Scenario’s are important in electronic commerce
where an agreement to buy/sell something should
be followed by the obligation to deliver and the
obligation to pay. It seems that on this aspect of the
interaction between agents, the field of Interactive
Systems could have a valuable contribution.
interaction is communications.
The most interesting agents are those that, as a
group, perform a task or fulfill some goal. Due to
this, the agents must interact, possibly by
communications. Catholijn Jonker demonstrated
several of these agent systems. One
demonstration concerned the load balancing of
electricity use. Here consumer agents negotiate
with the supply agent over the reduction of
electricity use. A consumer agent is a kind of
personal assistance that may negotiate as a
representative of the actual consumer according
to the consumer’s preferences. Another
interesting example was the call center in which
the call center agent tries to make an
appointment for a client. To do this it asks the
agent of a particular office to schedule an
appointment for this client. The office agent then
has to negotiate with the personal agents of the
employers in order to schedule the appointment.
ANTS' 98 FROM ANT COLONIES
TO ARTIFICIAL ANTS:
FIRST INTERNATIONAL
WORKSHOP ON ANT COLONY
OPTIMIZATION
Brussels, Belgium, October 15-16
Katja Verbeeck
Free University Brussels (VUB)
Peter Braspenning discussed the attemps of the
FIPA (Foundation of Intelligent Physical Agents)
to develop an agent architecture for the internet.
The FIPA is not concerned with the architecture of
an agent itself but with the platform on which an
agent lives and the language the agent uses to
communicate. For the latter purpose the FIPA
developed an agent language called ACL.
Mid-October, the first international workshop on ant
colony optimization took place at the Université
Libre de Bruxelles (ULB), organized by Marco
Dorigo (Research Associate at the FNRS, the
Belgian National Fund for Scientific Research) and
colleagues of ULB's IRIDIA lab.
Looking back on the Multi Agent Systems course,
it seems that one aspect was missing; namely the
social behavior of agents. Agents can collaborate
or compete with each other in many different
ways. Some forms of collaboration were shown in
the course. It seems, however, that more can be
said about this topic.
INTRODUCTION
S
K
I
S
o
g
o
L
Ant colony optimization (ACO) studies artificial
systems that are inspired by real ant colonies
behavior. The resulting systems seem to be very well
suited for discrete optimization. Problems like
travelling salesman, sequential ordering, quadratic
assignment, partitioning, graph coloring, routing in
communication networks, and so on, are already
NVKI-Nieuwsbrief
192
December 1998
addressed successfully.
path. Eventually, this autocatalytic process causes
all the ants to take the shortest path.
The main observation on which ACO is based is that
real ants are capable of finding shortest paths from
their nest to food sources and back. They are even
capable of adaptation, that is, of finding new shortest
paths when their environment changes. They can
perform this behavior thanks to a simple pheromone
laying mechanism. In fact, while walking ants deposit
some amount of pheromone on the ground. When ants
move from their nest to the food source they move
mostly randomly, but their random movements are
biased by pheromone trails left on the ground by
preceding ants. Because the ants that initially choose
the shortest path to the food arrive first, this path will
be seen as more desirable by the same ants during
their journey back to the nest (this is called
"differential length effect"). This in turn will increase
the amount of pheromone deposited on the shortest
A quick overview of the state-of-the-art in the field,
which included Ant System, the first ACO system
introduced in 1991 by Marco Dorigo, as well as a
number of more recent ACO algorithms, was given in
a tutorial on Wednesday evening by Dr. Dorigo
himself. The actual workshop started on Thursday
morning. The first speaker, Dr. Owen Holland from
the university of West of England, immediately
grasped the attention of the audience by raising some
interesting questions, like: What are the principles of
ACO algorithms? How do they work? Why do they
work? What does control them? What is the
difference between ACO algorithms and other
algorithms based on physical, chemical (diffusion),
electrical (electrical flows), or even human behaviour?
Answers to these questions were expected to come up
during the workshop.
Artificial ants take advantage of the differential
length as well as of the autocatalytic aspects of real
ant behavior to solve discrete optimization problems.
Artificial ants are software agents that move on a
graph, and that modify some variables associated to
graph elements so to favor the emergence of good
solutions. In practice, to each graph's edge is
associated a variable, called pheromone trail for
analogy with the real ants. Ants add pheromone to
those edges they use and by so doing they increase
the probability with which future ants will generate
good solutions. Artificial ants, as real ones, move
according to a probabilistic decision policy biased
by the amount of pheromone trail they "smell" on
the graph edges.
systems were tested against known routing
algorithms, with very good results. The second part
of the session was dedicated to applications to
combinatorial optimization. The domains of
application are very broad here: from bus driving
scheduling to mobile telephony (the problem of
covering regions), till water irrigation distribution.
CONCLUSION
Certainly not all the questions posed by O. Holland
were answered during this two-day workshop.
Nevertheless, some of the results presented are
excellent, and give researchers a good motivation to
pursue further investigations in this new exciting
area. More information on ants and on ant colony
optimization can be found at the ACO homepage:
http://iridia.ulb.ac.be/dorigo/ACO/ACO.html. The
workshop did not include proceedings, but a
selection of the papers will be published in the
"Future Generation Computer Systems Journal",
special issue on Ant Colony Optimization, Elsevier
North-Holland, next year. The Ants 2000 workshop
will most likely be held again at ULB, Brussels, in
September 2000.
REAL ANT MODELS
The word was then given to biologists and people
who were working on modeling real ants and on the
study of simulations of these models. Some of the
topics people are working on are: How does a colony
of ants allocate tasks in a dynamical environment
without a coordination center? Or, how do they
move? The resulting models were simulated on
computer, and their results were tested against results
obtained by real ants observation.
AI IN VIDEO GAMES
APPLICATIONS
Stephane Assadourian
AI Researcher, APPEAL / IMLABS
The first part of the next session was dedicated to
applications to communication networks as routing
and load balancing. Several routing algorithms were
presented by different people (L.J.M. Rothkrantz, G.
Di Caro & M. Dorigo, D. Snyers). Some of these
Appeal Software is a video game development
company based in Belgium in Namur (Namen). It
was founded in 1995 by Yann Robert, Yves
Grolet and Franck Sauer, who have 12 years of
NVKI-Nieuwsbrief
193
December 1998
experience in the game industry. Currently, we are
busy finishing a game called Outcast, which will be
one of the most advanced real time 3D
action/adventure games ever produced on PC. The
aim of Appeal is no less than giving the gameplay a
new dimension. This involves the development of
new technologies, as done within the Himalaya
project. This project breaks down into several
sub-projects, each relating to a certain field of
expertise including rendering, physics, and
Artificial Intelligence, which will be the subject of
this article.
monsters, but, depending on the design of a game,
they may take other shapes. We're talking virtual
world, so you can follow your imagination as far
as it goes...
A NEW AI ENGINE
Himalaya's AI sub-project is called Lhotse, and it
is focused on a new AI engine for future games.
This engine is constrained by many parameters
since it is embedded in a game application. Such
parameters are expressed in terms of CPU time
allowed. An important difference compared to
academic research in AI is that an engine will not
be built before its consumption of processor time
is known to be adequate. Therefore the
implementation phase is preceded by both a
design phase and a test phase.
Concerning Artificial Intelligence, Appeal's R&D
department is interested in the simulation of
behavior of what we call NPCs. NPC stands for
Non Playable Character, meaning everything the
player does not control. NPCs should be able to
reason, plan, take actions, react to events, and
show emotions. Typically these are people or
The aim of Lhotse is to give birth to a high level
tool that will simulate the major aspects of realistic
behavior, where realistic is taken to mean
believable. During planning for example, we are
not looking for the best solution, but rather for a
solution which produces believable behavior. This
is another difference compared to academic
research in AI. What matters in this type of game,
is the effect on the player.
same way and do things according to their mood,
which is determined by emotions.
COMPETING ACTIONS
Finally, to allow for parallel processing of actions,
an agent based architecture is used. Actions are
designed as operators, which require resources.
The TAKE operator e.g. requires (at least) a
hand to pick up the object. Actions may be
performed simultaneously, so one may see an
NPC taking an object while looking in another
direction, or performing more complex behaviors
such as running to hide while targeting, shooting,
and taking fire. Clearly, this may result in a
competition for resources between different
actions.
Within Lhotse, two objectives are important. First,
NPCs should have an individual semantic
interpretation of what is happening in the world
and of the meaning of (re)actions of others.
Furthermore, parallel processing of actions is
required since a game is a real time application.
Therefore we are designing an engine that will
allow us to code behavior via planning. Thus, the
behavior of an NPC is not hard-coded. Knowledge
needed for planning is encapsulated in the objects
of the world. These objects support constraints. To
give an example, there will be rocks that some
NPCs are able to lift and some aren't, according to
their strength. As a direct consequence of the
constraints specification feature, there is a need for
cooperation if NPCs are to perform tasks they can
not handle alone. A communication protocol will
be developed that will allow NPCs to ask for help
and accept or deny these requests according to
parameters such as mood, or emotional state.
Indeed, emotions will allow us to capture an
important part of behavior. NPCs are
characterized by many features, allowing to create
characters that look different and act in different
ways because they are not sensitive to events in the
NVKI-Nieuwsbrief
The resources feature only solves the problem of
deciding which actions are allowed to be
performed for a given NPC, but not which action
to select from several competing ones. An
Applicable Operator List is created for the
Active Goal. Based on various heuristics and
priorities attached to each branch, it can then be
decided which operator should be developed.
Thus, we are subgoaling from operators to
operators, propagating potential constraints
along the tree. One might wonder what should
happen if an agent wants to pick up an object and
open a door (also requiring the hand) at the same
time. Let's assume that both the object and the
door to be opened are close enough to perform
the action. If both hands are free, there's no
competition for resources between the operator
194
December 1998
TAKE and the operator OPEN_DOOR, since all
required resources can be allocated perfectly. If at
least one hand is not free, there's a competition for
the resource left hand (or right hand) and the
winner will be determined according to heuristics
such as which operator is in the highest priority
branch, or which one satisfies the largest number
of goals.
with students for either a Ph.D. work or an
internship. So do not hesitate to contact us at
[email protected], for a possible collaboration.
This was a general view of the project and if you
want to know more about it, feel free to visit the
following
web
page,
http://www.appeal.be/research/
Lhotse/lhot_par.htm, dedicated to the Lhotse
project.
Walter Daelemans & Steven Gillis
University of Antwerp
ARTIFICIAL INTELLIGENCE
RESEARCH AT CNTS
CNTS (Centrum voor Nederlandse Taal en
Spraak) is the Centre for Dutch Language and
Speech of the University of Antwerp (UIA).
CNTS specializes in applying computational
modeling and AI techniques in the fields of
computational linguistics, language technology,
artificial intelligence and computational
psycholinguistics. In addition, there is also a fair
amount of more general linguistics, phonetics,
and psycholinguistics research.
library and information science students looking
for an introduction to the AI approach to
computational linguistics.
Also, if you have some remarks feel free to contact
me at [email protected].
Finally, I've got to point out the fact that Appeal is
clearly willing to support any kind of collaboration
Computational Linguistics has a long tradition at
UIA. Luc Steels (now VUB Brussels and Sony
Computer Science Lab Paris) started his career in
AI in Antwerpen in the seventies with a doctorate
in computational linguistics. His student,
Koenraad De Smedt (now in Bergen, Norway)
started his work on object-oriented knowledge
representation for language processing in
Antwerpen before moving to Nijmegen in the early
eighties. Willy Martin (now in Amsterdam, VU)
and his co-workers developed tagging (word class
disambiguation), lemmatization, and lexicographic
technology, mainly for English, throughout the
eighties, and started with a set of courses in
computational linguistics. The nineties saw the
start of the CNTS, and a shift of research activities
to the application of machine learning techniques
in language technology, linguistics, and
psycholinguistics. We believe that the application
of machine learning and statistical pattern
recognition techniques will make a difference in
these three areas of language research. Currently,
the Artificial Intelligence research activities in
CNTS are directed mainly by Steven Gillis and
Walter Daelemans, and the group working on
these issues has recently grown to about 8
researchers, thanks to financial support from
FWO Vlaanderen, IWT, and European funding.
There is now also a fairly complete curriculum in
Natural Language Processing allowing linguistics
students to specialize in language technology both
at an undergraduate and postgraduate level. The
courses are also used by computer science and
NVKI-Nieuwsbrief
In the remainder of this text, we will describe our
main research topics and results. Within Machine
Learning we have focused our attention (in close
cooperation with the ILK research group in
Tilburg, http://ilk.kub.nl/) on memory-based
learning. This approach is based on the general
idea that cognitive tasks (e.g. language
processing) originate from the direct reuse of
experiences of the task rather than from the
application of rules or other abstractions
extracted from the experience. It is interesting to
see that the approach has been advocated under
different guises in Artificial Intelligence (e.g.
work on instance-based learning, memory-based
reasoning, case-based reasoning, etc.), but also in
the "linguistic underground" (as an alternative to
Chomskyan linguistics, e.g. Skousen, Bybee,
Derwing, Ohala and others).
The last decade, we have investigated this
hypothesis in the context of the three research
fields mentioned earlier, viz. language
technology, computational psycholinguistics and
linguistics. In language technology, where
knowledge acquisition bottlenecks have always
hindered practical application of rule-based
computational linguistics, we have shown that for
problems such as speech synthesis and text
analysis, the memory-based approach is often
195
December 1998
superior in accuracy to alternative rule-based and
statistical approaches. It also allows very fast
development of language processing modules to be
used in language technology applications.
research has only recently started systematically,
results
indicate
that
memory-based
computational models can indeed mimic and
motivate the `rule-based' behaviour (including
regularization and irregularization) that is
observed so often in human language processing.
As such, memory-based computational models
may become an interesting alternative to both
dual route and connectionist single route models
of human language processing.
Y
T
I
S
R
E
V
I
N
U
E
H
T
T
A
R
A
E
Y
A
In computational psycholinguistics, our group has
begun investigating the psychological relevance of
memory-based language processing by comparing
the output and errors made by our algorithms to
that of children in first language acquisition, and
adults performing linguistic tasks. Although this
In linguistics, finally, we have shown that the
memory-based model matches well with current
thinking in cognitive linguistics. As memory-based
learning crucially relies on the availability of huge
amounts of training materials, i.c. linguistic lexica
and corpora, CNTS is currently involved in several
projects that aim at collecting huge databases of
spoken language and is involved in research efforts
to enrich these databases with linguistically
relevant annotations. For instance, CNTS is
participates in the collection and annotation of the
'Corpus Gesproken Nederlands', an important
initiative of the Dutch and the Flemish
governments to collect 10 million words of spoken
Dutch that will eventually be available to the
research community. CNTS is also involved in a
Dutch-Flemish project that aims at collecting a
phonetically rich and balanced database of
Standard Dutch. In addition, CNTS houses the
European headquarters of CHILDES, an
organization that archives and linguistically
annotates corpora of children's (and adults')
spontaneous conversational speech. In CHILDES
corpora from more than 20 different languages are
available.
Y
R
A
G
L
A
C
F
O
y
r
sg
a
d
r
l
a
a
a
C
g
f
n
jo
i
y
Wi
t
s
kr
e
e
i
v
N
i
n
U
:
m
a
d
r
e
t
s
m
A
r
e
d
i
s
n
o
C
n
o
e
l
p
o
e
p
y
n
a
m
,
r
e
t
n
e
c
y
t
i
c
d
l
o
n
a
h
t
i
w
y
t
i
c
t
c
a
p
m
o
c
a
d
i
m
u
h
a
n
i
g
n
i
v
i
l
s
t
n
a
t
i
b
a
h
n
i
0
0
0
,
0
0
7
t
u
o
b
a
,
s
t
e
e
r
t
s
e
h
t
.
l
e
v
e
l
a
e
s
e
v
o
b
a
y
l
e
r
a
b
r
o
r
e
d
n
u
e
t
a
m
i
l
c
e
t
a
r
e
d
o
m
:
y
r
a
g
l
a
C
r
e
d
i
s
n
o
C
n
r
e
d
o
m
a
h
t
i
w
y
t
i
c
g
n
i
d
n
a
p
x
e
r
e
v
e
,
t
u
o
d
a
e
r
p
s
y
r
e
v
a
g
n
i
v
i
l
s
t
n
a
t
i
b
a
h
n
i
0
0
0
,
0
0
7
t
u
o
b
a
,
s
t
e
e
r
t
s
e
h
t
n
o
e
l
p
o
e
p
t
o
h
r
o
)
C
o
5
2
w
o
l
e
b
(
g
n
i
z
e
e
r
f
r
e
h
t
i
e
,
r
i
a
y
r
d
y
r
e
v
n
i
.
l
e
v
e
l
a
e
s
e
v
o
b
a
s
e
r
t
e
m
0
0
0
1
t
u
o
b
a
t
a
,
)
C
o
5
2
+
r
e
v
o
(
d
l
u
o
w
y
h
w
o
S
.
s
e
t
i
s
o
p
p
o
r
i
e
h
t
t
s
o
m
l
a
e
r
a
s
e
i
t
i
c
e
s
e
h
T
n
i
g
n
i
v
i
l
r
e
d
i
s
n
o
c
m
a
d
r
e
t
s
m?
Ar
a
e
y
l
l
u
f
a
r
o
f
y
r
a
g
l
a
C
n
i
g
n
i
v
i
l
y
d
o
b
y
n
a
T
R
A
T
S
E
H
T
e
j
i
r
V
e
h
t
t
a
h
c
r
a
e
s
e
r
D
h
P
y
m
h
t
i
w
d
e
t
r
a
t
s
l
l
a
t
I
e
c
n
e
g
i
l
l
e
t
n
I
l
a
i
c
i
f
i
t
r
A
e
h
t
t
a
,
m
a
d
r
e
t
s
m
A
t
i
e
t
i
s
r
e
v
i
n
U
e
h
d
n
a
p
u
o
r
g
t
a
h
t
f
o
r
o
s
s
e
f
o
r
p
e
h
t
s
i
r
u
e
r
T
n
a
J
.
p
u
o
r
G
I
.
h
c
r
a
e
s
e
r
y
m
d
e
s
i
v
r
e
p
u
s
r
e
i
z
a
r
B
s
e
c
n
a
r
F
d
n
a
l
a
n
o
i
t
i
s
o
p
m
o
c
f
o
n
g
i
s
e
d
e
r
n
o
h
c
r
a
e
s
e
r
n
i
d
e
t
a
p
i
c
i
t
r
a
p
c
i
t
s
o
n
g
a
i
d
a
f
o
n
g
i
s
e
d
e
r
e
h
t
o
t
d
e
i
l
p
p
a
,
s
e
r
u
t
c
e
t
i
h
c
r
a
a
f
o
n
o
i
t
a
c
i
f
i
d
o
m
f
l
e
s
e
h
t
d
n
a
m
e
t
s
y
s
g
n
i
n
o
s
a
e
r
.
m
e
t
s
y
s
t
n
e
g
a
i
t
l
u
m
e
g
d
e
l
w
o
n
K
e
h
t
g
n
i
t
i
s
i
v
s
a
w
I
,
6
9
9
1
r
e
b
m
e
v
o
N
n
I
,
f
f
n
a
B
f
o
n
w
o
t
n
a
i
d
a
n
a
C
e
h
t
n
i
p
o
h
s
k
r
o
W
n
o
i
t
i
s
i
u
q
c
A
t
n
e
g
a
i
t
l
u
m
g
n
i
y
f
i
d
o
m
f
l
e
s
a
n
o
r
e
p
a
p
a
t
n
e
s
e
r
p
o
t
The web-site of CNTS can be visited at
http://webger-www.uia.ac.be/webger/ger/cnts/main
.html
For more information, contact Steven Gillis
([email protected])
,
h
c
r
a
e
s
e
r
D
h
P
y
m
f
o
d
n
e
e
h
t
r
a
e
n
n
e
h
t
s
a
w
I
.
m
e
t
s
y
s
e
h
t
f
o
s
t
n
a
p
i
c
i
t
r
a
p
e
r
i
u
q
n
i
y
l
e
t
i
l
o
p
o
t
d
e
t
r
a
t
s
d
n
a
e
l
b
a
l
i
a
v
a
t
u
o
b
a
g
n
i
h
t
y
n
a
w
e
n
k
y
e
h
t
r
e
h
t
e
h
w
p
o
h
s
k
r
o
w
y
n
a
f
o
w
o
n
k
t
o
n
d
i
d
y
n
a
m
e
l
i
h
W
.
s
n
o
i
t
i
s
o
p
c
o
d
t
s
o
p
D
A
O
R
B
A
I
A
NVKI-Nieuwsbrief
w
e
f
,
s
r
a
c
y
n
a
m
,
s
l
l
a
m
g
n
i
p
p
o
h
s
,
a
e
r
a
n
w
o
t
n
w
o
d
Apart from memory-based learning, we have also
investigated the use of symbolic rule learning
(decision tree learning, rule induction, ILP) as a
tool in linguistic research. In this research, we are
mainly interested in using machine learning
techniques to discover and evaluate linguistic
hypotheses and categories (formulated as
generalizations). We apply these techniques mainly
to discovering and evaluating theories about
phonology and morphology.
196
December 1998
e
t
i
u
q
s
a
w
n
g
i
s
e
D
&
I
A
d
n
a
s
m
e
t
s
y
s
t
n
e
g
a
i
t
l
u
m
o
t
e
m
d
e
t
i
v
n
i
y
l
e
t
a
i
d
e
m
m
i
m
e
h
t
f
o
e
n
o
,
s
n
o
i
t
i
s
o
p
n
e
p
o
e
h
t
t
a
c
o
d
t
s
o
p
e
h
t
t
r
a
t
s
o
t
d
e
d
i
c
e
d
s
a
w
t
I
.
e
t
a
i
r
p
o
r
p
p
a
n
i
e
c
n
e
i
c
S
r
e
t
u
p
m
o
C
f
o
t
n
e
m
t
r
a
p
e
d
e
h
t
t
a
c
o
d
t
s
o
p
a
o
d
.
r
a
e
y
e
n
o
f
o
n
o
i
t
a
r
u
d
e
h
t
r
o
f
,
8
9
9
1
f
o
g
n
i
n
n
i
g
e
b
o
s
l
a
I
,
r
e
v
e
w
o
H
!
e
s
r
u
o
c
f
o
d
e
r
e
t
t
a
l
f
s
a
w
I
.
.
.
y
r
a
g
l
a
C
t
r
a
t
s
t
o
n
d
l
u
o
c
I
o
s
,
D
h
P
y
m
h
s
i
n
i
f
o
t
d
e
d
e
e
n
n
a
:
d
e
s
i
n
a
g
r
o
y
l
t
n
e
i
n
e
v
n
o
c
y
r
e
v
s
i
p
u
o
r
g
e
h
t
f
o
b
a
l
e
h
T
,
s
e
i
t
i
l
i
b
i
s
s
o
p
e
m
o
s
d
e
n
i
l
t
u
o
y
l
f
e
i
r
b
y
e
h
T
.
y
l
e
t
a
i
d
e
m
m
i
g
n
i
d
n
u
o
r
r
u
s
s
e
c
i
f
f
o
d
n
a
,
s
e
l
c
i
b
u
c
i
m
e
s
h
t
i
w
e
c
a
p
s
n
e
p
o
n
i
h
c
r
a
e
s
e
r
n
w
o
r
i
e
h
t
d
n
a
n
o
i
t
a
c
u
d
e
e
c
n
a
t
s
i
d
,
.
g
.
e
d
n
a
t
e
e
m
y
l
i
s
a
e
n
a
c
s
t
n
e
d
u
t
s
d
n
a
f
f
a
t
S
.
e
c
a
p
s
n
e
p
o
s
i
h
t
.
s
m
e
t
s
y
s
t
n
e
g
a
i
t
l
u
m
y
b
e
s
u
r
o
f
y
l
t
s
o
m
m
o
o
r
g
n
i
t
e
e
m
A
.
s
e
u
s
s
i
s
s
u
c
s
i
d
:
h
c
r
a
e
s
e
r
y
r
o
t
a
p
i
c
i
t
r
a
p
e
h
t
n
i
s
d
i
a
b
a
l
s
i
h
t
f
o
s
r
e
b
m
e
m
g
n
i
r
e
e
n
i
g
n
E
e
r
a
w
t
f
o
S
e
h
t
d
e
t
i
s
i
v
I
7
9
9
1
r
e
b
m
e
t
p
e
S
n
I
s
s
u
c
s
i
d
o
t
d
e
y
o
l
p
m
e
e
b
n
a
c
a
i
d
e
m
i
t
l
u
m
t
r
a
e
h
t
f
o
e
t
a
t
s
r
e
t
u
p
m
o
C
f
o
t
n
e
m
t
r
a
p
e
d
e
h
t
f
o
t
r
a
p
,
k
r
o
w
t
e
N
h
c
r
a
e
s
e
R
e
h
T
.
s
e
p
y
t
o
t
o
r
p
d
n
a
,
k
r
o
w
g
n
i
o
g
n
o
,
s
r
e
p
a
p
s
s
u
c
s
i
d
o
t
,
)
C
o
U
(
y
r
a
g
l
a
C
f
o
y
t
i
s
r
e
v
i
n
U
e
h
t
t
a
e
c
n
e
i
c
S
y
r
e
v
a
s
i
h
t
s
e
k
a
m
h
c
r
a
e
s
e
r
o
t
h
c
a
o
r
p
p
a
e
v
i
t
a
r
o
b
a
l
l
o
c
e
h
T
.
y
r
a
g
l
a
C
n
i
g
n
i
v
i
l
r
o
f
s
n
o
i
t
a
r
a
p
e
r
p
e
k
a
m
d
n
a
b
o
j
e
h
t
e
k
i
l
h
c
u
m
e
r
e
h
p
s
o
m
t
a
n
a
n
i
k
r
o
w
o
t
e
c
a
l
p
e
l
b
a
t
r
o
f
m
o
c
y
l
t
n
e
r
r
u
c
d
n
a
g
n
u
o
y
s
i
,
w
a
h
S
d
e
r
d
l
i
M
y
b
d
e
d
a
e
h
s
i
p
u
o
r
g
.
p
u
o
r
g
'
d
l
o
'
y
m
n
i
o
t
d
e
s
u
s
a
w
I
r
e
h
t
o
d
n
a
y
r
t
s
u
d
n
i
h
t
i
w
s
e
i
t
g
n
o
r
t
s
s
a
h
t
I
.
g
n
i
d
n
a
p
x
e
n
e
k
o
p
s
e
v
a
h
I
t
i
s
i
v
s
i
h
t
g
n
i
r
u
D
.
C
o
U
e
h
t
n
i
s
t
n
e
m
t
r
a
p
e
d
H
C
R
A
E
S
E
R
E
H
T
y
l
t
n
e
r
r
u
c
e
r
a
o
h
w
,
s
r
e
b
m
e
m
f
f
a
t
s
w
e
n
l
a
r
e
v
e
s
h
t
i
w
t
n
e
g
a
i
t
l
u
m
,
s
p
a
m
t
p
e
c
n
o
c
n
i
h
c
r
a
e
s
e
r
n
i
d
e
v
l
o
v
n
i
n
i
d
n
u
o
r
g
k
c
a
b
y
M
.
c
t
e
,
t
n
e
m
e
g
a
n
a
m
w
o
l
f
k
r
o
w
,
s
m
e
t
s
y
s
:
s
a
e
r
a
e
e
r
h
t
n
i
s
i
n
i
d
e
v
l
o
v
n
i
y
l
t
n
e
r
r
u
c
m
a
I
h
c
r
a
e
s
e
r
e
h
T
S
E
C
N
E
R
E
F
F
I
D
e
c
n
a
t
s
i
d
,
g
n
i
r
e
e
n
i
g
n
e
s
t
n
e
m
e
r
i
u
q
e
r
r
o
f
s
l
e
d
o
m
s
s
e
c
o
r
p
a
n
i
(
s
m
e
t
s
y
s
t
n
e
g
a
i
t
l
u
m
r
o
f
s
e
r
u
t
c
e
t
i
h
c
r
a
d
n
a
,
n
o
i
t
a
c
u
d
e
s
i
h
t
n
e
e
w
t
e
b
s
e
c
n
e
r
e
f
f
i
d
w
e
f
a
e
t
i
u
q
e
r
a
e
r
e
h
T
s
i
e
r
e
h
t
s
a
e
r
a
e
e
r
h
t
l
l
a
n
I
.
)
g
n
i
t
t
e
s
g
n
i
r
u
t
c
a
f
u
n
a
m
.
m
a
d
r
e
t
s
m
A
n
i
t
i
e
t
i
s
r
e
v
i
n
U
e
j
i
r
V
e
h
t
d
n
a
y
t
i
s
r
e
v
i
n
u
s
e
v
i
g
h
c
i
h
w
,
s
t
n
e
d
u
t
s
d
n
a
s
r
e
h
c
r
a
e
s
e
r
h
t
i
w
n
o
i
t
a
r
o
b
a
l
l
o
c
:
C
o
U
e
h
t
t
a
'
g
n
i
t
e
g
d
u
b
'
e
r
o
m
h
c
u
m
s
i
e
r
e
h
t
l
l
a
f
o
t
s
r
i
F
.
s
n
o
i
n
i
p
o
d
n
a
s
d
n
u
o
r
g
k
c
a
b
e
s
r
e
v
i
d
f
o
x
i
m
t
n
a
s
a
e
l
p
a
t
i
,
d
e
e
t
n
a
r
a
u
g
t
o
n
s
i
k
s
e
d
r
u
o
y
n
o
r
e
t
u
p
m
o
c
a
g
n
i
v
a
h
e
n
O
.
h
c
r
a
e
s
e
r
f
o
s
a
e
r
a
e
e
r
h
t
l
l
a
n
i
e
d
a
m
s
i
s
s
e
r
g
o
r
P
,
s
l
l
a
c
e
n
o
h
p
e
l
e
t
l
l
a
,
y
l
r
a
l
i
m
i
S
.
r
o
f
d
i
a
p
e
b
o
t
s
d
e
e
n
e
c
n
a
t
s
i
d
a
d
n
a
l
a
c
o
l
a
h
t
o
b
s
a
t
h
g
u
a
t
g
n
i
e
b
w
o
n
s
i
e
s
r
u
o
c
:
t
e
g
d
u
b
a
n
o
d
e
c
a
l
p
e
b
o
t
d
e
e
n
.
c
t
e
s
e
i
p
o
c
o
t
o
h
p
r
e
g
g
i
b
a
r
e
t
s
e
m
e
s
t
x
e
n
,
t
n
e
m
i
r
e
p
x
e
n
a
l
l
i
t
s
s
i
s
i
h
T
.
e
s
r
u
o
c
y
r
o
t
a
d
n
a
m
e
d
i
w
y
t
i
s
r
e
v
i
n
u
e
h
T
.
l
a
n
o
s
r
e
p
r
o
h
c
r
a
e
s
e
r
r
a
l
i
m
i
s
a
n
i
p
u
t
e
s
e
b
l
l
i
w
)
s
t
n
e
d
u
t
s
f
o
r
e
b
m
u
n
n
i
(
e
s
r
u
o
c
y
r
a
l
a
s
r
u
o
y
t
c
a
p
m
i
n
a
c
h
c
i
h
w
s
r
o
t
c
u
r
t
s
n
i
f
o
s
n
o
i
t
a
u
l
a
v
e
t
n
e
m
t
r
a
p
e
d
e
h
t
f
o
p
u
o
r
G
e
c
a
f
r
e
t
n
I
c
i
n
o
l
o
H
e
h
T
.
r
e
n
n
a
m
7
,
y
a
d
r
e
p
s
r
u
o
h
4
2
e
h
T
.
e
c
n
e
r
e
f
f
i
d
r
e
h
t
o
n
a
e
r
a
e
s
a
e
r
c
n
i
n
o
h
c
r
a
e
s
e
r
e
h
t
n
i
s
e
t
a
p
i
c
i
t
r
a
p
g
n
i
r
e
e
n
i
g
n
E
l
a
c
i
n
a
h
c
e
M
f
o
o
n
:
t
e
s
s
a
t
a
e
r
g
a
s
i
y
t
i
s
r
e
v
i
n
u
e
l
b
i
s
s
e
c
c
a
k
e
e
w
r
e
p
s
y
a
d
d
n
a
,
c
i
t
s
i
l
a
e
r
y
n
a
m
e
d
i
v
o
r
p
y
e
h
t
:
s
m
e
t
s
y
s
t
n
e
g
a
i
t
l
u
m
y
a
m
h
c
i
h
w
s
r
u
o
h
g
n
i
k
r
o
w
o
t
d
e
n
i
f
n
o
c
u
o
y
e
r
a
r
e
g
n
o
l
r
o
f
s
l
e
d
o
m
s
s
e
c
o
r
p
n
o
k
r
o
w
e
h
T
.
s
e
s
a
c
e
s
u
,
e
f
i
l
l
a
e
r
t
o
n
m
'
I
.
s
e
l
u
d
e
h
c
s
h
c
r
a
e
s
e
r
/
k
r
o
w
r
u
o
y
h
t
i
w
t
c
i
l
f
n
o
c
n
o
i
t
a
r
o
b
a
l
l
o
c
t
n
a
s
a
e
l
p
a
s
i
g
n
i
r
e
e
n
i
g
n
e
s
t
n
e
m
e
r
i
u
q
e
r
t
s
u
j
,
y
a
d
a
s
r
u
o
h
4
2
k
r
o
w
d
l
u
o
h
s
e
n
o
t
a
h
t
g
n
i
s
o
p
o
r
p
n
i
p
u
o
r
g
d
l
o
y
m
d
n
a
p
u
o
r
g
t
n
e
r
r
u
c
y
m
n
e
e
w
t
e
b
s
t
i
d
n
a
y
t
i
s
r
e
v
i
n
u
e
h
t
o
t
s
s
e
c
c
a
f
o
d
n
i
k
s
i
h
t
g
n
i
v
a
h
t
a
h
t
n
o
g
n
i
k
r
o
w
m
'
I
,
C
o
U
e
h
t
t
a
b
o
j
y
m
o
t
t
x
e
N
.
m
a
d
r
e
t
s
m
A
t
e
g
,
n
i
p
e
t
s
y
l
f
e
i
r
b
o
t
r
e
i
s
a
e
h
c
u
m
t
i
s
e
k
a
m
t
n
e
m
p
i
u
q
e
e
l
b
a
t
'
n
s
a
w
I
s
a
,
s
d
n
e
k
e
e
w
d
n
a
s
g
n
i
n
e
v
e
e
h
t
n
i
s
i
s
e
h
t
y
m
s
d
n
e
k
e
e
w
n
i
,
.
g
.
e
(
n
i
a
g
a
e
v
a
e
l
d
n
a
,
e
n
o
d
s
g
n
i
h
t
e
m
o
s
t
o
n
s
i
s
i
h
t
e
s
r
u
o
c
f
o
t
u
b
(
e
r
u
t
r
a
p
e
d
y
m
e
r
o
f
e
b
t
i
h
s
i
n
i
f
o
t
I
s
a
d
e
s
i
l
a
r
e
n
e
g
e
b
t
o
n
n
a
c
s
g
n
i
d
n
i
f
e
s
e
h
T
.
)
s
y
a
d
i
l
o
h
d
n
a
.
)
s
c
o
d
t
s
o
p
r
o
f
e
r
u
d
e
c
o
r
p
d
e
d
n
e
m
m
o
c
e
r
a
e
h
t
n
i
s
e
i
t
i
s
r
e
v
i
n
u
t
n
e
r
e
f
f
i
d
n
i
d
e
y
o
l
p
m
e
n
e
e
b
t
'
n
e
v
a
h
.
t
h
g
u
o
h
t
r
o
f
d
o
o
f
e
m
o
s
s
e
v
i
g
t
i
t
u
b
,
s
d
n
a
l
r
e
h
t
e
N
e
s
r
u
o
c
+
+
C
a
t
h
g
u
a
t
e
v
a
h
I
8
9
9
1
f
o
r
e
t
s
e
m
e
s
t
s
r
i
f
e
h
t
n
I
0
5
1
:
e
c
n
e
i
r
e
p
x
e
n
a
e
t
i
u
q
s
a
w
t
a
h
T
.
s
t
n
e
d
u
t
s
r
a
e
y
t
s
r
i
f
r
o
f
m
a
I
(
c
p
a
n
o
k
r
o
w
o
t
e
v
a
h
I
t
a
h
t
s
i
e
n
i
m
f
o
e
v
e
e
p
A
e
r
i
t
n
e
e
h
T
.
+
+
C
f
o
"
+
+
"
e
h
t
h
c
a
e
t
o
t
d
a
h
I
o
h
w
s
t
n
e
d
u
t
s
g
n
i
y
r
t
a
s
i
h
c
i
h
w
,
)
S
O
c
a
M
d
n
a
s
i
r
a
l
o
S
n
o
k
r
o
w
o
t
d
e
s
u
r
o
t
c
u
r
t
s
n
i
r
e
h
t
o
e
h
t
,
s
t
n
e
d
u
t
s
0
0
3
f
o
d
e
t
s
i
s
n
o
c
s
s
a
l
c
,
s
t
o
o
b
e
r
y
l
i
a
d
t
s
o
m
l
a
o
t
d
e
n
g
i
s
e
r
w
o
n
m
a
I
.
e
c
n
e
i
r
e
p
x
e
s
e
s
s
a
l
c
r
u
o
h
t
o
b
d
n
e
e
h
t
n
I
.
s
t
n
e
d
u
t
s
0
5
1
r
e
h
t
o
n
a
t
h
g
u
a
t
t
i
t
u
b
,
l
a
c
i
p
y
t
a
m
'
I
e
b
y
a
M
.
c
t
e
,
s
m
a
r
g
o
r
p
g
n
i
h
s
a
r
c
t
n
a
s
a
e
l
p
a
s
a
w
t
a
h
t
o
s
,
e
g
a
r
e
v
a
l
l
a
r
e
v
o
e
m
a
s
e
h
t
d
a
h
y
m
n
o
k
r
o
w
h
s
i
n
i
f
o
t
e
m
o
h
o
g
I
t
a
h
t
s
r
u
c
c
o
y
l
t
n
e
u
q
e
r
f
s
t
n
e
d
u
t
s
e
h
t
y
b
s
n
o
i
t
a
u
l
a
v
e
)
y
r
o
t
a
d
n
a
m
(
e
h
T
.
e
m
o
c
t
u
o
d
n
a
'
y
l
i
s
a
e
'
e
n
o
d
s
g
n
i
h
t
t
e
g
o
t
t
s
u
j
,
c
a
M
l
a
n
o
s
r
e
p
r
o
f
l
l
a
t
a
d
a
b
t
o
N
.
e
v
i
t
c
u
r
t
s
n
o
c
y
l
t
s
o
m
d
n
a
g
n
i
r
e
t
t
a
l
f
e
r
e
w
y
n
a
m
o
s
y
h
w
r
e
d
n
o
w
e
m
s
e
k
a
m
y
l
l
a
e
r
t
I
.
'
y
l
k
c
i
u
q
'
y
m
(
s
e
t
a
u
d
a
r
g
r
e
d
n
u
r
o
f
e
s
r
u
o
c
a
g
n
i
h
c
a
e
t
e
m
i
t
t
s
r
i
f
e
h
t
.
.
.
s
'
c
p
n
o
k
r
o
w
o
t
e
s
o
o
h
c
e
l
p
o
e
p
,
.
g
.
e
,
s
p
u
o
r
g
l
l
a
m
s
g
n
i
h
c
a
e
t
n
i
s
i
e
c
n
e
i
r
e
p
x
e
s
u
o
i
v
e
r
p
.
)
g
n
i
t
t
e
s
r
a
n
i
m
e
s
a
n
i
s
t
n
e
d
u
t
s
r
a
e
y
h
t
r
u
o
f
December 1998
197
NVKI-Nieuwsbrief
:
e
r
u
t
l
u
c
d
e
s
a
b
e
l
i
b
o
m
o
t
u
a
n
a
s
i
y
r
a
g
l
a
C
.
m
a
d
r
e
t
s
m
A
e
r
a
d
a
o
r
b
a
y
a
t
s
s
i
h
t
g
n
i
r
u
d
t
e
g
I
s
e
c
n
e
i
r
e
p
x
e
e
h
T
r
o
,
h
g
u
o
n
e
t
n
e
u
q
e
r
f
t
o
n
t
u
b
,
e
l
b
a
l
i
a
v
a
s
i
t
i
s
n
a
r
t
c
i
l
b
u
p
d
n
a
h
c
r
a
e
s
e
r
n
o
w
e
i
v
r
e
d
a
o
r
b
a
e
m
e
v
i
g
d
n
a
e
s
r
e
v
i
d
e
s
r
u
o
c
f
O
.
l
u
f
e
s
u
y
l
l
a
e
r
t
i
e
k
a
m
o
t
,
h
g
u
o
n
e
e
t
a
l
l
i
t
n
u
t
n
e
r
e
f
f
i
d
a
o
t
g
n
i
t
p
a
d
a
,
s
i
h
t
o
t
n
o
i
t
i
d
d
a
n
I
.
s
e
i
t
e
i
c
o
s
e
l
c
y
c
i
b
y
l
n
o
,
y
t
i
c
e
h
t
r
e
v
o
l
l
a
s
y
a
w
h
g
i
h
,
s
t
e
e
r
t
s
y
t
p
m
e
l
a
c
i
m
o
c
o
t
d
a
e
l
s
a
h
d
n
a
,
n
u
f
s
i
)
.
c
t
e
,
s
m
o
t
s
u
c
l
a
c
o
l
e
h
t
,
s
t
c
u
d
o
r
p
r
o
f
s
e
m
a
n
d
n
a
r
b
w
e
n
g
n
i
n
r
a
e
l
(
e
r
u
t
l
u
c
o
Ct
d
a
e
l
s
r
a
c
e
s
e
h
t
l
l
a
t
u
b
,
s
r
a
c
g
n
i
s
u
r
o
f
t
p
o
e
l
p
o
e
p
s
e
k
a
m
0
4
s
a
w
o
l
s
a
s
e
r
u
t
a
r
e
p
m
e
t
h
t
i
w
s
r
e
t
n
i
w
g
n
i
v
a
h
n
o
d
n
e
p
e
d
y
l
d
r
a
h
I
,
d
e
v
o
r
p
m
i
h
s
i
l
g
n
E
h
c
t
u
D
a
.
s
g
n
i
d
n
a
t
s
r
e
d
n
u
s
i
m
l
e
e
f
n
e
v
e
I
d
n
a
,
y
t
i
c
s
u
o
m
y
n
o
n
a
n
a
s
e
t
a
e
r
c
s
i
h
T
.
s
a
e
r
a
e
h
t
,
l
l
a
f
o
t
n
a
t
r
o
p
m
i
t
s
o
m
d
n
A
.
l
l
a
t
a
y
r
a
n
o
i
t
c
i
d
a
o
t
t
e
g
o
t
b
r
u
b
u
s
a
h
g
u
o
r
h
t
g
n
i
k
l
a
w
n
e
h
w
'
t
c
e
p
s
u
s
'
,
k
c
a
b
d
i
a
l
,
d
e
x
a
l
e
r
y
r
e
v
e
r
a
r
a
f
o
s
t
e
m
e
v
'
I
s
n
a
i
d
a
n
a
C
n
i
e
v
i
l
I
e
t
i
u
s
t
n
e
m
e
s
a
b
e
h
t
,
y
l
e
t
a
n
u
t
r
o
F
.
e
c
a
l
p
r
a
l
u
c
i
t
r
a
p
.
d
o
o
g
s
i
e
r
e
h
e
f
i
l
o
s
;
e
l
p
o
e
p
s
u
o
r
u
o
m
u
h
,
y
t
i
s
r
e
v
i
n
u
f
o
e
c
n
a
t
s
i
d
g
n
i
k
l
a
w
t
a
d
e
t
a
c
o
l
y
l
t
n
e
i
n
e
v
n
o
c
s
i
m
'
I
.
b
u
h
t
i
s
n
a
r
t
c
i
l
b
u
p
a
d
n
a
s
l
l
a
m
g
n
i
p
p
o
h
s
f
o
e
l
p
u
o
c
a
a
s
i
h
c
i
h
w
,
t
r
o
p
r
i
a
e
h
t
o
t
o
g
o
t
d
e
e
n
I
l
i
t
n
u
.
K
.
O
y
l
l
a
u
s
u
.
.
.
t
i
s
n
a
r
t
c
i
l
b
u
p
a
i
v
e
g
n
e
l
l
a
h
c
n
a
i
d
a
n
a
C
e
h
t
f
o
y
t
i
m
i
x
o
r
p
e
h
t
s
i
y
r
a
g
l
a
C
f
o
s
u
l
p
g
i
b
A
y
l
e
t
u
l
o
s
b
a
h
t
i
w
e
g
n
a
r
n
i
a
t
n
u
o
m
n
w
o
n
e
r
e
h
t
,
s
e
i
k
c
o
R
0
9
n
i
h
t
i
W
.
s
e
k
a
l
d
n
a
s
n
i
a
t
n
u
o
m
,
s
y
e
l
l
a
v
t
n
e
c
i
f
i
n
g
a
m
l
a
n
o
i
t
a
n
e
h
t
f
o
e
n
o
f
o
e
l
d
d
i
m
e
h
t
n
i
e
b
n
a
c
u
o
y
s
e
t
u
n
i
m
:
w
o
n
s
s
'
e
r
e
h
t
n
e
h
w
d
n
a
(
g
n
i
k
i
h
y
n
a
m
e
r
e
h
w
,
s
k
r
a
p
:
o
o
z
a
n
i
g
n
i
k
l
a
w
e
k
i
l
s
l
e
e
f
t
I
.
e
l
b
a
l
i
a
v
a
e
r
a
s
l
i
a
r
t
)
g
n
i
i
k
s
e
l
g
n
i
s
a
n
o
d
e
r
e
t
n
u
o
c
n
e
e
b
n
a
c
e
f
i
l
d
l
i
w
f
o
s
e
i
c
e
p
s
y
n
a
m
.
t
e
y
r
a
e
b
a
t
e
m
t
'
n
e
v
a
h
I
y
l
e
t
a
n
u
t
r
o
f
e
k
i
h
k
r
o
w
t
e
N
h
c
r
a
e
s
e
R
g
n
i
r
e
e
n
i
g
n
E
e
r
a
w
t
f
o
S
f
o
(
o
h
w
d
n
e
i
r
f
l
r
i
g
y
m
m
o
r
f
y
a
w
a
r
a
f
m
'
I
y
l
e
t
a
n
u
t
r
o
f
n
U
e
c
n
e
i
c
S
r
e
t
u
p
m
o
C
f
o
t
n
e
m
t
r
a
p
e
D
e
k
a
m
s
e
o
d
s
i
h
T
.
n
e
e
v
l
e
t
s
m
A
n
i
b
o
j
r
e
h
d
e
u
n
i
t
n
o
c
)
e
s
r
u
o
c
y
r
a
g
l
a
C
f
o
y
t
i
s
r
e
v
i
n
U
e
n
o
h
p
e
l
e
t
(
l
l
e
w
s
a
e
v
i
s
n
e
p
x
e
d
n
a
,
s
e
m
i
t
t
a
t
l
u
c
i
f
f
i
d
e
f
i
l
a
d
a
n
a
C
,
y
r
a
g
l
a
C
o
s
l
a
s
'
t
a
h
T
.
)
s
h
t
n
o
m
o
w
t
y
r
e
v
e
h
t
r
o
f
&
k
c
a
b
g
n
i
y
l
f
,
s
l
l
a
c
l
n
.
u
v
.
s
c
@
k
e
i
n
/
a
c
.
y
r
a
g
l
a
c
u
.
c
s
p
c
@
k
e
i
n
k
o
o
l
d
n
a
y
r
a
u
n
a
J
n
i
m
a
d
r
e
t
s
m
A
o
t
n
r
u
t
e
r
l
l
'
I
n
o
s
a
e
r
e
h
t
,
l
a
r
e
n
e
g
n
I
.
s
d
n
a
l
r
e
h
t
e
N
e
h
T
n
i
e
g
n
e
l
l
a
h
c
c
i
f
i
t
n
e
i
c
s
a
r
o
f
t
u
o
k
r
o
w
,
D
h
P
u
o
y
h
s
i
n
i
f
(
l
l
e
w
e
r
a
p
e
r
p
u
o
y
f
i
t
a
h
t
y
a
s
d
'
I
g
n
i
k
r
o
w
)
.
c
t
e
,
r
e
h
t
o
t
n
a
c
i
f
i
n
g
i
s
r
u
o
y
t
e
e
m
o
t
e
l
u
d
e
h
c
s
a
.
d
e
d
n
e
m
m
o
c
e
r
e
b
n
a
c
d
a
o
r
b
a
n
o
i
t
i
s
o
p
c
o
d
t
s
o
p
a
n
i
s
d
r
a
a
g
n
j
i
W
k
e
i
N
Section-Editor
Radboud Winkels
s
a
h
h
s
i
l
g
n
E
y
m
e
s
r
u
o
c
f
O
n
a
i
r
t
s
e
d
e
p
o
n
t
s
o
m
l
a
d
n
a
,
e
s
u
l
a
n
o
i
t
a
e
r
c
e
r
r
o
f
s
y
a
w
h
t
a
p
PROSA - EEN COMPUTERPROGRAMMA
ALS INSTRUCTIEOMGEVING
VOOR HET ONDERSTEUNEN VAN HET
LEREN OPLOSSEN VAN EEN JURIDISCHE
CASUS
SECTION KNOWLEDGE
SYSTEMS IN LAW AND
COMPUTER SCIENCE
December 1998
198
NVKI-Nieuwsbrief
S
E
C
N
E
I
R
E
P
X
E
n
i
g
n
i
v
i
l
m
o
r
f
t
n
e
r
e
f
f
i
d
e
t
i
u
q
s
i
y
r
a
g
l
a
C
n
i
g
n
i
v
i
L
░
Lezing door Antoinette Muntjewerff,
Technische Universiteit Twente
(1965), die beschrijft hoe een omgeving dient
ingericht te worden om leerprocessen voor een
gegeven doel te stimuleren. Al snel bleek deze
theorie te globaal om er een afdoende
instructietheorie uit te kunnen afleiden en werd
gekeken naar Merrill (1983) die ze verder heeft
gespecificeerd. Deze laatste verfijnde de
instructies om een bepaald leerdoel te bereiken
tot op het niveau van een les. Noch de modellen
van Gagné als Merrill besteden echter aandacht
aan het motivatieaspect binnen het leerproces:
daarom werd tenslotte gekeken naar het ARCSmodel van Keller & Suzuki (1988) waarbinnen
motivatie wel is opgenomen.
Verlag Raf van Kuyck & Stijn Debaene
Katholieke Universiteit Leuven
In de rechtspraktijk is het oplossen van juridische
casussen een belangrijke bezigheid. Om die reden
moet het aanleren van de methodiek hiertoe een
plaats vinden in het juridisch onderwijs. Binnen
het project PROSA (Probleemsituaties op het
terrein van het Administratief procesrecht), dat
kadert in het promotieonderzoek van Antoinette
Muntjewerff, wordt een computerprogramma
ontwikkeld dat moeilijkheden die studenten
ondervinden bij het oplossen van juridische
casussen moet verhelpen.
De toepassing wordt ontwikkeld in het pakket
Authorware, een tool voor de ontwikkeling van
interactieve educatieve toepassingen met onder
andere een goed beheer van gebruikersgegevens en
hun vorderingen.
Vervolgens moet de taak van het juridisch casus
oplossen worden geanalyseerd om aldus te
kunnen specificieren wat wel en wat niet in het
instructiemodel op te nemen. Tenslotte werd
gekeken naar de moeilijkheden van casus
oplossen zelf. Zo werd bijvoorbeeld een
vergelijkende studie uitgevoerd waarbij zowel
studenten als experten een casus moesten
oplossen. Een frappant resultaat was dat
experten in een bepaald juridisch domein toch
moeilijkheden ondervinden in een ander domein.
De hypothese dat de problemen bij het oplossen
van casussen vooral voortkomen uit een gebrek
aan methode (Crombag) dient dus aangevuld te
worden. Zo blijkt naast de methode ook de
specifieke domeinkennis of de inhoudelijke
ondersteuning van primordiaal belang.
De presentatie gebeurde in twee luiken: eerst
werden de ontwerpbeslissingen van het project
verduidelijkt en vervolgens werd een korte
demonstratie van PROSA gegeven.
Een geautomatiseerd leerproject staat of valt met
gefundeerde theoretische aannames betreffende
leer- en instructietheorie. Daarnaast zijn een
analyse van de taak van het juridisch casus
oplossen en een inventarisatie van de
moeilijkheden hierbij noodzakelijk om de leer- en
instructietheorie te specificeren. Vooreerst is
uitgegaan van de theorie, opgesteld door Gagné
Beide aspecten (methode en inhoudelijke
ondersteuning) dienen bijgevolg opgenomen te
worden in een systeem dat casus oplossen wil
aanleren aan studenten. Daarom werd PROSA
gebouwd in twee presentatielagen: de presentatie
van de inhoud en de werkvorm enerzijds en de
presentatie van de ondersteuning anderzijds. Deze
twee lagen zijn aanwezig voor elk van de drie
taakonderdelen: de casus, de juridische oplossing
en de rechtsregel. Samen resulteert dit in een
kleurrijk computerscherm bestaande uit zes
onderscheiden vensters.
feiten of het eigenlijke probleem behoort aldus
niet tot het systeem. (Bijvoorbeeld: er zitten geen
adders onder het gras; een virtueel interview met
een denkbeeldige cliënt is niet nodig.) De
ondersteuning bij dit taakonderdeel bestaat dan
onder andere uit een begrippenlijst, informatie
over wat een casus is, het structureren ervan
enzovoort.
Het tweede taakonderdeel bevat de presentatie
van rechtsregels. Binnen dit venster dient de
student de naar zijn mening van toepassing
zijnde rechtsregel te selecteren. De ondersteuning
bij dit taakonderdeel bestaat onder andere uit
uitleg betreffende de ordening van rechtsregels,
het goed en efficiënt lezen van rechtsregels
enzovoort. Door middel van de selectie van delen
van de casus en de rechtsregels dient de student
in het derde taakonderdeel - 'construeer
juridische oplossing' - een oplossing voor de casus
te construeren. Dit gebeurt door koppeling ('is' of
Het eerste taakonderdeel bevat de presentatie van
de juridische casus. De juridische casussen zijn
geordend
naar
onderwerp
binnen
het
bestuursrecht en binnen elk onderwerp naar
moeilijkheidsgraad. De casussen bevatten niet
meer en niet minder dan alle gegevens die nodig
zijn om tot een oplossing te komen. Ook de vraag
horende bij de casus wordt gesteld. Het vinden van
NVKI-Nieuwsbrief
199
December 1998
hard cases in recht en rechtsinformatica.
'is niet') van deze delen. Bij deze taak zijn zowel
het proces als het product van belang: ten eerste
dient de student één van de drie voorziene routes
(waarvan één aanbevolen) bij het oplossen van een
casus te volgen (proces) en ten tweede dient hij
uiteraard te komen tot een correcte oplossing van
de casus, bestaande uit een aantal onderdelen
(koppelingen) in een bepaalde volgorde en een
antwoord op de gestelde vraag.
Hage heeft ooit beweerd dat als een toepassing van
rechtsinformatica gebruikt wordt bij het nemen van
een juridische beslissing, deze toepassing aan twee
eisen dient te voldoen: het systeem moet
betrouwbaar zijn en het moet non-triviale
oplossingen bieden. Betrouwbaarheid betekent dat
de conclusies die het systeem trekt correct en
juridische houdbaar zijn. Non-trivialiteit betekent
dat het systeem geen conclusies produceert die de
gebruiker zelf ook had kunnen trekken. Men zou ook
meer of andere eisen aan een toepassing van
rechtsinformatica kunnen stellen. De genoemde
eisen vormen echter vaak het uitgangspunt van
onderzoek op het gebied van de rechtsinformatica.
Een klassiek probleem in de rechtsinformatica is
daarom hoe men ervoor kan zorgen dat systemen aan
deze eisen kunnen voldoen.
Op elk moment in een PROSA-sessie kan de
gebruiker geconfronteerd worden, al dan niet op
aanvraag, met feedback. Zo kan de student
feedback betreffende het door hem gevolgde
proces en het bereikte product opvragen. Het
product wordt vergeleken met de normoplossing:
zijn alle onderdelen aanwezig en in de juiste
volgorde en is het antwoord juist?
Globaal houdt PROSA tevens bij welke casussen de
student reeds oploste. Ook hieromtrent wordt
feedback gegeven, bijvoorbeeld in verband met de
graduele stijging van de moeilijkheidsgraad.
Kernprobleem van de rechtsinformatica is de
openheid van het recht. Dat wil zeggen: de
juridische kennis op basis waarvan conclusies
getrokken worden, is niet statisch, maar kan
veranderen in de loop van een proces of geschil. In
de rechtstheorie spreekt men dan van 'hard cases' of
moeilijke gevallen. Voor betrouwbare juridische
informatiesystemen heeft men echter juridische
kennis nodig die juist wél statisch is. De
betrouwbaarheid komt bij hard cases dus in het
geding. Als men echter toepassingen zoekt in
rechtsgebieden waarop juridische kennis wél statisch
is, in de rechtstheorie spreekt men dan van 'clear
cases' of eenvoudige gevallen, bestaat het risico dat
het systeem triviale conclusies trekt. Het klassieke
probleem van de hard en clear cases houdt de
rechtsinformatica dan ook al jaren bezig.
Op het einde van de lezing werd nog een korte
demonstratie gegeven van PROSA. Een casus van
gemiddelde moeilijkheidsgraad werd gedeeltelijk
opgelost.
RECHTSINFORMATICA EN
HARD CASES
Ronald van den Hoogen
Recht en Informatisering, UU
Verslag van de lezing van Ronald Leenes tijdens de
Jurix-vergadering van 23 oktober 1998 op de
Universiteit Twente, getiteld: Hercules of Karneades:
In de rechtsinformatica heeft men op verschillende
manieren geprobeerd dit probleem op te lossen.
Meestal
wordt
daarbij
aangesloten
bij
rechtstheoretische opvattingen over hetgeen 'recht' is.
Leenes concludeert dat de klassieke rechtstheoretische
opvattingen van Hart of Dworkin, waarin recht als een
eindige verzameling regels wordt gezien of juist als
een open systeem waarin rechtsbeginselen een rol
kunnen spelen, geen afdoende oplossing bieden voor
het geschetste probleem. Hij sluit daarom aan bij de
dialoogbenadering van Alexy, Aarnio, Peczenik en
Lodder. Dat wil zeggen dat recht beschouwd wordt
als een dynamisch proces van juridische
betekenisgeving dat de vorm heeft van een dialoog. In
die dialoog kunnen nieuwe regels en interpretaties van
regels worden ingevoerd en vindt discussie plaats
over betekenis van feiten. Het onderscheid tussen hard
en clear cases is in deze opvatting niet zozeer een
NVKI-Nieuwsbrief
onderscheid in de kenmerken van een probleem,
maar in de argumenten die partijen in een dialoog
naar voren brengen: hard cases worden gemaakt. De
dialogen worden beheerst door juridische
'spelregels'. Het handhaven van die spelregels
beschouwt Leenes als een mogelijke toepassing voor
een rechtsinformaticasysteem. Deze toepassing
wordt een moderator genoemd.
Een moderator kan een juridische dialoog
begeleiden en waarborgen dat een rationele uitkomst
wordt bereikt. Een marginale toetsing van
argumenten wordt uitgevoerd in plaats van de lastige
inhoudelijke
toetsing
die
traditionele
rechtsinformatica-toepassingen uitvoeren. Het
ontwikkelen van elektronische moderatoren staat
nog in de kinderschoenen en het onderzoek van
Leenes kan aan deze ontwikkeling een bijdrage
200
December 1998
leveren. Het onderzoek toont de problemen en de
mogelijkheden van moderatoren in het recht. Leenes
heeft daartoe in zijn onderzoek een analyse gemaakt
van een concrete juridische procedure, de
dagvaardingsprocedure voor de rechtbank. Deze
analyse werpt een licht op de regels ten aanzien van
de zetten van de spelers in het taalspel. Het civiele
bewijsrecht speelt daarbij een belangrijke rol.
Daarnaast heeft een analyse van een praktijkgeval
plaatsgevonden.
die stelt, bewijst' lang niet altijd op gaat. De meest
belangwekkende conclusie van Leenes is misschien
wel dat de partijen de discussie niet zonder de
inbreng van de rechter tot een goed einde kunnen
brengen.
Een moderator voor de dagvaardingsprocedure zou
een marginale toetsing kunnen verrichten van het
materiële recht en kunnen controleren 'of er bewijs is
en door wie het geleverd is' en 'of het bewijs
houdbaar is'. Leenes geeft toe dat hiermee nog altijd
niet voldaan is aan de eis van betrouwbaarheid voor
rechtsinformaticatoepassingen. Zijn onderzoek levert
vooral bouwstenen voor een realistisch model voor
moderatoren. Daarnaast
beschrijft het de
dagvaardingsprocedure op hoofdlijnen, geeft het de
aanzet tot de ontwikkelingen van een catalogus voor
de bewijslastverdeling en maakt het duidelijk wat de
rol van de rechter is en geeft het een beter inzicht in
juridische begrippen als claim, erkentenis,
bewijsaanbod, betwisten, betwijfelen en herroeping.
Verder onderzoek kan wellicht duidelijk maken
welke mogelijkheden er zijn voor moderatoren in het
recht en wat de belangrijkste problemen zijn die nog
opgelost moeten worden.
Uit dit onderzoek is onder andere gebleken dat het er
in de juridische praktijk waarin argumenten worden
uitgewisseld vaak anders aan toe gaat dan men op
basis van de regels in de modellen als die van Lodder
en Gordon zou verwachten. Soms blijkt bijvoorbeeld
dat de ene partij een groot aantal zetten tegelijkertijd
doet en dat de andere partij dan reageert met een groot
aantal tegenzetten: niet bepaald de manier waarop een
normaal 'spel' verloopt. Uit het praktijkgeval waarin
de vraag 'wie moet de eigendom bewijzen van een
tenthuisje?' centraal stond, bleek dat de discussie op
verschillende niveaus plaatsvindt. Er bleek zowel
discussie over de feiten als over het recht en de
procedure te bestaan. Daarnaast bleek dat spelers
verschillende rollen en bevoegdheden kunnen hebben
en dat de hoofdregel van het civiele bewijsrecht 'hij
Jammer is het overigens, als ik met een persoonlijke
opmerking mag besluiten, dat Leenes geen aandacht
besteedt aan het internationale recht. Met name artikel
6 EVRM kan in mijn ogen, ook voor de
dagvaardingsprocedure, een belangrijke rol gaan
spelen bij de ontwikkeling van toepassingen van de
rechtsinformatica omdat dit artikel een interessante
dubbelrol speelt in de toepassing en beoordeling van
recht in een concreet geval. Het artikel, waarin het
recht op een eerlijk proces beschreven wordt, fungeert
namelijk als rechtsbeginsel ter aanvulling op het
Nederlandse recht, maar ook als rechtsregel van een
hogere orde, waaraan het Nederlandse (proces)recht
ondergeschikt is. Meestal wordt het internationale
recht in het rechtsinformatica-onderzoek achterwege
gelaten en worden rechtsbeginselen, als deze al van
belang worden geacht, in de eerste betekenis gebruikt,
terwijl de juridische ontwikkeling juist meer in de
richting van de tweede betekenins wijst.
Artificial Intelligence & Audit Automation
Verslag door Arno R. Lodder
Computer/Law Institute
Vrije Universiteit Amsterdam
Tom van Engers is werkzaam bij de
Projectorganisatie AI & AA (Artificial
Intelligence & Audit Automation) van de
Belastingdienst en is daar verantwoordelijk voor
het researchbeleid. De Projectorganisatie werd
medio jaren tachtig opgericht op het moment dat
de AI zich begon af te tekenen als veelbelovende
technologie. Het was indertijd de bedoeling dat
de Projectorganisatie twee jaar zou blijven
bestaan, maar inmiddels is ze toe aan de viering
van haar tweede lustrum. De vooralsnog door AI
& AA ontwikkelde (kennis)systemen worden
door
de
Belastingdienst-medewerkers
geaccepteerd, daadwerkelijk gebruikt en hebben
een duidelijke meerwaarde. Het onderwerp van
de bijeenkomst was het POWER-project
(Programma
Ondersteuning
Wet
En
Regelgeving) waar een researchgroep momenteel
in samenwerking met onder meer O&I
management partners aan werkt. Verder is er op
deelterreinen samenwerking met de KUB (Leda
en ontologieën) en de UvA (ontologieën).
Leenes, R.E. (1998). Hercules of Karneades; Hard
Cases in Recht en Rechtsinformatica. Enschede:
Twente University Press.
POWER - PROGRAMMA
ONDERSTEUNING WET EN
REGELGEVING
Tom M. van Engers
NVKI-Nieuwsbrief
201
December 1998
toetsen op de consistentie is een ander doel. Bij de
ontwikkeling van het kennissysteem VVV-IBR
(Verrekening, Vrijstelling en Verliescompensatie
in het Internationaal BelastingRecht, waarbij
compensatie voor ondernemingen in verband met
in het buitenland betaalde belasting kan worden
bepaald, waarmee dubbele belastingheffing
wordt voorkomen) heeft het modelleren van de
voorgenomen wetgeving geleid tot meer
consistentie. De syllabus die als fiscaal-technische
achtergrond bij het kennissysteem diende, is als
aanschrijving bij de wet opgenomen. Ook het
simuleren
van
de
gevolgen
van
beleidsbeslissingen en een verbeterde voorlichting
aan de belastingbetaler streeft men na.
Er zijn verschillende redenen waarom het
POWER-project van start is gegaan. In de eerste
plaats is ondersteuning bij het maken en uitvoeren
van de complexe wet- en regelgeving op het terrein
van het belastingrecht gewenst. Een tweede reden
is gelegen in het feit dat specialistische kennis vaak
beperkt is tot enkele experts. Wanneer deze
vertrekken is daarmee ook de betreffende kennis
uit de organisatie verdwenen. Als derde kan
worden genoemd het streven om misbruik en
oneigenlijk gebruik van belastingwetgeving tegen
te gaan. Dit is eigenlijk alleen goed mogelijk
wanneer de regelgeving voor de belastingbetaler
begrijpelijk en voor de Belastingdienst uitvoerbaar
is. Tenslotte is het nu vaak ontbrekende inzicht in
effecten van gewijzigde regelgeving onontbeerlijk.
Voor al de genoemde onderwerpen biedt de
technologie,
waaronder
AI-technieken,
mogelijkheden tot verbetering en optimalisering.
Het POWER-project is momenteel nog in het
beginstadium. Hierbij is vooral aandacht voor de
ondersteuning van de uitvoering van
belastingwetgeving. Niet alleen wordt het
redeneren met wet- en regelgeving (m.n.
uitvoerende dienst) gemodelleerd, ook aan
redeneren over wet- en regelgeving (m.n.
wetgever) besteedt men aandacht. De vertaling in
programmatuur ligt daarmee dicht bij de
wetgeving. Bij het redeneren over wet- en
regelgeving moeten ondermeer inconsistenties,
vaagheden en dergelijke worden opgespoord. Een
aardig voorbeeld hiervan, een zogenaamde live
lock (cirkel-redenering) treffen we bijvoorbeeld
aan in het Loonbelasting-domein:
1. Als tariefgroep is 2 dan is de belastingvrije som
de basisaftrek + de bovenbasisaftrek (art. 20 lid
2)
voorkomen. Dit is van belang omdat voor een
eenduidige uitvoering van wetgeving door de
verschillende uitvoerings-organisaties overeenstemming over de betekenis van begrippen nodig is.
In eerste instantie was het de bedoeling om mei
volgend jaar alle stappen uitgewerkt te hebben
waarna de uitvoering van de in het programma
opgenomen stappen ter hand zou worden genomen.
In het traject tot dit programma was voorzien in een
beperkte proefneming waarbij een aantal stappen in
een beperkt toepassingsdomein zouden worden
genomen (stap 1 genereren standaardspecificaties;
stap 2 toetsen consistenties; stap 3 uitvoeren
simulaties, etc.). Echter, het POWER-project is als
proefneming binnen het kader van de wetgeving
21ste eeuw gepositioneerd en daarom zal
proefneming nu plaatsvinden op een onderwerp
binnen deze nieuwe wetgeving. Het betekent ook dat
delen al geoperationaliseerd moeten worden nog
voordat alle stappen volledig zijn uitgewerkt.
De doelstellingen van het POWER-project zijn
divers. Zo wil men trachten een transparante
vertaling van wet- en regelgeving naar
beslissingsondersteunende (kennis)applicaties te
bewerkstelligen. Op dit moment kunnen
specificaties voor dit type geautomatiseerde
hulpmiddelen niet altijd op eenvoudige wijze
worden terug herleid tot de bron in wet- en
regelgeving. Daarom kan het gebeuren dat de
grond van een beslissing niet direct duidelijk is.
Ook is bij aanpassing van wet- en regelgeving dan
moeilijk in te schatten op welke plaatsen
wijzigingen nodig zijn. De wet- en regelgeving
2. In tariefgroep 2 wordt ingedeeld degene die de
basisaftrek en de bovenbasisaftrek geniet (art. 22
lid 1).
Als de methode zoals deze in POWER wordt
ontwikkeld werkt, dan kan de vigerende
vertaalsystematiek aanzienlijk worden vereenvoudigd. De aandacht voor het optimaliseren van de
kennisinfrastructuur is gegeven deze doelstelling
(kennismanagement) niet verwonderlijk.
Als punten voor discussie werd naar voren gebracht:
traject van informeel naar semi-formeel; traject van
semi-formeel naar formeel; gegevensstandaardisatie
en (her)gebruik - bv. de term pensioen heeft
verschillende betekenissen binnen de sociale
zekerheids- en de fiscale wetgeving; jurisprudentie. In
de discussie werden deze en andere onderwerpen aan
de orde gesteld. Ik zal er hieronder enkele uitlichten.
Onder standaardspecificatie wordt begrepen het
definiëren en beschrijven van begrippen die in de wet
NVKI-Nieuwsbrief
202
December 1998
R
E
B
M
U
N
Y
T
I
S
R
E
V
I
N
U
D
N
A
M
A
R
G
O
R
P
4
1
a
c
i
t
a
m
r
o
f
n
I
Een heftige discussie had als onderwerp de
jurisprudentie. Als inzage in beleid en eerdere
beslissingen van de Belastingdienst wordt gevraagd
wordt dit nochtans door de Belastingdienst geweigerd
met als enige argument dat ze zelf de beschikking
over die informatie ook niet hebben. Wanneer door
POWER deze informatie voor de medewerkers
toegankelijk wordt gemaakt gaat het door de
Belastingdienst gebruikte argument niet meer op. De
vraag was of er problemen te verwachten zijn
wanneer de jurisprudentie geautomatiseerd wordt
opgeslagen. Hoewel Van Engers niet kan overzien of
er in dat opzicht problemen zijn te verwachten, hoopt
hij dat dit stadium in ieder geval bereikt wordt.
m
a
d
r
e
t
s
m
A
n
a
V
t
i
e
t
i
s
r
e
v
i
n
U
0
2
e
i
t
n
e
g
i
l
l
e
t
n
I
e
g
i
t
a
m
t
s
n
u
K
m
a
d
r
e
t
s
m
A
n
a
V
t
i
e
t
i
s
r
e
v
i
n
U
3
2
a
c
i
t
a
m
r
o
f
n
I
t
i
e
t
i
s
r
e
v
i
n
U
e
j
i
r
V
6
e
i
t
n
e
g
i
l
l
e
t
n
I
e
g
i
t
a
m
t
s
n
u
K
t
i
e
t
i
s
r
e
v
i
n
U
e
j
i
r
V
0
5
a
c
i
t
a
m
r
o
f
n
I
t
h
c
e
r
t
U
t
i
e
t
i
s
r
e
v
i
n
U
7
1
e
i
t
n
e
g
i
l
l
e
t
n
I
e
g
i
t
a
m
t
s
n
u
K
e
v
e
i
t
i
n
g
o
C
t
h
c
e
r
t
U
t
i
e
t
i
s
r
e
v
i
n
U
Het POWER-project is ambitieus. Het richt zich niet
alleen op ondersteuning van uitvoerders van
wetgeving, maar ook op makers ervan. Verder wordt
de
kennisinfrastructuur
geoptimaliseerd
en
gehanteerde begrippen gestandaardiseerd. Wat het
project voor rechtsinformatici zo interessant maakt is
dat grootschalig diverse onderzoeksonderwerpen
(kennissystemen, kennisrepresentatie, ontologieën)
praktisch worden verwerkt en ingezet ter facilitering
van juristen. Momenteel is een eerste stap gezet op
een op zich veelbelovende weg die uiteindelijk moet
leiden tot een optimale inzet van AI en IT ten behoeve
van juristen, oftewel rechtsinformatica in optima
forma. Om met de woorden van de spreker te spreken:
'Laten we hopen dat we daar uitkomen!'.
1
1
e
i
g
o
l
o
n
h
c
e
t
s
i
n
n
e
K
t
h
c
i
r
t
s
a
a
M
t
i
e
t
i
s
r
e
v
i
n
U
4
1
a
c
i
t
a
m
r
o
f
n
I
n
e
g
e
m
j
i
N
t
i
e
t
i
s
r
e
v
i
n
U
e
k
e
i
l
o
h
t
a
K
9
2
a
c
i
t
a
m
r
o
f
n
I
n
e
g
n
i
n
o
r
G
t
i
e
t
i
s
r
e
v
i
n
u
s
k
j
i
R
1
1
a
c
i
t
a
m
r
o
f
n
I
n
e
d
i
e
L
t
i
e
t
i
s
r
e
v
i
n
U
END OF SECTION
CALL FOR PARTICIPATION
ARTIFICIAL INTELLIGENCE AND
BEYOND
KNOWLEDGE SYSTEMS
IN LAW AND
COMPUTER SCIENCE
a joint program by Flanders Language Valley
Education and K.U.Leuven Campus Kortrijk
Faculty of Science
Monthly : November - May 1998 - 1999
OBJECTIVES
E
H
T
N
I
S
T
N
E
D
U
T
S
I
A
W
E
N
Artificial Intelligence - the science that builds
intelligent agents and artifacts - has developed a
number of exciting new techniques and applications
over the last decade. The FLV-KULAK seminars
will give a survey of these achievements.
Participants meet in half-day seminars, each of
which focuses on one sub-area of artificial
intelligence. The areas include genetic algorithms,
neural networks, intelligent software agents,
mainstream artificial intelligence and evolving
S
D
N
A
L
R
E
H
T
E
N
a
m
t
s
o
P
c
i
r
E
d
n
a
s
n
o
m
e
l
l
e
H
e
k
o
J
.
8
9
9
1
,
1
1
r
e
b
m
e
c
e
D
r
e
p
e
l
b
a
t
d
e
t
a
d
p
U
NVKI-Nieuwsbrief
203
December 1998
hardware. Each seminar will address both theory and
applications and will be taught by leading AI
researchers and practitioners from Belgium and
abroad. Therefore the series should be of interest to
both industry and academia.
Starlab, Riverland, Zaventem
Module 4 - 03.02.1999 :
Neural Networks By Prof. Dr. Joos Vandewalle,
Kuleuven
Module 5 - 03.03.1999 :
Genetic Algorithms By
Manderick, VUB
TARGET AUDIENCE
This series is aimed at engineers, computer scientists,
linguists, R&D managers and other scientists
interested in state-of-the-art artificial intelligence and
its applications. It is also recommended to
post-graduate and doctoral students in AI-related
fields.
Prof.
Dr.
Bernard
Module 6 - 14.04.1999 :
Introduction To Artificial Life And Swarm
Intelligence By Prof. Dr. Marco Dorigo, ULB
Module 7 - 05.05.1999 : Morning Seminar :
Virtual And Augmented Reality As Integrator of
Knowledge And Information
Technology By Prof. Dr. Fernand Van Damme,
Bikit and UGent
PROGRAMME
See : http://www.kulak.ac.be/facult/wet/flv-kulak
or : http://www.flv.be for the abstracts
Module 8 (A + B) - 05.05.1999 : Afternoon Seminar
:
Evolving Hardware And Cam-brain Machine By Dr.
Michael G. Korkin, Genobyte
Inc., Boulder Co, USA
The Age Of Spiritual Machines By Dr. Ray
Kurzweil, Kurzweil Educational
Systems, Waltham, USA
Module 1 - 25.11.1998 : Opening Seminar :
Welcome by Mr. Jo Lernout, Flv and Rector Marcel
Joniau, Kulak
Artificial Intelligence : A New Step Forward ? By
Prof. Dr. Luc Steels, VUB
Introduction to the Seminars - Lecturers' Panel
Module 2 - 09.12.1998 :
Progress In Traditional Artificial Intelligence :
Machine Learning By Prof.
Dr. Luc De Raedt, Kuleuven
PROJECT COMMITTEE
* prof. dr. Marcel JONIAU, Rector K.U.Leuven
Campus Kortrijk
* dr. ir. Dirk FRIMOUT, Astronaut STS45,
Chairman FLV Education, Ieper
Location :
Conferentiezaal - Stadhuis Ieper
Grote Markt - B-8900 Ieper
Phone : +32(0)57-22.85.62
Module 3 - 06.01.1999 :
Software Agents : The New Future of AI ? By Dr.
Walter van de Velde,
* prof. dr. Lea VERMEIRE, K.U.Leuven Campus
Kortrijk
* prof. dr. Luc DE RAEDT, K.U.Leuven and Fund for
Scientific Research Flanders
* mr. Patrick MOESICK, Manager Delegate, FLV
Education, Ieper
* mrs. Virginie COUCKE, Staff Office of the Rector,
K.U.Leuven Campus Kortrijk
* mr. Jos VERNIEST, PR manager, FLV, Ieper
Seminar Syllabus :
An outline will be handed out at the beginning of
each seminar. A final seminar syllabus will be
available on request.
Timing and Dates :
8 half days : 6 afternoons and a full day, each on
Wednesday
- afternoon : from 2.30 to 5 pm, with coffee break
- closing day : from 10 am to 6 pm, with lunch
November 25 and december 9, 1998, January 6,
February 3, March 3, April 14 and May 5, 1999
Fee :
* Professional : full series (8 modules) = 30.000
BEF; 3 modules (min.
registration) = 15.000 BEF; per additional module =
4.500 BEF
* Educational staff : full series (8 modules) = 10.000
BEF; 3 modules (min.
registration) = 5.000 BEF; per additional module =
1.500 BEF
* Students : full series (8 modules) = 6.600 BEF; 3
modules (min.
PRACTICAL INFORMATION
NVKI-Nieuwsbrief
204
December 1998
n
n
a
s
e
/
e
b
.
c
a
.
l
c
u
.
e
c
i
d
.
w
w
w
/
/
:
p
t
t
h
:
n
o
i
t
a
m
r
o
f
n
I
registration) = 3.300 BEF; per additional module =
1.100 BEF
26-28 april 1999
COORDINATION ’99, Third International Conference on
Coordination Models and Language, Amsterdam, The
Netherlands.
Information: http://www.cs.unibo.it/~coord99
Fee includes participation, outlines and coffee. Lunch
at the closing day is optional at 700 BEF.
Bank account : 552-2959901-91, FLV Education,
B-8900 Ieper. Please add : A.I. Seminars.
25-27 mei 1999
International Conference on Computational Intelligence,
Dortmund, Germany.
Information: http://ls1-www.cs.uni-dortmund.de/fd6
REGISTRATION
31 mei-3 juni 1999
IEA/AIE-99, The Twelfth International Conference on Industrial
& Engineering Applications of Artificial Intelligence & Expert
Systems, Cairo, Egypt.
Information: Dr. Moonis Ali, E-mail: [email protected] and Dr.
Ibrahim Imam, E-mail: [email protected]
You can register at these websites :
http://www.kulak.ac.be/facult/wet/flv-kulak
http://www.flv.be
1-4 juni 1999
IIA’99, Intelligent Industrial Automation and SOCO’99, Soft
Computing, Palazzo Ducale, Genova, Italy.
Information: http://www.ixsc.ab.ca/iia99.htm
CONFERENTIES,
Hieronder volgt een lijst van data van conferenties
SYMPOSIA,
en een contactadres.
Graag wijzen we onze lezers
WORKSHOPS
tevens op de aanvullende
Calendar 1998, zoals die
gepubliceerd wordt in de AI Communications.
Voorts hebben we referenties aan SIGART
Newsletter ontleend.
2-4 juni 1999
VISUAL99, Third International Conference on
Information Systems, Amsterdam, The Netherlands.
Information: http://www.wins.uva.nl/events/VISual99
Visual
15-17 juni 1999
ASCI 1999 Conference, Heijen, The Netherlands.
Information: http://www.asci.tudelft.nl
25-29 januari 1999
SNN, Stichting Neurale Netwerken
Advanced Issues in Neurocomputing Course
Information: http://www.wins.uva.nl/ ~krose/asci_nn.html
14-18 juni 1999
ICAIL-99, Seventh International Conference on Artificial
Intelligence and Law, University of Oslo, Norway.
Information: Program Chair: Mr. Thomas Gordon, E-mail:
[email protected].
17-19 februari 1999
CIMCA’99, Computational Intelligence for Modelling, Control
and Auromation, Vienna, Austria.
Information:
http://www-gscit.fcit.monash.edu.au/conferences/cimca99
22-25 juni 1999
CIMA’99, International ICSC Congress on Computational
Intelligence: Methods and Applications, Rochester Institute of
Technology, NY, USA.
Information: http://www.icsc.ab.ca/cima99.htm
25-27 maart 1999
DGNMR’99, Fourth Dutch-German Workshop on
Nonmonotonic Reasoning Techniques and Their Applications
Institute of Logic, Language and Information, University of
Amsterdam
Information: http://pgs.twi.delft.nl/~witt/dgnmr99.htm
18-22 juli 1999
AAAI-99, Sixteenth National Conference on Artificial
Intelligence, Orlando, Florida.
Information: http://www.aaai.org/Conferences/National/1999
12-14 april 1999
HPCN Europe’99. The 7th International conference on High
Performance Computing and Networking Europe.
Information: http://www.wins.uva.nl/events/HPCN99
30 juli - 1 augustus 1999
UAI99, the Fifteenth Annual Conference on Uncertainty in
Artificial Intelligence, Sweden.
Information: http://uai99.iet.com
19-23 april 1999
PA, EXPO 99, The Commonwealth Conference and Events Centre,
London, UK
Information: http://www.commonwealth.org.uk/
9
9
9
1
l
i
r
p
a
3
2
1
2
16-20 augustus 1999
ESSLLI-99, Eleventh European School in Logic, Language and
Information, Utrecht, The Netherlands
Information: http://www.wins.uva.nl/research/folli/
s
k
r
o
w
t
e
N
l
a
r
u
e
N
l
a
i
c
i
f
i
t
r
A
n
o
m
u
i
s
o
p
m
y
S
n
a
e
p
o
r
u
E
h
t
7
,
9
9
'
N
N
A
S
E
.
m
u
i
g
l
e
B
,
s
e
g
u
r
B
13-17 september 1999
ECAL99, 5th European Conference on Artificial Life, Swiss
Federal Institute of Technology in Lausanne (EPFL), Switzerland
Information: http://www.epfl.ch/ecal99
Second World Conference on New Trends in Criminal
Investigation, Amsterdam.
Information: http://www.eurocongres.com/criminallaw
10-15 december 1999
Workshop on Artificial Intelligence and Judicial Proof at the
NVKI-Nieuwsbrief
205
December 1998
(EMAIL)ADRESSEN
BESTUURSLEDEN NVKI
REDACTIE NVKI-NIEUWSBRIEF
Dr. E.O. Postma (hoofdredacteur)
(Zie adressen bestuursleden)
Prof.dr. J. N. Kok
Wiskunde en Natuurwetenschappen, Dept. of Computer
Science
Rijksuniversiteit Leiden, Niels Bohrweg 1, 2333 CA Leiden
Tel: (071) 5277057, E-mail: [email protected]
Prof. dr. H.J. van den Herik
Universiteit Maastricht, Vakgroep Informatica,
Postbus 616, 6200 MD Maastricht
Tel.: (043) 3883485, E-mail: [email protected]
Dr. Y.H. Tan
EURIDIS, Erasmus Universiteit Rotterdam
Postbus 1738, 3000 DR Rotterdam
Tel.: (010) 4082255. E-mail: [email protected]
Dr. C. Witteveen
Technische Universiteit Delft, Vakgroep
Informatica, Julianalaan 132, 2628 BL Delft
Tel.: (015) 2782521, E-mail: [email protected]
Dr. E.O. Postma
Universiteit Maastricht, Department of Computer Science
Postbus 616, 6200 MD Maastricht
Tel.: (043) 3883493. E-mail: [email protected]
Dr. R. Verbrugge
Rijksuniversiteit Groningen, Cognitive Science
Engineering Grote Kruisstraat 2/1, 9712 TS Groningen
Tel.: (050) 3636334. E-mail: [email protected]
Dr. R.G.F. Winkels
Universiteit van Amsterdam, Vakgroep Rechtsinformatica
Postbus 1030, 1000 BA Amsterdam
Tel.: (020) 5253485, E-mail: [email protected]
and
Dr. S.-H. Nienhuys-Cheng
Erasmus Universiteit Rotterdam, Vakgroep Informatica
Postbus 1738, 3000 DR Rotterdam
Tel.: (010) 4081345, E-mail: [email protected]
Dr. W. Van der Hoek
Universiteit Utrecht, Department of Computer Science
P.O. Box 80089, 3508 TB Utrecht
Tel.: (030) 2533599. E-mail: [email protected]
Ir. E.D. de Jong
Vrije Universiteit Brussel, AI Lab
Pleinlaan 2, B-1050 Brussel, Belgium
Tel.: +32 (0)2 6293713, E-mail: [email protected]
Dr. L. de Raedt
Department of Computer Science, Katholieke Universiteit
Leuven, Celestijnenlaan 200A, B-3001 Heverlee, België
Tel.: +32 16 327643. E-mail: [email protected]
Dr. A. van den Bosch
Katholieke Universiteit Brabant, Vakgroep Taal- en
Literatuurwetenschap, Postbus 90153, 5000 LE Tilburg
Tel.: (013) 4360911, E-mail: [email protected]
Dr. G.J. Beijer
BOLESIAN BV, Steenovenweg 1, 5708 HN Helmond
Tel.: (0492) 502525. E-mail: [email protected]
Dr. W. Daelemans
Katholieke Universiteit Brabant, Vakgroep TaalLiteratuurwetenscha, Postbus 90153, 5000 LE Tilburg
Tel.: (013) 4663070 E-mail: [email protected]
Technische
Drs. B. de Boer
Vrije Universiteit Brussel, AI-Lab
Pleinlaan 2, B-1050 Brussel, België
Tel.: +32 (0)2 6293703, E-mail: [email protected]
en
HOE WORD IK LID?
ADVERTENTIES
Het lidmaatschap van de NVKI kost in 1998 fl. 75,- voor
gewone leden; fl. 50,- voor AIO's en fl. 40,- voor studenten. Als
onderdeel van het NVKI-lidmaatschap zult u in 1998 twee maal
het Europese Tijdschrift AI Communications ontvangen, dat
beschouwd kan worden als pendant van AI Magazine. Tevens
ontvangt u zesmaal per jaar een NVKI-Nieuwsbrief met informatie over conferenties, onderzoeksprojecten, onderzoeksplaatsen, subsidie mogelijkheden, etc., tenminste als er voldoende informatie wordt aangeleverd. Daarom worden alle
leden opgeroepen om nieuws en nieuwtjes die zij de moeite waard
vinden, op te sturen aan de redactie van de NVKI-Nieuwsbrief. U
kunt (uitsluitend) lid worden door overmaking van fl. 75,- (fl.
50,-, fl. 40,-) op RABO-Bankrekeningnummer 11.66.34.200 of
op Postgironummer 3102697, ten name van NVKI, FdAW,
Vakgroep Informatica, Postbus 616, 6200 MD Maastricht.. Pas
als uw betaling ontvangen is, wordt met verzending van de
NVKI-Nieuwsbrief begonnen. Indien u uw lidmaatschap wenst
te beëindigen dient u dit schriftelijk door te geven aan het
redactie-secretariaat vóór 1 december 1998.
Het is mogelijk uw advertentie in de NVKI-Nieuwsbrief te
laten opnemen. Voor informatie over prijzen e.d. kunt u
contact opnemen met het redactie-secretariaat.
ADRESWIJZIGINGEN
De verzending van de NVKI-Nieuwsbrief vindt plaats vanuit
Maastricht. Daarom heeft het NVKI-bestuur besloten dat bij
het NVKI-Nieuwsbrief secretariaat ook de NVKIledenadministratie bijgehouden wordt. U dient derhalve uw
adreswijziging door te geven aan:
Redactiesecretariaat NVKI-Nieuwsbrief
Universiteit Maastricht, Vakgroep Informatica
Postbus 616, 6200 MD Maastricht
Tel.: 043-388 3477, E-mail: [email protected]
KOPIJ
De redactie zet haar kolommen ook open voor productaankondigingen, boekbesprekingen, productbesprekingen,
overviews van AI-onderzoek in het bedrijfsleven, reviews van
nieuwe AI-ontwikkelingen en interviews. Hierbij kunnen natuurlijk controversiële meningen aan het daglicht treden. Om
een goed beeld van de ontwikkeling te krijgen is dit geen
bezwaar, sterker nog wij moedigen discussies zelfs aan. Liefst
ontvangen wij uw kopij per e-mail, of op diskette 3,50" (WP 7.0
of ascii-tekst plus een hard copy).
NVKI-Nieuwsbrief
206
December 1998
advertentiepagina bOLESIAN
NVKI-Nieuwsbrief
207
December 1998