The Analogical Speaker or grammar put in its place
René-Joseph Lavie
To cite this version:
René-Joseph Lavie. The Analogical Speaker or grammar put in its place. Linguistique. Université de Nanterre - Paris X, 2003. Français. <tel-00144458v2>
HAL Id: tel-00144458
https://tel.archives-ouvertes.fr/tel-00144458v2
Submitted on 5 May 2014
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers.
UNIVERSITE DE PARIS 10 NANTERRE, FRANCE
Département des Sciences du Langage
Laboratoire: MoDyCo, UMR 7114
École doctorale: Connaissance et Culture
René Joseph Lavie
The Analogical Speaker
or
grammar put in its place
Doctoral dissertation for the degree of Doctor in Language Sciences
defended publicly on November 18th, 2003
Translated by the author with the help of Josh Parker and Lance Miller
Original title: Le Locuteur Analogique ou la grammaire mise à sa place
Jury:
Sylvain Auroux, Research Director, CNRS; École Normale Supérieure, Lyon
Marcel Cori, Professor, Université de Paris 10 Nanterre
Jean-Gabriel Ganascia, Professor, Université Pierre et Marie Curie (Paris 6)
Bernard Laks, Professor, Université de Paris 10 Nanterre
Bernard Victorri, Research Director, CNRS, Laboratoire LATTICE
Director: Bernard Laks
2014 01
Contents
INTRODUCTION ...................................................................................................................................... 9
CHAPTER 1. "SEE TO IT THAT ORDER IS NOT MADE A THING" ........................................... 15
1.1. OBJECT: PRODUCTIVITY, INNOVATION, EVOLUTION AND VARIATION ................................................ 15
1.2. RENOUNCING CATEGORIES AND RULES ............................................................................................ 16
1.3. THE SLOT-FILLER SCHEMA ............................................................................................................... 17
1.4. ANALOGY, THE RENEWED SEDUCTIONS OF A VENERABLE NOTION ................................................... 19
1.5. EXPLAINING PRODUCTIVITY ASSUMES A MECHANISM ...................................................................... 20
1.6. PROXIMALITY OF THE MOTIVATION DYNAMICS ................................................................................ 21
1.7. CONTINGENT CAUSALITY ................................................................................................................. 22
1.8. HYPOTHESIS .................................................................................................................................... 23
CHAPTER 2. MOMENTS IN THE HISTORY OF ANALOGY, IN LINGUISTICS AND IN
PSYCHOLOGY........................................................................................................................................ 25
2.1. IN THE ANTIQUITY, A "QUARREL" ARBITRATED BY VARRO.............................................................. 25
2.2. ARNAULD AND LANCELOT, DISTURB THE LEAST POSSIBLE THE ANALOGY OF LANGUAGE................. 26
2.3. HUMBOLDT: ANALOGY PUTS SOUND AND CONCEPTS AT THE SAME PACE ......................................... 27
2.4. BRUGMANN AND SAUSSURE, ANALOGY REPAIRS PHONETIC CHANGE DAMAGE ................................ 28
2.5. A REPAIRING ANALOGY WITH MORPHOLOGICAL AND SYNTACTIC EFFECT......................................... 32
2.6. BLOOMFIELD, THE POWER OF ANALOGY EXTENDED TO SYNTAX ...................................................... 34
2.7. HOUSEHOLDER FORMULATES THE POTENTIAL OF ANALOGY ............................................................ 35
2.8. CHOMSKY, CATEGORIES AND GENERATIVE RULES AGAINST ANALOGY ............................................. 37
2.9. HOPPER AND TRAUGOTT, ANALOGY PARTICIPATES IN GRAMMATICALIZATION ................................. 40
2.10. ANALOGY FOR PSYCHOLOGISTS AND PSYCHOANALYSTS ................................................................ 41
2.11. HOFSTADTER, EMERGENT ANALOGY .............................................................................................. 42
2.12. ITKONEN, REHABILITATION OF ANALOGY ....................................................................................... 43
2.13. ANALOGY PROFILES ....................................................................................................................... 45
2.14. STATICS, A DYNAMICS OF CHANGE, NOT YET A DYNAMICS OF ACTS ............................................... 46
CHAPTER 3. MODEL OF LINGUISTIC KNOWLEDGE, MODEL OF THE DYNAMICS OF
ACTS ......................................................................................................................................................... 49
3.1. TOWARDS A CONCRETE MODEL ....................................................................................................... 50
3.2. A SPEAKER'S LINGUISTIC KNOWLEDGE AS A PLEXUS ........................................................................ 55
3.3. ANATOMY OF ANALOGY .................................................................................................................. 59
3.4. STATIC MODEL: A PLEXUS AS THE INSCRIPTION OF ANALOGIES......................................................... 67
3.5. PHILOSOPHY OF THE STATIC MODEL................................................................................................. 74
3.6. ABDUCTION, ABDUCTIVE MOVEMENTS ............................................................................................ 81
3.7. GENERAL FRAMEWORK OF THE DYNAMIC SIDE OF THE MODEL ........................................................ 91
3.8. CONCLUSION ................................................................................................................................... 95
CHAPTER 4. STRUCTURAL PRODUCTIVITY ................................................................................ 97
4.1. ANALYSIS WITH AGENTS B2, B3 ...................................................................................................... 97
4.2. ABOUT NON-TRANSFORMATION ..................................................................................................... 107
4.3. JOHN IS TOO STUBBORN TO TALK / TO TALK TO / TO TALK TO BILL ................................................. 112
4.4. AMALGAMATIONS, ARTICLE-PREPOSITION CONTRACTION IN FRENCH ............................................ 122
4.5. QUESTIONS NOT ADDRESSED IN THIS CHAPTER .............................................................................. 125
4.6. CONCLUSIONS ON STRUCTURAL PRODUCTIVITY ............................................................................. 126
CHAPTER 5. SYSTEMIC PRODUCTIVITY ................................................................................... 129
5.1. SYSTEMIC PRODUCTIVITY, DEFINITION AND EXPLANATION ............................................................ 129
5.2. ADVERBIAL DERIVATION IN FRENCH, A PROCESS USING ONE PARADIGM ONLY .............................. 136
5.3. FRENCH VERB, TWO PARADIGMS PLAYING INTEGRATIVELY ........................................................... 139
5.4. RECRUITMENT AND EDIFICATION ................................................................................................... 144
5.5. AUVERGNATS AND BAVARIANS, RESETTING IN A SAME PARADIGM ................................................ 145
5.6. FRENCH ARTICLES, REINFORCEMENT EFFECTS ............................................................................... 152
5.7. GRAMMATICAL AGREEMENT WITH AN2 ........................................................................................ 153
5.8. CONCLUSIONS ON SYSTEMIC PRODUCTIVITY .................................................................................. 157
CHAPTER 6. MORE QUESTIONS OF GRAMMAR AND DESCRIPTION ................................. 159
6.1. MORPHEME, WORD, SYNTAGM....................................................................................................... 159
6.2. SYNTAX-MORPHOLOGY SEPARATION ............................................................................................. 173
6.3. ZEROES .......................................................................................................................................... 176
6.4. ANOMALY AND REGULARITY ......................................................................................................... 181
6.5. SYNTACTIC HEAD........................................................................................................................... 183
6.6. SENTENCE...................................................................................................................................... 184
6.7. CONCLUSION: DYNAMICS ARE THE CAUSE, AND THE GRAMMAR AN EFFECT ................................... 185
CHAPTER 7. FOUNDATIONS AND CONTRASTS ......................................................................... 186
7.1. ANALOGY IN THIS MODEL AND IN OTHER PROPOSITIONS ................................................................ 186
7.2. INDIVIDUALITY OF TERMS .............................................................................................................. 193
7.3. POSITION, POSITIONALITY, COPOSITIONING .................................................................................... 202
7.4. INTEGRATIVITY .............................................................................................................................. 207
7.5. EXEMPLARS AND OCCURRENCES ................................................................................................... 211
7.6. PROXIMALITY, TOTALITY ............................................................................................................... 212
7.7. EXTENSION, INTENSION.................................................................................................................. 216
7.8. BINDING, VARIABLES, VARIABLE BINDING...................................................................................... 218
7.9. PROBABILISTIC MODEL OR DYNAMIC MODEL ................................................................................. 225
7.10. RELATION WITH CONNECTIONISM ................................................................................................ 241
CHAPTER 8. MARGINS, PROLONGATIONS, IMPROVEMENTS ............................................ 247
8.1. NON-CONCATENATIVE MORPHOLOGIES ......................................................................................... 248
8.2. ACQUISITION, LEARNING, REANALYSIS........................................................................................... 249
8.3. USING A CORPUS TO SET UP A PLEXUS ............................................................................................ 255
8.4. SELF-ANALYSIS .............................................................................................................................. 258
8.5. TREATMENT OF MEANING, PREREQUISITES AND DIRECTIONS ......................................................... 260
8.6. IS RADICAL NON-CATEGORICITY SUSTAINABLE? ............................................................................ 268
9. GENERAL CONCLUSIONS ............................................................................................................ 271
9.1. DYNAMICS ARE PRIMARY AND GRAMMAR IS SECOND .................................................................... 271
9.2. PLAUSIBILITY ................................................................................................................................. 272
9.3. MAKING A GRAMMAR? .................................................................................................................. 274
9.4. SUMMARY OF PROPOSITIONS.......................................................................................................... 275
10. APPENDIX: RULES AND CATEGORIES DO NOT QUALIFY AS A THEORY OF
OPERATIONS........................................................................................................................................ 279
10.1. FRAGILITY OF A LEXICAL CATEGORY: THE NOUN-VERB OPPOSITION............................................. 279
10.2. FUNCTIONAL APPROACH, THE GRAMMATICAL FUNCTION ............................................................. 281
10.3. A BRIEF REMINDER OF RULES REFUTATION .................................................................................. 288
10.4. CONCLUSION: A DESCRIPTIVE APPROXIMATION BUT NOT A THEORETICAL BASE ........................... 288
11. APPENDIX: THE SLOT-FILLER SCHEMA, A HISTORICAL PICTURE ............................. 291
11.1. TABLE OF SOME FIGURES OF THE SLOT-FILLER SCHEMA ............................................................... 291
11.2. TABLE OF THE SLOT-FILLER SCHEMA IN NEIGHBOURING FIELDS ................................................... 292
11.3. THE SLOT-FILLER SCHEMA IN CONSTRUCTION GRAMMARS........................................................... 293
12. APPENDIX: SPECIFICATION OF THE PLEXUS ..................................................................... 295
12.1. PLEXUS: INTRODUCTION .............................................................................................................. 295
12.2. TERM ........................................................................................................................................... 295
12.3. RECORD ....................................................................................................................................... 297
12.4. A-TYPE RECORD .......................................................................................................................... 297
12.5. C-TYPE RECORD........................................................................................................................... 297
12.6. ACCESS........................................................................................................................................ 298
12.7. PARADIGMATIC LINK, PARADIGM ................................................................................................. 300
12.8. FAMILIARITY ORIENTATION.......................................................................................................... 303
12.9. OVERALL PROPERTIES OF A PLEXUS ............................................................................................. 310
12.10. TOPOLOGY, CONNECTIVITY, INFLUENCED PROXIMALITY ............................................................ 315
12.11. SYNTACTIC AMBIGUITY: EXAMPLE ............................................................................................. 316
12.12. MULTIPLE ANALYSIS: EXAMPLES ............................................................................................... 317
13. APPENDIX: SPECIFICATION OF THE ABDUCTIVE MOVEMENTS................................. 319
13.1. ABDUCTIVE MOVEMENT BY TRANSITIVITY................................................................................... 319
13.2. ABDUCTIVE MOVEMENT BY CONSTRUCTABILITY TRANSFER ........................................................ 319
13.3. ABDUCTIVE MOVEMENT BY EXPANSIVE HOMOLOGY.................................................................... 320
13.4. ABDUCTIVE MOVEMENT BY TRANSPOSITION ................................................................................ 321
13.5. SOLIDARITY BETWEEN THE PLEXUS AND THE DYNAMICS ............................................................. 328
14. APPENDIX: SPECIFICATION OF THE DYNAMICS ............................................................... 331
14.1. POSITION AND FUNCTION OF ABS IN THE MODEL ......................................................................... 331
14.2. REQUIREMENTS FOR THE ARCHITECTURE OF THE DYNAMICS ....................................................... 332
14.3. ABS IS INDEBTED TO COPYCAT ................................................................................................... 332
14.4. SOLVING WITH AGENTS ................................................................................................................ 333
14.5. AGENTS ....................................................................................................................................... 334
14.6. CHANNELS: SYNTAGMATIC POSITIONS ......................................................................................... 335
14.7. CONVENTIONAL FORWARD-REARWARD ORIENTATION ................................................................. 336
14.8. DEVELOPMENT OF THE HEURISTIC STRUCTURE BY RECRUITMENT ............................................... 336
14.9. AGENT REDUNDANCY CONTROL AND RESOURCE REUSE ......................................................... 340
14.10. DEVELOPMENT OF THE HEURISTIC STRUCTURE BY EDIFICATION ................................................ 341
14.11. PHASE MANAGEMENT ................................................................................................................ 345
14.12. STRENGTH MANAGEMENT .......................................................................................................... 346
14.13. LENGTH OF COMPUTATION PATHS .............................................................................................. 349
14.14. ACTIVITY CONTROL ................................................................................................................... 349
15. APPENDIX: SIMPLE SIMILARITY SUGGESTION (AGENT CATZ) .................................... 353
15.1. DISTRIBUTIONAL SIMILARITY ....................................................................................................... 353
15.2. CONSTITUTIONAL SIMILARITY ...................................................................................................... 354
15.3. SIMILARITY ON REQUEST ............................................................................................................. 354
15.4. AGENT CATZ .............................................................................................................................. 355
15.5. TECHNICAL ARCHITECTURE OF AGENT CATZ.............................................................................. 355
15.6. EXAMPLES OF DISTRIBUTIONAL SIMILARITY ................................................................................. 357
15.7. DECONSTRUCTING CATEGORIALITY AND PROTOTYPICITY ............................................................ 358
15.8. ADEQUATION (OR NOT) OF CATZ FOR SIMILARITY SUGGESTION .................................................. 359
16. APPENDIX: ANALYSIS (AGENTS B2 AND B3) ........................................................................ 361
16.1. PROCESS B2-B3, SPECIFICATION AND OVERALL DESIGN .............................................................. 361
16.2. HEURISTIC STRUCTURE FOR AGENTS B2 AND B3 ......................................................................... 362
16.3. PARSING OF THE ARGUMENT FORM .............................................................................................. 365
16.4. INSTALLATION PROCESS ............................................................................................................... 365
16.5. AGENT B2, EDIFICATION PROCEDURE .......................................................................................... 366
16.6. AGENT B2, EDIFICATION PROCEDURE IN PSEUDO-CODE ............................................................... 367
16.7. AGENT B3, EDIFICATION PROCEDURE .......................................................................................... 368
16.8. PERFORMANCE WITH THE TYPE OF SIMILARITY ............................................................................ 369
16.9. PRODUCTIVITY OF AGENT B2 ....................................................................................................... 369
16.10. RESULT OF A B2-B3 ANALYSIS .................................................................................................. 370
17. APPENDIX: BINARY BRANCHING, TERNARY BRANCHING............................................ 371
17.1. THE QUESTION AND ITS HISTORY ................................................................................................. 371
17.2. EXEMPLARIST REASONS ............................................................................................................... 373
17.3. COST REASONS ............................................................................................................................ 374
17.4. CHOICE OF N-ARITY ..................................................................................................................... 375
18. APPENDIX: ANALOGICAL TASK (AGENT ANZ) ................................................................... 377
18.1. AGENT ANZ, SPECIFICATION AND OVERALL DESIGN .................................................................... 377
18.2. REARWARD PROCEDURE FOR AGENT ANZ, IN PSEUDO-CODE ...................................................... 378
18.3. FORWARD PROCEDURE FOR AGENT ANZ ..................................................................................... 379
18.4. DISCUSSION OF AGENT ANZ: UNDER-PRODUCTIVE PRIMING ........................................................ 379
19. APPENDIX: ANALOGICAL TASK WITH TWO CONSTITUENTS (AGENT S2A).............. 381
19.1. AGENT S2A, SPECIFICATION AND OVERALL DESIGN ..................................................................... 381
19.2. ARCHITECTURE OF AGENT S2A.................................................................................................... 381
19.3. LIMITS OF AGENT S2A ................................................................................................................. 382
20. APPENDIX: LIMITED SYNTAX WITH AGREEMENT (PSEUDO-AGENT AN2) ............... 383
20.1. DEFINITION OF PSEUDO-AGENT AN2 ............................................................................................ 383
20.2. MERITS AND LIMITS OF PSEUDO-AGENT AN2 ............................................................................... 383
21. APPENDIX: SUMMARY OF AGENTS ........................................................................................ 385
REFERENCES ....................................................................................................................................... 387
GLOSSARY ............................................................................................................................................ 397
FRENCH-ENGLISH LEXICON .......................................................................................................... 403
INDEX ..................................................................................................................................................... 405
One must say, broadly: this happens by figure and by motion, for that is true. But to say
which ones, and to compose the machine, is ridiculous, for it is useless, uncertain, and laborious.
Pascal (Br. 70 = Manuscript 152), quoted by Milner 1989.
No. Laborious indeed, but useful.
Is a "class of things that resemble each other" a class of things a …n
such that a chain of similarity relationships runs from a to n?
Nelson Goodman,
The Structure of Appearance, Bobbs-Merrill, 1951, p. 147.
Yes, but they must be taken in pairs.
What one cannot speak about, can one write about it?
Robert A. Chametzky,
Phrase Structure, Blackwell 2000, p. 160.
What one can no longer write about, one can still program.
Introduction
In linguistics, the question of productivity remains a central one: how can a speaker who has been exposed to a few tens of thousands of utterances become capable of understanding and uttering a virtual infinity of utterances?
Productivity may, with Auroux1, be understood in two different ways:
Chomsky himself very early distinguished two kinds of creativity, which he names 'rule-changing creativity' and 'rule-governed creativity'2. He says he is not interested in the former and protests against earlier authors (Humboldt, Paul) who did not make the distinction […]. Calling both of these 'creativity' is a great source of confusion; it would be better to speak respectively of creativity and productivity.
I shall understand productivity exactly in the sense of Auroux3 above. Productivity is thus the possibility of producing or understanding an infinity of utterances in a given linguistic frame, that is, given a fixed "competence". But I will show abundantly below that productivity is not accounted for by rules. However, I will also show how the successful production of an utterance, or its reception, is likely to bring about a slight, local modification of the linguistic knowledge, resulting in a manifestation of 'creativity' in Auroux's sense, that is, of rule-changing creativity. Thus the two notions will tend to be reconciled. Before suspecting confusion, the reader is invited to consider that such reconciliation is necessary: "competence" evolves progressively as a result of linguistic exercise, as with children at learning time; and later on we never stop learning, even if not at the same pace.
Theories in cognitive linguistics, despite many interesting features, do not provide a precise, operable account which would explain productivity; neither do functionalist linguistic theories.
Connectionist models are experimental devices whose responses reproduce well the productive linguistic behaviours of speakers, and they thus bear on our understanding of the linguistic phenomenon.
1. Auroux 1998, p. 95.
2. Chomsky 1964, p. 59.
3. Despite the potential ambiguity with a different meaning of "productivity", as in the productivity of a morphological process.
Their current limits, in scope and in perimeter, may well be broadened in the future, but these models present two shortcomings. First, they explain poorly; or, to be more precise, the reductionist displacement of the explanation installs a significant distance between the evidence and the explanatory plane. Second, they struggle to provide three basic mechanisms4: i) to account efficiently for generalizations (for Marcus, "to make bindings between rules and variables", though this wording is not endorsed here, as will be shown), ii) to represent the recursive structures which linguistic exercise requires, and iii) to individuate instances. These three points will be developed in Chap. 8.
In generativism, productivity is central and the question was raised very early by Chomsky5; however, this current of thought postponed the goal of accounting for the linguistic phenomena (emission, reception, learning, variation, and change). Instead, it postulated a linguistic exercise, which would be that of a speaker, and whose elucidation would be a preliminary condition to that of the phenomena. This consequence-laden displacement, from the linguistic phenomena to a language defined as an abstraction, supposes a definition of what a language is, which turned out to be more difficult than anticipated. This object is constructed and artificial, and the question, thus placed on a language, adding complexities which the object itself does not contain, has hardly contributed to understanding what happens in speakers. The corresponding constructions are complex6, numerous, changing and, up to Principles and Parameters, present the following characteristics: a) they draw on categories, even though abundant, converging evidence shows (cf. Chap. 1) that categories cannot be taken as operative mechanisms; b) they do not explain the linguistic acts; c) they account poorly for variation between speakers and for language change; d) they offer a vision of acquisition which is difficult to reconcile with the findings; e) they adopt a vision of meaning which is Platonic7.
The Minimalist Programme8 reduces the importance of categories, but it does not yet seem to have made much progress on points b, c, d, and e above.
Optimality Theories capture many linguistic phenomena convincingly, but the theoretical cost is high: the set of constraints they postulate appears not to be closed, each new publication bringing up a new one. Moreover, constraints often depend on categories. Finally, acquisition, seen as the setting of a ranking among constraints, is no more plausible than parameter setting in Principles and Parameters. Recent advances in Optimality Theory, which combine it with probabilities, will be discussed in section 7.9. Probabilistic model or dynamic model (p. 225).
None of the frameworks cited above draws on analogy, a notion which, after two millennia of recognition of its important role in linguistics, has lately received renewed attention.
4. Cf. Marcus 2001.
5. Chomsky 1975 (The Logical Structure …), published in 1975, based on a manuscript written some twenty years earlier.
6. "To achieve the goal of describing language as a property of the human mind, [Chomsky's theory] establishes an apparatus of considerable complexity." Cook 1988, p. 1.
7. All these topics are detailed below.
8. Chomsky 1995/1997a, The Minimalist Programme.
This attention came from psychologists and cognitive scientists, and then from some linguists; Itkonen, notably, rehabilitated analogy (cf. Chap. 2).
Thus a field is available today for an attempt which is non-categorial, connectionist (but localist)9, and which aims at plausibility. This is what this work proposes. It builds on analogy, thematizes its ability to operate 'copositionings', sets it to work in language dynamics, and presents a model which is strictly exemplarist (later, it will have to become occurrentialist), without categories, without rules, and without abstractions. This model10 is dynamic and yields effects of productivity and of regularization by mobilizing numerous elements of linguistic knowledge, which combine their effects in dynamics that are simple in principle but complex in their deployment. As a counterpart to this complexity, the model is supported by a computer implementation which helps to validate it.
Chapter 1 establishes the project. Starting from the shortcomings of categorial approaches, which are briefly recalled, and from a critique of the "slot-filler schema", which is one of their figures, I suggest giving up the grammatical viewpoint (categories, rules, the slot-filler schema), which is abstract and static, and I propose an occurrentialist and dynamic model. To that end, analogy appears as the major lever, provided we cease to view it as Platonic (that is, static) and reinstate it in its dynamic dimension. It will be coupled with a second important notion which is its corollary: proximality. Against deduction, as practised in formalized systems and in cognitivism, which does not suit cognitive systems, Peirce's abduction is called upon as the foundation of the analogical and proximal base dynamics.
Chapter 2 presents a selective history of analogy. It focuses mainly on three periods: Greek-Latin Antiquity, the 19th century, and the 20th century. It shows that analogy was initially perceived as static; then, with the Neogrammarians and Saussure, it came to be seen as a diachronic dynamics playing an important role in language evolution; but it has not yet been sufficiently considered as a synchronic dynamics bearing on linguistic acts.
Chapter 3 defines the model, basing it on dynamic analogy and on proximality (of inscriptions, of accesses, of abductive dynamics).
Chapter 4 puts the model to work on structural productivity: morphological and syntactic, to simplify. It proposes a redefinition of 'syntactic analysis': syntactic analysis amounts to analogical structure mappings, and the analysis of an utterance encompasses a number of staggered structure mappings.
Chapter 5 defines a systemic productivity which complements structural productivity. As yet, systemic productivity has been identified only to some extent, little discussed, and poorly modelled.
9. 'Localist' is to be understood in the sense this word has in connectionism: a network is localist when the representation in it of the objects of the problem is ensured by defined cells (otherwise it is 'distributed'). Cf. the glossary.
10. Occasionally called the "Analogical Speaker" in this work. The model is so named to denote in two words its two main characteristics: a) the primacy of the speaker over the language, and b) the primacy of analogy in understanding the inscriptions and the dynamics.
We need to understand how pluridimensional paradigmatic systems build up and operate, how they can be learnt, and how they evolve.
Chapter 6 reformulates some classic themes of grammar and of description. For example, it shows how the model defended in this thesis can do without the notion of word, and how it deals with phenomena for which other theories postulate zero elements.
Chapter 7 discusses the foundations of the model and contrasts it with other theoretical
propositions.
Chapter 8 discusses the model's margins and sketches a few lines along which to extend it. In particular, it contains a model of linguistic learning which is consistent with the production/reception dynamics and whose predictions are in accord with acquisitional evidence.
I conclude (section 9) that it is an error to think that a grammar – that is, a Platonic, essentialist, and static elucidation of a language – is a prerequisite likely to provide a usable base for later understanding linguistic dynamics. Rather, it is the preliminary elucidation of the dynamics themselves which makes it possible i) to understand them in their own right and, as a side effect, ii) to 'explain' the grammars' stipulations and their limits.
Several appendixes provide details – some of them important – which have been moved out of the main body of the text for the sake of concentrating the argument. Further appendixes provide a technical description of the model and of its implementation. In quasi-formal natural language, or in pseudo-code, they deliver the functional and implementation-level information necessary to reproduce the experiments that support my reasoning.
Acknowledgments and thanks:
Françoise Abel
Antonio Balvet
Josiane Bartet
Simon Bouquet
Erszébet Chmelik
Antoine Challeil
Morten Christiansen
Marcel Cori
Annie Delaveau
Mariane Desmets
Agnès Disson
Françoise Douay
Gilles Dowek
Jacques Dubucs
Roger Dupin de Saint Cyr
Robert J. Freeman
Jürg Gasché
John A. Goldsmith
Philippe Gréa
François Guillaume
Claude Hagège
U. Aldridge Hansberry
Daniel Kayser
Marc Klein
Françoise Kerleroux
Bernard Laks
Jules Lavie
Alain Lemaréchal
Yves Lepage
Géraldine Mallet
Anne-Marie Mazzega
Lance Miller
François Muller
Lea Nash
Alexis Nasr
Joshua Parker
Frédéric Pascal
François Rastier
Claude Roux
Monique Sénémaud
Irène Tamba
Atanas Tchobanov
Ali Tifrit
Wendy Tramier
Bernard Victorri
Agnès Villadary
Yves-Marie Visetti
The drawing on the title page is by Ferdinand de Saussure.
Chapter 1.
"See to it that order is not made a thing"
1.1. Object: productivity, innovation, evolution and variation
The speaking subject is productive. Productivity is the main problem in linguistics, and providing an account of it is a central task for linguists.
The referential object11, the part of the world which we address, is no doubt language; but what is the conceptual object? In other words, how is the referential object profiled in the approach we take to it? Generativism places its priority on the study of syntax and grammar. It does so for reasons of method (some aspects cannot be addressed today) and because of one of its theoretical positions: the autonomy of syntax. Its results contribute little, or only artificially, to the understanding of other aspects of language. There are more reasons for this than just the choice of a particular conceptual object, among them the endorsement of categories and rules, which are criticized below.
So the choice of a conceptual object is very important. Abney proposes one: syntax is autonomous, he says (a point made by Tesnière before Chomsky, he recalls), but autonomy is not isolation:
Syntax in the sense of an algebraic grammar stands or falls on how well it fits into the larger picture. The larger picture, and the ultimate goal in linguistics, is to describe language in the sense of that which is produced in language production, comprehended in language comprehension, acquired in language acquisition, and, in aggregate, that which varies in language variation and changes in language change12.
11. "I shall distinguish two types of objects: the referential object and the conceptual object. I call referential object that part of the world which a science assigns itself to know, its initial referent. Scientific theories, different as they may be, are classified together in a same field of knowledge on the basis of a common referent. The characterization of such an object bounds and identifies the discipline. In our case, all of linguistics is defined because it adopts as a referential object a universe called language. In contrast, I call conceptual object the particular way in which a particular theory conceives and configures the referential object. Starting from the same referent, every line of thought, every school of thought, designs a conceptual object which proposes itself as a centre of knowledge. On this point, for example, structuralism and generativism diverge in the measure in which they configure different conceptual objects, implying different empirical approaches, so that the compatibility of their propositions is very difficult to establish." (Caravedo 1991, p. 8)
To avoid a construction that would be impossible to extend to variation and to the dynamics of production, reception, acquisition and language change, these dimensions must be incorporated into the conceptual object from the start.
Because of that, it must be shown dynamically how a new utterance is possible. It must be shown what linguistic knowledge is necessary, and how it is solicited to make new utterances possible. In the first place, this is a matter of linguistic acts: reception and emission.
Then it must be shown how the linguistic knowledge which served this act may evolve, so that a successful linguistic event (reception or emission) makes possible things which were not possible before, or makes easy things which used to be difficult (counter to a 'competence' determined once and for all).
What is needed is a modelling approximation which is dynamic and operable. Along with the acts and acquisition, it has to encompass speaker variation. It also has to account for the qualities of languages: contingency, the ability to innovate, the capacity for "stylistic" figures (e.g. synecdoche, metonymy).
Finally, building on results of psycholinguistics, compatibility with a model of knowledge that is not necessarily linguistic, and with psychology, is desirable. The devices we adopt have to be concrete and flexible. Learning from the defects of categorialism and regularism, it is appropriate to stay away from abstractions of all kinds.
1.2. Renouncing categories and rules
Grammarians, when seeking to put some order into the variety of language facts, and then linguists, when striving to account for them in an explanatory manner, have mostly used categories13 and rules14. Rules and categories are mutually necessary: stating a rule requires categories, and categories have most often served to express regularities15. Categories and rules have made several useful descriptive approximations possible, without yet exhausting the question satisfactorily, for two main reasons.
First, whatever the approach with categories and rules, it has had to be accepted that there always remains an empirical residue that resists explanation16.
Second, even if a descriptive system were free of empirical residue, it would still have to qualify as a plausible 'explanation' of the dynamics. In particular, it would have to show how the brain might implement a rule-based operation.
12. Abney 1996, p. 12.
13. Lexical, syntactical and functional categories.
14. Prescriptive rules ("de bon usage"), diachronic evolution rules and laws, derivational rules, etc.
15. With the notable exception of Optimality Theory, in which categories are used to express constraints rather than rules. Cf. Smolensky 1999 for a brief introduction contrasted with Generative Grammars.
16. Numerous examples will be provided below.
The debate is not new: see Chomsky (1974/1975, p. 203) having to respond to arguments (Schwartz, Goodman) denying the brain the possibility of a rule-based operation. Since then, the debate has been very productive, notably with the renewal of connectionism17. It is not closed, as evidenced by a recent book by Marcus (2001) – analysed on p. 243 – which poses it anew within the very field of connectionism, which had once claimed to have settled it.
Although they are well known, the principal terms of the critique of categories and rules
will be recalled in appendix 10 p. 279.
Partial categoriality does obtain in linguistic behaviours, but this does not imply that linguistic theory should be founded on categories. Regularization effects also obtain, but their explanation does not require that rules be made the operative support. Categories make it possible – with difficulty – to build descriptive approximations, but they cannot constitute the base of a theory of linguistic dynamics.
Hence the programme consists of turning the perspective upside down: instead of postulating categories and rules as causes and then building the theory on them, linguistic dynamics have to be accounted for in another manner; only then must categorization – inasmuch as there is any – and regularization – inasmuch as it obtains – be reconstructed as effects and explained as consequences.
How are we initially led into categories and rules? The initial idea is to become capable of making statements about what is possible – and what is not – in the tasks which speakers have to carry out, and in which they respond to novel situations by building on older ones, already known and tried. The generic schema which then comes to mind is to be able to say things like "Instead of this, one can put that", the result of such a substitution being judged possible. Very soon it appears that not everything may be placed everywhere; it must be stated what is possible where, and this statement has to be made in terms as general as possible, lest one make only occurrential assessments and stay mute on possibility, prediction and innovation. Linguists indeed feel this need, but so did grammarians well before them.
One then undertakes to state, in the most general possible terms, what filler may occupy what slot. This is the schematization that leads immediately to categories and rules; the acceptance of this schema is the mother of the descriptive shortcomings and the theoretical difficulties which then arrive so abundantly.
1.3. The slot-filler schema
Rules and categories may be seen as conceptually dependent on a single schema which is their antecedent in the order of necessity: the 'slot-filler schema'18, whose critique has so far received little attention. If carried out appropriately, that critique may provide a path towards overcoming the schema's defects: it is one thing to renounce categories and rules, another to devise an apparatus that can substitute for them in describing and explaining.
17. McClelland 1986.
18. It is the slot-filler schema of some connectionists; one may also recognize here the construction of construction grammars.
The hyperonyms 'slot', 'filler' and 'slot-filler schema' are proposed because they can collectively refer to a variety of descriptions and theories. These are not all equivalent, but each in its way has attempted to cover a general need: to account for constitutional sameness and for functional sameness in general terms, which is one way of approaching the question of linguistic productivity.
The question of the slot-filler schema is important because it connects with the principle of structure preservation19 (which will be touched on again in section 7.3.3. The similarity of copositionings is mediately determinable, p. 204). It is embryonic up to the 17th century and is posed mainly in the 20th century (cf. p. 291). For numerous authors, it then becomes the centre of description and of theory; it is present in psycholinguistics because it is the kernel of utterance reception and production models. The descriptive adequacy and the value of the linguistic theories which have been produced depend critically on the responses it receives.
Leaving aside the specificities of particular theories, the schema is as follows: there are slots which must be occupied by fillers, and there are fillers which may occupy slots. In order to specify which filler may occupy which slots, both have properties, but in two different ways: properties are assigned to fillers, whereas they are prescribed by slots as conditions for candidate fillers to qualify for occupying them.
Properties are category-based, and the conditions of occupation have the nature of rules. So the slot-filler schema is a corollary of rules and of categories; more exactly, it is their antecedent, a common scheme from which they derive.
Milner (1989), critical as he is of categories and rules, and regretting that Chomsky did not differentiate the set of "labels" that apply to slots from the set of those which apply to the "language units" that are candidates for occupation (the fillers), maintains a reduced version of the slot-filler schema. It shows its limits in the coincidence / distortion question (below).
Unification grammars20 present an evolution of the slot-filler schema: by deconstructing it in part, they yield an important gain in descriptive efficiency (see appendix). However, HPSGs (Head-Driven Phrase Structure Grammars) still specify slots and fillers by their properties. As they multiply these properties, and as they contain underspecification and overriding mechanisms, HPSGs do better than many theories, but they remain residually categorial.
The rejection of rules and of categories made above leads to the rejection of the slot-filler schema, since the schema requires stating the "occupation relation" in propositions which draw on rules and categories. Reversing the proposition, if we manage to do without the slot-filler schema, the reason for making categories and rules falls away.
In linguistics, a theory of flexible and innovative operation cannot make do with gears designed to sanction repetition and reproduction. The slot-filler schema comprises three beats: 1) define the requirements of the slots, 2) define the properties of the potential fillers, and 3) check, on the basis of its properties, whether a potential filler qualifies for occupying a slot. If this schema is refused, an alternative may come from an approach which syncopates the three beats; it must dispense with the definition of requirements and with that of properties. It seems that analogy has this potential, as we shall see.
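To make the three beats concrete, here is a minimal sketch in Python. It is purely illustrative: the property labels and the qualification test are invented for the example, and it depicts the schema under criticism, not the model defended in this work.
```python
# Illustrative only: the three beats of the slot-filler schema.
# Property labels ("noun", "nominative", ...) are invented for the example.

def qualifies(filler_properties: set, slot_requirements: set) -> bool:
    """Beat 3: a filler may occupy a slot only if it carries every
    property the slot prescribes (a pure category check)."""
    return slot_requirements <= filler_properties

# Beat 1: requirements prescribed by a slot.
subject_slot = {"noun", "nominative"}

# Beat 2: properties assigned to candidate fillers.
fillers = {
    "cat":   {"noun", "nominative", "singular"},
    "talks": {"verb", "3sg"},
}

for word, props in fillers.items():
    print(word, qualifies(props, subject_slot))
# cat True
# talks False
```
Everything that the analogical alternative must do without is concentrated in the category test of beat 3.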
19. Principle of structure preservation: i) a language has a fixed, limited number of slots, ii) a slot may be occupied, iii) linguistic material may not occur outside a slot, iv) in a language, the set of slots evolves extremely little, and extremely slowly. Milner 1989, p. 649.
20. LFG (Lexical Functional Grammar, Bresnan 2001), HPSG (cf. infra), GPSG (Generalized Phrase Structure Grammar, Gazdar 1985).
An important corollary concerns variable binding. I shall defend – in section 7.8. Binding, variables, variable binding (p. 218) – the idea that the question of variable binding, as it is currently posed, is in part (but not entirely) artifactual, because it follows from positing the slot-filler schema. If we can dispense with that schema, then a part of the binding problem ceases ipso facto to be posed.
1.4. Analogy, the renewed seductions of a venerable notion
Along these lines, analogy presents itself as a possibility to grasp sameness minimally,
that is, without over specifying, without determining more than necessary. As it does not
require to make the analogical ratio explicit (analogy "elides the predicate", cf. chap. 3),
it bears the promise to dispense with metalanguage:
mice is to a mouse as cats is to a cat
might well be dynamically useable without being more precise than necessary about
particular mammals, without requiring a gloss on grammatical number and without a
statement about whatever happened to "the indefinite article in plural".
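As a purely illustrative sketch (not the mechanism specified in the appendices, which works over a plexus through abductive movements), the following Python fragment shows how the surface ratio of a known pair can be carried over dynamically, with no gloss on number or articles; the word pairs are invented for the example.
```python
# Illustrative only: reuse the surface ratio of a known pair a : b
# to complete c : x, when the ratio is a shared suffix change.

def solve_suffix_analogy(a: str, b: str, c: str):
    """Solve a : b :: c : x for ratios expressible as a suffix change."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1                       # longest common prefix of a and b
    suffix_a, suffix_b = a[i:], b[i:]
    if not c.endswith(suffix_a):     # the recorded ratio does not apply
        return None
    return c[:len(c) - len(suffix_a)] + suffix_b

print(solve_suffix_analogy("cats", "cat", "dogs"))    # dog
print(solve_suffix_analogy("mice", "mouse", "lice"))  # louse
```
The sketch handles only ratios that happen to be a suffix change; the point made here is precisely that the speaker need not spell the ratio out as such a rule, which is what the plexus and the dynamics of Chap. 3 are meant to capture.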
Analogy also gives hope of letting useful drifts happen. This would be a second factor contributing to the account of flexibility in linguistic operation.
Analogy further appears to enable the idea of contingency; it would make it possible to avoid what would be the ultimate essence of things, to eschew foundationalism. Adaptation and innovation (of behaviour, of intellection, of utterance, etc.) would be possible without control over the details, without the ultimate intelligence of means and procedures.
Finally, analogy hints at a theoretical construction that could be compatible with Saussurean differentialism: if things have value through their ratios, let us take these ratios as directly constituting the linguistic knowledge and see what consequences and advantages we can draw from that.
If this approach succeeds, it restores continuity with 2400 years of history in linguistic thought: Aristotle, Dionysius, Varro, Port-Royal, Humboldt, Paul, Brugmann, Saussure, Bloomfield, etc., which would not be its smallest interest.
It also establishes tracks of continuity with cognitive science (Lakoff, Gentner, Holyoak) and with psychology, very specifically with the theory of second-order isomorphism (representations are by similarities and are not direct representations of things, cf. Edelman infra, p. 41). Continuity also with neuroscience: according to Choe (2002), the thalamus and the cortex in association are producers of simple analogies; they are so functionally, and their anatomy allows us to understand how.
1.5. Explaining productivity assumes a mechanism
Having recognized that the question of possibility in principle (competence) is not antecedent to that of the possibility of acts, one is led to treat these acts, the linguistic processes, as the priority, that is, to adopt a dynamic vision. Explaining linguistic productivity in this way (and learning, and variation, and language change) supposes a mechanism. The question is phenomenologically central.
This mechanism is something other than a generative procedure. In 1965, Chomsky made the following conjecture:
A reasonable model of linguistic acts will comprise, as one of its fundamental
components, the generative grammar which formulates the knowledge that the speaker
has of his language21.
However, this did not happen, and Chomsky himself later withdrew this position. Today, the most widespread view on this topic may be borrowed from Jackendoff:
The traditional formulation of phrase structure rules and of transformational rules is
conducive to viewing the rules as like a program for constructing sentences. The
connotations of the term "generate" in "generative grammar" reinforce such a view.
However […] students are always cautioned to resist this interpretation. In particular,
they are exhorted to view derivational movement as metaphorical: "We are after all
describing competence, not performance." The upshot is that the status of such rules
vis-à-vis performance models is left unspecified22.
When researching a model of linguistic acts, no one knows what to do with the
transformations of transformational generativism or with the MOVE of the Minimalist
Programme.
The required mechanism is not the a priori characterization, by whatever procedure, generative or otherwise, of the set of utterances which are possible in what would be a speaker's language23. On the contrary, we seek a dynamics of reception – as plausible as possible – which, when facing a variety of utterances, succeeds or fails in building a sense, and does so with success rates that are varied and gradient depending on the utterance; and likewise for the uttering of utterances. It is not the case that a speaker is capable of that "possible" because there dwells in him a defined, static language which would pre-specify the possible and which would have (had) to be learnt by the speaker. In other words, we must give up the idea of characterizing linguistic knowledge in a static manner, without reference to the dynamics that would use it.
Even though a speaker's static linguistic knowledge is not sufficient on its own – without the dynamics – to define a speaker linguistically, an assumption concerning it is still required. The orientation consists of building this model of linguistic knowledge with analogy as its base. It cannot be a lexicon governed by rules. Nor is it a corpus, which does not contain the required structure and from which that structure cannot be extracted.
21. Chomsky 1965/1971, p. 20. Quotation retranslated into English from a French translation of the original.
22. Jackendoff 2002, p. 57.
23. Be it called 'speaker's language', 'competence' or 'I-language'.
It is complex, exemplarist and meshed; it will be named a "plexus" in Chap. 3, where it is defined.
This assumption must be complemented by one on the principle of the dynamics. Here again, the orientation is to call upon analogy. One can make a Platonic reading of analogy (analogical ratios exist in nature), but the repairing analogy of the Neogrammarians and of Saussure24 already appears as dynamic in diachrony. The intent is to extend this dynamic reading to synchrony, applying it to the accomplishment of linguistic acts and to acquisition.
We are thus led to the speaking subject as capable of analogy, and the question takes on a cognitive and mental dimension. Early in Antiquity, analogy in language was narrowly associated with the – morphological and syntactical – markers which sanction the location of linguistic units in analogical systems. This led directly to the analogy-anomaly debate between Athens and Alexandria on one side and the Stoics of Pergamon on the other, a debate which Varro arbitrated (cf. Chap. 2). The question was thereafter taken up by the tradition.
It may be the case, however, that a step has been missed or treated inadequately. I mean questions like transitivity (total? partial?) in a series of analogical ratios, or the possibility of combined effects of several analogical sets sharing some of their terms or some of their analogical ratios, etc. These questions are related to deduction and, in a sense, formal theories like predicate logic or other logics have covered them. They did, but in a way which is brutal, symbolic and categorical, and this manner does not suit linguistic phenomena.
When it was finally realized that symbolic theories do not suit linguistic phenomena, analytical work might have resumed by taking account of the massiveness of analogy in language, and of the evidence that it is also dynamic; but the course taken was a different one, and connectionism, for example, in its first period, sought to apply to language the associators which had yielded such good results in pattern recognition. Success came in the measure in which there is pattern recognition in language, that is, to a limited extent: it does not constitute the main part of language.
The work proposed here may be seen as the project, building on the results of the last decades, of starting anew from analogy, applying a more nuanced treatment to it, and viewing it as dynamic.
1.6. Proximality of the motivation dynamics
Analogy must however receive a complement. It corresponds to the simple idea that a thing triggers certain other things, and not a great number of them or all of them: some mental transitions are preferred. This is the idea, dating back to Hume and generally held to be refuted, of associationist psychology. The enterprise here is not the restoration of associationism in its original conception, but I shall show (p. 74) how proximality and analogy, allied together, may discipline the associations by means of that ratio which the analogical ratio precisely is. This discipline will be thematized as the observance of the "copositionings" which take place between terms.
24. Cf. Chap. 2.
the "copositionings" which take place between terms. Proximality also echoes a more
recent formula, that of properties which Livet attributes to connectionism: "a local
compositionality and a limited systematicity"25.
Thus, in a speaker's linguistic knowledge, starting from a given inscription, some
inscriptions can be reached with ease: they are proximal. Others are less so. Linguistic
knowledge thereby acquires a topology: kinds of distances are set in it. The
idiosyncratic detail of the inscriptions and of the proximality conditions among them is
held to result from the particular history of the speaker; that is, they bear the trace of his
learning history.
The processes which utilize these inscriptions to account for linguistic acts benefit from
proximality and depend on it. This makes it possible to conceive of paths and
computation chains which are shorter or longer depending on the case and, in this way,
to account for the fact that linguistic acts have different degrees of difficulty and impose
different cognitive loads. Another important benefit will be to reconstruct and
demonstrate degrees of acceptability.
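To fix ideas, here is a deliberately naive sketch of proximality – it is not the plexus of Chap. 3, and its names and data are purely hypothetical: inscriptions are pictured as nodes of a graph, proximality as an edge weight (smaller meaning more proximal), and the ease of reaching one inscription from another is approximated by the cost of the cheapest path between them, shorter paths standing for lighter cognitive loads.

    import heapq

    # Illustrative sketch only (hypothetical names and data): inscriptions are
    # nodes, proximality is an edge weight, and the "cognitive load" of reaching
    # one inscription from another is approximated by the cheapest path cost.
    def reach_cost(edges, start, goal):
        """Cheapest cumulative cost from start to goal (Dijkstra); None if unreachable."""
        frontier = [(0.0, start)]
        best = {start: 0.0}
        while frontier:
            cost, node = heapq.heappop(frontier)
            if node == goal:
                return cost
            if cost > best.get(node, float("inf")):
                continue
            for nxt, weight in edges.get(node, {}).items():
                new_cost = cost + weight
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost, nxt))
        return None

    # A toy, idiosyncratic "linguistic knowledge": the weights bear the trace of an
    # invented learning history.
    edges = {
        "chanter":  {"chantais": 1, "chante": 1, "marcher": 3},
        "chantais": {"chanter": 1, "marchais": 4},
        "marcher":  {"marchais": 1, "chanter": 3},
        "marchais": {"marcher": 1},
        "chante":   {"chanter": 1},
    }

    print(reach_cost(edges, "chanter", "marchais"))  # 4.0: proximal, light load
    print(reach_cost(edges, "chante", "marchais"))   # 5.0: longer path, heavier load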
1.7. Contingent causality
The cause of the successful completion of a linguistic act is primarily the precedents
upon which the process which accounts for it may rest. I understand 'precedent' as a
linguistic act i) which took place before with success, leaving some permanent trace,
and ii) which resembles the act currently being processed.
The systematic distributional analysis of a corpus gives this corpus the role of a body
(precisely) of precedents. A generativist grammarian who picks examples and
proposes them makes the assumption that they are typical of legal (or illegal) productions
in his target language, and the elaboration which he builds on their basis will license
productivity.
Relating a current linguistic act to the precedents that license it brings up several
questions: i) how to select the appropriate precedents, ii) on what basis to recognize
resemblance, and iii) how to design the process that carries all this out.
Linguistic theories most often adopt a vision of this question which I call 'totalistic' in
the following sense: the modes of selection of the precedents are supposed to be latent
in the totality that the corpus represents, or in the totality represented by the set of
examples which may occur to a generativist. Then, following different procedures, a
system is built: the best possible adjustment to all the cases occurring in the envisaged
totality, that is, the one which is descriptively most economical. This makes it abstract. As
one aims at a wide coverage of phenomena, this system becomes complex.
It must be seen, instead, that sameness-proximality retains an occurrential, exemplarist
character. Samenesses and proximalities remain concrete data which result from the subject's history
and from the contingent history of his learning. One must restart from similarities-proximalities
already given as exemplars or occurrences because the acquisitions are primary – literally,
and in two manners: they are primary in the course of the subject's history
25. Livet 1995, quoted by Vivicorsi 2002, p. 79.
(they occurred before the linguistic act now at stake), and they are primary by their
causal position in the accomplishment of the act (they causally condition the dynamics
of the act).
The conditions of productivity – of its dynamics – must be sought a minima only; one
must build on proximality because proximality results from the subject's experience.
Doing so increases confidence that the idiosyncrasy which always appears in
linguistic practice will be rightly addressed.
Thus this approach presents itself at first sight as a theory of individual facts, as a weak
theory. One may fear that it might not embrace well the transversal generalities
which we observe and which lend themselves well to symbolic modelling (like
determination by an article, the SVO construction, etc.). I show below that this is not the
case: it is possible to design processes spanning in continuity from the smallest idiosyncrasy
to the widest generality, and addressing the whole span with the same base
mechanisms.
This, without yet solving the question of how sameness is approached (cf. Chap. 3),
constitutes a rejection of the totalistic approach and the promotion of proximality.
1.8. Hypothesis
The hypothesis of this work can now be stated as follows.
An apparatus consisting of a) analogical inscriptions that are strictly exemplarist and
endowed with proximality, and b) a dynamics of elementary abductive, analogical
movements, makes it possible:
- firstly, to explain in a homogeneous framework the linguistic dynamics (reception,
emission, learning, the dynamics of language change), and to understand them with
respect to each other,
- secondly, to reconstruct as a consequence the question of the possible / impossible in
language, that is, to explain as effects of the dynamics the static stipulations which
constitute the grammars.
If this track succeeds, it shows that the reverse position – which thinks it necessary to
first establish a static description, a grammar, with a view to explaining the linguistic
dynamics later – takes things in the wrong order.
Isn't a work along these lines behind the times – wouldn't connectionism be fulfilling this
programme in a more promising and more plausible manner? Connectionism is indeed
the school of thought which presents the closest accord with these themes: a
connectionist model is that of a defined speaker, it is abstraction-free and rule-free, and
it is certainly dynamic. It yields gradient effects and combines viewpoints. Any single
detail in it may contribute to a result but none is critically mandatory.
However: i) one thousand presentations of a training corpus do not constitute an
acceptable model of learning, and the training procedures of connectionist models are
not incremental (French speakers have one word only, apprentissage, but we must not
mix up training and learning); ii) the gap between observations and the implementation
substrate (cells and weights borne by links) is too wide, which makes any explanation
impossible or too obscure; and iii) finally, neuromimetic connectionism progresses slowly
and with difficulty on variable binding, on recursive structures, and on the treatment of
individuals; see details in section 7.8. Binding, variables, variable binding (p. 218).
Hence, another approach, based on mechanisms less opaque than those of the
connectionist models, should be welcome in order to progress in our understanding of the
linguistic dynamics.
Chapter 2.
Moments in the history of analogy,
in linguistics and in psychology
To support what precedes, and to provide justifications which will serve in the next
chapter, here are some steps in the history of analogy considered from the point of view
of linguistics and, secondarily, of psychology. Thus, the analogy of the theologians
(mainly Thomas Aquinas) will be considered only marginally.
This history is important mainly in three moments. In Antiquity emerges the first figure
of a debate which will keep grammarians, then linguists, busy for a long time: that
between regularity (analogy) and anomaly. In the 19th century, the Neogrammarians,
then Saussure, conceive with precision the role of analogy in language change. The 20th
century, finally, is marked by the disrepute of analogy and its dismissal by Chomsky, then
by its rehabilitation, first by cognitive scientists and then by a very few linguists.
2.1. In Antiquity, a "quarrel" arbitrated by Varro
After borrowing it from Thales, Aristotle defines analogy as follows:
There is an analogy when the second term is to the first what the fourth is to the third;
one will then replace the second by the fourth or the fourth by the second, and
sometimes, one adds the term to which that which has been replaced relates. For
example in The vase is to Dionysus what the shield is to Ares; the vase will then be
called the shield of Dionysus, and the shield the vase of Ares. Or else Old age is to life
what evening is to day, one will then call the evening: the old age of the day, or as
Empedocles, one will say of the old age that it is the evening of life or the sunset of
life26.
Analogy, for Aristotle, is initially 'poetical' or rhetorical. It is found in the Poetics and
not in the de Interpretatione where that which will serve us would rather be expected.
Then the grammarians get hold of it:
Varro27 recalls that it is by borrowing from the mathematicians (Eudoxus of Cnidus,
friend of Aristotle, then Euclid of Alexandria) their proportional ratio (analogon in
26. Aristotle, Poetics, chap. 21 (Fr. ed. Seuil 1980, p. 109).
27. Varro, De lingua latina, book 10, 45 B.C.
Greek) that the grammarians of Alexandria for the first time displayed in clear tables
the complex Greek inflexional morphology: declensions and conjugations28.
A great question for grammarians in Antiquity is known as the "quarrel" between
analogists and anomalists29. For the former (Aristarchus), language is ruled by analogy;
for the latter (the Stoics: Crates of Mallos, Sextus Empiricus), language is dominated by
anomaly.
The arguments of both were not placed on the same theoretical plane; the anomalists […]
adopt a general viewpoint: if analogy were the organizing principle of the formation of
words, it would operate regularly, and would be perceivable across the entire perspective of
the set of words. Now this is not the case [...]. For the analogists […], despite this
profound concern, there all the same exist analogous forms [analogies de formation]
with great evidence, and they represent an organizing principle sufficient to describe
the transformations of words with respect to one another30.
The terms of this debate will be taken up again by Varro in the 1st century B.C.
Varro31 criticizes both viewpoints, stressing that the matter is not to compare forms but
relations between forms. Comparing amabam ("I loved") and legebam ("I read") leads
nowhere, because one could add rosam ("the rose", acc. sing.) on the same plane. In
contrast, the proportional ratio amabam : amabat ("I loved" : "he loved") :: legebam :
legebat ("I read" : "he read") makes it possible to determine the identity of a type of
transformation32.
The point is well made but not very well worded: the matter is not to "transform". It is to
compare, and to put productively into play terms involved in systems of relative positions,
these positions being reflected in the overt form in some cases, and having no
formal manifestation in others. In the Arab world, the analogists of Basrah and the anomalists of
Kufa33 will echo the Greek 'quarrel'.
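As a purely illustrative sketch of the formal operation Varro isolates – not the mechanism developed later in this work – the proportional fourth can be computed mechanically over character strings, under the assumption that the first pair differs by a simple suffix alternation; the function name and this criterion are assumptions of the sketch, and diacritics are omitted.

    def fourth_proportional(a, b, c):
        """Solve a : b :: c : x when a and b share a stem and alternate suffixes.

        Returns None when this simple suffix schema does not apply.
        """
        # Longest common prefix of a and b = the shared stem.
        i = 0
        while i < min(len(a), len(b)) and a[i] == b[i]:
            i += 1
        suffix_a, suffix_b = a[i:], b[i:]
        # c must carry the same alternating suffix as a.
        if suffix_a and not c.endswith(suffix_a):
            return None
        stem_c = c[:len(c) - len(suffix_a)] if suffix_a else c
        return stem_c + suffix_b

    print(fourth_proportional("amabam", "amabat", "legebam"))    # legebat
    print(fourth_proportional("oratorem", "orator", "honorem"))  # honor
    print(fourth_proportional("cat", "cats", "child"))           # childs (over-regularized)

Such a sketch of course leaves open the real questions – which terms and pairs serve as models, and when the schema applies at all.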
As we restrict ourselves to language, we will leave aside Augustine, Scholasticism, and
Thomas Aquinas – though his commentator Caietano will be solicited several times below –
to reconnect with analogy in France in the 17th century.
2.2. Arnauld and Lancelot, disturb the analogy of language as little as possible
In Arnauld and Lancelot is to be found, seventeen centuries after Varro, a revised
position on the anomaly-analogy question, but in a curious posture: 'honnête homme'
decency on one side, and a proto-scientific attitude on the other – an interesting
amalgamation of normativity and of an objective stance with respect to language.
28. Douay 1991, p. 8.
29. On analogy-anomaly, see also the short but excellent paper of Françoise Douay (1991) which, moreover, connects this ancient quarrel with a more recent one and clarifies it: is there a cognitive linguistics, and one which would not be?
30. Baratin in Auroux 1989, tome 1, p. 229.
31. Varro, De lingua latina, X, 37-38.
32. Baratin in Auroux 1989, tome 1, p. 229.
33. Rey 1973, p. 178.
It is a maxim that those who work on human languages must always have present in
mind, that the ways to speak which are authorized by a general and unquestioned usage
must be considered good, even if they contradict the rules and the analogy of language;
but they must not be invoked to put rules in doubt or to disturb analogy, nor,
consequently, to authorize other ways to speak that usage would not authorize34.
The Grammaire générale et raisonnée discusses, criticizes, or generalizes "the rules that
Vaugelas had sketched without striving to make a systematic work"35. The position of
Arnauld and Lancelot will amount to accommodating attested anomaly while disturbing
the "analogy of language" as little as possible, without however authorizing non-attested
usage.
[The ablative in Latin], properly speaking, is not to be found in the plural, where, for this case,
there is never an ending different from that of the dative; but, because it would have
disturbed analogy to say for example that a preposition governs the ablative in the
singular and the dative in the plural, it was preferred to say that this number also had an
ablative, but always similar to the dative. It is for this same reason that it is useful to
also give an ablative to Greek nouns, which is always similar to the dative, because this
conserves a greater analogy between these two languages which, ordinarily, are to be
learnt together36.
I will show how a different treatment of the question is possible in section 6.1.2.
Homography, accidental homonymy, syncretism (p. 160).
2.3. Humboldt: analogy puts sound and concepts at the same pace
For Humboldt37,
Concepts may be marked in three manners: [1. immediate imitation, 2. symbolic
imitation, and] 3. Phonetic similarity [which] depends on the concepts to be denoted.
Words with similar significations receive sounds with the same proximity […]
presupposing sets endowed with a certain magnitude. It is the most fecund function and
that which realizes the clearest and most distinct adequacy between the system of
intellectual productions and that of the language; such a procedure – in which the
analogy of the concepts is taken to a degree such that, each remaining in its own
domain, they are made to walk at the same pace – may be qualified as analogical.
Analogy is then "one of the causes which gives birth to grammatical categories"38.
Trabant39 sees here "the relative motivation of language" of Saussure, which "is the
image of the coherence of the world which thought produces with the help of language.
For this very reason, that relative motivation is also an image of the coherence of the
world tout court which, to be sure, is given to us through language and, without
language, would be a hopeless chaos".
34. Arnauld 1660/1997, p. 60.
35. Mandosio, introduction in ibid., p. XV.
36. Arnauld 1660/1997, p. 38.
37. Humboldt 1974, p. 218.
38. Destutt de Tracy commenting the Lettre à M. Abel-Rémusat of Humboldt.
39. Foreword, in Humboldt 1974, p. 77.
Humboldt thus undertakes to connect morphological analogy with that which has not
yet been thematized as semantics. Does he restrict himself to morphology or is his
proposition extended to longer forms, thus encompassing syntax? This is possible but
hard to decide, given Humboldt's style, which is very open and sometimes imprecise.
Later in the 19th century, analogy founds an explanatory relation between what will soon
after be termed 'diachrony' and 'synchrony'.
2.4. Brugmann and Saussure, analogy repairs phonetic change damage
2.4.1. Neogrammarians as seen by Auroux and Engler
The following quotation is long but important to frame a critical moment in the history
of analogy in the 19th century:
In 1882 Ziemer listed the new themes brought about by the Neogrammarians: […], they
make the concept of analogy something fundamental. Building on the (how ambiguous)
concept of phonetic law, they strive to view the reality of language as an unconscious
process. This makes them reject the purely subjective explanatory principles of Curtius.
On the contrary, because they strive to connect language with the acts, they have to
explain, calling most of the time on associationist psychology, and on the need to
understand each other within a group, how, from individual acts, one passes to the
regularity snatched away (sic) from individual wills. The epistemological achievement
is far from obvious and definitive. If they hold that, beside the phonetic laws, analogy
is the second factor ruling the life of language, the Neogrammarians use this concept
rather loosely, notably to explain the exceptions which are opposed to the phonetic
laws. As early as vol. IX of the Studien, Curtius reminded them that analogy had to be
considered on series only. Progressively, the concept of analogy comes closer to what
will become that of paradigm or that of paradigmatic axis of the language. For example,
Brugmann notes that, to someone who wants to learn German, no one says that gastes is the
genitive singular, gast the dative, etc.; rather, one creates the different forms, each from
the other ones. This idea is mainly an achievement of the Neogrammarians; it is
because he rejects the role played by analogy, that Curtius dedicates the last part of his
pamphlet to the primitive language. His effort is to show that the PIE [proto-Indo-European] is an arbitrary reconstruction, and that inflections in it play no role. He thus
has perfectly understood that, if one makes a link between the new conceptions of
analogy with phenomena like inflections, one must also consider a series of synchronic
states of the language in which forms act on each other. The concept of analogy leads to
synchrony. The theme of Ausnahmslosigkeit [the fact of being without exceptions]
historically arises from the mechanist conceptions developed in the second third of the
century […]. This prevents the Neogrammarians from understanding the role of the
combinatorial formations (we would say 'syntagmatic'), as will be noted by Jespersen,
and above all, from understanding the effect of meaning on the change of the sound form40.
Between phonetic laws and analogy, Engler identifies in the Neogrammarians a
dissymmetry in favour of the former:
… the phonetic laws, postulated without exceptions, and without counterbalance
(nothing more revealing in this respect than the term "false analogy". And even if the
Neogrammarians and Paul acknowledge the importance of analogy, it will only be with
40. Auroux 2000a, p. 421.
Saussure, who relates it to a fundamental principle of the mechanism of language,
that analogy will play on a par with the phonetic laws) are so many illusions that tend
to make language 'inhuman'41.
This is not entirely right: Brugmann sees analogy playing on a par with the phonetic laws
twenty years before Saussure.
2.4.2. Karl-Friedrich Brugmann
This passage from Brugmann (1849-1919)42 is translated from a quote in Normand
1978, p. 48-50. Brugmann sets out the derivational and inflectional combinatorics:
It is the compliance of the material element (base, root), recurring in a set of the various
forms and derivations of a word, which causes the feeling of the etymological link. As
to the orderly feeling of the system of inflections and of lexical formations, likewise, as
to the system of the meanings of the syllables marking inflections and derivations, this
feeling is rooted in groupings like gastes-armes-spruches, etc., führung-leitung-bereitung, etc., and also in the comparison of parallel series such as gast-gastes-gäste =
arm-armes-ärme = spruch-spruches-sprüche, etc. It is therefore at the expense of a
certain amount of formal analysis operating when instating some groupings that are
typical of the system of lexical formation and inflection, that the speaker gains
awareness of the models and rules following which he shapes most of his productions;
because, including in adults, one observes the combinatorial activity play a role, in
addition to memory.
The question of productivity is explicitly posed and attributed to analogy (the formation
of an unknown fourth):
Whence the particular importance associated with the creative activity by combinatorial
operation, which the subject operates in the domain of lexical formation and even more
so in the system of inflection. As most of the forms in a system with multiple
articulation were never heard before, or if they were heard, they were not inscribed in
the memory, we form them with the help of groups, by establishing – in a naturally
unconscious manner – ratios between already known terms and by deducing the
unknown fourth term.
Productivity, thus envisaged by Brugmann, may comprehend syntactic productivity
depending on the interpretation of "system with multiple articulations" (as with
Humboldt, supra, it is not entirely clear). At this point sprouts a dynamic vision of
grammaticality …
In the course of the epigenesis operating repeatedly on the model of the relevant
representative groups, it is indifferent to the nature of the productive activity whether
the element is already in use in the language or deprived of attested existence. In the
latter case, it suffices that the speaker who creates an element which deviates from
accepted usage, feels no contradiction with the inventory acquired by learning and
stored in the memory.
… this makes it possible for linguistic change to explicitly integrate the explanatory
frame:
41. Engler 2000, p. 240.
42. Brugmann, Zum heutigen Stand der Sprachwissenschaft, Strasbourg, 1885.
Group dynamics is, to a large extent, what grants each member of a linguistic
community the possibility and the opportunity to go beyond accepted usage. But for a
novel formation which conflicts with established usage to acquire a general validity, it
will have to develop spontaneously and simultaneously in a large number of interacting
individuals.
Phonetic change, the damage it does to paradigms, and its ensuing repair by analogy
are dissociated and formulated in terms which Saussure will later endorse:
… hence a notable difference between analogical formation and phonetic change as, in
the case of analogy, innovation does not necessarily incur the rejection of the older
element. Now the emergence and the entrenchment of analogical formations almost
always are causally related with phonetic change. Phonetic alterations cause either the
displacement and uninterrupted destruction of existing groups in the course of the
language history, or the emergence of new groups.
Phonetic change affects already established groupings and associations by unmotivated
distinctions among congruent forms. Cf. esti, este, eimi … To this loosening of the
combinatorial ratio caused by phonetic variation, analogy offers a parry and a response.
The entirety of language dedicates itself tirelessly to blur useless discrepancies and
respond to functional constancy by constancy of the phonetic expression; with an
insisting and progressive pace, it tries to reinforce the conditions of solidarity and better
adjust the groupings in the domain of lexical formations and of inflection.
In a word, for Brugmann, novel formations amount to the deduction of an unknown
fourth. To the loosening of the combinatorial ratio resulting from phonetic change,
analogical formations offer a parry and a counterstroke.
2.4.3. Saussure
Saussure adopts the same analysis of the "repairing" dynamics of analogy. Phonetic
change:
blurs and complicates the linguistic mechanism in the measure in which irregularities
born from phonetic change contradict groupings based on general types; in other words,
in the measure in which absolute arbitrariness takes over relative arbitrariness.
Fortunately, the effect of these transformations is counterpoised by analogy. Analogy is
responsible for all normal modifications of the outside appearance of words which are
not phonetic in nature. Analogy subsumes a model and its regular imitation. An
analogical form is a form built after one or several other ones following a defined rule.
Thus in Latin the nominative honor is analogical. One used to say honôs : honôsem,
then, through rhotacism of the s, one said honôs : honôrem. At that moment, the radical
had a dual form; this duality was eliminated by the new form honor, created following
the model of ôrâtor : ôrâtôrem, etc.; by a process which we assimilate to the
computation of a proportional fourth:
ôrâtôrem : ôrâtor :: honôrem : x → x = honor
In order to counterbalance the diversifying action of phonetic change (honôs :
honôrem), analogy re-unified the forms and restored the regularity (honor : honôrem)43.
Saussure takes great care to qualify the effect of analogy as an addition, not as a change.
43. Saussure 1915/1970 (Cours), p. 221.
Analogy installs a competing form beside a traditional one. This competitor may
eventually supersede the more traditional form44.
Pension : pensionnaire; réaction : réactionnaire. Pensionnaire and réactionnaire do
not change anything in a pre-existing term. They replace nothing.
Analogy is the "principle of language creations" and is grammatical:
Analogy is grammatical in nature: it supposes the awareness and the understanding of a
ratio uniting the forms with each other. While the idea is nothing in the phonetic
phenomenon, its intervention is necessary in analogy (intervention of a proportional
fourth).
The combination:
ôrâtôrem : ôrâtor :: honôrem : x → x = honor
would have no raison d'être if the mind did not associate by their meanings the forms
which it contains.
Therefore, everything is grammatical in analogy; but it should be added immediately
that the creation which it produces, at first, can only belong to the parole, it is the
occasional work of an isolated subject. In that sphere, and away from the langue, is
where it is appropriate to initially catch the phenomenon. However, two things must be
distinguished: i) the understanding of the ratio which relates the generating forms (les
formes génératrices); ii) the result suggested by the comparison, the form improvised
by the speaking subject to express his thought. Only this result belongs to the parole45.
To complete the characterization of analogy as a creation, and not as a change, the table
below summarizes the contrast that Saussure46 makes between analogy and what he
names 'agglutination'47.
44. Ibid. p. 234.
45. Ibid. p. 226.
46. Ibid. p. 243-244.
47. Saussure does not use "agglutination" in the sense in which Turkish morphology or the Japanese verb morphology are agglutinative.
Analogy | Agglutination
pâg + ânus → pâgânus | hanc + horam → encore; potis + sum → possum
With smaller units, analogy builds a longer unit [which is analysable]. | Two or more units melt by synthesis into a single one [which ceases to be analysable].
Draws on associative series [paradigms], along with the syntagms. | Does not draw on an associative series; bears on a group alone; syntagm only (no paradigm).
Supposes analyses and combinations, intelligent activity, intention. Assembly obtains at once, in an act of parole, by the union of elements borrowed from various associative series. | Is not voluntary, is not active. A mechanical process. Assembly obtains by itself. Slow cementing of elements. The synthesis may erase the original units.
"Construction" (vague) may apply. | "Construction" (vague) may also apply. "Composed", "derived" must be reserved to this case.
Table 1 Analogy and agglutination according to Saussure
About analogy substituting older formations with newer ones, cf. also a footnote in the
section beginning on p. 258, where the case "somnolent" is analysed by Saussure.
Three points are explicit in the lines cited above: i) analogy is an act of parole, ii)
analogy is creation or addition, not transformation, and iii) analogy is grammatical. We
see therefore that it belongs directly to the dynamics of linguistic acts.
However, Saussure sees analogy as repairing or morphological without claiming any
specific place for it in syntax – the Cours does not make much room for syntax48.
2.5. A repairing analogy with morphological and syntactic effect
Is the operation of the repairing analogy limited to morphology or lexical creation? A
case will show that it may also act on a paradigm less narrowly characterized than an
inflectional or derivational paradigm49.
From a corpus taken from the Internet50, Rastier picks up the following series of
examples of collocations that are typical of racist pages. Detecting collocations of this
sort helps in the characterization of racist contents:
48. At that time, in paedagogy, analogy is almost a synonym of morphology + inflection. "The Spanish Academy calls Analogía that part of the grammar which teaches the parts of speech with all their properties and accidents" (Galban 1907, p. 17). This book of Spanish grammar for high schools has four major divisions: prosody (17 pages), analogy (175 p.), syntax (1 p.) and orthography (9 p.), so that grammar consists of nearly morphology and inflection alone: 87% of the total!
49. This case study, which breaks the historical organization of this chapter, supports in anticipation the discussion below on Bloomfield and Chomsky.
50. In a work for detecting racist pages on the Internet, sponsored by the Commission of the European Communities. Rastier 2002d (François), Les critères linguistiques pour l'identification des textes racistes - Eléments de synthèse, in Valette, Mathieu, ed., European project Princip.net: a platform for the research, the identification, and the neutralization of illegal and offensive contents on the Internet. Deliverable 2002-1, Inalco, pp. 84-98.
idéologie mondialiste
complot mondialiste
mafia cosmopolite
financiers étrangers
lobby de l'immigration
internationale juive
He notes, rightly, that this series, extracted from a corpus and therefore "given", presents
a regularity: the rightmost term concerns the axis "us-them" while the leftmost one is a
determination without reference to this axis. In this, the series is regular. But it presents
an anomaly: in the last item, internationale juive, the contrary is the case; "us-them"
appears in the first term and the term without this property is the second one. The item
internationale juive thus "disturbs" (quoting Saussure) the series, and this complicates a
little, says Rastier, the detection of racist contents.
This disturbance appears to have had another effect than that of making the detection more
complex; it seems it was also perceived by the racist rhetor who, on some
occasion, produced the innovation juiverie internationale. This creation causes a
linguistic discomfort (leaving aside discomforts of other natures); something here succeeds
despite the question always associated with a novel creation: for what benefit should the
innovation cost be spent? In what, then, does this creation succeed? There are many
factors, among which the pejorative character of the suffix -erie in this context; there is
also – and it is the point here – the reintegration that this innovation operates of (juif +
international) into the series. This series is present and active in the minds of the
speakers even though its formal structure and working levers remain non-explicit – but
isn't its efficiency all the better for that. The form internationale juive, anomalous then,
performed the recuperation of the rhetorical benefits – assumed available – of idéologie
mondialiste, complot cosmopolite, etc. with an efficiency that was only relative, because
of its anomaly; the new form juiverie internationale, now regular in this series, does so
more efficiently.
This analogical creation is quite as repairing as that which produced honor in Saussure's
example, yet it differs in two respects: i) the trouble it repairs is not the effect of
phonetic change, it is something else; ii) the means of the reparation are not limited to a
lexical creation or a morphemic regularization against the "transparency of an etymon";
beside the creation juiverie, they also comprise a syntactic rearrangement, which in this
case is the permutation of two terms.
This example is interesting for two reasons: first, it bears simultaneously on morphology
and syntax, another indication that the border between them is not sharp; second, it
leads one to envisage as a paradigm – in a broader sense – a set which is not narrowly
determined by distribution but is a field onto which the same analogical pressure is
exerted; though the reasons are less easy to characterise, they are nonetheless
perceived by the speakers.
2.6. Bloomfield, the power of analogy extended to syntax
In 1933, in Language, for the first time in modern linguistics as far as I am aware,
Bloomfield states straightforwardly that analogy may be held to account for
linguistic innovations in constructions:
A grammatical schema (sentence type, construction or substitution) is often called an
analogy. A regular analogy allows a speaker to utter discourse forms which he has not
heard; we shall say he utters them by analogy with the regular forms he has heard51.
This is followed by a development on analogical morphology and its relation with
anomaly that does not innovate on what we saw with Brugmann and Saussure.
Remembering perhaps Wallis, who described the phenomenon in the 17th century, or,
more recently, Humboldt, Bloomfield anticipates the phonesthemes of Firth and the
idiophones of Tournier and Philips52:
Even the morphemes that form the bases have some flexibility; when hearing a form
like squunch in the sense of 'a step making a suction noise on a wet ground', we cannot
say whether the utterer already heard it or whether he uses an analogy with [skw-] as in
squirt, squash, and with [-onč] as in crunch53.
Adopting the full vision of analogy, that is, that of the proportional fourth, he puts it
forward as the explanation of learning and, therefore, of linguistic productivity:
p. 259: Regular analogies are substitution habits. Assume for example that a speaker
has never heard the form Give Annie the orange but he has heard or uttered a series of
forms such as the following:
Baby is hungry.
Dad is hungry.
Bill is hungry.
Annie is hungry.
Poor Baby!
Poor Dad!
Poor Bill!
Poor Annie!
Baby's orange.
Dad's orange.
Bill's orange.
Annie's orange.
Give the baby the orange.
Give Dad the orange.
Give Bill the orange.
…
He now has the habit – the analogy – of using Annie in the same positions as Baby, Dad, Bill
and therefore, in the appropriate situation, he will utter the new form Give Annie the
orange. The fabrication of a form by analogy with other ones is similar to solving a
proportional equation with infinitely many ratios on the left side:
Baby is hungry. : Annie is hungry.   )
Poor Baby! : Poor Annie!             )  =  Give the baby the orange. : X
Baby's orange. : Annie's orange.     )
The explanatory power of analogy is now explicitly claimed for syntax – so far it had been
claimed for morphology only. The explanation is very clearly made, one may believe it
and adhere to it, but it is not built up or supported any further: we stay with "substitution habits",
and the "therefore" is far from clarifying the causal chains that would show how the
subject becomes productive or, with precision, which substitutions can be done and
which ones cannot. This leaves a remainder to explain; we shall see what consequences
a contradictor will draw from it.
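A minimal sketch of what "substitution habits" could amount to mechanically – and nothing more: like Bloomfield's formulation, it does not say which substitutions are licensed and which are not, which is precisely the remainder just noted. The toy corpus (with the article dropped from the Give-frames), the whitespace tokenization and the function names are simplifying assumptions of the sketch.

    def attested_frames(corpus, term):
        """Frames (with a '_' placeholder) in which a term has been observed."""
        frames = set()
        for sentence in corpus:
            words = sentence.split()
            for i, w in enumerate(words):
                if w == term:
                    frames.add(tuple(words[:i] + ["_"] + words[i + 1:]))
        return frames

    def substitute(frame, term):
        return " ".join(term if w == "_" else w for w in frame)

    corpus = [
        "Baby is hungry", "Dad is hungry", "Bill is hungry", "Annie is hungry",
        "Poor Baby", "Poor Dad", "Poor Bill", "Poor Annie",
        "Give Baby the orange", "Give Dad the orange", "Give Bill the orange",
    ]

    # Annie already shares frames with Baby ("_ is hungry", "Poor _") ...
    print(sorted(attested_frames(corpus, "Annie") & attested_frames(corpus, "Baby")))

    # ... so a frame attested for Baby but never for Annie yields the novel form.
    for frame in attested_frames(corpus, "Baby") - attested_frames(corpus, "Annie"):
        print(substitute(frame, "Annie"))   # Give Annie the orange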
51. Bloomfield 1933/1970, p. 258.
52. Didier Bottineau, personal communication.
53. Bloomfield 1933/1970, p. 258.
2.7. Householder formulates the potential of analogy
In 1971, Householder delivers, in Linguistic Speculations, a chapter, Sameness,
similarity, rules and features54, which reinterprets a great number of linguistic
phenomena with analogy.
At that time, the situation appears to be that each of these phenomena is, or has already
been, analogically analysed by some author, but these analyses are scattered across the
publications and in the perception that linguists have of them. The situation is also that
the thinking current at that time provides for these phenomena theories that are not
analogical. The distinctive merit of this chapter by Householder is therefore to bring
such analogical analyses together in one chapter and thus produce suggestion effects.
This is already of value even if, as we shall see, the theory which could follow is not yet
constituted.
He starts from the two-term analogy of the type A is similar to B – which I shall call 'A2
analogy' below – and straight away identifies that a similarity is always apprehended in
some definite way, and that there are always several possible dimensions to comparison,
which leads to the following:
How does one systematize, consciously or unconsciously? The only candidate so far
proposed for this job is analogy. An analogy is a sameness of similarity and differences
(p. 63).
Here we meet the full analogy – which I shall call A4 below because it consists of four
terms – which will be the subject matter of the twenty ensuing pages.
If I have noted that A is like B, C is like D, E is like F, … and then go on to compare
the A-B similarity to (let us say) the E-F similarity, and conclude that they are the same
[both similarities are the same], I am said to have established a proportion or analogy A
: B = E : F, which, just as in mathematics, can also be written as A : E = B : F, …(p.
63).
He thus postulates analogy to be transposable; as we will see below in the 'transposition
abductive moment', cf. p. 8755, this property is not always verified.
Householder then builds an analogical vision of a great many linguistic phenomena,
beginning with lexical segmentation. The chapter contains few general propositions;
rather, it builds a convincing effect by accumulating the setting into analogies of pairs of
various natures. In this, the vision is 'exemplarist' much in the way the model promoted
in this work is. The text is somewhat wearisome, which does not mean without interest,
made mostly of 'boring examples' (Householder), and the only thing that can be done is
sampling:
A word like bet, let us say, is first opposed to things like abet, you bet, etc. and to those
like better, bet them, etc., and Bret, bent, best, etc. by an analogy or analogies whose
terms are nothing : something. Then it is successively opposed to:
pet, vet, get, debt, jet;
to bait, to bit, bat, but, *[but], bot;
54. Householder 1971, pp. 61-80.
55. But we will have to accept that this transposition does not always apply, so it is not exactly as in mathematics; see the quoted section and the corresponding appendix.
to beck, *bep, *betch, Beth, Bess, and bed.
And there are no more, except ones in which one of these (or more) could be inserted
as a middle term; i.e. beg is not on this list because it is the first and most closely
opposed to bed, which is on the list. (p. 65).
The discourse intimately associates segmentation and phonology. Householder does not
directly link bet – beg because the chain bet – bed – beg is possible. He requires
individual links to be by minimal contrast where attested forms make this possible, that
is, where they attest such contrasts in context. In two pages of more boring examples the
analogical pairs are minimal contrasts, e.g. bed : pet (+ voiced : - voiced) altering
voicing and articulation point, for the initial consonant, for the final consonant, etc. The
phonological development is long and detailed. On the way, partial productivity in the
lexicon is encountered and treated analogically (it is not the derivational productivity,
which is partial itself, but the fact that not all phoneme sequences, even phonotactically
good, are realized as lexemes). Also encountered – and analogically treated – is what I
shall call below 'group sensitivity' (p. 169):
It is a remarkable characteristic of several Indo-European languages … that there are
sets of affixes superficially different in form from other sets, but filling exactly the
same function – the so-called declensions, or declension-types (p. 69).
as well as many more phenomena: phonological, lexical, morphological, and syntactic,
which we have to skip; please refer to the text. The vision of analogical change, that of
Brugmann and of Saussure (it would be better said: "linguistic change by analogical
creation"), is specified on the way:
The kind of linguistic change known as analogical change is not a change from non-analogy to analogy or one caused by analogy, as is sometimes mistakenly supposed, but
a change from one analogy to another, a transfer of pattern or item from one
proportional set (usually a short one, even unique in one dimension) to another (usually
a long one with two-dimensional similarity throughout). Householder 1971, p. 78.
When Saussure insists on seeing an analogical innovation as an addition to a previous
form, which will coexist with it, he does not appear to say anything else; both do indeed
recognize that the older form, which may eventually be superseded by the newer one,
had prior claims to being analogical, but in different analogies.
The overall theoretical proposition, if at all, appears in the chapter's conclusion:
Enough has been said to show the great role of analogy in forming the structure in a
man's brain, which is his language. We have also noted the convenience and economy,
in talking about such proportions, of using conventionalized summarizing devices like
rules, features, paradigms and matrices. From now on, we shall use these devices most
of the time; but we should not forget that each of them rests on one or more proportions
or sets of proportions. And if, in one sense, rules and features are merely arbitrary
fictions (while only the utterances and proportions are real), there is another,
paradoxical, manner of speaking in which only they are real while actual utterances are
merely conventional abbreviations for the rules and features. Many linguists prefer this
paradoxical sense of 'real' (p. 79-80).
In the rest of the book, Householder will use "rules, features, paradigms and matrices"
as a matter of convenience and economy, but solely as conventional devices, without
forgetting that they rest on proportions or sets of proportions, only the latter being real.
Taking the opposite route is, for him, paradoxical.
The vision I defend in this work is in very good agreement with Householder's views,
but in addition, all the consequences are drawn: not only do I not forget that proportions
(i.e. analogies) are the base on which rest all these "conventions" that are rules and
matrices, but in addition I restore analogy as responsible for the linguistic dynamics,
producing rule effects (cf. structural productivity, Chap. 4) and matrix effects: chapter 5
will substitute "matrices" and morphological paradigms with an analogical systemic
productivity.
I shall still refer to rules or conventions, by "convenience" or "economy", because they
may ease communication, counting on the complicity and the benevolence of
the reader – lest indeed we be exposed to the long series of Householder's boring
examples – but I fail to see why the smallest causal role should still be granted to them.
For Householder, those who make the opposite choice, that is, who choose rules, features,
paradigms and matrices over analogical proportions, make a paradoxical choice.
2.8. Chomsky, categories and generative rules against analogy
Chomsky, in order to respond to an observation of Descartes, undertakes explicitly to
account for linguistic productivity:
the Cartesian observation that human and animal language differ in a fundamental way:
les bêtes n'ont que des connoissances directes et absolument bornées, l'homme
compose son discours56.
He finds that his predecessors did not succeed very well in that:
Thus he [Vaugelas] regards normal language use as constructed of phrases and
sentences that are "autorisées par l'usage", although new words (e.g. brusqueté,
pleurement) can be correctly formed by analogy. His view of language structure, in this
respect, seems not very different from that of Saussure, Jespersen, Bloomfield, and
many more who regard innovation as possible only "by analogy", by substitution of
lexical items for items of the same category within fixed frames57.
and that neither did more contemporary linguists:
Modern linguistics has also failed in dealing with it in any serious way. Bloomfield, for
example, observes that in natural language "the possibilities of combination are
practically infinite", so that there is no hope of accounting for language use on the basis
of repetition or listing, but he has nothing further to say about the problem beyond the
remark that the speaker utters new forms "on the analogy of similar forms that he has
heard". Similarly, Hockett attributes innovation to "analogy". Similar remarks can be
found in Paul, Saussure, Jespersen, and many more58.
Analogy, which they called for to that end, does not suffice:
To attribute the creative aspect of language use to "analogy" or the "grammatical
patterns" is to use these terms in a completely metaphorical way, with no clear sense
and with no relation to the technical usage of linguistic theory. It is no less empty than
Ryle's description of intelligent behaviour as an exercise of "powers" and "dispositions"
56. Chomsky 1966b (Cartesian linguistics), p. 12.
57. Ibid. p. 54.
58. Ibid.
of some mysterious sort, or the attempt to account for the normal creative use of
language in terms of "generalization" or "habit" or "conditioning"59.
Analogy is thus insufficient because it "substitutes with one another lexical units of the
same category in fixed frames". Indeed, this is where Bloomfield stopped, but nothing
forces one to stop here, as will be demonstrated.
Again in 197560:
… notions like "analogy" do not take him [a man of science free of all ideology] very
far in the study of human capacities, at least in the domain of language61.
It must be noted, however, that the analogy which is dismissed here is that of "apparent
analogies":
Although John's friends appeared to their wives to hate one another and John's friends
appealed to their wives to hate one another are very similar, the speakers understand
them very differently, without taking their apparent analogy into account.
In short, in order to build a theory of the competence of the ideal speaker of a language,
one adopts a theoretical apparatus which is complex, categorical, regularist, and
abundantly "non apparent" after dismissing the unfortunate analogy: after forbidding it
to use its "non apparent" talents62, it was easy to show that it would not do the job. This
difference of treatment is never discussed or defended, and it is an outright injustice.
Indeed, the authors who had seen the potential of analogy had not, at that time, deployed
the explanatory chains, but this deficiency did not necessarily entail the dismissal of
analogy. However, the logical-symbolist pressure of the time led to it, and this is what
was done.
Chomsky then engages in what will be the first generativism, the apparatus of which is
well-known: lexical categories, phrase markers comprising nodes that represent
intermediate constituents, themselves strongly categorized, derivations based on rules
and transformations based on rules.
A possible schematization of the history of analogy, or of its distribution in the variety
of the uses of the word 'analogy' could be the following:
1) Aristotle's analogy, with four terms, binding a proportional fourth to the three
other terms. Let us call this analogy A4 since it holds between four terms. It is
that of Varro, of the Neogrammarians and of Saussure.
2) A degraded analogy as for example in the utterance A is analogous to B. It is a
commonplace usage, which corresponds to a moment of discredit of analogy and
appears to have prevailed in the first half of the 20th century63. Let us call it A2
59. Ibid. p. 12-13.
60. Then again in 1985: "The production and the interpretation of novel forms was judged at most as a question of analogy which posed no problem in principle." (Chomsky 1985/1989, p. 21).
61. Chomsky 1975/1977, Problèmes et mystères (Reflections on language), p. 174 et seq.
62. Yet, let us recall Saussure, quoted here again: "The combination ôrâtôrem : ôrâtor :: honôrem : x → x = honor would have no reason to be unless the mind associated by their meaning the forms which compose it" (author's highlight).
63. The confusion is even much older since, at the time of the Counter-Reformation, one finds this in Caietano, commenting Thomas Aquinas: "The word analogy, as we received it from the Greek … has been so broadened and divided that we say many names analogous wrongly. … Proportional analogy only [which is that called A4 here] constitutes analogy. As for inequality analogy [which is that called A2 here] it is absolutely foreign to analogy." Caietano 1498/1987, p. 113.
since it holds between two terms only. A2 occurrences, once analysed, very soon also
reveal four terms; these show up easily, without ever digging very deeply.
However, A2 users do not mention them, and A is analogous to B is synonymous
with A is like B; one does not specify in what respect A and B are alike.
3) In the last quarter of the 20th century, renewed attention to analogy and restoration
of A4 analogy, through works in psychology and in cognitive science, then a
request for its rehabilitation, made by Itkonen, which will be analysed in detail
below.
When Chomsky refuses analogy, which of these two visions does he refuse? If it is A4,
the refusal is imprudent; if it is A2, then we should have to think that the blurring of
analogy was very marked at that time.
Milner (1989) makes a comparable reading of the history of analogy:
Saussure explicitly uses the notion of proportional fourth, which exactly meets the
notion of analogy in the Greek sense. Moreover, the entirety of Chap. IV of the third
part of the Cours, titled "L'analogie", represents a remarkable attempt, and a success,
aiming at restoring the term analogy in its ancient and precise usage, beyond a modern
and imprecise one64.
He even puts forward that:
Between the Greek world and the modern universe, a difference has arisen: it is well-known
that there exist today mathematical theories of irregular phenomena; in this
sense, the ancient opposition between analogism and anomalism could be overcome; in
fact there exist some linguistic theories which treat language as fundamentally
"anomalous" in the Greek sense. In modern terms, we should rather speak of
complexity.
In some respects, [the opposition between analogists and anomalists] has a
correspondence within linguistics. The scepticism of V. Henry with respect to the
phonetic laws of the Neogrammarians, that of the school of Gilliéron towards the school
of Saussure and of Meillet, the conceptions of comparative grammar as a succession of
"small facts", all this relates to the anomalist conception. On the other hand,
formalising linguistics, be it structural, generative, etc. is rather on the side of the
analogist conception65.
If we follow Householder, the first generativism would then be in the uneasy position of
having refused analogy and at the same time having accepted an analogist conception of
language; a converging remark is made by Itkonen, cf. section 2.12. Itkonen,
rehabilitation of analogy (p. 43). Perhaps. If it has accepted it, in any case, it is through
the detour of the categories. That the categorical detour provides a great expressive
power is not doubtful; it is very attractive to whoever grants primacy to the concision
of the theory. However, the number of empirical residues that it leads one to leave
unexplained disqualifies it in its application to linguistic facts. Now analogy does not
imply categories; if it is to be applied, it is in its nascent state, without dressing it in a
categorical apparatus.
64. Milner 1989, p. 631.
65. Ibid. p. 631.
That is not what generativism did, and it is the proposition made in this work.
2.9. Hopper and Traugott, analogy participates in grammaticalization
Hopper and Traugott view analogy as one of the two mechanisms which account for
grammaticalization, the second one being reanalysis:
[…] the mechanisms by which grammaticalization takes place: reanalysis primarily,
and analogy secondarily. Reanalysis and analogy have been widely recognized as
significant for change in general, most especially morphosyntactic change. Reanalysis
modifies underlying representations, whether semantic, syntactic, or morphological,
and brings about rule change. Analogy, strictly speaking, modifies surface
manifestations and in itself does not affect rule change, although it does affect rule
spread, either within the linguistic system itself or within the community. Unquestionably,
reanalysis is the most important mechanism for grammaticalization, as for all change 66.
This conception is summarized in the table below.
The analogy which is envisaged here is analogy following profile 3: repairing analogy
(cf. p. 45). Viewing it as modifying a surface representation contradicts that on which
Saussure, agreeing with Brugmann and Householder, insists much: that analogy does
not modify anything but installs a new form in addition to an older one, a 'paraplasme'
says he, which may eventually cause the older one to disappear.
Hopper and Traugott make analogy (profile: repairing analogy) a secondary process
behind reanalysis. Now if we adopt a less profiled and less restrictive definition of
analogy, and if we put it, as I propose, at the base of linguistic inscriptions and linguistic
dynamics, it is possible to show how reanalysis may on the contrary be seen as an effect
of analogical dynamics. This will be done in section 8.2.3. Reanalysis (p. 253).
reanalysis | analogy
primary | secondary
hidden | overt
modifies the underlying representation | modifies the surface representation
involves rule change | does not involve rule change; affects rule spread: a) in the linguistic system, b) in the community
operates along the syntagmatic axis | operates along the paradigmatic axis
Table 2 Reanalysis and analogy for Hopper and Traugott
66. Hopper 1993, p. 32.
Even supposing that we restrict analogy to repairing analogy, the opinion that it
modifies only surface representations and that it operates only along the paradigmatic
axis is contradicted by the case which was studied on p. 32.
The same ideas as those in the table, and a few more are to be found ibid. p. 56:
As we have defined it, reanalysis refers to the development of new out of old structures.
It is covert. Analogy by contrast, refers to the attraction of extant forms to already
existing constructions. … It is overt. … Reanalysis operates along the "syntagmatic"
axis of linear constituent structure. Analogy by contrast, operates along the
"paradigmatic" axis of options at any one constituent node. When Meillet was writing,
there was a rather narrow, local interpretation of analogy, which was defined as a
process whereby irregularities in grammar, particularly at the morphological level, were
regularized. The mechanism was seen as one of "proportion" or equation.
cat : cats :: child : X → X = childs
The difficulty of the formula of proportion is that it gives no account of why a member
of the pair is selected as the model. Kurylowicz 1947-9 pointed out some tendencies
regarding selection of the model, for example, the tendency to replace a more
constrained with a more general form or vice versa. … Neither analogy as originally
conceived nor rule generalization is required to go to completion: we still have foot-feet, mouse-mice, and also run-ran, alongside love-loved.
I undertake below to show exactly how particular members and particular pairs are
selected as homologs. That these processes – analogy in particular – are not "required to go
to completion", that is, to embrace a set entirely, whatever its definition, is certain, and an
illustration of this can be found, for example, in the French verb (Demarolle 1990,
already quoted).
2.10. Analogy for psychologists and psychoanalysts
Wallon appears not to use the word analogy although he meets it (Wallon 1945, p. 46:
jour : nuit :: blanc : noir).
In Piaget, analogy has not been found, but the search has not been very deep.
Lacan makes no room for analogy in his theoretical approach. He himself uses it
rhetorically in places and condemns its misuse by some:
One Jaworski, in the years 1910-1920, had erected a very beautiful system in which the 'biological plane' was to be found up to the confines of culture, and which precisely gave the order of the crustaceans its historical conjunct, if I remember well, in some late Middle Age, under the heading of a common flourishing of the armour – leaving no animal form, molluscs and bugs not excepted, widowed of its human respondent. Analogy is not metaphor, and the recourse which some philosophers of nature had to it requires the genius of a Goethe, whose example itself is not very encouraging. None is more repugnant to the spirit of our discipline, and it is by expressly rejecting it that Freud opened up the way proper to the interpretation of dreams, and with it, the notion of analytical symbolism. This notion, we say, runs counter to analogical thought, which a questionable tradition leads some, even among us, to still hold as solid 67.
67 Lacan 1953, p. 262.
But the analogy that he castigates is A2 analogy; this is understandable for that time, as we saw above68. There is however plenty of analogy in his work, as early as the Séminaire sur la lettre volée (1956) and later, for example in the L schema. This will have to be investigated further, and one would be disappointed not to be able to make some connections, if it is true that le sujet … pour prendre dans la vie la couleur qu'il annonce à l'occasion … doit recouvrir homologiquement le ternaire symbolique69.
68 Another condemnation, for example, in seminar 10, L'angoisse, June 12, 1963, unpublished, where Theodor Reik is charged, seemingly with some reason, with making a wrong use of analogy.
69 Du traitement possible de la psychose, in Écrits, Seuil, p. 152 (the subject … in order to take on in life the colour that he occasionally announces … has to map homologically onto the symbolic ternary).
Concerning the view it is appropriate to take of analogy, I mention the debate between schema and categorization (position embodied by Holyoak) and projection and structure mapping (position embodied by Gentner):
Research on analogy is marked by two theoretical positions, which their respective proponents regard as different: the theory of projection and structure mapping elaborated by Dedre Gentner, and the schema theory, implying categorization, defended by Keith Holyoak, a conception now integrated into a broader theory of induction. Without deciding between these two approaches, most works in the domain of analogy have limited themselves to a somewhat external analysis of their hypotheses: it has been concluded that they do not stand at the same descriptive level, one addressing analogical transfer as an exchange of entities between different domains that share common relationships, the other insisting more on the activities of abstraction which are necessary for the transfer itself. Gineste 1997, p. 107.
I limit myself to mentioning them because these two positions appear to share a feature: neither places analogy itself at the root of the representations; in the works which are the subject of this debate, the representations (I shall prefer 'inscriptions') follow different paradigms, types, models – semantic networks and others (the approach is 'partonomic') – and it may well be the case that their antagonism vanishes if the vision is changed – if it is made 'isonomic', cf. section 3.6.7. Partonomy and isonomy (p. 89). I shall suggest how this can be done p. 186.
Finally, a text, Edelman 1998, Representation is representation of similarities, without directly addressing analogy, reinforces confidence in a model that is non-essentialist and that is based on similarities/differences or on "sameness of similarities and differences", to quote Householder again. As a very brief summary of his argument: rejecting theories for which a mapping happens between perceived forms and the representations of these (which would be a first-order isomorphism), Edelman thinks that similarities between perceived forms map onto similarities between internal representations (which is a second-order isomorphism). This view is compatible with the one I defend in this work and the subject will be expanded on p. 299 and the following page.
2.11. Hofstadter, emergent analogy
The question that Hofstadter addresses is analogy making.
Analogy-making is dependent on high-level perception, but the reverse holds true as
well: perception is often dependent on analogy-making itself. … It is useful to divide
analogical thought into situation-perception and mapping which involves taking the
representations of two situations and finding appropriate correspondences between
components of one representation with components of the other to produce the matchup that we call an analogy. (p. 180-181). However (p. 187) analogy-making is going on
constantly in the background of the mind, helping to shape our perceptions of everyday
situations. In our view, analogy is not separate from perception: analogy-making itself
is a perceptual process. … (p. 189) any modular approach to analogy-making will
ultimately fail. Hofstadter 1995, pp 180-189.
How can an analogy emerge from a model which does not presuppose it? Hofstadter explicitly sets himself the reduction of analogy as one of his goals.
Hofstadter will be met again on p. 261, where he will help give a feel for the need for what I shall call 'private terms'. I shall also state the debt I owe him, his model Copycat having been a decisive contribution to the design of the dynamic side of the model.
2.12. Itkonen, rehabilitation of analogy
In 1997 Itkonen gave a paper70 which is very important for my argument and with most of which I am in good accord. He begins by refusing Chomsky's refusal of analogy, using six arguments:
1. For Chomsky, the fact that there is no discovery procedure entails that there is no analogy.
For Itkonen, this argument is false because its premise is false; there is a
discovery procedure – even if we cannot formulate it today – and this is true for
Chomsky himself: the language acquisition device is expected to elaborate a
grammar starting from a poor stimulus.
2. For Chomsky, there is no simple, elementary induction by analogy which would
account for acquisition, production or reception. For Itkonen, the method is not
simple indeed but it does not have to be. Universal Grammar itself is not simple.
3. Initially, language creativity was equated by Chomsky with recursivity, against
analogy; this opinion was later rejected by Chomsky. For Itkonen, this being so, the ability to create and understand new forms is not distinct from the traditional
analogical ability.
4. For Chomsky the speaker can produce and understand completely new utterances
and this is why analogy does not suffice. Chomsky defines complete novelty as
the absence of physical similarity. Therefore he restricts analogy to physical
similarity, and hence what Chomsky rejects is analogy-as-physical-similarity and not the classical notion of analogy-as-structural-similarity71.
5. Production and reception are processes and analogy is at its best in processes.
Now Generative Grammar, because it concentrates on competence, loses the
dynamic vision (it never showed how the I-language serves the speaker in the
accomplishment of the acts). It therefore has no title to disqualify analogy.
70 Itkonen 1997.
71 Itkonen's terms are quite close to those I used above when commenting on Chomsky's rejection of analogy: Chomsky disqualifies analogy only because he sees it only in its overt form.
6. Chomsky speaks as follows: either you produce an explicit, analogy-based explanation, or you accept mine, which is anti-analogical. The argument can be turned back against him, says Itkonen: analogy is manifest in language and it manifests itself
by processes; now Chomsky chooses to study a static idealization: competence;
therefore Chomsky cannot account for analogical dynamics.
Itkonen then gives a definition of analogy which is very close to that of Gentner, that is,
a structure mapping. He sees three classes of application to language: a) extra-linguistic
reality (bird : fish :: wing : fins :: feathers : scales), b) analogy with iconic rooting (thing
: action :: name : verb), and c) phonological analogy (like Trubetzkoy), morphological
analogy (like Varro), and syntactic analogy (like Sapir and Bloomfield).
The remainder of the paper, which is its longest part, is dedicated to analogy in syntax.
A model is built, supported by a Prolog program, which explains analogically several
phenomena the analysis of which had served to found the first generativism,
transformations in particular. This model of Itkonen will be analysed in detail below,
p. 186.
Itkonen then states what he does and does not claim: a) he does not claim to rival more fully-fledged models such as GPSG or connectionist programs; b) he requests equality of treatment: an analogical theory has the right, as generativism claims it, to take as a fact that the sentence John is too stubborn to talk to is correct and that the sentence John is too stubborn to is not; c) he does not claim to have covered language-learning and therefore also requests to take the analogical structures of utterances as a fact; d) the structure that is needed is that which explains our intuitive notion of analogy, not the structure of the 'grammatical sentence of English', so the model is a model of linguistic acts, not a model of competence; e) the model as presented covers syntax only; it is assumed that it may also be extended to morphology and to non-linguistic analogies; f) the model has been demonstrated on linguistic form alone, but it is claimed that it may apply to understanding as well; g) the model is a model of the what, not a model of the how, that is, its implementation is not plausible (it is in Prolog and it is accepted that Prolog has no relation with neural operation).
To respond to a remark made by Newmeyer on the difficulty of distinguishing good analogies from bad ones, Itkonen recalls the importance of meaning and structure (counter to the view that analogy should hold in the form only). He rejects a proposal by Kiparsky to replace 'proportional' analogy with 'optimization'.
Finally, he shows that Generative Grammar uses analogy implicitly. This meets Milner's
viewpoint which I already stated above.
For Itkonen, the consequences of this rehabilitation are: 1. analogy refutes the modular
conception of mind because there is not a module of language that would be
encapsulated with respect to extra-linguistic reality, nor versus other mental faculties; 2.
analogy refutes innateness; 3. he opts for an 'analogical' representation (à la Kosslyn) of
mental knowledge, against a digital representation (à la Pylyshyn); I do not support this view, without however adopting Pylyshyn's thesis (cf. p. 190); 4. analogy achieves the
integration of the different fields of linguistics in particular between 'core linguistics'
and 'cognitive semantics' because it operates on all levels of language; 5. counter to
Popper, a reassessment is needed of the distinction between context of discovery (where
analogy has a place) and context of justification (where analogy would have no place): it
may be logically possible to produce something out of anything, but it is not humanly
possible (this seems to me to be akin to my proximality/totality theme, cf. p. 212); 6.
Analogy has the potential to re-unite linguistics; it opens research avenues in continuity
with the tradition. This concludes the article.
The theses developed by Itkonen are, for the most part, also mine. Yet Itkonen's model, as we saw, makes explicit use of lexical categories, functional categories, and constructional categories: name, subject, and other similar categories are explicitly and literally present in the Prolog programs. This may not amount to a claim: as a claim, it is never explicitly made, and it may be viewed merely as a licence which the author took to build a model of limited ambition. In any case, the question of the possible dissolution of the categories is never raised. This falls short of the strict exemplarism and of the radical non-categoricity which are assumed in my work and will be analysed in detail below (p. 186).
2.13. Analogy profiles
In its various encounters, if one excepts, for the aforementioned reasons, the degraded two-term "analogy" (A2), analogy always establishes a proportional ratio between four terms. However, vis-à-vis the usage which is made of them, these analogies have different profiles, and not all of them interest us equally. We shall examine successively
i) a stylistic profile, ii) a systemic profile, iii) a 'repairing analogy' profile.
2.13.1. Stylistic or heuristic analogy (semantic and rhetorical)
It is a semantic profile, for philosophers, for scientists, for orators and for cognitive scientists.
Induction by analogy is opposed to deduction.
Types: Aristotle, Plato, Wittgenstein, Quine, counter orators, linguists when, like anyone else, they use analogy to make something understood or as a heuristic means.
2.13.2. Systemic analogy (form-meaning pairs constituting a system)
This is a linguist's profile; the point is to identify how ratios between forms match, or fail to match, corresponding ratios between meanings (form-meaning correspondence).
Analogy in this profile is a corollary of paradigms: paradigms are tables of analogies.
Their dimensions are grammatical categories (mostly).
Types: Aristarchos (type of analogists) against the Stoics (anomalists), Varro, Arnauld
and Lancelot, Humboldt, the Neogrammarians.
If we want to be precise, 'systemic analogy' may be taken in two meanings: a) either
systemic analogy holds only if the overt form manifests the systemic ratios, and then
analogy is opposed to anomaly, or b) a systemic analogy is considered to hold even
when the overt form does not manifest the systemic ratios, in which case analogy is
simply opposed to the absence of meaning ratios and to the sheer impossibility of building a meaningful table. Systemic analogy will be used in the latter meaning from Chap. 3 onwards.
2.13.3. Repairing analogy
This is a profile for linguists. Repairing analogy is the diachronic process whereby an anomaly in a system (which may result, for example, from phonetic change) causes the creation of a new, more regular form, a "paraplasme" of the contravening form, which most often kills the latter, thus blurring the relation to an etymon, or obscuring a previously observed analogy.
In this profile, analogy is also responsible for "popular etymology".
Types: Brugmann, Saussure (not excluding Saussure also speaking of analogy in profile
1), Meillet 1964/1922, Brunot (1961/1887, p. 70), Demarolle (1990).
This analogy holds between four terms; cf. for example Saussure 1915/1970, p. 221: honor : honōrem. It is opposed to the 'transparency of the etymon' (the latter being maintained at the expense of systemic anomaly).
This analogy is a diachronic dynamics; it is grammatical:
Analogy is grammatical in nature: it supposes the consciousness and the understanding
of a ratio uniting the forms between one another. Whereas the idea is nothing in the
phonetic phenomenon [a phonetic change which initiated an anomaly in a paradigm],
its intervention is necessary in analogy (intervention of the proportional fourth)72.
It is grammar "in the making". Saussure opens up a track (but does not pursue it further); the proposal here is to follow this track and, from there, to rebuild
morphological and syntactic productivity, without rules, and without categories.
2.14. Statics, a dynamics of change, not yet a dynamics of acts
To schematize: analogy, first envisaged as static and associated with morphological paradigms, comes to be considered as a dynamics in the 19th century, when it is endowed with a role in diachrony: it "repairs" anomalous paradigms, whether this anomaly results from phonetic change or has another cause. It is a dynamics of evolution.
A different dynamics is that of linguistic acts. Here, analogy is solicited in principle, by Bloomfield for example, without however giving rise to a precise, explanatory construction. Distributionalism proposes a systematization of analogy and broadens its scope to syntax, in a first substitute for the dynamics. But it flattens it onto an alleged essentiality: it is because such a thing commutes in general with such another thing that particular substitutions authorize occurrential productions. Distributionalism will founder precisely on the degree of credit that should be granted to this generality.
Transformational generativism complements, improves, and further systematizes the principal components of distributionalism, to give a second substitute for the dynamics: the phrase marker and the transformation marker. This prolongs the explanatory success without yet getting to grips with a dynamics of the acts. It "generates" the set of the possibles through the derivational and transformational process, which is not the dynamics of the linguistic processes – and does not pretend to be. What is true for generativism is also true for Optimality Theory (OT): one generates a set, possibly a large one, of 'candidate outputs', and then the output which best satisfies the constraints is elected.
72 Saussure 1915/1970, p. 226.
A static, declarative model (a static, declarative theory) is henceforth insufficient
because it does not make enough room for occurrence contexts; these are combinatorial
and the cases thus created are configurations in which so many elements may come into
play that they cannot be summarised in propositions the number of which would remain
practicable. So many such summaries have been attempted that, today, in order to get
closer to experience, it becomes necessary to compute the acts one by one.
This project can be seen as a fourth profile which is another linguist's profile: that of
productivity up to and including syntax. It is a dynamics of acts. This project contradicts
the view that restricts analogy to morphological repairing. It was formulated by
Bloomfield and Householder, for example, without being developed by them. It is
illustrated by Itkonen and a few more.
The present work is now about to heavily solicit analogy to rebuild with it operationally
a number of linguistic dynamics. It will always be analogy between four terms, the one called "A4" above (X is to Y as A is to B) – and not its degraded variety, the one called
"A2" (X is like Y).
I shall adopt a symbolic notation which is common in studies in analogy73 and which I
already used at places above. With this notation, analogy:
the cup is to Dionysus as the shield is to Ares.
becomes:
the cup : Dionysus :: the shield : Ares.
73 This convention is already attested in the Cours de Mathématiques à l'usage des gardes du Pavillon et de la Marine by M. Bézout (Paris, chez Richard, Caille et Ravier, rue Hautefeuille, 11, au coin de la rue Serpente, an VII de la République), vol. 4, p. 63, where it denotes homologies between corresponding elements of similar triangles, and elsewhere in the book; but it may be older.
Chapter 3.
Model of linguistic knowledge,
model of the dynamics of acts
The grammatical approach is upside-down as a theoretical approach (Chap. 1): it places
the products of analogy (classes, rules) first and analogical processes second. It reduces linguistic discourse to static categories which embody an "essential" similarity: the similarity is turned into properties which would be inherent to language objects themselves. Because of that, the grammatical viewpoint is not in a good position to
account for the infinite variety of linguistic acts.
It is more promising to restore analogy in its duality, as static, and as dynamic, with
solidarity between both. This leads to addressing linguistic acts first and to taking into consideration the linguistic subject (the speaker) in whom they take place.
Chap. 1 also showed that analogy holds under conditions of proximality and this theme
is the second major one to take into account. Analogy and proximality both affect both
the static side of the model (the inscriptions of the linguistic knowledge are analogical
and proximal) and its dynamic side (linguistic processes are analogical and proximal).
Chap. 2 recalled, in the history of linguistic thought, a few moments concerned with analogy which justify the reasoning of Chap. 1; it was shown that while the dynamics of language change has been well described with analogy, the analogical dynamics of the acts has hardly been postulated.
Analogy is thus doubly ambiguous. First it is static and dynamic: beside a Platonic
analogy (between some terms, analogical ratios are to be found) we now have to envisage a dynamic one (analogy motivates new forms and facts on the basis of older ones). Secondly, analogy is also ambiguous because it underdetermines the ratios between its terms (it 'elides' the predicates) and it underdetermines the motivation of new facts on the basis of old ones (novelties are not linked to precedents by a relation of necessity). Contrasting with theories which put all their effort in desperately striving to
make these predicates explicit and would like to view motivation as necessary, this
thesis accepts this double ambiguity and takes account of it the best possible way.
This chapter, which constitutes the centre of this work:
i) it details the conditions of the enterprise by resuming, detailing and complementing the themes of Chap. 1, and by defining the conditions of a dynamic, concrete model; then
ii) it defines the model on its static side and on its dynamic side.
The definition leaves several details elided; they are provided in the appendices, in order to show sooner the application of the model to structural productivity (Chap. 4), then to systemic productivity (Chap. 5), and in order also to show the reconstruction in the model of some notions of grammar (Chap. 6).
3.1. Towards a concrete model
3.1.1. Language, linguistic knowledge, conjoining statics and dynamics
Chap. 1 provided a first definition of the object: the object is not language in general;
neither is it French, Swedish, Wolof, etc. which are a matter as much for sociology and
history as they are for linguistics. The object is centred on the individual speaker –
which leaves open the possibility of envisaging assemblies of speakers and occasions of interlocution between them, but the model of the latter does not assume a central quasi-normative object (which French, Swedish, Wolof, etc. would be).
However, having identified the speaker as a focal object, two distinct attitudes are still
possible. They will be named 'disjunctive' and 'conjunctive' for clarity.
The first one, disjunctive, is that of generativism. This theory names I-language74,
whatever is held to account for the status in which a speaker currently happens to be,
linguistically speaking, at this moment of his history; and in the same movement, it
assigns to this I-language, as a theoretical postulation, the mission of "generating the
infinitely many expressions" of which the subject is assumed to be capable; that is, to
define in a static way what previous states of the theory called the speaker's
'competence'. It is a 'procedure' which has exactly that purpose. This procedure is not a
dynamics; it does not aim to say anything particular about the dynamics of emission or
reception. Being procedural, it looks dynamic, but we must not be mistaken: it is a
means to state statically the closure of the possible in the language, and this cannot be
done practically by simply using propositions. I call this approach 'disjunctive' because
it disjoins the characterization of the possible in a language from the language
dynamics; it makes it a prerequisite and a separate enterprise.
The second attitude to approach linguistic manifestations in their phenomenology is that
defended in this work; it is 'conjunctive' in the sense that, in order to linguistically
characterize the speaker, one acknowledges the dynamics from the start. One does not
seek a previous and separate characterization of grammaticality or acceptability. One
does not try to circumscribe a constituted, static knowledge (even cast into a generative
'process') by dissociating it from its mobilization in emission, in reception, and in the
dynamics of language learning. In other words, there is no attempt to make a grammar, no need to separate langue and parole, competence and performance.
74 Suppose Peter's FL (faculty of language) is in state L. We may then say that Peter has (speaks, understands, …) the language L. Here, the term 'language' is used in a technical sense: call L an I-language – the letter I to suggest internal and individual, and also intensional [sic, intensional with an s!?], in that L is a specific procedure that generates infinitely many expressions of L. Chomsky 2000, p. 169.
The conjunctive approach is desirable for the following reasons:
a) An approach to the effects of the dynamics which is static only is deemed an impoverishment because it entails a loss of adequacy and makes the task more complex.
b) The static description of the effects of the dynamics does not help to elucidate
their mechanism.
c) Nothing proves the feasibility of the definition of the closure of an I-language.
d) We should not ask such questions as "what should a language be in order to be learnable" as long as it is not established that what the speaker learns is a language. Now a speaker does not learn a language; he/she learns how to speak, which is not the same thing.
e) It is conjectured that mental processes are dependent on conditions and
phenomena of "access", they also benefit from them, and they can be fully
understood with them only. Now a theory which is static only, cannot take
accesses into account.
f) Many complexities are daughters of disjunction. Conjoining statics and dynamics should yield something simpler.
It is appropriate now to prevent a possible misunderstanding. In the remainder of this
chapter, I shall define the static side of the model (it will be called 'plexus'); a plexus is
static and could be understood as the linguistic knowledge; however, it is not the analog
of the I-language of the Minimalist Programme which, on its own, is supposed to
characterize the speaker linguistically. A plexus does not achieve that on its own;
without the dynamics it has no identifiable import at all, and cannot be validated or falsified. A plexus only acquires the value that the dynamics confer on it. It would
therefore be erroneous to view a plexus alone as the linguistic knowledge of a speaker.
The conjunctive approach then ceases to attribute a focal and antecedent status to a
'language', even understood as a speaker's own or internal language. What is done is no
longer a grammar. However, this route does not invalidate another scientific effort (a
disjunctive approach) which takes a language as its object; it remains legitimate and
may produce interesting generalizations and propositions out of reach of the conjunctive
approach; but it cannot be expected to help much in establishing the operative causal chains of linguistic phenomena. This will be addressed further in the conclusions.
3.1.2. Refusal of abstractions, occurrences, exemplars
The more we study language, the more we become imbued with the fact that everything in language is history, that is, that it is an object for historical analysis, and not for abstract analysis, that it is made up of facts, and not of laws, that all that seems organic in language is actually contingent and completely accidental.
Saussure 2002, p. 149 (1st conf., Univ. of Geneva, 1891).
[…] a world of signs which underwent a mutation in the Renaissance and has been
turned into a world occupied by particular, isolated facts, which may yet serve as
positive evidence of future particular facts.
Hacking 1975/2002, p. 22.
Abstractions being refuted (Chap. 1) and analogy taking place between concrete terms,
linguistic knowledge must have concrete inscriptions as its basis. Ideally (this work does
not reach this ideal), one should understand "concrete" to mean occurrences that are
dated and attached to the situational context in which they happen. The conjecture is
that the ultimate understanding of the mechanisms of meaning requires taking things up
to that point.
This ambition in principle is, within this thesis, restricted to exemplars. Exemplars are
units of the linguistic form which are detached from a situational context, but which are
attached to a formal context (in French, some say 'cotexte' in this case). About
exemplars vs. occurrences, cf. p. 211. The clause above, specifying that the contexts to which exemplars are attached are restricted to formal ones, is bound to be relaxed with the introduction of private terms, which is planned for the future.
The exemplarity of the inscriptions prescribes that the linguistic units have a value
exactly for themselves and through the exemplarist ratios which they establish with one
another. The occurrences (they are contextualized though) of linguistic units in the
analogies serve the linguistic dynamics without having to be relayed by any categorical
abstraction, descriptive rule or operative rule. The model encompasses no class, it is
entirely flat.
The exemplarity of the inscriptions goes along with the exemplarity of the processes: the
assessment of similarity, that is, the calling up of terms similar to a given one, is carried
out on demand, guided and commanded by the exemplarist terms involved in a defined
linguistic act. Then, names and verbs not being reified, the categorical status of Fr. rire,
as a name or as an infinitive, because it remains descriptively unsettled, does not
become an obstacle to productivity.
Such radical non-categoricity is adopted as a research posture; the point is to see up to
where it can be sustained. It is tempered by the conjecture that, between a model with
abstractions, and one which is completely flat, without abstractions, as that of this
thesis, the neurons implement something intermediate (cf. p.268).
The model being strictly concrete, a particular requirement bears on its design: it has to be integrative. Each of the base inscriptions being less powerful than categories and rules, their number has to be larger than in a model comprising a lexicon with categories and containing rules. Let us see now why exemplars have to be sparse and heterogeneous.
3.1.3. Integrating sparse, heterogeneous data
Speaking subjects, and the learning subjects in the first place, are not provided with data
that are complete or homogeneous. They have all the time to integrate data that are
sparse and heterogeneous.
The idea of sparse data amounts to considering that the data which are available are
incomplete in the space which would be that of their totality – this theme of totality will
be reviewed and criticized below (p. 212) and I will show how it comes with theories
that are categorial and regularist. A 'space', in a usage of this word which is certainly
metaphorical, is supposed, and the available data populate it partially only. For example,
the complete paradigm of the French verb may consist of 500 000 forms75 if one builds
it systematically, and the availability is restricted to five thousand of these.
In addition, these data are heterogeneous if they do not appear uniform along any
particular criterion, if they do not appear as classified or systematized. For example, for
one verb, forms are available at a given tense for all persons; for another verb, forms are
available at the third person singular in several tenses. Here, the reader's complicity is requested for the usage of 'verb', 'tense', and 'person': this saves us forty lines of exemplars – remember Householder supra – which are rewarding neither to write nor to read, but it must remain clear that these words of metalanguage are foreign to
proposed model.
The insistence on integrating sparse, heterogeneous data is rooted in the fact that the
speaking subject, when he learns, but also when he operates, does not have the option: he must make do with sparse and heterogeneous data. Experience never shows up as a methodical teacher and one must always make do with what is available, fragmentary as it may be. The subject must be efficient without a complete system, with at best some
systematizations here and there, partial and contingent. A conjecture goes even further:
the systematization of experience always remains marked by exemplarism; it never really substitutes abstractions for exemplars and occurrences. Abstractions may come
later, at another time, that of conscious elaboration, of reflexive work, and of science,
but abstractions are not a prerequisite for the subject to become linguistically
productive.
Things being so, efficiency and productivity require integrative mechanisms that
potentiate sparse and heterogeneous data despite their scarcity and their heterogeneity.
Various data, each of little individual consequence, must be made to play together in multiple ways, yielding joint effects which acquire more interest. The model must
show how potentiation happens.
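To give a minimal, concrete illustration of what 'potentiation' means here – this is a toy sketch, not the plexus mechanism itself, and the French verb forms and the helper function are assumptions of mine – consider three isolated forms: two forms of one verb and a single form of another. Chained through a four-term proportion, they motivate a form that was never given.

    def solve(a, b, c):
        """Return X such that a : b :: c : X, by transferring the formal
        difference between a and b onto c (longest-common-prefix heuristic)."""
        i = 0
        while i < min(len(a), len(b)) and a[i] == b[i]:
            i += 1
        residue_a, residue_b = a[i:], b[i:]
        if c.endswith(residue_a):
            return c[:len(c) - len(residue_a)] + residue_b
        return None

    # Sparse, heterogeneous data: two forms of 'chanter', only one of 'parler'.
    known = {"chante", "chantait", "parle"}

    # The intra-verb ratio chante : chantait, applied to parle (which is paired
    # with chante by a second, cross-verb ratio), motivates an unseen form.
    new_form = solve("chante", "chantait", "parle")
    print(new_form, new_form in known)   # -> parlait False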
3.1.4. From categories to similarity
Categories not being reified, the question arises of what will replace them, which leads
us to the antecedent question of why we had categories. They were used to rule the possibilities of legal, grammatical assemblies in a language – this in general, by bounding with propositions (or with derivations and transformations) the possible in a language, that is, competence. The present work modifies that aim: it is not the
bounding in general of the possible in a language which is sought, rather, as will be
shown in detail below, how one or a few of the available precedents can be picked up to
motivate a new form. This takes place occurrence per occurrence, the base of this
picking up is similarity, and its principle is an abductive heuristics.
The productive dynamics is supported by a heuristic computation which encompasses a)
suggestions of similarity and b) settlements which are assessments of coincidences
(among the suggested similarities some are felicitous and some other ones are not). The
abductive dynamics thus repeatedly poses questions of similarity.
75 Five thousand verbs with about one hundred conjugated forms each.
But was not the logic of categories already one of similarity? Are we really making
progress? We are, because while the categories are an attempt to apprehend similarity in
general and a priori, it now suffices to apprehend it occurrentially and therefore in
concrete cases. Similarity itself becomes exemplarist. The categories disappear, even
refitted as categorial lattices with multiple inheritance (construction grammars,
Fillmore, Goldberg, Jackendoff, etc.) since the questions to which they answered in
general and a priori, can now be posed occurrentially.
Apprehending similarity in general and a priori poses the question of the closure within
which similarity is to be defined. “In general” for sure but a generality of what
perimeter? The question is inescapable as the perimeter cannot encompass all languages,
all the states of a same language, all idiolects, all the variation. The answers are varied.
For a generativist, this closure is a language (an I-language). How is it defined? One
introspects oneself and the judgments coincide … or vary, compromising then the
agreement about the I-language of which an account is sought. For a corpus linguist,
this closure is a corpus. What is its content? It depends on the aim that is pursued.
Remember the finding of corpus linguists that the grammar extracted from a corpus
degrades when the size and variation of the corpus increase (cf. p. 255); so does
polysemy in it. One tries to keep these inconveniences within bounds by associating textual productions with a notion of textual genre. Here, the domain within which similarity is defined is the linguistic knowledge of a single speaker; this is a first upper bound on its scope, and we shall see another one.
The item for which similar ones are wanted may be a term: a term being given, find
another one or a few others which are similar. We shall see that this argument may also – and this is better – be a pair of terms. The difference between the two cases is important because it affects the precision of the device; it will be explained in due time.
"Similarity" still remains very loose: two terms (even two pairs of terms) may have
different titles of similarity. Below we shall see what dispositions apply to help separate different titles of similarity, still without reifying categories.
Similarity can be envisaged statically: how does one know that exemplars are similar or
not, disregarding time and context? We shall see that this question has no true answer in
the model but it is not really relevant in it: decontextualized inscriptions are impossible
in it and the dynamics do not require them.
Or similarity may be envisaged dynamically: in the course of an act, with the concrete
and instantaneous determinations attached to it, with the precise aim of the process or
sub-process in question, which exemplars have in the past, at the same position as the current argument, contributed with most success to an already accomplished act? This will be implemented below (p. 95) by a mechanism called 'similarity suggestion'. Not all the suggestions thus made turn out to be good; suggestion is therefore followed by a complementary mechanism: settlement.
The suggestion mechanism is based on the fact that, in linguistic knowledge, some
inscriptions are proximal to one another and others are less so, which guides the
mechanism in the suggestions it makes. Proximality, already acknowledged as a
necessity in the introduction, will be defined below within the modelling apparatus
which it serves.
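The following sketch is a deliberately crude stand-in for 'similarity suggestion', offered only to fix ideas: given a pair of exemplars, it suggests the pairs that have already stood opposite it in inscribed analogies, without consulting any category. The actual mechanism defined on p. 95 also relies on proximality and on the past success of suggestions, neither of which is modelled here; the data and function names are my own assumptions.

    # Analogies already inscribed; each is four exemplars (tenor pair, vehicle pair).
    # The content is illustrative only.
    analogies = [
        (("un", "un soir"), ("le", "le jour")),
        (("un", "un soir"), ("une", "une nuit")),
        (("le", "le jour"), ("la", "la nuit")),
    ]

    def suggest(pair):
        """Suggest exemplar pairs 'similar' to `pair`, occurrentially: a pair is
        suggested if it has already stood opposite `pair` in some inscribed
        analogy. No category is consulted; only past copositionings are."""
        out = []
        for tenor, vehicle in analogies:
            if tenor == pair:
                out.append(vehicle)
            elif vehicle == pair:
                out.append(tenor)
        return out

    print(suggest(("un", "un soir")))   # -> [('le', 'le jour'), ('une', 'une nuit')]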
3.2. A speaker’s linguistic knowledge as a plexus
3.2.1. Static model and dynamic model
In linguistics, manifestations of contingency of all sorts, showing up everywhere, suggest that 'language objects' ultimately have value only through their use in the dynamics.
This idea extends to questioning their very 'existence'; it even questions the legitimacy
of postulating them as monadic, static beings that would be antecedent to the dynamics
which they are expected to support. This is a strong push to look for a model which
would intimately integrate statics and dynamics.
However, to proceed in this direction, intellectual landmarks and generic models are
lacking. In linguistics, but also in other fields, except perhaps quantum physics, theories and models always separate a static vision from a dynamic one.
Artificial intelligence was, in the 1970s, the site of a debate over whether or not we should
encode the utilization of knowledge [procedural representation] rather than the
knowledge itself [declarative representation]. The debate was concluded in favour of a
declarative representation. This was because a procedure presents the inconvenience
that it mixes up that which is general (the inferential algorithmics) with that which is
specific to the represented knowledge, whence a loss of readability and increased
difficulties in the tests and in the ensuing modifications76.
Moreover, in linguistics particularly, something common must be available to serve acts
of emission and acts of reception: the utterable and the receivable entertain a strong
coupling, even if their domains do not coincide. The model must therefore be
'bidirectional'77: something is needed which is not entirely committed to the dynamics of emission nor to that of reception. This necessary distance from each of the two dynamics leads to accepting a central object which can only be static.
Finally, for learning dynamics, it is hard to adopt a model differing from a succession of
states between which the transitions that constitute the linguistic events modify the
previous state, giving the successor state.
So, for three reasons, the proposed model makes a separation between statics and dynamics, which is considered a second-best option, a theoretical tier with some potential for improvement in this respect. The overall model thus postulates a static
model and a dynamic model; they have many relations and interdependencies but are
nevertheless distinct: one could be replaced while conserving the other.
This statics-dynamics separation notwithstanding, the vision of the 'language objects' is
still highly affected: the assumption of the vacuity of the terms (infra) represents a
significant step in the direction of a possible merging because, the static knowledge being much leaner, the dynamics are called in to reveal what other analyses would take as 'properties' of the terms.
76 Kayser, in Houdé 1998, p. 349.
77 Lamb 2000, p. 108.
3.2.2. The plexus is the static side of a speaker’s knowledge
The static knowledge compensates for the vacuity of the terms with rich exemplarist relations
or rather copositionings78 between the terms; this makes the static knowledge a network.
It receives the name plexus to stress its meshing79.
A plexus is a model which approximates the static side of the linguistic knowledge of a speaker under the assumption of radical non-categoricity. It is constituted of exemplarist inscriptions, the meshing of which makes them something very different from a lexicon. Neither is it a semantic network: a semantic network encompasses essential properties attached to its nodes, and the nodes have relations among them. In a plexus, as we shall see, terms are empty and what structures them are not relations but copositionings80.
A plexus is an indirect observable: it has value through its consequences only, when
used by the dynamics.
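As a rough illustration of what such a structure could look like in code – a minimal sketch under my own assumptions (the class and method names are invented), not the implementation used in this work – the essential point is that terms carry nothing at all: everything a plexus 'knows' about a term resides in the copositionings in which that term takes part.

    class Plexus:
        """Toy approximation of a plexus: bare terms plus analogy inscriptions.

        Terms carry no categories, features or definitions; they acquire value
        only through the copositionings (here, four-term analogies) in which
        they take part."""

        def __init__(self):
            self.terms = set()     # bare exemplar identifiers, nothing more
            self.analogies = []    # each entry: ((a, b), (c, d)) for a : b :: c : d

        def inscribe(self, a, b, c, d):
            """Inscribe the analogy a : b :: c : d."""
            self.terms.update((a, b, c, d))
            self.analogies.append(((a, b), (c, d)))

        def copositionings(self, term):
            """Everything the plexus 'knows' about a term: the analogies in
            which it is copositioned with other terms."""
            return [an for an in self.analogies if term in an[0] + an[1]]

    p = Plexus()
    p.inscribe("élu", "élue", "maître", "maîtresse")
    p.inscribe("un", "un soir", "le", "le jour")
    print(p.copositionings("élu"))   # -> [(('élu', 'élue'), ('maître', 'maîtresse'))]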
3.2.3. Mode of constitution of a plexus
A plexus is not straightforwardly available anywhere. A structure of copositionings does
not present itself as a given. In contrast with the project – Harrissean, for example – of having a grammar emerge from a corpus without calling on subjectivity, a plexus cannot (currently, and perhaps for some time) be usefully obtained from a corpus, principally because meaning is difficult to apprehend in a corpus; but there are other reasons, and the complete argument is made on p. 255.
A plexus may be elaborated, tested, improved, and finally validated by a human author
(the 'descriptor') who introduces into it his own sensitivity as a subject of the language, or the sensitivity which he takes to be that of informants, for example.
There is no other discovery procedure. The approach is similar to that of a descriptor of
a language unknown to him and remote from his own: collecting 'facts' is easy, but it is
not simple to decide what constitutes a fact or what motivates its pertinence. The difficulty then lies in knowing which contrasts between which facts are granted what role in the elaboration.
78 The notion 'copositioning' is preferred to the notion 'relation', for reasons that will be explained.
79 The word "plexus" denotes a meshing but it is also a tribute: "La loi tout à fait finale du langage est qu'il n'y a jamais rien qui puisse résider dans un terme (par suite directe de ce que les symboles linguistiques sont sans relation avec ce qu'ils doivent désigner), donc que a est impuissant à rien désigner sans le secours de b (et n'est puissant de plus qu'en tant que b' lui crée de la valeur et réciproquement de sorte qu'il n'y a plus que des différences), celui-ci de même sans le secours de a; que tous deux ne valent donc que par leur réciproque différence ou qu'aucune ne vaut même par une partie quelconque de soi autrement que par ce même plexus de différences éternellement négatives". F. de Saussure, private papers CLG/E (I), p. 265, N 10, number 1906, quoted by Fehr 2000, p. 139, also present in Saussure 2002, p. 219.
80 As soon as it ceases to be ridiculously small – when too small it has no linguistic significance – a plexus is absolutely impossible to represent as a text or graphically. The principles of plexus structuration are going to be built step by step and illustrated with examples. Meaningful plexus excerpts are to be found in chapters 4 and 5.
On this question, the initial position is not different from that of the generativists:
introspection (of oneself or of an informant) is what provides judgments. The difference
is that the process does not produce the same final output.
Much in the same way as the description of an unknown language or as the making of a
generative grammar, the elaboration of a plexus is exposed to the risk of preconceptions which a subsequent validation, if well conducted, may reveal and prompt one to correct.
The approach is supported by a computer implementation which is indispensable. For a
generative grammar consisting of thirty categories and forty derivational rules, manual
proof tests can be envisaged. Manual proof testing is still possible, but harder, in a
model like HPSG. But for a large set of meshed exemplars like a plexus, this is no longer possible at all. All the less so as, the 'properties' of the terms not being reified and being revealed only by the dynamics through indirect effects, the production of the smallest result involves inscriptions and elementary computation steps by the hundreds. The computer implementation is thus indispensable to the dynamic validation of the model, but it also assists in the already heavy task of just writing the plexus. It does so by facilitating the inspection of its content in this or that domain, by enforcing formal
correctness and coherence conditions, etc.
The burden of plexus writing leads to the idea that, from an initial state which would be
built manually, the plexus might complement and improve itself by self-analysis. This
would leverage its productive power. The question is mentioned here because it is
important but cannot yet be developed, cf. p. 258.
3.2.4. French plexus, English plexus
The computation examples which are about to be produced in this work are based on a
French plexus (this is simpler for a French author and French readers) and on an English
one (for particularities of English such as the ditransitive construction or the
construction with postponed preposition). The English plexus is a small sample of
language and the French one is larger (about 2000 terms). Other tests were made on
Basque and Japanese but, up to the point to which they were taken, they did not bring up anything that could not be shown with a language more familiar to most readers, so they will not be used in the text. More details of this sort are provided on p. 310.
3.2.5. 'Inscriptions', not 'representations'
When defining a linguistic plexus, the matter at stake is indeed 'representation' as it is
presented in the theory of knowledge and subsequently in cognitive science, the
representation which is deemed to be at the heart of cognitive science81. The question of
representation is central in cognitive science in general and in linguistics in particular.
The word 'representation' itself has the important inconvenience of sounding transitive:
it suggests the representation of something. Something to be represented would impose itself by its obviousness and the duty would be to represent it. In case of a problem, the representation would be imperfect and one would be led to refine and adjust it, but the thing to be represented would retain its obviousness and stay untouched.
81 Gardner 1987/1993, p. 436.
The debate is not the philosophical one between realism and nominalism; it opposes
representationalism to non-representationalism. Between high-level observables and
physiology, representationalism postulates a 'representation level' which explains the observables. Some authors think in addition that the representations of this level should some day be explained by physiology, but the latter ambition is not deemed necessary by all. In contrast, non-representationalism denies such a representation level. The representationalist position is dominant and old. Non-representationalism is a minority position and more recent (it was Wittgenstein's position though), but it is embodied only in scattered attempts and representationalism turns out to be very difficult to overcome.
Note that in linguistics the point is even more critical: if the correspondence between
'representations' and their alleged objects is here as great a question as in any other field,
the status of objects is in linguistics still much less assured; this is even truer if one
accepts that metalanguage should be expelled.
When trying not to incur the connotation 'representation of something', one cannot propose simply to evacuate 'representations', which would amount to a caricature of behaviourism: the subject's history must leave some trace to be reused to make for novelty; some intermediary is indispensable between the stimulus and the response. The
hope is that it is possible to say something about it without delving down into the
physico-chemical level; a certain amount of mentalism is necessary. This hope may be
vain ultimately, but approximations are possible.
'Inscription' sounds better than 'representation' because it is less transitive. With
'inscription', the push to wonder "inscription of what" is lesser. Therefore I shall write
'inscription' and not 'representation'.
Inscriptions are not countable and must not be viewed as monadic entities. It is not
possible to just add one or to just delete one because of their inherently meshed
character, which is a corollary of the impossibility to make decontextualized inscriptions
(cf. p. 77). Rigorously then, 'inscription' should be made a mass name: 'some of
inscription', 'a little of inscription', 'the quantity of inscription increases'. I shall not do it
but it is important to understand well what is said.
So inscriptions are made, but without considering that they 'represent' linguistic
knowledge: the inscriptions are the model of (the static side of) a speaker's linguistic
knowledge; they approximate it and do not represent anything. At best, they are
collectively its analogue in a model. This position shares something with that recently
adopted by Jackendoff82. About the following statement by Chomsky83:
A child who has learned a language has developed an internal representation of a
system of rules that determine how sentences have to be formed, used, and understood.
Jackendoff writes:
Chomsky's phrase "has developed an internal representation of a system of rules" is
better expressed as "has internally developed a system of rules". The rules are not
represented in the learner's mind. They are just there.
82 Jackendoff 2002, p. 68.
83 Chomsky 1965, p. 25.
But of course, it is not of rules that I say that "they are just there" (and not represented),
but of terms, of analogies, and of the links among them. They are simply inscribed in the
plexus and do not 'represent' anything other than themselves. These inscriptions are not to be judged on whether or not they are adequate to represent their supposed 'objects' but on whether the dynamics they support are productive in the way human speakers are.
From now on, but for quotations, the word "representation" will not appear in the text.
3.3. Anatomy of analogy
3.3.1. Three classes of analogy
Aristotle's analogy
cup : Dionysus :: shield : Ares,
Varro's morphological analogy, and the analogy postulated by Bloomfield as the base of
syntactic productivity84, all three establish similarities of differences between four terms
but they do not do that exactly in the same manner. The table below proposes the
definition of three classes of analogy. The three classes command the static treatment
and the dynamic treatment of analogy in the model. Below (p. 67), the table will be
complemented by the modes of inscription in the plexus, then (p. 88) by the abductive
movements which apply to each class.
Class                                        | Examples                                   | Place in grammars
Systemic non structural analogy (class A)    | la : le :: une : un                        | Paradigms without overt manifestation
                                             | élu : élue :: maître : maîtresse           |
                                             | soigneux : avec soin :: rapide : vite      |
                                             | happiness : happy :: beauty : beautiful    |
Structural non systemic analogy (class C)    | un : un soir :: le : le jour               | Syntax
                                             | soir : un soir :: jour : le jour           |
Structural and systemic analogy (class AC)   | lawful : unlawful :: honest : dishonest    | Paradigms with overt manifestation
                                             | un : unlawful :: dis : dishonest           |
Table 3 Three classes of analogies
84 All three are A4 analogies (not A2 analogies), this is clear now and will not be mentioned again.
3.3.1.1. Class A, systemic analogy
Class A (A as in analogy) is systemic non structural analogy85. Systemic analogy sanctions a similarity of differences between four terms (whether or not it is visible in the form).
This supposes, between the pairs, a similarity of meaning ratios. A formula A : B :: C : D in which a similarity of meaning ratios does not hold is not an analogy.
'Meaning ratio' does not subsume the notion 'meaning'. 'Similarity of meaning ratios'
does not even subsume the notion 'meaning ratio'. This model posits 'similarity of
meaning ratios', it does not posit 'meaning' or 'meaning ratio'.
This leads to a new aspect of the notion of contextuality (cf. p. 77) which, in turn, can
reassure us about the clause "a systemic analogy assumes a similarity of meaning ratios
between its pairs". One might in effect object as follows: what about polysemy and
ambiguity, if one of the terms A, B, C or D has several meanings (this is the general
case if we comprise extensions and metaphorical meanings) a systemic analogy may
hold for some of the meanings and not for all of them. We need to understand that a
systemic analogy most often selects some of the meanings, extensions or acceptations.
More seldom can it cope with several of them; this is rare because seldom do four terms
together have compatible extensions or metaphorical uses (that is, extensions or uses
which may get involved by four in an analogy). All this explanation is made in using the
words "meaning", "proper meaning", "extension", etc. although they do not belong to
the model and their usefulness will be firmly denied (infra) but I find no other way to
put it, this is our shared culture, and even those of us who put these notions into doubt
understand what is meant. Only when the model will be complemented in the direction
of meaning, will it be possible to write more rigorously and more clearly. In the
meantime, it is not possible either to say nothing, because an analogy which would be
formal only has no interest in linguistics, it has only that of being available to lend itself
to a game of meaning if speakers eventually start playing such a game.
Systemic analogy, as just defined, plays an important role in the explanation of
"systemic productivity" and in the learnability of pluridimensional systems (Chap. 5).
3.3.1.2. Class C, structural analogy
Class C (C as in concatenative construction) is structural analogy. Structural analogy is a
structural mapping between parts of a whole and parts of another whole, such that the
part-whole relations are perceived as the same in both cases. It is indeed the mapping of
parts and so the formula below:
un : un soir :: le : le jour
has to be understood as an ellipsis of:
un (as a part of un soir) : un soir :: le (as a part of le jour) : le jour
85 In this work, I use 'systemic' and 'structural' in the precise meanings specified in this section. For those two words, very overloaded and used with much confusion – a clear distinction can be found in Paveau (2003, p. 83-84), but it is not the one adopted here – the reader will kindly accept, in this dissertation, the precise meanings proposed here. These conventions are local to this work and do not pretend to be general statements about 'structure' or 'system'.
and not as:
un (generally) : un soir :: le (generally) : le jour.
The terms play analogically as parts, and not for themselves.
Structural analogy thus subsumes a mereology. For linguistic form, it supposes segmentation. This does not incur the assumption of constituency, that is, the assumption of constituents that would be univocal or essential: constituents are constituents only because they result from a segmentation on this occasion (and perhaps a few others) but, i) for a given form, several segmentations are concurrently possible on the same occasion, and ii) the same form may lend itself to different segmentations on different occasions (cf. p. 201), even if, most often, a form will be segmented in one way only.
To denote the ratio between un soir and le jour, the abridged formula below will also be
used:
un + soir = un soir :: le + jour = le jour.
or, even briefer:
un + soir :: le + jour.
In this formula, the + sign is concatenation as far as linguistic form is concerned, but it
can be interpreted differently depending on the type of mereology in question:
planets + sun = solar system :: electrons + nucleus = atom.
Structural analogy is not limited to two constituents; a statement of the reasons for not limiting oneself to binary assemblies can be found in an appendix.
Between its left part and its right part, a structural analogy assumes a ratio of meaning.
Thus:
John + is + easy to please :: John + is + eager to please
is not an analogy. This particular case will be heavily solicited below.
Likewise:
Gaule + isme = gaullisme :: France + isme = franquisme
is not an analogy (even let alone morpho-phonology). It is a mapping which is formal
only, similar to one which initiates popular etymology or reanalysis, but it does not
suffice. A reanalysis succeeds because it goes along with a constitutable meaning,
compatible with the preceding one, or with only a small difference. In the example
gaullisme-franquisme, the subject who would be ignorant of politics and would ignore
who de Gaulle and Franco were, cannot, with the proposed analysis schema, proceed
meaningfully, if he knows for example that Franquism is related with Spain.
3.3.1.3. Class AC, structural and systemic analogy
Class AC is the case of analogies which are both structural and systemic. A structural and systemic analogy is a structural analogy such that a systemic analogy holds between the pair consisting of the assemblies and one of the pairs consisting of homologous parts.
3.3.2. Tenor, vehicle, analogy orientation
When he defines analogy (supra, Chap. 2, p. 25), Aristotle calls the second pair the vehicle. In the analogy X : Y :: A : B, the second pair, A : B, is the vehicle. Later, the first pair will be called the tenor86. Vehicle and tenor cannot in general be exchanged: the vehicle must be the more familiar. Aristotle sets this as a condition on metaphor, and it consequently also bears on the underlying analogy.
In this model, analogy orientation amounts to this: does having X : Y :: A : B as a given authorize A : B :: X : Y? If both pairs are equally familiar, the answer is yes. Otherwise, the transposition is not cognitively founded and it should not happen.
The model recognizes analogy orientation and grants it a great cognitive significance
with consequences in the statics and in the dynamics. It does so through what will be
called below "familiarity orientation". Section 12.8. Familiarity orientation is entirely
dedicated to this subject.
3.3.3. Analogy "elides the predicate"
Analogy maps the vehicle and the tenor onto each other, and likewise their respective terms, regardless of the predicate which would apply between them. The cup is to Dionysus as the shield is to Ares. What is the shield to Ares? An attribute? A substitute? A symbol? A sign? This remains undecided: analogy elides the predicate87. In fact, it simply omits to require it. Accepting an analogy is accepting this: the predicate which holds between the terms of the tenor and the one which holds between the terms of the vehicle are the same. Nothing more is assumed; this similarity holds whatever the predicate may be. Its essence, its nature, its properties: none of this is necessary; the analogy may be good, operative, productive, without the subjects having to specify the predicate.
The elision of the predicate is the limit of analogy. Douay88 reminds us of an example used by Perelman and taken from Aristotle89. Iphicrates, when it was demanded that his son, who was young but tall for his age, be compelled to perform the liturgies, answered:
If we take tall children to be men, then we should decree that small men are children.
Iphicrates reveals the paralogism which sustained his opponents' argument and which is an analogy, but a false one:
tall : small :: adult : child
by differentiating the category of age from the category of size. Once age and size are thus categorically differentiated, it becomes possible to make propositions about the one or the other, and a choice must be made. This founds a legal point in the situation to which Iphicrates had to respond.
86. In French, thème (tenor) seems to appear with the translation of the "Traité de l'argumentation" of Perelman, 1958 (Françoise Douay, pers. comm.).
87. Douay 1991.
88. Ibid.
89. Rhetoric II 23.16.
The same movement that helps him convince his opponents also founds a certain rationality: it is a categorization of similarity and of difference comparable to the one which structures generalizations about sense data and gives a foundation to scientific rationality. This movement, however, does not appear (cf. Chapter 1) to provide the foundation we need to understand linguistic dynamics.
In any case, the limit of analogy just illustrated was the cause of its disrepute in the âge classique and then until the mid-twentieth century, as we saw above.
However, if the omission of the predicate is the limit of analogy, it is also its power and
its flexibility:
Nothing is, or at least, nothing is absolutely (in the linguistic domain). No term,
assuming it is perfectly right, is applicable beyond a certain sphere. The elementary
form of the judgment: "this is that" opens immediately the door to a thousand
contestations, because one needs to say in the name of what one distinguishes and binds
"this" or "that", no object being naturally bounded or given with evidence. Saussure
2002, p. 81.
Indeed they are not. It is possible to sidestep this infirmity of equativity, and it is less risky to say:
This is to that as this other one is to that other one.
Most of the time this suffices.
3.3.4. Determination of the analogical ratio
3.3.4.1. Quasi-bijection, three terms must roughly determine the fourth one
Analogy is intermediary between full equivocity and univocity.
Caietano 1498/1987, p. 122.
A proposition of the type "X is to Y as A is to B" is not interesting if many Xi may be substituted for X. For example:
red is to adjective as house is to noun
is probably not false: both pairs are in the ratio of instance to lexical class, so that it is not absurd to bring them together; a 'similarity of differences' does occur. However, it is not a very interesting one; adjective, house and noun given together do not determine red sufficiently: pleasant, fast, and hundreds of other words would suit as well.
At the other end, asking a question like "what X is to Y as A is to B" is not interesting
either, if one cannot conceive of a possible X. Examples:
What is to man as red is to freedom?
What is to Paris as China is to Stockholm?
What is to football as the future is to the unknown?
In none of these three cases can one answer, whichever way one tries to understand them. Thus an analogy is interesting only if the fourth term is determined by the three other ones. There must be neither no answer nor too many; but there may be more than one:
In French, what X is to soigneux as vite is to rapide?
Two answers are possible: soigneusement (carefully) and avec soin (with care). Each of them makes a very acceptable analogy.
In sum, an analogy is acceptable if it is bijective or close to bijection. The term "bijective" is used although it is partly improper. Properly speaking, a mapping between two sets is bijective if it puts them in one-to-one correspondence. Such a mapping relates one term to one term, whereas the whole effort here aims precisely at criticizing the one-to-one approach in favour of a several-to-several approach. In spite of this, the term "bijective" is retained rather than coining a new one.
The bijectivity of analogies entails that the "paradigms" which are about to be defined will have to be bijective or quasi-bijective if they are to account fairly for analogies. The question will become sensitive for the constructional paradigms (below): not all constructional paradigms are quasi-bijective, and therefore not all of them are analogy-bearing.
The criterion of quasi-bijectivity proves very efficient. Please refer to section 13.4. Abductive movement by transposition (p. 321); there, we see an analogy which is surprising at first sight but in which biunivocity nevertheless shows. Even if a speaker may hesitate to accept analogies of that sort in isolation, when they are used as a step in a computation they operate perfectly because they are bijective.
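Although the dissertation itself contains no code, the quasi-bijectivity requirement can be made concrete with a small sketch; the Python rendering below, with its function name and tolerance parameter, is an illustrative assumption and not part of the model:

from collections import defaultdict

def quasi_bijective(pairs, max_images=2):
    """Check that a set of (left, right) pairs is close to one-to-one: each left term
    maps to at most max_images right terms and vice versa. max_images=2 tolerates
    cases like soigneux -> soigneusement / avec soin."""
    left_to_right, right_to_left = defaultdict(set), defaultdict(set)
    for left, right in pairs:
        left_to_right[left].add(right)
        right_to_left[right].add(left)
    return (all(len(v) <= max_images for v in left_to_right.values())
            and all(len(v) <= max_images for v in right_to_left.values()))

# The adverb example: two answers for soigneux still count as quasi-bijective.
print(quasi_bijective([("rapide", "vite"),
                       ("soigneux", "soigneusement"),
                       ("soigneux", "avec soin")]))                 # True
# 'red : adjective :: house : noun' generalized over many adjectives fails:
print(quasi_bijective([("red", "adjective"), ("pleasant", "adjective"),
                       ("fast", "adjective"), ("house", "noun")]))  # False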
3.3.4.2. Taken alone, a pair does not determine the analogical ratio
It may happen that, starting from the same pair (suis : serais in the following example), it is possible to develop several different analogies, that is, several different paradigms.
constancy and variation between pairs -> analogies / paradigms

mode constant (indicative : conditional); verbal base constant (être); tense constant (present); person variable
    -> suis : serais :: es : serais :: est : serait :: sommes : serions :: êtes : seriez :: sont : seraient

mode constant (indicative : conditional); verbal base constant (être); tense variable; person constant (1S)
    -> suis : serais :: ai été : aurais été

mode constant (indicative : conditional); verbal base variable; tense constant (present); person constant (1S)
    -> suis : serais :: ai : aurais :: vais : irais :: crois : croirais :: etc.

Table 4 Several analogies for the same pair
The table above displays, for several such paradigms, the elements that remain constant and those that vary when moving from one pair to another.
Besides the verbal paradigms of Indo-European languages, agglutinative morphologies produce phenomena90 of that sort. In the former (integrative), all dimensions are marked by a single morpheme, while in the latter (agglutinative) each dimension is marked by a separate morpheme. In this the two systems differ, but they are alike in being pluridimensional, and pluridimensionality is the condition for such tables to be possible.
In other words, a pair of terms like suis : serais does not suffice to determine what will be constant and what will vary; it does not suffice to determine the analogical ratio that commands how the rest of the paradigm can develop.
In the example above, a third term suffices to complete the determination, so that i) the (proportional) fourth term is determined, and ii) this same logic, once established, becomes the condition for more pairs to be admitted into the paradigm.
Still, a third term is not always enough to establish an analogical ratio. It is not, for example, in arithmetic analogies.
The pair 9 : 3 may be construed as multiplication by 3 or as addition of 6.
Here, a third term X : 11 :: 9 : 3 does not determine the fourth term, because this formula may still be construed as multiplication by 3 or as addition of 6, so that X = 33 and X = 17 are both possible results. Three terms in this case do not suffice to determine the ratio. So it is in arithmetic, and more generally in all ring structures in the algebraic sense. However, this latter case will not be considered further: the model is not concerned with it in the linguistic field.
In summary, we should remember that the vehicle (A : B) and the analogical ratio have to be kept conceptually distinct. Most often the two coincide, but in systems with more than two dimensions the vehicle alone does not determine the analogical ratio, and the addition of a third term completes the determination.
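Purely to illustrate the arithmetic ambiguity just described (the model itself is not concerned with it), a small sketch can enumerate the candidate readings of the pair and the resulting candidate fourth terms; the Python helpers below and their restriction to two readings are assumptions of the illustration:

def readings(target, base):
    """Possible readings of the pair target : base, restricted here to two arithmetic ones."""
    out = []
    if base != 0 and target % base == 0:
        out.append(("multiply", target // base))   # target = base * k
    out.append(("add", target - base))             # target = base + d
    return out

def fourth_terms(y, target, base):
    """Candidate X such that X : y :: target : base, one per surviving reading."""
    return {y * v if kind == "multiply" else y + v for kind, v in readings(target, base)}

# The pair 9 : 3 alone leaves the ratio ambiguous, so X : 11 :: 9 : 3 admits two answers.
print(fourth_terms(11, 9, 3))   # {33, 17}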
3.3.5. Separate analogies do not account for the continuity of the ratio
Consider now three analogies picked up from the preceding example.
(1)  suis : serais  ::  est : serait
(2)  suis : serais  ::  sommes : serions
(3)  suis : serais  ::  ai été : aurais été
Something specific takes place between analogies (1) and (2): the conservation of the analogical ratio (indicative : conditional, constant tense, constant verbal base), whereas the grammatical person varies between the pairs.
An analogical ratio is also conserved between analogies (1) and (3), but it is not the same one: (indicative : conditional, constant verbal base, constant person), the tense varying between the pairs.
90. Similar tables, still more varied, can be built for the Japanese verb.
By simply giving analogies (1), (2) and (3) separately from each other, one does not entirely reflect the conservation of the analogical ratio. A first way to be faithful to it consists of endowing the model with the notions of grammatical tense, mode and person. This is not envisaged, after the critique of categories made in Chap. 1.
A more economical way to achieve it consists of linking (1) and (2) on the one hand, and (1) and (3) on the other, while keeping (2) and (3) unlinked. This may be obtained by splitting each analogy into its constituting pairs and establishing links between the pairs:
(1)-(2)  suis : serais :: est : serait :: sommes : serions
(1)-(3)  suis : serais :: est : serait :: ai été : aurais été
Each pair is here called a record and each link a paradigmatic link, for reasons that will soon be provided. The sets of linked pairs thus formed are called paradigms; this too will be discussed.
This way of associating linguistic terms faithfully reflects the conservation of the analogical ratio – and will make the development of a computation possible – without overdoing it; in contrast with categorial theories, it does not overspecify.
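As a purely illustrative rendering of this arrangement (the dissertation specifies no implementation; the dictionary-and-frozenset encoding below is an assumption of the sketch), records and paradigmatic links can be pictured as follows:

# Records are pairs of terms; paradigmatic links connect records pairwise.
# Linking (1)-(2) and (1)-(3) while leaving (2)-(3) unlinked keeps the two
# analogical ratios distinct, without naming tense, mode or person anywhere.
records = {
    "r1": ("suis", "serais"),
    "r2": ("est", "serait"),
    "r3": ("sommes", "serions"),
    "r4": ("ai été", "aurais été"),
}

# Each frozenset is one paradigmatic link between two records (links chosen for illustration).
links = {
    frozenset({"r1", "r2"}),   # suis : serais :: est : serait
    frozenset({"r2", "r3"}),   # est : serait :: sommes : serions
    frozenset({"r1", "r4"}),   # suis : serais :: ai été : aurais été
}

def analogy_holds(r, s):
    """An analogy is read off directly from a paradigmatic link."""
    return frozenset({r, s}) in links

print(analogy_holds("r1", "r2"))  # True: suis is to serais as est is to serait
print(analogy_holds("r3", "r4"))  # False: no direct link, the ratios differ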
3.3.6. Paradigms of analogies
It seems to me that all that happened for three centuries might, if one liked to, be
summarized in this, that Descartes' adventure went wrong. Something is missing in the
"Discours de la Méthode". When one compares the "Regulae" with the "Géométrie",
one finds that a great deal is indeed missing from it. For me, here is the lack I think I find. Descartes did not find a way to prevent order, once conceived, from becoming a thing instead of an idea. Order becomes a thing, it seems to me, as soon as one makes of it a reality distinct from the terms that compose it, by expressing it with a sign. Now this is what algebra is, and has been since the beginning (since Viète). Simone Weil (1966, p. 111).
Descartes is not the first to fall into this sin: Aristotle did so before him, with the categories. Arranging analogies into paradigms, so it seems, allows one to respond to Weil's
request: it introduces some order without making it "a reality distinct from the terms that
compose it", without "expressing it with a sign".
For the pairs above, obtained by splitting analogies (1), (2) and (3), it becomes possible to say that they are paradigmatic, if one accepts a slight extension91 of the sense of the term paradigm as defined by Jakobson, who borrowed it from Donnat, and as received since then in structural linguistics. In structural linguistics, paradigmatic is opposed to syntagmatic. Here, it is opposed to something else, something which has no name and is the transition from a vehicle to a tenor, from one pair to another. We shall see below that extending the meaning of paradigm in this way is not too offensive: in the paradigms of exemplarist constructions that will be introduced, the opposition between paradigmatic and syntagmatic returns in a quite classical sense.
91. Yvon (1997), whose work will be commented on much further down in this dissertation, takes the same liberty in a model of pronunciation: The model crucially relies upon the existence of numerous paradigmatic relationships in lexical databases; for him, these paradigms are morphological analogies.
Thus "paradigm" is understood here slightly differently. In structural linguistics, terms
are not required to be associated into pairs to stand in a paradigm. Here, they are. The
benefits of this requirement will be made clearer below, and when "paradigm"
understood in this way will be extended to morphology and syntax, it will coincide
again with the classical notion.
3.4. Static model: a plexus as the inscription of analogies
The three classes of analogies being established and these clarifications being made, it is
now possible to propose a static model showing how to inscribe analogies in a plexus,
that is, how to constitute the static side of a speaker's linguistic knowledge that may
support the dynamics of linguistic acts, basing them on analogy.
3.4.1. Three classes of analogy with their method of inscription in a plexus
The table below, when first introduced p. 59, defined three classes of analogies. It is
now complemented with the modes of inscription in a plexus which apply to each class.
Class A – Systemic non structural analogy
  Examples: la : le :: une : un | soigneux : avec soin :: rapide : vite | happiness : happy :: beauty : beautiful
  Place in grammars: paradigms without overt manifestation
  Inscriptions in A-type records:
    A  la   le
    A  une  un
  Inscriptions in C-type records: systemic non structural analogy cannot be expressed in C-type records

Class C – Structural non systemic analogy
  Examples: un : un soir :: le : le jour | soir : un soir :: jour : le jour
  Place in grammars: syntax
  Inscriptions in A-type records: structural non systemic analogy cannot be expressed in A-type records
  Inscriptions in C-type records:
    C  un + soir = un soir
    C  le + jour = le jour

Class AC – Structural and systemic analogy
  Examples: élu : élue :: maître : maîtresse | lawful : unlawful :: honest : dishonest (structurally, un : unlawful :: dis : dishonest)
  Place in grammars: paradigms with overt manifestation
  Inscriptions in A-type records:
    A  élue       élu
    A  maîtresse  maître
  Inscriptions in C-type records (with A marks under the terms involved in the systemic analogy):
    C  élu + -e = élue              (A marks on élu and élue)
    C  maître + -sse = maîtresse    (A marks on maître and maîtresse)
    C  un + lawful = unlawful       (A marks on lawful and unlawful)
    C  dis + honest = dishonest     (A marks on honest and dishonest)

Table 5 Three classes of analogy with modes of inscription in a plexus
For class A analogies, the mode of inscription in a plexus is A-type records. An A-type record ("A" for "class A analogy") contains a pair of terms, and the inscription of an analogy involves two such records, one for each pair in the analogy. The two records are linked with a "paradigmatic link" (see below) in such a way that the convention:
A  la   le
A  une  un
in the table reads: "la is to le as une is to un". The dynamics that apply to a plexus, when using the records and the links, give them precisely this meaning. The convention of A-type records therefore means that their terms are just forms – they are not perceived by the model as having overt similarities (which contrasts with C-type records below) – but that a link between two such records accounts for a systemic analogy.
For class C analogies, the mode of inscription in a plexus is C-type records. A C-type record ("C" for "concatenative construction") contains, in the rightmost position, a linguistic assembly and, on the left, the constituents of the assembly – only two in the examples; we shall see below that there may be more. Thus, a C-type record contains an exemplarist assembly. The inscription of a structural analogy, as above for systemic analogy, consists of two such records linked together by a link.
So the convention in the table:
C  un + soir = un soir
C  le + jour = le jour
reads as follows:
a) un (as a part) is to un soir as le (as a part) is to le jour,
b) soir (as a part) is to un soir as jour (as a part) is to le jour.
This is how the two exemplarist constructions are similar. "Construction" is to be understood in the sense of Fillmore (1990) or of Goldberg (1995). Here, similarity encompasses two aspects: i) the records are structurally (syntactically) similar, and ii) the semantic effect of the assembly is the same between two directly linked records. The model does not go beyond similarity thus defined: just as the predicate between the shield and Ares was elided (cf. supra), there is no attempt here to make the "semantism" of this syntax explicit, no effort whatsoever to apprehend 'determination' or 'modification' with metalanguage or definitional propositions.
As for the third class of analogy, class AC, its inscription in the plexus consists of: i) modelling it as a structural-only analogy (that is, with C-type records), and then ii) writing a special mark (the A mark, written underneath in the table) below the terms which are involved in the systemic analogy. As pointed out above, one of them is necessarily the assembly, the other one being one of the constituents (we shall see that only one constituent can bear the A mark, otherwise the quasi-bijectivity rule which an analogy must satisfy to be acceptable would be infringed).
In the following example, which is picked out from the above table:
C  élu     + -e     = élue
   A                  A
C  maître  + -sse   = maîtresse
   A                  A
élue is assembled as élu + -e, maîtresse is assembled as maître + -sse, and in addition élue is to élu as maîtresse is to maître.
This modelling solution is not perfect, but I have not been able to devise a modelling device that would abstract cases A and C with homogeneity and economy. At least it is functionally adequate.
The table will be complemented again below (p. 88) with the abductive movements
which apply to each class. Four examples are now going to be used to validate this
inscription model in a variety of cases.
3.4.2. Systemic non structural analogy (A): anomalous verbs
The first example, in English92, illustrates class A analogies (systemic, non structural analogies). The principle is that the leftmost term is a preterit and the rightmost one a past participle.
An entry like went gone is a record. Edges between records are paradigmatic links. The group formed by the pairs went gone and took taken, and by the edge between them, reads as follows: "took is to taken as went is to gone".
took taken
went gone
wrote written
saw seen
gave given
began begun
flew flown
forgot forgotten
Figure 1 A paradigm which is analogical only
This paradigm tells nothing else, in particular nothing about the meaning of the terms at
play. It expresses nothing about the forms of the verbs which it contains in other
grammatical tenses. Some such data, bearing on some of these verbs or other verbs, may
be inscribed elsewhere in the plexus, in other paradigms. When they are, they are not
constrained to bear on the same verbs.
A paradigm is thus the recording of analogies exactly in the sense of Aristotle.
92. Multiple similar examples could be taken from multiple languages; this one is chosen as a matter of convenience. It will be reused and complemented in the next chapter to discuss the question of regularity-anomaly.
The formula "took is to taken as went is to gone" incurs nothing particular about what
"took" is to "taken"; in fact, the model says nothing about what "took" is to "taken":
analogy elides the predicate (cf. supra). This is the central fact which allows one to
build a model free of grammatical categories; "took is to taken as went is to gone" does
not assume the category of the preterit or that of the past participle; neither does it posit
the verbs take or go (which would be the "grammatical word" for other authors). Yet,
"took is to taken as went is to gone" is a useable datum and its integrative utilization
remains possible as this will be shown. So metalanguage is expelled because it ceases to
be necessary.
Records (took, taken) and (went, gone) are proximal: as they are remote by one link
only, one can be reached easily from the other; in this particular sample, motion terms
are proximal. Likewise, terms concerning reading and memory are proximal; in the
plexus of another speaker the configuration of proximalities might not be exactly the
same. The precise disposition of records and their organization into paradigms, that is,
what these records are, and the links between them, is subject to influences of various
orders, notably cognitive and semantic. For example, it may reflect the subject's history
and the sequence in which he learnt (cf. p. 249). This is discussed again generally in an
appendix (p. 315). About the apparent arbitrariness attached to the detail of a plexus, see
also section 3.5.2. Determinism, idiosyncrasy, normativity (p. 75).
3.4.3. Structural non systemic analogy (C): syntax
The paradigm below bears on three-constituent structures.
It expresses that, in its six records, the construction is the same: the semantic effect of
the assembly is the same between two records with a direct link (a slight drift may take
place when crossing several links one after another).
One might consider that certain and sourire ought to be assembled first, and that only then should the result of this assembly be assembled in turn with un.
However, le certain sourire, for example, or ton certain sourire seem less likely to be produced by this speaker (the one of whom this plexus is a model), although they are possible in French.
Similarly, la bonne chanson contains something that la meilleure chanson does not contain, and this difference is something other than the difference between bonne and meilleure. Consequently, the constructions differ slightly; another paradigm, which would contain la meilleure chanson and une grosse entreprise, is possible, but it should stay disconnected from this one, or the linkages should be remote and weak.
subject feels this sort of tiny difference when he structures the memory of his linguistic
experience. Such slight differences are out of reach of category-based models. Here,
proximality allows them to be accounted for easily.
C une + très légère + angoisse = une très légère angoisse
C la + dernière + fois = la dernière fois
C le + grand + jour = le grand jour
C ma + petite + entreprise = ma petite entreprise
C la + bonne + chanson = la bonne chanson
C un + certain + sourire = un certain sourire
Figure 2 Structural non systemic paradigm: syntax
The assemblies are ternary in this example. A discussion of the reasons why ternary assemblies are needed is provided on p. 371.
As for the precise significance of analogy in this paradigm, it is possible to say:
(a) chanson (in la bonne chanson) is to la bonne chanson as
sourire (in un certain sourire) is to un certain sourire
This is a mereological viewpoint: there is a structural analogy, that is, a structure mapping. But one cannot say:
(b) chanson (in general) is to la bonne chanson as
sourire (in general) is to un certain sourire.
Nor can this be said when selecting the first constituent or the second one. In other words, there is no systemic analogy in this example. This analogy is structural and non systemic.
3.4.4. Systemic and structural analogy (AC): violoniste, violoneux
To decide whether a structural analogy also comprises a systemic one, the criterion is that of bijection or quasi-bijection. If a paradigm behaves as a bijective or quasi-bijective function between the assemblies (rightmost part of the records) and the terms of one of the constituent positions, then these pairs constitute systemic analogies. One can convince oneself of the validity of this criterion by checking that it holds in all the examples given so far. Bijectivity need not be entirely strict:
     Site 1    Site 2    Site 3    Site 4
C    art       -iste               artiste
C    violon    -iste               violoniste
C    violon    -eux                violoneux
     A                             A
We should accept the pairs (Site 1 - Site 4) as systemic analogies: what an artiste is to art, a violoneux (Engl. fiddler) is to violon, much in the same way as a violoniste is.
In a record, if a constituent takes part in a systemic analogy, the other constituents cannot: only one constituent may take part in a systemic analogy with the assembly. This is a consequence of the quasi-bijection principle.
In a systemic analogy, one of the participants involved is necessarily the assembly. The other one was the first constituent in the previous example; it is the second one in the example below:
     Site 1    Site 2     Site 3    Site 4
C    in-       correct              incorrect
C    mal-      poli                 malpoli
C    a-        social               asocial
               A                    A
3.4.5. Structural and (everywhere) systemic analogy (AC): regular plural
The "analogical only" paradigm presented in the previous example features analogical
ratios which are not apparent in the form. The one presented now, if it still encompasses
that "cows is to cow as houses is to house", adds that cow plus -s assemble into cows.
This is what C-type records do (constructor records that assemble by concatenation).
Such a paradigm accounts for morphology. It can also account for syntax. Paradigm: a +
cow :: a + town :: an + idea :: etc. relates noun phrases to their constituents: noms and
defined articles.
C cow -s cows
C house -s houses
C process -es processes
C brother -s brothers
Figure 3 Systemic and (entirely) structural paradigm: plural with regular morphology
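To see the criterion of section 3.4.4 at work on this paradigm, here is a minimal sketch; the function name, the list representation of the paradigm and the tolerance parameter are assumptions of the illustration:

from collections import defaultdict

def quasi_bijective_site(paradigm, site, max_images=2):
    """Test whether constituent position `site` is in quasi-bijection with the
    assemblies over the whole paradigm (the criterion of section 3.4.4).
    paradigm is a list of (constituents, assembly) pairs."""
    to_assembly, from_assembly = defaultdict(set), defaultdict(set)
    for constituents, assembly in paradigm:
        to_assembly[constituents[site]].add(assembly)
        from_assembly[assembly].add(constituents[site])
    return (all(len(v) <= max_images for v in to_assembly.values())
            and all(len(v) <= max_images for v in from_assembly.values()))

plural = [(["cow", "-s"], "cows"),
          (["house", "-s"], "houses"),
          (["process", "-es"], "processes"),
          (["brother", "-s"], "brothers")]

# The noun site is quasi-bijective with the assemblies (cows is to cow as houses is
# to house); the suffix site is not (-s corresponds to several assemblies).
print(quasi_bijective_site(plural, site=0))  # True  -> may carry the A mark
print(quasi_bijective_site(plural, site=1))  # False -> may not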
This model makes no criterial distinction between morphology and syntax: it treats both at once, with records and paradigms of the same type, and the dynamics which use these records (which will be presented below) are not morphology-specific or syntax-specific. The cohesion of the "word" arises de facto as an overall effect of the abductive dynamics which apply to the plexus. For a plexus which is faithful to the linguistic knowledge of an English speaker, it just will not happen that anything intervenes between town and -s. This contributes to "define" "nouns" as cohesive with their affixes, and this "definition" is pervasive and de facto in the plexus of this speaker – and perhaps in that of many others with whom understanding obtains. It must be stated that this "definition" is not made propositionally anywhere in the model. The notion "word" is at best a shortcut that we, educated humans, perhaps grammarians, sometimes find useful.
3.4.6. Systemic and partially structural paradigm (A and AC)
Not all plurals have a formal manifestation by suffixation, and yet systemic analogy holds for such cases as well:
men is to man as houses is to house.
The mixed paradigm below illustrates how the model accommodates this case: C-type records and A-type records may coexist in the same paradigm, the sole condition being that, when systemic analogy applies, it applies to the entirety of the paradigm.
Henceforth, to be rigorous, the edges of the drawings should be doubled or tripled to show the detailed correspondences between the terms. The model does comprise such detail, even if the drawings remain elliptical and display one line only.
A foot feet
A ox oxen
A man men
C cow -s cows
A brother brethren
C house –s houses
C process -es processes
C brother -s brothers
Figure 4 Systemic and partially structural paradigm: plural with anomaly
3.4.7. Paradigmatic link
In section 3.3.6. Paradigms of analogies (p. 66) we concluded that there is a need to
constitute paradigms of analogies, and the examples that have just been presented
showed, according to the three classes of analogies, how this could be achieved with
records and paradigmatic links between them.
A paradigmatic link is the organic device which links together two records and thus
manifests the analogy that holds between their terms.
In order to manifest analogies in a plexus, another possibility would consist in the direct inscription of analogies, without splitting them into records to be linked thereafter. Doing so would not make it possible to manifest the continuity of the analogical ratio, the need for which was demonstrated supra, p. 65.
By contrast, splitting an analogy into two records with a link between them leaves each of them free to be linked in turn with another record (or other records) to form another analogy, the ratio of which prolongs that of the former one. This gives birth to chains of records which are the 'paradigms' of the plexus (this usage of 'paradigm' somewhat extends its classical meaning in linguistics).
A paradigmatic link may occur between two A-type records; it then manifests a systemic non-structural analogy. It may also occur between two C-type records without A marks; it then manifests a structural non-systemic analogy. It may further occur between two C-type records with A marks; it then manifests a structural and systemic analogy. It may finally occur between an A-type record and a C-type record with A marks; in this case, it manifests a systemic analogy between the terms of the A-type record and those of the C-type record that bear the A marks.
Paradigmatic links play an important role in similarity suggestion, cf. section 3.7.7. Similarity suggestion (p. 95).
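The enumeration above can be pictured, again only as an illustrative sketch with an assumed dictionary encoding of records, as a small dispatch function:

def link_kind(rec1, rec2):
    """Which class of analogy a paradigmatic link manifests, following the enumeration
    above. A record is modelled here as a dict with a 'type' key ('A' or 'C') and, for
    C-type records, an optional boolean 'a_marks' flag; this encoding is illustrative."""
    kinds = {rec1["type"], rec2["type"]}
    if kinds == {"A"}:
        return "systemic non-structural analogy"
    if kinds == {"C"}:
        if rec1.get("a_marks") and rec2.get("a_marks"):
            return "structural and systemic analogy"
        return "structural non-systemic analogy"
    # one A-type and one C-type record: the C-type record must bear A marks
    c_rec = rec1 if rec1["type"] == "C" else rec2
    return ("systemic analogy between the A-type terms and the A-marked terms"
            if c_rec.get("a_marks") else "ill-formed link")

print(link_kind({"type": "A"}, {"type": "A"}))
print(link_kind({"type": "C", "a_marks": True}, {"type": "C", "a_marks": True}))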
It has been mentioned before (p. 62) that, in general, analogies bear an orientation: one
of their pairs is more familiar than the other one and helps make it understood. The
model recognizes this by providing for an orientation of the paradigmatic link. The need
for orientation is first described in systemic analogies but it also applies to structural
ones: an exemplarist construction may, vis-à-vis another one with which it has a
structural mapping, be less familiar, less natural. It may have been learnt later, with the
help of it upon its first encounter, etc. (cf. section 12.8. Familiarity orientation, p. 303).
Finally, paradigmatic links are the means whereby proximality is implemented, which is
not the lesser of their roles.
3.5. Philosophy of the static model
3.5.1. Proximality of inscriptions
To a need is linked the idea of the thing apt to relieve it; to this idea is linked that of the place where this thing is found; to this one, that of the persons one has seen there; to this last, the ideas of the pleasures or sorrows one received there, and several others. One may even notice that as the chain extends, it subdivides into different links, so that the further one moves away from the first ring, the more the links multiply. A first fundamental idea is linked to two or three others; each of these to an equal or even greater number, and so on.
Condillac 1973, p. 126.
In Chapter 1, I suggested that, among the inscriptions of a plexus, some needed to be more proximal and others less; I further suggested that proximality would act on the dynamics by modulating their cost.
Proximality implements the intuition that two pairs which constitute an analogy, when
inscribed in a plexus, are inscribed close to one another since the analogy relates them in
a way, and the paradigmatic link is the instrument of this linkage.
After the concreteness of the model (exemplars and occurrences, not abstractions)
proximality is the second corollary of the absence of rules and categories.
'Proximality' is understood in the sense that the elements of the linguistic knowledge are
proximal when one can be reached easily from another. Proximality is that of the
inscriptions in the first place: a record has links with a limited number of other records.
In the static side of the model, it is the paradigmatic link which embodies proximality.
As inscriptions acquire value only when the dynamics grant them some value, the
proximality requirement is extended to the dynamics themselves: the dynamics must be
proximal, that is, short-sighted even if we expect them to yield final effects which are
not. The metaphor is that of the hexagonal cells in a beehive or of the regularity of crystals: there are no general rules which would globally determine the crystal or the form of the cells, yet regularization obtains, but as a consequence, not as the operating cause. The position of the rule in the theory changes: a causal status is denied to it, and it becomes an observable, moreover a contingent one.
A philosophy of proximality will be developed on p. 212, where it will be contrasted with 'totalism', a defect inherent in categorial theories which proximality allows us to overcome. It will also be shown how proximality eschews 'simple associationism', latent in the quotation from Condillac above.
Proximality is one of the levers that support the notion of cost in the model (see below
the dynamic side of the model): a unitary move from an inscription to a proximal one
has a low cost; a longer sequence of such moves has a higher cost.
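As an illustration of this cost intuition only (the breadth-first traversal, the unit cost per link and the particular links chosen are assumptions of the sketch, not commitments of the model):

from collections import deque

def link_cost(links, start, goal):
    """Cost of reaching one record from another, counted as the number of paradigmatic
    links crossed: proximal records cost little, remote ones more."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        record, cost = frontier.popleft()
        if record == goal:
            return cost
        for a, b in links:
            neighbour = b if a == record else a if b == record else None
            if neighbour is not None and neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, cost + 1))
    return None  # not reachable: no chain of paradigmatic links

# A few links of the preterit / past-participle paradigm of section 3.4.2 (the edges
# are assumed here; the figure does not specify them).
links = [("took taken", "went gone"),
         ("went gone", "saw seen"),
         ("saw seen", "flew flown")]

print(link_cost(links, "took taken", "went gone"))   # 1: proximal, cheap
print(link_cost(links, "took taken", "flew flown"))  # 3: more remote, costlier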
3.5.2. Determinism, idiosyncrasy, normativity
The structure of a plexus, that is, the precise detail of inscriptions in it, poses to the
reader – and to the descriptor before him – the question of its residual arbitrariness.
'Arbitrariness' here is not the arbitrariness of the sign (its conventionality), but rather
questions like "Why inscribe this term and not this other one; for a given term, why in
this record and not in another, why these particular paradigmatic links". Such questions
may have occurred to the reader on the occasion of the paradigms provided as examples
above. This arbitrariness is not so residual after all: once some obvious descriptive needs have been satisfied (such a term is a must, such a record is obviously less familiar than such another, etc.), a great many micro-decisions of description still remain and have to be made with no particular reason. The descriptor then makes an arbitrary choice. Ensuing tests with the dynamics generally suggest corrections, which are a way to move to a new, better motivated state of the plexus. However, even after validation and correction, the motivation is far from commanding the entire plexus detail, and a great deal of arbitrariness remains. It is important not to leave it without an interpretation.
Part of the plexus arbitrariness may be attributed to the radical exemplarist assumption made in this research: being too radically poor in its apparatus, the model is underspecified; a less radical model (cf. p. 268), which remains to be found, would be tighter and its inscriptional detail more constrained.
(Figure: three poles)
– Quasi-normative external effects (quasi-uniform speaker judgments): two plexii with varying detail accept or reject about the same forms, for detail reasons which are different
– Determinism of elementary processes (the implementation level is supposed to be deterministic)
– High variation of the inscription detail and of the dynamics (speaker's idiosyncrasy)
Figure 5 Determinism, idiosyncrasy, normativity
I propose to see the rest of the plexus arbitrariness as standing for the speaker's idiosyncrasy: a plexus, being the static side of a speaker's linguistic knowledge, bears the trace of his history, of his learning history in particular. The figure above proposes a metaphorical view of the question. It suggests that a large variation in detail is damped and yields quasi-uniform linguistic outputs (they are quasi-normative); that is, the linguistic knowledge of this speaker is French, for example, with the variation across speakers which one observes among French speakers. The damping would be accounted for, in general, by a form of stability in complex systems and, in particular, by the integrativity properties of the model defended in this dissertation (cf. section 7.4. Integrativity, p. 207 below). To put it more simply, two different plexii of French will analyse the same form about as easily (or with about equal difficulty), but each for very different detail reasons (cf. Chap. 4).
This schema reconciles three poles:
a) the high variation of inscriptional detail across speakers (and of the detail of the
dynamics) which is assumed to reflect the idiosyncrasy of speakers and the
variation of individual histories,
b) the quasi-uniformity of macroscopic effects93 and
93. About this level, Engel (1996) uses the word "normative". For example: Frege reproaches the psychologists for confusing two meanings of the word "law" when they equate logical laws with psychological laws: the "normative" sense and the "descriptive" sense. Even assuming, as he does, that logical laws are normative laws, Frege still confuses two meanings of the word "norm": a sense in which a norm describes the laws of an intelligible universe, and one in which a norm prescribes that individuals follow a certain rule. Engel 1996, p. 120.
c) the determinism of the neurophysiological processes which is assumed. The
neurophysiological processes which support the linguistic operation have to be
deterministic if we think that they belong to chemistry and therefore do not
require calling on quantum mechanics; this is a conjecture.
This model contradicts the explanation of several variation effects by means of
probabilities, cf. section 7.9. Probabilistic model or dynamic model (p. 225).
The three-pole model also affects the theme of portability and separation. For Putnam (1960), the fact that the same process may be run on different computers (or, more abstractly, that a Turing machine is a "logical" description which leaves its concrete form undetermined) leads one to envisage mental states and mental processes that can be described separately from the nervous system. This important remark is presented as likely to solve the problem of mind and body94.
It legitimates a theme which is central in cognitive science: the postulation of a representation level independent of the hardware. The three-pole model proposes a less sharp vision of this. First, it does not posit an abstract, portable object (which would be a language): the speaker's productions are quasi-normative, not normative. Secondly, idiosyncrasy (bottom right pole) is both the variant result of an individual history and a dependency on the "hardware". Separation could then take place only at the expense of an abstraction (the postulation of a language) which we are trying to avoid. If one posits the possibility of a separation, one can provide neither an explanation of variation nor a working explanation of learning, because it cuts the model off from the concrete dynamics of the acts.
3.5.3. Contextuality and mutual contingency
Contextuality
The inscriptions in a plexus are contextual right from the start: it is not possible to make
a decontextualized inscription as are those in a lexicon for example.
Inscriptions are constitutionally interdependent. After reading Saussure, consequences
are drawn: if signs have value only with respect to one another, then, inscriptions that
would be autonomous and self-standing (between which "relations" would then have to
be made) or lexical entries (to which "properties" would then have to be attributed) must
be avoided. This puts us in a better position for terms to get their value from their
"eternally negative mutual differences".
Inscriptions must be contextual because i) decontextualization creates ambiguity95 and
ii) decontextualization prompts partonomy (cf. p. 89): the temptation to attribute
properties to objects. The model therefore contains built-in contextuality: its very
foundations make contextuality of inscriptions obligatory.
It does so firstly by placing terms in constructor records (C-type records), that is, in
structural contexts – some say 'cotexts' – that are utterances or utterance segments. This
is a vision of context which is conventional, well understood, and good by itself.
94. The argument is recalled and summarized in Gardner 1987/1993, p. 45.
95. Ricœur 1969, p. 94; Rastier 1998a, etc.
It does so secondly in the systemic analogies inscribed in A-type records. This is more novel and requires extending the conventional notion of context. The part of a system constituted by the four terms of a systemic analogy – that is, in a plexus, the four terms copositioned in two A-type records, the latter linked with a paradigmatic link – is a 'context' inasmuch as this inscription profiles the four terms in a determined manner.
Take for example the following systemic analogy:
(a)
femme : homme :: vache : taureau
Remember that the definition of systemic analogy (cf. supra) rests on a similarity of
meaning ratios. If one introspects on the mode of presence of meaning in the systemic analogy above, one perceives that the meanings at work in "ah la vache!", "sang de taureau", "le taureau par les cornes", "trente mille hommes", "t'es pas un homme", "l'homme est un loup pour l'homme", "cherchez la femme", "ce type c'est une femme"96, would make it difficult to involve such terms in analogies like analogy (a). Consequently, analogy (a) necessarily profiles each of its terms towards biological sex and the human or bovine character, and meaning, in that inscription, is therefore one of zoological taxonomy. This is the sense in which the corresponding inscription is contextual; the context of femme in it is:
X : homme :: vache : taureau
Contextuality encompasses a third aspect, which is the most important one and the most
difficult and is not addressed in this work: the situational context. It is ultimately
regarded as the condition of a radical treatment of meaning.
Contextuality is thus constitutional in a plexus. It is so also in the dynamics, as we shall
see later.
Dispersion
The dispersion of terms across records – and via the records, across various paradigms –
matters, because it constitutes a sort of 'potential connectivity' which is revealed upon
their use in the dynamics. This connectivity is complementary to the 'static connectivity'
embodied by the paradigmatic links. When dispersion is high in a zone, it increases
what will be named below 'constructability transfer'. In the categorial vocabulary, one
would say that high dispersion causes 'good' categories, that is, sets which share many
properties and much behaviour. We shall see below (p. 108) dispersion contributing to
render the systematicities that generativism treats with transformations.
When dispersion is weak, on the contrary, the sharing of behaviours between terms is lessened. In the categorial vocabulary, one would describe this as sub-categorization, which may reduce to categories that communicate little or not at all. An example of this will be seen p. 112. Between this effect and the previous, contrary one there are of course only gradients and no sharp break, since there are no reified categories.
96. The technique of exposition consists of enumerating those exemplars so as to avoid the use of "proper meaning", "meaning extension", "derived meaning", "figurative meaning", which are not postulated.
3.5.4. An analogy holds between terms
All segments of linguistic form making up an analogy, be they in C-type or in A-type
records, are 'terms' by definition. The question of terms will be addressed in detail again
p. 193.
'Term' has often been used in linguistics; the definition proposed here is firstly compatible with this one: A term is a word or a group of words constituting a syntactic unit97. But it is secondly modified as follows: the only criterion that commands the making of terms is
their belonging to an analogy, that is, terms are the consequence of (at least) one
structure mapping and have no other raison d'être, they result only from the
segmentation which contributes to structure mappings. A term thus has morphological
and syntactic relevance. Saying this is just paraphrasing the clause that terms are
commanded by structure mappings. Remember Saussure: "L'analogie est d'ordre
grammatical" (Analogy is grammatical in nature.).
This clause, which is constitutive of the term, makes it tend to align with the constituents of classical analyses (morpheme, syntagm), without this alignment having to hold in all cases: some structure mappings may not follow classical frames. We shall see several examples of this in Chapters 4 and 5, and a typology of such cases will be made p. 193.
The only, but strong, requirement made of a term is that it be re-identifiable in its recurrences: in each, it is re-identified as "the same term".
3.5.5. Vacuity of terms
LINGUISTIC SUBSTANCE – We do not have to posit a fundamental substance which will
then receive attributes. Saussure98
There is a dearth of analogy between language and any other human thing for two
reasons: i) the nullity of the signs; ii) the faculty of our mind to consider a term which
is null in itself (But this isn't what I meant initially. I deviated) 99.
If one takes that any semiotics is only a network of relations (or that a natural language
for example is only made up of differences), the terms can be defined only as points at
the intersection of the different relations. Thus, the examination of the elementary
structure of meaning well shows that any term of the semiotic square is the point where
relations of contrariety, contradiction and complementarity intersect100.
Following the principle above which governs them, terms have no properties which would be their attributes. Much as the slot-filler schema was refused in Chap. 1, it is a sort of object-property schema that is now going to be criticized.
In order to address the linguistic dynamics in an appropriate manner, one is led to envisage terms deprived of content, that is, deprived of properties. This is so because conferring a property on a term is sanctioning what has been observed in the past and in the present, without building a base open and flexible enough for future
97. Pei 1969.
98. Saussure 2002, p. 81.
99. Ibid. 2002, p. 109.
100. Greimas 1993, p. 388.
behaviours. Allocating terms properties that can take values is in general an attitude of renunciation, because it renounces developing an account of the abductively productive dynamics and substitutes for it the impoverished ratification of a collection of observed facts. Besides, such approaches invariably fail on acquisition: they fail to show under which conditions the succession of acts changes the values of these properties. Giving these values the character of continuity, or adjoining a stochastic complement to the theory, does not solve the question either, as we shall see.
Terms being content-free, it is their connectivity – that is, their various occurrences in a plexus and, within those occurrences, their copositionings with other terms – that accounts for their dynamic behaviours and their productive possibilities.
In this line of thought, analogy carries with it a promise (to which this model undertakes
to do justice): its eliding of the predicate is an important enabler of content draining.
Nothing requires that it be alone in this, but no other device has been found so far. In
order to leave open the possibility for other devices with the same quality, in several
places in this work, stress is moved from analogy to "copositioning", that is, to the
establishment of mutual ratios between terms: any device capable of establishing any
copositioning would be receivable. Analogy is the first one, the main one, and the best
studied one. Others are possible, in principle only so far.
Units deprived of content are difficult to envisage and manipulate. It is hard to build
solidly without a "stable foundation". It is hard to make models or theories deprived of
"essences". The building of science needs solid foundations and is not deemed to be
compatible with an absence of content. This is where we must strive however.
Classically, three orders of properties are postulated: a) syntactical, b) semantic, and c)
phonological. Let us review them and see how content draining is achieved, or not, for
each in turn.
a) In the current state of this model, syntactic properties (categories, syntactic features, etc.) are refused, and the model is indeed free of them. We shall see below how it is capable of syntax and to what extent.
b) The usual semantic properties (lexical meaning, linguistic meaning) are not posited either. However, in the current absence of the semantic side of the model, this non-postulation remains a petitio principii, and the demonstration that it is possible to evacuate semantic properties is not yet made. This possibility stays as the favourite conjecture, but it is still to be proven.
c) Terms, as presented so far in the model, conserve a form (orthographical in practice, though it might be phonological), which seems to contradict the principle of their vacuity. This must be viewed as a lag, accepted for lack of anything better, between a desired goal and what it was possible to realize. Besides, this region of the model poses a constitutional question: cf. p. 298 for a discussion of the question of access, and p. 296 for another as to what point it is possible to downgrade the lexicon.
Setting aside this last reservation, the principle of the vacuity of terms is stated, and the demonstration of its validity will be made below in morphology and syntax. This principle will be presented p. 89 as the condition for a quality that the model must have: isonomy.
3.5.6. Suspending the minimality of terms
A term is not constrained to be elementary.
In establishing the categories of analysis like morpheme, seme, phoneme, etc. one
strives usually towards minimality and elementarity, each time along one of the
dimensions of the analysis. This aims at making available a tool set, as reduced as
possible, so that various combinations of these tools give a good account of the
immense variety with economy; this posture is common in science and the rationalist
tradition presents it as inherent to science. This approach, when it succeeds, means that
the viewpoints or dimensions in question are independent – which may be construed as
tautological.
Now, in language, these independencies, without being negated, do not fully hold. This is why they should not be postulated; it is more appropriate to adopt weaker postulations, and the quasi-independencies will then have to be reconstructed as results.
Thus, in the model, minimality itself is questioned; as a matter of principle, no
minimality is posited. Consequently, description and explanation do not rest on
'elements' but rather on terms – the extension and the level of definition of which must
remain contingent in principle – and on inscriptions at multiple levels. The question will
be developed and discussed p. 194, after meeting several examples.
The terms and inscriptions we have been considering are static. We also saw that the rendering of the effects is expected from their use in the dynamics. The major characteristic of the latter is that they are abductive, and the device which links statics and dynamics in the model consists of four 'abductive movements'. These are now going to be presented.
3.6. Abduction, abductive movements
3.6.1. Abduction: conjectural inference in an open frame
The model which is sought must propose mechanisms that show movements from the old to the new, from the already known to the never yet uttered, and that are abductive, because each such movement is a presumption of success made without an anticipated proof of success being possible; such a proof would, besides, not be very useful. Such presumptive movements correspond to what has been studied since Aristotle and was termed 'abduction' by Peirce.
Among the conjectural inferences which do not belong to the technical acceptation of
induction, but may possibly belong to its usual one, let us single out abduction. This
modality of inference was identified by Aristotle in the Posterior Analytics just after
presenting induction. In order to catch what Aristotle understands with abduction, let us
take his example:
Science may be taught      (Major)
Virtue is a science        (Minor)
Virtue may be taught       (Conclusion)
In a classical deductive reasoning, the Conclusion follows from the Major and the
Minor, whereas in an abductive reasoning, a Minor is sought to act as a probable
intermediary between the Major and the Conclusion; in other words, abduction starts
from the Conclusion and from the Major to infer a possible Minor. We are thus in the presence of an unsure reasoning which is not, by far, an inductive reasoning, because it does not move from the general to the particular: as an illustration, in the example quoted above, the Conclusion and the Major are universal propositions in the Aristotelian sense, that is, in predicate logic, they map onto universally quantified formulae101.
Thus for Aristotle, the abduction in the example is:
Virtue may be taught; now virtue is a science; therefore (abductively) science may be taught.
Abduction must be recognized in Spinoza's inadequate inference:
The modes of perception may be grouped into four classes: I. [acquired by hearsay]; II. [by vague experience]; III. There is a perception in which the essence of a thing is inferred from something else, but in an inadequate [non adaequate] manner; which takes place either when we infer the cause from some effect, or when we draw the conclusion [of the fact] that a universal is always associated with a certain property. IV. Finally, there is a perception in which the perceived thing is perceived by its essence only, or by the knowledge of its proximate cause [cause prochaine]102.
In the Treatise he focuses his attention on the fourth mode of perception only. He is an atheist and Descartes a believer, but for both of them the true knowledge of things cannot be satisfied with inadequate inferences, even if, he recognizes, the things however which I could so far understand by such a knowledge [mode IV] are few (p. 20), and the only example he gives is taken from Euclid.
We will now omit other important steps in the history of abduction (Peirce, Eco, etc.)
and move on to its role in language.
Besides its role in thought and reasoning, which is its domain of origin, as Aristotle's example shows, abduction plays an important role in other problem-solving tasks such as utterance planning or natural language understanding103. I suggest further that abduction is even involved in basic processes like unit identification, syntactic analysis, etc.
If, as logicians remind us, abduction is dangerous in reasoning, for a speaker of a natural
language, the danger is not so great: he certainly performs abductions, but he assumes
that his interlocutor makes about the same ones. From experience, abductive inferencing
in language works well most of the time, and it is easy to correct as language use is
interactive.
We should now say how abduction happens. For Chomsky, the autonomy of syntax
constitutes the only possible response to the problem of abduction104. The proposition
made here is that this assumption is not necessary.
The accomplishment of language acts is based on abductive mechanisms. They are
abductive in the sense that, from inscriptions in a plexus (attested linguistic facts), they
101 Ganascia 2000, p. 129.
102 Spinoza 1661/1984, p. 16.
103 Houdé 1998, p. 24.
104 Laks 1996, p. 171.
authorize new facts without this being a logical deduction, exactly as Science may be
taught is not "demonstrated" in Aristotle's example above. There are three differences
however.
Firstly, in the example of Aristotle, i) the path leading to the abductive conclusion
contains only two steps, ii) there is only one result, and iii) the result is a proposition,
whereas here, i) the path leading to the abducted result may (as we shall see below)
consist of several steps (several 'phases'), ii) one and the same abductive process may produce
several results in different phases each with its own strength, and iii) the results are
linguistic terms (sometimes more complex results) and not propositions.
Secondly, in Aristotle's example, the mechanism is based on propositions such as Virtue
is a science, whereas here it rests on the positional exploitation of analogical
inscriptions.
Thirdly, abduction as it is presented above has a totalistic flavour: we know what
sciences are (all sciences), we know that all sciences can be taught, we know with
certainty that virtue is a science: the universe of discourse is known and closed; it is
entirely framed by unambiguous categories. In the linguistic dynamics on the contrary
we have to dispense with all this. Following the radical assumption which directs this
work, we can rely only on occurrential and proximal inscriptions which are the result of
partial cognitive experience, and therefore the processes expected to develop have to be
proximal. It is an abduction reshaped in this way which must account for linguistic
productivity.
Abduction is implemented by computations: the dynamic side of the model is abductive
by construction, its results cannot be demonstrated by logic, and they do not come from
categories and rules. The results are best compromises between the constraints
associated with an occurrential linguistic act and the inscriptions accessible in the
plexus. The dynamics are built up on 'abductive movements' which are elementary
movements. These relate the static view of analogy to its dynamic view. Four abductive
movements have been found necessary and will now be defined: by transitivity, by
constructability transfer, by expansive homology, and by transposition.
3.6.2. Abductive movement by transitivity
From the two analogies:
(1) a : a' :: b : b' and
(2) b : b' :: c : c',
which share the pair b : b', one abducts the following analogy:
(3) a : a' :: c : c' .
This is what is called 'abduction by transitivity'; rigorously, it is the paradigmatic link
which is transitive: it implements the mathematical notion of a transitive relation holding
between pairs. The given analogies (1) and (2) are alleged to be 'good' analogies:
the speaker whose static linguistic knowledge this plexus represents finds them
acceptable.
According to the abductive movement by transitivity, the abducted analogy (3) is also
alleged to be acceptable, but possibly a little less so. With analogy, nothing can be
demonstrated, nothing is guaranteed; this is why the movement to (3) is an abduction
and not a deduction. After several such movements, the risk thus taken may, as we shall
see, be compensated (or not, depending on the case) by collateral dynamics (parallel
computation paths) which, integratively, add other abductions to this one, thus
reinforcing the corresponding results.
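By way of illustration only, here is a minimal sketch in Python of abduction by transitivity; it is not the model's implementation, and the word pairs are invented for the example. It simply chains attested analogies that share a pair:

```python
# Sketch of the abductive movement by transitivity (illustrative only).
# An analogy a : a' :: b : b' is stored as a pair of pairs; the data are invented.
attested = {
    (("lion", "lionne"), ("chien", "chienne")),   # a : a' :: b : b'
    (("chien", "chienne"), ("chat", "chatte")),   # b : b' :: c : c'
}

def abduct_by_transitivity(analogies):
    """Abduct a : a' :: c : c' from a : a' :: b : b' and b : b' :: c : c'."""
    abducted = set()
    for left, shared1 in analogies:
        for shared2, right in analogies:
            if shared1 == shared2 and left != right:
                candidate = (left, right)
                if candidate not in analogies:
                    abducted.add(candidate)
    return abducted

print(abduct_by_transitivity(attested))
# {(('lion', 'lionne'), ('chat', 'chatte'))}
```

Chaining more such steps would lengthen the abductive path and, as the next paragraph notes, let the analogical ratio drift.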
Reinforcements aside, along the path a : a' then b : b' then c : c', the ratio may
drift. If one chains up several steps in this way, the ratio may, after a while, not really
be conserved any more. Abduction has then become hazardous. The assumption – this is
suggested by the detailed behaviours of the model, below – is that linguistic acts –
utterance reception for example – are, in their majority, computed with short chains and
therefore under comparatively sure conditions. Some others, a minority but not a rare one,
involve abductions that become hazardous because the dynamics of these acts mobilize
longer chains: the terms of the linguistic act and those of the plexus are in this case not
very congruent. Several such examples will be given below.
3.6.3. Abductive movement by constructability transfer
The second abductive movement is by constructability transfer.
[Figure: inscriptions sharing the bioccurrent term chien – paradigm {un + chien, ce + cheval} and paradigm {petit + éléphant, grand + chien} – and the abducted paradigms {un + chien, ce + chien, ce + cheval, un + cheval, un + éléphant, ce + éléphant} and {petit + éléphant, grand + éléphant, grand + chien, petit + chien, petit + cheval, grand + cheval}]
Figure 6 Constructability transfer
It is appropriate to present constructability transfer with an example – a quasi-formalization will be given in an appendix. The two paradigms at the top of the figure
are inscriptions in the plexus and they share a term: chien. This is the "bioccurrent"
term. Constructions un + chien and ce + cheval being attested, the
construction un + cheval becomes acceptable by abduction. This is what constructability
transfer is.
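As a rough sketch only (Python, with the data of the figure), the transfer can be pictured as follows; note that the real computation proceeds phase-wise, whereas this sketch closes the two paradigms in one go and so produces at one stroke the abducted constructions described in the next paragraph:

```python
# Sketch of constructability transfer (illustrative only).
paradigm_1 = {("un", "chien"), ("ce", "cheval")}
paradigm_2 = {("petit", "éléphant"), ("grand", "chien")}

def constructability_transfer(p1, p2):
    """If the two paradigms share a term (here, for brevity, in right position),
    abduct the Cartesian product of their constituents, minus what is attested."""
    bioccurrent = {r for _, r in p1} & {r for _, r in p2}   # e.g. {"chien"}
    if not bioccurrent:
        return set()
    attested = p1 | p2
    lefts = {l for l, _ in attested}
    rights = {r for _, r in attested}
    return {(l, r) for l in lefts for r in rights} - attested

for left, right in sorted(constructability_transfer(paradigm_1, paradigm_2)):
    print(left, "+", right)   # un + cheval, ce + chien, grand + éléphant, ...
```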
The construction un + cheval is not the only one to be produced by abduction: un +
éléphant, grand + éléphant, petit + chien can also be abducted. The figure thus gives
the feeling that a Cartesian product is built. This is not false, but it has to be complemented
by noting that its elements are produced in successive phases (following the principles
of the computation that will be detailed below). This phasing depends on the connexity
of the initial paradigms, which in the figure is a very degraded notion since each contains
only two records; usually, paradigms consist of more records. The Cartesian product of
the possibilities is therefore not, in general, built entirely; its building is phase-wise. It
begins with the elements closest to the starting ones, according to the progressive needs
of the dynamics of a particular act. So the effect of the bioccurrent term, the
constructability transfer, most often reaches areas not too remote from the initial records
in the paradigms. In less favourable cases, it may, after a number of phases, have a
broader extension, but in general the products of such long paths will be superseded by
other effects, following shorter paths, abductions that are more immediate and more
pertinent vis-à-vis the terms of the act. This is a manifestation of the principle of
proximality; there is another one.
It is not fortuitous that the data of the example all bear on animals. The starting
paradigms inscribed in the plexus bring together linguistic data related to the cognitive
sphere. The conjecture is of the type "birds of a feather flock together": linguistic
knowledge (and cognitive knowledge) would have inscriptions observing this
principle. Of course, there are many ways to be similar; each may guide the organization
of a particular zone of the plexus, or of several such zones. The zones coexist as twins
(macles) do in crystal structure: each is a proximal organization and, at their borders, they
join as they can, which means: with organizational breaks.
The example which illustrates constructability transfer is built with binary constructions,
and the definition extends straightforwardly to ternary constructions.
A formalization and a critique of constructability transfer appear in an appendix,
section 13.2. Abductive movement by constructability transfer (p. 319).
Constructability transfer is the first movement that contributes to structural productivity;
the second one is expansive homology.
3.6.4. Abductive movement by expansive homology
3.6.4.1. Principle of expansive homology
If the constructive paradigm C1-C3 is available in a plexus:
C1  une + journée        → une journée
C2  une + belle journée  → une belle journée
C3  une + occasion       → une occasion
and if in addition the constructive paradigm C4-C5 is available:
C4  belle + journée   → belle journée
C5  belle + victoire  → belle victoire
then constructions C6, C7 become acceptable:
C6  une + belle occasion  → une belle occasion
C7  une + belle victoire  → une belle victoire
Premises C1 through C5 are sufficient to abduct C6 and C7, but it is not necessary that
they be exactly these ones, nor as numerous, nor that they bear on these precise terms; it suffices
that journée and occasion co-categorize105 in one way or another – we shall see
how below – and it suffices that attestations like C1–C7 apply to terms which are
distributionally similar to journée and occasion.
What matters is that the expansion belle journée of journée occurs in C2, where it is
homologous to journée in C1 (or that a similar fact holds between terms distributionally
similar to these).
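A minimal sketch (Python, using the premises C1 through C5 above) can make the movement concrete; it implements the 'hard' case only, and the guard against re-expanding an expansion is a simplification of this sketch, not a clause of the model:

```python
# Sketch of the abductive movement by expansive homology (illustrative only).
# A record is (left, right); the assembled form is "left right".
records = {
    ("une", "journée"),        # C1
    ("une", "belle journée"),  # C2
    ("une", "occasion"),       # C3
    ("belle", "journée"),      # C4
    ("belle", "victoire"),     # C5
}

def expansive_homology(recs):
    """Detect hard expansive gates and abduct new constructions."""
    abducted = set()
    for a, x in recs:
        for m, x2 in recs:
            # hard expansive gate: x and its expansion "m x" are homologous after a
            if x2 == x and (a, m + " " + x) in recs and a != m:
                for a2, y in recs:          # other terms constructed with a ...
                    if a2 == a and not y.startswith(m + " "):
                        abducted.add((a, m + " " + y))   # e.g. une + belle occasion
                for m2, z in recs:          # ... and other terms constructed with m
                    if m2 == m:
                        abducted.add((a, m + " " + z))   # e.g. une + belle victoire
    return abducted - recs

for left, right in sorted(expansive_homology(records)):
    print(left, "+", right)
# une + belle occasion
# une + belle victoire
```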
3.6.4.2. Expansive gate
A plexus configuration such as that of the example is an occasion for expansive
homology. I call such a configuration an 'expansive gate'. An expansive gate is a
configuration of plexus inscriptions which allows expansive homology (the abductive
movement by expansive homology) to take place. This designates in a plexus a
'resource' which is functionally defined and more or less organically bounded, that is, it
is embodied by an identified subset of records. This 'resource' is not a detachable part of
the plexus; it is rather a subset of the plexus which is profiled for a given finality. Its
elements, the records, also link with records that are foreign to the expansive gate, thus
contributing to serve other finalities.
When is an expansive gate constituted? In a restrictive view, when the criterion of
expansive homology holds between the terms themselves: in the three constitutive
records, the terms themselves are present in the required positions – this is the case in
the example. Let us call this a 'hard' expansive gate. But an expansive gate also operates
if the critical terms are not identical but only distributionally similar. It is then a
'soft' expansive gate. It just operates more slowly: it requires a few more computation
phases to assess the distributional similarity (elsewhere I write "co-categorization") of
terms. The softness of expansive gates is a factor of productivity and must be respected.
The B2-B3 process for syntactic analysis, which will be studied below, operates
following this soft vision.
To make things concrete, a systematic survey of expansive gates was made in the French
plexus of 1800 terms that is used in chapters 4 and 5. It is restricted to hard expansive
gates. The term which is homologous to its expansion is underlined and the expansion is
not.
hommes
Espagnole
bon cheval
homme habile
trop grand
pas assez
refaire
femmes
grande
bon temps
cours pour adultes
pas bon
venue
bon coup
coup tordu
grande sœur
un livre de cent pages
est arrivé
je vais avec eux
à chaque fois
deux cents
est venu
il est ici
est venue
105 That is, one may be suggested as similar to the other, cf. infra.
One may call the underlined term 'head' but I do not: the model does not require
reifying the notion 'syntactic head', cf. section 6.5. Syntactic head (p. 183).
In this model, causal chains are long, tenuous, multiple, and difficult to grasp between,
on the one hand, the exemplarist detail of the plexus and the swelling of the
computation, and, on the other hand, the overall, externally observable behaviour, the
macroscopic effects. For this reason, it is difficult to perceive how mechanisms (that are
elementarily variant) produce quasi-normative observable results. Neuromimetic
connectionist models also present this opacity and do not solve it very well. Here, the
table above contributes to alleviating it. In section 4.1. Analysis with agents B2, B3 (p. 97)
we shall see special queries that 'expose' the detailed reasons of computation results; in
another way they also contribute to reducing that opacity.
The notion 'expansive gate' frames, for pedagogical purposes, a mechanism whose level
is intermediate; this makes it possible for dynamics, otherwise obscure, to be
brought closer to the knowledge that readers have, based on previous notions like
'expansion', 'head', 'generation rule', etc. But it has to be understood that 'expansive gate'
is not properly a concept of the model; it does not correspond to anything distinctly
reified in it.
A complement on expansive homology is given in an appendix, section 13.3. Abductive
movement by expansive homology (p. 320). Abductive movements by constructability
transfer and by expansive homology contribute to structural productivity (cf. Chap. 4).
3.6.5. Transposition (or not) of analogy, abductive movement by transposition
The abductive movement which is now about to be defined does not contribute to
structural productivity; it contributes to the systemic productivity which is the subject of Chap.
5.
The systemic analogy X : Y :: A : B being given, the following analogy: X : A :: Y : B is
defined as its transposed analogy; terms Y and A are simply swapped. If an analogy is
equivalent to its transposed analogy, then the question:
(a) find X which is to Y as A is to B
is equivalent to:
(a') find X which is to A as Y is to B.
Moving from (a) to (a'), that is abducting (a') from (a), is performing an 'abductive
movement by transposition'.
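The movement itself is simple to state; as a small illustrative sketch in Python (the French article pairs are merely an example, not data of the model):

```python
# Sketch of the abductive movement by transposition (illustrative only).
def transpose(analogy):
    """X : Y :: A : B  ->  X : A :: Y : B (terms Y and A are swapped)."""
    (x, y), (a, b) = analogy
    return ((x, a), (y, b))

print(transpose((("le", "la"), ("un", "une"))))
# (('le', 'un'), ('la', 'une'))  i.e.  le : un :: la : une
```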
Most often, this abductive movement is acceptable, that is, a speaker who accepts (a)
also accepts (a'). For example, it works very well with French articles and with the verbal
paradigms of Indo-European languages. But it also occurs that transposition yields a
curious analogy, one understandable only at the expense of an interpretation, or even an
unacceptable one. A survey of cases is made in an appendix, section 13.4. Abductive
movement by transposition (p. 321). This same appendix provides a detailed description
and a critique of the abductive movement by transposition.
3.6.6. Three classes of analogy with abductive movements
A table of three classes of analogies was first introduced p. 59; please refer to it for the class
definitions. It was then complemented, p. 67, with their modes of inscription in the plexus.
It is now complemented again, and finally, with the abductive movements which apply
to each class.
Transposition of structural analogy is impossible because the transposed pairs never
define an analogical ratio. Transposition of systemic analogy is often possible with
exceptions (cf. appendix), therefore it is only potential. Constructability transfer and
expansive homology are proper to structural analogy.
As displayed in the table, transitivity is common to all three classes. Also shared by all
three classes are notions like the individuality of terms, the elision of the predicate, the
determination of the analogical ratio, and the familiarity orientation.
Class A – Systemic non structural analogy
  Examples: la : le :: une : un; soigneux : avec soin :: rapide : vite; happiness : happy :: beauty : beautiful
  Place in grammars: paradigms without overt manifestation
  Inscriptions in A-type records: A la le; A une un
  Inscriptions in C-type records: systemic non structural analogy cannot be expressed in C-type records
  Transitivity: +
  Transposition: + (potential)
  Constructability transfer: –
  Expansive homology: –

Class C – Structural non systemic analogy
  Examples: un : un soir :: le : le jour; soir : un soir :: jour : le jour
  Place in grammars: syntax
  Inscriptions in A-type records: structural non systemic analogy cannot be expressed in A-type records
  Inscriptions in C-type records: C un+soir = un soir; C le+jour = le jour
  Transitivity: +
  Transposition: – (impossible)
  Constructability transfer: +
  Expansive homology: +

Class AC – Structural and systemic analogy
  Examples: élu : élue :: maître : maîtresse; lawful : unlawful :: honest : dishonest; un : unlawful :: dis : dishonest
  Place in grammars: paradigms with overt manifestation
  Inscriptions in A-type records: A élue élu; A maîtresse maître
  Inscriptions in C-type records: C élu+e = élue; C maître+sse = maîtresse; C un+lawful = unlawful; C dis+honest = dishonest
  Transitivity: +
  Transposition: + (potential)
  Constructability transfer: +
  Expansive homology: +
Table 6 Three classes of analogy with abductive movements
3.6.7. Partonomy and isonomy
3.6.7.1. Having properties or dispensing with them
For Koenig106, partonomy is the characterization of language objects by their properties.
Example of partonomic proposition: "all nominals bear case". A few lines further, he
opposes partonomic to taxonomic. This opposition seems to me not to be the most
interesting one to be made in the Analogical Speaker.
106 Koenig 1999a, p. 15.
It seems more productive to oppose partonomy to 'isonomy'. The etymology of isonomy
is: same law. In mineralogy, two crystals are isonomic if they are built following the
same law. In politics, the old French word "isonomie" means equality before the law,
equality of civil rights107.
In this framework, I propose – this is a slight modification of the meanings above – to
call 'isonomy' the fact of following reasons i) attached to the objects themselves, without
having to draw on their properties, and ii) which get defined exactly at the level at which the
objects themselves are defined. The four abductive movements defined above are
isonomic because they start from (pairs of) terms to reach (pairs of) terms through
movements that only involve the (pairs of) terms and their copositionings.
Isonomy differs from homogeneity: a partonomic theory is homogeneous if all its
objects have the same types of properties; this does not make it isonomic. Isonomy is
different from mereology and compatible with it: parts are not properties. So the
maximum contrast of isonomy is indeed with partonomy, which is the fact of positing
properties.
An isonomic theory is more economical than a partonomic theory because it eschews
numerous questions associated with partonomy: i) having to separately describe the
structure of the properties (for example trees or lattices of syntactic features), ii)
categorical effects of sharp behavioural jumps when moving between different values of
a property, iii) conditions under which the value of a property should change to reflect
an evolution, etc.
Isonomy facilitates the suspension of minimality (supra).
3.6.7.2. The Analogical Speaker is isonomic
The question of analogy-making108 is always presented as a partonomic process: in the
survey made by French109, all models are partonomic (details can be found about two of
them p. 186). Most linguistic theories are partonomic; only some connectionist models
are not.
The Analogical Speaker stresses, on the contrary, the importance of isonomic dynamics.
All four abductive movements are isonomic; this is apparent from their definition. In
this model, the analysis of a received utterance will be defined below as a series of
structure mappings, and the dynamics that account for it are entirely isonomic, whereas
they are usually viewed as partonomic. In the Analogical Speaker, analysis is isonomic;
in it, the parsing itself of the received form is not partonomic; it has to be seen as
mereological, which is not the same thing.
Partonomy has the unfortunate consequence of pushing one into artificial decisions,
lexical category assignment for example, which has already been addressed in Chap. 1.
Another example is the syncretism of the forms which is a result of positing that a form
107 Littré dictionary.
108 'Analogy-making' (also called 'analogical mapping' by some authors) consists of discovering analogies (making them emerge) in a model in which there is no notion of analogy prior to this operation.
109 French 2002.
belongs to a place in a system because of the syntactic features it has as some of its
properties; the question is treated in detail p. 160. Isonomy is a corollary of the vacuity
of terms the utility of which was shown: terms deprived of properties can only be used
by isonomic processes.
So this model concentrates on isonomic dynamics. It demonstrates an isonomic
productivity of analogies, but it presupposes to that end a body of analogies to start with,
which have to be readily available. It does not tell how this initial body is obtained. This
is 'priming', which will be met again below. Priming may well draw on different
mechanisms, and these may well be partonomic. I do not claim, therefore, that the
isonomic dynamics accounts for the entirety of linguistic dynamics, but it surely accounts
for a very great deal of it.
3.7. General framework of the dynamic side of the model
I have shown how the static side of the model is structured and can be elaborated. It is a
plexus which models the static side of a speaker's linguistic knowledge.
I have just defined four abductive movements; they relate the static side and the
dynamic side together by showing how static inscriptions can be conducted into the
dynamics.
I have established that these dynamics are abductive and proximal. They are diverse and,
as we are about to see, fragmented. Macroscopic results are produced by the synergetic
cooperation of simple and numerous processes. The dynamics are controlled by a
general frame in which they operate. This frame is now going to be explained. It is
deterministic, and organizes the production of results by fragmenting it, ensuring the
synergies and the overall operation control.
3.7.1. The linguistic dynamics is a deterministic computation
In section 3.5.2. Determinism, idiosyncrasy, normativity (p. 75) I stated why the
dynamics must be deterministic, and I indicated how determinism is compatible with
high individual detail variation (the speaker's idiosyncrasy) while preserving a quasi-uniformity of external effects (linguistic quasi-normativity).
So the dynamics present themselves as deterministic computations involving great
quantities of details, which may vary considerably. The variation is indeed individual variation
because, for a given speaker, the details may differ significantly at two remote moments
in the speaker's history, and they may differ slightly at two close moments; but, at a given
moment, the computation is determined.
The computation is deterministic but not algorithmic, it is a heuristic process.
Determinism and abduction are not contradictory: abduction, however unsure an
inference it is, is nonetheless a procedure which may be deterministic.
3.7.2. Linguistic acts and linguistic tasks
A speaker's know-how comprises two fundamental linguistic acts: the reception of an
utterance and the emission of an utterance. Attention will now move on to the dynamics
of these acts. As to the dynamics of learning, it will be addressed in section 8.2. (p. 249);
the dynamics of linguistic change is a consequence of that of learning and of reanalysis.
The model uses a notion which is close to that of linguistic act: the linguistic task. A
linguistic task is an entire linguistic act, or a part of one which is functionally
homogeneous and can be carried out by a defined effector, in conditions that will be
specified.
A linguistic act is carried out differently depending on the congruence between its terms
and those of the plexus. For example, a given utterance may be received and analyzed
easily vis-à-vis a given plexus and with difficulty with another one. Depending on the
congruence, the computation of the acts then requires uneven computational means. It is
a question of cognitive load, and this modulation is governed by the escalation principle.
3.7.3. Escalation principle
The escalation principle is also a principle of economy, depending on how you consider
it. It goes as follows: the dynamics of a linguistic act (like that of any task it may
comprise) first launches processes that are short, therefore economical; starting
from the arguments of the act (or of the task), these reach inscriptions that are proximal to
them. Such dynamics are only slightly abductive. A short process produces, in priority, directly
attested forms, possibly anomalous ones.
When short dynamics prove unproductive, escalation initiates longer, therefore more
expensive processes, soliciting inscriptions more remote from the argument terms. Such
processes are more abductive. Either they produce forms that are attested but more
remote from the task's terms, or they produce forms that are not directly attested, by
assembly. The latter are 'analogical', therefore 'regular'.
The escalation effect is obtained almost without particular care, a simple and sound
architecture favouring more direct results. The following features of the model (cf. below)
contribute to this 'naturality': phasing, competition, possible cooperation of different
paths, integration of effects.
This explanation relies on the articulation "when short dynamics prove
unproductive, etc.". This is just a way to put it in order to introduce the question simply; it must
not imply the idea of a particular point in the process where a precise decision would
take place to trade short dynamics for longer ones. Actually there are multiple sub-processes
of multiple natures, progressing in parallel. Some produce early, successful
intermediate results, and this makes them overcome the unproductive ones, or those with
weaker, later results. However, the final effect is the one that has been indicated.
The escalation principle is illustrated in section 6.4. Anomaly and regularity (p.181) but
it must be understood that it has applications beyond anomaly-regularity; for example it
has an important part in explaining the progressive generalization of a new structure
during learning, cf. section 8.2. (p. 249).
The presentation of the dynamic model will now proceed in a technical mode but still
remain introductory. A more complete specification appears in an appendix, p. 331.
3.7.4. Agents
A usual approach in modelling consists of dividing a complex dynamics into smaller
fragments. In this case, fragmentation has two converging reasons:
- the complexity and variety of overall linguistic effects, and their sensitivity to multiple factors, lead one to consider them as a combination of multiple, simple actions;
- if the substrate of language is the neurons, it is accepted that each has a very simple function and is not the locus of an elaborate intelligence. Linguistic intelligence, rather, should obtain as an overall effect. With Minsky110, intelligence is expelled from the elementary organs; if one of them were identified as doing something too complex, it would have to be replaced with an assembly of simpler organs.
In this model, the computation of a linguistic act is thus fragmented into small
functional computation units. Each is assigned to a small organ, functionally
specialized, and simple. Such organs are called 'agents', which may initially be
perceived as a metaphor for economic agents or for agents occupying a fragmented
function in a human organization. However, it is advisable to give up this metaphor
quickly and stick to the clauses specifying the agents' behaviour, without trying to
interpret them through the ordinary notion of agent.
The plausibility which is claimed is not a literal one. Agents do not match neurons or
anything anatomically identifiable. This work is analogy-driven and the functions of
agents are defined based on the processing of analogies. This tier is somewhat higher
and more linguistic than Minsky's.
Agents have different types depending on the elementary functions necessary in the
computation. The main functions which motivate agent types are:
- the comprehension of an utterance,
- the production of an utterance (not developed in the current state of the model),
- similarity suggestion (function in the service of other agents),
- the productive computation in a pluridimensional system which accounts for 'systemic productivity'.
A typical linguistic act engages from a few tens to a few thousands of agents depending
on the act and its congruence with the content of the plexus.
An agent111 is a short-sighted entity; its scope of awareness in the computing
environment comprises i) its duty, ii) a few plexus data which match the terms of its
duty, and iii) the point to which it delivers its results when it happens to produce any.
Upon its creation, an agent is assigned a duty which is a task to fulfil, but the agent does
not fulfil it entirely. It fulfils a part of it, which may be viewed as an incremental step. In
110 Minsky 1985, p. 23.
111 In this section, underlined words have a specified meaning in the model. They are used with this meaning coherently throughout this work and are not interpretable following their meaning in ordinary usage. Please also refer to the glossary at the end.
an incremental step, an agent determines more duties depending on its own duty and on
plexus data matching it; these are deemed apt to (abductively) prolong the fulfilment of
the agent's duty. The agent then recruits other agents for these duties. Recruited agents
are commissioners of the former which thus becomes their client. This takes place in
one phase of the computation. The complete computation of an act comprises in general
several phases, up to seven, ten, or fifteen; there is no definite limit to this number.
Thus, phase after phase, a structure of agents is built which is called the heuristic
structure. Examples of heuristic structures can be found in Figure 33 (p. 337) and Figure 44
(p. 364). The heuristic structure has in reality two different modes of edification; in it,
some global effects temper the short-sightedness of the agents; please refer to the
appendix. An agent – this is not the case for all of them – may come across a favourable
condition which holds between the data of its duty and the plexus data that match it;
such a condition is always a coincidence, but there are several types of them. It is a
settlement condition, and the agent then makes a settlement. A settlement is always
associated with an element which characterizes it: a term, or a term occurrence in the
plexus, or an element of some other nature. This element is a finding. A settlement
raises a finding. A finding will end up in a result, but with an intervening merging:
findings with the same content are merged into the same result. Merging is not detailed
here; please refer to the appendix. An example of a heuristic structure featuring settlement
and merging can be found in Figure 33 on p. 264.
A dynamics organized in this way is an 'agent-based solving', abbreviated to ABS. For
the technique of ABS please refer to appendix 14, p. 331, which specifies it. The
principal notions of ABS are also defined in the glossary.
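To fix the vocabulary, here is a skeletal sketch in Python of the ABS notions just introduced (agent, duty, commissioner, channel, settlement, finding, merging). The class shapes and the toy duties are assumptions of the sketch, not the model's implementation:

```python
# Skeletal sketch of the agent-based solving (ABS) vocabulary (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Channel:                 # delivery point; one per syntagmatic position
    span: tuple                # e.g. (7, 15) for (beaucoup)
    results: dict = field(default_factory=dict)   # content -> contributing findings

@dataclass
class Agent:                   # short-sighted entity working on a duty
    duty: str
    delivery: Channel
    commissioners: list = field(default_factory=list)

    def recruit(self, duties):
        """One incremental step: commission further agents for abducted duties."""
        for d in duties:
            self.commissioners.append(Agent(duty=d, delivery=self.delivery))

    def settle(self, finding):
        """A settlement raises a finding; findings with the same content
        are merged into the same result at the delivery channel."""
        self.delivery.results.setdefault(finding, []).append(self.duty)

# Usage sketch:
ch = Channel(span=(7, 15))
a1 = Agent("attest (beaucoup)", ch)
a1.recruit(["attest [un peu]", "attest [beaucoup trop]"])
a1.settle("beaucoup")
a1.commissioners[0].settle("beaucoup")     # parallel path, same content
print(len(ch.results["beaucoup"]))         # 2 findings merged into one result
```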
3.7.5. Strengths
ABS encompasses strengths which reflect lengths of abductive paths, that is, costs (the
convention is that a weak result is one which is costly to obtain). In the implemented
model, these costs are presented as computational costs and they are interpreted as the
homologs of the cognitive costs associated with the linguistic acts.
The first factor influencing strengths is distance from the initial terms: the more remote
a finding, the weaker the result. A second factor is reinforcement: when two parallel
abductive paths yield the same result, the result is reinforced by the mechanism of
merging. The dynamics of strengths is specified in detail p. 346.
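As an illustration of these two factors only (the decay factor and the combination rule below are assumptions of this sketch, not the model's specification, which is given in the appendix), strengths can be pictured as follows:

```python
# Sketch of strengths reflecting abductive path lengths and reinforcement.
DECAY = 0.8          # assumed: each additional abductive step weakens the result

def path_strength(n_steps, decay=DECAY):
    """A finding n abductive steps away from the initial terms is weaker."""
    return decay ** n_steps

def merge(strengths):
    """Reinforcement: parallel paths yielding the same result make it stronger
    than any single path alone (probabilistic-OR style combination)."""
    total = 0.0
    for s in strengths:
        total = total + s - total * s
    return total

print(path_strength(1))                                       # 0.8  (proximal finding)
print(round(path_strength(4), 3))                             # 0.41 (remote finding)
print(round(merge([path_strength(2), path_strength(3)]), 3))  # 0.824 (reinforced)
```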
3.7.6. Channels
Besides agents, the second important component of ABS is the channel. Channels are
points of the heuristic structure which receive results (the latter obtain from the merging
of findings). Any agent necessarily delivers to a single channel, which is its delivery
point. It is legitimate to see channels as ensuring the syntagmatic dimension in a
computation: when a task encompasses terms in mutual syntagmatic position, it opens
up exactly one channel per position. By contrast, sets of agents that are clients and
commissioners to one another, and between which no channel intervenes, are
paradigmatic to one another: between all their findings and the terms resulting from these
by merging, an exclusive choice must be made. The syntagmaticity of channels has an
application domain broader than just the received acceptation of 'syntagmatic', but it
applies in particular, and exactly, to questions of syntax in the most classical sense.
3.7.7. Similarity suggestion
In a global abductive process, similarity suggestion is defined as the sub-function or the
sub-process that brings up possibilities, the latter being thereafter settled – or not – that
is, validated. Starting from elements of a linguistic task, similarity suggestion consists of
designating elements similar to them as apt to allow the development of the
abductive computation.
Depending on the elements for which we want similar terms to be suggested, similarity
suggestion presents two varieties:
- simple similarity suggestion, which bears on one term only, cf. p. 353. This is principally a matter of distributional similarity.
- copositioned similarity suggestion, which bears on a pair of terms. Copositioned similarity suggestion can be found in agent ANZ, cf. corresponding appendix p. 377.
The former (simple) is a vision of similarity that is conventional and poor. The latter
(copositioned) is a richer vision, which is differential and is presented as an effort to
take full advantage of analogy. The rest will show to what extent this effort succeeds.
Similarity suggestion is dynamic, occurrence-based, and determined by the exemplarist
terms of a linguistic task. It suppresses the need to base the productive dynamics on
pre-established categories. Consequently, it denies categories the status of a theoretical
foundation, making them instead a phenomenon which is to be considered phenomenologically.
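By way of illustration, simple similarity suggestion can be sketched as ranking terms by the attested contexts they share (Python; the mini-plexus and the counting below are invented for the example and merely stand in for the model's own distributional criterion):

```python
# Sketch of simple similarity suggestion by shared attested contexts (illustrative).
constructions = [
    ("un", "chien"), ("un", "cheval"), ("ce", "chien"),
    ("ce", "soir"), ("un", "soir"), ("le", "jour"),
]

def contexts(term):
    """Attested positions of the term: its partner and the side it occupies."""
    return {(l, "right") for l, r in constructions if r == term} | \
           {(r, "left") for l, r in constructions if l == term}

def suggest_similar(term):
    """Rank other terms by the number of shared attested contexts."""
    ref = contexts(term)
    others = {t for pair in constructions for t in pair} - {term}
    scored = [(len(contexts(t) & ref), t) for t in others]
    return [t for score, t in sorted(scored, reverse=True) if score > 0]

print(suggest_similar("chien"))   # ['soir', 'cheval']
```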
The general framework of the dynamics also comprises an overall control mechanism
which organizes the dynamics in successive phases, chains them up, and ensures overall
triggering and activity control. Please refer to the appendix.
We now have available the general frame which makes it possible to introduce
particular agent types (Chap. 4 and 5). Agents have different types, each with its own
nature, its own duty structure, and its own type of products. Each type also has its own
procedures for recruiting commissioners and raising its findings, that is, settlement.
3.8. Conclusion
This chapter promised a lot without yet delivering much – this will be done in chapters
4, 5 and 6. It was long, and yet many details, some even important to the understanding,
had to be moved to appendixes in order not to further dissolve the argument.
We have established the static frame and the dynamic frame within which we are now
about to build structural productivity (Chap. 4) and systemic productivity (Chap. 5).
Next, Chap. 6 will show how some notions of grammar or of linguistic analysis now
lose their interest or are reconstructed.
Chapter 4.
Structural productivity
Structural productivity is defined as a productivity of assemblies. It is contrasted with
systemic productivity which is the productivity in pluridimensional language paradigms
and is the subject of Chap. 5. Linguistic productivity as a whole results from the
combined interplay of structural productivity and systemic productivity. This
dichotomous proposition may have to be complemented upon the extension of the
model to semantics but it is sufficient in the current perimeter of this work.
Structural productivity is the basis of syntax. On its own, it does not cover agreement,
which requires combining structural productivity with systemic productivity. This is
why agreement will be addressed in the next chapter only.
Structural productivity covers morphology and syntax in continuity in the sense that the
dynamics do not differ; plexus inscriptions are the warrants of the differences between
morphology and syntax.
Emission is not covered in this work because it is not clear where to start from as
long as semantics is not covered. Interpretation cannot yet be treated, for the same
reason. As to reception, it is treated up to (and including) analysis.
This chapter begins by redefining analysis; in this frame, it is indeed necessary to redefine
what analysis is. Then a series of commented examples shows the dynamics of analysis.
Example after example, they progressively define that which replaces the syntagmatic
structure. I demonstrate with an experiment that the notion 'transformation' is not necessary
necessary in the theory; with another one, that the notion 'thematic role' is not necessary
either; with an example, that categorial homonymy is easily solved in context and that
categorial 'disambiguation' ceases to be a question. Finally I propose a solution to the
problem of the amalgamation in Romance languages (ex. Fr. de + le → du) which is
theoretically economical.
4.1. Analysis with agents B2, B3
In a theoretical frame which encompasses categories and rules, to analyse means
segmenting the received utterance and assigning to each segment thus determined one of
the categories of the theory. This assignment has to follow rules and other
stipulations of the theory, its transformation rules for example112. This view cannot be
conserved here, since rules are not reified and neither are categories. So, in the frame of the
Analogical Speaker, the definition of 'analysis' has to be clarified.
The proposed vision is as follows: in an exemplarist theory, the finality of analysis is to
achieve a structure mapping – in the sense of Gentner and Holyoak – with an analogous
constructor record in the available linguistic knowledge. There may be only one
mapping or there may be several ones with several construction exemplars when these
are compatible. A mapping should be the best possible one or at least good enough, that
is, a mapping is a compromise between its adequacy and the computation cost to obtain
it.
The difference with a structure mapping à la Gentner is that the latter involves one level only;
it may be quite elaborate, but it encompasses one level only. Here, on the contrary, it is
necessary to pile up several levels of mappings, to concord with the idea – well
understood since Arnauld and Lancelot at least113, taken over by Hockett, then by
Generativism in the guise of the syntagmatic structure, and present as comparable levelling
in dependency grammars and in all modern syntax theories – that, in utterances,
it is necessary to make groupings. Psychology itself may need to make such levelled
groupings, but we see that it did so much less than linguistics114.
The difference between "the best possible one" and "at least good enough" is a question
of computational cost vs. the marginal utility for the speaking subject; a sub-optimum is
quite sufficient in ordinary linguistic experience, and only mathematicians, lawyers, and
poets occasionally invite us to push the effort a little further.
The view 'analysis as a mapping' will have to include meaning by the time we know how
to handle meaning. For the time being, it will be shown at work restricted to
linguistic form alone, that is, in morphology and in syntax.
B2 and B3 are the agents responsible for building analyses for a received utterance.
"Analyses" is a plural; this is not unimportant, as we shall see. Agent B2 (for "build 2")
considers binary constructions (ones with two constituents) and agent B3 ternary ones.
Any particular analysis task involves B2 and B3 in solidarity: here, at this phase, it is B2
which succeeds; at another point of the same task, B3 does.
The exposition will be carried out on examples. More abstract and formal descriptions
appear in appendixes.
112 With an important reservation however: according to Janet Fodor, speaking at University Paris 7, January 8, 2003, there is no model known at present for applying transformations to parsing; nobody sees how to apply transformational rules to parsing.
113 For example this: La deuxième chose que le relatif a de propre et que je ne sache point avoir encore été remarquée par personne, est que la proposition dans laquelle il entre (qu'on peut appeler incidente) peut faire partie du sujet ou de l'attribut d'une autre proposition, qu'on peut appeler principale. [The second thing which is proper to the relative, and which I do not know to have yet been noted by anyone, is that the proposition into which it enters (which may be called incident) can be part of the subject or of the attribute of another proposition, which may be called principal.] Arnauld 1960/1997, p. 49.
114 Although this idea was expressed as early as 1948 by Lashley at the Hixon symposium. Gardner 1987/1993, p. 23.
4.1.1. Example c'est beaucoup trop grand
The example in French c'est beaucoup trop grand (it is much too large) contains several
aspects that are interesting to present while remaining simple enough. The analysis dynamics
is activated with the task of analysing the form c'est beaucoup trop grand. It is best to
look, phase after phase, at the states reached by the process and to comment on them.
The overall principle of the B2-B3 dynamics is that, phase after phase, channels take
hold of longer and longer parts of the received utterance. This begins with the smallest
discernible units, that is, the smallest segments of the adopted coding, here letters.
Channels are instated, each taking hold of (and accounting for the analysis of) a 'span' in
the utterance. A span is defined by a start and an end. The start is the rank of the first
letter of the span, and the end that of the last one. For example, in form "le soleil brille"
("the sun shines"), span <1-2> is the initial "le" and span <6-7> is the "le" of "soleil".
Figure 7 c'est beaucoup trop grand after one computation phase
The figure above is produced mechanically, and so are the following ones. They propose
successive views of the heuristic structure which, phase after phase, analyses the
utterance c'est beaucoup trop grand. They display all the channels, but agents remain
elided to reduce overload and confusion. The vertical axis maps onto time, which runs
from top to bottom. Smaller-span channels are at the rightmost side and, on the left, are
the channels spanning the longest parts of the utterance that could be analysed at a given
phase. In other words, the maxima of structures (which are the analogs of the roots of
generativist trees), which we are used to seeing at the top, are here presented at the left.
This disposition is adopted to let the lists of exemplars (results at channels), which are
needed here, develop downwards.
In form c'est beaucoup trop grand the first computation phase identified all occurrences
of terms existing in the plexus. For each, a channel is built, the span of which is the
bounds, in the analysed form, of the occurrence in question.
In the figure, for example, the group:
3 1 (beaucoup) <7-15>
302 1 [beaucoup]
means that channel 3 was built in phase 1, spanning from letter 7 to letter 15 in the
form. The content of this span is (beaucoup)115. This group also signals that span
(beaucoup) is attested by result 302, produced in phase 1 and resulting from term
[beaucoup] which is present as such in the plexus. So far, invention is not very great: the
first phase simply picks up homographic matches between the analysed form and the
plexus. This is called installation.
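Installation can be pictured with a small sketch (Python). The mini-plexus below is invented, and the spans computed here do not take in the trailing space that the model's own spans include, whence <7-14> instead of <7-15> for (beaucoup):

```python
# Sketch of the installation phase (illustrative only): every occurrence of a
# plexus term in the received form sets up a channel over the matching span.
plexus_terms = ["c'est", "beaucoup", "trop", "grand", "beaucoup trop", "a"]
form = "c'est beaucoup trop grand"

def installation(form, terms):
    """One channel per occurrence of a plexus term in the form (1-indexed spans)."""
    channels = []
    for term in terms:
        start = form.find(term)
        while start != -1:
            channels.append((start + 1, start + len(term), term))
            start = form.find(term, start + 1)
    return sorted(channels)

for start, end, term in installation(form, plexus_terms):
    print(f"<{start}-{end}> ({term})")
# <1-5> (c'est), <7-14> (beaucoup), <7-19> (beaucoup trop), <9-9> (a), ...
```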
Note in the rightmost part of the figure a number of small-span channels; for example,
in channel 5, segment (a), which is extracted from "grand" (En. large, great), is found to
coincide with term [a] (En. has), which is present in the plexus and is a form of the French
verb avoir (En. to have). This is an assumption which the process makes; very soon it will be
found unproductive. Should we try to eliminate such hypotheses? The reason might be
to grant priority to maximal terms, that is, when several segmentations are possible, to
keep the one making the longest terms; this would be adopting a longest match
principle.
This principle is efficient most often, but not always. A counterexample is the following
one in Japanese116. The form kô bun shi ryô san must be analysed as
    kô bun shi      macromolecule, polymer
    ryô san         production
despite
    kô bun shi ryô  great quantity of polymer
which the longest match principle would favour, but this would leave san as an unused
residue. Yet the latter analysis would be appropriate in:
    kô bun shi ryô  great quantity of polymer
    wa              the mark which terminates the topic
In order not to discard the analysis that will turn out to be the right one, and to enable the
correct resolution of possible garden paths117, all analyses are kept and, in this
version at least, the longest match principle is not applied.
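The point can be pictured with a small sketch (Python). The romanized forms koubunshi, ryousan, etc. are stand-ins for the Japanese units above, and the toy lexicon plays the role of the plexus terms; in this toy lexicon, san is deliberately not a unit on its own:

```python
# Sketch: keep all complete segmentations rather than applying longest match.
lexicon = {"koubunshi", "ryousan", "koubunshiryou", "wa"}

def segmentations(form, lexicon):
    """All ways to cover the form with lexicon terms, without residue."""
    if form == "":
        return [[]]
    analyses = []
    for i in range(1, len(form) + 1):
        head = form[:i]
        if head in lexicon:
            analyses += [[head] + rest for rest in segmentations(form[i:], lexicon)]
    return analyses

print(segmentations("koubunshiryousan", lexicon))
# [['koubunshi', 'ryousan']]  -- the longest first term would leave 'san' unusable
print(segmentations("koubunshiryouwa", lexicon))
# [['koubunshiryou', 'wa']]   -- here the longest term is the right one
```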
115 Round brackets ( ) relate to a form being analysed: they signal its segmentation by a B2 or B3 agent. Square brackets [ ] relate to the terms of a constructor record in the plexus. The record authorizes the particular segmentation the agent makes.
116 This example is from Zoya Shalyapina of Moscow (verbal communication).
In addition to segments (beaucoup) and (trop), it turns out that segment (beaucoup trop)
also finds a direct attestation in the plexus: term [beaucoup trop]. Here is an illustration
of the minimality suspension principle: overlapping terms may coexist in the plexus.
Channel 7 (beaucoup trop) does this as early as phase 1. To acknowledge this, a B2
agent links channel 7 (beaucoup trop) to channel 9 (beaucoup) and to channel 6 (trop).
This is denoted by the three converging lines at the left of the figure; the centre of such a
'star' denotes a B2 agent. This 'assembly' is not very abductive yet: it only reflects a C-type
record that exists in the plexus.
Near the top, the process distinguished segment (c'est beau) because it finds term
[c'est beau] in the plexus and matches it with a part of "c'est beaucoup" in the received
utterance. This is a pun, and the process will not proceed very long along this track, because
it will not be able to associate this rightwards with (cou), (coup), (coup trop), etc.
So far, all channels are installation channels and all results are installation results. Phase
1 does installation exactly; it does not innovate; it has not abducted anything yet.
grand) for example, are now 21 results, there was just one at the previous phase. For
example, result 396 [trop gentil] was produced in phase 2. It was produced as a
distributionally or constitutionally similar term of result 296 [trop grand].
This is because the agent in charge of similarity suggestion (agent CATZ, cf. appendix),
beside a result existing at a channel, adds phase after phase the most proximal terms of
the plexus which have the same distribution. In this particular case, [trop gentil] is
constitutionally analog to [trop grand] and has been produced for that reason. Here, we
just saw the abductive movement by constructability transfer at work.
Still to be noted under channel 2 is the creation of result 399 [grand]. This creation is
remarkable because it is an occurrence of the abductive movement by expansive
homology. It could work because the plexus contains the constructor records:
C  c'est + grand       → c'est grand
C  c'est + trop grand  → c'est trop grand
in which [grand] and its expansion [trop grand] are homologous. This is an 'expansive
gate', cf. p. 86. Result [trop grand] was present at channel 2; the agent in charge of
similarity suggestion 'abductively' appends its homolog [grand], producing result 399.
Result 399 will eventually have a consequence.
117 The process of analysing an utterance goes down a garden path when, upon a syntactic ambiguity, one of the interpretations is first adopted but is later contradicted as not compatible with some element considered later in the analysis. This contradiction pushes one to restore the analysis that was initially discarded.
Figure 8 c'est beaucoup trop grand after two computation phases
The rightmost bound of the span of channel 9 is fifteen and the leftmost bound of the
span of channel 2 is sixteen. Fifteen plus one is sixteen: these channels are adjacent.
Because of this, the process creates a B2 agent, the mission of which is to try and see
whether the spans of these channels can be assembled – this process assumes
concatenative assembly; different types of assemblies are envisaged p. 248.
The spans of channels 9 and 2 are (beaucoup) and (trop grand), each separately already
attested. The question now is whether (beaucoup trop grand) would be possible and
why. A B2 agent is created to that end. Elided in the figure, it is at the intersection of the
bold lines starting leftward from channel 9 (beaucoup) and from channel 2 (trop grand).
The B2 agent operates as follows: taking, one after another, the results at channel 9 and
likewise at channel 2, it forms all possible pairs and looks up in the plexus whether the
pair occurs as constituents in the same binary C-type record. When this is the case, the
settlement condition for agent B2 is met. The record in question is the settlement
record. The effect of the settlement is to raise a finding at this agent, and the finding is,
in the settlement record, the term which occupies the assembly position. Then this
finding is merged into a result. Here, result 399 [grand] at channel 2 settles with result
416 [un peu] at channel 9 because there exists in the plexus the record
C  un peu + grand  → un peu grand
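The pairing-and-lookup step can be sketched as follows (Python; the handful of C-type records is reduced to what the example needs, and representing records as a dictionary is an assumption of the sketch, not the plexus format):

```python
# Sketch of a B2 settlement (illustrative only): results at two adjacent
# channels are paired and looked up among binary C-type records; a matching
# record is the settlement record and its assembly term is the finding.
c_records = {
    ("un peu", "grand"): "un peu grand",
    ("trop", "grand"): "trop grand",
    ("c'est", "trop grand"): "c'est trop grand",
}

def b2_settle(results_left, results_right, records):
    findings = []
    for left in results_left:
        for right in results_right:
            assembled = records.get((left, right))
            if assembled is not None:          # settlement condition met
                findings.append(assembled)     # the raised finding
    return findings

# Channel 9 (beaucoup) and channel 2 (trop grand), with their abducted results:
print(b2_settle(["beaucoup", "un peu"], ["trop grand", "grand"], c_records))
# ['un peu grand']
```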
Readers viewing this document in colour will note that results 399 and 416 are in blue,
which means exactly that they took part in a settlement; the blue colour denotes
settlement results. The settlement has consequences in the leftmost part of the figure; as
it is hard to read there, here is an enlargement:
22 2 (beaucoup trop grand) <7-25>
495 2 [un peu grand]
7 1 (beaucoup trop) <7-20>
300 1 [beaucoup trop]
412 2 [beaucoup moins]
497 2 [un peu mal]
Figure 9 c'est beaucoup trop grand after 2 phases of computation (enlarged detail)
The two effects of the settlement are: a) a channel, channel 22, is created, which attests
that (beaucoup trop grand) is possible, and b) result 495 [un peu grand] is created at that
channel. This result is the reason why 'one may say' that (beaucoup trop grand) is
possible, that which authorizes this saying. The status of this authorization is, very
precisely: that can be said abductively, because there happens to be a particular
exemplarist reason to do so; and this is exactly what makes the speaker take not too big a risk
with this saying: that it will be accepted and understood in general.
There is only one result at channel 22; this is temporary: in the next phase another one will
be created. In general there can be from one to several results at a channel, which attest
the segment corresponding to the channel's span. Below, the importance, or not, of
having several results will be discussed.
Channels 1 to 6 are in red: they are extinct. These, and the structures which depend on
them rearwards – there aren't any yet in this figure – are extinct: they no longer recruit or
produce. They are extinct because they bear enough results that have already settled. This
participates in an overall activity control of the heuristic structure, which will be treated
in detail below. Channels which stay active are displayed in green.
At phase three of the computation, channel 23 (c'est beaucoup trop grand) was created.
The entirety of the form is now analysed.
After the computation's end, a query issued against the heuristic structure shows the
abductions which were made. The advantage of this new figure is that it displays the
agents – they were absent in the previous ones. In the jargon of this model, the query
requests the model to 'expose' channel 23. It is indeed an exposition of the abductive
reasons for finding the analysed form receivable.
Figure 10 c'est beaucoup trop grand after three computation phases
Exposing channel 23
(c'est beaucoup trop grand)      span of channel 23 (ph 3)
(c'est )(beaucoup trop grand)    how ag 531 segments the span
[c'est][trop grand]              attests the segmentation (finding 684 on record 939)
(c'est )                         span of channel 18 (ph 1)
[c'est]                          attests as setup term 1614 setting up channel 18
(beaucoup trop grand)            span of channel 22 (ph 2)
(beaucoup )(trop grand)          how ag 208 segments the span
[trop][grand]                    attests the segmentation (finding 678 on record 427)
(beaucoup )                      span of channel 9 (ph 1)
[beaucoup]                       attests as setup term 138 setting up channel 9
(trop grand)                     span of channel 2 (ph 1)
[trop grand]                     attests as setup term 628 setting up channel 2
[trop][grand]                    attests the segmentation (finding 682 on record 693)
                                 as per channel 9, already exposed
                                 as per channel 2, already exposed
Figure 11 c'est beaucoup trop grand after three phases, exposition of the reasons
This exposition of the reasons, which was produced mechanically, may be rearranged as
follows:
1   (c'est                 beaucoup trop grand)             channel
2   (c'est )               (beaucoup trop grand)            agent
3   [c'est]                [trop grand]                     assembly attestation
4                          (beaucoup       trop grand)      channel
5                          (beaucoup )     (trop grand)     agent
6                          [trop]          [grand]          assembly attestation
7                                          (trop grand)     channel
8                                          (trop )(grand)   agent
9                                          [trop grand]     assembly attestation
10  (c'est )               (beaucoup)      (trop ) (grand)  installation channels
11  [c'est]                [beaucoup]      [trop] [grand]   installation attestations
Figure 12 c'est beaucoup trop grand after 3 phases, exposition of the reasons rearranged
The new display reveals a tree, the root of which is line 1. Caution: there are cases in
which multiple, compatible analyses overlap. The unique, univocal tree is not an obligatory
theme here.
One also observes that some paths are longer than others. This is not surprising.
One notes that [trop], line 6, attests (beaucoup), line 5 while "trop" also occurs in (trop
grand) line 5. These two "trop" are not in the same positions.
It is interesting to note that the licensing record [trop]+[grand] → [trop grand] is used
two times as a settlement record: line 11 and line 6. Syntax does indeed present this
recursivity. As it happens in the example, on these two neighbouring occasions, in these two
consecutive assembly steps, the settlement record is the same record. It might not be the
case (with another plexus, for the same analysed form). At one of these levels or at both,
there might be more than one settlement record, their sets having an intersection
(as here) or not. In a plexus as sparse as the one used to compute this example118, this
type of resource apt to license expansions (named 'expansive gate' above) being
comparatively rare, the same gate may tend to be reused more than in a more complete
plexus where several of them would be available.
The example c'est beaucoup trop grand, which has just been commented on, features assembly
steps with two constituents only, whence the "2" in "B2 agent", the agent that makes
these assemblies. Agent B2 works with plexus records which are themselves binary.
118 1830 terms, 1250 records only.
4.1.2. B2 agent, B3 agent
The model recognizes the necessity of ternary assemblies along with binary ones. The
question of n-arity, as a necessity in this model and as a property of branching accepted
or refused among the generativists, is discussed in detail p. 371. An agent, agent B3, is
dedicated to ternary branching; we will see it at work in ensuing examples. Its principle
of operation reproduces that of the B2 agent, except that adjacent channels are now
taken three at a time to make a B3 agent. Settlement then occurs between three results, one in
each of the three channels, and the settlement record must now be a ternary C-type
record, that is, one with three constituents.
One will have noted that an analysis as performed by a B2-B3 process is a bottom-up
one. Plausibility demands it, and it cannot be otherwise: there being no explicit
grammar, there are no generative rules; it is not the case that we would have a generic
rule giving the a priori schemas of a sentence, of the type S → NP VP, whereby a top-down
process could start.
4.1.3. Limits and merits of B2-B3
Analysis with B2-B3 does not respect grammatical agreement. Une beau journée is
accepted as easily as une belle journée.
B2-B3 also lacks group sensitivity: it has no notion of conjugation groups in French or
declension groups in Russian; it abducts inflexions too freely with respect to what
speakers do (cf. p. 169).
These two defects occurring simultaneously is no surprise: both have something to do
with systemic analogy. B2-B3 fails on agreement and declension classes because it takes
no account of systemic analogy. Agent ANZ (below) takes account of systemic analogy,
but it is not capable structural productivity (syntax). In Chap. 5, I show a first
association of these two productivities, agent AN2, which is capable of some syntax and
observes agreement. But the conjecture is rather that a better solution would require a
revision of the very structure of the inscriptions: the current design of the exemplarist
constructions (C-type record) would not be sufficient.
Coreference in a broad sense (anaphor, relative subordination, etc.) is not covered. Here
again, advances on the structure of inscriptions are a prerequisite.
With these limits and in spite of them, B2-B3 has the merit of performing syntactic analysis
without categories or rules. It is a concrete application of the proximality principle (cf.
Chap. 3). It is an operable implementation of a situated linguistics, productive within
contingency. Two attempts, as far as I know, share this character: that of Skousen and
that of Freeman, which will be contrasted with this work below.
4.1.4. Syntactic analysis redefined
What is the purpose of syntactic analysis? Not to determine grammaticality. The success or failure of the analysis of a particular utterance depends on its compatibility with the plexus, so that there is a kind of de facto grammaticality, but we know that its precise definition is not possible, even in a language as constrained and normative as French. Even if it were possible, it should not be done: firstly because it is not necessary within this model, and secondly because it would bring a risk of sterility with respect to variation and to learning.
The final utility of analysis is meaning. As long as the model does not cover the computation of meaning, one is never sure that the attestation of an utterance is made for 'good' reasons. This is the current limit of this model's development.
When the model's scope is broadened to encompass meaning, it will hopefully be possible to observe heuristic paths directed by meaning, concurrent with and simultaneous to those directed by form, and also, still hopefully, heuristic paths that associate both.
If this works out, form and meaning will cooperate in interpretation. There will be cases in which syntax plays a minor role, thus validating the old idea of 'connection' in Tesnière119. There will be no general prevalence of one over the other: the respective contributions of form and meaning will be a matter of observation, case by case.
When I write 'heuristic path', it is not a metaphor; I mean, very practically, the process of edification (as illustrated above with B2-B3), assisted by recruiting processes (which will be studied below): that is, the structures comprising the agents created by edification and by recruitment (applying abductive movements), together with the associated results produced by the settlement process.
4.2. About non-transformation
4.2.1. Analogies that motivated transformations
Transformations appear in Chomsky's writings publicly in 1957 (Syntactic Structures)
and non-publicly as early as 1955 (The Logical Structure, published in 1975 but written
in 1955). The reason for transformations is that groups like120:
they arrive
do they arrive
they can arrive
can they arrive
they have arrived
have they arrived
they are arriving
are they arriving
demonstrate a systematicity for which the theory must provide an account. Now a grammar which is only syntagmatic provides for this poorly, in any case very far from the simplicity expected from a theory. Newmeyer will later remind us121 how the introduction of transformations responded to a simplicity requirement.
119. In Alfred chante, there are three elements, says Tesnière: Alfred, chante and "the link that unites Alfred and chante, and without which we would have only two independent ideas, without relation to one another, but not an organized thought." This is the link which Tesnière calls connection. … The connection sets up automatically between some parts of speech without any mark having to be involved. Lemaréchal 1989, p. 58.
120. Chomsky 1957/1969, p. 71. This example is taken from among numerous other possible ones.
121. "Chomsky did not question in Syntactic Structures that phrase structure grammars are capable of weakly generating the sentences of English. He rather argued that they can do so in a cumbersome fashion and, furthermore, do not come close to assigning the correct structural descriptions to the generated sentences. … Chomsky's arguments (auxiliation and passive in English) for transformational rules in Syntactic Structures were all simplicity arguments, that is, arguments appealing to weak generative capacity. They all involved showing that a grammar with phrase structure rules alone required great complexity, a complexity that could be avoided only by the positing of a transformational rule". Newmeyer 1986, p. 22-23.
The examples above are analogies, the same ones as Bloomfield's (cf. supra, p. 34). So the facts motivating the introduction of transformations are analogies; analogies involving form and meaning, even if Chomsky, as we saw, refuses the meaning content of analogy and thence disqualifies analogy. He will adopt generative rules and, for what matters here, transformations. Let us call analogies such as those above "analogies which motivated transformations".
How should a theory, which refuses categories and rules and intends to account for
productivity with analogy, treat such systematicities? The first idea is that, since the
analogies which motivated transformations are analogies, the theory must show how it
solves the corresponding analogical tasks. Facing a question like:
X : Pauline sends the letter :: a toy is offered by Alex : Alex offers a toy
if it responds X = the letter is sent by Pauline (and thousands of similar answers) it will
be validated. We shall see this idea followed by Itkonen (p. 190). This path is not
entirely appropriate because it is not typical of the linguistic knowledge of the speaker;
it is typical at best of his epilinguistic knowledge122, or even of his metalinguistic
knowledge. This is not what we must account for. We must account for the fact that if a
speaker can understand the utterances (a) Pauline sends the letter, (b) Alex offers a toy,
and (c) a toy is offered by Alex, then he can also understand (d) the letter is sent by
Pauline.
To this end, it is not necessary to solve analogical tasks123 but to know how to interpret and produce utterances such as (d) by taking advantage of utterances such as (a), (b) and (c). To be more precise, it is not even necessary to have (a), (b) and (c) available, which would already, too conveniently, supply all the required terms (the lexical material) within the required constructional frames.
4.2.2. Jean voit Jeanne, Jeanne est vue par Jean
An example will show the mechanism. It bears on the French plexus in which it
concerns only an excerpt124, it is built on the following constructor paradigms (each line
is a C-type record):
1391  j'appelle + Jean
1392  je vois + Berthe
1393  je retrouve + Victor
1394  j'attends + Berthe

1395  Victor + arrête !
1396  Berthe + viens ici !
1397  Jean + ne touche pas à ça !

1398  c'est + à + Victor
1399  c'est + à + moi
1400  c'est + à + Jeanne
1401  c'est + à + Alfred

1402  Jean + est soigné + par Jeanne
1403  Jean + est séduit + par Berthe
1404  Victor + est vu + par Berthe

1405  il + est soigné
1406  il + est occupé
1407  il + est vu
1416  elle + est vue

1408  Jeanne + voit + Berthe
1409  Victor + regarde + Jeanne

1410  Alfred + marche
1411  Jeanne + mange

1412  je + l'ai + vu
1413  je + l'ai + mangé

1414  par + Jeanne
1415  par + Berthe

122. The notion ["epilinguistic"] is from A. Culioli, who uses this term to designate the unconscious knowledge which any speaker has of his language and of the nature of language ("language is itself an activity which supposes a permanent epilinguistic activity, defined as an unconscious metalinguistic activity"). Auroux 1989, p. 35.
123. See a definition of 'analogical task' at the beginning of Chap. 5.
124. Numbers heading the lines will serve as a reference in the exposition (I have adopted the record numbers which are internal in the model's implementation; this explains the large values).
This excerpt is not isolated in the plexus; the records in it are linked with the rest in
multiple manners. It is extracted here only for exposition purposes. Its scope is limited,
but it presents the scattering properties which it is useful to illustrate. It contains:
- proper names and the pronoun moi, which are distributionally similar for varied and converging reasons: direct utterances like 1408, passive utterances like 1402, and other ones like 1395, 1398 or 1410,
- some direct utterances like 1408,
- some passive utterances like 1402,
- the analysis of two prepositional syntagms, 1414 and 1415.
The lexical material is well scattered across all records. There might be more of it, and the reasons for distributional similarity might be even more diverse: the example would only be more tedious, with no functional incidence on the result; there would only be a possible incidence on the computation load and on the number of phases needed to obtain a result.
Tests run on the model in the example's domain show in various ways that it is productive of direct utterances and of passive utterances, the appropriate abductive movements operating in each case on all the resources indifferently, that is, on all the paradigms.
In the test report below, the convention already introduced still applies: round brackets ( ) denote an analysed form and show its segmentation by a B2 or B3 agent, and square brackets [ ] mark C-type records in the plexus which license the form by justifying its segmentation.
Test 1   ph. 1   (Victor) (est vu) (par Berthe)
                 [Victor] [est vu] [par Berthe]         direct attestation without any abduction

Test 2   ph. 2   (Jean) (est vu) (par Jeanne)
                 [Jean] [est séduit] [par Berthe]       1st abductive licensing
         ph. 2   [Victor] [est vu] [par Berthe]         2nd abductive licensing

Test 3   ph. 4   (Berthe) (est vue) (par moi)
         ph. 4   ((par) (moi))                          2nd level segmentation
                 [Victor]                               abductive licensing of 2nd lev. constituent
                 [Jean] [est séduit] [[par] [Berthe]]   1st abductive licensing of the whole
                 [Victor] [est vu] [[par] [Berthe]]     2nd abductive licensing of the whole

Test 4   ph. 2   (Jean) (voit) (Jeanne)
                 [Victor] [regarde] [Jeanne]

Test 5   ph. ?   (Jean vu par Berthe)                   agrammatical utterance
                 -                                      20 phases run with no licensing
In test 4, the move from Jean to Victor is licensed by 1403-1404, which are passive forms; that is, resources of passive (oblique?) forms also serve to license direct forms.
Test 5 shows that the model is sensitive to grammaticality: it will not accept just anything, and it was worth verifying this.
Without any 'transformation', the attestation of a few direct utterances and of a few oblique ones, not excluding other types of utterances and syntagms, suffices to provide a pool of lexical-constructional resources from which to abduct similarities of behaviour.
In this example, forms like (est vu), (est séduit), which are viewed as constituent terms, are not in turn analysed into shorter terms, although such an analysis would of course be possible. In the restricted scope of the demonstration which is sought, the dynamics is content with this "suspension of minimality" (cf. supra). The analysis of these terms is quite unimportant here and would not contribute to the intended demonstration125, but it would matter as soon as we also undertook to show productivity among forms like voit, verra, vit, a été vu, ont vu, ont été vus, ont été vues, aurait vu, etc.
125. The non-analysis of (est vu) and (est séduit) illustrates the principle of minimality suspension: the terms are analysed only inasmuch as their analogical mappings require it.
So the proposal is to refrain from defining 'transformation', whether in the sense of Harris and Gross or in that of Chomsky: the computations applied to the constructional exemplars of the plexus (C-type records) provide for the needs.
The treatment of the analogies which motivated transformations may be summarized by the following three clauses: i) the plexus contains exemplars of constructions: affirmative, interrogative, passive, etc., ii) the lexical material is reasonably scattered among them, and iii) ordinary computations are used. This yields effects of cross licensing among the various construction types. This way of proceeding may be viewed as another figure of integrativity in this model.
Seen from outside, these effects may lead one to think that they rest on abstract schemas, of passivization applying to direct forms for example, but there is nothing of the kind: just abduction based on exemplars.
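As an illustration of this cross licensing, here is a small, hedged Python sketch. Term similarity is abducted, very crudely, from shared positions in exemplar records, and a new utterance is licensed by any exemplar whose terms are pairwise identical or similar, whatever the exemplar's construction type. The records and the similarity test below are mine, and much coarser than the model's staged abductive movements, so the sketch over-licenses; it is only meant to show that no passivization schema is involved.

    # Hedged sketch, not the model: cross licensing without transformations.
    # Two terms are deemed similar when they occur in the same position of two
    # records that share another term in that other position.

    RECORDS = [
        ("Victor", "est vu", "par Berthe"),    # passive exemplar
        ("Jean", "est séduit", "par Berthe"),  # passive exemplar
        ("Victor", "regarde", "Jeanne"),       # direct exemplar
        ("c'est à", "Jeanne"), ("c'est à", "Victor"),  # scattered lexical material
    ]

    def occurrences(term):
        return [(len(r), i, r) for r in RECORDS for i, x in enumerate(r) if x == term]

    def similar(a, b):
        if a == b:
            return True
        for la, ia, ra in occurrences(a):
            for lb, ib, rb in occurrences(b):
                if la == lb and ia == ib and any(ra[j] == rb[j] for j in range(la) if j != ia):
                    return True
        return False

    def licensing_records(utterance):
        return [r for r in RECORDS
                if len(r) == len(utterance) and all(similar(u, t) for u, t in zip(utterance, r))]

    print(licensing_records(("Jean", "est vu", "par Berthe")))
    # Both passive exemplars license it; so does the direct one, via Victor:
    # an (over-generous) effect of cross licensing among construction types.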
At this point of the development, we must resist the temptation to consider that inscriptions, by their sole "exemplarist" presence, indicate the swaps that are possible between terms and their positions. Doing so would lead us to define positions and thence to stipulate the properties required of their potential occupiers; this would imply reintroducing categories, which would be a regression, since a dynamics of copositionings suffices to account for the systematicities at stake.
The solution indicated applies to all the analogies which motivated transformations, and they are numerous: passivization, negation, relativization, question formation, fronting of various kinds, etc. Actually, their set is open, and it evolves as a speaker changes his speaking habits and as his language evolves.
Several recent grammatical theories do not postulate transformations. The first one to
dispense with them was the Role and Reference Grammar (RRG) of van Vallin126.
Another theory doing without transformations is the Autolexical Syntax of Sadock
(1991). It is worth noting that both comprise several components, four components
each, although they are not the same. They are thus 'pluristructural' because they have
several trees which concur in describing the structure of an utterance: each tree accounts
for one aspect. None of these structures suffices alone but their union succeeds,
according to their authors, in accounting for all the useful properties127.
Something similar shows up in the more recent proposition of Jackendoff: his "parallel
architecture"128 which is also pluristructural and transformation-free.
A variety of transformations is however maintained by Chomsky up to the Minimalist
Programme (Chomsky 1997a) with the operation MOVE ALPHA.
The suggestion is that the dismissal of transformations is a corollary of pluristructural modelling. Transformations seem to be wanted when one adopts a univocal modelling approach. The conjecture would be the following: it is when you want to govern an utterance by a unique tree that you are most prone to introduce transformations129.
Pluristructural grammars reject transformations, as this model does, but not for the same reasons: they make a linguistics of language and acknowledge the theme of theory economy. They succeed without transformations at the expense of a plurality of trees. This model, in turn, does not make a linguistics of a language but a linguistics of acts, and does not stress theory economy. This allows it to reach its goals without transformations and without having had, so far, to postulate multiple structures. It is ironic to note that the non-recognition of the economy principle gives birth to a model which is remarkably economical in its own way. However, it is fair to note also that the coverage is so far restricted to morphology and syntax, and not even to the entirety of these. It may be that the extension within morphology and syntax, and the extension to phonology and semantics, will bring in pluristructural viewpoints, without these necessarily having to be embodied in trees belonging to categorically differentiated planes as is the case with van Vallin, Sadock or Jackendoff.
126. van Vallin 1977.
127. Robinson indicates that RRG does indeed need several levels of syntactic representation (S. Robinson, review of van Vallin 1997 in Language vol. 75 # 3, Sept. 1999), while the Autolexical Syntax of Sadock requires trees with several levels.
128. Jackendoff 2002, Foundations of Language, chap. 5, p. 105.
129. The polychromous trees grammar of Cori and Marandin (cf. for example Cori 1998) does not seek to give a particular treatment to the analogies which motivated transformations. It happens that it does not have transformations. This case, then, neither confirms nor infirms the conjecture "the dismissal of transformations is a corollary of pluristructural modelling".
From all this it follows that deciding whether "the main, declarative, affirmative, active clause is a more basic kernel type, or a more "neutral" pattern in reference to which all other syntactic types may be described"130, ceases to be a question.
In a plexus, there are propositions of all these sorts. The ability of this speaking subject to constitute paradigms that are constructionally homogeneous (paradigms of interrogations, of imperatives, of passive constructions, of utterances topicalized by fronting, etc., and also, of course, paradigms of "main, declarative, affirmative, active clauses"), added to the fact that some terms occur in propositions of several of these sorts, is the base on which abductive computations prove able to license infinitely many other propositions. Licensing may draw on utterances of any sort to the benefit of utterances of any sort, even if some of these sorts have a heavier cognitive weight and thence license more often; but this is not explained by categories, sorts and rules, nor even by frequencies or probabilities: it is explained by proximal exemplars and occurrences, and by proximal abduction.
130. Givón 1979, p. 45.
4.3. John is too stubborn to talk / to talk to / to talk to Bill
4.3.1. Scope and intent
In Chapter 1, we saw the limits of categories of various sorts, including thematic roles.
About the latter, here is an example from Chomsky and the associated argument, as
reported by Auroux.
One of the typical approaches of the Chomskyan school of thinking in favour of
innateness amounts to invoking the lack of another available explanation. It may be circumscribed in the following argument. Argument ab absentia in favour of innateness: X, Y, etc., have property P; now, we have no explanation for property P; therefore, P is generated by an innate mechanism.
One may take as an example the famous argument about John ate which is often found
in Chomsky in support of the thesis of the poverty of the stimulus, and which he uses
again for example in Chomsky 1990b, p. 36-37. Classically, Chomsky gives the
following examples:
(1)
John ate an apple
(2)
John ate
(3)
John is too stubborn to talk to Bill
(4)
John is too stubborn to talk to
The argument is about explaining how a subject who never heard (4) may produce or
understand it. The empiricist will invoke analogy: (1) is to (2) as (3) is to (4)
(suppression of a complement). But, as Chomsky points out, John is subject of ate in
(1) and (2), of talk in (3), but not in (4); (4) is a new configuration, therefore something
else than analogy is needed to explain that the child understands (4); so, as we do not
see how he might understand, it must have to be innate131.
Here, I let Auroux pursue his track – he will show that the ab absentia argument is not sufficient to conclude to innateness – and I undertake to demonstrate that analogy does allow us to explain with precision how a speaker who never heard (4) may relate it to inscriptions in his linguistic knowledge. I shall show how this way of letting the already known license novelty recognizes in each case who talks and to whom, in other words, who the agent is.
The way to succeed in this is analogical but, counter to the words of an ironical Chomsky reported by Auroux, it does not lie in trying to see a "suppression of complement" that would be licensed by the fact that (1) would be to (2) as (3) is to (4). Rather, much in the way passive was treated supra, it consists in resting on computations applied to a set of records in the plexus. This will yield integrative effects which are "naturally" sub-categorizing; they will be respectful of the agentive orientation.
For its processing, this case will be grouped with another classical one: John is easy to
please - John is eager to please, which is similar in a way and for that reason integrated
into the same experiment. The latter question is known in the literature as that of
control:
With "control" one refers to regularities of the type: J'ai promis à Pierre de venir / J'ai
permis à Pierre de venir (I promised Peter to come / I permitted Peter to come). The
subject of the infinitive clause is not the same in both utterances. This difference may
not be predicted from general syntactic phenomena because syntax, in this case, rather
perceives similarities between the two verbs. The difference then has to derive from
individual (lexical) properties of terms promettre and permettre. So the notion of
control tells that a defined verb has the power to attribute a defined reference to the null
subject of the complement infinitive proposition, by selecting to that end such or such controller: subject or complement in the main clause; this will depend on the particular verb. Milner 1991, p. 18.
131. Auroux 1998, p. 88.
The two cases are different but both present the following similarities: i) utterances in which the agent of the second verb is the subject of the first one, and ii) utterances in which the agent of the second verb is not the subject of the first one. These critical pairs have the same syntax only if one adopts a formal and categorical vision of syntax. The proposed direction for handling these cases consists rather in recognizing that speakers do not do that, because the perception they have is informed with meaning, and they make structure mappings only between utterances that deserve it, taking their meanings into consideration, in particular the agentive orientation of the verbs.
The example John is …, as was pointed out by Chomsky, has a further interest: it presents a "non-monotonicity" in the following way:

Utterance                                    Who talks
(1) John is too stubborn to talk             John talks
(2) John is too stubborn to talk to          someone else talks
(3) John is too stubborn to talk to Bill     John talks

The agent of talk changes twice as the utterance is prolonged. To do justice to this complication, a model with approximate commutations will not suffice; it must be very precise in the account it takes of something which underlies these utterances.
generativist propositions, this is the phrase structure. It is postulated, explicit, and its
very definition supposes grammatical categories for terminal points (lexemes,
morphemes) and categorial labels for syntagms. Without these, its definition could not
even be stated. There is no intent here to deny the phrase structure: something of that
kind is obviously at work in the dynamics of language acts. I rather undertake,
abstaining from reifying it, to render its effects with simpler theoretical postulations:
a) inscriptions which refrain from making improper analogies,
b) the already described abductive movements (the first three ones only contribute
here, transposition does not).
What I intend to show with these examples is that if, in the plexus, paradigms make no
confusion as to the agentive roles, then no confusion either will be made about new
utterances proposed for analysis: the analysis process will find them licensed by
licensing records that are compatible with them in this respect. If this obtains, it means
that, for this model, the differentiation of agentive roles, if granted once, is then
productively prolonged with robustness. This property will be all the more remarkable if
it obtains against the severe non-monotonicity described above.
The focus is now placed on a plexus excerpt pertinent to these examples. The example is built in the English plexus in order to be faithful to the utterances, because the construction with a postponed preposition is particular to English. Below, each paragraph is a plexus paradigm; the presentation does not show precisely the graph of the paradigmatic links. Graph structure, and likewise familiarity orientation (there is none here), are not very important in this case, as the paradigms are small.
As in the example in the section on "non-transformation" above, the lexical material directly useful in this example is complemented with terms foreign to it, and the set of terms thus obtained is scattered among paradigms of different constructions: ones that are critical for the examples and other construction types. This helps make the experiment less ad hoc and enhances its demonstrability. Records contribute in either or both of the following ways: i) they provide a base for distributional similarity of terms, that is, a base for the suggestion of similarities, and ii) they provide occasions of constructability transfer and of expansive homology (that is, they provide expansive gates) to enable the B2-B3 analysis process.
4.3.2. Excerpt of the English plexus
The major principle observed in this zone of the plexus is that, for constructions with a second verb (V2), we keep in separate paradigms:
- records in which the agent of V2 is the subject of V1 (marked + below) and
- records in which the agent of V2 is not the subject of V1 (marked - below).
This gives pairs of paradigms like (P01+, P01-). This principle also applies to constructions which assemble forms that can only be constituents of the previous ones. Other paradigms, less proximally affected by role differentiation, do not undergo this distinction. Their use by the computations will occasion some leakage in the "categoricity" which interests us here, but these effects will be second order, and the first order, which is guaranteed by the + and - pairs, will finally ensure well separated results, as will be shown.
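The following toy Python sketch, which is not the thesis' implementation, only illustrates the principle just stated: the '+' or '-' orientation travels with each exemplar, so a new utterance simply inherits the orientation of whichever exemplar licenses it. Licensing is caricatured here as matching everything except two open slots; the exemplars and the test utterance are chosen for the illustration.

    # Toy illustration only: the agentive orientation is read off the
    # exemplar that settles, without any category or rule.

    EXEMPLARS = {
        ("Alice", "is", "willing to walk"): "+",       # subject of V1 is agent of V2
        ("Fido", "is", "too big to take away"): "-",   # someone else is agent of V2
        ("French", "is", "easy to learn"): "-",
    }

    def licensing(utterance):
        subj, cop, pred = utterance
        for (s, c, p), orientation in EXEMPLARS.items():
            # leave the subject and the final verb open, require the rest to match
            if c == cop and p.split()[:-1] == pred.split()[:-1]:
                return (s, c, p), orientation
        return None, None

    record, orientation = licensing(("John", "is", "too big to take along"))
    print(record, orientation)
    # ('Fido', 'is', 'too big to take away') -   i.e. someone other than John does the taking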
As an organization measure, the samples below are arranged into 'verbal constructions' and 'non-verbal constructions'. This does not imply that 'verb' has the slightest place in the theory, as the reader now understands well.
4.3.2.1. Excerpt of the English plexus, verbal constructions
P01+ [the agent of V2 is the subject of V1]
63  Alice + is + willing to walk  →  Alice is willing to walk

P01- [the agent of V2 is not the subject of V1]
98  the job + is + too big to deal with  →  the job is too big to deal with
97  Al + is + too dishonest to work for  →  Al is too dishonest to work for
58  Fido + is + too big to take away  →  Fido is too big to take away
52  French + is + easy to learn  →  French is easy to learn
54  Spanish + is + easy to understand  →  Spanish is easy to understand

P02+ [the agent of V2 is the subject of V1 (expected on the left)]
86  too stubborn + to + talk  →  too stubborn to talk
87  too lazy + to + work  →  too lazy to work

P02- [the agent of V2 is not the subject of V1 (expected on the left)]
55  too big + to + take away  →  too big to take away
56  too difficult + to + understand  →  too difficult to understand
66  too difficult + to + please  →  too difficult to please

P03- [the agent of V2 is not the subject of V1 (expected on the left)]
93  too + dishonest + to work for  →  too dishonest to work for
94  too + large + to deal with  →  too large to deal with
95  too + big + to deal with  →  too big to deal with

P04+ [the agent of V2 is the subject of V1 (expected on the left)]
46  willing + to + please  →  willing to please
45  eager + to + win  →  eager to win
47  willing + to + walk  →  willing to walk
48  trying + to + understand  →  trying to understand

P04- [the agent of V2 is not the subject of V1 (expected on the left)]
43  easy + to + understand  →  easy to understand
44  difficult + to + do  →  difficult to do
53  difficult + to + learn  →  difficult to learn

P06
30  John + is + serious  →  John is serious
31  Alice + is + stubborn  →  Alice is stubborn
57  Fido + is + big  →  Fido is big
32  London + is + big  →  London is big
96  the job + is + big  →  the job is big
91  French + is + easy  →  French is easy
33  Tokyo + is + too big  →  Tokyo is too big

P08
35  meet + with + Alice  →  meet with Alice
36  speak + to + her  →  speak to her
34  talk + to + him  →  talk to him
90  talk + to + Pamela  →  talk to Pamela

P10
64  I + seldom + talk  →  I seldom talk
65  I + often + understand  →  I often understand

P12
80  I + talk to Pamela  →  I talk to Pamela
79  I + talk to him  →  I talk to him
68  I + talk  →  I talk
73  you + go  →  you go
69  I + see  →  I see
75  you + accept  →  you accept
74  you + apologize  →  you apologize

P13
81  he + is  →  he is
82  he + will be  →  he will be

P14
40  see + daddy  →  see daddy
41  understand + French  →  understand French
42  please + him  →  please him

P16
49  to + see  →  to see
50  to + go  →  to go
51  to + understand  →  to understand

P20
88  to + go + with  →  to go with
89  to + work + for  →  to work for
92  to + deal + with  →  to deal with

P22
61  don't + talk  →  don't talk
62  don't + talk to him  →  don't talk to him
67  don't + go  →  don't go
4.3.2.2. Excerpt of the English plexus, non-verbal constructions
P50
37  too + big  →  too big
38  too + lazy  →  too lazy
70  too + difficult  →  too difficult

P52
71  very + stubborn  →  very stubborn
72  so + difficult  →  so difficult

P56
76  happy + to + go  →  happy to go
77  ready + to + go  →  ready to go

P58
59  me + and + Alice  →  me and Alice
60  me + and + Bill  →  me and Bill
The plexus sample contains the following expansive gates (cf. section 3.6.4.2. Expansive gate, p. 86), the part which is not underlined being the expansion:
- too big (records 32 and 33)
- talk to him (records 61 and 62)
- too big to take away (records 33 and 58)
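As an illustration only (the notion is defined in section 3.6.4.2, and the model's own detection is finer), gates of this kind can be read off a record set with a crude containment test: record B offers a gate relative to record A when, at some slot, B's constituent reappears as an expansion of A's constituent, the two records having the same arity. The Python sketch below recovers the three gates listed above, plus one extra pair (32, 58) that the cruder test does not filter out.

    # Illustration only: a crude containment test for expansive gates.

    RECORDS = {
        32: ("London", "is", "big"),
        33: ("Tokyo", "is", "too big"),
        58: ("Fido", "is", "too big to take away"),
        61: ("don't", "talk"),
        62: ("don't", "talk to him"),
    }

    def expands(shorter, longer):
        """True when 'shorter' occurs as a contiguous word span inside the longer constituent."""
        sw, lw = shorter.split(), longer.split()
        return len(sw) < len(lw) and any(lw[i:i + len(sw)] == sw for i in range(len(lw) - len(sw) + 1))

    def expansive_gates(records):
        gates = []
        for ia, a in records.items():
            for ib, b in records.items():
                if ia != ib and len(a) == len(b):
                    for x, y in zip(a, b):
                        if expands(x, y):
                            gates.append((ia, ib, y))   # y is the expanded constituent
        return gates

    print(expansive_gates(RECORDS))
    # [(32, 33, 'too big'), (32, 58, 'too big to take away'),
    #  (33, 58, 'too big to take away'), (61, 62, 'talk to him')]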
Several tests were made with this plexus; their results are summarized in the table
below, and then discussed. For five tests, here is a summary execution report which was
mechanically produced; it displays a first level detail of the abductive paths leading to
the results, that is, of their 'reasons'.
In the reports, the round brackets ( ) still apply to the form submitted to analysis and denote its segmentations by a B2 agent or a B3 agent, and square brackets [ ] denote C-type records which license the forms and justify their segmentation.
Mentions at the right are mechanically produced by the model and complement the
explanation of its operation. They may be skipped at first reading; they assume the
understanding of the detail of B2 and B3 agents which is given in an appendix, p. 361.
4.3.3. Test A: John is easy to please
(John is easy to please)                          span of channel 9 (ph 3)
(John )(is )(easy to please)                      how ag 101 segments the span
[Fido][is][too big to take away]                  attests the segmentation (finding 191 on record 58)
(easy to please)                                  span of channel 7 (ph 2)
(easy )(to )(please)                              how ag 60 segments the span
[too big][to][take away]                          attests the segmentation (finding 177 on record 55)
(easy )(to please)                                how ag 96 segments the span
[French][is][easy to learn]                       attests the segmentation (finding 241 on record 52)
[Al][is][too dishonest to work for]               attests the segmentation (finding 263 on record 97)
[the job][is][too big to deal with]               attests the segmentation (finding 271 on record 98)
4.3.4. Test B: John is eager to please
(John is eager to please)                         span of channel 8 (ph 2)
(John )(is )(eager to please)                     how ag 63 segments the span
[Alice][is][willing to walk]                      attests the segmentation (finding 95 on record 63)
(eager to please)                                 span of channel 7 (ph 2)
(eager )(to )(please)                             how ag 38 segments the span
[willing][to][walk]                               attests the segmentation (finding 86 on record 47)
4.3.5. Test 1: John is too stubborn to talk
(John is too stubborn to talk)                    span of channel 11 (ph 2)
(John )(is )(too stubborn to talk)                how ag 110 segments the span
[John][is][ready to accept]                       attests the segmentation (finding 234 on record 83)
(too stubborn to talk)                            span of channel 2 (ph 1)
[too stubborn to talk]                            attests as setup term 169 setting up channel 2
(too stubborn )(to )(talk)                        how ag 108 segments the span
[too stubborn][to][talk]                          attests the segmentation (finding 177 on record 86)
[Clara][will be][ready to apologize]              attests the segm. (finding 236 on record 85)
[Al][was][too stubborn to talk]                   attests the segmentation (finding 246 on record 84)
[Fido][is][too big to take away]                  attests the segmentation (finding 361 on record 58)
4.3.6. Test 2: John is too stubborn to talk to
(John is too stubborn to talk to)                 span of channel 17 (ph 5)
(John )(is )(too stubborn to talk to)             how ag 293 segments the span
[the job][is][too big to deal with]               attests the segmentation (finding 447 on record 98)
(too stubborn to talk to)                         span of channel 16 (ph 5)
(too )(stubborn )(to talk to)                     how ag 202 segments the span
[too][big][to deal with]                          attests the segmentation (finding 439 on record 95)
[Al][is][too dishonest to work for]               attests the segmentation (finding 474 on record 97)
[Fido][is][too big to take away]                  attests the segmentation (finding 477 on record 58)
4.3.7. Test 3: John is too stubborn to talk to Bill
(John is too stubborn to talk to Bill)            span of channel 21 (ph 6)
(John )(is )(too stubborn to talk to Bill)        how ag 374 segments the span
[Al][was][too stubborn to talk]                   attests the segmentation (finding 534 on record 84)
(too stubborn to talk to Bill)                    span of channel 20 (ph 6)
(too stubborn )(to )(talk to Bill)                how ag 210 segments the span
[too stubborn][to][talk]                          attests the segmentation (finding 530 on record 86)
[John][is][ready to accept]                       attests the segmentation (finding 619 on record 83)
[Clara][will be][ready to apologize]              attests the segmentation (finding 620 on record 85)
[Fido][is][too big to take away]                  attests the segmentation (finding 651 on record 58)
4.3.8. Table of results
In the table below, each line is a test: the utterance in the first column is given to the
model for analysis.
Column 2 indicates the expected agent of the second verb (V2). Mention "one" stands
for the indefinite person.
Column 3 indicates the agent of V2 actually found by the model: it is the agent of V2 in
the licensing record, that which settles. The mention is preceded by the number of the
computation phase in which the result is obtained.
For each tested utterance, the process is continued well beyond the first result in order to test the model's resilience: we would not like discordant results to come up too soon behind a first concordant one. So there are several results per test.
In the last column, an = sign indicates that the obtained agent concords with the expected agent: the model analysed well. This is to be understood in the sense that the model matches the proposed utterance with an analog (the settlement record) in which the agentive roles have homologous syntactic manifestations. An X, on the contrary, indicates that the model found a settlement record discordant in this regard.
Mention 'exhaustion' means that the plexus was exhausted: the heuristic process stopped for lack of further data to consider. The English plexus used in this experiment is small; a larger plexus would not reach exhaustion so fast.
Test                                     Expected agent of V2     Ph   Agent of V2 obtained by the model   =/X
A  John is easy to please                one (pleases John)       3    one (takes Fido away)               =
                                                                  4    one (learns French)                 =
                                                                  5    one (works for Al)                  =
                                                                  6    key (fits with lock)                =
                                                                       exhaustion
B  John is eager to please               John (wants to please)   2    Alice (wants to succeed)            =
                                                                       exhaustion
1  John is too stubborn to talk          John (talks)             2    John (ready to accept)              =
                                                                  2    Clara (apologizes)                  =
                                                                  2    Al (talks)                          =
                                                                       one (takes Fido away)               X
2  John is too stubborn to talk to       one (talks to John)      5    key (fits with lock)                =
                                                                  7    one (works for Al)                  =
                                                                  8    one (takes Fido away)               =
                                                                  9    one (learns French)                 =
                                                                       exhaustion
3  John is too stubborn to talk to Bill  John (talks to Bill)          Al (talks)                          =
                                                                       John (accepts)                      =
                                                                       Clara (apologizes)                  =
                                                                       one (takes Fido away)               X

Table 21 John is easy to please, grammatical tests
Test                                   Expected agent   Ph   Obtained agent
L  John is too stubborn to please      ambiguous        3    one (takes Fido away)
   (ambiguous test)                                     5    one (learns French)
                                                        5    one (works for Al)
                                                        6    key (fits with lock)
X  Clara is ready to apologize to      agrammatical          at phase 8, exhaustion without result
   (agrammatical test)
Y  Al is happy to accept with          agrammatical          at phase 8, exhaustion without result
   (agrammatical test)

Table 21 John is easy to please, non-grammatical tests
4.3.9. Results comment and conclusions
Tests A and B: the obtained agent concords with the expected one. The response is satisfying. Test B was analysed easily at phase 2, and test A was more expensive (phase 5). An optimalist interpretation of this cost difference would be that B satisfies the constraint 'it is preferable that the subject be the agent' whereas test A violates it. It would then be the case that the plexus de facto embodies something of that constraint, without this having been sought or prepared. I write in the conditional because this point is merely noticed; it cannot be made a formal proposition without more systematic tests on a larger plexus.
Test 1. Three concordances in phase 2, one discordance in phase 5. The response is well
separated and good.
Test 2. Four concordances between phases 5 and 8, then exhaustion. This is good.
Test 3. Three concordances (ph. 6 and 8) and a discordant result, but later (phase 10).
The separation is good.
Test L. The utterance John is too stubborn to please is deemed difficult to interpret by four speakers (three from the United States, one from England): they find it hard to determine whether John has to be pleasant or someone else has to please him. Some opt for one conclusion, others for the alternative, and the reasons they give are third-order reasons. A model which did justice to this should settle late and perhaps balance the interpretations. Here, a first result is produced at phase three, which is early. The four results obtained between phases three and six all have the same orientation: the model univocally thinks that the point is to please John.
Tests X and Y: two sheer ungrammaticalities are simply refused by the model, which is
good.
Apart from test L, in which results should be late and balanced to reflect speakers' judgements, all other results are good. In its current state, the model is not expected to treat test L appropriately, because the difficulty it poses to speakers is one of interpretation in which agentivity is not the original cause; we should rather see the absence of congruence between 'too stubborn' and 'to please': contradictory conditions between the two hinder the easy stabilization of any interpretation. To render this, a more extended coverage of meaning would be a prerequisite.
It has just been shown, in a series of cases which are complex enough, that if we take account of speaker judgments as to the agent of verb 2, and if we respect them by not treating as analogous inscriptions in which the agent is the subject of the first verb and ones in which it is not, then these separations in the plexus are productively prolonged with robustness.
There are no more categories or rules than previously; here again, it suffices to rely on exemplarist inscriptions among which proximality conditions are allowed to play. What has just been shown is that the same dynamics as before can also produce effects of agentive roles (or thematic roles, depending on the authors).
Later, to treat these sentences, Chomsky will postulate the "abstract pronominal element" PRO:
… a subject or an object may be an empty element that is mentally represented. More
complex examples show that both simultaneously can be empty elements, as can be
expected. Consider sentences (22) and (23):
(22) John is too stubborn to talk to Bill
(23) John is too stubborn to talk to
We understand these sentences respectively as:
(24) John_i is so stubborn that he_i will not talk to Bill
(25) John_i is so stubborn that one cannot talk to him
These examples are particularly interesting because the subject of the transitive verb is
interpreted differently in both cases: it is understood as designating John in (22) and an
arbitrary person in (23). However, these sentences differ only by the explicit presence
of the object which is overt in (22), but absent in (23). These strange facts also derive
from the binding theory, if we suppose that the "interpreted subject" and the
"interpreted object" are in fact mentally represented, as in (26) and (27), which
correspond to (22) and (23), respectively:
(26) John is too stubborn [PRO to talk to Bill]
(27) John_i is too stubborn [PRO_j to talk to X_k]
What I represented here by PRO must be understood as an abstract pronominal element,
that is, a pronoun without a phonetic content. The binding theory allows PRO to be
bound to John both in (26) and in (27) and another sub-theory, the theory of control,
imposes this binding in (26). Chomsky 1981/1984 (retranslated from French).
To this "abstract element" apply the same critics as those which will be made to the zero
element, cf. section 6.3. Zero (p. 176). The solution I propose also dispenses with
calling on this artefact.
This section has shown how some complex effects, which other theories ascribe to a
syntagmatic structure or to various artefacts like PRO or coreference indices, may be
rendered more simply by a plexus – provided it does not flout speakers' intuitions – and
by simple abductive movements.
Success in the treatment of the three cases John is too stubborn to talk, John is too stubborn to talk to, John is too stubborn to talk to Bill, despite the non-monotonicity (cf. above) in them, shows that the separation of effects does not require reifying structures and basing them on reified categories and rules: they can be obtained with a simpler analogical dynamics.
4.4. Amalgamations, article-preposition contraction in French
Amalgamation phenomena like, in French, the contraction of an article with a preposition (de + le → du, à + le → au, etc.)132 are an occasion of worry for category-based theories, and they constitute a limit of morphemic analysis. Martinet, for example133, describes the problem fairly, regrets that it makes it "difficult, if not impossible to distinguish the successive 'monèmes' in the utterance", but proposes no solution, be it only descriptive. In theories which, in addition, want their descriptions to be univocal, the dilemma becomes intractable: either they analyse (au)(marché) / (à la)(fête) and miss (à)(un marché) / (à)(une fête), or the contrary. In addition, they have to complexify the system of lexical categories. Whatever the option, either generalizations are missed or non-motivated options are imposed. This double bind can be broken only by accepting that analyses (structure mappings, for us) can proceed in both manners: grouping preposition + article, and also keeping them apart, then allowing the article to assemble with the noun if necessary.
132. Similar phenomena are also present in Portuguese, in Gascon and in Catalan, where they have an even broader extension.
133. Martinet 1979, p. 6.
This is what Sadock does. He mentions the question as being, in Hockett, that of the "portmanteau" morpheme:
The term "portmanteau" was first used by Hockett 1947 to describe the behaviour of
French au, which, he argued, had to be seen as a single morph (because it is a single
phoneme) which nevertheless represented a sequence of the two morphemes à and le.
Hockett correctly noted several advantages in this analysis, including the elimination of
a morpheme with otherwise unattested behaviour, i.e. one that took N-bars directly into
prepositional phrases, and the provision of an account for a defective distribution of à,
which occurs before là but not before le, vis-à-vis the majority of the other prepositions
in the language, which occur in both positions. Despite the disarming simplicity and
intuitive appeal of Hockett's analysis, it is not one that could comfortably be maintained
in theories with a strictly hierarchical relation between morphology and syntax. Several
attempts have been made to deny the syntactic complexity of au and to attribute it to a
fresh category that otherwise does not occur except in du, des and aux, or to posit new
mechanisms of grammar to account for it. Sadock 1991, p. 188.
Hockett's solution being incompatible with a "strict hierarchy between morphology and syntax", Sadock, along the lines of his Autolexical Syntax, which makes provision for several trees, each simpler, models the phenomenon with two trees. The structural schema is the following:
[Figure: two superposed analyses of de + le livre: a syntactic tree PP → P (de) NP (Det le, N livre), and a lexical tree in which du is a single word (W) spanning de and le.]
Figure 13 Sadock's treatment of the amalgamation in Fr. de + le → du
In this way, he makes room for the "morpheme" W while rescuing the formulae NP → Det + N and PP → P + NP, since he believes they are necessary.
Without requiring so complex and so formal an apparatus, the model proposed in this work allows either grouping to be made, or both, contingently and occurrentially, depending on the needs. It allows this as a consequence of the principle of multiple analysis and of the minimality suspension principle, which lets terms be defined at various levels without constraining them to any preset minimality. An example will show the process better.
The B2-B3 process is asked to analyse form: à la campagne. An analysis by [à] [la
ville]134 is found in phase 2 but, at phase 4, the same form à la campagne is segmented
in another manner. Figure 14 below "exposes" the "reasons" which the model finds to
license à la campagne (licensing records are in bold typeface).
"Exposition" of channel 12
(à la campagne)
span of channel 12 (ph 2) = form to analyse
(à )(la campagne)
[à][la ville]
(à )
[à]
(la campagne)
[la campagne]
(la )(campagne)
[la][ville]
(la )
[la]
(campagne)
[campagne]
[la][France]
[à][Paris]
[pour][la France]
how ag 181 segments the span
attests the segmentation (finding 355 on record 1388)
span of channel 11 (ph 1)
attests as setup term 264 setting up channel 11
span of channel 4 (ph 1)
attests as setup term 2199 setting up channel 4
how ag 178 segments the span
attests the segmentation (finding 350 on record 1383)
span of channel 9 (ph 1)
attests as setup term 1 setting up channel 9
span of channel 3 (ph 1)
attests as setup term 2198 setting up channel 3
attests the segmentation (finding 704 on record 483)
attests the segmentation (finding 709 on record 490)
attests the segmentation (finding 716 on record 353)
(à la )(campagne)
[à la][ville]
(à la )
[à la]
(à )(la )
[à][la]
[en][France]
[en][ville]
how ag 179 segments the span
attests the segmentation (finding 351 on record 1389)
span of channel 10 (ph 1)
attests as setup term 300 setting up channel 10
how ag 182 segments the span
attests the segmentation (finding 226 on record 40)
attests the segmentation (finding 707 on record 491)
attests the segmentation (finding 711 on record 1385)
Figure 14 Analysis of à la campagne
In another plexus, the order of the licensed segmentations could be different: the form à la campagne might be licensed first and more strongly by an amalgamated record, or might be simultaneously licensed by two records, one amalgamated and the other not. This is contingent and depends on the congruence between a particular plexus and a particular form.
134. Reminder of the bracketing convention: round brackets ( ) denote the segmentations made in the analysed form, and square brackets [ ] mark the terms of C-type records in the plexus whereby the form is analysed (the licensing records). Thus, [à] [la ville] means that (à la campagne) is segmented as (à)(la campagne) and that the segmentation is licensed by the constructor record à + la ville → à la ville.
In the example as computed, the form à la campagne happens to be segmented as (à)(la campagne) and concurrently as (à la)(campagne). Each segmentation allows different licensing records to play. Segmentation (à la)(campagne) is licensed by [en][France] and [en][ville]; in comparable cases it could also be licensed by [au][Canada].
I now conclude that: i) the model is insensitive to the article-preposition amalgamation, which it treats by a double analysis; licensing may be made by records with amalgamation and by records without amalgamation; ii) therefore, the phenomenon of amalgamation does not constitute an obstacle to reaching faster the records which are closest to the task's terms; iii) when the model is extended to treat meaning, the computation of meaning will thus have the best sources available, that is, those which have the greatest congruence with the argument, regardless of this anomaly.
Similar behaviours obtain with other types of amalgamations135. The means utilized to
obtain these results are non-specific. Multiple analyses, as shown here, also happen in
cases without amalgamation.
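The following minimal Python sketch illustrates this double analysis. The records and similarity pairs are invented for the illustration (except à la ~ en, which the exposition above attests): the same surface form receives both the ungrouped and the grouped segmentation, each licensed by different exemplar records, and both are kept.

    # Minimal sketch of the double analysis around (à)(la campagne) / (à la)(campagne).

    RECORDS = [("à", "la ville"), ("à la", "ville"), ("en", "France"), ("en", "ville")]
    SIMILAR = {("la campagne", "la ville"), ("campagne", "ville"),
               ("campagne", "France"), ("à la", "en")}

    def licenses(span, term):
        return span == term or (span, term) in SIMILAR

    def analyses(form):
        words = form.split()
        for i in range(1, len(words)):
            left, right = " ".join(words[:i]), " ".join(words[i:])
            for t1, t2 in RECORDS:
                if licenses(left, t1) and licenses(right, t2):
                    yield (left, right), (t1, t2)

    for segmentation, record in analyses("à la campagne"):
        print(segmentation, "licensed by", record)
    # ('à', 'la campagne') licensed by ('à', 'la ville')      <- article kept with the noun
    # ('à la', 'campagne') licensed by ('à la', 'ville')      <- article grouped with the preposition
    # ('à la', 'campagne') licensed by ('en', 'France')
    # ('à la', 'campagne') licensed by ('en', 'ville')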
In summary, when the question arises of relating a new utterance to its best analogs, that is, the closest ones, those which provide interpretative bases for meaning computation, accidents like amalgamations with a diachronic phonetic reason, or numerous other anomalies whatever their reason, tend to become indifferent.
Thence, in a framework making room for processes concretely at work in a particular speaker, spending time trying to figure out from what components du and au are made becomes futile. The smaller branches which would subdivide these bottoms of trees (or lattices) are useless136.
The case just exposed can also be construed as an expression of the proximality
principle or of the avoidance of totalism: it ceases to be necessary to have a unique
analysis frame which would exhaust the set of all phenomena and anticipate all local
complexities. Local and occurrential connections which look almost ad hoc obtain with
the combined play of mechanisms which are not ad hoc at all: they are non-specific.
4.5. Questions not addressed in this chapter
The treatment of reception acts has been restricted to formal analysis because
interpretation requires meaning issues to be covered, and they are not in the perimeter of
this work. For the same reason, acts of production could not be treated either.
135. It is conjectured, but not yet demonstrated, that similar treatments apply to a number of morpho-syntactic peculiarities and to phonological phenomena.
136. They are useless in morphology and syntax in the service of meaning production, if one adopts an orthographical coding as the base for inscriptions, but different ones might be useful if one envisages relating morphology and syntax to morpho-phonology and prosody in a model that would extend its ambitions in this direction.
Anaphora, relativization, and coreference more generally have not been covered. Remote dependencies are not treated. The conjecture is that the current structure of the C-type record does not suffice: it is too simply Harrissean.
Agreement and concord were not treated because the apparatus of structural analogy alone does not have that power. A step in this direction will be taken in the next chapter by involving systemic analogy.
4.6. Conclusions on structural productivity
The general frame for the dynamics which was defined in Chap. 3 has been applied to
analysis: the present chapter began with the redefinition of analysis as a dynamics of
staggered structure mappings; then an implementation was provided with agents B2 and
B3.
The dynamics demonstrates a base productivity in about the same domain as that of first
period Generativism (Aspects, Syntactic Structures), but this productivity is based on
exemplars and uses proximality. It produces effects of syntagmatic structure without
positing a reified syntagmatic structure, which is more flexible and has several
advantages.
It is not affected by cross-categorial homonymy which is solved easily in context.
It produces systematicity effects between sentences of different types without requiring a transformational apparatus: dispersion-distribution of the lexical material across sentences of different types suffices for systematicity.
Concerning inscriptions which are formally analogous but in which agentivity is
differently disposed (easy to / eager to), provided that they are not made directly
analogous in the plexus, that is, provided that speaker judgments are respected, these
separations are productively prolonged; in each case, novel utterances are licensed by
inscriptions presenting compatible agentive orientations. This provides a correct base
for ensuing interpretation.
In another example, the same prolongation obtains with precision and robustness: it is not compromised by the non-monotonicity of too stubborn to talk / to talk to / to talk to Bill.
The response of the model externally seems to be categorical, but the means to obtain it
are not; they make minimal postulations, in any case much weaker ones than do other
theories which address linguistic productivity with precision.
The same dynamics also succeeds with amalgamations (ex.: de + le → du in Fr.) with
flexibility, and with an apparatus which is non-specific.
It integrates sparse and heterogeneous inscriptions, and it is therefore favourably oriented to explain acquisition (the demonstration was not made in this chapter; it will be made p. 249).
I conclude that the model is satisfying for a substantial part of syntax, and for analysis. Without drawing on the corresponding devices of the grammars, the dynamics based on transitivity, constructability transfer and expansive homology produces a number of grammatical effects: category effects, regularization effects, syntagmatic structure effects, transformation effects, effects of thematic role, effects of structure multiplicity (Sadock, van Vallin), etc.
They are obtained by productive "a-grammatical" mechanisms, although they are externally analysable as grammatical. At this point already, many points of grammar thus appear not as prerequisites to the explanation of the dynamics, but rather as effects of it. More will be shown in the following two chapters.
Chapter 5.
Systemic productivity
Systemic productivity is a dimension of linguistic productivity which has not been well identified. Current theories only grasp it as lying in the margins of structural productivity – the latter very much apparent by contrast – and systemic productivity is touched only indirectly, either via morphology, or via syntactic features smuggled in to address some of its consequences: agreement or concord. Either way, systemic productivity is not studied for itself. From this unfortunate elision there follow, in the first case, stopgap conceptions like improper derivation, in the second case an inadequate treatment of systemic anomaly, and in both cases an approach which is categorical, which is not desirable, as has been shown.
This chapter: i) defines systemic productivity, ii) approaches it with analogy, identifying
for its treatment the abductive movement by transitivity, and the abductive movement
by transposition, iii) defines agent ANZ as the kernel piece of its treatment, iv) applies
agent ANZ to five examples, v) proposes a direction to treat the question of agreement
and discusses it.
5.1. Systemic productivity, definition and explanation
5.1.1. Systems as the locus of a specific productivity
The question of linguistic productivity being posed, it is envisaged spontaneously as the
ability to utter (and receive) novel assemblies. This vision is necessary and was the
subject of chapter 4 where I accounted for it mainly with structural analogy and the
abductive movements by constructability transfer and expansive homology.
But in considering linguistic productivity solely as a question of assemblies, one
neglects to see that the placement of a form in a pluridimensional paradigm (that is, a
system like the verbal paradigm of a Romance language), is a productive process in
itself.
I understand 'placement', in reception, as the assignment of a place in a paradigmatic system to a given form, and, in emission, as the attribution of the appropriate form to a given place. The notions 'paradigmatic system' and 'place in a paradigmatic system' are provisional, what follows being a critique of them; and the conclusion will be precisely that we must produce system effects (without reifying the frames that would define the systems), and consequently produce the corresponding effects of placement.
As a first approach, the question of the placement in a system roughly amounts to
recuperating the 'semantism' that would be associated with a place in the system. We
know what it turns out to be: the mapping between places in systems and their
associated meaning (meanings) is contingent and complex. This is true for example, of
the 'semantism' of verbal tense; as it is for definiteness, number, etc. Contingent and
complex as this association may be, it nevertheless has an inescapable function in
interpretation, because it helps locate terms that are similar in the sense that they are 'of
the same place' and it is exactly via the similarity of their 'locality' or placement that
interpretation may deploy its abductive paths.
The domain of systemic productivity encompasses all systems137, that is, all the tables which may be established in languages such that, for any pair of lines and any pair of terms picked from these lines in the same columns, the meaning ratio in this pair is the same as the meaning ratio in another pair picked from the same lines in another column; and likewise after permutation of 'line' and 'column'138.
To begin with, systems are the usual verbal systems and declension systems. But systems also encompass a vast number of tables which receive less attention because they are less usual or concern fewer forms, like the following ones in French:
S1
la    une
le    un

S2
mieux   bien
pire    mal

S3
plus grand
grand
autant
aussi grand
moins
plus petit / moindre
mineur
inférieur
plus
plus grand
majeur
supérieur
S5
après
avant
égal
suivant
précédent
137 The notion 'system' is a pretheoretical notion used provisorily. Below it will be abandoned for that of 'systemic productivity', which allows us to problematize the dynamics and the cognitive implications of system effects.
138 This last proposition (likewise after permutation of 'line' and 'column') is important. We shall see below that it justifies calling on the abductive movement by transposition.
S5:
avant, auparavant, hier, plus tôt, tôt, plus près, recule
lors, alors, aujourd'hui, en même temps, à égale distance, reste sur place
après, ensuite, demain, plus tard, tard, plus loin, avance
S6: dans, dedans, intérieur, entrer; hors (de), dehors, extérieur, sortir; à côté de, à côté, proximité, passer
S7: vite / rapidement, rapide; soigneusement / avec soin, soigneux; bien, bon
The dimensions of systems are grammatical categories like gender, number, grammatical tense, and person. They may also be sets of what a categorical description would call 'lexical classes', like the rows of system S7 above, which are Adv. and Adj.
5.1.2. Explaining systemic productivity
In a small system, systemic productivity may be considered a small problem: speakers
learn it by rote and there is nothing more to it. The explanation of ensuing acts of
emission and reception would be covered in this way. At the lower extreme, the smallest
possible system is a two-by-two system, that is, a systemic analogy. The speaker forms a
systemic analogy and nothing more: once formed, he can use it.
However, this does not explain the possibility of extension of a system, be it a durable
extension by conventionalization of more forms that append to the system, or an
occurrential extension. One example is the ever-open possibility of metaphor.
Nor does this provide a basis for the differential process of meaning recuperation.
In a large system, all these reasons still hold and disqualify a 'learning by rote' explanation; moreover, learning by rote simply ceases to be possible because of the system's size.
We know that morphology (occasionally syntax) takes over, in proportion to the system's size, by installing in the overt form some marks (affixal marks, for example) which guide the placement of forms in the system. This is an empirical fact. In what way does it constitute an explanation that would remove the need to envisage a properly systemic productivity?
5.1.3. An explanation by structural productivity does not suffice
Then, for instance in a verbal system, the attention focuses on a morphological schema
like:
verbal base + inflection → inflected verbal form.
The question of a possible systemic productivity would then be moot because it would
be replaced by structural productivity. A replacement as simple as this presents many
obstacles.
This schema does not explain the alternation of bases because it does not do justice to a
fact like, in Fr.:
irai is to vais as mangerai is to mange.
This schema also fails with groups (conjugation groups, declension groups, etc.).
Neither does it apply to forms occupying more than one place in a system139: fais, in
written Fr., is a first person or a second person.
This schema cannot apply to systems S1 to S7 above, which present little or no
morphological regularity.
Systemic productivity takes place despite structural anomaly; therefore it cannot be
explained by structural dynamics alone.
5.1.4. Explaining with a dimensional frame
Theories then usually postulate a dimensional frame which underlies the system: they
reify the system. For example, in the Fr. verb, a tense-mode dimension, a person
dimension, and a number dimension are postulated. The frame is assumed to be given
and it is spontaneously presented (this is not always made explicit) as explaining the
system and its operation. This analysis is the classical one in pedagogical grammars, but
these grammars are intended for speakers who already have a certain command of their
language. It is also the analysis made by modern theories (generativism, HPSG, etc.)
which renewed it with syntactic features. Forms are assumed to be determined by three
features, one for each frame dimension, and the feature values assign a form a place in
the system.
As a descriptive means, such a frame is comparatively efficient (with some defects), but
is not explanatory.
5.1.5. Defects of the frame
The frame does not explain the anomaly of forms
Syncretism and the alternation of bases remain as formally anomalous residues.
Now, despite formal anomaly, the forms find their place in the frame, and this set
operates smoothly: speakers perform placement even when the 'base + inflection'
schema cannot support the placement process.
One may object that, the personal pronoun being obligatory in French, this partially compensates for anomaly and syncretism. However, in Spanish, subject pronouns are not used in ordinary practice, and this does not prevent anomaly:
infinitive           pres. ind. 1S     fut. ind. 1S             pret. ind. 1S
ir (go)              voy (I go)        iré (I shall go)         fui (I went)
ser (be)             soy (I am)        seré (I shall be)        fui (I was)
hacer (do, make)     hago (I do)       haré (I shall do)        hice (I did)
andar (walk)         ando (I walk)     andaré (I shall walk)    anduve (I walked)
cantar (sing)        canto (I sing)    cantaré (I shall sing)   canté (I sang)

139 A phenomenon which is sometimes called 'syncretism'.
Likewise, in Russian, Basque, and many other languages which have the categories person and personal pronoun but elide personal pronouns, formal anomaly is not an obstacle to systemic productivity.
The frame assumption does not explain the anomalies of the frame itself
Such anomalies are numerous.
In systems S1-S7 above, there are many unoccupied places.
Imperative in French does not have persons 1S, 3S, 3P.
In Fr., there is no compound past subjunctive, no anterior future conditional, etc. To account for the fact that not all (tense, mode) pairs are attested in French, Gross proposes140 to replace tense and mode with a single tense-mode category which would de facto sanction those pairs which are attested. This measure is prudent and wise, but it fails to do justice to data like j'aurais vu : je verrais :: j'ai vu : je vois. That is to say: between tense and mode in French there is a partial categorial orthogonality, certainly incomplete, but not nothing. Therefore the theory underlying Gross's decision (which he leaves implicit) misses a 'local generalization', if one may say so.
The French definite plural article les is neither masculine nor feminine.
Examples of such anomalies of the frame are numerous.
We see that the system of places itself (the frame) is more a matter of empirical observation than of postulation141, and that the systematicities it offers are only partial; this is so quite apart from whether the forms it hosts turn out to be morphologically regular or not.
The frame does not explain learning
Postulating a multidimensional frame does not explain how children gradually build up
pluridimensional ability either. The reason for this is a fact that has already been stated
in Chap. 3: the learner must integrate sparse and heterogeneous data, and positing a
frame is simply positing the contrary.
In a large paradigm, speakers never really acquire the same ease at all points of the domain. Even for an educated adult, at its margin (seldom used forms of seldom used irregular verbs) there are hesitations and gaps. For a speaker of French, the three-dimensional system of the verb is an ideal and its margin never really becomes comfortable; either it remains a zone of free variation or, to comply with a norm, the speaker uses a Bescherelle.
This is not compatible with an explanatory schema like innateness plus parameter
setting. In the case under discussion: innateness of paradigm dimensionality plus setting
the right dimensions all at once.
140 Gross 1986-1, p. 10.
141 That it has a 'contour dentelé' (a jagged outline), as Milner (1989) would say.
Postulating the frame does not explain language evolution
As in any categorial theory, once a frame has been postulated (the dimensions of which are categories), it is impossible to show how it may undergo progressive alterations and therefore evolve.
The frame is not appropriate because it is partonomic
Finally, postulating a frame requires the forms in it to be attributed properties which are
coordinates in the frame (for example: tense-mode, person, number). Doing this would
be accepting categories (which we do not want) and would be a handicap in building an
isonomic dynamics (which we want). This reason is a general reason but it is an
important one in the approach we are taking.
Finally, the frame is not explanatory; an antecedent explanatory mechanism is required
To sum up, if we stick to a pluridimensional frame142, there is a description problem
since real systems often do not even observe it, and it is difficult to explain a verbal
system, i) as the contingent product of a history, ii) as learnable, iii) as useable and
serviceable for the speaker when the latter does not have an available theory of this
verbal system.
As long as systemic analogy has not been put to use, this particular productivity remains unexplained. There is therefore a productive mechanism which is antecedent to its partial sanctioning by morphology, and it is not suitable to postulate a preexisting frame which would explain how the learning speaker makes the right form-meaning associations.
5.1.6. Systemic productivity as the dynamics of systemic analogy
The refusal of syntactic features leads us to seek an explanation by a genuine systemic
dynamics, that is, a dynamics which should be exemplarist and isonomic as is that
which accounts for structural productivity in the previous chapter.
This new dynamics is conceptually distinct from structural productivity, but as both
operate together, complementing one another, and taking over from one another, it is not
always easy to perceive what belongs to each.
The systemic dynamics is based on systemic analogy: it is based on the assumption that,
at some point in his learning history, the young speaker becomes capable of making
some analogies like:
va : vais :: vient : viens
vient : viens :: est : suis
sommes : suis :: jouons : joue
sont allés : est allé :: sont venus : est venu
sont : est :: sommes : suis

142 Which is what syntactic features do.
These elements of linguistic knowledge are exemplarist systemic analogies. Their
number is modest because each has a certain cognitive cost. The young speaker makes a
certain number of them, not a very great number. He does so without the availability of
abstractions like 1P, 3P, indicative present, future, singular, plural, verb "aller", verb
"venir".
We assume then that these elements can undergo the abductive movement by
transposition. This assumption is not theoretically very costly: it is entailed by the
definition of systems (cf. supra). These elements can also undergo the abductive
movement by transitivity. The two movements then allow the unitary analogies above to
enter an integrative dynamics. Starting from the initial systemic analogies, this
dynamics143 has the final effect – as we shall see in detail below – of producing a large
number of other analogies by abduction, under conditions which are cognitively more
economical.
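As an illustration of how little machinery these two movements require, here is a rough sketch (mine, in Python, not the model's code) that extends a small stock of such exemplarist analogies by transposition and by transitivity; the tuple encoding and the single-pass closure are simplifying assumptions.

```python
# A rough sketch (my own, not the model's code) of how a small stock of
# exemplarist systemic analogies can be extended by the two abductive
# movements: transposition (exchanging the inner terms) and transitivity
# (chaining analogies that share a pair). Analogies are 4-tuples
# (a, b, c, d) read "a : b :: c : d".
initial = {
    ("va", "vais", "vient", "viens"),
    ("vient", "viens", "est", "suis"),
    ("sommes", "suis", "jouons", "joue"),
    ("sont allés", "est allé", "sont venus", "est venu"),
    ("sont", "est", "sommes", "suis"),
}

def transpose(an):
    a, b, c, d = an
    return (a, c, b, d)                  # a : b :: c : d  ->  a : c :: b : d

def transitive(an1, an2):
    a, b, c, d = an1
    e, f, g, h = an2
    if (c, d) == (e, f):                 # shared pair: chain the two analogies
        return (a, b, g, h)
    return None

derived = set(initial)
for an in list(derived):
    derived.add(transpose(an))
for an1 in list(derived):
    for an2 in list(derived):
        chained = transitive(an1, an2)
        if chained:
            derived.add(chained)

print(len(initial), "->", len(derived))  # even this tiny stock yields new analogies
print(("va", "vais", "est", "suis") in derived)
# True: va : vais :: est : suis was never stored, it is abduced
```

Even this toy closure shows the direction: stored exemplars plus the two movements yield analogies that were never stored.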
This progressively renders effects of pluridimensional systems.
Naturally, the pluridimensional system 'pre exists' the learning speaker; it is obviously
not he who establishes it. He is simultaneously the beneficiary of the mother tongue and
dependent on it. Gradually, he must comply with it if he wants to understand, to be
understood, and to become an esteemed member of his speaking community.
But he does not get hold of a system with three coordinates all at once. It is not a 'take it
or leave it' matter. If it were, French would have a perfect infinitive, a supine, an
ablative, etc. It is necessary that the conditions of this appropriation allow it to be a
progressive and incremental process. Nor does the appropriation have to be carried to any predefined endpoint, except, under constraining pedagogies, the learning of pre-established tables presented as an ideal norm. In a more spontaneous exercise of language, something of the ancestral inheritance reconstitutes itself; the acquired knowledge complies with the inheritance in the heavily frequented parts of the paradigms and, in the less frequented parts, remains an occasion for hesitations leading to bolder abductions, which in turn occasionally give birth to variant creations.
The perspective is reversed. A categorial theory would postulate a three-dimensional analysis frame, whose gaps it would then have to explain (defectivity, i.e. unoccupied places, syncretism, alterations, anomaly); it would have nothing to say about the evolution of the frame. Here, on the contrary, we start from the acts and from
operating mechanisms which are explanatory right from the beginning. Exemplars are
primary, as is the abductive computation which uses them; and the possibility of
describing the system which the young speaker constructs, and in which he becomes
productive, is recuperated as an effect of the base dynamics.
Adopting a dynamics as an explanatory schema of this type has many advantages, as we
can see:
- A plausible discourse about learning becomes possible.
- The progressive way a verbal system is built up in its dimensions is better explained.
- Room is made for allomorphy, syncretism and groups as a cognitively motivated residue of a regularization process.
- Inflectional morphology is better positioned: it can sanction a pluridimensional system without having to do so entirely, and its role is second in time, and causally second, even if, once the language has been learnt, this role becomes very important in the adult's knowledge.
- The 'failures' of the learning process, or its residues in the margins of the system, make room for its possible evolution.

143 The rest of this chapter will expose in detail the systemic dynamics (agent ANZ).
Systemic productivity is thus based on transitivity and on transposition. It shares
transitivity with structural productivity, but transposition is proper to it: structural
productivity is not concerned with this movement.
Systemic productivity assumes some hypotheses concerning the inscriptions that support
it. Some of this will be made clear in the course of this chapter and the topic is more
technically addressed in an appendix, section 12.9.2.1. Linguistic paradigm, system,
dimension (p. 311). In this model, systemic productivity is implemented by agent ANZ,
the architecture and operation of which are now about to be explained with examples. A
more formal statement is made in the corresponding appendix.
5.2. Adverbial derivation in French, a process using one paradigm only
Consider a task of the type: "find X which is to Y as A is to B", in which Y, A and B are
terms144. Call this 'analogical task'.
In ABS, the agent that solves an analogical task is agent ANZ: it produces Xs which are
to Y as A is to B. The Xs it produces are called 'analogisands' of Y, A and B. The set of three terms Y, A and B defines the analogical task; it defines the duty of an ANZ agent. The mutual positions of these terms matter: tasks ANZ (Y, A, B) and ANZ (A, Y, B), for example, are not the same task. Saying that terms Y, A and B are here 'copositioned' is saying nothing else. Any ANZ agent has a duty of the form (Y, A, B).
A first agent undertakes the analogical task posed by the problem. Then it recruits more agents of the same type, which in turn recruit more agents, etc.145 Each such recruitment attributes to the commissioner agent (the recruited one) a duty which is equivalent, up to an abduction step, to that of the client agent (the recruiter). So every recruited agent has a duty which is transitively equivalent to that of the initial one but, with distance, there may be a drift: it is a drifting transitive determination.
Here is now a summary definition of the operation of agent ANZ; it will look clearer with the ensuing examples and is formalized in the appendix. An ANZ agent may, in a favourable case, contain in its duty data which settle immediately: two of its terms are equal. When this is the case, it raises a finding whose content is the third term of its duty146. In addition, an ANZ agent applies the abductive movement by transitivity by making a step in the paradigm in which it operates; this leads it to recruit more agents. Finally, an ANZ agent applies the abductive movement by transposition; for this, it transposes the roles of the arguments in its duty, which also causes the recruitment of more commissioner ANZ agents.

144 For Y, A and B, entities more complex than terms will be envisaged further in the text.
145 Such recruitment chains dry up when the plexus is exhausted or, even when more data is still available in the plexus, when the delivery point of these agents (a channel) has enough results. Deciding when a delivery point has enough results is a question with several implications; it is addressed under the title "Controlling the dynamics" in the appendix which specifies the dynamics of the model.
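Before the examples, the skeleton of this operation can be pictured with a minimal sketch (in Python, my own illustration, not the dissertation's implementation). The decay factor of 0.9 per abduction step and the initial strength of 0.81 are simply my reading of the strengths displayed in the figures of this chapter; the sketch treats every record of a paradigm as a neighbour of every other, ignoring the paradigmatic links and their proximality, so its strengths come out flatter than the model's; and the merging of findings is reduced to keeping the strongest one. Function and variable names are the sketch's own.

```python
# A minimal, illustrative re-implementation (not the dissertation's code) of
# the ANZ mechanism: recruitment of agents over paradigms of A-type records,
# settlement, the transitivity step, and the transposition (positioned resetting).
import heapq

DECAY = 0.9            # per abduction step; my reading of the figures' strengths

def solve_anz(task, paradigms, start=0.81, min_strength=0.4):
    """task = (Y, A, B): find X which is to Y as A is to B.
    paradigms: list of paradigms, each a list of A-type records (term pairs)."""
    Y0, A0, B0 = task
    agents = []        # max-heap of agents, keyed by (-strength, duty, paradigm, flip)
    results = {}       # analogisand -> best strength delivered so far
    seen = set()       # duties already expanded (keeps the sketch finite)

    def prime(y, a, b, strength):
        """Recruit an ANZ agent in every paradigm where the pair (a, b)
        is attested, in either orientation."""
        for p, records in enumerate(paradigms):
            for u, v in records:
                if (u, v) == (a, b):
                    heapq.heappush(agents, (-strength, y, a, b, p, False))
                elif (u, v) == (b, a):
                    heapq.heappush(agents, (-strength, y, a, b, p, True))

    prime(Y0, A0, B0, start)
    prime(A0, Y0, B0, start)      # the task may also prime through its transposition
    while agents:
        neg, y, a, b, p, flipped = heapq.heappop(agents)
        s = -neg
        state = (y, a, b, p, flipped)
        if s < min_strength or state in seen:
            continue
        seen.add(state)
        # Settlement: two of the three terms are equal; the third one is raised.
        if y == b:
            results[a] = max(results.get(a, 0.0), s)
            continue
        if a == b:
            results[y] = max(results.get(y, 0.0), s)
            continue
        # Abductive movement by transitivity: one step to another record of the
        # current paradigm (every record is treated as a neighbour in this sketch).
        for u, v in paradigms[p]:
            na, nb = (v, u) if flipped else (u, v)
            if (na, nb) != (a, b):
                heapq.heappush(agents, (-s * DECAY, y, na, nb, p, flipped))
        # Abductive movement by transposition (positioned resetting): Y and A
        # exchange their roles, B keeps its place; the move succeeds only where
        # the pair (Y, B) is attested somewhere in the plexus.
        prime(a, y, b, s * DECAY)
    # The model merges and reinforces findings (cf. 5.6); the sketch just keeps
    # the strongest one per analogisand.
    return sorted(results.items(), key=lambda kv: -kv[1])
```

With the paradigms of the examples below encoded as lists of pairs, this sketch reproduces their results, up to the flattened strengths just mentioned.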
A first simple example will show the operation. Let us assume that, during the course of a broader linguistic act, the need arises for a term which is a little like soigneux, a little like habilement, but not really either of these two terms. Rather, it is to soigneux as habilement is to habile. This is an 'analogical task' as defined above, and an ANZ agent is recruited to produce the corresponding result X:
X = ANZ ('soigneux', 'habilement', 'habile')
To solve this task, the model uses one paradigm only, that of the figure below. This holds for the plexus used in this experiment; with another plexus, the paths to a solution might be different.
The paradigm that is used contains regular derivations of French adverbs by suffixation
of –ment, but the model does not "know" it in the sense that it just records systemic
analogies among the forms and ignores morphology, even if the latter is apparent of
course to the human reader. The paradigm also comprises the adverb phrase avec soin
(with care) which occupies a place in this analogical system even if it is not derived with
-ment.
Processing regular adverbial derivation by enumerating records in this way is not very smart or very productive: even the least ambitious linguistic model is expected, among its first accomplishments, at least to apply such processes with some systematicity. The previous chapter showed how morphology and syntax are handled, and this case could be approached following the same schemas; but here the intent is to demonstrate systemic productivity, and any set of forms can always be envisaged while ignoring its formal regularities. Moreover, if, with Langacker, we refuse the 'rule-list fallacy' (supra), it is expected in the Analogical Speaker that the plexus should contain inscriptions of that kind.
The model finds the following two results (the strengths in column 1 were introduced in Chap. 3; they indicate that the first result is more economical):

146 This amounts to solving the trivial analogy "what is to Y as A is to A?"; the result can be nothing other than Y itself.
A terrible terriblement
A ferme fermement
A grand grandement
A habile habilement
A soigneux soigneusement
A honnête honnêtement
A soigneux avec soin
Figure 15 Paradigm habile-habilement
Strength   Result
.73        soigneusement
.66        avec soin
The plexus potentially contains these two solutions; the model finds both. Soigneusement is found first because of the connectivity of the paradigm, and the prepositional phrase avec soin is found right after. In a plexus corresponding to a different speaker, the order might be different.
To reach these results, the model used the agent tree below:
Agents are displayed in straight characters and products (findings and results) in italics. Agent numbers are followed by their strengths, then by the terms that constitute the agent's duty. Product numbers are followed by the product strength, then by the term associated with the product. Note, for example, agent 10, which raises finding 3: avec soin, causing the delivery of result 4: avec soin at the root channel. Note also the numerous agents (e.g. agent 7: soigneux mal mauvais) which are envisaged by the computation but lead to no result.
channel 1 (root channel); results: 2 (0.73, soigneusement), 4 (0.66, avec soin)
agents: 1 ANZ 0.81 soigneux habilement habile; 2 ANZ 0.81 soigneux habilement habile; 3 ANZ 0.73 soigneux grandement grand; 4 ANZ 0.73 soigneux terriblement terrible; 5 ANZ 0.73 soigneux soigneusement soigneux; 6 ANZ 0.73 soigneux grandement grande; 7 ANZ 0.73 soigneux mal mauvais; 8 ANZ 0.66 soigneux honnêtement honnête; 9 ANZ 0.66 soigneux fermement ferme; 10 ANZ 0.66 soigneux avec soin soigneux
findings: 1 (0.73, soigneusement), 3 (0.66, avec soin)
Figure 16 Agent tree
This heuristic structure is a simple one. It does not present any occasion of
reinforcement: each result is merged from one finding only.
One may judge that this paradigm is a toy paradigm: it contains only seven adverbs, whereas French has several thousand. What would happen with a more realistic one? What if the records useful for the task were more remote, instead of being at a distance of two links as in the example? This question has several aspects, only some of which can be discussed at this point: i) nothing imposes that a single monster paradigm be built with thousands of French adverbs; the integrative cooperation of multiple, smaller, heterogeneous paradigms may do (cf. infra a gloss about integrativity), ii) if the records were more remote, the strengths of the results could well be lower, and this could be desirable, iii) several paths converging, with the resulting reinforcement, could increase the strength of the result, iv) familiarity orientation (cf. section 12.8. Familiarity orientation) much reduces the number of heuristic paths that are envisaged, v) the introduction of structural productivity (here morphological) as seen in the previous chapter would open up different paths, and the discussion would be a different one, vi) finally, dynamics might arise that are so heavy as to be intolerable and impossible to amend, which would tend to refute the radical non-categoricity assumption and to suggest that brains really have some other way of doing things.
This example helped us introduce the dynamics progressively, but it does not in itself constitute a very fascinating achievement. A task involving two paradigms is more interesting and more demonstrative.
more interesting and more demonstrative.
5.3. French verb, two paradigms playing integratively
The analogical task posed to the model is now:
"find X, which is to va as venir is to vient" or X = ANZ ('va', 'venir', 'vient' ).
The model finds one result:
Strength   Result
.59        aller
This result is good and the only possible one in French.
To solve this task, still for a given plexus content, ABS used two paradigms. The first
one associates forms of verb aller (to go) with their homologs for verb venir (to come):
447 A vas viens
449 A allons venons
446 A vais viens
448 A va vient
Figure 17 paradigm of vais-viens
The second paradigm, for a set of verbs, associates their infinitives with their first person singular of the present indicative:

205 A cacher cache
422 A être suis
206 A poser pose
423 A venir viens
207 A aller vais
243 A voyager voyage

Figure 18 Paradigm of venir-viens
With a different plexus, the inscriptional resources serving the same task could be very
different: this is an occasion to show in a concrete example the question of inter-speaker
variation already alluded to in section 3.5.2. Determinism, idiosyncrasy, normativity, p.
75.
Please note that the two paradigms have very heterogeneous structures:
paradigm          | what opposes the two terms in a record | what changes between two linked records
first paradigm    | base aller - base venir                | tense + person + number
second paradigm   | infinitive - person 1S                 | base

Table 7 Contrasting the structures of the two paradigms
The diagram below is an overview of the computation and shows how it proceeds; it is
limited to the branches that contribute to the result.
Figure 19 The mechanism of agent ANZ shown on an example. The process begins with the three terms of the task, va, venir and vient, placed in positions Y, A and B; during the computation these positions are successively occupied by different terms, and this set of three positions characterises the analogical task. The pair va-vient is attested in record 448 of the paradigm ALLER-VENIR, and the spare term, venir, is forwarded. One step in this paradigm leads to record 446 (vais, viens). The pair viens-venir then enables a positioned resetting, here a change of paradigm: in the paradigm 1S-INFINITIF, the pair venir-viens is attested as record 423, and one step leads to record 207 (aller, vais). The coincidence of vais in two positions meets the settlement condition; the third term, aller, is therefore a solution: aller is to va as venir is to vient.
The process begins with pair va, vient (the 'current pair') which is attested in a record.
This record (the 'current record') belongs to a paradigm (the 'current paradigm'). The
spare term (venir), that is the term which does not belong to the current paradigm, is set
aside. Neighbour records in the current paradigm are explored, causing the evolution of
the current pair (this drawing is restricted to the paths leading to the result but numerous
other paths are explored, as the agent tree below shows).
All along the process, two conditions are watched in newly created agents: the
settlement condition and the positioned resetting condition. For a general introduction to
positioned resetting, please cf. p. 206.
The settlement condition is met when two of the three current terms are equal. When
this happens, a finding is raised.
The condition of positioned resetting is met when the agent's analogy (the one underlying the agent's duty) transposes, that is, when the pair formed by the term in position B of the current pair and the spare term is attested in the plexus. The current agent then recruits another one, and this opens up a new branch in the heuristic tree.
Here is now the agent tree which was used in this task.
channel 1 (root channel); result: 2 (0.59, aller)
agents: 1 ANZ 0.81 venir va vient; 2 ANZ 0.73 venir vais viens; 3 ANZ 0.66 vais venir viens; 4 ANZ 0.66 venir allons venons; 5 ANZ 0.66 venir vas viens; 6 ANZ 0.59 vais être suis; 7 ANZ 0.59 vais aller vais; 8 ANZ 0.53 vais poser pose; 9 ANZ 0.53 vais voyager voyage
finding: 1 (0.59, aller), raised by agent 7
Figure 20 Agent tree (the positioned resetting occurs on the edge from agent 2 to agent 3)
There is not yet any reinforcement effect because one agent only (agent 7) finds the
settlement condition. It raises the finding 1 which will cause the delivery of result 2 at
the root channel. Here again, numerous recruited branches lead to no result: they do not
meet the settlement condition.
A positioned resetting occurred, on the edge from agent 2 to agent 3. Envisaged globally, the duty of the agent at the source of this edge (agent 2, the client agent) and that of the agent at its target (agent 3, the commissioner agent) consist of the same terms. But in the two agents the terms hold different roles: they are each time in different positions; this justifies the phrase positioned resetting. Positions Y and A are exchanged, but position B keeps its occupier when the resetting takes place. Position preservation is the key to the efficiency and flexibility of ABS computations. Linguistic positionality is conserved from end to end in the computations, and this finally relates the results to the initial terms of the task in a coherent and correct manner. Channels are another means serving the same end, but they are not used by agent ANZ.
5.3.1. Integrativity
Leaving aside the intricacies of the detailed operation for a more significant topic, it is important to note in this example how two paradigms concur to produce a result. Each one is comparatively poor and not very useful considered on its own; used in conjunction, they acquire greater operational power.
Agent ANZ integrates the effects of partial paradigms. This holds not just for agent ANZ but also for the other agent types, which all have an integrative effect, and it also holds for ABS generally, which integrates the effects of agents of different types. The question of integrativity will be developed in section 7.4. Integrativity (p. 207), once more mechanisms have been exposed.
5.3.2. Positioned resetting
In the preceding example two paradigms are used: the computation begins in a first one,
then continues in the second one. At the point it enters the new paradigm, a resetting
takes place. The most usual computation steps prolong a followed abduction path within
a same paradigm, as above in the example about adverbial derivation. A process
performs a resetting when something different happens. Upon resetting, the abductive
thread makes an abduction step which is not just prolonging a track in a paradigm.
Resetting must be positioned: the copositioning constraints that hold between the
agent's arguments must be observed. This is a little difficult to explain but it is
important. The agents of this task (all ANZ type agents in this case) have three positions symbolically named Y, A and B (shown as columns in the synthetic diagram above). The position names come from the now familiar statement of the analogical task: "find X which is to Y as A is to B". In a computation step which crosses a paradigmatic link, pairs extracted from the plexus follow one another in positions A and B. In a resetting, the movement is different: the three terms, temporary occupiers of positions Y, A and B, remain globally the same, but positions Y and A exchange their occupiers. This is the application of the abductive movement by transposition defined p. 87. It is a redistribution of roles which takes place in a precise and motivated choreography. This is what it means to say that copositionings are observed.
In the case under discussion, the second paradigm is different from the first: so resetting
could be named "change of paradigm". This is not done because it is not always the
case: the example below will contain a resetting which is a move into another record of
the same paradigm, but with a reassignment of the roles. In the previous chapter, the
shifts between levels during syntactic analysis, because their schema is something else
than the mere crossing of a paradigmatic link, can also be called a 'resetting' and they are
also 'positioned'.
The notion of 'positioned resetting' is central: it is one of the keys of productivity by
integrativity. The subject will be discussed again.
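The exchange itself is small enough to isolate in a few lines (a sketch of my own in Python, not the model's code; the function name and the flat list of pairs standing in for the plexus are assumptions):

```python
# The transposition at the heart of a positioned resetting, isolated as a tiny
# function: positions Y and A exchange their occupiers, position B keeps its
# occupier, and the move is allowed only if the pair (old Y, old B) is attested
# in the plexus.
def transpose_duty(duty, plexus_pairs):
    y, a, b = duty
    if {y, b} in [set(p) for p in plexus_pairs]:   # pair (Y, B) attested, in either order
        return (a, y, b)                           # new duty: Y and A exchanged, B preserved
    return None                                    # the analogy does not transpose here

pairs = [("va", "vient"), ("vais", "viens"), ("venir", "viens"), ("aller", "vais")]
print(transpose_duty(("venir", "vais", "viens"), pairs))   # ('vais', 'venir', 'viens')
print(transpose_duty(("venir", "va", "vient"), pairs))     # None: (venir, vient) is not attested
```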
5.4. Recruitment and edification
In syntactic analysis with the B2-B3 process (previous chapter), the heuristic structure,
that is, the set of agents and channels, was built according to a process of edification.
Edification progresses forward (towards the left of the figures) and is sensitive to field data. In the case of B2-B3, field data is the beginning and the end of a substring of the
string being analysed; these two numbers characterize a channel of the B2-B3 process.
In the analogical task performed by agent ANZ (this chapter), the development follows a
different method: recruitment. Recruitment progresses rearward (towards the right of the
figures), starting from a unique point: the root channel, and is not sensitive to field data.
Recruitment is discussed in detail p. 336 and edification p. 341, where a table
contrasting both is also proposed.
5.5. Auvergnats and Bavarians, resetting in a same paradigm
5.5.1. The task and the resources it uses
The analogical task posed to the model is now:
Find X, which is to Français as Français is to Européen
X = ANZ ('Français', 'Français', 'Européen' )
This task uses one paradigm only, which is presented below. Its principle is that the leftmost term marks a national membership, or an administrative or territorial membership (recall that analogy elides the predicate), which is contained, while the rightmost term is one which contains the leftmost one.
A thing like 41 A Auvergnat Français is a record of the plexus; it is record number 41. The edges are paradigmatic links. The two records 42 and 39, with the link between them, read as follows: "Bourguignon is to Français as Français is to Européen". All records are type A records, which means that each contains two terms without their forms being necessarily related or overtly reflecting the ratio between them.
This paradigm tells nothing more. In particular, it tells nothing about the essence of
territorial entities, about political units, citizenship, the containing-contained relation,
etc. Some such data, related to some of the terms in this paradigm, may or may not exist elsewhere in the plexus; they will not serve here.
Records of the type "provinces in France" are close to "France in Europe", records "Länder in Germany" are close to "Germany in Europe", and the English and the Germans have a close link. The Burgundians are close to the English for some reason or other owing to the culture of this speaker. This is how proximality is influenced in this paradigm.
In what does this paradigm constitute a system in the sense defined at the beginning of
this chapter? In other words, what are its dimensions? A first dimension is that which
underlies the pair (Auvergnat, Français). So far, a dimension was said to be constituted
of lexical categories, and the example was (Adj, Adv). This remains true but becomes
more specific. The dimension here is (N, N). Both names have value, not simply as
names, but as names marking an attachment to politico-territorial entities; moreover, the
logic of their pairing is that the first one of these entities is geographically included in
the second one. In this way, such inscriptions embody a sort of sub-categorization. A
theory which would be categorial and partonomic (which would attribute properties)
would find it difficult to render this because what matters here is not inherence but
relative positions.
42 A Bourguignon Français
41 A Auvergnat Français
38 A Espagnol Européen
39 A Français Européen
40 A Anglais Européen
43 A Alsacien Français
88 A Bavarois Allemand
37 A Allemand Européen

Figure 21 Paradigm Français-Européen
The second dimension is the set (Auvergnat, Français, Alsacien, Allemand, etc.). Although this set is homogeneous in that all its elements are names of inhabitants, it ceases to be possible to contrast them two by two as can be done with singular and plural, with indicative and subjunctive, with containing entities and contained entities, etc. It no longer seems possible to rescue a categorial or sub-categorial approach. Should we then grant that this dimension is 'false' and that the paradigm has only one dimension? Is it still possible to say that such a table is a system?
A reconciling argument can be made starting from the verbal paradigm. It was presented
above as tri-dimensional: tense-mode + person + number. Actually, it comprises a fourth
coordinate which is a fourth dimension: the variety of the verbs according to which it is
possible to make excerpts like (allons, venons, sommes, etc.). The case is the same here:
the series (Auvergnat, Français, Alsacien, Allemand, etc.) is a system dimension in the
same respect. Thus a system may have, as one of its dimensions, simply that of lexical
variety without ceasing to be a system for that reason. It functions quite as well as any other system. In particular, the transposition movement applies (it applies under the condition of quasi-bijectivity, but this is independent of one of its dimensions being lexical variety).
5.5.2. First results: Alsatians, Burgundians and Auvergnats
After two computation phases, the model finds the following three results:
Strength   Result
.73        Alsacien
.73        Bourguignon
.73        Auvergnat
This is to be expected, as they are the three French provinces inscribed in the plexus. The tree of agents (the heuristic structure) is the following:
channel 1 (root channel); results: 4 (0.73, Alsacien), 5 (0.73, Bourguignon), 6 (0.73, Auvergnat)
agents: 1 ANZ 0.81 Français Français Européen; 2 ANZ 0.73 Français Alsacien Français; 3 ANZ 0.73 Français Bourguignon Français; 4 ANZ 0.73 Français Auvergnat Français; 5 ANZ 0.73 Français Espagnol Européen; 6 ANZ 0.66 Français Allemand Européen; 7 ANZ 0.66 Français Bavarois Allemand; 8 ANZ 0.66 Français Anglais Européen; 9 ANZ 0.66 Espagnol Français Européen
findings: 1 (0.73, Alsacien), 2 (0.73, Bourguignon), 3 (0.73, Auvergnat)
Figure 22 Tree of agents after two computation phases
The tree uses the paradigm once only.
5.5.3. Second line results: Bavarians
If triggered to proceed further, the model, at phase six, finds the Bavarians. The results
are now:
Strength   Result
.73        Alsacien
.73        Bourguignon
.73        Auvergnat
.48        Bavarois
The Bavarians were found to be to the French as the French are to the Europeans! How
is this to be understood?
The agent tree has now lost readability and is provided as a document only. More readable excerpts are provided further on.
Figure 23 Tree of agents after six computation phases. The root channel now delivers results 4 (0.73, Alsacien), 5 (0.73, Bourguignon), 6 (0.73, Auvergnat) and 8 (0.48, Bavarois); more than thirty ANZ agents have been recruited over the same paradigm, among them the chain of agents 1, 2, 6, 10, 16 and 28 which leads to the Bavarians (detailed in Table 8 below).
This surprising result is interpretable: the underlying reasoning has now changed and it
must be reconstructed. Generally, in this task the underlying reasoning is: find
inhabitants of a contained territory; in the first two phases it is interpreted as contained
in France which is itself contained in Europe. The general underlying reasoning stays
the same but it is now interpreted as contained in any territory which is itself contained
in Europe. In the course of the computation, agent ANZ spontaneously broadens its
search scope. In a framework accepting constraints, one would say that a constraint has
been released, or violated.
Here is another presentation of the agent tree. It is restricted to the paths with which the
Bavarians were found.
The positioned resetting which led to the Bavarians result occurred on the edge from
agent 6 to agent 10. Both agents have the same set of terms in their duties, but not in the
same positions. In agent 6, the spare term is French and the current pair is (German,
European) whereas in agent 10, the spare term is German and the current pair is
(French, European).
Metaphorically, agent 6 makes the following "reasoning". At the point where I stand, the initial task is reformulated into: what is to French as German is to European? This is my own duty. Let me try to transpose this analogy (some analogies transpose, others do not; I cannot know in advance, only the outcomes make the difference) and make a try with this new duty: what is to German as French is to European? Let me try to recruit a commissioner with this duty. If the pair (German, European) is attested somewhere in the plexus, then i) the recruitment takes place, opening up a new abductive path, which ii) may lead to some finding. As it happens in the case in point, i) the pair is attested in the plexus, so agent 10 is recruited and becomes a commissioner, and ii) two steps later, in agent 28, a settlement occurs because the spare term, German, is found to coincide with one of the terms of the current pair. The third term in the duty, Bavarian, is then raised as a finding.
strength   agent type    duty             content (interpreted duty)       provenance
.48        (finding 7)   92 0 0 0 0 0     Bavarois                         raised by agent 28
.48        ANZ           25 0 0 88 1 4    Allemand Bavarois Allemand       recruited by agent 16
.53        ANZ           25 0 0 43 1 4    Allemand Alsacien Français       recruited by agent 10
.59        ANZ           25 0 0 39 1 4    Allemand Français Européen       recruited by agent 6
.66        ANZ           55 0 0 37 1 4    Français Allemand Européen       recruited by agent 2
.73        ANZ           55 0 0 43 1 4    Français Alsacien Français       recruited by agent 1
.91        ANZ           55 0 0 39 1 4    Français Français Européen       recruited by channel 1

Table 8: Explanation of finding 7 'Bavarians'
This result lends itself to several comments:
a) The model does not particularly favour the reuse of a same paradigm but it does
not prevent it; when the task allows it, the model exhausts all the possibilities of a
paradigm; it re-uses the paradigm with different points of view147.
b) There is no directly modelled logic but the model behaves logically.
c) The most expected results, the most prototypical ones, the cheapest ones, are
produced first and with higher strength. Stranger results, ones understandable, but
with an effort are also produced, but later, and weaker.
147 This may be related to a fascinating result of the early days of artificial intelligence, in the domain of theorem proving (Hofstadter 1995, p. 478 rather attributes it to Pappus of Alexandria, in the 4th century A.D.). An isosceles triangle being defined as having edges AB and AC equal, the program was asked to demonstrate that its base angles B and C were equal. Since Euclid, the demonstration consists of drawing the height AH and then showing that the right-angled triangles AHB and AHC are equal, which is achieved by applying equality theorems between right-angled triangles. With Pappus, the theorem-proving program took a different course: it envisaged triangles ABC and ACB, which it directly demonstrated equal, whence the conclusion follows. This way is shorter and more elegant than Euclid's, but it requires a structure mapping which human computation is reluctant to make because it takes the same elements in different positions. The theorem prover did it, and so does agent ANZ in our example. In our conscious computation, we do not like to assign the same elements different positions. It may be the case that this limit does not apply to our unconscious computation, but we do not know. If it were proven that our unconscious computation is subject to the same limit as our conscious computation, then a mode of operation like the one shown for agent ANZ in this section would be refuted; it might be conserved for an analogical artificial intelligence, but it would be disqualified for an analogical natural intelligence.
d) The adaptation of the model's behaviour is obtained with non-specific means. They apply to containing/contained territories, as here, but equally well to any systemic paradigm. They apply to copositionings between formal terms, as here, but also to copositionings between private terms ('private term' will be discussed p. 262).
5.5.4. The route followed by the computation in the paradigm
The thick arrow shows the succession of the records which were used as current records
to obtain the Bavarians.
Figure 24 Route followed by the computation in the paradigm: priming takes place at record 39 (A Français Européen); the route then passes through 43 (A Alsacien Français) and 37 (A Allemand Européen), where the positioned resetting occurs; it returns through 39 and 43 and reaches 88 (A Bavarois Allemand), where settlement occurs.
As this example shows, a positioned resetting may target the same paradigm. This is not
the general case: most often it reaches a different one.
Records 39 and 43 were used twice, but not in the same respect: on the two occasions, positions Y, A and B did not have the same occupiers. This illustrates the possibility of the same inscriptions being used twice from distinct viewpoints; everything is a question of relative positioning between the terms of the task and those of the plexus. After a positioned resetting, the process is reset; it may reuse the same resources of the plexus, but not in the same manner.
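For what it is worth, this route can be replayed with the earlier sketch; the record list below follows Figure 21, while the function, its strengths and its ranking are the sketch's own (in particular, since the sketch ignores the paradigm's internal links, Bavarois comes out stronger than the model's 0.48, though still last):

```python
# Figure 21 encoded as A-type records of a single paradigm.
fr_eu = [("Bourguignon", "Français"), ("Auvergnat", "Français"),
         ("Espagnol", "Européen"), ("Français", "Européen"),
         ("Anglais", "Européen"), ("Alsacien", "Français"),
         ("Bavarois", "Allemand"), ("Allemand", "Européen")]

for term, strength in solve_anz(("Français", "Français", "Européen"), [fr_eu]):
    print(f"{strength:.2f}  {term}")
# the three French provinces come out first (with equal strength in this sketch),
# then Bavarois, reached only through the positioned resetting, comes out weaker
```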
5.5.5. Result 'Bavarians' interpreted as a conceptual integration
It is possible to construe the process leading to the Bavarians as a conceptual integration; the reference here is to the theory of Fauconnier and Turner148. This conceptual integration is certainly modest and, moreover, very peculiar.
The agent appears to have performed a conceptual integration corresponding to the
following schema:
Figure 25 Domains in the conceptual integration which produced 'Bavarians': input space 1 (Containing 1, Contained 1) and input space 2 (Containing 2, Contained 2) blend into a blending space with the two-level hierarchy Europe, country, regions; the emerging property is that regions in a second entity = regions in a continent.
The first input space ('space' and 'domain' are synonymous in this theory) is the paradigm of the question, and so is the second input space: the paradigm integrates with a second instance of itself, with a shift, to constitute a blending space with a two-level inclusion hierarchy.
The (unique) paradigm used here contains already in itself something of the double
levelling by the fact, for example, that German occurs in the records sometimes on the
left, and sometimes on the right. This is what makes the two levels communicate; this
paradigm contains as virtuality the possibility to be so associated to itself. This is one of
the conditions which make the integration possible and agent ANZ realizes this
virtuality. In the blending space (in this theory, 'blending' and 'integration' are
synonymous) emerges the property "second level inclusion". It is latent in the origin
paradigm but not explicit in it. The dynamics of agent ANZ reveals it.
It must be noted that the schema is not: first build the blending space and then use it; that is, the schema is not: first prepare a framework for induction and then perform induction in it. The schema is more subtle and pervasive: the blending space is assembled phase-wise along the development of the process computing the task and, this too must be noted, integration is not the sole way to results: other results, more evident and stronger, were produced before it, without conceptual integration.

148 Cf. for example Fauconnier 1997a.
In the theory of conceptual integration, setting a relation between two input spaces is
deemed to be triggered by the occurrence of an 'introducer'. In the published examples, it
is a formal term (for example an adverb or an adverbial phrase) occurring in a text or in
a narration. If we had to look in the example above for what acts as the introducer, it
should not be sought as a part of an utterance since there is no utterance here. If it has to
be anywhere at all, it must be at the point of the positioned resetting. The key role in triggering the conceptual integration is played by the already mentioned fact that German occurs now on the left of the records and now on their right; but this fact does not act on its own, it acts together with the process which uses it, that is, the dynamics of agent ANZ.
The following example will now illustrate reinforcement and flexible categorization.
5.6. French articles, reinforcement effects
The task submitted to the model is now: find X which is to le as une is to un .
X = ANZ ('le', 'une', 'un') or X : 'le' :: 'une' : 'un'
The table displays the results received, with the associated strength at each phase:

result                        phase 1    2     3     4     5     6     7     8
la     the (fem.)                       .73   .78   .78   .78   .78   .78   .78
cette  this, that (fem.)                .66   .66   .66   .66   .66   .66   .66
le     the (masc.)                                  .53   .62   .62   .62   .62
ce     this, that (masc.)                                             .54   .54
cet    this, that (masc.)                                             .43   .43

Table 9 Results of task ANZ ('le', 'une', 'un')
- la is normally found first and with the highest strength. In phase 3, its strength increases.
- cette comes second and weaker: it is another feminine determiner, definite in its own way, but less prototypically analogical to the terms defining the task.
- le, ce and cet come later, weaker still: they are still determiners, and still definite, but they are masculine; whence their lesser strengths.
- This set of results illustrates category drift, which is a property of this model: it draws no clear limit between categories since it does not reify them.
The figure below shows the reinforcement mechanism of result la:
Explanation of result 2 (0.78), content 1 0 0 0 0 0, la, delivered at channel 1; merged from finding 1 (0.73) and finding 5 (0.59).

Finding 5 (0.59), content la, directly raised by its agent:
agent 9 (0.59), type ANZ, duty 2 0 0 1 1 4, les la les (singular-plural paradigm), recruited by
agent 7 (0.66), type ANZ, duty 2 0 0 4 1 4, les une des (singular-plural), recruited by
agent 5 (0.73), type ANZ, duty 6 0 0 76 4 1, une les des (definite-indefinite), recruited by
agent 1 (0.81), type ANZ, duty 6 0 0 74 4 1, une le un (definite-indefinite), recruited by channel 1.

Finding 1 (0.73), content la, directly raised by its agent:
agent 4 (0.73), type ANZ, duty 6 0 0 75 4 1, une la une (definite-indefinite), recruited by
agent 1 (0.81), type ANZ, duty 6 0 0 74 4 1, une le un (definite-indefinite), recruited by channel 1.

Table 10: Explanation of result 'la'
Two paths using three paradigms concurred to produce result la.
The path in the lower third of the figure uses one paradigm only: structural analogies
between definite forms and indefinite ones. This path is short and produces finding 1
with strength .73.
The path in the two thirds at the top of the figure begins within the same paradigm then
(thick horizontal line) a resetting takes place which makes it enter a singular-plural
paradigm. After a longer walk through the plexus, it ends up raising finding 5 with
strength 0.59.
The two findings are merged into the result la with strength 0.78.
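The qualitative behaviour of this merge can be pictured with a tiny sketch; the combination rule below is a placeholder of my own (the model's actual reinforcement formula belongs to the dynamics defined in Chap. 3 and the appendix), chosen only to show that a merged result is at least as strong as its strongest finding and strictly stronger when several findings concur:

```python
# Placeholder merge rule (an assumption for illustration only, not the model's
# formula): the merged strength is the strongest finding, pushed a little
# further towards 1 by each additional concurring finding.
def merge(finding_strengths, bonus=0.2):
    best = max(finding_strengths)
    for s in sorted(finding_strengths, reverse=True)[1:]:
        best = best + bonus * s * (1.0 - best)     # each extra finding reinforces
    return best

print(round(merge([0.73]), 2))        # 0.73: a single finding is not reinforced
print(round(merge([0.73, 0.59]), 2))  # about 0.76 with this placeholder rule
                                      # (the model reports 0.78 for result 'la')
```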
Another plexus would operate differently. However, if it implements the knowledge of a
not too deviant French speaker, it must produce result la with high strength and in first
rank. This is macroscopic determinism (externally observable results may result from
dynamics which vary in their detail) and quasi-normativity (all speakers of a language
have about the same productions).
5.7. Grammatical agreement with AN2
5.7.1. Principle of agent AN2 and its effects
So far, agent ANZ addresses systemic productivity alone. In a paradigm, one dimension
of which is number, it finds a plural when required. However, it is not capable of
morphology or syntax (structural productivity) and therefore cannot exert an agreement
constraint.
On the other hand, the B2-B3 process ensures structural productivity and performs
analyses, but without exerting any systemic constraint: the notion 'system' is foreign to it. Now, grammatical agreement combines structural productivity and systemic productivity.
The idea with agent AN2 (ANalogical task with segmentation into 2 constituents) is to
combine both routes. One of its main effects will be to make the model capable of
grammatical agreement, number agreement or agreement on any dimension along which
an agreement constraint applies.
Formally, the task requested from agent AN2 is exactly an analogical task as defined
above, that is:
find X which is to Y as A is to B.
The difference lies in the technique adopted to solve it. Here, term Y is:
a) envisaged as a whole, as in ANZ, but also, simultaneously and concurrently,
b) segmented into two constituents (in ANZ it was not so analysed).
In this way, tasks which did not have a solution with ANZ because Y was not directly attested in the plexus may now have one. To segment Y, AN2 uses as a commissioner an agent S2A, the specification of which is provided in an appendix.
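The division of labour can be suggested with a drastically simplified sketch, reusing the solve_anz sketch given earlier; everything in it (the segmentation by a plain split, the recombination filtered against a list of attested combinations, the scoring) is an assumption of mine standing in, very roughly, for S2A, the constructor records and the structural analogy of the previous chapter:

```python
# A drastic simplification (not the dissertation's AN2/S2A specification) of
# the idea behind AN2: try the task on Y taken as a whole, and also segment Y
# into two constituents, look for analogisands of each constituent under the
# same vehicle, and recombine. In the real agent the recombination is licensed
# by constructor (C-type) records and structural analogy; here it is crudely
# replaced by a check against a list of attested two-term combinations.
def an2(Y, A, B, paradigms, attested_combinations):
    results = dict(solve_anz((Y, A, B), paradigms))          # route 1: Y as a whole
    y1, y2 = Y.split(" ", 1)                                  # route 2: Y segmented
    alt1 = dict(solve_anz((y1, A, B), paradigms))
    alt2 = dict(solve_anz((y2, A, B), paradigms))
    alt1.setdefault(y1, 1.0)                                  # keep the unchanged constituent
    alt2.setdefault(y2, 1.0)                                  # as a candidate too
    for c1, s1 in alt1.items():
        for c2, s2 in alt2.items():
            candidate = f"{c1} {c2}"
            if candidate != Y and candidate in attested_combinations:
                results[candidate] = max(results.get(candidate, 0.0), s1 * s2)
    return sorted(results.items(), key=lambda kv: -kv[1])

gender = [("homme", "femme"), ("un", "une")]                  # a masc.-fem. paradigm
combos = {"un homme", "une femme"}                            # attested constructions
print(an2("un homme", "femme", "homme", [gender], combos))
# [('une femme', ...)] : gender agreement obtained without any gender feature
```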
Here are now a few test results, still with the same French plexus. Lines 1 to 6 show that
agreement performs well: gender agreement in simple noun phrases, person agreement
in verb conjugation. This is welcome and seems a minimum. The interesting point, of course, is how this is obtained: the agent responsible for these results is short-sighted and uses only systemic analogy and structural analogy; it knows nothing about things like verb, pronoun, noun, article, gender, etc. It works without heads or syntactic features. As in previous sections, a detailed analysis would show that these results are made possible by the integration of several fragmentary paradigms.
     Y              vehicle: A / B           phases   strength   result: X
1    un homme       femme / homme            10       .56        une femme
2    un homme       une / un                 9        .43        une femme
3    homme habile   femme / homme            10       .56        femme habile
4    homme habile   une / un                 9        .43        femme habile
5    je vais        allons / vais            5        .66        nous allons
6    je vais        nous / je                5        .66        nous allons
7    très gentil    suffisamment / assez     6        .59        extrêmement gentil

Table 11 Grammatical agreement with agent AN2
Figure 26 What is to très gentil as extrêmement is to assez. Task parameters: the vehicle is suffisamment : assez and the question is très | gentil. In the paradigm masc. | fem. (A10: gentil, gentille; A11: bon, bonne), going through the paradigm leads to attestations available for bon which are not available for gentil. The C-type record C1 (très + bon → très bon) is eligible to analyse Y, and so therefore are all the C-type records of its paradigm, notably C2 (extrêmement + bon → extrêmement bon). In the paradigm unmarked | emphatic (A20: assez, suffisamment; A21: très, extrêmement; A22: très bon, extrêmement bon), très bon leads to extrêmement bon. Constructor C2 assembles extrêmement + gentil by concatenation, and the way back through the masc. | fem. paradigm yields extrêmement gentil, the solution.
Finally, the speaker of which this plexus is a model has a good command of agreement
in two-term groups, that is roughly two-morpheme groups; this ability is not rule-based,
it is distributed and latent in the plexus and is revealed by a dynamics.
Line 7 shows something in addition: if one sees the pair assez : suffisamment (En. enough : sufficiently) as defining an [unmarked, emphatic] vehicle, then the form très gentil (En. very kind) is an unmarked form and the task consists of finding one or several emphatic homologs for it. The model finds extrêmement gentil (En. extremely kind) which, for this speaker, is a possible emphatization of très gentil149.
The figure above is a picture of the sort of inscriptions which are mobilized and of the paths which are taken. It may be consulted to get an approximate idea of the mechanisms at play but, although already complex, it remains "figurative": it ignores many unproductive search paths and focuses on those that finally produce; even in the latter, it skips numerous intermediate steps, and it does not rigorously reflect the settlement mechanism.
The agent succeeds by integrating, always in a short-sighted manner, data taken from three paradigms:
a) a C paradigm très + bon → très bon :: extrêmement + bon → extrêmement bon,
b) an A paradigm assez : suffisamment :: très : extrêmement, etc.,
c) and paradigms such as the A paradigm gentil : gentille :: bon : bonne, etc., which make it possible for gentil and bon to be considered similars; this in turn allows the construction in paradigm C to be applied to the term gentil.
The heuristic deployment becomes complex but its elementary movements remain simple: they are limited to the four abductive movements defined above. This new example illustrates once again the integrative effect of the computation.
Line 7 is also interesting because the axis of its vehicle, assez : suffisamment, which is termed "axis [unmarked, emphatic]" for convenience only, is now remote from what grammars have described with some success. It is vaguer and less recognized than the axis [singular, plural], for example. It is also less shared among speakers. However, it is a fact which demonstrates some systematicity and some productivity. In a speaking community there is at work an abundance of such oppositional axes, half-characterized and half-shared, which constitute the dubious frontier of grammar. Oppositions, forming themselves into paradigms, may appear and evolve rapidly in languages. These sorts of paradigms emerge, then reinforce themselves following fashions and influences among speakers, then generalize and entrench, or droop and disappear. In the face of this, categorial theories are helpless. With this model, it suffices to add or alter a small number of records in the plexii of the relevant speakers.
149 Some might object that this is not correct in French: extrêmement is not to très as suffisamment is to assez. It surely is not, but this model does not try to model received or standard French, and what is considered is a speaker who, at a given point in his history in the language, may make that particular analogy.
5.7.2. Limits of agent AN2
Agent AN2 succeeds in honouring constraints which play across paradigms and which lead, for example, to rendering agreement effects without requiring any ad hoc device, that is, without the syntactic features that are usually called upon for this. However, this is not sufficient, and agent AN2 has limits.
AN2 has a first defect which it inherits from agent ANZ, which it uses (remember that AN2 is a client of ANZ twice: i) directly, and ii) via S2A). AN2, using ANZ, inherits its low efficiency150 at priming time. More generally, AN2 also has a low efficiency in the rest of its operation: it makes an inefficient use of plexus inscriptions. To obtain results it requires more inscriptions than intuition suggests would be strictly necessary. This remark was made by B. Victorri at an early stage of the project; it is acknowledged, but I did not try to correct this defect as it is linked with the second one: the inextensibility of the agent.
The second limit of AN2 is indeed that it cannot be extended to more than two morphemes (more precisely, two terms); this follows from AN2's very function specification. In a conception which would seek, for the same function, to extend its scope, one would first have to understand what it can apply to: it makes no sense to require that a vehicle be applied (for example, putting something into the feminine) to a form of arbitrary length or with an arbitrary "categorial label" (in French, putting into the feminine makes sense for an NP, and sometimes for the group formed by NP + V in the case of past-participle agreement; it makes no sense for an adverb or a multipropositional utterance). Having to determine this scope leads to the second question about this extension: when do we know what vehicle has to be applied to what form, and why? In a realistic act like emission or reception, when and how are we led to assign the model a task requiring a function like that of AN2?
5.8. Conclusions on systemic productivity
In this chapter, it was shown that structural productivity does not exhaust linguistic productivity. Beside it, a systemic productivity was recognized as necessary. It has a dynamics of its own and, even if it very soon conjoins with structural productivity, it is antecedent to it.
The dynamics of systemic productivity was constructed by means of the abductive
movement by transposition (and that by transitivity). Organically, this motivated the
introduction of agent ANZ which is the base organ of the model for this productivity.
Several case studies showed how this agent draws on plexus resources in different ways,
and the model's integrativity received new illustrations.
150 "Efficiency" is informally defined as the quantity of productions which a model can abduct divided by the quantity of inscriptions in the plexus. The word "productivity" is reserved for the idea that a vast number of new utterances may be produced after exposure to a much smaller number of utterances. "Efficiency" relates to a model and "productivity" to the object of investigation, so they are different. For example, one may have to say that a model accounts for linguistic productivity but does so with poor efficiency.
This showed the base mechanism of the (re)construction of pluridimensional systems by
the learning speaking subject.
The Auvergnats and Bavarois case illustrated the possible lexical dimension of systems.
The question of agreement was encountered and qualified as a mixed productivity phenomenon: both structural and systemic. A first-approach solution was proposed and discussed; it is limited because it cannot be extended.
The standpoint reached in this chapter admits of the following extensions (which are not carried out in this dissertation):
a) Massive use of the base dynamics of systemic productivity on the verbal paradigm of a language which differentiates its forms well (a Romance or Slavic language, for example), to demonstrate a sigmoidal acceleration of learning (avalanche effect). This poses no particular conceptual problem and is just a question of the time to dedicate to an experiment which is somewhat heavy to conduct.
b) Use of the base dynamics of systemic productivity in combination with the
structural dynamics; this poses a conceptual problem and is a prerequisite to the
forthcoming items.
c) Exploration of the grey zone between anomaly and analogy in this domain. For a subject not yet endowed with a pedagogical, dogmatic knowledge (no pre-established multidimensional frame has been presented to him as a norm), show how a starting configuration of inscriptions in which some are anomalous and others already formally analogical (or which presents several formally analogical subsystems with contact points between them) constitutes a field where regularizations (occasionally perceived from the outside as overgeneralizations) may develop in different directions.
d) Generalization of the agreement dynamics to more than two terms.
Within these limits, this chapter showed how pluridimensional linguistic systems re-implement themselves in speakers, with contingency residues, as the effect of an elementary dynamics.
The list of grammar effects rendered by dynamics that are antecedent to grammars is
now complemented with the following ones: new sub-categorization effects, system
effects, syntactic feature effects.
Here again, it is not an antecedent grammatical description that conditions the understanding of the dynamics. It is the prior elucidation of the dynamics which allows the effects to be reconstructed. The latter may, at a second stage, become the subject of grammatical discourse; but this comes second.
Chapter 6.
More questions of grammar and description
For some notions, traditional ones or more recent ones, this chapter shows how the "grammatical" vision that other theories provided is affected by the analogical and exemplarist approach proposed here.
These notions generally lose their necessity or see it much weakened, but before
dispensing with them, it is necessary to show how the needs which they were intended
to meet are now covered.
6.1. Morpheme, word, syntagm
6.1.1. Word
The notion 'word', as a component of grammatical description or as a theoretical component, is not postulated in this model: it depends too much, cross-linguistically and in time, on certain descriptive traditions. The least bad criterion for defining the word has been that of cohesion: morphemes constitute a word when syntax does not make it possible to insert anything between them. Now cohesion is a de facto effect which results from i) terms being motivated by structure mapping, ii) the dynamics based on plexus inscriptions, and iii) the fact that C-type records (including expansive gates) license some assemblies and not others. Therefore, there is no need for a particular descriptive entity, the 'word', to account for it.
One of the effects of the notion 'word' would be to found the separation between morphology and syntax. Now precisely, it appears not very useful to separate morphology and syntax with defined criteria (below).
There is therefore no 'word' in the model. This option is coherent with the suspension of minimality: for a 'language with words' (shorthand for 'a language in which a descriptive tradition finds words') it will be possible to distinguish terms shorter than words, terms longer than words, and both of these alongside words themselves. This option is also consistent with the following conclusion, drawn from the dead ends of descriptive approaches and from the suggestions of the connectionists:
The conception of the lexicon which recurrent networks suggest, contradicts the
lexicographic position. Words, as entries in a list, do not exist because there is, properly
speaking, no remembering from an independently stored, decontextualized knowledge.
Words are always reactivated in a specific context from the memory traces constituted
by the connections weighted by experience. As mental states thus reactivated, they
correspond to interpretative cues geared towards the analysis of a given situation, not to
building blocks that would exist independently of their usage. If they have an
independent (that is: lexical) existence this must be seen only as the secondary effect of
their recurrence, much the same way as a prototype is just the invariant part of all its
actualizations. As an abstract lexical entry, more or less invariant, they belong to a
conceptualized knowledge of language, which is derived and reflexive, not to language
at work. Otherwise said, words are postulations of grammarians or of lexicographers, in the double sense that they are actually produced by grammarians and that any speaker ends up defining a reflexive knowledge of his own practice. Laks 1996, p. 115.
Instead of the word, the Analogical Speaker favours the term. The term is subject to what has been called the 'suspension of minimality': a term may be a word, a morpheme, or longer, or shorter; various examples have been given.
There being no word in the model does not prevent treating written language with spaces between words; this is the case for all plexii made so far. Spaces may occur within terms, but the space is ascribed no particular role: it is treated like any other written letter. In an analysis by agents B2 and B3, for example, the parsing for terms in the received form grants the space no particular role.
By contrast, an important role is played by term demarcations as they appear in C-type
records. They influence immediately and directly the structure mappings of the received
form onto the plexus content.
The stability (or fragility) of the notion 'word' does not hang solely on what its length would be. We must also examine the cases in which, for the same span, theories have found reasons to see one word only, or several.
6.1.2. Homography, accidental homonymy, syncretism
6.1.2.1. Statement of the question and orientations for its solution
Classically, these cases are homography or homonymy; they encompass accidental homonymy, syncretism, "improper derivation", etc.
All these cases are characterized by a single form; but in order to understand it in its contexts of occurrence, various theories or analytical frameworks which approach language through objects and properties (therefore partonomically) found the need to distinguish several words, or alternatively to postulate one word only, but one which may occupy several places in a pluridimensional paradigm.
Thus for example Arnauld and Lancelot (cf. p. 26), observing that Latin does not
differentiate ablative and dative in the plural, maintain however the distinction ablative
plural and dative plural, these two places being systematically occupied by equal forms,
syncretic by this alone, because doing otherwise "would blur the analogy of the [Latin]
language". And, almost worse, blur the analogy between Latin and Greek!
Now these cases are 'oblique' cases, that is, more marked ones, if one accepts markedness in syntax; they are also the least commonly used cases, and it is of no small importance that it is here that languages make fewer differences. So it is in French for the definite article which, in the plural (les), does not differ for gender. This happens in the plural, which means, again, forms that are more marked and less frequent.
The fact occurs in numerous phenomena of numerous languages. Imposing differences against the evidence of the form, by submission to exceedingly coarse analysis frames, simply amounts to ignoring that languages differentiate their expressive resources in proportion to the cognitive importance of the differences to be made. Forcing artificial differences certainly departs from the functioning of speakers, which is "optimal" in a certain sense.
This is also to be seen in the paradigm of the article in contemporary German. In its usual presentation, one perceives homonymous forms but no particular organizing principle, and the overall impression is one of great confusion.
       masc.  fem.  neut.  plur.
nom.   der    die   das    die
gen.   des    der   des    der
dat.   dem    der   dem    den
acc.   den    die   das    die
Now a reshuffling of columns and cases (the rows) reveals a very different picture:
       masc.  neut.  fem.  plur.
nom.   der    das    die   die
acc.   den    das    die   die
gen.   des    des    der   der
dat.   dem    dem    der   den
As in Latin, the conjunction [plural • oblique cases]151 is less differentiated and, in this German example, indifferentiation also extends to the feminine. The greatest difference shows up in [masculine • direct cases]. These facts are cognitively relevant. A categorial analysis of the type [gender • number • case] masks them completely and, moreover, creates an artificial problem of homonymy, thereby imposing the artificial burden of having to "disambiguate"; designers of computer programs for syntactic analysis based on such theories will understand what is meant here. It is more faithful to the facts to refrain from believing in a grammatical number that would cross systematically with grammatical case, or in a gender that would be differentiated in all numbers.
151 The bold dot • denotes the Cartesian product.
A more systematic investigation into such phenomena was made by Jason Johnston152. For a variety of European and African languages whose inflexional paradigms he studies, Johnston finds that the syncretic forms (systematic homonymies for him) always lend themselves to regrouping if we are allowed to reorder the rows and columns of the pluridimensional paradigms. He concludes that classical analyses by features are inadequate: cross-classifying binary features are incorrect, they fail to predict the linearizability [for Johnston, linearizability is the rearrangement of rows and columns] of natural classes of properties. This meets the conclusion drawn above on the German article.
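Johnston's idea can be conveyed with a small, admittedly brute-force sketch (hypothetical Python code; the paradigm data is the standard German definite article, and the adjacency score is only a toy stand-in for his precise criterion): search over row and column orders for one under which identical forms sit next to each other.

```python
# Toy stand-in for Johnston-style linearizability (not his actual criterion):
# look for row/column orderings of the German article paradigm under which
# syncretic (identical) forms end up adjacent, i.e. cluster into regions.
from itertools import permutations

CASES   = ["nom", "gen", "dat", "acc"]
GENDERS = ["masc", "fem", "neut", "plur"]
ARTICLE = {("nom", "masc"): "der", ("nom", "fem"): "die", ("nom", "neut"): "das", ("nom", "plur"): "die",
           ("gen", "masc"): "des", ("gen", "fem"): "der", ("gen", "neut"): "des", ("gen", "plur"): "der",
           ("dat", "masc"): "dem", ("dat", "fem"): "der", ("dat", "neut"): "dem", ("dat", "plur"): "den",
           ("acc", "masc"): "den", ("acc", "fem"): "die", ("acc", "neut"): "das", ("acc", "plur"): "die"}

def adjacency(rows, cols):
    """Count horizontally or vertically adjacent cells that hold the same form."""
    score = 0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            if j + 1 < len(cols) and ARTICLE[r, c] == ARTICLE[r, cols[j + 1]]:
                score += 1
            if i + 1 < len(rows) and ARTICLE[r, c] == ARTICLE[rows[i + 1], c]:
                score += 1
    return score

score, rows, cols = max((adjacency(r, c), r, c)
                        for r in permutations(CASES) for c in permutations(GENDERS))
print(score, rows, cols)   # one ordering that maximizes the clustering of identical forms
```

Johnston's own criterion is stricter (contiguous regions per inflectional dimension); the sketch only conveys the flavour of reordering rows and columns rather than positing cross-classifying features.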
Certainly a theory based on categories has no other option, but we do not posit
categories; we do not compel paradigms to follow frames like [gender • number •
definiteness], or [case • number], for example. Moreover, terms are empty, they are not
property-bearers, at no moment must we assign them a gender, a number, a declension
case, etc.
What is suggested, then, is to adopt a faithfulness to form principle, that is, to refrain from postulating two linguistic beings where only one form is produced by the structure mappings across exemplarist utterances.
In French, indeed, the definite article in the singular is twofold depending on gender, but in the plural there is only one. So only three terms are needed: le, la and les.
It is nonetheless possible to write:
le Marocain : la Marocaine :: les Marocains : les Marocaines
without the two les raising a difficulty, because they are contained in longer terms in which the nouns differentiate the genders, even in the plural.
By contrast, it would be inappropriate and harmful to claim the following analogy:
le : la :: les : les.
This is because:
- Doing so with only one term les would create confusion about the gender effect, and the use of this inscription would introduce high noise into the results153; this analogy would be wrong.
- Choosing on the contrary to make of les two different terms (two distinct but homonymous "words"), one masculine and the other feminine – which is what Shaumjan and Mel'cuk would recommend, against Bloomfield, cf. below – would be analogically acceptable but would infringe the faithfulness to form principle.
152 Johnston 1997, excerpt from the introduction: This thesis takes as its starting point proposals to model inflectional paradigms as geometrical structures, wherein systematic homonymies are constrained to occupy contiguous regions. It defines a precise criterion for assessing systematicity and shows, for a range of largely Indo-European and Afro-Asiatic data, that such models are observationally adequate in modelling systematic homonymies within a single inflectional dimension, and to a lesser extent, between different inflectional dimensions. This is taken to indicate that widely assumed characterizations of inflectional categories in terms of cross-classifying binary features are incorrect, inasmuch as such characterizations fail to predict the linearizability of natural classes of properties belonging to those categories. The same inadequacy besets attempts to account for systematic homonymies by means of rules that convert or 'refer' one morpho-syntactic representation to another.
153 Remember that an analogy is all the better the closer it is to bijection, that is, the nearer it comes to a function in the mathematical sense, otherwise said, a biunivocal mapping. This is not verified here, since the terms of the singular map onto one only in the plural.
In a case which combines two syncretisms, it is also possible to write:
le Suédois : la Suédoise :: les Suédois : les Suédoises
despite the additional syncretism between Suédois (masc. sing.) and Suédois (masc. plur.), because the analogous forms are here again all different, whether through the noun or through the article, so that the four terms of this analogy are different even when their constituents are not.
From these examples, we can now abstract the principle adopted for the inscriptions:
when making inscriptions in a plexus (A-type records and C-type records), syncretic
forms must not be used directly as terms, and they must not be dissociated into as many
homonyms as the places they are deemed to occupy in analysed pluridimensional
frames; on the contrary, insert such forms in contexts that are broad enough for the
required analogies to hold between overtly different terms. The principle of suspension
of minimality finds here a precious application.
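As a purely illustrative check (a hypothetical helper, not part of the model), the principle can be stated as a constraint on the analogies admitted into a plexus: the four terms must remain overtly distinct, which broad-enough contexts guarantee even when constituents are syncretic.

```python
# Hypothetical illustration of "faithfulness to form": syncretic forms are not
# split into homonyms; they are inscribed in contexts broad enough for the four
# terms of an analogy to stay overtly different.
def acceptable_analogy(a, b, c, d):
    """Reject degenerate analogies whose terms collapse (e.g. le : la :: les : les)."""
    return len({a, b, c, d}) == 4

print(acceptable_analogy("le", "la", "les", "les"))                   # False
print(acceptable_analogy("le Marocain", "la Marocaine",
                         "les Marocains", "les Marocaines"))          # True
print(acceptable_analogy("le Suédois", "la Suédoise",
                         "les Suédois", "les Suédoises"))             # True
```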
This approach suppresses the need to wonder whether the same form must be analysed as one word or as several words. This was long a worrisome question with questionable solutions. For example, Bloomfield and Shaumjan disagree about it. Bloomfield holds that the word is a form154 (the point here is not whether it is a free form or not; what matters is that Bloomfield identifies the word with the form). Shaumjan – followed by Mel'cuk in this matter – contradicts this view in several respects, notably this one: where Bloomfield sees one word only, Shaumjan wants as many (as many grammatical words) as there are places in the analysis system155: the word must be "defined through the notion of syntactic function". The proposition in this thesis is closer to that of Bloomfield: using his example, shut should not appear standalone in the plexus; rather, it must appear in contexts such as the Louvre shut yesterday or keep your mouth shut. In this way, one no longer has to differentiate "homonymous words" according to their syntactic function, nor to fuse them into one word only.
An indication of the incidence of either option in the model can be provided.
154 A word is a minimal free form. Bloomfield 1933, quoted by Shaumjan 1997, p. 285.
155 Bloomfield definition of the word is not satisfactory for several reasons: 1. […], 2. Bloomfield confounds the phonological representation of the word with the grammatical notion of the word. Thus the phonological word [likt] and the corresponding orthographic word licked represent a particular grammatical word that can be characterized as the past tense of lick. But the phonological word [∫t] and the corresponding orthographic word shut represent three different grammatical words: the present tense of shut, the past tense of shut, and the past participle of shut. 3. […] within applicative grammar, the main classes of words are morphological crystallizations of the basic syntaxemes: predicates crystallize into verbs, terms crystallize into nouns, modifiers of predicates crystallize into adverbs, modifiers of terms crystallize into adjectives. NAd? subclasses of words are crystallizations of their different paradigmatic functions. A definition of the word must be independent of the notion of the morpheme. The word must be defined through the notion of syntactic function. A word is a minimal linguistic unit that is capable of having various syntactic and paradigmatic functions either (1) by itself or (2) together with a word of type (1), meeting in the latter case the condition of separability. "Minimal" means that a word contains no other word. Shaumjan 1987, p. 285.
6.1.2.2. Plexus before elimination of homographies
form      term 1                       term 2                        term 3
la        article                      clitic
des       article                      amalgamation of de + les
que       as in chaque fois que        as in je crois que
si        as in si fort, si grand      as in si je veux
-er       infinitive, 1st group        as in premier, dernier
-es       indicative present 2S        mark of fem. plural
-e        indicative present 1S        mark of feminine
viens     indicative present 1S        indicative present 2S
arrive    indicative present 1S        indicative present 3S         imperative 2S
voyage    indicative present 1S        indicative present 3S         noun
fait      indicative present 3S        past participle
fatigue   v. fatiguer, ind. pres. 3S   v. fatiguer, ind. pres. 1S    noun
attent-   as in attention              as in attentat
habite    indicative present 1S        indicative present 3S
vis       indicative present 1S        indicative present 2S
veux      indicative present 1S        indicative present 2S
ferme     indicative present 3S        En. farm                      En. firm
été       v. être, past participle     the hot season
-         contre-jour                  voulez-vous                   dix-sept, cent-deux
Table 12: Homographies before elimination
In the French plexus, a first development stage contains 19 forms which give rise to homograph terms: each form corresponds to two or three terms. Some of them (arrive, viens) are syncretic, some others (ferme, été, -es) are accidental homonymies, and for a few remaining ones (la, des, fait) it is difficult to decide.
The first step in the experiment consists in fusing such terms so as to eliminate any homography from the plexus: doing so applies the faithfulness to form principle.
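A hypothetical sketch of this fusion step (the data structure and ids are invented; the real plexus records are richer) keeps one term per written form and remaps every reference to the discarded duplicates:

```python
# Sketch only: fuse homograph terms so that a single term per written form
# survives; disambiguation is left to context at run time, not to the lexicon.
plexus_terms = [
    {"id": 101, "form": "ferme", "note": "indicative present 3S"},
    {"id": 102, "form": "ferme", "note": "En. farm"},
    {"id": 103, "form": "ferme", "note": "En. firm"},
    {"id": 104, "form": "été",   "note": "v. être, past participle"},
    {"id": 105, "form": "été",   "note": "the hot season"},
]

def fuse_homographs(terms):
    """Keep the first term seen for each form; map every old id to the keeper's id."""
    survivors, remap = {}, {}
    for t in terms:
        keeper = survivors.setdefault(t["form"], t)
        remap[t["id"]] = keeper["id"]
    return list(survivors.values()), remap

terms, remap = fuse_homographs(plexus_terms)
print([t["form"] for t in terms])   # ['ferme', 'été']
print(remap)                        # {101: 101, 102: 101, 103: 101, 104: 104, 105: 104}
```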
6.1.2.3. Effect of the elimination on tasks without apparent homography
The model is first tested with tasks that are deemed "without homography" because they do not contain, directly visible in the utterance, homograph terms – ones which would traditionally be analysed as such – like ferme, voyage or été. Yet they do contain others, embedded in lower levels of the analysis. Hidden, shorter homographs like -es or -e (see the table above) occur; they may be seen as parasitic. It is interesting to see how the behaviour of the model is affected by the elimination of homographs.
To that end, six utterances are analysed i) before the reduction of homographs, and ii)
after it. For each utterance, the table displays the number of phases needed to obtain the
first analysis, the number of agents, and the number of products. All three numbers are
provided before and after reduction.
164
test utterance
phase
nb of agents
nb of products
before/after
before/after loss
before/after loss
1 un très grand jour
2/2
311/326
287/293
2%
2 une très grande maison
5/5
1443/1587
10%
1576/1705
8%
3 séjour de vacances
4/4
547/765
40%
674/810
4 bon séjour en France
18/25
1613/2072
28%
2083/2324 12%
5 elle est arrivée avec son homme
4/4
1044/1170
12%
1112/1152
3%
6 elle est arrivée avec son homme et 7/7
son cheval
1898/2076
9%
2000/2087
4%
5%
20%
Table 13 Compared tests, before elimination of homographs and after
The volume of the heuristic structure (agents and products) increased by about 15% on average, and roughly twice as much for agents as for products156.
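The growth figures can be recomputed directly from the table (a small hypothetical helper; rounding may differ by a point from the printed percentages):

```python
# Recompute the relative growth of agents and products after homograph fusion.
tests = {  # utterance: ((agents before, after), (products before, after))
    "un très grand jour":                            ((311, 326),  (287, 293)),
    "une très grande maison":                        ((1443, 1587), (1576, 1705)),
    "séjour de vacances":                            ((547, 765),  (674, 810)),
    "bon séjour en France":                          ((1613, 2072), (2083, 2324)),
    "elle est arrivée avec son homme":               ((1044, 1170), (1112, 1152)),
    "elle est arrivée avec son homme et son cheval": ((1898, 2076), (2000, 2087)),
}
growth = lambda before, after: 100 * (after - before) / before
agents   = [growth(*a) for a, _ in tests.values()]
products = [growth(*p) for _, p in tests.values()]
print([round(g) for g in agents])     # around [5, 10, 40, 28, 12, 9]
print([round(g) for g in products])   # around [2, 8, 20, 12, 4, 4]
print(round(sum(agents) / 6), round(sum(products) / 6))  # agents grow roughly twice as much
```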
Test 3 shows an important increase. One contributing factor was the merging of -es as a verbal inflexion mark and -es as the feminine plural mark. There may have been other factors.
Test 4 displays a surprising increase of seven phases, which may be explained by some records disappearing as a consequence of the reduction. The computation had to take different, longer paths. The cost increase, in agent and product numbers, is significant without being explosive.
This increase in computational cost is the price to pay for getting rid of the categorialist facility that the differentiation of syncretic or homograph terms constituted in the previous state. The model in its new state bears the added cognitive load of discriminating en passant and "categorizing" terms which are now more ambiguous.
The utterances under test get analysed for the same reasons as before, that is, they are licensed by the same records. This is not documented in the table above, but it is reassuring: the computation of meaning, once we know how to do it, would have the same basis whether homographs are reduced or not. This is a sort of guarantee of stability. This remark, however, is relative, as these tests contain no explicit, "true" homograph, which suggests another test.
156 That the increase in the number of products is half that of the agents draws attention. The following interpretation may be proposed. Agents mostly reflect heuristic invention: they are opportunity-seekers spreading in several directions in search of settlement conditions, that is, of favourable conditions holding between the terms of the task and those of the plexus. Products, by contrast, sanction settlements only when they occur; they are more directly dependent on the congruence between the terms of the task and those of the plexus; they depend more directly on the "possible of language", to quote Milner. Accepting this helps in understanding why homography makes the process search a lot more but find only a little more. Fair enough, but then why does it find anything more at all? At the last stages of analysis, that is, when the entire utterance is analysed (assuming that there is not, at this stage, a final ambiguity to which homography might contribute), the process indeed finds nothing more. But before getting there, in the intermediate steps of the analysis, some additional hypotheses arise temporarily, giving birth to some additional products. This is why the number of products also increases, though less than that of the agents.
6.1.2.4. A test with a "transcategorial homography"
This new test bears on the French form été which, taken out of context, can be either a form (En. been) of the verb être (En. to be) or the hot season (En. summer). The intent is to show that a context which determines one of these interpretations suffices for the model to relate utterances in which the ambiguous form is contextualized to appropriate licensing analogs in the plexus.
Thus the form cet été, for example, gets licensed in five computation phases by une semaine (En. a week) and le soir (En. the evening), without any interference from the past participle of être.
(cet été)          span of channel 6 (ph 5)
(cet )(été)        how ag 43 segments the span
[une][semaine]     attests the segmentation (finding 359 on record 1087)
(cet )             span of channel 3 (ph 1)
[cet]              attests as setup term 89 setting up channel 3
(été)              span of channel 1 (ph 1)
[été]              attests as setup term 2074 setting up channel 1
[le][soir]         attests the segmentation (finding 363 on record 255)
(cet )             as per channel 3, already exposed
(été)              as per channel 1, already exposed
Form nous avons été (En. we have been), in turn, is analysed in two phases, licensed by
the verbal construction il a fait (En. he has done). Here, the season été (En. summer)
has no place.
(nous avons été)       span of channel 15 (ph 2)
(nous )(avons été)     how ag 279 segments the span
[il][a fait]           attests the segmentation (finding 280 on record 1256)
(nous )                span of channel 11 (ph 1)
[nous]                 attests as setup term 526 setting up channel 11
(avons été)            span of channel 13 (ph 2)
(avons )(été)          how ag 116 segments the span
[a][fait]              attests the segmentation (finding 241 on record 1280)
(avons )               span of channel 4 (ph 1)
[avons]                attests as setup term 527 setting up channel 4
(été)                  span of channel 1 (ph 1)
[été]                  attests as setup term 2074 setting up channel 1
Thus categorization effects become insensitive to homonymy provided the context makes the forms unambiguous. Here again, the day we know how to "compute meaning", we will be able to avail ourselves of the appropriate bases to do it.
The separation effect is easy to understand: the distributions of été-summer and été-been, in their possible constructions, are different enough for the plexus paradigms of their exemplars to relate them to analogs which are 'natural' to either, well before interferences – always possible, but remote and necessarily weak – have occasion to arise.
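A toy rendering of this separation effect (the records and paradigm links below are invented for the illustration, loosely echoing the traces above; this is not the B2/B3 agents) shows how the same form été reaches different licensors purely through its left context:

```python
# Toy sketch: context alone routes the single term "été" to different analogs.
c_records = [                         # attested two-term assemblies (C-type records)
    ("une", "semaine"), ("le", "soir"), ("cet", "hiver"),
    ("il", "a fait"), ("a", "fait"),
]
left_paradigm = {                     # A-type co-positionings of left terms
    "cet": {"une", "le", "ce"},
    "avons": {"a"},
}

def licensors(left_term):
    """C-type records whose left term is paradigm-close to the received left term."""
    mates = left_paradigm.get(left_term, set()) | {left_term}
    return [rec for rec in c_records if rec[0] in mates]

print(licensors("cet"))     # [('une', 'semaine'), ('le', 'soir'), ('cet', 'hiver')]
print(licensors("avons"))   # [('a', 'fait')]
```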
All this removes one more reason to postulate "lexical items" and brings us closer to doing justice, operationally, to this precious intuition: "language is form and not substance"157.
This completes the investigation of the cases in which one is tempted to postulate different words where only one form is perceived.
6.1.3. Allomorphy
The opposite situations are those in which, facing several different forms, we would have reasons to postulate only one linguistic being (word, lexeme, or morpheme). There are two such situations: allomorphy – which applies to radicals and bases – and group sensitivity – which applies to conjugation affixes and to case-marking affixes.
6.1.3.1. Allomorphy
The examples are:
- Fr. vais/allons/irai/fus (En. go pres. 1S / go pres. 1P and imp. 1P / shall go fut. 1S / was, were in certain persons),
- En. be/am/is/are/was, eat/ate (an apophony here?),
- Jap. ii/yoi (to be well, to be good); yet this may also be analysed as a defectivity of ii, the homologous forms of yoi being called in suppletion of the non-existing ones of ii.
For Ducrot (1995):
Two morphs are of the same morpheme (and then are said to be allomorphs) if they
carry the same semantic information, and if their substitution:
- either is never possible in the same context, this is the case with i and al (ira, allons)
which can never be substituted since they are imposed by the person and the tense of
the verb,
- or is possible in any context without meaning alteration, this is the case with ne …
pas, ne … point. This is also the case with peux and puis which are always
substitutable.
Of the two cases envisaged by Ducrot, only the first will count here, that of forms in complementary distribution: context imposes one of them exclusively. Allomorphy,
which is an anomaly, is often associated with the anomaly of forms (vais) but not
always: it may be concomitant with "parochial" sub-domains, which are locally regular
(allons, allez, allions, alliez; irai, iras, ira, irons, irez, iront), but the frontiers of such
sub-domains are contingent.
Phenomena of allomorphy ( /flœr/ fleur ~ /flor/ floral ) or of suppletion (jeu ~ ludique), very frequent in morphology, have no clear syntactic equivalent158.
157 Saussure, Cours et Ecrits.
158 Houdé 1998, p. 278.
6.1.3.2. Theories addressing allomorphy
The most common solution is the "grammatical word" or abstract morpheme with
conditional realization159. It is that of Martinet for example:
The 'monème' makes it easy to describe phenomena for which the Americans created
the concept of allomorph and of portmanteau morph. Ducrot 1995, p. 434.
One describes without difficulty, but one only describes; and Distributed Morphology (DM) does no better:
DM recognizes two different types of allomorphy: suppletive and morphological.
Suppletive allomorphy occurs where different Vocabulary Items compete for insertion
into an f-morpheme. For example, Dutch nouns have (at least) two plural number
suffixes, -en and -s. The conditions for the choice are partly phonological and partly
idiosyncratic. Since -en and -s are not plausibly related phonologically, they must
constitute two Vocabulary items in competition.
Morphological allomorphy occurs where a single Vocabulary item has various
phonologically similar underlying forms, but where the similarity is not such that
Phonology can be directly responsible for the variation. For example, destroy and
destruct- represent stem allomorphs of a single Vocabulary item; the latter allomorph
occurs in the nominalization context. DM hypothesizes that in such cases there is a
single basic allomorph, and the others are derived from it by a rule of Readjustment.
The Readjustment in this case replaces the Rime of the final syllable of destroy with -uct160.
The rule of readjustment is designed to readjust, and it readjusts; but how does the notion 'readjustment' fit into a theory? What makes it anything more than ad hoc?
6.1.3.3. The model addressing allomorphy
Since categories are refuted, it is not possible to postulate a lemma or abstract morpheme with conditional realization, like the verb aller 'in abstracto', and this is not attempted. Secondly, since there are no rules, the path of readjustment cannot be taken either, and this is not regretted.
The job is done by A-type paradigms in the plexus. Analogies like:
(a)
irai : vais :: mangerai : mange
give the forms irai and vais the same opportunities to enter the copositional computation as "regular" ones like mangerai and mange. But this is still true when both
pairs in an analogy contain suppletive bases:
(b1)
irons : allons :: irai : vais
(b2)
go : eat :: went : ate
the "regularity", or not, of these forms does not prevent analogies like (b1) and (b2) to
function on their own, and then to integrate their effects with other ones: simply, an
analogy like (a) puts in addition the pair irai : vais in communication with a regular
159
Which is sometimes called 'lemma' in natural language processing.
160
http://www.ling.upenn.edu/~rnoyer/dm/#impoverishment, 2000/02/13.
168
zone (via the pair mangerai :: mange) and thus with the formation of a great number of
forms abducted by suffixation, that is, it extends much its potential efficiency.
The analogical task, and behind it systemic productivity, is not vulnerable to allomorphy as soon as the forms that use suppletive bases are copositioned with other forms. Moreover, it is roughly indifferent to whether these present a locally sub-regular affixal inflexion, as in irai, iras, ira. This means that agents ANZ and AN2 behave on allomorphs with the same felicity as on regular forms, and as on many other irregularities.
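The point can be made concrete with a toy sketch (invented helper functions; this is not agent ANZ): co-positioned pairs carry suppletive forms exactly as they carry regular ones, and only the regular pairs additionally feed abduction by suffixation.

```python
# Toy sketch: an A-type paradigm stores co-positioned exemplar pairs; nothing
# requires the two forms of a pair to share a stem, so allomorphy costs nothing.
future_present = [("irai", "vais"),        # suppletive bases
                  ("mangerai", "mange"),   # regular pair
                  ("chanterai", "chante")]

def homolog(form, paradigm):
    """Direct raising: the co-positioned term, if the pair is attested."""
    for a, b in paradigm:
        if form == a:
            return b
        if form == b:
            return a
    return None

def abduce(form, paradigm):
    attested = homolog(form, paradigm)
    if attested:
        return attested                           # escalation principle: raise, don't assemble
    for a, b in paradigm:                         # crude stand-in for abduction by suffixation
        if a.endswith("erai") and b.endswith("e") and form.endswith("erai"):
            return form[:-len("erai")] + "e"
    return None

print(abduce("irai", future_present))      # -> vais   (suppletive, directly raised)
print(abduce("parlerai", future_present))  # -> parle  (unattested, abducted by analogy)
```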
An example of this was already discussed in section 5.3. French verb, two paradigms
playing integratively (p. 139).
In other words, allomorphy is not an obstacle to relating a form to its best analogs. So, here again, when we can compute meaning, we will have the appropriate bases available. The proposal will then be validated by demonstrating that similar meaning effects are recovered from formally different allomorph terms: the terms are formally different, but the model succeeds in circumventing these differences. We do not even have to fear that allomorphies create a processing or cognitive overload, if direct raising of a readily inscribed form is supposed to be cheaper than assembling it (escalation principle): as allomorphy applies to the more frequently used forms, it is expected that such forms are inscribed by many occurrences in a plexus and hence are directly raised, not assembled.
6.1.4. Group sensitivity
The second occasion on which one can be tempted to postulate only one "abstract morpheme" covering several different forms is that of grammatical groups: conjugation groups and declension groups. With a French plexus containing:
a) je blanch-is, blanch-ir
b) je chant-e, chant-er
then, from je finis, the model should abduct finir and not a form like finer. Doing this correctly would demonstrate "group sensitivity". The same need arises for declension groups, in Latin or Russian for example, and for other group phenomena.
Conjugation groups and declension groups share with allomorphy the fact that the morphemes involved (flexion morphemes and case morphemes) present forms in non-optional complementary distribution.
It is no longer the bases which are in complementary distribution; here it is the inflexional morphemes. That aside, they too are forms which occupy a place in a system, and in that place several of them are possible. In the given example, the place is "indic. pres. 1S" and the corresponding form is realized as -e (je rêve) or -is (je lis). What is the indication which selects a form among the possible ones at a given place? In allomorphy, it is a place in a system. For the verb aller, the indication "indic. pres. 1P" selects all- and the indication "indic. future" selects ir-.
The case of groups is more complex. The clause is of the type:
(C) finir (and not finer) because je finis, je blanchis, and because blanchir (and not blancher).
That is to say, a pair like je finis, je blanchis must be introduced, along with other such pairs, into an analogical game which now involves more elements.
In feature-based models, the question of conjugation groups can be solved by
introducing a syntactic feature for the required group. For case allomorphy (that is, for
declension groups), Latin nominative is nominative whatever the declension group; it is
the grammatical category 'nominative' which reduces this allomorphy. Here again, the
device encompasses a syntactic feature.
This is rejected in this model, as features are neither applicable nor desirable. What is needed is a mechanism respecting clause (C) above. It must implement this as an effect, in a non-categorical, hopefully cognitively founded manner. It should also be implementationally plausible. A solution on this point has not yet been found; an extension of agent AN2 is a possible track, but other avenues should also be explored.
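Purely as an illustration of what clause (C) demands – not as the missing mechanism, which the text above leaves open – the selection of finir over finer can be sketched as picking the licensing pair by formal proximity and then transposing that pair's own alternation (hypothetical code and toy data):

```python
# Illustration only: clause (C) as "pick the attested pair closest in form,
# then transpose its alternation", yielding finir rather than finer.
attested = [("blanchis", "blanchir"), ("chante", "chanter")]

def shared_suffix_len(a, b):
    k = 0
    while k < min(len(a), len(b)) and a[-1 - k] == b[-1 - k]:
        k += 1
    return k

def abduct_infinitive(form):
    # je finis sides with je blanchis (shared -is), not with je chante
    finite, inf = max(attested, key=lambda pair: shared_suffix_len(form, pair[0]))
    p = 0                                   # transpose the pair's own alternation
    while p < min(len(finite), len(inf)) and finite[p] == inf[p]:
        p += 1
    tail_finite, tail_inf = finite[p:], inf[p:]
    if form.endswith(tail_finite):
        return form[: len(form) - len(tail_finite)] + tail_inf
    return None

print(abduct_infinitive("finis"))   # -> finir  (via blanchis : blanchir)
print(abduct_infinitive("parle"))   # -> parler (via chante : chanter)
```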
6.1.5. Sub-categorization
In theories with categories, the question of sub-categorization arises when one realizes that it is impossible to build a set of lexical categories in which each particular category provides, about its instances (the lexical entries), all the information required to determine their behaviour. For example, for nouns it must be possible to distinguish count nouns from mass nouns, animates from inanimates, humans from non-humans, referents that may be possessed from ones that may not, etc.; for verbs, the intransitives from the transitives and, among the latter, the direct ones from the indirect ones, etc. The number of distinctions to make is not bounded a priori, and they mix up formal viewpoints with semantic ones. Crossing all these criteria is impossible because it causes an explosion in the set of sub-categories and renders the theory intractable.
Current theories address this difficulty in one of two ways: either they accept a large set of categories and organize them into a lattice with multiple inheritance, as construction grammars do (cf. Chap. 1), or they use feature structures, as in unification theories such as HPSG. Both ways achieve a certain categorial flexibility, with some residual rigidity, heavy functioning, and null plausibility.
The example in Figure 27 illustrates one of the means proposed to treat sub-categorization. It is a plexus paradigm which bears on the ditransitive construction in English161.
161 In previous examples, when a paradigm was drawn, a single edge between two records was sufficient. This example, on the contrary, requires showing term-by-term mappings, so edges are drawn between terms rather than between records. Yet the underlying model is the same; only the surface graphical presentation has been adapted.
[Figure 27 shows a plexus paradigm drawn with term-to-term edges. The terms appearing in it include help, reward, give, serve, provide, take, have, buy, hire, recruit, bring, welcome, Clara, John, her, oneself, guests, friends, foreigners, two masters, two men, people, employees, money, dinner, breakfast, food, game, housing, services, sorrow; among its records are give John money and serve guests dinner.]
Figure 27 Ditransitive construction in English
In this paradigm, the two critical records are: give John money and serve guests dinner.
The edges show how, in this region:
a) John and guests strongly categorize with foreigners, two masters, her, friends,
Clara, oneself, etc. whereas
b) money and dinner strongly categorize with housing, game, food, etc.
This helps avoid producing utterances like
* offer money John.
However, the region buy food :: hire services :: hire employees, which is remote from the critical records, shows how these groups may finally connect (follow the thick edges), but the connection is remote from the ditransitive region.
In short, this figure demonstrates an effect of global category ("noun phrases" if you will) flexibly coexisting with an effect of sub-categorization ("possible beneficiaries" and "possible objects"). The reader will remember that such (sub-)categories do not have to be reified in the model, and they are not.
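The effect can be mimicked with a toy graph (the edges are invented to echo the figure; this is not the model's dynamics): terms count as slot-mates when a short chain of paradigm edges links them, and offer money John finds no such support.

```python
# Toy sketch of the sub-categorization effect read off term-to-term edges:
# beneficiaries and objects cluster separately, with no category ever reified.
from collections import deque

edges = [
    ("John", "guests"), ("guests", "foreigners"), ("guests", "two masters"),
    ("John", "Clara"), ("John", "her"), ("guests", "friends"),
    ("money", "dinner"), ("dinner", "food"), ("dinner", "housing"), ("money", "game"),
]

def slot_mates(a, b, max_hops=3):
    """True if a and b are linked by at most max_hops paradigm edges."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return True
        if d == max_hops:
            continue
        for x, y in edges:
            if node in (x, y):
                nxt = y if node == x else x
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
    return False

# "give John money" licenses "give foreigners money" but not "offer money John":
print(slot_mates("foreigners", "John"))   # True  -> plausible beneficiary
print(slot_mates("money", "John"))        # False -> money never reaches the beneficiary slot
```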
In this example, a single paradigm contributes to the sub-categorization effect but it is
not necessarily so. In the example John is easy to please / eager to please (p. 112), the
overall sub-categorial separation effect is rendered differently by acknowledging in the
plexus that "the constructions are not the same" and by the integrative play of several
paradigms. What the two cases have in common is the paradigmatic distance set
between records which differ constructionally even when they look alike superficially.
6.1.6. About the lexicon
What does the lexicon become in this model?
It sometimes happens that a term of the model is a "word".
It also happens that a conventional "word" never occurs directly as a term in a plexus: some finite verbal form, for example, or some infinitive or derived word, may not be found explicitly. In cases in which a form meeting its specification is called into a computation, the corresponding form is assembled on the fly by analogical abduction.
The latter is authorized by C-type records containing the bases and affixes most similar
to the constitutive segments of this "word".
But this may also be the case for a non-inflected word. It may happen that a "word" is present only as a part of an assembly. It is contained in one or several terms in the plexus but is not otherwise present with its exact perimeter. If the hypothesis of self-analysis is retained (cf. p. 258), such a containing term may be analysed on the fly and,
in a transient manner, a form with the exact length of the "word" may be distinguished
and serve, for example, to license a homograph form which appears in the received
utterance. Whether this transiently distinguished form deserves to survive the
occurrential act, that is, deciding whether the act is an occasion for the model to learn
something, is discussed in the section just referenced.
The lexical entry is thus made precarious with respect to what its length would be: it becomes fortuitous that a term is a word (though it may be frequent). This dimension of contingency, established in Chap. 2 as a desirable property of the model, is thus realized in it. A more complete discussion is provided on p. 194.
Even when it happens that a term is a word, the downgrading of the lexicon is increased by the fact that terms are vacuous. This is an important difference from preceding theories. A term – this point has been made already – has no other import than that of providing access to the exemplarist contexts where it occurs, and of being recognized as "the same term" in its recurrences; see section 7.2.2. Essentiality (or not) of a term (p. 193).
At this point, little remains of the conventional vision of the lexicon. It is not entirely nullified, however. An assessment of the question is provided in an appendix (section 12.2.2. Is a 'table of terms' needed, up to where downgrade the lexicon? p. 296).
If the notion 'word' loses its value, an effect on the separation between morphology and syntax is to be expected. But it cannot be a simple abolition of their separation; we need to go into a little detail.
6.2. Syntax-morphology separation
6.2.1. Conversion, improper derivation
The considerations developed above about homography and syncretism have an extension and an application in "improper derivation". This refers to the case in which a word of one category is used as a different category, for example, in French, an infinitive coming into the position of a noun: le parler vrai, le voir baroque162. Another example is le bleu du ciel. It is non-affixal derivation.
The question which improper derivation poses to grammarians is whether, after conversion, we are facing the same word as before conversion – of what category, then? – or two different homonymous words – and in that case how this homonymy is to be handled. The problem was perceived by Sanctius as early as the 16th century:
One of the most characteristic leading ideas in Minerva is the refusal of any re-categorisation, of any non-affixal derivation which would enable a noun to play as an adjective, an adjective to "substantivise" in order to act as a noun, and the main part of the chapter dedicated to preposition, adverb and conjunction consists of reinstating in their category of origin words which, by their form, are adjectives or pronouns, and the use of which in lieu and instead of an adverb or [text interruption]. Geneviève Clerico in Sanctius 1587/1982, p. 20.
Françoise Kerleroux writes:
We assume that this notion (improper derivation) serves to cover data which appear as
residual, after application of the only available analysis model, that is: affixal
morphology, which is supposed to represent all languages. Kerleroux 1996, p. 11, then
the entire Chap. V on this topic.
In HPSG, again, which remains categorial, the members of the HPSG community who were consulted consider rire, in envie de rire and in le rire, to be two distinct lexical entries. The reason for this is easy to understand: in HPSG, lexical entries are modelled as feature structures in which the feature CATEGORY plays a key role.
Here again, the Analogical Speaker provides a simple solution. There is only one term rire, without any categorial determination, since there is no room for categories. In emission, a form like le pleurer (strange in French) is simply not produced, because of the escalation principle, if terms like ses pleurs, les pleurs, des pleurs are present in the plexus (it will only be possible to demonstrate this when we know how to treat meaning). In reception, le pleurer, if the model is exposed to it, will be abductively licensed from le rire or similar terms, if the plexus contains such terms.
As for le bleu du ciel, le ciel est bleu, there is no need to decide whether bleu must be construed as one word or two (a noun and an adjective): the various placements of the unique term bleu in various structural contexts, that is, in various C-type records, provide for licensing other uses that might be made of it – or uses of distributionally similar terms – while constraining each appropriately. The fact that a single term bleu is the sole occupier of these different placements allows a possible category leakage between what categorial frames call 'adjectives' and what they call 'nouns'. But this is exactly what we need to licence c'est très classe (it's very classy) or un lourd (a heavy one, a stupid one). Naturally, this also exposes us to le rapide de sa réaction (the fast of his reaction), which is not well accepted in contemporary French. In the plexus of a contemporary French speaker, rapide is at some distance from bleu and réaction is remote from ciel; consequently le rapide de sa réaction is possible but a little expensive, and so normally not produced. It may be received, but at a certain cost. That phrase was much less impossible among the précieux in the 17th century; it could be used today with irony or distance; its unacceptability varies among speakers; and we do not know what it will turn out to be in a few decades or a few centuries.
162 Kerleroux 1996, p. 293.
6.2.2. Questioning the inflection-derivation frontier
Several authors question whether there is so clear a distinction between inflection (which would be syntactic) and derivation (which would not be).
The opposition between inflection and derivation appears fragile enough, and the grammarians of Sanskrit could do without it. As Pinault notes: For Panini, there are
only affixes, which differentiate solely by their rank in the chain of derivation. Auroux,
1994, p. 175.
The Stoics make no clear distinction between derivation and inflection. Swiggers 1997,
p. 27.
The existence of the difference between inflection and derivation is not less obvious
than the difference between semanteme and morpheme. But with the current status of
knowledge the definition of this difference is not less vague than the other one. We
think the difference is to be sought in the opposition between syntagmatic relations and
associative relations. Hjelmslev 1933/1985, p. 56.
The difference between inflexion and derivation has a limit in Suffixaufnahme163.
Planck 1995, p. 3.
If it is true that inflections generally incur a smaller difference of meaning than
derivations, and are more general, there is a difference of degree rather than an absolute
one between these categories. So it is not possible, according to Bybee, to situate
inflection in syntax and derivation in the lexicon, as Generative Grammar often does.
The best definition of inflection is its being obligatory so that its absence creates a lack
which takes a signification. The absence of the mark of plural in French for example,
indicates the presence of the singular. Vandeloise 1990, p. 230.
6.2.3. Reasons for merging or distinguishing morphology and syntax
Creissels, in the light of African languages, questions the notion of word and consequently the morphology-syntax demarcation:
[In a language like Latin] in which the morphemes of an utterance are easily grouped into blocks with high internal cohesion and high mobility with respect to each other, there is no reason to reject the advantages of a description in words. Then we have a division into morphology and syntax. But in a language in which the cohesion of morphemes does not show such differences, it is not wise to conserve this schema. Creissels 1991, p. 31.
163 Definition of Suffixaufnahme (paraphrase of Planck 1995, p. 7): let Nt be a nominal head with a nominal modifier N2; Suffixaufnahme consists in a case mark of Nt being duplicated onto N2 without this being motivated by the function of N2. On N2, the mark is added to possible other marks, including case marks, which N2 may bear for functional reasons. Suffixaufnahme, first described by Bopp, is attested in Georgian, in Caucasian languages and in ancient languages of the Middle East; it is different from group inflection, even if both phenomena are akin to each other.
What is traditionally separated as morphology and syntax can be envisaged as an axis
along which a variety of phenomena, functions and needs are disposed. It is not given in
advance that positing a separation is the most clarifying way to structure this axis.
The position adopted in this model is not to build particular devices that would differentiate morphology and syntax. This option is motivated by three reasons: i) the
notion 'word' is not postulated because of the problems it poses, ii) the clause
"morphology = short assemblies, syntax = longer ones" is not criterial, iii) abductive
movements by constructability transfer and by expansive homology apply in both
domains.
At least in the tests made thus far, all needs of productive assemblies are covered by the interplay of the following items:
- a vision of the lexicon which is "leaned" and made contingent: demotion of the notion of lexical entry, preference for the notion of term, vacuity of terms, minimality suspension, etc.,
- plexus content, notably C-type records and paradigmatic links between them: they support production of morphological assemblies and syntactic assemblies equally well,
- the abductive movement by constructability transfer,
- the abductive movement by expansive homology,
- the general dynamics of agent-based solving (ABS).
The refusal to distinguish between syntax and morphology by subordinating the latter to the former is a principled vision shared by several authors. Fradin (1999) points out that this is the case for Saussure, Harris, Haiman, Gruaz, Sadock, Halle and Marantz.
In the same article, the intent of which is, on the contrary, to defend a distinction between morphology and syntax, Fradin surveys criteria and reasons tending to show that the distinction is necessary. Several of these criteria and reasons have no bearing on the Analogical Speaker, because the inscriptions in a plexus exert them de facto. So it is, for example, with the cohesion of morphemes within words. Cohesion happens simply because the plexus provides no exemplarist occasion for this or that expansion to occur, so the model cannot produce that sort of expansion. The same can be said of another criterion: whether an assembly has a category different from its head's category (and is then exocentric, according to Bloomfield) or has the same category (and is then endocentric). For Fradin (p. 27), a morphological assembly is always exocentric, whereas in syntax it is more easily endocentric. This is not always true (Fr. passé, passée, passées), or it depends on the vision we take of categories. In any case, here again, in the Analogical Speaker the exemplarist inscriptions place conditions on the possible outcomes of an assembly, that is, they constrain what it will in turn be able to assemble with. These "conditions" are not reified; they are a global effect of the inscriptions, and the dynamics (ABS and the agents it hosts) do not have to know anything about them in principle or in general. Therefore, endocentricity and exocentricity do not matter, and these notions cannot be used as a basis to discriminate what is morphological from what is syntactical.
All this does not deny that there might be, in these respects, some specificities or tendencies of morphology, but it asserts that the abductive mechanisms do the job without having to wonder whether they are doing morphology or syntax.
The inscriptions of the plexus do not mark in any special way a difference between morphology and syntax; no computation is affected by this difference in particular; the model postulates nothing concerning a possible demarcation.
It is conjectured that this indifferent dynamics would apply with the same felicity to morphological, morphosyntactic or syntactic phenomena occurring in other languages. Group inflection, for example, which is observed in Basque164, seems not to pose a particular problem, but it would be more important to take a look at languages like the Eskimo-Aleut languages165 or some African languages, in which the morphology-syntax frontier is even less clear than in, e.g., European ones. This work remains to be done. If it confirmed the findings so far, the conclusion would be that the morphology-syntax distinction is, for the most part, a matter of compartmental convenience and has, at best, a pre-theoretical status.
However, this order of reasons does not suffice to account for morphonology (Fradin 1999, p. 26). On this point, the model has nothing to say in its current development. It may be that solutions can be found in assembly schemes more elaborate than simple concatenation, or in ones inspired by the multiple structures of Van Valin, Sadock and Jackendoff; the decompartmentalisation option would then be extended and validated. But it may also be that such phenomena force us to acknowledge something of the word. Without this having to reinstate the word in all its prerogatives, there would be one or two phenomena to treat in a particular way.
6.3. Zeroes
Strictly speaking, the question of zeroes is not a linguistic one: by definition, zero elements are not observable phenomena; they are dispositions that some theories166 adopt in the account they give of certain phenomena. The question is linked with that of ellipsis without coinciding with it.
6.3.1. Zero elements in grammar and in linguistics
The temptation of zero elements in the history of linguistic thought dates back to
Sanctius at least:
164 Inflexions of case, of determination, and of number are not suffixed to a noun but to an entire noun phrase.
165 Cf. Tersis 2000.
166 Structural linguistics and generativism, principally.
Sanctius refuses to make passive impersonal [in Latin] a distinct structure from that of
ordinary passive. Both may be glossed identically and integrated into equivalent
constructions. Therefore it is a useless category and the mind of children should not be
burdened with it. This position allows him then to "prove" with a circular argument,
that any verb is necessarily transitive, including those regarded as neutral (curere,
sedere, stare) by the tradition. Since these verbs occur in the passive (curitur, sedetur,
statur), and since this passive is not different from the passive of transitive verbs
amatur, this leads to positing behind them the suppletion of a transition accusative,
stationem after stare, setionem after sedere, cursum after curere. This example
illustrates the fact that, for an author who goes beyond formal data, phenomena in the
form play a considerable role in the organization of language. Geneviève Clerico in
Sanctius 1587/1982, p. 22.
Much later, their usage is systematized in structural linguistics:
Giving the status of linguistic elements to zero segments can be carried out in a great
many situations. It can be used in such a way as to blur the differences between two
sets of morpheme-class relations. Note must therefore be taken of the descriptive effect
of each zero segment that is recognized in the course of an analysis. In keeping with the
present methods, it would be required that the setting up of zero segments should not
destroy the one-one correspondence between morphological description and speech.
Hence a zero segment in a given environment can only be a member of one class.
Harris. 1951, p. 335.
In Martinet, the zero element gives rise to a curiosity. While accepting zero elements
generally ("the signifier of subjunctive is occasionally the zero signifier"167, "the zero
signifier of injunction"168), he states that before giving in to that temptation it must be
assured that its signified is consistent:
However, there is normally [in the case of European languages], among the elements of
the grammatical class, a "category" which is unmarked, that is, neither formally
represented nor semantically characterized: this is the case, in French, of indicative, of
present and of singular. One must not posit a "monème" for a zero signifier that would
correspond to an inconsistent signified. Martinet 1985, p. 146.
This is somewhat disappointing. This author, who generally recognizes opposition (in
take the book, take is selected against give, throw, put, etc., ibid., p. 32), sees the absence
of a mark, since it does not positively characterize anything, as corresponding to an "inconsistent
signified". Does opposition apply separately in each plane? I agree with the conclusion:
zero elements are not desirable, no more in this case than in any other (cf. below), but
there is a serious objection to a motivation of this kind. It is not possible to make an
ad hoc correction at an isolated point without reconsidering the analysis frame (mode,
tense, person, number) and its general relation with the formal observables. Either you
recognize the ideal frame of the verb paradigm in an Indo-European language (mode,
tense, person, number) and you request forms to be characterized according to it; then
you cannot say that indicative present singular is an inconsistent signified, and a zero
element is necessary. Or you recognize the analysis frame without requesting the forms
to be always differentiated in it; then a zero element is not necessary, but it remains to
167 Martinet 1970, p. 104.
168 Ibid.
be shown how speakers assign forms (now ambiguous) to places in the frame. Or – this
is my proposition – the frame is not postulated (it would be categorial) and one shows,
on exemplars, and analogically, which ratios and which oppositions a speaker can make,
and in what assemblies of bases and inflexions (and of contexts) among those that a speaker can
license; these are not always the same for all verbs (that is, for all bases). This will have
to be doable without zero elements.
Even before the fundamental objections which will be made, the explanatory power of
the zero postulation is not well assured in many cases. Marandin169 identifies the failure
of analysing the noun phrase Det + Adj with an empty category, e.g. les rouges sont
fripés (En. the red (ones) are crumpled).
Elsewhere170, zeroes are refuted in the name of checkability and non-indexability:
From an epistemological viewpoint, positing zero marks or zero constituents is
questionable, since it amounts to positing a segment, a constituent, or a segmental mark, the
signifier of which is precisely represented by an absence of segment, and therefore to positing
fictitious segments under the pressure of the theory – this carries an important
risk of non-checkability. Another difficulty quickly appears: the fundamental
impossibility of categorizing and indexing such elements "which do not exist", and even
more of coindexing them.
The functions that would be theirs must be taken over by other elements171:
There is no zero mark for the person, but a pluridimensional structure of linguistic
paradigms. This applies in the plane of paradigms, with paradigms of paradigms, and it
applies as well from the viewpoint of the syntagmatic axis which presents a
"superposition of marks" of various types, concomitant marks which enter into
combination in any utterance and give distinct instructions.
In the case of relativization in languages without a relative pronoun, Japanese for
example, the push to postulate zero marks is seen as a consequence of the fact that the
relative clause is perceived as a transformation of an autonomous clause172. In
a theory without transformations this reason for zero marks falls.
Lemaréchal pleads, rightly, for not positing zero marks. His intuition of the
"superposition of marks" as a suppletion to what other frames analyse as a shortage of
segmental marks has indeed the potential of a productive dynamics that may succeed
without zero elements; this dynamics must be made explicit and this will be done below.
Sadock's Autolexical Syntax increases the model's complexity for several reasons
and succeeds in constructing the explanation without zero elements.
One of the features of the autolexical model that give rise to discrepancies between
representations in different dimensions is the possibility that a lexeme that is
represented in one component is simply not represented at all in another, giving the
effect of deletion or insertion without the need for specific rules that actually delete or
insert. The empty subject of "extraposition sentences" [It seems that Fido barks] for
169 Marandin 1997, p. 144.
170 Lemaréchal 1997, p. 2.
171 Ibid., p. 44.
172 Ibid., p. 83.
example, can be treated simply as an element with a representation in syntax but none
whatsoever in semantics173.
The Autolexical Syntax contains no notion of movement:
The components are modular in that the units with which they deal are distinct. The
units of the morphology are stems, affixes, inflections, and so on, namely units that are
appropriate to word construction; the units of the syntax are words, phrases, clauses,
and so on, that is, units appropriate to sentence structure; and the units of the semantics
are predicates, arguments, variables, and the like, that is, meaningful units. The
components of an autonomous, modular grammar of this kind are thus "informationally
encapsulated" in the terminology of Fodor 1983 (The Modularity of Mind), whereas the
modular building blocks of a GB style grammar, such as the rule Move-Alpha, have
access to all representational dimensions, and are therefore not informationally
encapsulated174.
This suggests that the Autolexical Syntax does not have transformations. Although the
non-postulation of transformations and movements is never explicit in the text, these
notions occur only in examples that the theory proposes to treat without them. This
conjecture is reinforced by this:
… a context-free phrase structure grammar is a sufficient formalism for each of the
modules, including the syntactic component175.
Now, phrase structure grammars are a supertype of X bar theory (which specializes
them by adding the notion 'head') and the same X bar theory is positioned as the
component which addresses syntax before transformations176. Finally, the Autolexical
Syntax does not recognize transformations, which is coherent with the fact that it does
not recognize zero elements.
6.3.2. What should be done with zero elements
In short, zero elements are introduced as a consequence either (i) of the generativist's
transformations or (ii) of mono-, bi-, or tri-dimensional categorial paradigms for
morphology or syntax. One understands that, since transformations are not posited
and since pluridimensional paradigms are not approached with categories, the need for
zero elements falls in this model and they are not introduced.
The cases which motivated their introduction in other frames are processed simply and
naturally by the interplay of A-type and C-type records and of the computations that
apply to them, using the escalation principle (cf. section 6.4. Anomaly and regularity, p.
181); for transformations, cf. section 4.2. About non-transformation, p. 107.
The demonstration is best made on an example.
173 Sadock 2000.
174 Ibid., p. 11.
175 Ibid., p. 21.
176 X bar addresses the 'bare component' as opposed to the 'transformational component', Chametzky 2000, p. 6.
6.3.3. "The indefinite plural article has no realization in English".
Seemingly, speakers of English agree upon the following analogy:
(A) the cat : a cat :: the cats : cats
In a framework which posits the notion 'word', if a mass of other facts invites us to posit
words 'the', 'a', 'cat', 'cats', which are categorized into articles and nouns, and if the
backing frame comprises the dimension definite-indefinite and the dimension singular-plural, we have to face a slight obstacle: the position article + indefinite + plural is not
filled with anything. The structuralist solution consists of postulating an indefinite plural
article with no realization, a zero article, and the temptation is to write lines such as177:
(1) the cat : a cat :: the cats : ∅ cats
(2) cat : cat :: cats : cats
(3) the : a :: the : ∅
The temptation is even stronger if the theoretical frame posits propositions of the type:
NP → Det + N
Line (2) is not false but trivial or tautological and has to be left out of consideration
as void.
Line (3) has two inconveniences. Firstly, the double occurrence of "the", which connects
with the question of syncretism (cf. p. 160), and secondly, the presence of ∅, the
problems of which have just been exposed. Finally, inscriptions (2) and (3) reflect
nothing of a linguistic knowledge that would be usable in the linguistic computation.
Now line (A) is perfect even if its terms are not minimal: it is a very good analogy,
contributing much to the computation, and free of any negative side-effect. If we
recognize the minimality suspension principle, it becomes possible to keep it.
As a complement we will need an operation of subtraction which may have two
operative supports: a) the subtractive utilization of C-type records, in cases where the
inscriptions of the plexus are abundant and sufficient, and b) formal analogy when
inscriptions of the plexus do not suffice, for example in the case of unknown words. So
it is for the English term tiger, less familiar than En. cat, but which some other plexus
inscriptions178 make it possible to abductively "co-categorize" with cat; inscription (A)
will then abductively license something like (B):
(B) the tiger : a tiger :: the tigers : tigers
which will contribute to aligning the behaviour of tiger with that of cat in matters of number
and definiteness.
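To fix ideas, here is a minimal sketch in Python of this licensing step, under toy assumptions: the record contents, the function names (co_categorized, license_from) and the crude substring matching are purely illustrative and do not correspond to the model's actual agents or inscription format.

    # A minimal sketch, not the model's actual agents: license analogy (B)
    # from inscription (A) once 'tiger' has been abductively "co-categorized"
    # with 'cat' on the strength of other plexus records. All data are toy.

    # An A-type record: four terms, read "t1 : t2 :: t3 : t4".
    A = ("the cat", "a cat", "the cats", "cats")

    # Records assumed to support the co-categorization (illustrative).
    support = [("big tiger", "big cat"), ("fierce tiger", "fierce cat")]

    def co_categorized(new, known, records):
        # Crude stand-in for abduction: the two terms occur in at least one
        # pair of records that differ only by substituting one for the other.
        return any(p.replace(new, known) == q for p, q in records)

    def license_from(record, known, new):
        # Substitute the familiar term by the unfamiliar one in every position
        # of the inscription (naive plain substring replacement).
        return tuple(term.replace(known, new) for term in record)

    if co_categorized("tiger", "cat", support):
        B = license_from(A, "cat", "tiger")
        print(B)   # ('the tiger', 'a tiger', 'the tigers', 'tigers')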
Although it may be, line (B) does not have to be explicitly inscribed in the plexus; the
computation, because it is abductive and integrative, will develop as if the inscription
were explicit. When it is not, the computation will simply be slower, as the abductive
machinery which will then have to be deployed to reconstruct its effect requires a few
supplementary agents.
177 This discussion does not distinguish the plural mark -s, which should obviously figure in a more complete coverage but is not necessary in this example.
178 For example big tiger :: big cat or fierce tiger :: fierce cat, or meaning-related analogies when this complement is made.
6.4. Anomaly and regularity
Chap. 2 recalled how old the question is: analogists, anomalists in Antiquity, and
arbitration by Varro. Chap. 2 also recalled the position of Arnauld and Lancelot, which
boils down to accommodating attested anomaly while "disturbing the least possible the
analogy of language", without however authorising non-attested usage.
So far, these authors give the question of anomaly versus analogy – or anomaly versus regularity – a
treatment which is descriptive, antagonistic or ecumenical, normative in the case of
Port-Royal; but the explanatory treatment, when present at all, is nascent only and in no
case operative.
Rather than an antagonism between analogy (rule) and anomaly, it is advantageous to
see one analogy versus another – it may be the case that the extension of the latter
happens to be limited to one exemplar only. Thus, if the repairing analogy of the
Neogrammarians installs a new form beside an old one that underwent phonetic change,
one often sees also new forms with analogical motivation doubling old ones which did
not undergo anything: they simply follow another analogy. The older form is not
anomalous per se; it is so only relative to the newer analogy or to a statistic. Such cases are
frequent in Vaugelas, and this vision is necessary to account for the mobility of the
demarcations in the "situations de partage" of verbal paradigms, along the diachrony of
the French verb179.
The question of anomaly and analogy poses a problem to generativism. Following its
requirement of minimality, this theory, at least in its early stages, places on rules and
categories the duty of accounting for the greatest possible part of the data; this leads it to
relegate all anomaly to the lexicon and results, at the earliest stages, in some discomfort
in the vision of the lexicon and morphology, and more recently, in a more lexicalized
theory.
Langacker denounced this as the 'rule-list fallacy':
[in the generativist conceptions] If a grammar is a set of rules for constructing
expressions, and contains the fewest statements possible, then any expression
constructed by these rules must itself be omitted from the grammar. Separately listing
an expression computable by general rules would be redundant (and redundancy is evil)
(Langacker 1988b, p. 128). I call "rule/list fallacy" the presumption of the generative
grammarians that regular expressions should not be listed in the grammar. It is
fallacious because it tacitly presupposes only two options: rules vs. lists. But nothing in
principle prevents positing both (ibid. p. 131).
Rules and lists are not mutually exclusive (rule/list fallacy): instantiating expressions
have to be included in the grammar along with rules because rehearsed units are known
despite their satisfying general patterns. Langacker (p. 2). [the approach I advocate is]
non-reductive. Recognition of both rules (or patterns) and individual knowledge of
specific features. Advantage: accommodates instances where a fixed expression is more
179 Demarolle, already quoted.
detailed and elaborate than the structure that a rule or schema would allow to compute
(an eraser is not just something that erases) (p. 132).
The question of anomaly vs. analogy was touched on a first time on the occasion of a
response to Jackendoff, who deemed the "usage-based" principle unable to treat it, and
a direction for a solution was then sketched. It encompasses A-type records exploited by
the 'analogical task' (agent ANZ) on the one hand and, on the other, C-type records
exploited by morphological and syntactic constructive processes (agents B2 and B3),
both being supervised so as to bring the escalation principle to bear (p. 92).
Thus, for example, the analogical task X : cheval :: hommes : homme, which amounts –
for the analysts that we are, though the model does not know it – to finding a plural for cheval,
initiates an abduction by systemic productivity (agent ANZ) and, if the plexus contains
the anomalous term chevaux180, finds it in this way. Only when such a result is not
found at a reasonable cost is a suppletion process launched, which builds chevals
licensed from the inscription homme + -s → hommes (or similar accessible ones) by
abducting its effect. The suppletion process constitutes an escalation: it is more
expensive, and consequently penalized with respect to the direct process; because of
this, before becoming productive itself, or even before just starting, it leaves an
opportunity for chevaux to be produced.
The direct process is agent ANZ, the suppletion process is agent S2A and the process
controlling both is agent AN2, cf. corresponding appendixes for their specifications.
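The division of labour just described can be rendered as a small sketch, under toy assumptions; the agent names follow the text, but the data structures, cost figures and function bodies are illustrative only, not the specifications of the appendixes.

    # A minimal sketch of the escalation principle; the costs and record
    # representations are illustrative assumptions, only the ordering
    # (direct route first, penalized suppletion second) follows the text.

    # A-type records: (t1, t2, t3, t4) read t1 : t2 :: t3 : t4.
    a_records = [
        ("chevaux", "cheval", "hommes", "homme"),
        ("canaux", "canal", "totaux", "total"),
    ]

    # C-type (constructor) records: constituents -> assembly.
    c_records = [
        (("homme", "-s"), "hommes"),
        (("table", "-s"), "tables"),
    ]

    def anz_direct(base, model_pair, records):
        # Direct analogical task: look for an inscription already giving the
        # fourth term of  X : base :: model_pair[0] : model_pair[1].
        for x, t2, t3, t4 in records:
            if (t2, t3, t4) == (base, model_pair[0], model_pair[1]):
                return x, 1            # cheap: found by systemic productivity
        return None, None

    def s2a_suppletion(base, records):
        # Suppletion: abduce the effect of an accessible constructor
        # (here homme + -s -> hommes) on the new base. More expensive.
        for parts, whole in records:
            if len(parts) == 2 and parts[0] + "s" == whole:
                return base + "s", 5   # penalized: this is the escalation
        return None, None

    def an2(base, model_pair):
        # Supervisor: escalate to suppletion only if the direct route fails.
        form, cost = anz_direct(base, model_pair, a_records)
        if form is None:
            form, cost = s2a_suppletion(base, c_records)
        return form, cost

    print(an2("cheval", ("hommes", "homme")))   # ('chevaux', 1)
    print(an2("bidule", ("hommes", "homme")))   # ('bidules', 5)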
Let us now revisit the modularist option concerning anomaly and regularity. It is
attacked by Langacker again:
Attempts to impose a strict boundary between structural regularity and idiosyncrasy –
attributing them to distinct modules or processing systems (Chomsky 1965, Pinker
and Prince 1991) – are, I believe, linguistically untenable and psychologically dubious.
Instead, I envisage a dynamic, interactive process whereby structures at all levels of
abstraction compete for activation and for the privilege of being invoked in producing
and understanding utterances (Elman & McClelland 1984, Langacker 1988).
Langacker 1998, p. 25.
If one really wants to, it is possible to see two modules in those two different processing
modes: indeed both are carried out by distinct effectors, cortical areas perhaps, and
distinct agents in this model. Yet it should also be noted that the effectors are minor, in
their function and in their size, with respect to the overall mechanism that controls them,
that obtains differentiated results according to the relevant terms in the tasks, and that
globally exerts the escalation principle. Both positions can be defended: that there are
modules, and that there are not; neither is very interesting because in a linguistic task, as
soon as it is not ridiculously small, both modes are present and what matters is their
combined interplay in this entanglement. If there had to be two modules, one of
regularity and one of anomaly, the interesting question would remain to know when and
180 I allow myself to write "the anomalous term chevaux" because the resources of the language are such that they incline one to make this sort of metonymy. Yet, it is hopefully understood that the term chevaux is not anomalous by itself, no more than it is anything by itself: a term has no essence and no property. Writing "anomalous" is exactly assessing that an analogical regularization is not the case, here the formation of the plural by -s. But chevaux is "regular" in the paradigm canaux, totaux, etc.
why either is triggered, how both interface and concur in enterprises beyond the scope of
each. This cooperation/concurrence and escalation game is exactly what the dynamic
side of the Analogical Speaker does.
6.5. Syntactic head
I recall here the example data of section 3.6.4. Abductive movement by expansive
homology (p. 85). They consist of two constructor paradigms:
C1   une + journée        →  une journée
C2   une + belle journée  →  une belle journée
C3   une + occasion       →  une occasion
and:
C4   belle + journée      →  belle journée
C5   belle + victoire     →  belle victoire
both sharing term belle journée in records C2 and C4 – this last point is constitutive of
expansive homology. A set of constructor records (C-type records) of that type was
named 'expansive gate'. The expansive gate in the example is a "hard" one: the term is
homologous to an expansion of itself. 'Soft' expansive gates are also possible, in which a
term is homologous to an expansion of a term which is distributionally similar to it.
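A minimal sketch of how a "hard" expansive gate might be detected over such C-type records is given below; the representation of records as (constituents, assembly) pairs and the detection function are illustrative assumptions, not the model's B2-B3 machinery.

    # A minimal sketch, under toy assumptions, of detecting a "hard"
    # expansive gate among C-type records; data mirror C1-C5 above.

    C1 = (("une", "journée"), "une journée")
    C2 = (("une", "belle journée"), "une belle journée")
    C3 = (("une", "occasion"), "une occasion")
    C4 = (("belle", "journée"), "belle journée")
    C5 = (("belle", "victoire"), "belle victoire")
    plexus_c = [C1, C2, C3, C4, C5]

    def hard_expansive_gates(records):
        # Yield triples (r1, r2, r_exp) in which a term (here journée) is
        # homologous, in parallel constructions r1/r2, to an expansion of
        # itself, the expansion being attested by r_exp.
        for (p1, w1) in records:
            for (p2, w2) in records:
                if p1 == p2 or p1[0] != p2[0]:
                    continue                  # keep only parallel records
                for (pe, we) in records:
                    # r2's second constituent is the assembly of r_exp,
                    # and r_exp contains r1's second constituent.
                    if p2[1] == we and p1[1] in pe:
                        yield (p1, w1), (p2, w2), (pe, we)

    for gate in hard_expansive_gates(plexus_c):
        print(gate)   # one gate: C1, C2 and C4, around the term 'journée'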
If one wishes to, one may call "head" the term journée, that which is homologous to its
expansion. However this is not required because i) the analysis of expansions takes
place without having had to state generative rules or make HPSG-like lexical entries; in
any case, as there are no categories, there is no base to say that a construction is
endocentric or exocentric, ii) the optionality of adjuncts is a question which is solved
naturally by the very operation of the B2-B3 process using expansive homology
movements, and iii) agreement and concord are handled by different ways; there is no
syntactic feature to propagate, no percolation. In short, none of the reasons which
motivate the introduction of the notion 'head' in theories that require it holds
here. The corresponding effects can be obtained without such an explicit postulation.
Moreover, since the analyses are not univocal as we have seen, the head could only be
ambiguous.
Finally, the notion 'head' is not necessary in the proposed frame. Dependency, and the
obligatoriness or optionality of a segment, are exerted in the model, but in a
sort of de facto manner: they are expressed in a pervasive and distributed mode in the
records of the plexus, and they manifest themselves as effects in the use that the
computations make of the plexus.
6.6. Sentence
For the sentence, there are almost as many definitions as authors181, and the most
ironical one ("a sentence is that after which you write a full stop") is not the silliest.
And even then, these definitions address written language only:
Although sentences have often to be treated unquestioningly as the most basic of
linguistic units, they do not always emerge from ordinary speaking with compelling
clarity. … Syntax and prosody are often at odds, and intonation units do not always
combine to form structures with the properties syntacticians have traditionally assigned
to the data that has been either invented or, at best, copied from some piece of writing.
… It is interesting to find that, whereas both intonation units and discourse topics
remain relatively stable in content across different tellings of the same experience by
the same individual, sentences do not. Chafe 1996a, p. 45.
The problems of sentence definition have been well and extensively described, and I shall not
repeat this182.
What the model requires to operate are constructive paradigms. They may concern
written language or spoken language. They may be prototypical or not, comply with an
institutional norm or not. Forms terminated by a full stop may be, among others,
assemblies in exemplarist constructions; that is, a sentence may be a term. To this there
is no other contra-indication than the loss of usefulness of long terms, cf. section 7.2.6.
Terms should be simple and commonplace, p. 200.
This being so, sentences can be constituents in analogies bearing on sequences of
replies in dialogues, so that the model is open to "trans-sentence" processing or to
sequences of verbal productions punctuated not by full stops but by prosodic marks.
Two exemplarist constructions forming a plexus paradigm suffice to license more,
provided abductive paths can be found; in the linguistic form only, for the time being.
More than prosodic delimitation, phrasal delimitation, or sentential delimitation, what
matters is the construction of meaning (the following concerns an extension of the
model, yet to be done, in which meaning would be processed). The threshold, relative to the
good formation or completeness that a well-formed sentence would have, may be as
low as a synthesis point. A 'synthesis point'183 is a point at which meaning may be
fabricated, as little as it may be. This is not the case in all assemblies: some assemblies
are steps which are necessary while awaiting a synthesis point, but which do not allow
the construction of a stable meaning. In reception, as soon as a synthesis point is
reached, the corresponding meaning is fabricated and becomes an asset while waiting
for the rest of the utterance, the analysis of which it will contribute to orientate.
On the whole, about a form being sentential or not, as well as its being well formed or
not, the idea is to abstain from over-specifying: it is the plexus that commands what will be
181 One may consult Catherine Garnier, La phrase japonaise (Garnier 1985, p. 14), for definitions of "sentence" by Arnauld and Lancelot, Saussure, Meillet, Tesnière, Bloomfield, Guillaume, Jakobson, Harris, Benveniste, Martinet, and Chomsky.
182 For some questions concerning the definition of 'sentence' and the difficulty it poses to generativism, cf. Hagège 1976, p. 200.
183 I owe the word 'point de synthèse' to Irène Tamba.
possible or not, what will be easy or difficult. In a plexus of written, academic language,
the notion 'sentence' will be massively present, pervasive in exemplars, and the
productions based on that plexus will be univocally recognized as sentences. By
contrast, such a plexus of spoken language may make little or no room for sentences, or
attest "sentences" which cannot be canonized against any canon. If it contains structural
analogies (C-type records with paradigmatic links between them) it will constitute the
foothold of an abductive productivity in the same way as the former one; it will
determine the "style" of these productions.
6.7. Conclusion: dynamics are the cause, and the grammar an effect
Chapters 4 and 5 showed, in a positive manner, how a number of effects so far (badly)
accounted for by stipulative discourse (grammars, static theories of a "language") were
better seen as produced by a dynamic model.
We just saw in this chapter, now in a negative manner, how a number of grammatical
notions, each of them problematic, lose their necessity. This was done by showing in
each case how the Analogical Speaker solves differently the questions that these notions
addressed.
These are ancient notions like word, homonym, lexical entry, lexical meaning,
morphology-syntax boundary and sentence.
These are also more recent notions of 20th century linguistics: syntagm, zero element,
syntactic head, and even morpheme to some extent.
In their stead, the dynamics and principles of the model: proximality, suspension of
minimality, vacuity of terms, inscriptions of systemic analogies and structural analogies,
abductive movements, and the general abductive dynamics, solve numerous description
questions and theoretical questions. They do so with economy, flexibility, some
plausibility, and with means which are simple and tend to be non-specific.
Thus, it has been amply shown in what way many grammatical notions become
consequences of the dynamics. The relation between grammar and the dynamics was
upside-down: the former was expected to explain the latter. Now it turns out that things
go the other way round.
Analogy, now repositioned as both a static system of ratios between terms and a productive
dynamic process, restores the reasoning in the right direction. Repositioning things in this
way allows us to hold the phenomenon for a phenomenon and the process for a process, to
make the process the cause, and the phenomenon an effect.
This enterprise of resetting things in the right order, because it reinstates analogy,
restores continuity with over two thousand years of linguistic thought, and with more
recent themes in the cognitive sciences. It makes the theory compatible with category
leakage, with linguistic change (analogy as the mechanism of change, and the possibility
of reanalysis, which always remains open), and with language acquisition. Language acquisition
and reanalysis will be dealt with in Chap. 8, along with other directions in which the model
may be extended.
In the meantime, a few questions touching its foundations need to be dug into, which will
provide opportunities to contrast the model with other approaches.
Chapter 7.
Foundations and contrasts
In this chapter, several questions related to the foundations of the model are addressed.
The vision of analogy in it is contrasted with that in other theories; the notion of term is
discussed in all respects; more details are provided about copositioning and
integrativity; three oppositions are discussed: exemplars vs. occurrences, proximality vs.
totality, and extension vs. intension; the question of variable binding is shown to be less
of a question after the refusal to reify categories and rules; the proposed model is
contrasted with recent propositions tending to introduce probabilities in linguistic
theories; finally the model is contrasted with connectionism.
7.1. Analogy in this model and in other propositions
In the Analogical Speaker, analogy is the base of the inscriptions of linguistic
knowledge and it is also the base of the linguistic dynamics; the model is isonomic (cf.
p. 89). Other authors, on the contrary, aim at making analogies (and perhaps also
metaphors and metonymies) without aspiring to found the inscriptions and the
operation themselves on analogy; they are partonomic.
7.1.1. Psychologists, cogniticians, artificial intelligence
So it is with SME (Falkenheimer 1989), ACME (Holyoak, Novick & Melz 1994), LISA
(Hummel 1997), Tabletop (Hofstadter 1995), and Sapper (Veale 1988).
For the needs of the discussion, I propose to call "standard problem" vis-à-vis analogy
the question as posed by psychologists (after Gentner 1983), which is also the question
as posed in artificial intelligence. It is schematically recalled for example in Lepage
(1996, p. 728) which I summarize. The standard problem goes as follows:
- two domains are envisaged, for example the atom and the solar system, the latter (the vehicle) being expected to help understanding the former (the tenor).
- the approach consists in achieving a structure mapping of the two domains (e.g. atom nucleus : sun).
- the structure mapping will result in property transfers (gravity : electromagnetic field). Structure mapping and property transfer are two different operations.
- the base of the structure mapping is a modelling of each domain (the solar system is made up of the sun and planets, a planet has a mass, there are eleven planets each with a different mass, planets have orbits around the sun, etc.).
- the value of an analogy is a function of the strength of the transferred properties (number, truth, etc.).
- the "standard problem" is defined as follows: the structures of the two domains being given, find a structure mapping between them.
Extension of the "standard problem": one target domain (the atom) which is poorly
understood being given, and now not just another domain but a vast knowledge base
(astronomy, human-scale mechanics, naive sociology, etc.) being available, select the best
part of the knowledge base which may be taken as a source domain to make a structure
mapping with the target domain.
A first way to contrast the standard problem with the model proposed in this work is
to see that the standard problem supposes the analogous domains to be partonomically
modelled (cf. p. 89): briefly, they contain entities bearing properties, and with relations
among them. It is because each domain is modelled that a mapping may be searched and
possibly found. The approach is partonomic.
In the Analogical Speaker on the contrary, for systemic productivity (that of Chap. 5, p.
129), a dynamics develops between terms without requiring them to have properties. It
is an isonomic approach. This dimension is entirely new and is not to be found in the
standard problem.
Secondly, in the syntactic computation which accounts for structural productivity (that
of Chap. 4, p. 97), it is possible to see the utterance to be analysed (by the model) as the
target domain to be understood (the atom in the standard problem) and the plexus of the
linguistic knowledge (of the model) as the knowledge base (of the standard problem); in
this, the problem posed to the model would compare to the standard problem.
The mapping would then be as summarized in the table below.

    in the Analogical Speaker          in the "standard problem"
    utterance (to be analysed)         target domain (atom to be understood)
    plexus                             knowledge base
    licensing records                  source domain (solar system)

Table 14  Mapping with the analogical "standard problem"
There is a first difference though: in the Analogical Speaker the plexus is described on a
strictly isonomic base, whereas most works that address the standard problem
give, of the domains to be mapped, descriptions which are 'ontologies' (this is the
word often used): sorts of semantic nets based on properties, on categorial types and
relations. Such models are partonomic.
A second difference between the Analogical Speaker and current analogical mapping
models lies in the general dynamics for producing results:
A classical solution [to produce the good answer] is considering all the possible
representations of a situation, as in the case of modelling analogical reasoning in which
it is frequent to build all possible pairings between the elements of two situations, and
then to select the best adapted one according to defined constraints. But if we want a
psychological account of this ability, the problem of encoding and the problem of
representation lash back: we do not elaborate all representations and it is difficult to
set decision rules before the construction of representations184.
The Analogical Speaker does not build all possible pairings; on the contrary, it draws on
proximality to reach one or a few exemplars which are good enough and, in this small
number, it selects those with which a settlement (i.e. a solving) takes place. To this end,
the dynamics envisages certain possibilities and never has to envisage a totality or a
closure185. In the Analogical Speaker, closure is at no moment a question: the
heuristic process gradually broadens its search scope by soliciting gradually less proximal
inscriptions; it is stimulated to do so only by the lack of congruence between the
arguments of an act and the inscriptions in the plexus. What looms here is the
proximality-totality antagonism, which will be developed infra (p. 212).
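The following sketch renders this proximality-driven widening of scope under toy assumptions (terms linked to records in a small graph, a breadth-first frontier); it is an illustration of the principle, not the model's actual heuristic process, and all names and data are hypothetical.

    # A minimal sketch: widen the search ring by ring, starting from the
    # most proximal inscriptions, and settle on the first congruent record
    # instead of enumerating a totality of pairings.

    from collections import deque

    def proximal_search(start, links, is_congruent, max_radius=5):
        # links: dict term -> iterable of (neighbouring term, record).
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            term, radius = frontier.popleft()
            if radius > max_radius:
                break                        # give up rather than close off a totality
            for neighbour, record in links.get(term, ()):
                if is_congruent(record):
                    return record, radius    # settle on a good-enough record
                if neighbour not in seen:
                    seen.add(neighbour)
                    frontier.append((neighbour, radius + 1))
        return None, None

    # Toy usage: one link from 'cheval' to an A-type record.
    links = {
        "cheval": [("chevaux", ("chevaux", "cheval", "hommes", "homme"))],
        "chevaux": [],
    }
    print(proximal_search("cheval", links, lambda r: r[2] == "hommes"))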
7.1.2. Skousen: "statistical" analogy without rules or categories, but A2
One work, that of Skousen186, recognizes (as I do) the inadequacy of rules as operating in
language phenomena, and his argument is roughly the one which I summarize in Chap. 1 and
which is developed in an appendix. In order to "predict the linguistic behaviour",
Skousen uses an analogical means.
In order to eliminate these difficulties, this book introduces a new way of accounting
for language behaviour, one that can be called analogical. But unlike the imprecise and
impressionistic appeals to "analogy" that have characterized language studies in the
past, the analogical approach that this book proposes is based on an explicit definition
of analogy. The main problem with traditional analogy is that there is no limit to its use:
almost any form can be used to explain the behaviour of another form, providing there
is some similarity, however meagre, between the two forms. Nor does this book use
analogy to handle only the cases that the rules cannot account for. Instead, everything is
184 Vivicorsi 2002, p. 83.
185 Incidentally, optimalist models (the matter will be addressed again in the conclusions) meet this same question and do not appear to solve it better than current analogical mapping models. In the optimalist models, there is a worrisome step through a totality of potential solutions (among which constraints allow one to be selected as the best) and the status of this totality is not sufficiently questioned, in my opinion.
186 Skousen 1989 (Royal), Analogical modelling of language, Kluwer. This book has not been deeply analyzed; the statements made here are based on the analysis of its introduction only.
considered analogical, even the cases of complete regularity. Skousen 1989,
introduction.
Analogy is responsible for accounting even for complete regularity. This theme is fully
compatible with mine which is that effects of regularization must be handled along with
anomalous facts in a single operating mechanism, different from the rule, and leaving to
the latter no place in the modus operandi.
One can only follow Skousen with interest in his effort to move away from the "imprecise and
impressionistic appeals to analogy" which have been made. In effect, the analogy which he
refuses, that which satisfies itself with "some similarity, however meagre, between the
two forms", much resembles the associations of associationist psychology and is
probably not a sufficient lever to be applied to language. How is he going to achieve
this?
Basically, an analogical description predicts behavior by means of a collection of
examples called the analogical set. For a given context x, we construct the analogical
set for x by looking through the data for (1) classes of examples that are the most
similar to x and (2) more general classes of examples which behave like those examples
most similar to x. The probability that a particular kind of occurrence will serve as the
analogical model depends on several interrelated factors:
(1) the similarity of the occurrence to the given context x;
(2) the frequency of the occurrence; and
(3) whether or not there are intervening occurrences closer to x with the same behavior.
What appears is this: i) similarity always plays between two elements (not between
four), that is, between two examples (exemplars?) or occurrences, arising for him from a
corpus, that is, between such an exemplar and the "given context x", which is what
determines the linguistic task; ii) the frequency of exemplars in the corpus is solicited;
iii) the attention brought to "intervening occurrences" suggests the request for maximum
contrast made by Householder (Chap. 2) along the lines of structural linguistics.
In the end, what is selected to "serve as the analogical model" for x, is what i) resembles
it most, and ii) is most frequent in the corpus.
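As a deliberately simplified illustration of these two selection factors (and emphatically not Skousen's actual analogical-set construction), one may picture the choice of an "analogical model" as follows; the corpus, the distributional similarity measure and the tie-breaking by frequency are all toy assumptions.

    # A simplified illustration of selection by similarity and frequency;
    # this is not Skousen's algorithm, only a caricature of the two factors.

    def shared_context_similarity(x, y, corpus):
        # Distributional flavour of similarity: count left/right contexts
        # that x and y share in the corpus (a list of token sequences).
        def contexts(w):
            ctx = set()
            for sent in corpus:
                for i, tok in enumerate(sent):
                    if tok == w:
                        left = sent[i - 1] if i > 0 else "<s>"
                        right = sent[i + 1] if i + 1 < len(sent) else "</s>"
                        ctx.add((left, right))
            return ctx
        return len(contexts(x) & contexts(y))

    def frequency(y, corpus):
        return sum(sent.count(y) for sent in corpus)

    def analogical_model(x, candidates, corpus):
        # Pick the candidate that resembles x most, frequency breaking ties.
        return max(candidates,
                   key=lambda y: (shared_context_similarity(x, y, corpus),
                                  frequency(y, corpus)))

    corpus = [["the", "fierce", "cat", "sleeps"],
              ["a", "fierce", "tiger", "sleeps"],
              ["a", "big", "dog", "barks"],
              ["a", "big", "dog", "barks"]]
    print(analogical_model("tiger", ["cat", "dog"], corpus))   # 'cat'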
It is possible to see Skousen's clause "most frequent" as analogous to familiarity
orientation in the Analogical Speaker.
About similarity, the examined text is not precise, but it is reasonable to infer – this was
confirmed by working with Robert Freeman, whose work is akin to Skousen's – that it is
distributional similarity in the corpus; that is to say, x1 and x2 are the more
similar the more occurrences they have sharing the same left context and right context.
Finally, it appears that Skousen's analogy is analogy between two terms, of the type "X
is like Y" (that which was called "A2 analogy" in Chap. 2 when discussing the dismissal
of analogy by Chomsky). A4 analogy of the type "X is to Y as A is to B" is not
mentioned.
The cases which this book addresses convincingly are:
1. the English indefinite article a/an
2. the English initial /h/ in graphical realization (<h> regular case, <wh> majority
exception, <j> minority exception),
3. the categorization of the labial stop by its voicing onset time (/b/ [-107,2]
milliseconds, /p/ [51,94] milliseconds).
4. a diachronic phenomenon in Finnish: twelve verbs used to end with si in the past
tense and now they end with ti and it is not possible to relate this change to any
systematic explanation. For two verbs, Skousen explains the change by the
avoidance of a homophony, then:
The effect of this minor change in an already sparse field was sufficient to break
down the original gang effect of that field. Under conditions of imperfect memory,
the analogical approach then predicts the subsequent historical drift, so that over
time other verbs in this field have also changed their past tense forms from si to ti.
The analogical approach thus accounts for the original instability of certain past
tense forms in Finnish. It also predicts the overall stability of the past tense in the
modern standard language.
Phonetic change of the first two verbs drove the (analogical) creation of new forms for
the remaining ten verbs (gang effect) and the new forms superseded the older ones187.
We recognize here the "repairing" analogy of Brugmann and Saussure already presented
in Chap. 2. Repair spreads (or does not yet spread) to the rest of the paradigm, probably for reasons
of the type invoked by Demarolle (1990) (repartition situations), cf. Chap. 2 again.
Now repairing analogy is an entirely A4 mechanism. How can it be invoked in a work
which started out by recognizing A2 analogy only?
In the same spirit, it would be interesting to see how Skousen would analogically explain
questions – which he did not address – like agreement, syntagmatic expansion and, more
generally, syntax matters.
In summary, in the renewed interest in analogy, it is not clear to everyone that the
variety to take into account is A4 analogy. By contrast, this is very clear with Itkonen.
7.1.3. Itkonen: A4 analogy, but with rules and categories
An important paper by Itkonen, tending generally to rehabilitate analogy, was analysed
p. 43. This article comprises a model which suggests two remarks.
7.1.3.1. Itkonen keeps rules, categories and the slot-filler schema
In order to explain syntax by means of analogy, Itkonen 'formalizes' syntactic analogies
(p. 145).
He models linguistic knowledge as Prolog rules by typing Prolog atoms with the most
usual categories of the analysis of English (N, V, NP, VP, Adv, Adj, Prep, etc.; subject,
object, etc.; agent, patient, etc.). As Prolog does not have types, its use might favour a
non-categorial modelling, but this is not what is done: types, i.e. the categories listed
above, are explicitly built on top of Prolog.
187 Skousen writes that past forms "change", but if we recall Saussure (Chap. 2), we understand that, in the analogical repairing of paradigms, forms do not change: newer forms are created and substituted for the older ones.
The use of categories made by Itkonen in his model may, as was already mentioned, be
just a convenience that this author adopted on the occasion of a limited argument;
nowhere in the paper is a positive claim for categories and rules to be found, but
nowhere either is the question even sketched. However, it can only be noted that
Itkonen's model, as presented in the article, observes the slot-filler principle, and the
potential of analogy to do better is not used or even envisaged.
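To make the contrast concrete, here is a schematic rendering, in Python rather than Prolog, of the kind of typed, slot-filler encoding at stake; it is an illustration of the style only, not Itkonen's actual code, and the lexicon, rules and function names are invented for the purpose.

    # A schematic slot-filler encoding: lexical items carry category labels
    # and a rule states which category sequence may fill the slots.

    lexicon = {"John": "NP", "Mary": "NP", "sleeps": "V", "snores": "V"}
    rules = {"S": [("NP", "V")]}

    def licensed_as(symbol, words):
        # The categories, not the forms themselves, do the licensing work.
        cats = tuple(lexicon.get(w) for w in words)
        return any(cats == tuple(rhs) for rhs in rules[symbol])

    print(licensed_as("S", ["Mary", "snores"]))   # True, via the categories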
7.1.3.2. Itkonen treats "analogies which motivated transformations" as
analogical tasks
Itkonen assigns his model to solve tasks like:
X : Where did John say that Bill was? :: John said that we have to get off the bus here : John said that Bill was there
that is, he asks it to find:
X = Where did John say that we have to get off the bus.
This is what I called above an 'analogical task' with syntax. The question at stake here is that of
analogies which motivated transformations, cf. p. 107. He treats them by showing that it
is possible to solve them as analogical tasks, that is, by showing that it is possible to
compute a proportional fourth.
I addressed this above already: for me, explaining these systematicities by substituting
the generation-transformation system of Generative Grammar with an analogical
explanation does not necessarily require solving analogical tasks. Such tasks in
themselves are not part of normal speaker ability; they do not fall within the 'natural' use of
language and ought to be seen rather as a metalinguistic exercise.
A first tier of explanation may be obtained as I showed in the section already cited.
However, there may be more to the analogical task than a gratuitous exercise of
productive know-how. This cannot be decided as long as we do not have a model of the
utterance production process. This model would start from the 'thing to be said', from
the enunciative programme; it would devise an 'enunciation plan' taking account of the
plexus resources that present themselves as the best ones with which to make mappings;
it would then deploy an analogical computation so as to produce an utterance which
represents a good compromise to 'say what has to be said'.
The enunciation process may opt for direct use of an interrogative exemplar, which will
serve as a sentential template, then make substitutions188 in it, without ever having to
compute a 'transforming analogy'. When this is the case, analogies which motivated
transformations are explained by analogy without requiring an explanation based on
transforming analogical tasks. Here, I fight against this preconception which would
make affirmative, active, non-thematized sentences the prototype from which every
188 These substitutions, or sub-tasks of the overall enunciation task, may be analogical tasks, that is, computations of a proportional fourth, but the overall task would not be one. Generally, a subtask may be an analogical task without the overall task having to be. In summary, in the recursive embeddings, this discussion and this variety of possibilities may apply at any level, and independently at each level – leaving aside certain dependencies and accidents of compositionality.
other type should be obtained by transformations. It is important to refer this conception back
to a linguistics of competence (in Chomsky's sense), that is, a linguistics of a language,
which is not the one practised here, nor the one that Itkonen practises.
Itkonen undertakes a micro-work-programme assigned by generativism without carrying
a potential critique through to the end. He fulfils the programme with his weapon: analogy. He
succeeds, and thereby refutes generativism's pretension to impose
derivations and transformations on the ground that there would be no challenging proposal. But he
endorses regularism and categorialism 'en passant'.
This is a limit of an article which did not make this its main purpose. I wish to stress
again how important this 'rehabilitational' article is: it shows the error that constitutes
the dismissal of analogy by Chomsky; please refer back to the summary already
provided.
7.1.4. A2 analogy anyway, but differently
Several times, great care was taken to distinguish A4 analogy (four terms) from A2 (two
terms). The former is fully fledged, technical, and allows us to found a computation; the
latter is too poor, and is for this reason rejected as unfit to found a computation. However, in
language manifestations, phenomena with two terms do occur (e.g. the vase is like the
shield, Ares is like Dionysus, he is a snake, tons of worries); likewise, phenomena with
three terms occur (e.g. the vase is the shield of Dionysus). These structures may not be
those which provide the computation with its foundation; they nevertheless remain
phenomena, and productivity among them must be explained; but they are seen as
phenomena, not as a device in the theoretical or modelling apparatus.
I am not undertaking here to cover this treatment or this explanation. The conjecture is
that a computation like those presented above should yield it. It must comprise, as one of
its steps, the abduction of one or two supplementary terms so that there be four of them in the
current conditions of the computation, after which ensuing computations and abductive
chains may become more canonical, that is, more akin to what has been presented. This
is saying nothing other than what Aristotle says, and which sounds right: that underlying
a metaphor, there is always an analogy.
7.1.5. Three tiers
The position of the Analogical Speaker vis-à-vis efforts which address analogical
mapping directly may be stated by distinguishing three tiers.
The upper tier is that of symbolist grammars, of categories, of rules, and of the lexicon.
It is comparatively concise but leaves descriptive and explanatory residues. It does not
propose a model of acts and does not account for learning.
A middle tier (this work) is isonomic. It supposes some analogies readily available,
proximality, a plexus structure, abductive movements, and it proposes a dynamics which
produces an infinity of analogies; it is a powerful lever of productivity. This dynamics is
economical since it eschews analogical mapping which is deemed computationally (and
cognitively) more expensive. It has some cognitive plausibility, but an implementational
plausibility which is average only.
Finally, a lower tier encompasses analogical mapping or any other approach of
reduction. It is partonomic and descriptively voluminous. It does not suppose readily
available analogies, and it supposes reduction. It is computationally (and cognitively)
heavy and parallelism is quasi-imperative in it.
7.2. Individuality of terms
A first introduction to the notion of term was given on p. 79; this notion will now be
expounded in detail.
7.2.1. A term is a participant in an analogy
The four 'things' involved in an analogy are its 'terms'. I call 'term' whatever enters into
the expression of an analogy. In analogy:
X : Y :: A : B
X, Y, A and B are terms.
In exemplarist constructions, constituents are terms and the assembly is also a term.
Thus, in a plexus, participants in A-type records and in C-type records are all terms. A
same term may occur in many records. A same term may occur in A-type records and in
C-type records. This homogeneity across record types is important because it conditions
their joint mobilization into the dynamics; it is therefore a productivity factor.
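A minimal sketch of this homogeneity of terms across record types, under illustrative assumptions about representation (plain strings as terms, tuples as records), is the following; it is not the model's actual inscription format, and the record contents are hypothetical.

    # The same term ("the cat") may occur in records of both types;
    # nothing distinguishes it in itself, only its positions do.

    # A-type records: four terms in proportion, t1 : t2 :: t3 : t4.
    a_records = [("the cat", "a cat", "the cats", "cats")]

    # C-type records: constituent terms and the assembled term.
    c_records = [(("the", "cat"), "the cat"), (("the", "cats"), "the cats")]

    def terms_of(a_recs, c_recs):
        # Collect every term, whatever the type of record it occurs in.
        terms = set()
        for rec in a_recs:
            terms.update(rec)
        for parts, whole in c_recs:
            terms.update(parts)
            terms.add(whole)
        return terms

    shared = {t for t in terms_of(a_records, c_records)
              if any(t in r for r in a_records)
              and any(t == w or t in p for p, w in c_records)}
    print(shared)   # e.g. {'the cat', 'the cats', 'cats'}: mobilized by both types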
A term is an excerpt of linguistic form; I will show below what non-formal terms
('private' terms) might be.
A term is a fragment of linguistic form which may constitute a syntactic unit. There are
some modifications with respect to received descriptive and theoretical frames:
- a morpheme may be a term, the word not being postulated here (cf. p. 159).
- in a non-concatenative morphology, a term may be a non-cohesive part of linguistic form (for example, a three-consonant base in a Semitic language or a vocalic pattern in the same languages) or any other excerpt of the form, according to the proper structures of the particular morphology.
- a segment of form consisting of several words or morphemes (a syntagm, e.g. le grand jour) may be a term,
- we shall see that morpheme assemblies that are not usually accepted as syntagms (e.g. in Fr. à la or un très) must be able to be considered as terms (cf. p. 198).
- morphological phenomena may cause indecision as to the boundaries between terms.
7.2.2. Essentiality (or not) of a term
A term is re-identifiable in its recurrences; it is recognized as the same term in all its
occurrences.
For a term, being re-identified across its recurrences does not require that the term be
reified. A term is not a thing; it has no content, no properties. Of a term, one can tell
nothing more than its occurrences in diverse positions of various systemic or structural
analogies, so that what plays between terms are not relations (relations only occur
between objects or individuals); what plays between terms are the copositionings
instituted by the analogies of the plexus, then the copositionings which the dynamics
abduct from the former.
At the price of this poverty, we can expel essences and ontologies, and we can explain
without drawing on metalanguage.
7.2.3. Minimality suspension for terms
I introduced (p. 81) the need to 'suspend the minimality' of terms, that is, to refrain
from seeking 'atoms' the recombination of which would provide for descriptive and
theoretical needs. This is because, in linguistics, the evidence invites us to stay away from too
"Cartesian" a vision: firstly the empirical evidence that different planes and orders
interact, then the evidence that a uniform, minimal description level does not
accommodate all facts. So it is for lexicalization or grammaticalization, for example.
This is why granting any minimality whatsoever to terms, or constraining them in this respect, is
simply refused. A term is not constrained to be elementary or minimal; that is, analogies
may be established between elements of different grain, concurrently and
complementarily; an assembly of terms may also be a term. Elementarity is not
foundational by itself; the decision to break a term down (to analyse it) is just a matter
of opportunity, a matter of judgment which the speaker makes, unconsciously most
often: it is contingent. Not all speakers make the same decisions on all points; the same
speaker may not make the same decisions on all occasions.
Suspending minimality in this way is negating two things (which would constitute the
antagonist viewpoint, the 'primarist' or 'elementarist', or 'foundationalist' position):
1) Univocity. There would be a level of breakdown into elements (the 'quarks' of
linguistics) from which all phenomena would be reconstructed and explained.
2) Uniformity. This breakdown level ought to be the same everywhere in a
language (in all languages) and apply the same way to all phenomena.
Minimality suspension asserts the contrary on both these points:
1) Multivocity. The same linguistic material may break down differently according
to different viewpoints, giving different structures, often interdependent, but
distinct189. The elements of one are not the elements of the other and there is not
a system of atoms which is common to both.
2) Non-uniformity. The same material occurring twice may have uneven
breakdowns in its two occurrences, even according to the same viewpoint. It is not
postulated that decomposition has to result in a uniform tier; it does not have to
be the same in the entirety of a language (of all languages).
No minimal uniformity of terms according to any criterion. The sub-determination of
analogy allows us to make mappings among units of different grains. However, tier
189 About this, think of the multiple structures of van Vallin, Sadock, Jackendoff, and Selkirk.
effects (e.g. morpheme) may happen and extend up to quasi-generality. Such tiers are
not assigned a role in the explanatory construction; this removes the risk of producing a
theory which would fail at the margins of these quasi-generalities. However, a model
which suspends minimality has to account, as an effect of analogy, for the
(re)construction of these quasi-general tiers, but as phenomena, not as causes.
Minimality suspension relates to reanalysis. Sometimes the reanalysis process leaves a
residual part with uncertain grain and unassured status. This part did not have a
leading role in the reanalysis process; it is rather a residue. Its putative attributes get
determined 'by default', by analogy and subtraction; for the speaker, they remain a matter
of subliminal conjecture. The term in question may participate concurrently in other
analyses and finally, different points of view of different grain may have to coexist.
Minimality suspension is useful in three identified cases, and possibly more:
a) for syncretism, cf. p. 160,
b) for amalgamation (Fr. du, des), cf. p. 122,
c) in the case of entrenched phrases.
Minimality suspension applies initially to formal terms: those which are constituted of
linguistic form. It is expected that it would apply also to private terms190.
190 About private terms, it is interesting to compare this proposition with the solution Nelson Goodman gave to a problem that he met.
This logician, already quoted above, having made the critical analysis of the model of "The Logical
Construction of the World" by Carnap (Der Logische Aufbau der Welt), makes a proposal to replace it
with a realist system (by contrast, Carnap's was particularist). He proposes a first solution, then finds in it
the same major defect as in Carnap's system: it fails as soon as more than two qualia are considered. After
examining several other possibilities of correction, he finds an improvement by agreeing to consider as
individuals "sums" of atoms, along with the atoms themselves:
Among several different possible revisions, the best is perhaps also the most obvious. The choice
of atoms need not be changed, but all sums of two or more atoms are likewise admitted as
individuals, and some of these are included as basic units. In particular, primitive togetherness
is construed as obtaining not only between qualia, but between any two separate sums of one or
more qualia contained in a single concretum. Whereas Wh [Wh is the relation of "togetherness"
defined in his first proposition] obtained between every two distinct atomic qualia in a
concretum, the new primitive, W [the new relation of togetherness], obtains between every two
discrete parts of a concretum (or more accurately between every two individuals that are sums of
qualia, that are systematically discrete, and that are parts of one concretum). This involves no
departure from the ordinary notion of togetherness, but merely interprets it systematically by a
less restricted primitive. A color may quite as naturally be said to occur at a place-time, or a
color-spot at a time, or a color-moment at a place, as a color at a place or at a time. Goodman
1951, p. 208-209.
As already mentioned, the primitive relation (for Goodman the relation of togetherness), base of the
construction of qualia and qualities, holds between pairs. It is possible to see this as akin to analogy.
However, analogy was not thematised by Carnap, nor by Goodman. Moreover, their common
project of an Aufbau, that is, a construction able to found "the actual process of cognition" (p. 180) on
primitive elements, is not mine; I already established why a primitive basis is not a good idea in linguistics
and related fields. However, the finding by Goodman that it is useful to have terms of variable grain,
and the minimality suspension that results, is maybe more than a fortuitous coincidence.
Other authors, in the neighbouring field of 'qualitative simulation', also assess primarism
as a dead-end:
The qualitative simulation algorithms developed to date are problematic as models of
human reasoning. Current qualitative simulation algorithms operate via first-principles
reasoning over general-purpose axiomatic knowledge. They often produce a huge
number of possible behaviors (hundreds or even thousands) even for relatively simple
situations. The reason for this is that qualitative simulation, because of the decreased
resolution of information about a state, tends to be ambiguous. In a quantitative
simulation there is a unique next state, but in qualitative simulations, there can be
several next states, corresponding to different transitions that are logically consistent
with the resolution of the qualitative state information. Each of these has several next
states in turn so their number grows exponentially … which makes such algorithms
seem psychologically implausible, given how easily people reason about everyday
physical situations.
A second problem with first-principles qualitative simulation algorithms as models of
human commonsense reasoning is that their predictions tend to include a large number
of spurious behaviors that logically follow from the low-resolution qualitative
descriptions that they use as input but are not in fact physically possible. … this is not a
viable option for modelling the commonsense of the person on the street, who is
capable of making reasonable predictions even without such detailed information.
Forbus 2001, p. 35.
It is interesting to see that, in order to prevent computational explosion, these authors
also call on analogy with proximal scope:
We [Forbus and Gentner 1997] suggest that the solution to this puzzle lies in our use of
within-domain analogies (e.g. literal similarity) in commonsense reasoning. We claim
that a psychological account of qualitative reasoning should rely heavily on analogical
reasoning in addition to reasoning from first principles. Qualitative predictions of
behavior can be generated via analogical inference from prior observed behaviors
described qualitatively. Prediction based on experience reduces the problems of purely
first-principles qualitative reasoning, because they are limited to what one has seen.
Forbus 2001, p. 35,36.
Because they are limited to what one has seen: this is the proximality principle already
discussed at length; all these things hold together jointly and severally.
7.2.4. Delimitation of terms as seen from morphology
In the current state of the model, since plexii are hand-made, the desirability of making
this or that form a term is judged by the descriptor as resulting from the desirability of
making certain structure mappings. We are ready to think he is serious, but we also want
to understand the principles which ground his decisions and under what conditions a
linguistic form qualifies as a term.
It appears that the response cannot be dogmatic or propositional; it will rather be
contingent and dynamic.
A form is a term firstly when it meets the needs of analogical co-segmentation (for
example métal- below). Secondly, a term may be produced by a mechanism of masking
and difference, as what remains (for example chir- below) after subtracting another,
already established term. The new term then appears as a residue.
Let us take an example in the morphology of French (italics are received words, straight
typeface ones are not):
           -ique        -urgique        -urgical
métal-     métallique   métallurgique   métallurgical
plast-     plastique    plasturgique    plasturgical
chir-      chirique     chirurgique     chirurgical
Table 15 Example in the morphology of French
The need to account for métallique, métallurgique, plastique, plasturgique causes the
creation of the terms métal-, plast-, -ique, -urgique. This is analogical co-segmentation.
Then – setting aside the gemination -ll- at this stage – the following paradigm of C-type
records is set up:
C   métal-   -ique      métallique
C   plast-   -ique      plastique
C   métal-   -urgique   métallurgique
C   plast-   -urgique   plasturgique
Then the encounter of chirurgique establishes a mapping with métallurgique, plasturgique
which, by masking and difference, creates the term chir-. The paradigm above is thus
complemented with the record:
C   chir-   -urgique   chirurgique
The form chir-, which is a residual form, is taken as a term; it appears in the model191. It is
the residue of chirurgique from which -urgique is subtracted.
The analogical dynamics then acquires the potential to license the form chirique by
constructability transfer. That is, i) if this form is presented to it, the model can analyse
it, and ii) in response to a production need tending towards it, the model could produce it.
In the model's dialect, this form is possible at this point of the model's development and
at this point of the discussion. There is a slight worry: the dialect of the model here seems
to diverge from that of French speakers, because none of them would produce that form
and accepting it would be a problem for all of them. The fact that the form chirique is
possible for the model does not entail that it will produce occurrences of it. If the model
contains the term manuel, which may be expected in a plexus approximating a French
speaker, this term will occur in the plexus, for example, in a paradigm like the following:
A   métal           métallique
A   automate        automatique
A   main            manuel
A   œil (En. eye)   oculaire
A   œil             optique
191
A similar development, with a suffix this time, can be done in the rightmost part of the table, with the term -urgical.
By the principle of economy, because it costs more to assemble a new form than to
retrieve from the inscriptions one which satisfies the needs of the production act, manuel
will be produced, which will block the production of chirique by assembly.
This argument presupposes that there is no other obstacle to the productive reuse of the
term chir-. Now there is one in this particular case: the masking-subtraction operation
works well within the form, but does not find an easy prolongation in meaning. The
speaker who does not know Greek (does the model know it?) has no direct reason to
recognize the hand in chir-. Here again, the question is mentioned, but we must stop at
the edge of what it is possible to do today.
In any case, it has been shown how masking-subtraction gives birth to a term. A term so
obtained does not have as great a strength as one obtained by analogical co-segmentation;
it remains in a paradigmatic margin. By contrast, métal- and -ique are more solid because
they are licensed by cross-attestations. Analogical co-segmentation produces stronger
terms and masking-subtraction, weaker ones. Yet the term chir- is there for the rest of the
linguistic career of this plexus (or of the corresponding speaker). It may find a use upon
subsequent encounters of chiromancie or chiropractie, for example.
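To make these two term-creating mechanisms concrete, here is a minimal Python sketch (a toy illustration, not part of the model; the function names are hypothetical) of analogical co-segmentation over the four attested forms and of masking-subtraction applied to chirurgique:

```python
def co_segment(a, b, c, d):
    """Analogical co-segmentation sketch: given four forms arranged as
    a:b :: c:d (e.g. metallique:metallurgique :: plastique:plasturgique),
    return candidate terms from shared prefixes and remaining suffixes."""
    def lcp(x, y):
        # longest common prefix
        i = 0
        while i < min(len(x), len(y)) and x[i] == y[i]:
            i += 1
        return x[:i]
    stem1, stem2 = lcp(a, b), lcp(c, d)
    suffixes = {a[len(stem1):], b[len(stem1):], c[len(stem2):], d[len(stem2):]}
    return {stem1, stem2}, suffixes

def mask_subtract(form, known_term):
    """Masking-subtraction sketch: subtract an already established term
    from a form; the residue is proposed as a new (weaker) term."""
    if form.endswith(known_term):
        return form[:-len(known_term)]
    if form.startswith(known_term):
        return form[len(known_term):]
    return None

stems, suffixes = co_segment("métallique", "métallurgique",
                             "plastique", "plasturgique")
print(stems)     # e.g. {'métall', 'plast'}  (gemination -ll- set aside, as in the text)
print(suffixes)  # e.g. {'ique', 'urgique'}
print(mask_subtract("chirurgique", "urgique"))  # 'chir', the residual term
```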
Examples could also be taken in syntax. Goldsmith192 presents, for example, the result of
a corpus analysis by the method of Minimum Description Length. Overall, the method
finds the morphemes to which we are used (here these would be terms), but
occasionally it deviates; for example, it considers the form of the without segmenting it; the
is indeed a morpheme elsewhere, but of is not (here, of the would be a term on its own).
In this corpus, the need does not arise to make of autonomous: all its occurrences are
followed by the. Here is another case in which analogical co-segmentation does not
lead to distinguishing a term193.
7.2.5. Delimitation of terms as seen from syntax
The model stays as underspecified as possible so as to allow, as freely as possible, all the
creations and conjunctions that are observed when speakers produce or accept
linguistic material. Therefore a constraint must not be placed on terms unless it is
strictly motivated. To help understand how terms are determined, it was
stated above that "a form is a term in the first place when it results from the needs of
analogical co-segmentation". This is the minimum, and it still leaves the possibilities very open.
A theory like Generative Grammar is by contrast very precise on this point: the notion
'syntagm' is precisely defined, and syntagmatic structure is very constrained in it. For
this, the theory has long-standing and precise reasons: the analysis of the shortcomings of
Markovian and probabilistic models. What is at stake is to reject monstrous formations
such as a trigram model might produce if it were taken as a productive model:
My question to you those pictures may still not in Romania and I looked up clean; you
192
John Goldsmith, lecture at ILPGA, Paris, May 2002.
193
Incidentally, this case also illustrates minimality suspension.
were going to take their cue from Anchorage lifted off everything will work site Verdi.
(cf. more complete quotation and reference p. 228).
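To see where such monsters come from, here is a toy sketch of trigram production over a small invented corpus loosely echoing the quoted output (this is only the mechanism being rejected as a productive model; corpus and smoothing details are omitted):

```python
import random
from collections import defaultdict

corpus = ("you may still not work in romania . "
          "those pictures may still not be clean . "
          "i looked up the pictures . "
          "my question will still not work").split()

# Count the continuations of each bigram.
trigrams = defaultdict(list)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    trigrams[(w1, w2)].append(w3)

def babble(start, length=15):
    """Repeatedly sample a continuation of the last two words: each step is
    locally plausible, but nothing constrains the whole -- hence the monsters."""
    out = list(start)
    for _ in range(length):
        nexts = trigrams.get(tuple(out[-2:]))
        if not nexts:
            break
        out.append(random.choice(nexts))
    return " ".join(out)

random.seed(0)
print(babble(("those", "pictures")))  # locally fluent, globally incoherent word salad
```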
The example of article-preposition amalgamation in French (cf. p. 122) has already shown
the advantage of allowing terms that infringe this canonical vision and are
not received syntagms: it is interesting to keep the freedom to map [aux] [champs] and
[à la] [ville] together, and this requires constituting the term [à la]. This term is not a
syntagm in classical frameworks, but keeping this liberty offers a solution to the question of
amalgamation in Romance languages which is more flexible, and theoretically cheaper,
than those of previous frameworks.
If we adopt this way of thinking and find some good reason194 to associate article and
adverb, should we then also acknowledge a paradigm – free of any amalgamation this
time – like:
[un très]    [grand chien]          a very big dog
[un si]      [bon moment]           such a good moment
[un aussi]   [mauvais traitement]   such a bad treatment
[un trop]    [petit nombre]         too small a number
where the leftmost term is not a syntagm in the received acceptations. In itself, this
paradigm is not bad, in the sense that constructability transfer works well among all its
records. There is no risk of making unwanted productions.
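Purely as an illustration (the paradigm representation below is a simplified stand-in, not the plexus format), constructability transfer within such a paradigm can be sketched as follows: any left term attested in the paradigm may be paired with any right term attested in it, and nothing outside the paradigm is licensed.

```python
# Hypothetical, simplified representation of a paradigm of C-type records.
paradigm = [
    ("un très", "grand chien"),
    ("un si", "bon moment"),
    ("un aussi", "mauvais traitement"),
    ("un trop", "petit nombre"),
]

def licensed_by_transfer(left, right, paradigm):
    """Constructability transfer (sketch): an assembly left+right is licensed
    if 'left' occupies the left position in some record of the paradigm and
    'right' the right position in some (possibly other) record of it."""
    lefts = {l for l, _ in paradigm}
    rights = {r for _, r in paradigm}
    return left in lefts and right in rights

print(licensed_by_transfer("un trop", "grand chien", paradigm))   # True
print(licensed_by_transfer("un très", "petit nombre", paradigm))  # True
print(licensed_by_transfer("un très", "chien dort", paradigm))    # False: outside the paradigm
```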
Accepting this raises a suspicion: if a plexus encompasses terms that deviate so much
from strict syntagmaticity, do we not risk licensing long assemblies which would be
aberrant because they infringe syntagmatic structure?
This is not to be feared. Longer assemblies are constructed by expansions, owing to
plexus structures which involve several records and were named 'expansive gates'
above, p. 86. In short, in an expansive gate, some term is homologous to its expansion.
Expansive homology, because it requires expansive gates in the plexus, subordinates the
abductive licensing of an assembly by expansion to a precise and constraining
condition: the dynamics must find in the plexus an expansive gate adapted to the case.
As long as this does not happen, long aberrant assemblies cannot be produced. Now, we
do not expect terms like à la or un très to be anywhere homologous to an
expansion of theirs. Consequently, these terms cannot cause aberrant assemblies.
So we have to distinguish two modes of syntactic productivity.
The first one is the expansive productivity schema; it is classical, uses expansive
homology, and depends on expansive gates in the plexus. The terms on which it bears
are constrained: they must be "well-formed" syntagms.
194
There is indeed a reason in French to associate article and adverb together, a fuzzy and partial one,
which is the commutation of this group with [tout] (En.: all, any) and [quelque] (En.: some), these two
words having a tendency to better associate with group Adj+N [tout mauvais traitement] than with N
alone [tout traitement], in certain cases. This is a light, sub-grammatical phenomenon. The
locality of inscriptions allows us to account for this preference simply by means of a few paradigms.
The second one, introduced here, is the non-expansive productivity schema; it is based
on constructability transfer alone. It may bear on terms which do not observe classical
constituent analysis: they may be non-syntagms.
All frameworks so far that have stated something precise about syntactic productivity195
have focused excessively on the expansive schema and accepted as syntagms only
segments that undergo expansion. In doing so, they neglected to see that the non-expansive
productivity schema releases a constraint on the boundaries of terms,
broadens the space in which analogical mappings may take place, and adds a degree of
liberty in the apprehension of analogies, that is, in the precision of, and faithfulness to,
the linguistic data.
7.2.6. Terms should be simple and commonplace
Another reason which contributes to qualifying a form as a "good" term is that it will be all
the more useful as it is simple and commonplace.
A commonplace term (a morpheme alone, a short assembly of ordinary morphemes) has
a higher chance of being reused.
If the content of a plexus is driven by a principle of maximum utility, one is led to
favour commonplace terms. A rare morpheme must not be avoided if one thinks that the
model has to contain it, but the length of the terms may be chosen. One is then led to
favour short terms, even more so if they contain rare morphemes. The notion of rarity,
of course, is understood relative to a specific speaker, since the model is that of a speaker.
This condition would cease to hold if the model were complemented with self-analysis
(cf. section 8.4., p. 258): self-analysis reduces long terms into shorter terms with higher
utility, which removes the inconvenience of the initial long term.
7.2.7. When do we want two different terms or a single one
The identity of terms would not be completely understood without recalling the cases
in which it is not clear whether one term or several are needed. These cases caused
problems for previous theories and fall into two classes.
In the first one, the same form occupies different places in paradigmatic frames. This is,
for the most part, the case of homonymy and syncretism. Section 6.1.2. Homography,
accidental homonymy, syncretism, p. 160, showed how analogy, by allowing us not to
over-specify, authorizes a better adapted approach to the phenomena.
In the second one, the same place in an analysis frame is occupied by different forms
depending on context. This is the case of complementary distribution, that is, of lexical
allomorphy and, in phonology, of alternation. Alternation will be addressed in a
forthcoming work bearing more generally on phonology. Lexical allomorphy is treated
in section 6.1.3. Allomorphy, p. 167.
195
Tesnière, Harris, Bar-Hillel, Lambek, Chomsky, Mel'cuk, Shaumjan, etc.
7.2.8. Constituency
The idea of constituency is an old one, even when it is not explicitly stated; it starts from
an obvious experience and from the strong intuition that parts of utterances get
reassembled into other utterances. Thence come words, phonemes, morphemes and
syntagms.
Constituency crystallizes with Hockett in the 'immediate constituent' analysis. For
transformational generativism, constituents map onto the nodes of the phrase structure
and are also the elements affected by transformations.
Constituency is sometimes opposed to dependency: Fillmore196 reconciles (with a
reservation) the dependential conceptions of Tesnière (valence and the 'stemmata') with
the constituential conception which is his proposition in construction grammars.
The scope of constituency exceeds linguistics and extends to cognition:
The question of constituency recently gave birth to an important debate. In their critical
analysis of the propositions defended by connectionism, Fodor and Pylyshyn (1988)
strongly reaffirm the foundations of what is commonly called the classical cognitivist
paradigm. The debate arises from the new connectionist dynamic models which avoid
compositionality as a matter of principle. The question of constituency then becomes a
central argument in favour of dynamical models which do not present this property.
From a demonstrative and empirical standpoint, it is indeed in the domain of the
analysis of languages that the debate may be arbitrated. In effect, if there is a domain in
which compositionality, constituency, and, more generally, the syntactic organization
of the representations has been elaborated, it certainly is linguistics. We know the
cognitive Fodorian theses to form the heart of the Chomskyan paradigm, whence the
importance placed by Fodor in the syntax of linguistic expressions. In Generative
Grammar, these hypotheses much exceed the scope of stricto sensu syntax, so that
numerous generativist phonologists present their models as a theory of the syntax of
phonological phrases (Kaye, Lowenstamm, Vergnaud 1990). From this viewpoint,
phonology offers a particularly interesting field in which to put compositional models to the test,
because in it, the notion of constituency is expressed as a formal, high level real
hypothesis197.
In syntax too, constituency, this happy mereology, does not end up exhausting the
observations, and:
Before promoting it, as cognitivism did, to the status of a confirmed hypothesis, it is
certainly useful to question its adequacy198.
To this question, some199 already answered:
Cognitive grammar regards constituency as less essential than does generative theory,
and also as more fluid and variable (Langacker, 1995a, 1997b). Phenomena for which
syntactic phrase trees per se have been considered indispensable (e.g. the definition of
subject and object) are claimed to be better analyzed in other ways.
196
Fillmore 1992, p. 102.
197
Laks 1993, p. 17-18.
198
Laks 1996, p. 169.
199
Langacker 1998, p. 23.
In the model proposed here, the notion of constituency is weakened in two ways.
Firstly, with the suspension of minimality: the status of possible constituents is weakened
by the fact that they may be shorter or longer depending on the occurrential
need to map the same fragment in several ways with different homologues. It is
weakened also by the fact that the same span may undergo several different segmentations,
which are as many complementary, non-contradictory analyses.
The weakening is increased by the vacuity required of terms. As they are
deprived of essential properties, it becomes more difficult to say of terms that they
are constituents. In the schema of constitution, the "assembly shop" which grammar is
supposed to be elaborates the properties of the assembly from those of the constituents:
endo- or exocentric category, compositional meaning, etc. Here the operation is not this
one, since category is not reified and meaning matters are handled on an occurrential
basis, by transfers, subtractions and interpretative abductions bearing on private terms,
leaving room for whatever non-compositionality is proper to each occurrence, the particular
case of compositionality being, after all, fairly frequent.
7.3. Position, positionality, copositioning
7.3.1. Positions and copositioning
This model obtains language effects by a strict observation of positionality when
establishing a plexus and then during the computations; all this with no reified category
and no reified rule. Moreover, if it is difficult to figure out how neurons can be the
effectors of operative rules, it is easier to see assemblies of them forwarding,
transferring, and recombining copositionings.
Positions are not defined in the absolute: they are defined for terms in relation to
one another. This is why an a priori definition of positions – of 'positional types' – would be
void. It is also vain to attempt to define the essence of a position by definitional
propositions. All that is expected from positions is to be able to say things like: these
two terms are, with respect to one another, in the same positions as these two other
ones. And this is enough: linguistic dynamics need nothing more; whence the notion
'copositioning'. If the word 'position' is found again below, it will only be out of
simplicity or metonymy, and what is understood is always 'positionality' or
'copositioning'.
Why use 'copositioning' when 'ratio' is attested, particularly in association with analogy?
Firstly because 'copositioning' better suggests a general play; then because
'copositioning' has the merit of standing in opposition to 'position'. The importance of positions is
recognized, but a shift is immediately introduced by setting them in a differential play and
simultaneously 'de-reifying' the position. This reiterates the negation of the slot-filler
schema. 'Copositioning' appears to better encompass this wealth of connotations.
7.3.2. Position as place or as role
Among linguists, the uses of 'position' span between two poles: in the first one the
position is the place in the linguistic form, and in the second one something rather like a
role.
The first pole is illustrated by Harris (1951):
Even when studies of particular interrelations among phonemes or morpheme classes
are carried out, the frame within which these interrelations occur is usually referred
ultimately to their position within an utterance (p. 11). The environment or position of
an element consists of the neighbourhood, within an utterance, of elements which have
been set up on the basis of the same fundamental procedures which were used in setting
up the element in question. 'Neighbourhood' refers to the position of elements before,
after, and simultaneous with the element in question (p. 15). … We can thus identify
any morpheme class, group of classes, or construction, in terms of the next higher
construction in which it participates and the position it occupies in it (p. 332).
The second pole is illustrated by Milner. To the former notion, which he names 'place', he
opposes a position with a more syntactic character. The subject of the active sentence
and the agent complement of the analogous passive sentence are/occupy the same
position. The stability of this notion results from the fact that:
One excludes sheer swaps between canonical positions. It never happens that a term
acting as object complement becomes (by transformation or otherwise) complement of
attribution or subject200.
Milner himself recognizes his notion of position as akin to proposals already made by
other authors:
Among the absolute properties of (some of ) the positions (notably the subject), must be
counted semantic properties. … This is a research programme long formulated in terms
of actors [Fr.: actants] (Tesnière) and more recently in terms of thematic roles (school
of Cambridge). It needs to be stressed that the school of A. Culioli reframed the
question by proposing to very strictly reduce the list of possible extrinsic properties; but
its reasoning is not positional201.
The two poles are contrasted by Fradin202, after whom the following table may be built:
200
Milner 1989, p. 408.
201
Ibid., p. 440-441.
202
Fradin 1999, p. 12
Syntactic theories which build on relations of linear precedence and hierarchical dominance:
   For these theories, the purpose of syntax is to give:
   1. rules for the construction of syntagms
   2. rules specifying their arrangement
   3. the relations between the units of these constructions
   They account for the construction of syntactic units and for their combinations.
Syntactic theories which are not combinatorial only but also recognize a notion of position, independent from syntagmatic realization:
   For these theories, the purpose of syntax is to give:
   1. the geometry of the positions which a language authorizes
   2. the occupation relations which are legal for each of the positions
   3. the grammatical relations which may be associated to them
   Examples:
   - Categorial Grammars
   - Tree Adjoining Grammars
   - Polychromous Trees Grammars (Cori & Marandin)
   - Kathol & Pollard 1995.
Table 16 Positions as places and positions as roles
I write 'role', but I could have used 'function' if the term were not already so loaded in
grammar. The point is to be clear about what functional scene we are talking about. The
'function' of the grammarians can be seen as a problematic attempt to blend, hybridize
or bridge the two poles. In the 'theatre' of language, the mechanisms of the play are not
fundamentally different whether we consider linguistic form alone or private terms and
meaning, and the most visible part of the show happens in between.
The word 'copositioning' and many of its implications apply both to positions as places
and to positions as roles. Several aspects of the computation apply equally to both. This is so for:
- the four abductive movements (already covered, and addressed again in an appendix),
- the fact that the play deploys itself in the interval between the terms of a task and the inscriptions in a plexus (later in this section),
- the question of positioned resetting (later in this section),
- the similarity of copositionings, which is mediately determinable (right now).
The difference between the two poles is that the position as place is supported by C-type
records, the terms of which are segments of the linguistic form, whereas the position as
role calls on private terms and will have to be supported by an extension of the model –
yet to be done – and which might take the form of a new type of records, or some other
form.
7.3.3. The similarity of copositionings is mediately determinable
In the effort which he pursues, after Carnap, to build a system making a logical link
between experience and categories, Goodman, borrowing from Carnap his
Elementarerlebnisse (elements of experience, which are elementary in the sense of
instantaneous) writes as follows:
The precedence of erlebs203 near together in time will usually be determinable since
such erlebs will usually be part similar, possessing in common some persisting quality.
And because precedence is transitive, the precedence of erlebs that are temporally
remote and wholly dissimilar will then be mediately determinable in many cases204.
Likewise, the similarity of copositionings is 'mediately determinable'. After several
computation phases, the resulting configurations of terms may be very dissimilar from
the initial ones and analogical ratios may have drifted. However, if upon each transition
care has been taken to conserve the copositionings, the resulting configuration is
positionally linked with the initial terms: the copositionings have remained 'mediately
determined' throughout. This 'mediate determination' of copositionings is not separable
from the very notion of abduction in linguistics.
The mediate determination of copositionings is my proposal to reconstruct the principle
of structure preservation, already mentioned page 17, which is the idea, recalled by
Milner, that it is impossible for syntax to create new positions:
This notion introduced by J. Edmonds, later taken over and modified by Chomsky,
raised up high misunderstandings because it actually contains two different
propositions. The first one with no direct concern here, is the distinction between main
propositions and subordinate propositions (fundamentally, this constitutes structure
preservation in the sense of Edmonds). The second one only concerns us: it bears on the
impossibility for syntax to create positions205.
From this principle it follows that the number of positional configurations in a language is
very limited. This zooms back from the positions themselves, and 'positional
configurations' is closer to my copositionings. The reconstruction of the principle of
structure preservation requires, however, a slight inflection: rather than a sheer
impossibility of creating positions, it is better seen as a strong, but not absolute,
conservatism. Positional configurations are not immutable; they resist, but evolve
slowly; we do not speak with the syntax of Latin.
Transitive conservation of copositionings is easy to conceive in the abductive movement
by transitivity because paradigmatic links in plexii are exactly about that and crossing a
link conserves position by definition. But it also applies in positioned resetting (cf. p.
206).
In the Analogical Speaker, the dynamics develops coherently as soon as it is initiated,
that is, once initial positional settings, initial copositionings, are acquired. This model
does not cover the way in which initial copositionings obtain, and deliberately so. The initial
acquisition of copositioning presents itself in two complementary but distinct figures: a)
acquisition of linguistic knowledge, that is, how the experiential history of a subject
yields a knowledge constituted of copositionings between terms, which the linguistic
dynamics will later utilize; b1) upon initialization of a particular reception act, how to
203
The basic units chosen [by Carnap] for the system are called Elementarerlebnisse [which I shall
hereafter abbreviate as erlebs]. They are full momentary cross sections of the total stream of experience.
They are limited to a least perceivable segment of time, but are otherwise unlimited except by the bounds
of immediate experience itself; each includes all the experience at a moment. Goodman 1951, p. 154.
204
Goodman 1951, p. 180.
205
Milner 1989, p. 649.
pass from a perceived sound flow without status (phonetic) to more systematized,
copositioned units (phonological); or b2) upon initialization of a particular emission
act, how to pass from a flow of mental events without status to an organization of
discrete and copositioned private terms, to which a computation applies, resulting finally
in an organization of formal terms.
A resetting (cf. next page) is false or ill-defined if it does not preserve copositionings.
The phrase "preserve copositionings" must be well understood. It is not some quality
associated with a single position that has to be preserved. What is preserved is exactly the
copositionings, since terms can only be positioned with respect to one another.
This topic will be met again in a further appendix when criticizing agent CATZ: this
agent is suspect because it has a single argument, which leaves no room for the
definition of any copositioning. An agent like CATZ ultimately has no real place in the
Analogical Speaker; a future evolution of the model should render, by positionally sounder
means, the function which CATZ now serves: the suggestion of similarities for the benefit
of B2-B3 or other beneficiaries.
7.3.4. Positionality plays between the terms of an act and terms in a plexus
Positionality plays in the first place within a plexus: a plexus must encompass
copositionings that are coherent and faithful to a speaker's linguistic and cognitive
knowledge; otherwise the plexus would be wrong.
But there is also a positional play between the plexus and the data of the act, between
positions of terms of the act and positions of terms in the plexus. In the solving of a task
by immersion (p. 263), immersion is an overall process which encompasses the terms of
the task and the terms of the plexus.
7.3.5. Positioned resetting
The notion 'resetting' was first introduced p. 144 in the chapter on systemic productivity.
A resetting takes place each time the motivation for recruiting an agent (that is, the
motivation underlying a client-commissioner relation in the heuristic structure) is
something other than crossing a paradigmatic link. The cases are as follows (a minimal
sketch of the indexes involved is given after the list):
1. from a given term, access a record containing it by using the index of term
occurrences (simple index, agent CATZ)206,
2. abductive movement by transposition: from a given pair of terms, access a record
where this pair occurs by using the index of analogical pairs (double index, agent
ANZ),
3. when agents B2 and B3 perform an assembly, it is possible to see a resetting,
since the client-commissioner link rests on something other than the crossing of a
paradigmatic link, but in this case the mechanism is complex.
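To fix ideas, here is a minimal sketch of the two indexes involved in cases 1 and 2; the record layout and names are placeholders, not the actual plexus specification given in the appendix.

```python
from collections import defaultdict
from itertools import combinations

# A record is modelled here, for illustration only, as a tuple of terms.
records = [
    ("métal-", "-ique", "métallique"),
    ("plast-", "-ique", "plastique"),
    ("métal-", "-urgique", "métallurgique"),
    ("plast-", "-urgique", "plasturgique"),
]

# Simple index (case 1, agent CATZ): term -> records containing it.
term_index = defaultdict(list)
# Double index (case 2, agent ANZ): unordered pair of terms -> records
# in which the pair co-occurs.
pair_index = defaultdict(list)

for rec in records:
    for term in rec:
        term_index[term].append(rec)
    for a, b in combinations(rec, 2):
        pair_index[frozenset((a, b))].append(rec)

# A resetting accesses records through one of these indexes rather than by
# crossing a paradigmatic link:
print(term_index["-urgique"])                      # records containing -urgique
print(pair_index[frozenset(("métal-", "-ique"))])  # records where the pair occurs
```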
Resetting is important because it is one of the main factors of productivity. Without it
the only possible productivity would be internal to a paradigm and this would be little.
206
For the notion of index, cf. section Access in the appendix which specifies the plexus.
Upon a transition by following a paradigmatic link, that is, a movement by transitivity,
copositioning is preserved in a simple and conceptually obvious manner. Upon resetting,
the preservation is much less simple or obvious. Resetting is thus important for a
second reason: when designing an agent that performs a resetting, care must be
taken that copositioning is preserved on that occasion. When this is verified, the
resetting is said to be 'positioned'.
Very often, a resetting makes the computation enter a new paradigm, so it could have been
named 'change of paradigm' instead of 'resetting'. This was not done, partly because of the
meaning taken by 'change of paradigm' after Kuhn, but more importantly because it is
not always the case: after a resetting, we may land in the same paradigm as the
source one, but in another of its records and with a reshuffling of roles.
The notion 'resetting' is essential: it is one of the keys of productivity by integrativity.
Resetting contributes to productivity, and the fact that it is positioned is the condition
for the computation to demonstrate coherence even in dynamics which encompass
thousands of agents.
7.3.6. Application points of positionality
Positionality applies in syntactic copositionings (i.e. structural analogies) but also in
analogical copositionings (i.e. systemic analogies).
Both must be seen as two conjoined aspects of a common apparatus.
Positionality applies to terms which are linguistic form and also to terms which are not
linguistic form: private terms.
7.4. Integrativity
Integrativity was introduced as a necessary feature p. 52; then, p. 139, a first example of
integrative operation was presented. It was met again several times, and we shall now
assess in greater detail its scope and the mechanisms which support it.
7.4.1. Scope and necessity of integrativity
The proposed model, because it does not assume categories and abstractions to be
present during operation and effective in the dynamics, is based only on exemplarist
inscriptions and exemplarist dynamics. This is a posture adopted for research and
debate. The point up to which it can be sustained is discussed p. 268.
From the moment an exemplarist course is adopted, a knowledge which would be both
exemplarist and exhaustive is out of the question; we must cope with inscriptions which are
necessarily fragmentary and partial, and the duty of the theory – and of the model – is
precisely to show how it makes up for the lacunae, that is, how the linguistic subject,
who has at his disposal only a partial linguistic knowledge, nevertheless
demonstrates an ability which extends far beyond it. The question of the integration of
these fragments is therefore inherent in a model of this type; it is necessary to make
fragments operate together, to potentiate them into integrative modes of operation.
Conceiving of linguistic knowledge as partial also relates to the learning experience:
the subject is in contact with language facts whose number is very small with
respect to the number of productions of which he becomes capable. This condition was
long recognized as the poverty of the stimulus and is recalled for example in the
following way:
The facts available to the child underdetermine radically the language which he finally
knows with such a wonderful subtlety. Chomsky in Pollock 1997, p. XVI.
From there, a debate develops, which aims at separating what would be innate from
what would be acquired, and therefore variable:
Suppose there is some aspect of language that children couldn't possibly figure out from
the evidence in the speech they hear around them. Then this aspect can't be learned; it
has to fall in the innate part of the language. This has been called the "poverty of the
stimulus argument". Its use requires a certain amount of care, and in fact there is a
running debate on what sorts of evidence children are capable of using. Jackendoff
1993, p. 34.
or, in order to justify a parametric theory of acquisition:
Very little data will suffice to allow the child to fix the ordering constraints of the
language he is learning. A child learning English will only need to be exposed to a
couple of transitive sentences to realize that in English verbs precede their
complements. Haegeman 1991, p. 96.
If none of these courses is adopted, it is proposed to consider the occurrential
inscriptions as produced, indeed, by the linguistic experience of the subject. If
experience is the origin of the inscriptions, another constraint bears on them: that of
heterogeneity. Experience does not happen in an order which would be
analytically favourable; facts present themselves in a disordered manner and the subject
must integrate them as he can, in the sequence in which they come.
This is the dimension which is sought when I strive to inscribe in a plexus paradigms
which are not only fragmentary but, in addition, heterogeneous. Remember the example
of p. 142, which successfully integrated two verbal paradigms; they are very
heterogeneous in their structure. The summary table is recalled here:
paradigm           what opposes the two terms in a record   what changes between two linked records
first paradigm     base aller - base venir                  tense + person + number
second paradigm    person 1S - person 3S                    base
The integrativity required from the model then has to integrate partial and heterogeneous
resources.
If one succeeds in this – the claim is that this work makes a step towards it – the
proposition has to be reversed: where one believed to perceive the under-determination207
of a language by the facts, is it not rather that, being regularist, one has a vision of
language which is perhaps over-determining? And if the child ends up
knowing the language with such a wonderful subtlety, is it that the understanding we
have of it is so disappointingly coarse? I have already mentioned that the reason is a different
one: the child does not learn a language; he just learns how to speak. Repositioning the
approach in this way invites us to take a very different look at the "faculty of language"
and at what would have to be innate.
Another example will provide a complementary sense of the integration of scarce and
heterogeneous data.
7.4.2. An extreme example: être jolie licensed by homme grand
In this example, the model is given a task the gloss of which is as follows:
Two terms être (to be, being) and jolie (pretty) are given. Is the assembly être jolie
(to be pretty, being pretty) possible, to what extent, and why?
This example is caricatural in the number of paradigms and the length of the abductive
chains used to solve the task. As a consequence, the result is weak (strength
.29). It was run on a French plexus in a now obsolete state of development (in the state
reached today, être jolie would rather be licensed by faire beau ([the weather] being
fair) with strength .53). In the former state, the construction infinitive + attribute was
not directly attested.
207
The linguistic knowledge of the speakers is under-determined by the facts to which children are
exposed when they acquire their mother tongue. … The under-determination of the knowledge by the facts
is, in itself, a strong argument (so-called of the poverty of the stimulus) in support of the assumption that
the acquisition of LI [internal language, or individual language] involves much more than just learning.
Pollock 1997, p. 12.
[Figure: heuristic structure of the task. Chains of CATZ agents, recruited by transitivity and by positioned resetting, lead from être and jolie through intermediate terms (voyager, finir, la France, hommes, femmes, homme; petite, grand, cheval) to the C-type record homme grand; strengths decrease along the chains (from .90 and .81 down to about .19-.29) and the findings are merged and settled into the final result (.29).]
Figure 28 être jolie licensed by homme grand
The process nevertheless succeeded in finding a (weak) reason to license être jolie: the
C-type record homme + grand. To achieve this, along the shortest abductive path, it used
serially four paradigms and thirteen computation phases. The move from infinitives to
nouns, their categorial assimilation208, took place thanks to a paradigm of prepositional
phrases: pour + finir, pour + la France (in the end [litt. for ending], for [the sake of]
France). The inspection of this path, which the reader may wish to make step by step,
gives a good idea of the model's integrative power. Another aspect of integrativity is the
fork at the rear of agent 30: two parallel paths are pursued and both turn out to be
productive, which causes reinforcement. This reinforcement compensates in part for
the damping which results from the length of the abductive paths.
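The strengths displayed along the chains of Figure 28 (.81, .73, .66, …) are consistent with a damping of roughly 0.9 applied at each recruitment; the sketch below assumes such a multiplicative rule and a simple reinforcement at forks, both of which are assumptions made for illustration only.

```python
DAMPING = 0.9  # assumed per-recruitment damping factor, read off Figure 28

def chain_strength(initial, steps, damping=DAMPING):
    """Strength after 'steps' successive recruitments along one abductive path."""
    return initial * damping ** steps

def reinforce(strengths):
    """Illustrative reinforcement when parallel paths converge on the same
    finding: keep the strongest path and add a fraction of the others."""
    strongest, *rest = sorted(strengths, reverse=True)
    return strongest + 0.1 * sum(rest)

# A long chain damps the result ...
print(round(chain_strength(0.81, 12), 2))  # ~0.23
# ... and two productive parallel paths partly compensate for the damping.
print(round(reinforce([0.23, 0.19]), 3))   # 0.249
```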
208
This move may be considered a critical section of the computation if one keeps in mind a culture of
categories, but for a non-categorial model this vision is quite indifferent.
7.4.3. Mechanisms in the service of integrativity
As an overall property, integrativity is firstly a consequence of positioned resetting.
Positioned resetting is essential in integrative productivity.
Secondly, integrativity results from the cooperation of various agents of different types.
Integrativity thus understood is an important conjecture in this research: that things
happen in this manner in speakers. Linguistic facts are caught and memorized as
they come, in their exemplarity and in their occurrentiality, and the speaker sets up a few
analogies – from one to three, to give an order of magnitude – for each new fact. The
analogies thus set up confer on this fact a place in a few paradigms – in the actual mental
processes these may be fragmentary structures which are not exactly paradigms as the
model proposes them today. In themselves such structures are not much, but their
conjoined utilization yields much more. The hope is that the plexus structure, plus the
dynamic side of the model, offers an interesting approximation of the mental linguistic
computation.
The stimulus may well be poor, and it may well leave only sparse traces in memory; yet
the integrative use of these traces accounts for productivity.
7.5. Exemplars and occurrences
As we are doing away with categories and types, the apparatus contains things like day,
freedom, daffodil, breakfast but it does not contain things like 'name', 'noun', or 'NP'. It
contains things like:
great + day  great day or like
she + is coming + to-morrow  she is coming to-morrow
but it does not contain things like:
NP  Det + N or like
S  NP + VP + Compl.
The static inscriptions of the linguistic knowledge (the plexus) and the linguistic
dynamics bear on concrete forms. Sticking to "occurrential" is not precise enough.
When writing great + day → great day one may mean that such a thing may happen in a
speaker's experience, with no particular date assigned, without it being associated with a
particular situation: great day is possible in general and is segmentable into great + day.
If great day was met a hundred and four times by this speaker, these hundred and four
encounters are 'condensed' into one inscription only. This option cannot be said to be
properly occurrential. Call it 'exemplarist': it makes no place for types, abstractions or
categories, and bears on exemplars which condense occurrences.
When writing great + day → great day one may mean, on the contrary, that a dated
occurrence of great day was encountered by this speaker and was segmented into great
+ day for the sake of analogical mapping with other dated occurrences, like sad evening
for example, or great day at another date; this would be a really 'occurrentialist' option.
The occurrentialist option does not separate sentences from a situation.
If great day was met a hundred and four times, in the occurrentialist option there are a
hundred and four different inscriptions. Naturally, this is not sustainable; it is not the
case that we have to remember everything occurrentially. A condensation takes place
but it is not a simple projection of occurrences onto exemplars: something of the
situations is also condensed simultaneously. This is what should allow a proper
treatment of semantic questions.
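Purely to contrast the two options (the structures below are illustrative placeholders, not the plexus format), the exemplarist inscription condenses repeated encounters into one entry, while the occurrentialist one keeps each dated encounter together with something of its situation.

```python
from collections import Counter

encounters = [
    # (form, date, situation) -- a toy trace of a speaker's experience
    ("great day", "2003-05-01", "wedding"),
    ("great day", "2003-07-14", "holiday"),
    ("sad evening", "2003-07-20", "farewell"),
    ("great day", "2003-09-02", "exam passed"),
]

# Exemplarist option: occurrences are condensed into exemplars;
# no types or categories, only forms (with, at most, a count).
exemplars = Counter(form for form, _, _ in encounters)
print(exemplars)  # Counter({'great day': 3, 'sad evening': 1})

# Occurrentialist option: each dated occurrence is kept, attached to
# (something of) its situation.
occurrences = [(form, date, situation) for form, date, situation in encounters]
print(len(occurrences))  # 4 distinct inscriptions
```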
A word is needed to refer collectively to the exemplarist option and to the occurrentialist
one. I propose 'concrete', although I am aware that categorial models may also be
deemed concrete in that they encompass a lexicon. A 'concrete' theory, in this
sense, is one with exemplars – and possibly occurrences – in which categories and
abstractions are rejected.
7.6. Proximality, totality
The idea of proximality is as follows: when one thinks about something, some other
things come up in a privileged manner, not many other things, and even less a totality.
'Proximality' is distinct from 'locality' which applies to segments, constituents, syntagms
or terms which are neighbours in the form; and is so understood in n-gram approaches
in automatic language processing, or in Generative Grammars in relation with the
notions of c-command, barrier and island.
'Proximal' is also distinct from 'localist', as used by connectionists. In a connectionist
network209, the representation is local (the network is then localist) when a cell (or a
group of cells) is dedicated to represent an object of the problem (a morpheme, a lexical
entry, etc. as far as linguistics is concerned). When on the contrary, objects are
represented by the network in a fuzzy manner as in a hologram, the representation is
distributed.
The idea of proximality is not new; it is that of associationist psychology210. Its limits
are clear: it says nothing about why one thing rather than any other comes up. The
mechanics of the 'transition from' is not precise. Nothing can be made more necessary than
anything else. The theory is non-operative and sterile; it is not even constituted as a
theory. Associationism fails because it remains simple (one would associate starting
from a single element).
If one sets aside the critique and the overcoming of this defect (which will be done
below) proximality in itself comprises a dimension of plausibility: the anatomic
connectivity of neurons is very compatible with the idea of connexion "from some to
some".
209
Cf. for example Elman 1998, p. 8.
210
Associationism (Plato, Aristotle, Hume, Spencer, Taine, etc.) is the attempt to reduce thought to
associations; associations of (experiential) contiguity, of resemblance, of contrast. Associationism would
assume a psycho-physiological parallelism and fails, according to Lalande: "how could we establish a
term-to-term mapping between two sets (fibers and dendrites in the brain on the one hand, and the ideas,
images and judgements of the subjective representations on the other) which do not follow the same
method". Associationism is refuted by Bergson (Matière et Mémoire). Burloud 1948, pp. 265-267,
summarized by the author.
The proximality of inscriptions is akin to the idea of the "Knowledge lines" or "K-lines"
of Minsky: We keep each thing we learn close to the agents that learn it in the first
place211. We shall see elsewhere (p. 249) the role attributed to proximality in
learning, that is, how acquisition itself is made accountable for the proximalities in a
plexus.
Proximality and the concreteness of a theory (exemplarism or occurrentialism) go
together: if a theory cannot categorize, that is, classify its terms, the only thing left to do
is to link them together as exemplars or occurrences, and, as a linkage from each to each
would be absurd, they can only be linked from some to some. Hence transitivities form
the basis of access and transition, and this is how the notion of proximality arrives: that
is proximal which can be reached easily, that is, in few computation steps. This would
apply to simple associationism – which is not the adopted way – and it also applies to
paradigmatic linkage and plexus structure as defined in this model.
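As a sketch only (the link structure below is a stand-in for paradigmatic links between records), proximality understood as 'reachable in few computation steps' can be pictured as a breadth-first walk over linked records:

```python
from collections import deque

# Hypothetical paradigmatic links between record identifiers.
links = {
    "r1": ["r2", "r3"],
    "r2": ["r1", "r4"],
    "r3": ["r1"],
    "r4": ["r2", "r5"],
    "r5": ["r4"],
}

def proximity(start, max_steps, links):
    """Records reachable from 'start' in at most max_steps link crossings.
    What is reached in few steps is proximal; the rest stays out of reach
    for the current computation."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        rec = queue.popleft()
        if seen[rec] == max_steps:
            continue
        for nxt in links.get(rec, []):
            if nxt not in seen:
                seen[nxt] = seen[rec] + 1
                queue.append(nxt)
    return seen

print(proximity("r1", 2, links))  # {'r1': 0, 'r2': 1, 'r3': 1, 'r4': 2}
```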
A categorial theory makes no room for proximality: in a class, in a category, all
members are equal, even if they are numerous. On the sole basis of categorial
membership, evoking an element is evoking with the same ease a great number of other
ones. Access has the same cost for all members of the category (this touches on the
difficulty of "sub-categorization"). It is true that categorial theories do not concern
themselves with access, but a linguistics which recognizes the subject and the dynamics
of acts, which is heedful of the conditions of cognition and careful of plausibility, has to.
Here, proximality is approached by overcoming the limits of simple associationism; it is
a virtue of well-understood analogy. Analogy does a little more than simple
associationism.
In a concrete theory, which therefore recognizes proximality, the solicitations (more
precisely the suggestions of similarity) are stepwise and based on proximality as it is
inscribed, from one point to a few other points, then from each of the latter to a few
more, etc. The "point" in question here is not a single element, a single term, which
would be simple associationism and is erroneous. It is at least a pair of terms, so that the
preservation of positionality can be made to bear.
A concrete approach like the one adopted in the Analogical Speaker needs proximality.
Proximality is implemented by the paradigmatic links between records. The abductive
movements depend on it, and so does the possibility of computing with a plexus. So the
concreteness212 of the theory implies the proximality of the inscriptions of the plexus.
The effect of analogy is to establish copositionings between terms, that is, positions with
proximal applicability. This may be viewed as osculation213 in geometry: at their contact
point, two osculatory curves share a lot (a point in common, same derivative, same
curvature) but, further from the contact point, they gradually differ in these three
respects. Similarity would thus be osculatory: it would have a proximal validity and a
proximal possible effect. This has value as a metaphor only; I am merely trying to suggest
211
Minsky 1985, p. 82.
212
Once again, 'concrete' is understood generically for 'occurrential' or 'exemplarist'.
213
In geometry, two curves are osculatory if they are tangent and if, at the contact point, they have the
same curvature radius.
how positionality is a notion with proximal definition and effectiveness, like
categorization effects, like regularization effects.
An idea of proximality is also to be found in the 'self-organizing feature maps' (SOFM)
of Kohonen, which are a particular technique used in neuromimetic connectionism. Their
main feature is to let lexical items emerge in a 'map', which is a two-dimensional space.
In an SOFM, lexical items with close meanings are close on the map; the training of
the network yields a meaning-based proximality. In an SOFM, proximality is defined in
a two-dimensional space, each dimension of which is an interval of integers; this space is
an (n, m) rectangle. This structure seems to me to be too precise, and no feature of the
problem calls for it particularly. The topology of a rectangle defined in a plane has no specific
motivation, and in this the SOFM of the connectionists is artifactual. In the Analogical
Speaker, by contrast, proximality assumes no underlying two-dimensional frame; the
records which have to be made neighbours are simply linked together by paradigmatic
links, and transitive paths across these links constitute the required proximality. The
resulting topology is whatever it can be, and finally its nature is not important. It cannot
be mapped onto any particular geometrical or topological structure like a plane, and has
no reason to be. In the drawings of paradigms like those occurring in chapters 4 and 5,
records are indeed displayed in a plane, but it would be mistaken to read underlying axes
into them; the disposition is for convenience only, readability simply demanding few overlaps.
In order to make 'proximal' more completely understood, it makes sense to say what it is
opposed to. Let us start from a case. Commenting on a work214, Lepage215 writes this:
Paradigmatic relationships being relationships in which four words intervene, they are
in fact morphological analogies: reaction is to reactor as factor is to faction.
reactor  --g-->  factor
   |                |
   f                f
   v                v
reaction --g-->  faction
Contrasting sharply with AI approaches, morphological analogies apply in only one
domain, that of words [in AI, they make mappings from the domain of the atom to the
domain of the solar system and thus there are different domains]. As a consequence the
number of relations between analogical terms decreases from three (f, g and h) to two (f
and g). Moreover, because all four terms intervening in the analogy are from the same
domain, the domains and ranges of f and g are identical.
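To make the square concrete, here is a naive sketch of solving such a proportion on strings by prefix/suffix substitution; this is not the mechanism of the Analogical Speaker, only an illustration of the kind of object Lepage describes.

```python
def solve_proportion(a, b, c):
    """Naive sketch: solve 'a is to b as c is to x' for strings that differ
    by a suffix substitution, as in the reactor/reaction square.
    Returns None when the forms do not fit this simple pattern."""
    # shared prefix of a and b
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    suff_a, suff_b = a[i:], b[i:]
    # c must end with the same material as a does after the shared prefix
    if c.endswith(suff_a):
        return c[:len(c) - len(suff_a)] + suff_b
    return None

print(solve_proportion("reactor", "reaction", "factor"))              # 'faction'
print(solve_proportion("métallique", "métallurgique", "plastique"))   # 'plasturgique'
```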
The approach just quoted is very first-epoch AI, that is, symbolist and mathematical. This
framework of thought can be said to be 'totalist' in the sense that it assumes a
totality of possibilities, a sort of universe which would have to be postulated in order
for things to acquire meaning. Whatever the thing done or envisaged in particular, this
thing is expressed, is defined, is understood, can be computed, only if previously
214
About analogical conversion of analogical form into orthographical form, cf. Yvon 1994.
215
Lepage 1996
referenced, related, defined as a sub-set with respect to this total, all-embracing
framework. This is a 'domain and range', totalist approach.
Totalism is to be found prototypically in logicist approaches to semantics. For
Galmiche216:
the semantics of Montague is based on the 5-tuple (A, W, T, <, F), where A is a set of
entities, W a set of possible worlds, T a set of instants in time, < the precedence
relation, which is an order on the instants in time, and F a set of functions which map
the elements of A onto the logical constants.
In order to account for the meaning (for a speaker) of the smallest, contingent, personal
utterance, will a theory thus based require the previous knowledge of the entire world?
of eternity? of the entirety of the possible worlds? Here is another example of totalism:
about the utterance John saw everyone we are told217 that
an acceptable paraphrase of this utterance would be "For any individual whoever (if he
is human), it is the case that John saw this individual". That is, in logical notation:
(∀x: x is human) (John saw x).
Who can accept such a paraphrase? It is impossible to figure out a situation where it
applies. John saw everyone can be paraphrased by John already met all the family (of
his fiancée) or by John already had meetings with all the unions (John is a minister and
the social situation is unstable) or otherwise depending on the case218. Gayral also
perceives the same totalism and rejects it when she writes:
[…] in these formal approaches of semantics, the choice of the different indices is made
a priori: the coordinates are defined in advance, regardless or any linguistic data. This
supposes, and it is a very strong assumption, that a reference universe pre exists, prearranged, as for example in Montague, into possible worlds and into instants in time,
and then one utters things about this universe. This is a great worry and it seems on the
contrary that the possible worlds are triggered by enunciation and built based on
discourse219.
It is indeed 'a very strong assumption' and a very unfortunate one. Logicians themselves
today step away from totalism; so does Jacques Dubucs:
The logic for the coming of which I pray should be concerned with transitions between
actual thought and not with transitions between all possible toughts220.
The proximality advocated in the Analogical Speaker is thus opposed to totalism. Doing
this is not different from rejecting types and categories. It also means computing with
what is cognitively available and accessible; that is to say, making a situated linguistics,
216
Galmiche 1991, p. 44.
217
Boltanski" 2000, p. 80. Without necesarily endorsing them, B. rather seems to report the positions the
Government & Binding theory.
218
The above paraphrase is not acceptable also for a few more reasons but these are out of scope here,
and, for this, elided.
219
Gayral 1993.
220
Jacques Dubuc, communication at the meeting Philosophie Cognitive, Ministère de la Recherche, 1,
rue Descartes, Paris, 23 mars 2001.
215
one which is compatible with a situated cognition. This same approach also solves the
extension-intension paradox.
7.7. Extension, intension
Without rules and without abstractions, it becomes difficult, but also useless, to specify
a collection by its characters. Therefore the notion 'intension' falls and with it, that of
extension. In this model, there is no room for extension and intension. Another manner,
more technical to give a feel of this is as follows.
In the dynamics of ABS, agents (e.g. those of Figure 29), commissioners delivering at a
same channel (e.g. channel C, their delivery point), produce findings which are then
merged at the channel into a result. That is to say, homonymous findings are merged
into a single result at channel C.
coverage = extension
channet C
duty = intension
agent 1
agent 2
results
findings
Figure 29 Two agents delivering at a same channel
Then, among the results delivered at a channel, it ceases to be possible to say which
result comes from what agent. It is always possible to reconstruct this detail in order to
analyse the behaviour of the model, but the model itself does not encompass it.
Consequently, the coverage, that is, the set of results, each with a strength tag, cannot be
defined by an agent but is very well defined by a channel.
We stay then with the following paradox:
-
an agent defines a duty (which is an intension) when the coverage of this duty is
not defined or ill-defined by the agent, because the objects that would support
this definition are the findings which are minor in the model and moreover
redundant, and
-
a channel, having a list of results delivered to it, certainly defines a coverage de
facto and extensionally, whereas its duty (an intension) is not defined: one
cannot give an intensional definition of the results delivered at a channel.
216
This is counterintuitive, paradoxical, and yet operates well and renders the desired
effects221. Incidentally, these considerations shed an additional light on the fact that the
product of agents cannot constitute results directly and on the inanity of pretending to
define agent results.
Of this paradox, the following reading may be proposed: a remodelling of the intensionextension opposition is accomplished by distinguishing between duty and its coverage,
and by the ascription of these two figures of need to distinct organs. Agents are ascribed
a duty (which is a specification of need), that is, intension, and channels are ascribed
results (which are a coverage of need), that is, extension. The alternation channel-agentchannel-agent… in the heuristic structure amounts to building processes which
microscopically amalgamate intension and extension so that macroscopically there is no
longer the need to maintain this distinction.
Such distinction was found after a long and difficult analysis work, after criticizing
several unfruitful trials. It was found inadvertently in a way: at no moment during the
conception I had the explicit goal of overcoming the intension-extension opposition.
This should have been awaited though, from the moment categories and rules were
expelled. The question intension-extension made a difficulty to Hjelmslev222 in 1933:
Il faut se demander quelle est la position de la question de la définition intensionale
(sic) de la zone sémantique de chacune des catégories morphématiques (nous
désignons cette question par l'abréviation Int.). Faudrait-il trancher cette question avant
de pouvoir aborder la question Ext.? Théoriquement nous n'hésitons pas à répondre
négativement: on peut en effet étudier les faits extensionaux [sic] (les faits de
suppléance par exemple) sans avoir étudié d'abord le problème de la signification.
D'autre part nous ne croyons pas qu'on puisse étudier les significations sans une
connaissance préalable des formes et des fonctions. Une signification est toujours
nécessairement une signification de quelque chose, et l'étude des significations
présuppose la connaissance du porteur de ces significations. Théoriquement c'est donc
la question Int. qui présuppose la question Ext., et non inversement. Du point de vue
pratique nous estimons cependant qu'il est utile d'avoir en vue les deux questions à la
fois; la recherche demande dans une certaine mesure qu'on les considère ensemble, et
surtout l'exposé des résultats de la recherche gagne en évidence et perspicuité [sic] et
221
In an effort to solve this paradox, one may try to force the interpretation by viewing a channel as vested
with a "virtual duty", which would be the union of the duties of the commissioners delivering to it. This is
neither intension nor extension but a little of both. It is extension because it is a set of resources,
instantiated at each commissioner, depending on the plexus data best matching the agent's duty, each
contributing to the satisfaction of the client agent's duty. But it is also intension in the sense that this
collection has a common motivation which, to simplify, if the duty of the client agent, or, to be more
precise, the part of its own duty that his agent assigns to the channel. In fact, a client, even if it confers no
explicit duty to its channels, nevertheless assigns them one which is a sort of 'equivalence class' of the
duties of the commissioners which it recruits and appends to that channel. The 'equivalence class' in
question is hard to express and remains non-explicit, elided itself in its own way: yet, it is not nothing,
since the commissioner agents that are recruited in order to deliver at a channel are not anything. A client,
vested with an explicit duty, distributes it to its channels, allocating each a part of it, which remains, as
such, non-explicit, which has no other expression than the set of explicit duties of the commissioners, the
determination of which involves the plexus, and the union of which is the best that can be proposed as its
expression. This amounts to say finally that the paradox does not get well solved.
222
Hjelmslev 1933. p. 60.
217
sera plus facilement accessible si les faits structuraux sont projetés sur une matière
sémantique. Aussi ne chercherons nous pas d'éviter le problème Int. Mais il est
nettement en marge; il ne sera qu'effleuré, et les interprétations sémantiques qui seront
proposées ne seront ni discutés ni motivées. Hjelmslev, 1933/1985.
Had the question so much progressed when, still in 1989, Milner223 could wonder:
Should we adopt an extensional or a predicative reading of the notion of category? In
other words, must we say that silence and chattering belong to the same class of terms
or must we say that they share one or several predicates?
And he made no decisive conclusion.
The alternative in question, not decidable indeed in the terms in which it was then
worded, appears now just as a consequence of adopting categories and of accepting
totalism. Reconciliation is achieved by a proximalist and exemplarist dynamics as the
one proposed in this work. This proposition is also compatible with this evidence that
linguistic behaviour and cognitive behaviour as well, take place in ignoring this
dilemma which now appears a fallacy. We are exposed to it only if we accept logicism.
What has just been shown is the deconstruction, from the point of view of linguistics, of
the opposition intension-extension between linguistic terms and what would be their
lexical category (what is a noun in general vs. a concrete set of nouns). This is not
exactly the main theme, classical since Port-Royal, of this question, which is rather the
tension between the (possible) referents and a linguistic term of which they would be the
reference (what is bird in general vs. a concrete set of birds). This second theme cannot
be addressed in the current frame which is exemplarist only, and not yet occurrentialist.
The hope is that the (yet to be done) occurrentialisation of this proposition, which is a
prerequisite or a corollary of the treatment of meaning, will make it possible to address
this second theme in continuity with what was done for the former.
7.8. Binding, variables, variable binding
For what is called 'binding' in English, in French we have liaison or liage depending on
the case. Positioning this work versus binding will require separating its different
acceptations because this word has served many purposes. Very generally the point is to
understand how a generically defined place (an "expectancy of fulfilment") may be
occupied by an exemplarist or occurrential occupier. On the way we will recognize
something of the slot-filler schema already met in Chap. 1.
First of all will be discarded a binding which is described by the psychologists and is a
concern for cognitive science but will not be a concern for us: the binding of sensory
modalities together. It is presented as follows: given that colours (the red colour) and
shapes (a circle) are not processed by the same neuronal areas how do we succeed in
seing a red circle and not separately something red and a circle. And if in addition there
is a green square, why do we perceive the square as green and not the circle. This is a
binding but not of the sort that we want to discuss.
223
Milner 1989, p. 289
218
We shall address the problem of variable binding which is the most important one, then
a few more, including the binding of the Government & Binding theory.
7.8.1. Variable binding in mathematics and in computer science
In the idealized figure which it would have in mathematics, binding is the relation which
takes place between a variable and a value that it takes. In expression X+2, among all its
possible values, variable X takes the value 3, X is now bound to 3, the variable X is now
bound to its value. Mathematics are such that if follows that expression X+2 takes the
value 5. A dimension of complexity appears in case the variable occurs again: in
expression X2 + 5X + 7, if variable X takes the value 3 in X2, the convention requires
that it be the same in 5X. Variable binding is so commonplace in symbolic systems,
beginning with mathematics, that it goes without special discussion.
Of this, computer science provides a similar idealization, which is different but equally
rigid.
The computational architecture of the von Neuman serial computer […] provides
unlimited symbol passing, full generativity, and unlimited scalability, based on the
system of data paths, memory addresses, and processing cycles that could be formalized
in the logic of production systems224.
"Unlimited symbol passing", this is how the v. Neuman architecture binds variables; the
central processor, for example the arithmetical processor, is in perfect functional
situation versus the entirety of the memory. This touches the basic reason why these
machines "do efficiently what we perform poorly and do very poorly what we perform
efficiently". Their architecture cannot be a good model of brain operation. In general a
symbolist theory is not the best possible one to account for phenomena happening in the
brain, linguistic phenomena in particular.
The brain provides no obvious support for the symbol passing that provides the power
underlying the von Neuman architecture. Instead, computations in the brain appear to
rely ultimately on the formation of redundant connections between individual
neurons225.
The evidence is abundant: anatomical and macroscopic. This does not prevent a current
of thought to go on developing ignoring this conclusion: artificial intelligence. Artificial
indeed. Incidentally, these reasons are the same that deprive rules of any plausibility as
operating devices in linguistic operation, and, more generally, in cognitive operation.
Rules are thus disqualified in two manners: as empirically insufficient, and as
implementationally not plausible.
7.8.2. Connectionism faces variable binding with difficulty
Variable binding is a subject of worry among connectionists because connectionist
networks do not perform it easily; it is for them a source of difficulty:
Variable binding is a feature present in certain systems of symbolic representation
which it is difficult to obtain in connectionist networks. In the application of a rule - or
224
Mac Whinney 2000, p. 122.
225
Mac Whinney 1992, p. 288.
219
other symbolic expression - that contains variables each variable must be bound (or
linked to, or replaced with) a constant. If there are several occurrences of the same
variable, each occurrence must be bound to the same constant226.
Neuromimetic connectionism long failed facing this question. That was at the time its
models were mostly associators. A first progress dates back to 1985:
Touretzky and Hinton (1985) have recently developed a PDP implementation of a
production system that can do rudimentary variable binding, and at present it appears
that they may be able to extend it to perform recursive computations227.
The success was limited however, and, in 1991, it was still possible to write:
The connectionist bet consists of developing theories of processing that use other
devices than operations on symbol strings. Generally, connectionists agree tat their
devices must allow them to explain the data that suggest a combinatorial structure in
language. In addition, they identified in their field a closely related problem, namely the
variable binding problem. Symbolic representations use variables so that rules may
apply to various individuals in a class. … Connexionists are challenged with building
networks that perfom the work which, in symbolist theories, is ensured by
combinatorial structures, with symbol strings containing variables 228.
Then numerous works followed, and the question progresses, with difficulty seemingly.
The names are Holyoak, Thagard, Elman, Hummel, Biederman, Pollack, Shastri (the
SHRUTI model, based on synchrony mechanisms), Adjjanagadde, Smolensky, and
Touretzky.
As their predecessors did not differentiate enough long term memory from working
memory, LISA of Hummel and Holyoak (Hummel 1997) addresses variable biding in
working memory and succeeds in binding variables with a mixed network which is both
connectionist and able to handle structured data. This model is analogical and performs
structure mappings. Progress of neuromimeticians in variable binding is thus slow and
difficult, currently obtained by somewhat violating pure connectionist 'orthodoxy'.
A recent synthesis book by Marcus (2001) is even severe for the connectionist
community – but he says he still belongs to it and conserves his sympathy for the
approach. Generally, he sets doubts about connectionist models having succeeded in
really representing variables, and therefore operating bindings. He assigns the
connectionists rules, variables and variable binding as one of the base functions they
must acquire in order to progress229.
Jackendoff230 sees binding as a massive phenomenon. In the sentence The little star is
beside the big star, and about the preposition phrase beside the big star, he thinks that
the following relations have to be encoded: a) le syntagm beside the big star is of type
NP, b) it is a constituent of VP, c) it follows V, d) it has Prep and NP as parts, e) in the
226
Bechtel 1991/1993, p. 329.
227
McClelland 1986sp, p. 322.
228
Bechtel 1991/1993, p. 231.
229
The other functions which are required but yet to be accomplished by connectionist models, according
to Marcus, are: the ability to represent recursive structures and the ability to represent individuals.
230
Jackendoff 2002, p. 59-60.
220
conceptual structure, it corresponds to the Situation-constituent, f) it corresponds to the
phonological constituent beside the big star. Binding is massive in linguistic structure
says Jackendoff, and because it is so massive, it invalidates for example the synchrony
of activation in the SHRUTI model as a possible explanation: the bandwidth is not wide
enough.
7.8.3. Binding as instantiation: linking an abstraction with an exemplar
Binding, as envisaged up to this point, is binding as instantiation (other bindings whill
be examined later). It is the binding between an abstraction (the variable) and a concrete
exemplar (the value); it concerns the application of a rule. This constitutes the central
problem.
The question amounts to understanding how a rule applies, that is, for rule:
NP  Det + N,
for example, to say how Det is bound to the, N to day and NP to the day. This is a
difficulty for connexionist networks: it is hard to make them apply rules. Marcus (2001)
analyses that those who pretend dispensing with rules either fail in achieving regularized
responses or implement rules without being aware of doing so, which is a mistake.
The model I present in this thesis solves this question by overcoming it or by eschewing
it: it simply makes that it ceases to be posed. Take the example of Figure 26 What is to très
gentil as extrêmement is to assez (p. 155)231. The computation, as suggested by this figure
makes that there is no variable binding simply because there are no variables. The idea
of variable is a non-criticized one which is inherited from cognitivism, from informatics
and, before them, from logic and from mathematics.
Otherwise stated, the slot-filler schema is already too high-level a conceptualization to
form the base of a plausible model. The operating dynamics work at a level below it,
and do not have this problem. Much in the same way as what we saw about
categorization, there are variable-value effects (or slot-filler effects, to adopt the terms of
Chap. 1), and consequently binding effects, but effects only. The slot-filler schema is not
reified in the theoretical apparatus and does not have, in itself, a direct part in the
explanation of linguistic productivity. That the question of variable binding ceases to be
a question is a direct consequence – and an important benefit – of the dismissal of the
slot-filler schema. It is a consequence of the radical exemplariness of the model.
In order to succeed, the dynamics satisfy themselves with simpler services:
-
access to term occurrences, to pairs of terms, to exemplarist constructions,
-
proximality and abductive movements based on it,
-
detection, within the observation of positionality, of settlement configurations
that is, of matching.
The dynamics also suppose a body of already available analogies from which a very
large number of other ones (virtually an infinity) can be abducted. This presupposition is
different from that which is made by the connectionist models cited above.
231
One may also use the examples pp.102 and following, or any example in Chap. 4.
221
If we had to force a mapping between the slot-filler schema and this model, we might
take that the slot maps onto the position and the filler onto the term, with this important
remark that the thing is never "functionalized" since there is no abstraction here: the
binding, which then would be the occupation relation would not have itself any analog.
7.8.4. Philosophical detour
A philosopher, Bourdeau, also comes across variable binding as he writes about
categories. A somewhat lengthy quotation232 will provide a transition towards other
figures of binding.
If there is an argument on the interpretation of variables, substitutional for some
authors, objectal for others, it is because it amounts to know whether a variable takes its
values within the nouns of the language or within the objects of the world. But the links
which unite a variable and substitution are not limited to this normalized usage,
established for the sake of computation. The vacuity of the form is an indifference
towards matter, which the variable has the function of making visible. Therefore the
latter is the mark of an undetermination and as the mark of an expectancy of fulfilment.
As long as variation takes place within certain limits, that the constraints on categorial
good formation are observed, the filling may be fulfilled by any element: this one, that
one, that other one, any element may do equally well, because all are interchangeable,
substitutable one to another. The empty form which a category is thus came to be
equated to the (non empty!) class of expressions likely to occupy a place designated as
empty. The success of the latter approach comes from its operative character, since,
with it, we would have a discovery procedure for categories. However, we must
question the reasons for restricting the use of these notions to the realm of language, as
if substitution could not also apply to things, as if the objects could not themselves be
well-formed or ill-formed as expressions are.
Wondering 'whether a variable takes its values within the nouns of the language or
within the objects of the world' does not place us in a very good position to clarify things
because we should be more precise about the variable in question, but at least a problem
is posed: that of the possible binding between a linguistic form and its referent and what
the latter might be.
7.8.5. Binding as referential resolution
There is a binding question each time the question of the reference of a name phrase is
posed. The thing which demands to be bound is now an NP; the case is no longer quite
the same as the variable-value binding but the NP conserves certain characters of the
variable and it is not absurd, by analogy, to see a question of binding here again.
About the nature of what the NP has to be bound to, there is however a real question: is
it its reference, is it an individual of the world, is it a 'representation' of this individual?
The case is not very clear and touches the root of a central and difficult question; it is
the kernel of semantics and saying 'the signified' will not suffice. By lack of a firmer
vision, as a provisory position, the 'private term' (cf. p. 262) is assumed to hold that role
without this being positively defended as a thesis yet. Given the current definitional
232
Bourdeau 2000, p. 146.
222
fuzziness of 'private term' there is no high risk but nothing quite decisive is uttered
either.
The case of anaphor and generally that of coreference is similar to the case X2 + 5X + 7
in this that the anaphoric syntagm and its antecedent must 'take the same value', with the
difference that in expression X2 + 5X + 7, the two syntagms that must take the same
value have the same form: "X" whereas in linguistic form (Is Jo here? No, he just left.),
the anaphor (he) and its antecedent (Jo) generally have different forms. In summary, a
mathematical variable is a systematized device for reference and coreference. The
speaker's approach to the question is contingent and flexible while the mathematical
approach is idealized and rigid but the targeted function is the same in both cases: how
the form may raise again recurrences of identity.
7.8.6. Referential binding: syntax prescribes two NPs to have the same referent
Since the various NPs referring a same referent have different forms in natural
languages, their form alone does not suffice to conclude to coreference. Languages then
have devices to prescribe in which conditions coreference has to be recognized. None of
these devices are categorical but some of them are very precise: they prescribe when one
such NP (then anaphoric) must have the same reference as another one (then its
antecedent). This is the referential binding of the Government and Binding Theory
(G&B). Referential binding is then the vision, as seen from syntax, of the prescription of
coreference. This prescription is in part or in whole independent from the fact that the
reference is actually resolved.
In French, we have liaison and liage, English speakers only have binding. Thus
Jackendoff233, about to start a development on binding finds it necessary to settle that it
will bear on "the linguist's Binding Theory".
As we just saw, the notions 'referential binding' and 'variable binding' are different.
However, they are not entirely foreign to one another because referential binding has
consequences on the ensuing variable binding.
7.8.7. Productivity of thought
In the quotation above, and although somewhat elliptically, Bourdeau suggests
something more: that these notions (variable, value, binding, computation) are not
'restricted to the realm of language'.
"No restriction to the realm of language. Substitution may bear on things. Objects can
themselves be well-formed or ill-formed as expressions are.
The point is that the computation must be extended to objects – a prerequisite will be to
sort out what these objects are. This is not very far from the language of thought, or,
better said, from the productivity of thought – let alone language – of which it remains
to be shown why it should have to be a language by anything else than a metaphor
(Fodor, Lacan).
233
Jackendoff 2002, p. 59.
223
What should the 'fillers' be in this case, those which come to satisfy an 'expectancy of
fulfilment'? They can no longer be terms made up of linguistic form, I propose the
private terms.
If there has to be rules, here again there would have to be a question of variable binding.
But the intuition is rather, here as in the linguistic form, that exemplarist and
occurrential inscriptions, a notion of proximality, and abductive dynamics analogous to
those already exposed for the linguistic form, would account for the productivity of
thought without rules and without categories. Things being so, the question of reference
binding would be solved in the same manner: it would be eschewed, before being even
posed.
7.8.8. Conclusion: the model is functional, but with a plausibility residue
As a model, the Analogical Speaker is functionally adequate on variable binding,
referential binding not being covered within this work. Variable binding is solved by
being eschewed: since there are no abstractions, there is just nothing to be bound.
Linguistic productivity is not the result of abstractions and bindings; is the result of
abductive computations working on exemplars, and observing copositionings.
This model is functionally appropriate. By creating channels, it may make multiple
reference to terms (linguistic terms and private terms), and this solves the 'problem of 2'
of Jackendoff (ibid. p. 61), a problem akin to that of binding. In: "the little star is beside
the big star", the name "star" has two occurrences and current sentence processing
models, by activation propagation in connectionist networks or in semantic networks are
unable to treat it. This is exactly the question of individuals posed by Marcus as we shall
see in the conclusions. The Analogical Speaker supports this well by using channels (cf.
Chap. 4). Any categorial theory also does, and so do systems of automatic parsing and
analysis, whatever their underlying theory, and even if they have none in particular. The
novelty here is that the problem of 2 is addressed in a framework which is strictly noncategorial.
The Analogical Speaker also solves the problem posed by Jackendoff (ibid. p. 64) as
that of the encoded and instantiated typed variables but with an important difference:
Jackendoff asks for the variables to be typed, here the terms are not typed and there are
no abstractions and therefore no variables, this has been explained at length above.
This model is functionally adequate but it contains an implementation-plausibility
mortgage: about its support for the dynamics of the acts, that is, the heuristic structure of
ABS (agents and channels), and one cannot convince oneself that neurons may
implement it as such. The raw mechanism of channels in ABS cannot be proposed as a
direct candidate to physiological interpretation; in itself it is not implementable.
The question will be more extensively addressed in the conclusions p. 272.
224
7.9. Probabilistic model or dynamic model
Over he last ten years, several articles234 converge to complementing linguistic theories
with probabilities. This line is advocated by researchers in contact with corpora and it is
not clear that they aim only to improve their practice or also to promote a linguistic
theory. In the conclusions of Abney235, however, the position is clear: "The focus in
computational linguistics has admittedly been on technology. But the same techniques
promise progress at long last on questions about the nature of language that have been
mysterious for so long".
The general argument is that the limits of rule and category-based theories lead
immediately and necessarily to a probabilistic or "probabilized" vision of language. The
move is neither immediate nor necessary: all that has been exposed so far succeeds in
doing away with categories and rules without calling on probabilities. At any rate, we
need to see what the position of probabilities in the model could be. Therefore there is a
case to clarify how the Analogical Speaker and the probability track are disposed with
respect to each other.
A first argument set forward to introduce probabilities relates to learnability. We know,
and Manning reminds us236, that, according to Gold's theorem, a language is not
learnable without negative data. For Chomsky, this is an argument, among other ones, to
postulate an innate universal grammar. Abney237 and Manning238 also remind us
however that if context-free grammars are not learnable without negative data, it was
shown by Horning (1969) that stochastic context-free grammars are. Of this, they make
a case for stochastic grammars. This argument remains within the assumption that a
grammar, be it stochastic, is the operating cause which accounts for linguistic acts, and
that it is a grammar that has got to be learnt. This assumption is not made in my
proposition which, quite on the contrary, sees a grammar as a result of operating
dynamics that a) are more fundamental and simpler than a grammar, and b) operate in a
given linguistic environment. A speaker does not learn a grammar; he just learns how to
speak.
For the rest, in short, the advocates of probabilities – they are at various degrees – find
limits in classical, algebraic models and question them seriously. They view the
observable regularities as probabilistic more than rule and category-based. In their view,
a mixed approach should permit progress; it should blend rules and probabilities. This
can be done in several manners, the most obvious one being stochastic rules.
234
Harris 1991, McMahon 1994, Abney 1996, Pereira 2000, Manning 2002, Habert to appear in
TAL(Traitement Automatique des Langues, Paris, ATALA) in 2003, etc.
235
Abney 1996, p. 21.
236
Manning 2002, p. 16.
237
Abney 1996, p. 20.
238
Manning 2000, p. 17.
225
7.9.1. Reasons of legitimacy and reasons of variability
In addition to the multitude of little facts that cannot easily be solved by rules (the
leakages of categorial theories), the advocates of probabilities foster them for two orders
of reasons: reasons of legitimacy and reasons of variability.
Under 'legitimacy' I collect questions of gradual grammaticality, of gradual acceptability
(is it different from grammaticality?), the question of the respective share of langue and
parole, and the question of competence vs. performance.
For Abney, postulating a performance separate from competence does not help in
coping with productions that occur in corpora and even it handicaps their apprehension.
Then:
The issue of grammaticality and ambiguity judgments about sentences as opposed to
structures… are no more or less computational than judgments about structures, but it is
difficult to give a good account of them with grammars of the usual sort; they seem to
call for stochastic, or at least weighted grammars239.
Under usual assumptions, the fact that the grammar predicts grammaticality and
ambiguity where none is perceived is not a linguistic problem. The usual opinion is that
perception is a matter of performance, and that grammaticality alone does not predict
performance; we must also include non-linguistic factors like plausibility and parsing
preferences and maybe even probabilities … As a result, there is actually no intent that
the grammar predict – that is, generate – individual structured sentence judgments. For
a given structured sentence, the grammar only predicts whether there is some sentence
with the same structure that is judged to be good240.
Preparing an argument for acknowledging probabilities, Abney notes that there is a
difference between a judgment of acceptability/grammaticality on a form alone and a
structure judgment. An extreme case being the English form:
(1a)
the a are of I
At first sight it is judged bad. However, an interpretation, a very rare one, is possible:
for geometers who are used to name plots of land with capital letters (I, J, K, etc.) and
ares (the surface measure of 100 square meters) within them with small letters (a, b, c,
etc.), this English utterance is grammatical and interpretable, it is a noun phrase which
can be paraphrased "the are named 'a' in the plot named 'I'". Form (1a), associated with
the structure which responds for this interpretation, is now judged good.
This is correct and long known: the first Chomsky, and with insistence later on, states
that what constitutes the linguistic fact is not a form, but a form with an analysis, by a
phrase marker for example. What is curious in Abney's argument is that the
interpretation which ascribes a meaning to form (1a) is more than extremely rare in any
corpus. One is curious to hear how any stochastic approach might help accounting for it.
I shall come back to this. This suspicion connects, anticipating it, with a remark from
Manning, below: Optimality Theory, even after a stochastic complement has nothing to
say of interpretations "made possible in various contexts".
239
Abney 1996, p. 5.
240
Ibid., p. 9.
226
For the promoters of probabilities, the second order of reasons to introduce them
collects reasons of variability: linguistic variation, linguistic change, and learning. For
them, these are the stronger reasons. Abney (already quoted in Chap. 1) accepts syntax
to be autonomous but notices that autonomy is not isolation, and linguistics also
encompasses production, comprehension, learning, variation, and linguistic change.
Transient situations during acquisition would call for the coexistence of concurrent
rules, with stochastic weighting:
Under standard assumptions about the grammar, we would expect the course of
language development to be characterized by abrupt changes, each time the child learns
or alters a rule or a parameter of the grammar. If, as seems to be the case, changes in
child grammar are actually reflected in changes in relative frequencies of structures that
extend over months or more, it is hard to avoid the conclusion that the child has a
probabilistic or weighted grammar in some form. … At any given point in this picture a
child's grammar is a stochastic (i.e. probabilistic) grammar241.
If things are so, during the period in which two rules, or two variants of the same rule,
coexist and compete for application, what determines which one will be applied? What
determines the evolution of the relative weights of both, and later, the moment at which
one of them will fade out? Adding probabilities in this manner may well have a
descriptive efficiency but it makes no progress in the explanation. I will show (p. 250)
how the model of the Analogical Speaker provides on the contrary a precise explanation
of the way a new construction propagates gradually in a plexus, that is, in a speaker's
linguistic knowledge and consequently in his usage.
Likewise, the language in a community of speakers would have to be viewed as a
stochastic grammar to account for the variation among them242. It would have to be a
unique grammar otherwise one could not account for intercomprehension.
The same idea of competing rules is called again for explaining linguistic change.
Manning243 for example makes a corpus investigation on the phenomenon constituted
by the emergence of as least as + Adj, competing with at least as + Adj, during the
1990s, in the United States of North America, in South Africa, and in Australia. Here
again, a stochastic rule would account for this alternation.
To summarize:
It is plausible to think of language acquisition, language change, and language variation
in terms of populations of grammars of different speakers or sets of hypotheses a
language learner entertains. When we examine populations of grammars varying within
bounds, it is natural to expect statistical models to provide useful tools 244.
So probabilities would be required, but no one ignores that they were dismissed by
Chomsky in the 1950s: alone they do not suffice.
241
Ibid., p. 2.
242
Ibid., p. 3.
243
Manning 2002, p. 4.
244
Abney 1996, p. 4.
227
7.9.2. Original sins of probabilities in language
For Chomsky: neither colourless green ideas sleep furiously nor furiously sleep ideas
green colourless was ever observed in linguistic experience but the former is
grammatical and the latter is not.
Abney responds245 that for this argument to hold, from the absence of occurrence of an
utterance, one must have to be able to deduct that its probability is null. But, he adds,
there is a whole literature about the manner to estimate the probability of an event
having no occurrence in a sample, and in particular to differentiate true zeroes from ones
which only reflect a lack which just happens by chance.
Yes but, specifically, in order to found this distinction, a theory is needed that could rule
which of these non-occurrences are 'true' and which ones do not happen just as a matter
of chance. In the case of linguistic phenomena this can only be a theory which rules the
'possible in a language', that is to say a grammar, and this is exactly what we are after.
This is entire circularity. Chomsky's argument was weak because it depended on a
grammaticality which only holds in an idealization very remote from the object, but the
response of Abney is still weaker.
Two more of Chomsky's arguments were related with the length of utterances:
arbitrarily long grammatical dependences can be built and therefore, a Markov model of
order n fails however large n is made. These arguments are very foreign to the recent
come-back of probabilities in language and so I just ignore them.
A manner to give a feeling of the "original sin" of models with probabilities only, in this
case, based on transition probabilities, is to show a sample of the productions of current
n-gram models. Habert reports the following:
The 'localist' models, which n-grams are, faithfully account for constraints in narrow
windows, but they resist the enlargement of the span (the number of occurrences
"melts") as can be shown with the pseudo-sentences generated by a tri-gram model
trained on a corpus of radio and television news of 13 millions words (Rosenfeld 2000,
p. 1313): My question to you those pictures may still not in Romania and I looked up
clean; you were going to take their cue from Anchorage lifted off everything will work
site Verdi246.
Transition alone is surely not a ratio. Syntax in a broad sense cannot be based on sheer
sequencing. An improvement of markovism would not suffice.
At this point of the argument, the promoters of probabilities have removed a few of the
classical mortgages bearing on them, without this allowing yet considering them as
sufficient. With them, a structural viewpoint should be conserved somehow:
dependency, the generativist phrase marker, something which reflects sentence
structure: an alliance would be needed.
Two possible approaches of alliance between probabilities and structure will be
examined, one following Abney (taken as a prototype because there are other
representatives), and a second one according to Manning.
245
Ibid. p. 18.
246
Habert 2003a, p. 18.
228
7.9.3. Alliance number 1: stochastic grammars (Abney)
The first one approaches variation, learning, and linguistic change by making the
grammars stochastic (Abney also says 'weighted').
Statistical methods – by which I mean weighted grammars and distribution induction
methods – are clearly relevant to language acquisition, language change, and language
comprehension. Understanding language in this broad sense is the ultimate goal of
linguistics247.
For Manning, the proposal is to represent subcategorization information as the
probability of occurrence of the various dependents of a verb. The English verb retire
requires a subject with a probability 1, accepts an object with a probability 0.52, accepts
a preposition phrase with a probability 0.05 (from) or 0.06 (as)248
Such models combine formal linguistic theories and quantitative data about language
use in a scientifically precise way249.
Stochastic grammars of this sort do not constitute a rejection of the underlying
algebraic grammars but a complementation250.
Both agree to see probabilities as combining with an algebraic grammar. Jurafsky251
makes the same conclusion: probabilities are a complement, not a replacement.
7.9.4. Example of "disambiguation": John walks and its critique
For Abney (p. 13) determining which analysis is the good one – that is, the one the
speakers will understand – is not a computational problem but determining the
algorithm which computes this analysis is a computational problem. John walks,
depending on the case, may be an NP or a sentence, and the probabilities are different in
each case. This may be accounted for by a grammar like the following:
S
S
NP
NP
N
N
V
→
→
→
→
→
→
→
NP
V
NP
N
N
N
John
walks
walks
0.7
0.3
0.8
0.2
0.6
0.4
1.0
Applying this grammar, Abney evaluates that John walks is an NP with a probability
0,336 and a sentence with a probability 0.0144.
Let alone the fact that we are not told what John walks is in the rest of the cases (62% of
the cases is not a marginal remainder!), it being a sentence or an NP is not determined
by adding weights to generation rules but by the context.
247
Abney 1996.
248
Manning 2002, p. 11.
249
Ibid., p. 12. Are the models scientifically precise because there are numbers in them?
250
Abney 1996, p. 2.
251
Jurafsky (Dan) Probabilistic Modelling in Psycholinguistics: A Survey and Apologia, presentation at
the AMLAP 2000 Congress, Saarbrücken, June 2000
229
In The weather is fair. John walks. He is happy. the probability for John walks to be a
sentence is 1.
In I see that John walks. the probability for John walks to be an NP is 1.
One knows what is what by the context and the punctuation in writing and, in speaking,
by the situation and the prosody.
It is neither reasonable nor necessary in general to approach by a method based on
probabilities that which can be known. In this particular case, it is neither reasonable nor
necessary to approach by a method based on probabilities that which can be abducted at
low cost and with good confidence from what immediately precedes John walks. In the
first case: that, and in the second case, the full stop terminating the preceding sentence.
It has been shown above how the Analogical Speaker discriminates such ambiguities
without even requiring categorial labels, cf. example été (p. 166).
This is a new encounter of the classical argument on polysemy: it is decontextualisation
which creates ambiguity (here: categorial ambiguity). It suffices to reinstate things in
their context and there is no need for a stochastic apparatus.
If one seeks to make up for missing data, for example prosodic data – which may well
be the case in automatic language processing – and which impede a reasonably
economic and sure abduction as the one I propose, it is possible to adopt such stochastic
reasoning. They may happen to be sufficient and more affordable for an engineering
purpose, but they have no theoretical relevance in linguistics. Then the prediction would
be that they allow us to make up for some missing data, but this will let leakage happen,
as usual.
7.9.5. Alliance number 2: Stochastic Optimality Theory
For some authors, complementing rules with probabilities is insufficient and oldfashioned because the same categorical phenomena which are attributed to hard
grammatical constraints in some languages continue to show up as soft constraints in
other languages252.
This section summarizes the proposition, in Manning 2002, which leads to making
Optimality Theory stochastic.
After Bresnan, Dingare, Givón, etc. a model must comprise variable strength
constraints, from soft ones to categorical ones, otherwise, some facts should belong to
competence in some languages, and to performance in some other languages. These
ideas, already established in typology and functionalism, have not been expressed in
formal syntactic models. Now giving out the explanation to performance is a
renouncement because it ceases to make prediction possible.
252
Manning 2002, p. 20.
230
Constraint C1
Functional link
It is preferable
for the subject to
be the agent
Constraint C2
Discourse
It is preferable
for the subject to
be previously
mentioned
Constraint C3
Person
It is preferable for
the subject to be
1P or 2P
A A policeman scolded me
1
0
0
P
0
1
1
I was scolded by a
policeman
Figure 30 Three constraints applying to two utterances
In an example situation, a policeman scolded the utterer. In English, this may give birth
to an active utterance (utterance A) in which the grammatical subject is the policeman,
or a passive one (utterance P) in which the grammatical subject is the utterer. Three
constraints apply:
For each of these constraints – which are cross-linguistic – each of the two forms A
(active) and P (passive) satisfies it (1) or violates it (0). None satisfies them all.
For Generative Grammar, none of the constraints being categorical, none belong to the
grammar: both forms A and P are grammatical, and Generative Grammar says nothing
about how a speaker chooses A or P. However, in several languages, one or several of
the constraints are categorical and therefore there is no option between A and P.
In a categorical grammar, in case of constraint conflict the form is non-grammatical. In
case of conflict between constraint C1 and constraint C3 it is not possible to express
scold (policeman, me). The grammar has a gap, with no corresponding gap in the
languages, but for rare exceptions. Categorical grammars then can only respond by
adding manual restrictions (negative conditions) on the constraints or other devices like
the elsewhere principle253.
The standard Optimality Theory (OT, also named ordinal OT below), which is not
probabilistic, brings a progress. For it, the constraints – their set is postulated universal –
are ordered, and the weaker ones may be violated to satisfy the stronger ones. OT
accounts for many facts in many languages and provides the elsewhere principle of
Kiparsky without added cost.
One of the problems of OT is that it determines a single output for a given input and so
it does not account for inter individual variation nor for the variations of a single
individual. Thence one has tried to make it capable of variable output. But, from the
moment discursive role (constraint C1 above), and information structure (C2) are used
to predict diathesis, the resulting variation suggests to call on probabilities, and a
stochastic extension of OT has been proposed by Boersma.
Smolensky proposes the ranking of OT in reaction to frameworks which maximize
harmony building on quantified soft constraints. Smolensky: Order, not quantity (or
counting), is the key in Harmony-based theories. In Optimality Theory, constraints are
253
Kiparsky 1973.
231
ranked, not weighted; harmonic evaluation involves the abstract algebra of order
relations rather than numerical adjudication over numerical quantities.
Such ranking often suffices the same way as categorical constraints did for many
applications, but something else is needed for variability and ganging up. Ganging up is
the case of several weak constraints conspiring to overcome together a stronger one. For
this, numbers are needed, ranking alone does not suffice.
Coming back to Figure 30, none of the three constraints C1, C2, C3 is categorical in
English but each plays its role. Quantitative data show that a language expresses soft
generalizations where other languages make categorical generalizations. A probabilistic
model can model the strength of these preferences, the interactions between them, and
their interactions with other principles of the grammar. By providing variable output for
the same entries, it may predict the statistical patterning of the data. The model then
makes it possible to relate these soft constraints to the categorical restrictions which
exist in other languages; thus it shows how both are effects of common underlying
principles. Typological data is thus related with quantitative data.
In Stochastic OT (Boersma, Hayes), constraints are not just ordered, they are also
placed on a scale and distances between them matter for the predictions of the theory.
Secondly, the theory comprises a stochastic evaluation which, for a given entry,
provides a variation, that is a probabilistic distribution of the outputs of the grammar.
Any ranking value of a constraint, after its evaluation, is modified by a random
correction following a normal distribution law. Thus the grammar constrains the output
without determining it. Does a speaker really roll dice before speaking? Whether there is
randomness or not in human behaviour, the randomness introduced here reflects the
incompleteness of the model: we do not wish to put into a syntactic model all the factors
which influence syntax. As we cannot know them all, we simply predict that the average
of their effects on the outputs will occur with certain frequencies.
An advantage of Stochastic OT over (ordinal) OT is that it is a robust learning
algorithm. Another one is its ability to learn frequency distribution. This provides a
unified theory of categorical phenomena and variable phenomena. Linguistic change
would then be explained by the strength of a constraint moving along the ranking scale
and this would predict progressive change of usage. The strength of a constraint growing
slowly and linearly with time, nearing that of another constraint, then meeting it, and
then crossing it would explain the shape of the usage change curve, which is a sigmoid
(logistic function). For grammatical change, this model is more plausible than the
coexistence of generativist rules.
Its inability to allow combinations between all the constraint values may be a limit of
Stochastic OT: a few constraints among the stronger ones determine the output and the
remaining ones are simply ignored. In particular, lower-rank constraint violations cannot
"gang up" to win over a higher-rank constraint, and this is contrary to many
observations. In generation, Stochastic OT is adequate for choosing on linguistic
grounds between a limited set of candidates but seems less plausible as a
parsing/interpretation model where most of the readings of an ambiguous sentence can
be made plausible by varying contexts, that is, when the decisive evidence may come
from many places. This explains that OT models are mainly employed for generation
232
whereas work in natural language processing has tended to use more general feature
interaction models.
This terminates the summary of a section of Manning 2002 in which, by convention, the
utterer was Manning. The utterer is now again the author of this dissertation.
7.9.6. Critical commentary on Stochastic OT
In addition to the defects and limits stated above by Manning, Stochastic OT reproduces
certain limits of (ordinal) OT associated with categoricity, although all this approach is
presented as an effort to escape categoricity.
The set of constraints should be unarguable, consensual, motivated, closed and stable,
even more so since it is postulated universal. Here occurs the suspicion of the
impossibility of a closure and of the impossibility of stability much in the same way as
for lexical categories and for thematic roles and, for the time being at least, each paper
on the subject or almost, brings new constraints about. However, the optimalists may
respond, and this can be acknowledged, that the theory is young, and, when mature, it
would stabilize the set of constraints. We must then wait kindly and see.
The three constraints C1, C2 and C3 seen above presuppose the category of subject. Do
the underlying linguistic mechanisms, the detail of which Stochastic OT renounces, and
which it expresses by these constraints, have a manifestation in languages without
subjects? If so, how are the corresponding constraints to be expressed in these
languages? Generally, the fact that constraints, from their very expression, depend so
much on lexical, syntactic and functional categories, makes them the heirs of the limits
of these notions. The optimalist current is a spin-off of generativism which criticizes
categories little and late.
Stochastic OT also shares with stochastic grammars (alliance number 1) the "patching"
character of probabilities, as they are introduced in them.
Finally, the evolution of constraint strengths, which is supposed to account for linguistic
evolution and learning progress, can be related neither with the occurrential experience
of speakers nor with any other notion. This constitutes a break in causal chains which
demands to see this construction as a model at best but forbids it to be a theory since a
link as important as this one is missing.
7.9.7. In an occurrential act, reasons are occurrential
A stochastic grammar, even an optimalist one, explains grammatical probabilities, not
particular acts: it is not equipped for determining them.
In a production act, the enunciative programme of which is to express "to absorb food",
"to enjoy food with friends" without this programme being specific about what is
absorbed (it will for example end up producing "we ate" or "we already ate"), the
uttering process, which is envisaging the 'lexical entry' eat to fulfil the enunciative
programme, finds in linguistic knowledge that it lends itself to transitive constructions
and to intransitive constructions. To select either, if we follow the promoters of
probabilities, the process should be concerned with the recognition that this verb is, for
example, transitive in 60% of the cases and intransitive in 40%.
233
First of all, if the enunciative programme encompasses no object, the intransitive
construction is very much needed and it suffices that it be possible for this way to be
taken. The probability distribution does not have the opportunity to get involved. The
occurrential reason prevails over any statistical reason.
In a related case, suppose the required construction is not attested at all. For example,
the speaker plans to say "X, he takes" in a context in which X takes anything one cares
to give him, systematically, never giving anything back, etc., and no attestation of
intransitive 'take' is available to this speaker in a context that could be related to the
present one. But assume that there is one for "give": "she gives" (easily, systematically,
generously). Abductive licensing is possible from there: "he takes" may be licensed by
"she gives". The speaker then evaluates, by simulation, the load which he thinks his
hearer can bear, whereupon he decides whether or not to utter it. The 'probability' of
building "take" intransitively, which is supposedly null, did not impede this novel
construction (novel for this verb, but not novel in general, not novel for transfer verbs).
From then on, its reuse will be facilitated; a usage will have evolved a little.
If probabilities had to be considered, what is the set of possible cases (the denominator)
to which the number of favourable cases (the numerator) would have to be related? The
number of occurrences of the verb take? The number of occurrences of transfer verbs?
If we accept that linguistic operation is flexible and that abductive chains are shorter or
longer depending on the case, we lack any criterion; we do not know how to
characterize the subset which would have to be counted to constitute the denominator.
This is a problem of intensional characterization: what is lacking is a characteristic
property.
Moreover, there is a problem of extensional characterization: we do not know from what
total set this subset should be extracted: from the British National Corpus? From the set
of things heard and uttered by this speaker over his life? Over the last three years?
The same goes for the numerator, the number of favourable cases: should it be the
number of transitive constructions of all transfer verbs, or of a narrower or broader
class? We do not know.
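To make this tangible, here is a purely illustrative sketch (in Python; the counts are invented and stand for no corpus): the same hypothetical favourable cases yield very different 'probabilities' depending on which reference set is chosen as denominator, which is precisely the indeterminacy at issue.

# Illustrative sketch only: the counts below are invented and stand for no corpus.
# It shows how the 'probability' of an intransitive use of "take" shifts with the
# choice of reference set (the denominator), which is precisely the point at issue.

counts = {
    "take_intransitive": 3,              # hypothetical favourable cases
    "take_all": 1200,                    # candidate denominator 1
    "transfer_verbs_intransitive": 480,  # favourable cases for a broader class
    "transfer_verbs_all": 9000,          # candidate denominator 2
}

candidates = {
    "all occurrences of 'take'": counts["take_intransitive"] / counts["take_all"],
    "all occurrences of transfer verbs": counts["transfer_verbs_intransitive"] / counts["transfer_verbs_all"],
}

for reference_set, p in candidates.items():
    print(f"denominator = {reference_set:34s} -> 'probability' = {p:.4f}")
# 0.0025 versus 0.0533: nothing in the linguistic act itself says which
# reference set is the right one.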
Therefore, in production acts, i) the definition of probabilities has no firm base, and ii)
the operating dynamics does not require them. What matters are the relative costs of
various enunciation possibilities versus their adequacy to fulfil an enunciation
programme which is occurrential. These relative costs are defined with respect to the
linguistic knowledge, that is, with respect to the plexus.
The paradigm of the possible constructions of a term is a question for grammarians or
for computational linguists. It is not a useful datum for the enunciative mechanics. For
the latter, if "any utterance is a compromise", what matters is to settle on a reasonably
good one among those which are computable; what matter are the solutions at hand in
an occurrential situation, and their relative costs.
Behind the fallacious vision of the 'stochastized' paradigm of the possible constructions
of a term, another figure of totalism must be identified, one already discussed on p. 212:
exhausting a totality of possibilities, here by 'probabilising' them, is supposed to account
for occurrential operations among them. Again, this is the idea of guiding an
occurrential choice within a total set, a new variant of the 'domain and range' approach.
Besides its implausibility, we have seen – and will soon see again – that the construction
of this set is empty, because we simply do not know what it would have to be.
7.9.8. Probabilities do not explain the settlement points
In occurrential acts, probabilities are not explanatory; the mechanism has to be a
computation.
A next step in a process is certainly not determined by chance. When computing a next
step, some terms lead the computation to consider certain heuristic paths first, and with
greater strength. Of these preferred transitions one may give a probabilistic description,
in terms of transitional probabilities. They are conceived of as commanding preferred
expectancies and as anticipating them. But they do not command the final stabilization
points. They are not inventive, they are not innovative; they take the speaker into garden
paths but they cannot contribute to taking him out of them.
Now the Analogical Speaker, with a single device – a computation building on the
proximality of inscriptions – has the power to account homogeneously for both i) some
paths being envisaged first and ii) rare, non-obvious stabilization points being finally
elected even though they contradict the paths initially envisaged.
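The following minimal sketch – it is not the ABS computation itself, and the graph, node names and costs are invented – illustrates the kind of behaviour meant here: a best-first exploration visits the proximal (cheap) nodes first, yet a remote, initially unlikely node can still be the one where settlement finally obtains.

# Minimal sketch (not the ABS computation): best-first exploration over a toy
# graph of inscriptions. Cheap (proximal) paths are tried first, yet a remote,
# initially unlikely node can still be the one where settlement finally obtains.
import heapq

# hypothetical graph: node -> list of (neighbour, cost); all names are invented
graph = {
    "act":   [("near1", 1.0), ("near2", 1.2)],
    "near1": [("near3", 1.0)],
    "near2": [("far1", 4.0)],
    "near3": [],
    "far1":  [("settles", 2.0)],
    "settles": [],
}
settling = {"settles"}          # nodes where settlement obtains (here, only a remote one)

def first_settlement(start):
    frontier = [(0.0, start)]   # priority queue ordered by accumulated cost
    seen = set()
    while frontier:
        cost, node = heapq.heappop(frontier)   # cheapest, i.e. most proximal, first
        if node in seen:
            continue
        seen.add(node)
        if node in settling:
            return node, cost
        for neighbour, step in graph[node]:
            heapq.heappush(frontier, (cost + step, neighbour))
    return None, float("inf")

print(first_settlement("act"))   # proximal nodes are visited first, but 'settles' wins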
"The set of issues labelled "performance" are not essentially computational" Abney says
(1996:21). They are just computational and that only, but we need to understand
correctly in which way: Abney denies them the computational character because
'performance' would be opposed to the 'grammar' that governs in general that which is
possible in a language. To make things clear, as we are not making a distinction between
a competence and a performance, this amounts to say that the accomplishment of
language acts is principally computational.
Thus it appears that the probabilistic theme is worthless in linguistic acts. It might still
have some value for the description of a 'language', and this is not contradictory since
such a description is not a prerequisite to the explanation of the acts; in fact it does not
even contribute to it.
7.9.9. "Set of possibilities" criticized
To play a part in an operational theory of linguistic processes, that is, of language acts,
probabilities have a constitutional defect. Let us recall the simple definition of a
probability: it is always a ratio, that of a number of favourable cases divided by a
number of possible cases.
Roughly, in the data brought by the supporters of probabilities the set of possible cases
is, in fact, bound by the perimeter of a corpus: a time interval in the collection of the
New York Times or a defined fraction of the BNC (British National Corpus).
In order to understand how a probabilistic stance may be legitimate in a linguistic act,
we need to understand the set of possibilities which would be pertinent in it.
When a speaker carries out a linguistic act, a 'possibility' is not defined within a corpus'
perimeter: it is computed occurrentially. A few paradigmatic possibilities may enter the
scope of the computation and stay in it as competitors for a while. They have varied
strengths and one of them will be elected finally. Their consideration will have been
occurrential and guided by a definite act. Each may take part in sets of possibilities from
different viewpoints. A set of possibilities is determined by the viewpoint 'diathesis
type', another one by the viewpoint 'lexical choice', still another one by the viewpoint
'thematization or not', etc.
For a given linguistic act, if we envisage it from different viewpoints, there are different
sets of possibilities. For two different linguistic acts envisaged from the same viewpoint
– assuming it is relevant in both acts – the sets of possibilities are also different.
The 'set of possibilities' is simply not a set; at best it is a 'space', if we knew what we
mean by 'space', but we do not: the wording is metaphorical. Finally, the notion of a
'number of possible cases' has no firm base; we do not know how to turn this idea into a
number.
Therefore, in an utterance act in French which seeks to fulfil a defined enunciative
project, there is no base on which to define a probability that might help to choose. For
a given verb – assuming the lexical choice is already made – a passive construction will
not be chosen because, in Frantext, that verb is constructed passively in 63.2% of its
occurrences. The diathesis will rather be determined by multiple, converging or
contradictory conditions, all of them related to the concrete terms of this enunciative
project, then to terms close to the latter which the plexus makes it possible to reach,
then possibly to terms still less proximal, until a point at which sufficient settlements
obtain.
7.9.10. Reception is ultimately a question of settlement
There must exist a rule to the exceptions of a rule; the only question is
to discover it. Leskien254.
It is important to distinguish two things:
a) the fact that, after a morpheme or a defined segment or form, some things are
more expected than others,
b) from the fact that, ultimately, reception is a matter of settlement, and that this is
the final criterion of success, and consequently of acceptability.
It is a matter of omen255: what precedes augurs the continuation (formal habits,
preferred sequences, routines, collocations), and this can certainly be described with
probabilities. But it is also a matter of settlement (coincidence). It certainly is a matter
of expectancy, but of more than the mere sequence.
It is a play between expectancy and surprise. One talks to say something new,
sometimes at least. Upon a topic, a comment is expected. This point contradicts
probabilities: as the interest results from the comparatively unexpected, it is necessarily
insufficient to see it in frequency only. How can a probabilistic model bring the new out
of the old, the comment out of the topic? This is not very clear.
254 Leskien 1876, quoted by Paveau 2003, p. 25.
255 The original French passage is as follows: "Question d'heurs (heur est le même mot qu'augure. Littré): ce qui précède augure de la suite (heurs d'habitudes formelles, d'enchaînements, de routines, de colocations) et ceci peut sans doute être décrit avec des probabilités. Mais il faut aussi des heurs de coïncidence (settlements). Attente oui, mais plus que dans la consécutivité de la forme; sans cela pas de place pour les 'bonheurs d'expression'."
In the Analogical Speaker, the first findings, that is, the terms or records closest to the
terms of the act, are findings reached by the shortest (cheapest and strongest) paths. It
may happen that settlement takes place with them, as will be the case in trite, usual
tasks encompassing little surprise (or in the parts of tasks which are such, even if the
tasks are not entirely so), when the plexus is congruent with that triteness. But in other
tasks, or parts of tasks, it may also happen that longer abductive paths are necessary;
the process will then reach less probable areas and configurations and produce weaker
suggestions, but ones which settle into findings. This is the case for tasks which exhibit
no easy match with the inscriptions of a plexus256: they are understandable even though
the ways to their understanding (I have proposed "immersion") are rare and long.
This is how the Analogical Speaker reconciles, in its own manner, a kind of algebraic
rigour (though not a categorical one) with a dimension in which one might see effects
of probabilities; but it does so without requiring that probabilities be assumed to play
any operational role.
To quote Leskien again – he was writing in the Neogrammarian euphoria of those times
– it cannot be the case that there exists a rule to the exceptions of a rule unless the
assumption of rules is made. However, Leskien's request is not unreasonable if we
reword it as: each particular act has a motivation, even if it appears anomalous with
respect to a series in which we place it. The detail of such motivation is certainly not
always easy to know, so that it often remains potential only, but at least the theoretical
frame must make room for it instead of blurring everything in advance.
7.9.11. Probabilistic methods are a stopgap
It is when linguistic theory withdraws too much that it then has difficulty in facing the
explanation of variety.
Homonymy: one withdraws the context, creating abstract items, "fabricating"
homonymy; then one has to "disambiguate".
Categories: distributional contexts are projected onto a set of classes (which one strives
to keep small), that is, one withdraws the occurrential and proximal properties of
contexts and the cognitive proximity; and then one has to "sub-categorize"257. Here lies
the temptation of probabilities, the attempt to adjoin them to a categorist frame which
would be preserved: derivational rules for every argument schema, the rules being
weighted by their observed frequency.
Abney258 undertakes to refute an objection which he thus presents. An opponent:
Granted humans perceive only one of the many legal structures for a given sentence,
but the perception is completely deterministic. We need only give a proper account of
all the factors affecting the judgment. … A probabilistic model is only a stopgap in the
absence of an account of the missing factors.
256 Far-fetched acts, in other words.
257 Manning 2002, p. 6, verbal sub-categorization.
258 Abney 1996, p. 18.
Abney responds that, if things are so, then queuing theory, used to account for the
arrival of lorries at a warehouse, is also a stopgap. The analogy is a bad one: however
one tries, the schema of the 'serially reusable, exclusive-allocation resource' – that of the
entry point of a warehouse, a resource on which the candidates for its time (the lorries)
must queue up – cannot be made analogous to any linguistic operation.
Another argument259: a global, macroscopic account suffices, detail is useless:
… some properties of the system are genuinely emergent, and a stochastic account is
not just an approximation, it provides more insight than identifying any deterministic
factor. Or to use a different dirty word, it is a reductionist error to reject a successful
stochastic account and insist that only a more complex, lower-level, deterministic
model advances scientific understanding.
Let me quote Manning again, and summarize him:
Any ranking value of a constraint, after its evaluation, is altered by a correction
following a normal distribution law. Thus grammar constrains the output without
determining it. Does a speaker roll dice before producing an utterance? Whether or not
there are probabilities in human behaviour, their introduction here reflects the
incompleteness of the model: we do not wish to include in a model of syntax all the
factors that influence it. As we cannot know them all, we simply predict that, in the
average of their effects, some outputs will happen with certain frequencies.
It is indeed a mistake to pretend to identify all the determining factors. There is also
something to understand about the gap between the determining factors and observation.
But the schema that proves useful for embracing both is not a probabilistic one; it is that
of macroscopic determinism, which 'smoothes' the swarming base processes, while for
each single one of them deterministic causal chains are at play.
Jurafsky, speaking in Saarbrücken in June 2000, concludes: Probability is not a
replacement for structure, but an augmentation. Structures should not be augmented
with probabilities; symbolic rule systems, which are bad, should not be enhanced by a
device which adds its own implausibility to theirs260. It is more promising to replace
the structures by a device which responds macroscopically so as to produce the effects
of regularity and the probabilistic distributions that are observed empirically; especially
so if the device in question lends itself more easily to reduction.
Yes, if taken as a theoretical complement, probabilities are a stopgap.
7.9.12. The Analogical Speaker and the probabilistic speaker
The Analogical Speaker, because it is proximalist, ensures that the computation considers
the closest elements first, that is, the ones which succeed more often in conjunction. If
you wish, you may call them 'more probable'. It also ensures that the consideration of
these proximal elements often settles, but not always. In case of failure to settle with
proximal elements, settlements are sought with more remote ones. Externally, this
renders a 'probability effect' which is apparent from the outside and can be described with
259 Ibid. p. 19.
260 Alleged opponent: Surely you don't believe that people compute little symbolic Bayes equations in their heads? Jurafsky: No I don't.
probabilities. But this does not imply that probabilities are the modus operandi, in the
form of 'stochastic rules'.
The Analogical Speaker solves grammaticality judgments, but maybe not in the way
Abney proposes:
There is a problem with grammars of the usual sort: their predictions about
grammaticality and ambiguity are simply not in accord with human perceptions. The
problem of how to identify the correct structure from among the in-principle possible
structures provides one of the central motivations for the use of weighted grammars in
computational linguistics. A weight is assigned to each aspect of structure permitted by
the grammar, and the weight of a particular analysis is the combined weight of the
structural features that make it up. The analysis with the greatest weight is predicted to
be the perceived analysis for a given sentence261.
In the proposition above, the delicate point is the clause: and the weight of a particular
analysis is the combined weight of the structural features that make it up. One would
like to hear what the proposal for weight combination is. I do not believe that a
combination amounting to an 'averaging' over a set, whatever it is – the perimeter of a
corpus for example – can suit. This combination must itself be based on occurrential
reasons and processes. In the Analogical Speaker, combinations occur at each assembly
stage in the form of several factors which, taken together, reflect the ease or difficulty of
solving: the heuristic productivity for each syntagmatic constituent, the strengths of the
intermediate results leading to settlement, the ease or difficulty of access to the record
licensing the settlement, and then a combination ('combination' here is very precisely
the quadratic function used in ABS, cf. p. 346) of the strengths of concurrent
paradigmatic paths when several of them happen to contribute to abductively licensing
the same assembly.
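As an illustration only, the lines below show the behaviour expected of such a combination when several concurrent paradigmatic paths license the same assembly. The root-sum-of-squares used here is an assumption made for the sake of the example; the quadratic function actually used in ABS is the one defined on p. 346 and may differ.

# Sketch only: a quadratic combination of the strengths of concurrent paradigmatic
# paths that abductively license the same assembly. A root-sum-of-squares is used
# purely to illustrate the behaviour discussed in the text (several weak
# contributions reinforce one another); it is not necessarily the ABS function.
from math import sqrt

def combine(strengths):
    """Combined strength of concurrent licensing paths (illustrative choice)."""
    return sqrt(sum(s * s for s in strengths))

print(combine([0.5]))            # a single path: 0.5
print(combine([0.5, 0.4, 0.3]))  # several concurrent paths: about 0.71, stronger than any alone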
It is a mechanics such as this that makes it possible for "The a are is I." to receive,
globally, an interpretation with substantial strength, in a situational context congruent
with it, despite the weakness of "The a", of "a are", of "are is" and of "is I", to which no
stochastic grammar, based on however large a corpus, can ever ascribe an appreciable
strength.
A stochastic model (with stochastic rules for example) contradicts motivation. It cannot
provide a reason, a ratio, that is, a relation to something else which is already known.
Why understand this way and not otherwise? Why act this way and not otherwise? It
cannot tell.
The Analogical Speaker, on the contrary, relates interpretations to precedents. In doing
so, it motivates its responses occurrentially and not with probabilities. In it, the products
of two syntagmatic heuristic paths, that is, two elements assumed to concur in an act,
may be found either compatible and reinforcing one another, or contradictory and
excluding one another, whereas a probabilistic speaker would at best have conditional
probabilities. The latter is a poorer model.
The Analogical Speaker is based on strengths, reflecting the lengths of the abductive
paths, that is, costs. At one point, Abney, addressing the on-the-fly learning of novel
words and constructions, proposes to treat it by assigning costs to the learning
operations. Costs indeed; but costs are something other than probabilities.
261 Abney 1996, p. 8.
Manning was regretting that Stochastic OT still could not account for the ganging up of
weaker constraints so as to prevail together over a stronger one. Because it does not
reify constraints, the Analogical Speaker is not exposed to this risk; it allows
reinforcements to happen between effects, even small ones. And if these are numerous,
nothing prevents the form they favour from being elicited against another one backed
by a stronger but isolated effect.
To respond to the deficiencies of categorical frames:

  a "stochastic alliance" solution                 | the model of the Analogical Speaker
  -------------------------------------------------|--------------------------------------------------
  pursues a linguistics of the language            | centres itself on the speaker and the acts
  adds probabilities to a structural component     | uses: copositioning (analogy), proximality among
  (rules, optimalist constraints)                  | inscriptions, proximality of the dynamics,
                                                   | abduction and settlement
  is a heterogeneous construction and adds an      | is a homogeneous solution and comes close to a
  implausibility on top of another one             | certain plausibility
  remains embarrassed for articulating the         | addresses the particular first and obtains
  particular and the general, and remains          | effects of generalization; is at last proximalist
  haunted by totalism                              |

Table 17 Comparison of the stochastic alliance with the Analogical Speaker
All this discussion may be summarized in the table above.
It is by all these aspects, finally, that the Analogical Speaker demonstrates a junction
between langue and parole, between competence and performance; in fact, these
oppositions are no longer mandatory.
As for the stochastic alliance schemas examined, they appear as attempts to rescue
rules, categories and constraints, or to live with them for lack of anything better.
7.9.13. Frequential models: Skousen and Freeman
Before leaving this section, a word must be said about the statistical proposals of
Freeman262 and Skousen263 (cf. p. 188). The word 'statistical' is used here, and not
'stochastic' or 'probabilistic', for a reason which will soon become clear.
Both approaches are based on the exploitation of a corpus from which they pick up
frequencies of collocation and of distribution. They collect – these are very bulky data,
and moreover not plausible – the frequencies of cooccurrence. These are 'numbers of
favourable cases', and the quotient by a 'number of possible cases' is never taken. In this
respect they may be called frequential, or statistical, and not probabilistic. They pay no
attention to a set of possible cases because they have given up any symbolic frame:
there are no rules, no categories, no constraints – nothing to which the counts they make
could be related. It is not an alliance, it is a replacement.
262 Freeman 200; see also http://www.chaoticlanguage.com/shortSeattleCameraReady.rtf.gz and http://www.chaoticlanguage.com/fundamenta.ps.gz
263 Skousen 1989.
                     Generative grammars   Stochastic alliance         Frequential models       Analogical Speaker
                     (Chomsky)             (Abney, Pereira, Manning)   (Skousen, Freeman)       (this work)

  Structure          'Structure' is explicit, symbolized,              'Structure' is not reified; it manifests
                     and precedes the acts                             itself during the acts

  Means              Categories + rules    Categories + rules          Frequencies/statistics   Proximality explicit
                                           (+ constraints)             making up for            in the inscriptions
                                           + probabilities             proximality

  Analogy            Analogy excluded      Analogy excluded            Underlying analogy       Explicit analogy

  Plausibility       Implausible           Twice implausible           Effects are plausible,   Effects are plausible,
                                                                       substrate implausible    substrate more plausible

Table 18 Four ways of acknowledging structure
In fact there is a set of possible cases, but only one: the whole corpus from which the
counts are made. It is the closure of the 'language' that these models apprehend. As
there is only one, it remains elided in these works. The computations of occurrential
acts are then carried out by exploiting the statistical frequencies attached to terms and
collocations. The computations are very heavy but remarkably insightful; they succeed
on effects of tenuous grammar and they even suggest a little semantics. A certain
amount of proximality, as I defined it, is exerted, but without having been explicitly
inscribed, which is why the computations are heavy. These models – Freeman's at least
– encompass what I have called here 'expansive homology' and operate comparatively
well, with the same limit as the Analogical Speaker currently has: dependencies like
agreement are not covered, or only poorly.
The two frequential models are positioned, as shown in the table above, with respect to
the frames already examined. The case of Itkonen264 does not appear in this table: he
recognizes analogy, maintains categories and rules, and has neither probabilities nor
statistics.
264 Itkonen 1997, Itkonen 2003.
7.10. Relation with connectionism
This model is residually symbolist and it shares several characteristics with
connectionism: no categories, no reified rules, no reified constraints, parallelism, the
involvement of a large population of elements, competition, etc. Does this make it a
connectionist model? Yes and no, in two respects.
7.10.1. Terms are postulated
Firstly, the model postulates entities (the terms) which are discrete, identifiable and can
be referenced. Upon the rebirth of connectionism, a first period, typically PDP265, did
not make such a postulate. Between the input layer and the output layer, there was no
assignment of cells to the objects of the problem, and there were no internal discrete
entities other than the cells. The dogma then was that weights and links would suffice
for whatever the model was given to learn, and that in a connectionist model everything
could interact with everything. This conception met a limit: in experiments bearing on
language, the response collapsed after about 700 lexical entries, and increasing the
number of cells would not restore it:
Models of reading and spelling can avoid lexical representations, because orthographic-phonological correspondences typically make little reference to lexical items. However
these models run into more serious problems (Cotrell and Plunkett 1991; Hoeffner
1992), when dealing with language learning and word production. Models of the
Hoeffner type display this problem most clearly. They learn to associate sound to
meaning and store these associations in a distributed pattern in the hidden units. This
approach works well enough until the model is given more than 700 forms. At this
point, the large pool of hidden units is so fully invested in distinguishing phonological
and semantic subtypes and their associations that there is simply no room for new
words. Adding more hidden units doesn't solve this problem, since all the
interconnections must be computed and eventually the learning algorithm will bog
down. It would appear that what we are seeing here is the soft underbelly of
connectionism – its inability to represent Islands of Stability in the middle of a Sea of
Chaos. Perhaps the problem of learning to represent lexical items is the Achilles' heel
of connectionism266.
A more recent generation of connectionist models, building on Kohonen maps, the 'self-organizing feature maps' or SOFM267, accommodates lexical entries as implementable
with connectionist techniques and thereby overcomes the limit met by previous models.
In this, the Analogical Speaker, by recognizing what it calls the terms, is compatible
with the most recent connectionist routes.
7.10.2. This model is localist
Secondly, the Analogical Speaker is localist: it maps a linguistic entity exactly onto an
entity of the model. Localism in this sense was long considered 'bad' in the
connectionist culture: Elman held that a localist representation could be adopted
because it was more explanatory, since it better represented the ways results are
obtained, but he added that it was neither plausible nor necessary268; however, the same
265 McClelland 1986.
266 Mac Whinney 2000, p. 133.
267 Cf. for example Mac Whinney 1998, Mac Whinney 2000.
268 Elman 1998.
Elman also said: The following simulation (…) used localist representation (this makes
the point that, for this issue, nothing critically hinges on localist versus distributed
representation)269. In the neighbouring field of vision, Michael Page, in A Localist
Manifesto, advocates localism: the localist approach is preferable whether one
considers connectionist models as psychological-level models or as models of the
underlying brain processes270.
This suggests that the relevant question is not whether the representation is distributed
or localist; it is more critical to know whether the model, whatever its approach to
representation, has the expressive power which, at the model's proper level, makes
sufficient room for the necessary entities.
If the model is a distributed connectionist network and if it is able to let lexical entries
emerge when they are needed, then whether it is localist or distributed is only a matter
for another plane of discussion: that of plausibility, possible reduction, etc. At the
model's own level – Marr's intermediate level, where the discussion is situated –
implementation considerations are secondary, and the choice between a distributed and
a localist representation has no other import than allowing, or not, the necessary entities.
7.10.3. What algebra for the mind
Another good way to situate the Analogical Speaker with respect to connectionism is to
recall the assessments and the discussion of Gary Marcus271. In his book The Algebraic
Mind, Marcus defines what he collectively calls 'basic computational elements'
(hereafter BCEs). BCEs are functional requirements which, for Marcus, are mandatory
in cognitive systems: cognition in general and language in particular. They are
expressed at an intermediate level lying between a high-level vision of cognition (high-level
properties of the mind) and the neurons (facts about cell transport).
The three basic computational elements (BCEs) that are required are:
(a) it must be possible to represent rules and variables and to make them interact
with each other; the empirical finding is that the models which have them behave better,
(b) it must be possible to represent recursive structures,
(c) it must be possible to represent individuals and to involve them in cognitive
operations.
For these three BCEs a 'symbol manipulation machinery' is needed; we do not have a
proof of its existence but all the effects which we cannot obtain otherwise require that it
be a product of evolution.
Still according to Marcus, neuromimetic connectionism so far – he analyses dozens of
proposals and architectures – is uniformly bad on these three points. For example, either
the models have no rules, and their response is insufficient, or they end up behaving
correctly, but then they encompass rules in a hidden manner, even when they claim not to.
269 Elman 1998, p. 8 (highlighted by me).
270 Page 2000, p. 443.
271 Marcus 2001, Algebraic Mind.
All this, for Marcus, does not invalidate connectionism but assigns to it an obligation:
the three BCEs are mandatory and the models must cope with them in one way or another.
The model put forward in this work may be analysed at the same intermediate level,
along the three BCEs of Marcus.
About point (c), individuals are indeed required. They are not the atoms or the primary
elements of Russell or of the Tractatus, they are not the ultimate constitutive parts of
reality; they are, for Marcus and in my own view, identifiable and discernible entities.
In "The little star is beside the big star", it must be possible not to confuse the little one
and the big one. For Marcus, they are bound to be instances. Here, since categories are
not received, it is neither possible nor necessary to envisage individuals as instances;
they have to be viewed as (some of the) private terms. The subject of cognition and of
language perceives them and recognizes them as 'the same ones' in their recurrences,
which qualifies them as terms, since this is the definition of 'term'. Such terms are
recognized independently of the linguistic form (or forms) which may refer to them,
and also, of course, when there is none. Being terms, they can be involved in analogies.
About point (b), recursive productivity is, for me, ensured by the abductive computation
founded on the four analogic abductive movements. This was shown in the numerous
examples provided, for example: "John is too stubborn to talk / to talk to / to talk to
Bill" where productivity obtains while observing agentive roles. The provided solution
is productive and recursive without having to be generative, and even less
transformational.
About point (a), Marcus requires symbols, but here they are not needed. We only need
'terms', which are not the same thing. As for rules, it has been shown abundantly how
regularization effects obtain without reifying the least rule in the device. As for
variables, it was extensively shown that there is no need for them, since abstractions are
rejected, rules among them. A consequence is that (this aspect of) variable binding falls
away by itself. On this question, see also section 7.8, Binding, variables, variable
binding (p. 218).
Finally, two of the three obligations272 assigned by Marcus to the connectionists, point
(a) and point (b), correspond to capabilities which the Analogical Speaker already has.
They are simply not fulfilled exactly as Marcus formulates them. It appears, on point (a)
in particular, that Marcus overspecifies the requirement. He observes – rightly – that
regularity can be observed in cognitive systems, but he prescribes – wrongly – that rules
approximating this regularity be causally present in the network. Sticking to 'rule
effects' would be more faithful to observations, it would be sufficient, and it would no
doubt be easier to implement and to reduce.
272 The third one, individuals, is not anticipated to be a problem, but, as it belongs to a compartment which is not developed yet, I refrain from considering it as granted.
In other words, the observable effects of symbol manipulation may not require a
'symbol manipulation machinery'. This modifies the requirements placed on evolution
(here understood as phylogenesis).
This is what the Analogical Speaker claims, and proves in part. This effect is obtained
within an approach which is connectionist through the meshed character of the plexus,
but which is not, of course, neuromimetic, and which has its own limits of plausibility.
From there, two ways are possible: a) remain a neuromimetician, that is, prolong the
metaphor usual in connectionism (cells for neurons, activatable connections for
synapses, etc.) and try to reimplement the Analogical Speaker neuromimetically; or b)
rework the Analogical Speaker in the direction of greater plausibility (simultaneously
with the extension of its functional coverage, or separately).
Chapter 8.
Margins, prolongations, improvements
This chapter addresses some limits of the model and questions related to its
architecture, such as the definition of its perimeter, or questions which relate its limits
to the architectural options taken.
Some linguistic questions are currently little addressed or poorly solved by the model. I
showed a way of treating agreement without syntactic features: agent AN2. This
treatment is not fully satisfactory because it is heavy, it has a low plausibility, and the
procedure used seems to be difficult to extend to more than two constituents. It also
seems difficult to combine it with the B2-B3 analysis process, cf. p. 157 for details.
In its current state, the model is insufficient in its treatment of groups (conjugation
groups, declension groups, etc.); it mixes up morphemes across groups without any
control, cf. p. 169.
The following topics will now be addressed in this chapter.
The model's extension to non-concatenative morphologies is conceptually simple and is
just a question of development.
For acquisition and learning, a proposition is made which is homogeneous with the
dynamics of the acts and very compatible with the findings of psycholinguistics. It
should be validated by an experiment but the latter would have a certain cost.
The possibility to extract a plexus mechanically from a corpus is discussed. Overall, the
conclusions are negative.
By contrast, the orientation is more positive for a concept of self-analysis which, by
different means, would alleviate the description burden (plexus fabrication).
The quasi-absence of coverage of semantic questions is a limit of the model in its
current development. Section 8.5. (p. 260) below draws a few lines in this direction.
Finally, the core assumption of this research, that of radical non-categoricity, is
discussed.
8.1. Non-concatenative morphologies
Certain morphological 'recipes' are not currently treated in the model and are required:
infix morphology, apophony and, more generally, non-concatenative morphological
processes. While the plausibility of the solutions deserves a discussion in itself (at
Marr's level 3, that of material composition), such extensions pose no particular
problem to the model at its own level (Marr's level 2, which Marr calls algorithmic,
though this wording is not very good).
Such an extension is conceivable for the apophonic morphologies of the Semitic
languages; the experiment could be done in Arabic. Facts of Arabic such as those in the
table below lend themselves very well to analogical computation.
  triconsonantal root:  KTB
  accomplished:         kataba     'he wrote'
  non-accomplished:     yaktubu    'he writes, he will write'
  imperative:           ?uktub     'write!'
  noun:                 kitaab     'book'
  diminutive:           kutayyib   'booklet'
  place noun:           maktabun   'office / library'

Table 19: Sample of Arabic morphology273
First of all, it has to be noted that the n-arity of concatenative assemblies (cf. p. 371)
cannot be invoked in the treatment of such phenomena. The idea would be to make
kataba an assembly of six constituents, k+a+t+a+b+a, that is, of six terms. This cannot
be sustained, because a pattern like K*T*B has to be viewed as a single unit (that is, in
the model, a single term); it would be inappropriate to dissociate it and pretend to make
K, T and B constituent terms. The root K*T*B is not itself a morphological or syntactic
assembly; it is a morpheme in its own right, it simply has a particular structure and a
particular mode of assembly with the vocalic-accentual patterns which can be associated
with it. Likewise for the latter: a breakdown into constituents has no justification.
This morphology rather calls for an adaptation of inscription structure: in addition to the
C-type record (assembly by concatenation) defined above, a new record type is required
and so are the corresponding dynamics for assembly and analysis. It still is a
construction but the recipe is different.
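A minimal sketch of what such a non-concatenative 'recipe' amounts to, under simplifying assumptions: the pattern notation with 'C' slots is invented for the example and is not the record structure of ABS; it only shows that a consonantal root is interdigitated with a vocalic pattern instead of being concatenated with it.

# Minimal sketch, under simplifying assumptions, of a non-concatenative assembly:
# a consonantal root is interdigitated with a vocalic(-accentual) pattern.
# The 'C'-slot notation is invented for the example; it is not the ABS record type.

def interdigitate(root, pattern):
    """Fill the C slots of a pattern with the consonants of the root, in order."""
    consonants = iter(root)
    return "".join(next(consonants) if slot == "C" else slot for slot in pattern)

root = "ktb"
print(interdigitate(root, "CaCaCa"))    # kataba   'he wrote'
print(interdigitate(root, "CiCaab"))    # kitaab   'book'
print(interdigitate(root, "maCCaC"))    # maktab   'office / library' (maktabun with its case ending)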
Here again, the dependency is not on a language or a group of languages, but rather on
a morphological process which may be found in several languages.
The morphological process consonantal pattern + vocalic pattern applies to Arabic,
Hebrew and Aramaic, and to certain apophonies in the Germanic languages.
Similarly, one and the same morphological process of accent placement + adjustment in
closed syllable applies in French (lever : lève, crever : crève, mener : mène, etc.) and in
English (ínsult (noun) : insúlt (verb)274, etc., or sane : sanity, vain : vanity, nation : national275).
273 Bohas 1993a.
274 Cruttenden 1986, p. 7.
275 Lamb 2000, p. 92.
What, for example, would be the impact of this measure on a process like syntactic
analysis? What of the current B2-B3 process would be reused, and what would be
affected?
The general dynamics of the process remains stable. Overall questions like activity
control, the processing of syntactic ambiguity276 and the way out of garden paths are
not touched. As for n-arity, from the examples that could be collected it appears that
binary branching suffices; no case was found in which non-concatenative morphology
would require ternary branching. However, this is contingent; if a case of
non-concatenative assembly requiring ternary templates were found, the model would
accommodate it without difficulty.
There is an incidence on the structure of the base inscriptions: as mentioned before, a
new record type is needed which manifests the assembly recipe in question.
There is also an incidence on field data. From the point where a non-concatenative
assembly is made and a term is obtained which, in turn, undergoes concatenative
assembly, the vision of the field, and of field data, can be the one provided in Chap. 5,
that is, a start point and an end point of a span in a monolinear string. But before that
point is reached, the terms recognized in the input flow, and their assemblies if they
assemble non-concatenatively, do not follow this schema. Another topology has to be
modelled – perhaps with bilinearity – which is something other than edge-to-edge
adjacency.
Of course, the same incidence also applies to the process which, in the input flow it
explores, finds all the potential constituents directly identifiable in the plexus and
delivers them to the builder agents. Cf. section 16.3, Parsing of the argument form
(p. 365).
8.2. Acquisition, learning, reanalysis
8.2.1. Mode of learning
This model does not tell how a plexus is initially obtained. Bootstrapping is not
addressed by the Analogical Speaker, which, on the contrary, focuses on the isonomic
dynamics of language in a speaker at a given point in the history of his linguistic
knowledge, that is, when a minimal set of analogies has already been acquired. At such
a point, it is possible to propose a first-approach model of learning by building on the
definition of the plexus and of the associated dynamics.
Chomsky, discussing rule-changing creativity and rule-governed creativity277, writes
this:
276 However, the contribution of non-concatenative constructions to syntactic ambiguity ought to be investigated. There is no a priori reason to believe that the properties of concatenative constructions are transposed identically.
277 Cf. the very beginning of the Introduction section of this dissertation.
In fact the technical means to treat the rule-governed creativity as distinct from the
rule-changing creativity really became accessible recently only, in the last decades, on
the occasion of work in logic and in the foundations of mathematics278.
This may be true if one keeps a symbolist approach to language, and therefore to its
learning. If not – if rules are not adopted – the point can no longer be to account for
their evolution, nor for the substitution of new rules for older ones; it may instead
become that of showing the evolution of the modes of regularization, that is, of
stressing the slight exemplarist modifications to the dynamics which produce regularity
effects. These modifications will, or will not, be followed by propagation by analogy,
occasionally operating as paradigm repair. As for "technical means", this requires
nothing mathematically or logically very elaborate.
8.2.2. Incremental learning, a simple learning model
Let us assume a speaker having a novel linguistic experience: for example, he receives
an utterance and analyses it. What he performs are structure mappings: the new
utterance is mapped onto an existing record which licenses it abductively279. The
simple learning model consists of assuming that this linguistic experience leaves the
following remnant modifications in the plexus: a) the successfully received utterance is
inscribed in the plexus as a new record, b) a paradigmatic link is installed between the
latter and the record which licenses it, and c) the familiarity orientation between them
is such that the new record is less familiar than the licensing one280.
The plexus is thus locally modified. Thereafter, in this plexus:
a) certain utterances take advantage of the new exemplar (of the new record) and of
the new link; they are analysed faster;
b) their interpretation base is modified.
Another effect is that some utterances which used to be so difficult to analyse as to be
uninterpretable in practice now acquire a better interpretability.
This schema constitutes a model of learning. It is supervised learning in the
connectionists' sense, since the modification of the plexus is subordinated to the
speaker's presumption that the analysis he made was successful. Modelling linguistic
learning in this way has a great advantage: it is incremental, because it consists of slight
and successive modifications to the linguistic knowledge. These are exemplarist,
because they bear on the novel exemplar just inscribed and on the one which licenses it.
Their effect is proximal: the incidence of each is limited to a small number of other
exemplars, but the repetition of such learning steps produces cascades which explain
the generalization of a usage in a speaker.
278 Chomsky 1964.
279 More precisely, an analysis consists of several such mappings, which are levelled; this was shown in Chap. 4. Locally, the discussion in this section considers one level only, but the conclusion is the same.
280 This model is compatible with a learning model proposed by Minsky, the theory of "Knowledge lines" or "K-lines": We keep each thing we learn close to the agents that learn it in the first place. (Minsky 1985, p. 82, already quoted supra)
The simple learning model may be simulated in the Analogical Speaker:
1) on a plexus taken in an initial state, have a given form F analysed; the result may
be that the form is analysed slowly and weakly licensed, yet successfully;
2) sanction this success by modifying the plexus as indicated above;
3) repeat the same analysis and observe that the result is now obtained faster;
4) now have the model analyse forms that share something with the one that caused
the modification, and observe that, for some of them at least, the analysis is now
carried out at a lower cost than initially;
5) check for non-regression: acts akin to this one should not see their results
degraded after the modification.
This experiment could be carried out; it was not done, only for lack of time.
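For concreteness, here is a toy skeleton of that experiment, under assumptions that are mine and not those of the ABS implementation: the plexus is reduced to a list of exemplar records plus paradigmatic links, and the cost of an analysis is crudely approximated by a word-level distance to the nearest exemplar. The commented lines at the end map onto the steps above.

# Toy skeleton of the experiment (not the ABS implementation). A 'plexus' is reduced
# to exemplar records plus paradigmatic links; the 'cost' of analysing a form is
# crudely approximated by its distance to the nearest exemplar in the plexus.

def distance(a, b):
    """Toy dissimilarity between two forms (word-level, order-insensitive)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa ^ wb)

class ToyPlexus:
    def __init__(self, records):
        self.records = list(records)        # exemplar records (here, plain strings)
        self.links = []                     # paradigmatic links: (new_record, licensor)

    def analyse(self, form):
        """Return (cost, licensing record) for the cheapest mapping found."""
        return min((distance(form, r), r) for r in self.records)

    def learn(self, form):
        """Simple learning step: inscribe the form and link it to its licensor."""
        cost, licensor = self.analyse(form)
        self.records.append(form)
        self.links.append((form, licensor))
        return cost

plexus = ToyPlexus(["she gives generously", "he eats"])
print(plexus.analyse("he takes everything"))   # step 1: analysed, but at a high cost
plexus.learn("he takes everything")            # step 2: sanction the success
print(plexus.analyse("he takes everything"))   # step 3: the same analysis is now cheap
print(plexus.analyse("she takes everything"))  # step 4: a related form also got cheaper
print(plexus.analyse("he eats"))               # step 5: an unrelated act is not degraded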
This learning model explains the progressive generalization of a usage. The constant
finding in psycholinguistics is that a new syntactic acquisition in children is not
available at once in all usages. Rather, it follows an 'epidemic' propagation schema like:
  Daddy gone       Daddy gone        Daddy gone         Daddy is gone
  Jo naughty       Jo naughty        Jo is naughty      Jo is naughty
  cat dead         cat is dead       cat is dead        cat is dead
Let us take a result at random in the literature281; here, the phenomenon is characterized
in the formalism of the Government and Binding theory but this is not the point that
matters.
A syntactic acquisition by a speaker first appears as one or a few occurrences, after
which its extension grows progressively, slowly at first, then faster, then slowly again,
finally reaching the last few exemplars. The process follows a sigmoid curve, and its
time span is 20 to 40 weeks depending on the phenomenon and on the speaker. In a
given speaker, several such sigmoidal acquisitions follow one another in time,
massively between the ages of two and three, but each lasts 20 to 40 weeks. All
reported empirical results follow this schema.
An explanation like that of Principles and Parameters (a parameter takes a new value
and this determines the application of a new rule) is in a bad position to say why a rule
does not apply everywhere at once. Incidentally, this constitutes another argument for
refraining from positing rules.
Stochastic approaches explain these transition periods by the coexistence of two
stochastic rules and the gradual evolution of the probabilities that weight them (cf. p.
225). But this explanation does not make a precise causal link between the evolution of
stochastic weights and the occurrential experience of the speaker.
281 Arnold Evers and Jacqueline van Kampen, 2000, E-language, I-language and the Order of Parameter Setting, http://www.let.uu.nl/~Jacqueline.vanKampen/personal/downloadables/evers-kampen-Syntax.pdf
Figure 31 The propagation of an acquisition follows a sigmoid curve
In an exemplarist and proximalist model like the one defended here, on the contrary,
the explanation can be more precise if we recall what was just proposed: how a new
form is licensed by analogy with one – or a few – exemplarist precedents. Chap. 4
showed how the B2-B3 process performs mappings between a new form and one or a
few licensing records, which are the precedents in question. For a new form F1, the
licensing records (C-type records) are P1a, P1b, etc.; they are determined proximally,
depending on the terms that can be recognized in F1. For a form F2, the licensing
records P2m, P2n, etc. may accidentally coincide with the P1i, but they are most often
different, at least at the beginning of the acquisition process. This allows us to
understand how, at a given point of the learning process, F1 may take advantage of a
new syntactic acquisition whereas F2 cannot yet. Later, P2m, for example, may have
been reanalysed and have become aligned with the new syntactic acquisition; this
modifies the outcome for the new forms that tend to be licensed by it: a step has been
taken in the generalization of a usage; the dynamics has progressed a little along the
sigmoid.
This model lends itself to formalization. Let n, a function of time, be the fraction of the
linguistic knowledge that follows the new usage. At a given point of the learning
process, the variation of n, that is, its derivative, is proportional to n because, in the
proposed schema, only existing exemplars of a construction can license new ones:
dn / dt = k n
But the only exemplars likely to adopt the new usage are those that have not yet done
so. Their proportion is (1 - n), so the variation of n is also proportional to (1 - n) and the
derivative then has the form:
dn / dt = a n (1 - n)
where a is a factor constant over time. The function n(t) itself is obtained by separating
the variables and integrating:
∫ dn / (n (1 - n)) = ∫ a dt
which gives, choosing the origin of time so that n = 1/2 at t = 0:
n(t) = 1 / (1 + e^(-a t))
[Figure: plot of the logistic curve n(t) = 1/(1 + e^(-0.5 t)), rising from 0 to 1 for t between -10 and 10]
Figure 32 The logistic function
This function, drawn in the figure above, is the logistic function. It governs many social
and biological phenomena, like the growth of a culture of bacteria or the propagation of
an epidemic (when no contrary factor limits its growth). The simple learning model thus
predicts that acquisitions will spread according to the logistic function.
Now the logistic function is one of the possible realizations of a sigmoid curve. The
prediction of the simple learning model is therefore in accord with the learning curves
that experience shows. Finally, proximalist inscriptions and the simple learning model
make the Analogical Speaker a plausible model of linguistic learning.
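The prediction can also be checked numerically in a few lines (a sketch under arbitrary parameter values, not an experiment on a plexus): stepping the exemplar-licensing dynamics dn/dt = a n (1 - n) with a small time increment reproduces the closed-form logistic curve.

# Numerical sketch: the discrete counterpart of dn/dt = a n (1 - n), stepped with a
# small time increment, is compared with the closed form n(t) = 1 / (1 + e^(-a t)).
# Parameter values are arbitrary; the point is only that the exemplar-licensing
# dynamics and the logistic curve coincide.
from math import exp

a, dt = 0.5, 0.01
n = 0.5                                   # time origin chosen so that n(0) = 1/2
for step in range(1, 2001):               # integrate up to t = 20
    n += a * n * (1.0 - n) * dt
    t = step * dt
    if step % 500 == 0:
        closed_form = 1.0 / (1.0 + exp(-a * t))
        print(f"t = {t:5.1f}   simulated n = {n:.4f}   logistic = {closed_form:.4f}")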
8.2.3. Reanalysis
"P2m is then reanalysed". The simple learning model also explains reanalysis because
multiple analyses is always open in it as a possibility,. Let us assume an utterance U
which, before the plexus modification, is analysed with an analysis A1 (that is, a
segmentation of U), by a licensing record C1; C1 operates on U a defined segmentation
into constituents. Assume U is analysed also into an analysis A2 by a licensing record
C2, operating the same segmentation or a different one. Assume analysis A1 is strong
and analysis A2 is weak. Macroscopically, U is analysed according to A1, the other path
remaining a vitality hidden within the analysis process, probably unconscious to the
speaker. Suppose now the plexus is modified as described in the preceding section, that
is, a learning step is taken. It may change this balance; it may alter the difference of
253
strengths between A1 and A2 and make that A2 becomes the preferred analysis for U.
Assume in addition that C1 and C2 impose different segmentations on U. U used to to
be analysed with segmentation C1, it now is analysed with segmentation C2. This is a
first acceptation of reanalysis: the analysis habits of one speaker only change on a point.
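A toy illustration of this first acceptation (the strength values are arbitrary and the segmentations are only labels): two competing analyses of the same utterance are held with their strengths, and a learning step that reinforces the weaker one flips the preferred segmentation.

# Toy sketch of the first acceptation of reanalysis: two competing analyses of the
# same utterance, with strengths; a learning step reinforces one of them and the
# preferred segmentation flips. Strength values are arbitrary.

analyses = {
    "somn+olent":  0.60,   # A1, licensed by C1 (e.g. via Latin olentus)
    "somnol+ent":  0.45,   # A2, licensed by C2 (e.g. via prudent, parlant, ...)
}

def preferred(analyses):
    return max(analyses, key=analyses.get)

print("before learning:", preferred(analyses))   # somn+olent

# a learning step strengthens the exemplars backing A2 (new -ent forms inscribed)
analyses["somnol+ent"] += 0.25

print("after learning: ", preferred(analyses))   # somnol+ent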
There is another acceptation, associated with linguistic activity in a community. Of a
given utterance U, speaker S1 makes a given analysis (somnolent = somn+olent, to take
Saussure's example again). Of the same utterance, speaker S2 makes a different one
(e.g. somnolent = somnol+ent), but one such that the meaning thus construed by S2 is
contradicted by none of the situation's data, so it is accepted. Speaker S2 'reanalyses'
the analysis made by S1. Speaker S2 is younger than speaker S1 and does not know the
same things (he knows lent, prudent, parlant, marchant, etc., but he does not know the
Latin olentus). As time elapses, many S2s, whose knowledge is compatible on this
point, reanalyse in this way. The S1s grow old and die: the 'language' – a shortcut for 'a
static, grammatical description of the linguistic practice of a set of speakers who think
they understand each other' – has carried out a reanalysis.
I may stop here for the explanation of learning and change, with a worry: this all seems
too simple. Yet all the steps of this reasoning are available in the model. The proposed
schema may be that of an experiment; it would be heavy to carry out but all its operative
steps are defined and the experiment is possible.
Actually, some explanations become much simpler if we adopt, as operational
explanatory mechanisms, assumptions that are different from rules and categories, and if
we dispense with the assumption of a language, conceived of as an abstraction with an
explanatory role.
If this model has some value, it meets Auroux, suggesting against Chomsky that
creativity and productivity (rule-changing creativity and rule-governed creativity) do
not constitute two fundamentally distinct modes:
One may […] claim that creativity is part of ordinary human behaviour, and even that
there is no essential difference between the way men speak day-to-day, and the way
language changes. Auroux 1998, p. 95.
If an exemplarist and occurrentialist theory had only one advantage, it would be that of
explaining, in continuity and with the same means, linguistic acts and learning and, after
them, variation and linguistic change.
8.2.4. Discussion of the simple model
Learning is more than a mere recording of facts.
For the simple model, learning is the recording of a fact but not merely the recording of
a fact since precise paradigmatic links are established. The novelty which is learnt is
thus strictly copositioned with the already known and so is directly ready to serve,
following modes already described with precision.
Learning cannot be an in extenso recording; experience shows that it goes
along with condensation.
In the proposed schema, a first condensation is the following: when a linguistic form,
for example this evening, is encountered several times, it is not recorded several times
as the string t+h+i+s +e+v+e+n+i+n+g. On the contrary, this string is inscribed once
only, and the various encounters of the term, its various occurrences in A-type records
or C-type records, are seen as so many references to its unique inscription. This is what
is said when one requires that terms be re-identifiable in their recurrences. This is a first
condensation; after it, A-type records and C-type records are "alleviated" of the form,
and the terms may be seen as punctual.
It is not the only condensation: the records, even so "alleviated", remain exemplarist;
when they accumulate, they become redundant. There is evidence that brains, without
going as far as categorial abstractions, also condense some of this redundancy, cf.
section 8.6, Is radical non-categoricity sustainable? (p. 268).
Condensation even has to be semantic.
But there is more to it: condensation has to become "semantic" and abandon the form.
This requirement comes from the results of psycholinguistics282 which show that the
speaker tends to forget the form as soon as he has understood. When asked to repeat
what he heard, he tends to paraphrase, instead of repeating literally.
The thing is recognized but nothing more can be said as long as the model is not further
developed in the direction of meaning.
8.3. Using a corpus to set up a plexus
Currently a plexus is "hand-made", it is constituted term after term, record after record,
paradigmatic link after paradigmatic link, by a human descriptor who, at each step,
meets questions of opportunity and responds by judgments which involve his culture as
a speaker, therefore a subjective one. This, in itself, is not an inconvenience: as the
plexus is assumed to match the linguistic knowledge of an individual, it is not
incoherent that it be marked by the subjectivity of a person.
Yet, when the target is the linguistic knowledge of someone else, we are not very sure of
the method of 'cultural displacement' which should be adopted.
Moreover, the descriptor uses not only his subjectivity as a speaker but also that of the
grammarian, or of the linguist, which he cannot help being even when he pretends to
refrain from it in this task. This drawback is more serious. The risk is, besides a
speaker's subjectivity, that of introducing the preconceptions which the descriptor may
have about the very description of his language: epilinguistic knowledge and
metalinguistic knowledge. Yet that is not what is at stake, which is linguistic knowledge
and not metalinguistic knowledge. There is therefore a risk of getting back just what
was put in, namely preconceptions. The risk is increased by its not being thematized in
the approach; this dimension might produce effects that escape critique simply because
they would not be identified.
Finally, preparing a plexus by hand is expensive and hardly allows one to reach a real
sample of a language: a few thousand terms and records, when a reasonably complete
coverage would rather require hundreds of thousands or millions. At worst an
engineering problem, one might say: it would begin to matter only when trying to apply
this theory. Not only that: there is a question of theoretical import associated with
plexus size, namely whether or not the computation costs explode, and also whether the
results disperse or degrade with size, or improve – it is hard to say before investigating.
These questions are difficult to study without a range of plexii of different sizes
available.
282 Since Bransford 1971.
Therefore, the idea for a plexus is to make its fabrication objective and economical, maybe by mechanising it. The idea would be to exploit a corpus to that end in the most automatic way possible. This was suggested to me several times.
After all, by contrast with introspection, a corpus is 'data', even with limits. Corpus
investigations yield surprising results. For example, Rastier finds non-compositionalities
in French which nobody would guess: for bras, main, jambe, pied, the distribution of
singular and that of plural are almost entirely different. Another suggestive work is that of Goldsmith (2001), who extracts the morphemes of a language from a corpus by applying to it the MDL (minimum description length) heuristics of Rissanen283. Mostly, the method finds the morphemes with which we are familiar, and when it does otherwise, it is for reasons that are understandable and, in the end, 'good' for the corpus to which it was applied.
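The flavour of the MDL heuristics can be suggested with a toy computation (this is only an illustration of the description-length comparison, not Goldsmith's algorithm; the word list and the naive character and pointer counts are assumptions of the sketch): a segmentation into stems and suffixes is preferred when it shortens the total description.

    words = ["jump", "jumps", "jumped", "walk", "walks", "walked"]

    def dl_unsegmented(words):
        # Description length if every word is listed whole (counted in characters).
        return sum(len(w) for w in words)

    def dl_segmented(stems, suffixes, analyses):
        # Description length of the stem and suffix lexicons plus, for each word,
        # one pointer to a stem and one pointer to a suffix.
        lexicon = sum(len(s) for s in stems) + sum(len(s) for s in suffixes)
        pointers = 2 * len(analyses)
        return lexicon + pointers

    stems, suffixes = ["jump", "walk"], ["", "s", "ed"]
    analyses = [(st, su) for st in stems for su in suffixes]

    print(dl_unsegmented(words))                    # 30
    print(dl_segmented(stems, suffixes, analyses))  # 8 + 3 + 12 = 23: segmenting wins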
However, for the Analogical Speaker, working from corpora presents inconveniences; to begin with, those that are general to corpus linguistics:
The proportion of hapax in a corpus often exceeds 60%. Turenne 1999.
Research over the last twenty years demonstrated practices exclusively based on
corpora to be subject to two restrictions. 1) The construction of a grammar from a
corpus by distributional or structural analysis methods fails. With a small corpus, a
comparatively coherent system obtains, then, as the size increases, as the nature of the
corpus changes, beyond a threshold of quality, a new rule enters in conflict or in
contradiction with another and unbalances the system: it causes a previously balanced
system to "diverge". 2) In "approximate" grammars built from larger and larger corpora,
that which is grammatical is that which is described by the grammar. The union of two
heterogeneous corpora may yield two incompatible grammars. Habert 1997.
This same fundamental property has another symptom: the multiplication of concurrent
analyses:
The scope of identifying the correct parse cannot be appreciated by examining
behaviour on small fragments [of English], however deeply analyzed. Large fragments
are not just small fragments several times over – there is qualitative change when one
begins studying large fragments. As the range of constructions that the grammar
accommodates increases, the number of undesired parses for sentences increases
dramatically. Abney 1996, p. 17.
A phenomenon which Hopper284 summarizes as: "The larger the corpus, the smaller the grammar."
283 Rissanen 1989.
284 Paul Hopper, conference in Paris, Nov. 2001.
Techniques extracting structures from a corpus generally lack incrementality; this unwanted characteristic is also present in connectionist models. Now, here, incrementality is essential because i) it is an experimental variable: it must be possible to assess and analyse the effects of an incremental modification of a plexus, and ii) an ambition of the model is to account for acquisition, which is itself incremental. It might be possible to live with non-incremental corpus techniques and see them as providing an initial plexus which might, from that point, evolve incrementally in a manual mode. This would be compatible with the opinion that one should "… combine corpus-based techniques with intuition-based techniques"285.
This would be workable provided that the technique in question could yield results matching the needs. Here there are several shortfalls, the first of which is the lack of proximality (cf. p. 212), which is a corollary of the lack of incrementality. I cannot see how proximality can be obtained from a corpus. If it were only proximality of terms, this is probably possible; several models can do that, Freeman286 for example, who, with enhanced distributional analyses, succeeds in extracting from a massive corpus, and at a high computational cost, a kind of exemplarist grammar which is non-categorial, very lexical, and can support syntactic analyses that are curiously precise and pertinent.
But an approach too exclusively centred on terms does not cover the need; it does not provide a) systemic analogies, which I do not see how to extract from a corpus, b) constructional proximality, c) familiarity orientation, d) signification and meaning. For systemic analogies and structural analogies, the possibility of obtaining them from a corpus seems conceptually and technically out of reach for the time being, on any planning horizon.
As to familiarity orientation, frequency cannot substitute for it. A proposal could be to order the corpus chronologically: the first elements in the sequence would be assumed to be more familiar. The corpus would then have to be exploited in this order, which would require developing an incremental method; corpus methods lack incrementality, as mentioned above. The thing is perhaps possible, but it has not been investigated. However, it should be noted that the pre-sequencing of the corpus has a cost, and the economy usually sought in corpus approaches could then not be expected.
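The proposal can be sketched as follows (a hedged, purely illustrative piece of Python: the toy corpus, the naive whitespace segmentation, and the use of the rank of first encounter as a proxy for familiarity are assumptions of the sketch, not parts of the model): the corpus is read in chronological order, incrementally, and familiarity is read off the order of first encounters rather than off frequency.

    def ingest_chronologically(utterances):
        # utterances: an iterable of utterances, oldest first.
        familiarity = {}                    # form -> rank of its first encounter
        for rank, utterance in enumerate(utterances):
            for form in utterance.split():
                # Only the first encounter matters; later occurrences do not
                # change the rank, so raw frequency plays no role here.
                familiarity.setdefault(form, rank)
        return familiarity

    corpus = ["le chat dort", "une porte claque", "la porte du chat"]
    print(ingest_chronologically(corpus))
    # {'le': 0, 'chat': 0, 'dort': 0, 'une': 1, 'porte': 1, 'claque': 1, 'la': 2, 'du': 2}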
Finally, signification and meaning are only partially present in corpora, however hard one tries to "make them talk". Meaning is present in them in a manner which is curious but very incomplete, be it as the "ontologies" which can be extracted within particular bounded domains, or in work on meaning with other orientations, like that reported by Rastier, mentioned above.
A corpus is a snapshot, small or large, homogeneous or heterogeneous, of linguistic
productions. It has to be taken as a manifestation, a symptom. The question of what it
reveals of the causal chains that operated at the time of its production, and how these could be reconstructed from it, remains entirely open; so the question of the possible constitution of a plexus by mechanical techniques is an open one. A plexus must
285 Françoise Gadet in Normand 1990, p. 342.
286 Cf. a contrastive analysis above, in the conclusions section.
comprise proximality and incrementality, whereas a corpus approach is totalist. One of
the signs of the totalism in corpus techniques is the aphorism of Hopper which was
recalled above: "The larger the corpus, the smaller the grammar". I conjecture that this
aphorism applies only if one thinks of a grammar based on categories and rules. It might
cease to apply in a framework in which productivity is rendered without abstractions –
without reified categories or rules – and if this apparatus were substituted with analogy
and proximality. Now, in such a case, what is being made is precisely not a grammar. Thus the base assumptions of the Analogical Speaker are compatible with Hopper's position, but a limit is also traced to what can be expected from the exploitation of a corpus.
8.4. Self-analysis
Currently, in the model, terms must be pre-analysed. This is to be understood as follows: terms occurring at record sites had better be segmented by the descriptor into morphemes or otherwise (not that a syntactic analysis tree has to pre-exist). A term which is not sufficiently analysed constitutes a limit to productivity.
One might prefer the model not to impose this. In particular, an enhancement which is obvious and comparatively within reach would be self-analysis: the plexus would analyse itself. The idea is the following.
Assume a term which is long and in constituent position in the plexus. To the very extent that it is long, its usefulness in the abductive analysis process decreases: it becomes less likely to coincide with an intermediate result of the computation, and therefore less likely to participate in a settlement. Thus when a
constructor record is reached via one of its constituents which is short – this is a
frequent event – if another of its constituents is long, the heuristic path is likely to
remain unproductive if the latter remains unanalysed. Things are different if the long
constituent may be analysed, because it then becomes possible to assess its abductive
coincidence with the converse of the problem's term which caused the arrival on the
record. To analyse the terms which demand it, all the weapons are readily
available: it is possible to trigger a B2-B3 process such as described in Chap. 4 on a
term already contained in the plexus. It is so because the B2-B3 process, initially
designed for an externally received term, depends on the literality of this term only, and
not on its origin. The process may therefore, as it is, be applied to a term of another
origin, in particular to one contained in the plexus.
This being settled, self-analysis may take two modes. Either (mode A) the analysis is performed on the fly, during the computation; any computation branch is likely to initiate one such sub-process. Naturally this has a cost which may incur an explosion of the general cost, but since the manoeuvre increases the possibilities of settling fast, and of settling at all, for a given description effort, it is difficult to predict the balance; an experiment should be made. This mode presents the advantage of allowing the analysis of the term in question to be influenced by the concrete data of the task; globally, this would make its utilization more efficient. Or (mode B) the analysis is made before the execution of the particular task which it is intended to benefit. The schema then is one of a preparation of the plexus: the plexus will have been pre-conditioned in all its parts to acquire the best efficiency, without imposing this burden on the human descriptor. In this mode, the execution is no longer penalized, but the plexus volume increases systematically and perhaps substantially, without this increase being motivated by needs identified on the basis of exemplars.
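The contrast between the two modes can be suggested with a small Python sketch (illustrative only: the length threshold, the function names, and the naive halving used as a stand-in for the B2-B3 process are assumptions of the sketch; the real B2-B3 process of Chap. 4 is of course not a halving).

    LENGTH_THRESHOLD = 8        # purely illustrative notion of a "long" term

    def needs_analysis(term, analysed):
        return len(term) > LENGTH_THRESHOLD and term not in analysed

    def analyse_term(term):
        # Stand-in for the B2-B3 process, which depends only on the term's
        # literality; here a naive halving, just so the sketch runs end to end.
        mid = len(term) // 2
        return [term[:mid], term[mid:]]

    def solve_task(task_terms, analysed, on_the_fly=True):
        # Mode A: analysis happens during the computation, under the pressure
        # of the concrete task, whenever a long unanalysed term is met.
        for term in task_terms:
            if on_the_fly and needs_analysis(term, analysed):
                analysed[term] = analyse_term(term)
        # ... the abductive computation proper would continue from here.

    def precondition(plexus_terms, analysed):
        # Mode B: the whole plexus is pre-analysed before any task is run,
        # trading a systematic volume increase for an unpenalized execution.
        for term in plexus_terms:
            if needs_analysis(term, analysed):
                analysed[term] = analyse_term(term)

    analysed = {}
    precondition(["arrivée", "avec", "indubitablement"], analysed)  # mode B
    solve_task(["invraisemblablement"], analysed)                   # mode A
    print(analysed)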
Self-analysis presents two advantages. Firstly, it alleviates the burden of plexus description; secondly, it helps escape the subjectivity of the descriptor and the limits of his imagination. The results of self-analysis may be innovative and creative; they may be shifted with respect to classical analysis frames, from which the descriptor might not always free himself. For example, the model may perform multiple self-analyses on itself, whereas a human descriptor has to make an effort to that end, forcing the Port-Royalist or Generativist habits which he has learnt.
Self-analysis can be related to bootstrapping287: only a part of the effort of describing constructional analogies would be required from the human descriptor. On the basis of a restricted body of externally provided constructional analogies, the model would then introspect itself and pursue its elaboration on its own. It is to be expected that this might show a 'creativity' which might diverge from what a human descriptor would do; the development should be brought to that point to judge. In such a case, two judgments might be possible: either accept these surprising creations as linguistic, and qualify them insofar as they do not hinder intercomprehension (this could be implemented only after a better development of meaning in the model); or reject them as non-linguistic, that is, extraneous to the sort of productivity we want for the model; in that case a critique should be made of the sort of abduction at stake in self-analysis.
Naturally, an interesting question is the plausibility of modes A and B above. A corollary, or a prerequisite, of this question is the epistemological and theoretical status which self-analysis might have.
At first sight, self-analysis seems to be merely artifactual, a consequence of the artefact which the plexus is. It would help one live with that artefact by reducing the plexus cost for the descriptor.
But one may see more to it. If mode A is adopted, that is, if self-analysis takes place under the pressure of a given task, and if it is constrained by the task's own terms, one will await with interest the appearance of analogical innovations. If the model, operating on a French plexus, already produces j'ave, tu aves, il ave, thus
287 Bootstrapping here is understood as priming, for example "the bootstrapping of a computer", with no particular reference to the semantic bootstrapping of Pinker: Pinker 1984 has been the main proponent to argue that children may use semantics as a bootstrap into syntax, particularly to acquire the major syntactic categories on which grammatical rules operate. Thus children can use the correspondence that exists between names and things to map onto the syntactic category of noun, and physical attributes or changes of state to map onto the category of verb. At the initial stages of development all sentence subjects tend to be semantic agents, and so children use this syntactic-semantic correspondence to begin figuring out the abstract relation for more complex sentences that require the category of subject. Tager-Flusberg 1995, p. 222.
(over)regularizing the verb avoir onto the first conjugation group288, there is some hope that it might also create forms like somnolent289 and more "popular etymologies".
A last word on self-analysis, only to wonder whether a process of this nature might not contribute to explaining the avalanche effect in linguistic knowledge which takes place between two and three years of age and, for non-linguistic cognitive knowledge, well before.
8.5. Treatment of meaning, prerequisites and directions
In this work, I did not seek to cover the question of meaning and I did not cover it.
However, it is not possible to deal with linguistic productivity without coming across
meaning, and meaning was met several times above. Moreover, the options taken to model the static knowledge and the dynamics have consequences, some negative, some positive, for the way it can be addressed.
The purpose of this section, a modest one then, is to present two directions of thought:
'private terms', and the conception of utterance understanding as its 'immersion' in a
plexus. These directions are coherent with the general orientations of this work:
rejection of abstractions, analogical copositioning, inscriptions that are necessarily
contextual, proximality of the inscriptions and of the computation, etc.
Beforehand, some general theses about meaning will be briefly stated, because they provide the background necessary to better situate private terms and immersion. They are stated without argument because, again, this stands beyond the proper perimeter of this work.
8.5.1. Preliminary theses
Rejection of representation. Language does not represent the world. Words do not
represent things. Inscriptions in a plexus do not represent linguistic knowledge: they are this knowledge (along with the dynamics, cf. p. 57). A dynamic model must be (and can be) non-representational. The relations between the productions of a dynamic model and their meanings are to be grasped at operation time, when the model operates; these relations are not statically reified in the model.
Rejection of 'concepts'. Except for "constructed sememes, the definition of which is stabilized by the norms of a discipline, so that each one of their occurrences is identical to their type"290 – but this is not the problem posed to linguistics – there is no assumption of 'concepts' in the mind, which would then have to be understood in their relation to words or linguistic terms.
288 But it does not proceed with this proposition, because the escalation recuperates the forms ai, as, a, inasmuch as they are attested in the plexus, and makes them prevail, or, as in the example, by the impossibility of pursuing the assembly process with the rest of the received context, the form thus construed will only have been a local pun or a small-span garden path.
289 A particularly curious example will show how analogy works, over time, on new units. In modern French, somnolent is analysed somnol-ent, as if it were a present participle; so much so that there exists a verb somnoler. But in Latin, they used to cut somno-lentus like sucu-lentus, etc., and still earlier somn-olentus ("smelling like sleeping", from oleere, as in viin-olentus "wine-smelling"). So, the most visible and most important effect of analogy is to substitute for older, irregular, and decayed formations other ones, more normal, composed with living elements. Saussure 1915/1970, p. 233.
290 Rastier 1991, p. 126.
Meaning is not a counterpart of the form. Meaning is reconstructed each time; to apprehend meaning correctly, it is appropriate to take a dynamic approach.
Meaning issues must be approached within textual context and within situational
context.
Linguistic meaning is just paraphrase. 'Lexical meaning', 'linguistic meaning' are just
effects. They are contingent (but certain types of effects may have a broad extension).
There is no point postulating lexical meanings or linguistic meanings to build the theory.
8.5.2. Arguing for terms which are not formal
To prolong the model in the direction of meaning questions, the approach that comes to
mind is that the treatment of linguistic form and the treatment of what is not linguistic
form be made in the greatest possible continuity, by applying to what is not linguistic
form the methods of analogical inscription, and the types of dynamics, which were
found efficient with the form alone. This track is made credible by the importance taken
by analogy among psychologists and cognitive scientists, and the evidence they bring
out that the basic modes of operation are analogical. It would then be necessary to perform analogies on something other than linguistic form.
If we accept and maintain that analogy holds between terms291, we then have to understand what sort of terms could be at play in such analogies. Do we need non-formal terms? Are we able to conceive of them, and, with them, of the processes that they support?
A first attestation of this need is to be found in Aristotle:
Sometimes, there is no existing name to designate one of the terms of the analogy, but the metaphor will be made nevertheless. For example, throwing grain is to sow, but for the flame that comes from the sun, there isn't a name; however this action is to the sun as to sow is to grain, so that it has been possible to say: sowing the divine flame. There is still another manner of utilizing this sort of metaphor; it is to designate by the improper noun while depriving it of some proper feature; so the shield could be named, not Ares' cup, but cup without wine292.
The predicate applying to the flame which comes from the sun, and which has no name for Aristotle – today we would have radiate in English – can be thought before being expressed. Its being thinkable has a symptomatic manifestation: a metaphor can be built concerning it. The metaphor is analogical293, and invites us to presume that, before being expressed, it is disposed in the mind of the metaphor's author in an analogy with three more things. We also have to presume that something similar is at work in the hearer's mind when he hears; this is what 'understanding' would be. If one postulates this
291 Which is the conception adopted thus far in this work for analogy in the form alone.
292 Aristotle 1980 (Poetics), p. 109.
293 For Aristotle, the analogical metaphor is one of the four possible types of metaphor; it is based on an underlying analogy.
underlying analogy, we need to say between what terms it holds. Specifically, we need a term 'the flame which comes from the sun'. The term is needed not only if it is not lexicalized – it is in English (radiate); it appears it was not in Ancient Greek – but even if it was never before expressed by any linguistic or rhetorical process. We would then need a term which is not linguistic form.
The term called for in Aristotle's example is small and ancient. Here is another one which is big and modern: a high-level mental chunk that lacks a label, from Hofstadter:
That time I spent an hour hoping that my friend Robert, who was supposed to arrive by
train some time during that day in the Danish village of F., might spot me as I lurked
way out at the furthest tip of the pier, rather than merely bumping into me at random as
we both walked around exploring the streets of this unknown hamlet294.
Hofstadter continues: it has to be a mental chunk because otherwise we cannot explain his sudden and integral remembering, fifteen years later, of this episode characterized by unrealistic hope, when, with random combinations of letters, he unrealistically tried to remember the forgotten name of a friend.
This is a rhetorical analogy:
walk up to the tip of the pier where he can't be : find a friend :: combine letters at random : find a friend's name
The elided predicate is 'this is not a good method for', 'doing A, I'm not doing what it takes to achieve B'. Its being elided does not prevent the old episode from making the present situation understood. The analogy is good and it operates well. But its four terms are not linguistic form. These are things to which linguistic form may be assigned, we may talk about them (which is exactly what was just done), but without this being a prerequisite: they are there and operate very well on their own before one talks about them, and even if one never does.
8.5.3. Formal terms and private terms
Thus, beside formal terms, we need terms which are not formal. There are various possibilities for naming them, and it is interesting to look at why some are rejected; this helps to better understand the notion.
Cognitive term does not suit, because formal terms also are cognitive: the linguistic exercise is a cognitive activity like any other. Mental term has the same inconvenience: formal terms can also be said to be mental. Experiential term is not good because non-formal terms may be created by abduction; their relation to experience is then secondary. Perceptual term is not appropriate for the same reason, because we need to allow for terms that are not directly perceptual, which are elaborate, remote from perception, but terms anyway.
294
Hofstadter 2001, p. 515, condensed by RJL.
262
Term without assignment might be possible, because such a term is assigned neither to the signifier nor to the 'signifiable'. The phrase then has no referential inconvenience, but it is not adopted because it is a negative predication. Conceptual term has the already mentioned defect of containing "conceptual", in a field in which this word must be proscribed as a methodological precaution. In the mined terrain of "concepts", "thoughts" and "intellections", one discards in this way misconceptions and connotations that lead into dead-ends.
For lack of something better, 'private term' is adopted, which has the most advantages and the fewest inconveniences. A private term is a term which participates in the linguistic computation but which is not linguistic form. It is private by opposition to linguistic form which, itself, is 'public' because it crosses the interface between speakers. "Private" is attested in this meaning in Wittgenstein (Philosophical Investigations) – yet there it is for negating private language – then in Mandelbrot (1954), Frei 1954 (private object vs. public object) and other authors (but they do not apply it to 'term'). With the important difference that it is not constituted of linguistic form, the private term behaves like the formal term in all other respects: identity, non-essentiality, minimality suspension, ability to coposition with other terms in analogies; this qualifies it to take part in linguistic computation.
Private terms are to be recognized in the terms without existing name of Aristotle or in
the high-level mental chunks that lack labels of Hofstadter.
A term is formal or private, exclusively. There is no private term with an associated linguistic form. Seeing a private term as having a form, without this compelling us to reconsider its quality as a private term, is not desirable for two reasons. First, it would lead to allocating a property to a term, thus contradicting the principle of the vacuity of terms (p. 79). Second, this one-to-one coupling between word and object would not leave the necessary room for ambiguity and paraphrase; it would be a rigid and poor treatment of reference.
A private term cannot be directly observed. Its observation is difficult even via its
indirect effects because the only things ever to be observed are effects of assemblies of
private terms. The best that can be done is to propose such assemblies; their plausibility will be demonstrated if, with them, effects that match what we see in nature can be reconstructed.
Private terms, along with formal terms, can belong to the common ground or to the
topic, that is, elements that are well established in the interlocution, and so they may
serve as targets for reference or as a base for anaphor resolution.
To private terms, (some of) the same computations apply as to formal terms.
Computations may involve private terms and formal terms together.
8.5.4. What is receiving an utterance, what is understanding
In order to talk of the process of comprehension, a first metaphor295 is that of the
message, that is, that of information transmission. This vision is simple but inaccurate.
295 Shannon, Jakobson, etc.
No one today still defends the idea of a clearly defined mediating object, the 'message', which would 'mean the same' for the utterer and for the receiver. This only suits highly coded situations; some of these do happen, but they are only marginal to what linguistics has to cover.
Then 'mapping' was tried: the successful reception of an utterance would operate a
mapping of the linguistic material or of the intention of the utterer onto the 'conceptual
structures' of the receiver. This constitutes progress: i) there is no longer an intermediate, unarguable object (the message, the information), and ii) between interlocutors A and C things might not be exactly the same as between A and B. If 'mapping' is understood in a vague and metaphorical sense, there is no inconvenience, but then not much has been said. For 'mapping' to be made more operatory, more precision is needed about the units that might be mapped, and here there is nothing very firm: first, the candidate units (words, concepts, etc.) all have more than problematic definitions; second, no operational model of such a mapping has been produced which would not break down when facing the smallest metonymy, for example. The
idea itself of mapping, that is of an application in the sense of set theory, induces a
vision that is mereologic (things can be described by their parts) and partonomic (the
things and their parts have properties); among them, correspondences would be
established (the mapping). It is now clearer and more accepted that things do not work
in this way.
The Analogical Speaker allows us to suggest another approach. With the terms, the
plexus, copositioning, abduction and settlement, another track becomes conceivable. It
can only be a suggestion first because this vision is far from being entirely constructed,
and then because it involves many elements so that it is difficult to grasp it in linear
discourse (conventional language usage), and because any drawing that can be made of it very quickly starts to swarm; before becoming unreadable for the reader, it is expensive to produce for its author.

[Figure 33: Elements of the computation for the task "what is to le as une is to un?": the initial terms of the task, priming in the definite-indefinite paradigm, one step in the definite-indefinite and masculine-feminine paradigms, the agents and records reached, and two settlements converging on the result la (0.83).]
I propose to name this "immersion". Immersion is the process which accounts for the reception of an utterance: an utterance is received (and occasionally understood) when its immersion has been able to take place. "Immersion" is proposed to correct the vision of mapping: it grants primacy to analogical ratios, that is, to the copositionings of terms. What is at stake between terms are the copositionings instituted by the analogies that make up the linguistic knowledge, then those that can be abducted from the latter by the computation.
Immersion can be illustrated with an example, which is the computation of the following analogical task: "what is to le as une is to un?". Surely this task is very far from deploying all that would be expected from a decent treatment of meaning, but it makes it possible to suggest how this proposition, once it has progressed, would allow us to see the question. A first figure (Figure 33, above) displays the elements involved in the task's computation.
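Before looking at the figures, the skeleton of such a task can be suggested with a very small sketch (illustrative Python only: the toy list of paradigmatic links and the function solve are assumptions of the sketch, and the real computation goes through agents, channels and records rather than through an exhaustive scan): a settlement is reached when the pair formed by the third term and a candidate is copositioned like the pair formed by the first two.

    # Paradigmatic links known to a toy plexus: (paradigm, term_a, term_b).
    PARADIGMATIC_LINKS = [
        ("masc-fem", "un", "une"),
        ("masc-fem", "le", "la"),
        ("def-undef", "un", "le"),
        ("def-undef", "une", "la"),
    ]

    def solve(a, b, c):
        # Return candidate terms x such that c : x is copositioned like a : b.
        candidates = set()
        for paradigm, s, t in PARADIGMATIC_LINKS:
            if {a, b} == {s, t}:                    # the pair (a, b) primes a paradigm
                for p2, s2, t2 in PARADIGMATIC_LINKS:
                    if p2 == paradigm and c in (s2, t2):
                        candidates.add(t2 if c == s2 else s2)  # one step from c in it
        return candidates

    print(solve("un", "une", "le"))   # {'la'}, through the masc-fem paradigm
    print(solve("un", "le", "une"))   # {'la'}, through the def-undef paradigm

The two calls suggest how distinct paradigmatic routes can converge on the same settlement, la, which is the kind of reinforcement between branches discussed below.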
On the second figure (below), I highlighted in thick lines (brown when reading in
colour) a schema which is the abstraction that interests me. The clusters of thick lines
(groups of three) stand for mappings of groups of terms onto groups of terms which
conserve copositionings. This holds first between the terms of the task and their first
echoes in the plexus, then between the latter and second echoes. After this, two
settlements occur (equal signs on the drawing). Note that starting from the terms of the
task, the immersion branches into two branches (two in this figure, an arbitrary number
in general). The results of the two branches are compatible here (both find the same
term: la) and reinforce themselves, in other cases, they may be different and concurrent.
This is a small example of the general configuration: multiple branches, tree structure (or lattice structure in other cases), several possible settlements, competition or reinforcement; in short, all that was named 'heuristic structure' above. The utterance is
easy to analyse and understand if the immersion is easy. If the immersion cannot take
place, the task does not get solved. If the immersion is difficult, the task gets solved but
badly: the utterance is difficult to analyse and understand.
The immersion is based on mappings, but not one-to-one mappings of elements: here the mappings involve terms in groups of three, with preservation of copositionings at each edge.
An effect of immersion is that the existing parts of the plexus are set into relation in a novel way, which is not without link with conceptual integration. These settings into relation following novel modes occur at each linguistic act and may be connected with learning, but this will not be developed here; it was already discussed above.
[Figure 34: Immersion schema superimposed on the elements of the computation.]
If a mapping had to be found between the terms of the linguistic task and the plexus, it
would have to be along these (groups of) lines.
[Figure 35: Immersion schema alone, between the linguistic task and the plexus.]
In the above figure, the immersion schema296 now appears without the concrete
elements of the task, in order to be better contrasted with what would be a set-theoretic
application à la Cantor (fig. below).
An important difference is visible: in a Cantor-like application the arrows are from one
element to one element. In an immersion, on the contrary, the edges are grouped in bundles; that is to say, these bundles carry the arguments and the predicates together, simultaneously297. It also means that the mechanism can only be contrastive and differential by construction.
There exists in English an interesting usage of "true" in domains of professional practice like timber work or carpentry. To true up is to ensure that the measures are good, to
adjust a window, make flat, make parallel298, put at a right angle299. To true = bring (a
tool, a wheel, a frame) into exact position or form required300. So this language in a way
sanctions epistemological relativism. The French vérifier does that equally well, but the Latin etymology makes this less apparent. Following this, judging the truth and adjusting would be the same thing. We just saw that understanding is an adjustment, the least bad possible, between an utterance and the plexus. Thus truing up is not too bad: taking things in this way corrects the vision of possible worlds301, and at least assigns this 'truth' a perfectly operative status, that of comparison.

[Figure 36: Cantor-like mapping, relating the world as perceived (or linguistic experience), representations, concepts, etc., the represented world (or represented linguistic knowledge), and linguistic or world objects.]

296 The immersion schema is exposed here according to its principal aspect only. It comprises other dispositions, like persistence (the model 'learns' or not) and the creation of novel structures caused by this reception occurrence, structures for which the question of their permanence arises. This is because the memory of this occurrence may consist of new paradigmatic links between existing records, the formation of new records bearing on existing terms, and the creation of new terms.
297 I write 'argument' and 'predicate' using terms of previous theories, but these do not belong to the Analogical Speaker.
298 Informants.
299 Harrap's dictionary.
300 Oxford dictionary.
8.6. Is radical non-categoricity sustainable?
This question was touched on several times before, and it is now possible to discuss it in the light of all we have seen. The discussion will bear on its advantages and limits.
I call 'radical non-categoricity' the option – adopted in this work as a research posture – that the linguistic data, in the static side of the model (the plexus) and in the course of the computations, are and remain strictly exemplarist: they are never grouped or elaborated into abstractions. This option is initially motivated by the various inconveniences associated with categories, which were recalled in Chap. 1 – and are further exposed in an appendix – and by the wish to explore an opposite track to see how far it can be taken. This is what was done and reported so far.
This track in turn shows inconveniences.
Firstly, on a precise example: the priming of the analogical task was found difficult. More generally, it appears that the model, in order to provide results, requires a little more data than what should be sufficient according to intuition: the yield is sub-optimal.
Finally, the demand for computation resources seems high, even if there is no unarguable basis for benchmarking the model against what biological neurons do.
These observations, lessons from psycholinguistics, and what neuromimetic connectionists are able to interpret from their models all suggest that, between a categorical model and a strictly "flat" model like this one, biological neurons do something intermediate, of which the figure below tries to give an image.
These small circles stand for exemplarist or occurrential linguistic facts.
On the left side is the assumption of categories: categorial structures (the K squares on the figure), even with multiple inheritance, even with underspecification, ensure certain functions efficiently, but they do not render flexibility, gradation, or innovation, and they do not explain variation and learning.
301 Tarski (slightly paraphrased): I understand an utterance when I know what the world in which it can be true is like.
[Figure 37: Three options versus categoricity. Left: categorical models, symbolic systems (K squares). Middle: intermediate upsurges, lesser than categories, non-symbolic, factoring the connectivities, yielding better processing efficiency; what neurons and neuromimetic networks do. Right: radical non-categoricity (flat model).]
On the right side: radical non-categoricity (the assumption of this work) is a flat
structure: the exemplars have exemplarist links and there is no other structure. No
abstractions, no a priori groupings on the base of common properties or common
behaviours.
In the middle, a metaphorical drawing presents upsurges: they reach intermediate levels, never quite as high as categories. This middle is alleged to be what neurons do (anatomical neurons or simulated ones): semi-categories, partial, blended viewpoints. They are not just distributional.
A track presenting this character and resting on factoring techniques will be presented in
another publication.
9. General conclusions
9.1. Dynamics are primary and grammar is secondary
After the critique of the intent to explain linguistic processes on the basis of a static,
grammatical description of a language – which is sometimes postulated to be a
prerequisite to the understanding of the dynamics – I showed that directly addressing the
dynamics is a more promising track. A model with this intent focuses on the speaker
(not a language) and dynamical analogy (not categories or rules), whence its name: the
Analogical Speaker.
Its static side was built on analogy, by modifying the notion of paradigm and by defining a notion of proximality; this led to organizing linguistic knowledge as a 'plexus'. In a plexus, analogy and anomaly coexist, which makes room for a flexible interplay between
them. Terms are vacuous and analogies 'coposition' them several at a time. Inscriptions
are necessarily contextual.
In the proposed approach, a plexus on its own, because it is static, proves nothing
without the associated dynamics. Among the inscriptions that constitute it, I defined four abductive movements which form the basis of the dynamics: they articulate the
static side of the model with its dynamic side. The dynamics, as a general framework,
was defined to be abductive and likely to be fragmented into agents in the service of
various heuristics.
For morphological and syntactic analysis, an operative model was implemented and it
was shown that it explains appropriately a great deal of structural productivity.
I defined systemic productivity as that which operates in pluridimensional paradigms; I framed it in its cognitive dimension and showed how it could be constructed in the model and explained by it. The central device for this was defined, implemented, and demonstrated on various examples. It was combined with that of structural productivity to give a first-approach reconstruction of grammatical agreement.
It was shown how this approach to modelling language dynamics, which contains no
metalanguage, within the limit of its current perimeter, responds to numerous linguistic
questions with a certain plausibility.
This approach joins together a static linguistic knowledge and its mobilization in the
dynamics of occurrential acts.
It renders a great number of the stipulations of the grammars, or of static theories with
grammatical orientation, as secondary effects, instead of expecting them to be causal or
explanatory.
A simple model of acquisition was presented, without yet being implemented; it is compatible with the principles of the plexus and of the dynamics, and its predictions comply with the data provided by acquisitional investigations.
Because it is founded on analogy, the Analogical Speaker places itself in continuity with
2400 years of linguistic thought, notably with the repairing analogy of the
Neogrammarians and of Saussure. It provides an implemented demonstration, still
incomplete but very precise on the parts that it covers, of the intuitions that the same
Neogrammarians, then Bloomfield and Householder had formulated without being able
to develop them.
9.2. Plausibility
9.2.1. Reasons of plausibility
The model of the Analogical Speaker is plausible for the following reasons:
1. The connections in the plexus always connect some elements to some elements,
never one to many or many to many. The physiological constraints and what we
know of the brain's anatomy make us think the connectivity of neurons has a
property of that kind.
2. The execution of a linguistic act by the model engages elements in the tens of hundreds (not hundreds of thousands) and crosses a number of layers counted in units. This is compatible with empirical results.
3. Nothing is reified. Categorization effects take place without any category being
symbolically represented. An external observer may abstract categories to give an
approximate description of its behaviour, but the model has no component
providing categories or rules. This is compatible with the empirical and
commonplace fact that speakers talk well before any explicit learning of a
grammar. One may argue that the model requires preset analogical inscriptions.
This must be seen as the result of learning. From there, productivity is
demonstrated.
4. The model is compatible with a proximal and situated vision of linguistic
operation, thus it eschews totalism.
5. Finally, the results of experience with the model present numerous properties which belong to linguistic behaviour in nature: flexibility, gradation of effects, which more categorial models find difficult to render.
However it contains an area of arguable plausibility: the computing apparatus in ABS
may seem exaggerated. This is what we now want to review.
9.2.2. Heuristic structure and working memory
The heuristic structure of the model appears as the homolog of the working memory put forward by psychologists because: i) like the heuristic structure, working memory is the main assumption used to explain how psychological processes are carried out, and ii) like the heuristic structure, its content is transient. Let us take a closer look at the conditions of this analogy.
The first thing to note is the important lag between the six to ten 'chunks' that would be the capacity of working memory302, and the volume of the heuristic structure. With the B2-B3 process, the analysis of the form elle est arrivée avec son homme requires 1032 agents, 31 channels and 1088 products, including all intermediate objects and all non-productive paths. Assume some technical improvements divide the figures by three, assume even that the 'chunks' of the psychologists are constructs bigger than our agents or channels: the numbers in the model are still some forty times larger than the capacity of working memory.
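The order of magnitude of this lag can be checked with a back-of-envelope calculation (the figures are those of the text; taking eight chunks as the midpoint of the six-to-ten estimate is an assumption of the sketch):

    agents = 1032                    # agents for "elle est arrivée avec son homme"
    improved = agents / 3            # assume technical improvements divide by three
    chunks = 8                       # midpoint of the six-to-ten chunk estimate
    print(round(improved / chunks))  # about 43, i.e. roughly forty times the capacity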
Two types of reasons help qualify this lag. Firstly, the heuristic structure not only implements a linguistic task, but it also serializes, on a serial processor, the homolog mental process, which itself has an important parallelism available; a parallelism rate of 10^3 to 10^5 is not unrealistic.
Secondly, even before questioning the inscription principles, the heuristic structure may
be seen as making up for deficiencies of inscriptions in the plexus: it allows finding
results despite inscriptions which could be felt to be locally deficient for the task. This is because the computation has the property of being very persevering: when short and strong abductions do not happen, it abducts over a longer distance, with weaker evidence.
Finally, the very principles of structuring linguistic knowledge in the plexus may have a
part in this lag. The model approximates its analog the best way it can.
The multiplicity of agents (their proliferation) may be interpreted as ensuring principally
the serialization of a parallel process. Consequently, if agents are not directly plausible, there is an explanation for their exaggerated number, and a tenable, provisional response. If
there were not a parallel process to serialize, there wouldn't be so many agents; with an
adequately parallel processor, the proliferation of agents would disappear or be much
reduced (but it remains to say what type of processor it should be, and to understand
what type of parallelism is needed).
The role of channels in the serialization of a parallel process is lesser. The channel contributes in part to the serialization because it ensures, for example, a certain proliferation of syntactic analysis tracks, only one of which will be elected (or two concurrently prolonged, waiting for a syntactic ambiguity to be resolved). But the channel has another function: to separate different instances in the question, whether they are idioreferent303 or coreferent. Consequently, serialization aside, a lag remains between the volume of the heuristic structure and the computing capacity which is presumed to be that of humans.
302 Atkinson 1968.
303 "Idioreferent" is proposed to apply to a term which would be coreferent to no other term.
Is it possible to suggest a connectionist programme in this direction? Rather than staying hung up on the resistant problem of variable binding, accept strict exemplarism and strict occurrentialism (this should not be too difficult because these themes are congruent with connectionism) and try to implement the four abductive movements within a dynamic device which would replace the channels of ABS with a mechanism of better plausibility304. Success in this would solve together the three 'basic functional elements' assessed by Marcus to be prerequisites to progress in connectionism.
9.3. Making a grammar?
A grammar is an intellectual construction aiming at determining statically that which is possible in a speaker's language. This is what pedagogical grammars superficially do for the language of a community, which is supposed to be normalised. It is also what a generative grammar of a speaker's language does; even if the fact that it is named a "grammar" is sometimes regretted, a generative grammar corresponds to that aim. A grammar thus envisaged bounds the set of utterances which are possible in a language, but it does not specify how acts are carried out.
The proposition defended in this thesis contrasts on the following two points:
1. It comprises no grammar, generative or otherwise: it does not define grammaticality a priori.
2. It proposes dynamic (abductive and analogical) models of linguistic processes
based on exemplarist inscriptions. From there an explanatory vision of linguistic
productivity follows.
The status of grammar is thus questioned. It is still possible to make grammars, but they are not a prerequisite to the explanation of linguistic dynamics, the 'possible of language'
being seen thereafter as the de facto result of the computation of acts. Making a
grammar, as an approach, is limited by the constraints it accepts: defining a language as
an ideal object, bounding this object by setting aside the subject and the cognitive
interfaces (perception, proprioception, motor ability, and phonation).
Secondly, it is appropriate to recall again the rule-list fallacy of Langacker, already seen on p. 181. In Langacker's argument the theory does not have to choose: accepting a rule does not exclude exemplars that are regular under this rule (which ones exactly is another question). As seen from the Analogical Speaker, the argument seems strong, especially since no rule is reified here: the model instead features regularization effects which are the result of operations bearing exactly on exemplars, those that Langacker proposes to accept beside the rule. Recognizing this is the second important factor which helps in understanding the constitutional and inescapable limit of the grammatical exercise. Any grammar is bound to fail on phenomena which are marginal for it, but which are important because they are the mark of evolution or variation, that is, one of
304 But the same Marcus, speculating on the sort of elementary apparatus which connectionist models should provide, proposes a notion of "treelet" which looks much like the channel in ABS.
the symptoms of the very mechanism of the object: the factum linguae, which is to be
perceived in the linguistic dynamics and nowhere else.
Is the grammatical enterprise void, then? It is not, because it produces:
- general propositions of a sort which a theory like this one cannot produce, in the direction, for example, of linguistic universals,
- propositions which a theory like this one must consider, i) as heuristic stimulations, ii) as things to explicate and try to reduce.
For example, I think of results of binding theory, domain phenomena, subjacency, the resistance to central embedding, or the prosody-syntax relation depending on whether the language branches left or right. The grammatical enterprise is then repositioned: it does
produce interesting propositions about language, but it is no longer expected to explain
the acts and the operations. For productivity, it may provide a non-explanatory,
approximate vision, but for an explanatory, more accurate one we should rather rely on
dynamic models.
9.4. Summary of propositions
Here now is a summary of the defended propositions. No further justifications are provided; please refer to the text above, possibly via the index.
In the table below, a proposition may be:
thesis: the proposition is defended, against alternative possible ones which are mentioned;
support: the proposition, or a similar one, is necessary to at least one of the theses, but it is not defended for itself; variants might be possible. However, it is necessary for a proposition of this nature to be conceivable and defendable (and perhaps implementable) to support the rest;
moderate thesis: the proposition is adopted as a research posture. The two propositions with this rating are about radical non-categoricity; the conjecture would rather favour a moderate non-categoricity, but for this I do not know how to propose a model.
Proposition / Rating
1. In linguistics, the failure of grammatical frames to account for linguistic
dynamics is circumvented by directly studying the dynamics.
Against: improving a linguistics of the language.
thesis
2. Occurrential linguistic acts are motivated by occurrential, exemplarist, and
proximal dynamics.
Against: abstractions, rules alleged to be operational.
thesis
3. As far as ensuing dynamics – that is productivity – is concerned,
perception is by similarity of differences – that is, by analogies. This
proposition, initially a psychological one, also applies to linguistic acts.
Against: qualia, essential properties, categories.
thesis
4. Similarity of differences is directly the principle of linguistic knowledge
inscription. It holds between exemplars, that is, between terms.
Against: abstractions, properties, prototype and distance to prototype.
thesis
5. Similarity of differences holds within proximality: some inscriptions are
proximal to one another, other ones are less so.
Against: categorization.
Against: location or identification in a totality.
Against: probabilistic similarity within a domain.
thesis
6. Inscriptions are directly the perceived analogies.
Against: inscriptions are an abstraction or an abstractive elaboration
anticipating the needs of the dynamics (prototypes, rules, lexical entries).
moderate thesis
7. In linguistics, probabilities are void as an explanatory track because they
fail to define a 'set of possibilities' compatible with the explanation of acts
and of learning.
Against: probabilities have an explanatory value.
thesis
8. The reduction schema which reconciles a) quasi-uniform observables, with
b) the idiosyncrasy of individual knowledge, and c) a deterministic
implementation substrate, is macroscopic determinism based on stability in
complex systems.
Against: a regularist schema.
Against: a stochastic schema.
thesis
9. The linguistic dynamics are abductive, by movements between an inscription and the proximal ones. They are deterministic in these movements.
Against: algorithmic, based on general rules.
Against: probabilistic dynamics.
thesis
10. The dynamics of acts is productive by using exactly the available analogies and by abducting more analogies from the latter.
Against: break between perception and the productive dynamics.
moderate thesis
11. Terms are empty (they have no 'properties'); they have value only by being
re-identifiable in their recurrences, and by their analogical copositionings.
Against: property-bearing lexical entries.
Against: relations, the essentiality of which would then be a question.
thesis
12. A term is necessarily involved in (at least) one analogy which copositions it with other terms.
Against: monadic, decontextualized inscriptions.
thesis
13. The success of a dynamics is sanctioned by a settlement (that is, a
coincidence) which is copositioned and so involves several terms at once.
Against: single-argument coincidence.
thesis
14. Single-argument, proximality-based distributional similarity is a possible
model, but an imperfect one, of the similarity suggestion required by the
dynamics.
Missing: similarity suggestion better observing copositioning.
support
15. Agent-based solving (ABS) is a model of abductive linguistic computation
which is functional but not plausible.
Missing: a more plausible dynamic architecture.
support
16. The schema for reception/comprehension is the immersion in the plexus of
the linguistic (then cognitive) knowledge.
Against: a first order mapping schema.
Against: evocation of concepts or representations.
thesis
17. The observed regularizations are secondary effects of an abductive
dynamics.
Against: they are the effects of rules.
thesis
18. The variation of individual histories renders exemplar-based dynamics
variant in their detail. However, exemplar-based licensing licenses about
the same things and this reconciles variation with quasi-normativity.
Against: variation explained by stochastic rules.
thesis
19. A speaker does not learn a language; he just learns how to speak.
Bootstrapping aside, a successful act causes the inscription of the new
exemplarist analogy with the lowest cognitive cost.
Against: an innate universal grammar and parameter setting.
Against: evolution of weights in stochastic rules or constraints.
thesis
20. The young subject of cognition and of language at a given moment
succeeds in making his first analogies.
Missing: analogical bootstrapping, which is not covered here.
support
21. Linguistic change is caused by many occurrential modifications in
speakers' plexii.
Against: evolution of weights in the stochastic rules or constraints of a
language.
thesis
Table 20 Summary of propositions
10. Appendix: Rules and categories do not qualify as a theory of operations
The problems of using rules and categories have been abundantly described; this topic is
among the most visible ones in the linguistic literature305. The usage of categories and
rules in natural language processing also faces numerous problems306, which constitutes an additional hint of their theoretical inadequacy.
The argument will be briefly recalled.
10.1. Fragility of a lexical category: the noun-verb opposition
He who undertakes to question the solidity of lexical categories may choose, in French for example, to target the adverb-preposition area, in which the categorial status is notoriously most precarious. He will easily recall items belonging to two categories and the variety of distributional behaviours. He will quote the French word ne, which "categorizes with nothing else"307, and he may recall the abundance of proposed classifications, none of them satisfactory. This is the easy part.
Compared with the categorial fragility of the adverb-preposition area, the noun-verb opposition looks better: it is the most fundamental and the first identified in history308. However, its solidity should not be taken as dogma.
Firstly, and formally, in French and in Romance languages, a categorial leakage is to be
observed through the infinitive, participating in what was sometimes called 'improper
derivation'. One may argue that the infinitive has a special position and see finite verb
forms as a one category and the infinitive as another one. But the phenomenon is still
more flagrant – and the case less easy to solve – in a language, as English, in which
finite verb forms are less marked. If we now leave the Indo-European domain, the
reasons to be confident weaken:
305 Householder 1971, p. 15; Hagège 1976; Milner 1989, p. 86; Lemaréchal 1989, p. 63; Bechtel 1991, p. 243; Auroux 1994, p. 154, p. 175; Laks 1996, p. 153 et seq.; Marandin 1997, p. 156; Koenig 1999, p. 82; Lamb 2000, p. 117; etc.
306 Fuchs 1993, p. 90 et seq. makes a good survey.
307 Martinet 1985, p. 140.
308 Aristotle: onoma-rhema, even if this opposition does not strictly match the noun-verb opposition.
… the universality of the lexical categories under which all the diversity is subsumed is
far from evident. The noun-verb opposition has many degrees, from Romance
languages, in which it is strong, to Salish languages, where it is weak or null, and
Hungarian where it is neater in syntax than in morphology309.
Regarding the semantic determination of the noun-verb opposition, it is possible to quote
De Mauro:
… when, not long ago, a linguist with Benveniste's authority fostered doubts on the
consistency of the traditional definitions of the substantive as the part of speech
indicating the 'object', and of the verb as indicating the "process", claiming on the
contrary that it is the object-process distinction which is a projection and a
personalization of the distinction in our languages between nouns and verbs
(Benveniste 1950, p. 29-36) he caused astonishment and scandal among the
'specialists'310.
More recently, Langacker311 reassesses the noun-verb opposition: ultimately, the
semantic difference between nouns and verbs resides in the way they profile; nouns
profile "things" and verbs profile "processes". It is difficult to disagree.
In sum, the category effect is certainly sharper between nouns and verbs than it is
between adverbs and prepositions but it is not absolute here either. This gives birth to a
"depackaging" approach312: since taking the constitutive properties of a category as a
solid package appears not to suffice, certain authors suggest treating the properties
separately. This of course disqualifies such sets as possible universals:
The best way to argue against universals consists of "depackaging" the properties of
certain categories: by showing for example that there exists in such or such language,
elements which have such property of the category "noun" and such property of the
category "verb". In such a case, neither the property package defining nouns nor the one
defining verbs, nor the one associated with the elements in question, can be considered
as universals. Auroux 1998, p. 44.
A figure of depackaging is to be found for example in the complements to HPSG313
recently proposed for morphology314: accounting for lexical productivity and yielding a
behaviour which produces rule effects without rules, by postulating abstract lexical
categories and building on category intersection.
In the line of the Minimalist programme315, Distributed Morphology adopts a
deconstructive position on lexical categories; the following quotation is taken from Rolf
Noyer on the Internet:
309
Hagège 1976, p. 93. On this question, cf. also Hagège 1999, p. 69 et seq.
310
De Mauro 1969, p. 168.
311
Langacker 1998, pp. 17-19 summarized; this position was expressed as early as 1987.
312
Auroux's term in French is "décompactification".
313
Head Driven Phrase Structure Grammars, for a summary introduction and a bibliographical
orientation, cf. Abeillé 1993.
314
Koenig 1999.
315
Chomsky 1995/1997a
A related hypothesis (Marantz 1997a, Embick 1997, 1998a, 1998b, Harley & Noyer
1998, to appear) contends that the traditional terms for sentence elements, such as noun,
verb, adjective have no universal significance and are essentially derivative from more
basic morpheme types.
In Barner et al. (2002) a refusal of lexical categories on linguistic, neurolinguistic and
developmental grounds is to be found: postulating categories in the lexicon only
increases the conversion overload with no advantage. They also envisage a solution like
that of Distributed Morphology, with some qualms.
In conclusion, all this amounts to negating lexical categories: effects of categorization do
obtain, but they are precarious and contingent. Thus, far from postulating lexical
categories in the theory, we should rather find a different explanatory mechanism and,
conversely, explain categorization effects in the lexicon to the exact extent to which
they are found.
But perhaps functional categories are less questionable.
10.2. Functional approach, the grammatical function
The functional approach first constructs the grammatical function by opposing it to the
lexical category.
We shall briefly review how the grammatical function develops in the history of
linguistic thought, and then make a critical assessment of the most prototypical one, the
subject316 function, which is given as the most solid.
10.2.1. Categorial label and grammatical function
The functional viewpoint seems to appear in France in the 18th century. According to
Swiggers317, a functional approach can be found in the Père Buffier who, for the first
time, recommends that a distinction be made between the subject of the verb (which
commands its agreement) and the subject of the action (Brutus in Caesar is murdered by
Brutus).
Again in the 18th century, the Abbé Girard, still according to Swiggers, recognizes
seven functions: the subjective, the attributive, the objective, the terminative, the
circumstantial, the conjunctive, and the adjunctive, plus two governments (Fr. régimes):
the constructive and the enunciative governments, the latter breaking down into
dispositive government and concord government. Dumarsais will not mention functions.
316
The functional approach could be complemented with thematic roles (or agentive roles depending on
authors) but this will not be done here to preserve focus; still another extension of the functional approach
is possible: The notions of topic (what we are talking about) and of comment (what is said about it)
belong in principle to a semantic theory which would contain functional notions (Milner 1989, p. 372).
This will also not be covered here.
317
Swiggers 1997 p. 192
Moving now into the 20th century, there is no room for functions in the work of
Harris318; in France, functions will be ignored by Bally and Tesnière, and recognized by
Benveniste:
Of all the European structuralists, the closest no doubt to Chomsky's conceptions in his
concrete investigations was Emile Benveniste. (…) about the genitive in Latin
(Benveniste 1966, p. 140), from purely syntactic considerations, he succeeds in showing
that all usages of the genitive boil down to a unique function: the transposition within
the noun phrase of relations which were initially at sentence level. (…) Benveniste does
not make the error – common to Bally and Tesnière – of confusing
two different levels: categories and functions. He realizes that this is a syntactic
problem with impacts on morphology319.
From this brief historical survey, let us retain what Ruwet stresses: the interest of
the function is to make a separation between properties (lexical category, categorial
label, etc.) which would be inherent in a language unit, independent of its uses,
and the role which the unit may play in specific usages. Bloomfield, for example, clearly
states the difference between the functional units and their formal classes:
Certain words and syntagms may occur in the position of actor, some other ones in the
position of action. The positions in which a form may occur are its functions or its
function. (…) Thus, all the English words and syntagms which may occupy the position
of actor in the actor-action construction, form a large formal class which we may call
noun phrases; likewise, all the English words and syntagms which may occupy the
position of action in the actor-action construction, constitute a second formal class, and
we shall call it conjugated verb phrases320.
The lexical form of any real utterance as a concrete linguistic form, is always
associated with a grammatical form: it occurs with a certain function, and these
privileges of occurrence, collectively constitute the grammatical function of the lexical
form. (…) The functions of the lexical forms are created by the selection taxemes
which allow constituting the grammatical forms. The lexical forms which share a
function, whatever it is, belong to a common formal class321.
However:
The functions of the lexical forms appear as an extremely complex system… Different
functions may generate overlapping formal classes322.
Here is where this theory stops. It limits itself to making statements because the
grammar of a language is made up of a very complex set of habits323.
For his part, in order to account for the properties of linguistic terms which are not
inherent to them, Martinet considers, then quickly rejects, the assumption of a monème
de position (a 'position morpheme'):
318
Maurice Gross , seminar, Univ. Paris 7, 2000.
319
Ruwet 1967, p. 231.
320
Bloomfield 1933/1970 p. 184.
321
Ibid. p. 248.
322
Ibid. p. 248.
323
Ibid. p. 249.
Among the meaning effects which do not match a signifier characterized by one or
more distinctive features, those that manifest themselves by the respective position of
certain monèmes [morphemes] in the chain must be pointed out […]. In such case, one
might possibly be tempted to talk of 'position monèmes' […]. But, as units of this type
belong to the well-characterized class of functions, and that, in this class, we will need
to distinguish the signifiers composed of distinctive features from those that manifest
themselves by a particular disposition of the units in the chain, it is appropriate to keep
the term monème for the former. It will also be appropriate not to equate monème and
function324.
Then he makes the same distinction as Bloomfield:
… the compatibility of the nominals with the verbs may undergo very different forms.
Which is what is referred to when talking of the various grammatical functions. If we
acknowledge this term 'function', we will say that beside the monofunctional relations
that take place between the modalities and their kernels, between the attributive
adjectives and the names which they determine, we find plurifunctional relations
between nouns and verbs. The question which arises is by what means can a language
make explicit the different functions325.
What seems to surface here, without being well differentiated, is the distinction between
grammatical function and thematic role. That distinction will be drawn later by generativism,
and I shall not deal further with it.
In order to assess the theoretical value of the grammatical function, let us now take a
closer look at the most important one, the subject function, and at the doubts that
arose about it in the second half of the 20th century.
10.2.2. Contingency of a functional category: the subject
At first accepted in the European languages as a non-problematic notion with a potential
for universality, the grammatical subject came under attack when phenomena were scrutinized
and attention moved to other languages. A new consensus was not reached, and
Langacker summarizes the general state of disagreement as follows:
One basic problem for a symbolic account of grammar is to characterize the notions of
subject and object. There are few topics on which linguistic theorists exhibit such a
striking lack of consensus. About the only thing virtually all of them agree on is that a
conceptual definition valid for all subjects or all objects is just not feasible. 326
Without going into the details of the observations that lead to questioning the notion of subject
– please see the references provided below – the positions fall into three classes:
abolitionism, formalistic retreat, and 'depackaging'327. Some apply to a single language
(they will be illustrated here in the cases of Japanese and Basque), others are
presented by their promoters with a cross-linguistic scope.
324
Martinet 1985 p. 45.
325
Ibid. p. 163.
326
Langacker 1998, p. 26.
327
This is Auroux' term, again, cf. supra.
In Japanese, Maes328 notes the impossibility of maintaining several important properties
of the subject:
… the subject in Japanese [is] a "complement as any other", that is, an optional
modifier of the utterance […]. It is easy to show sentences not only deprived of a
(narrow) overt subject, but also ones which have no covert one which could be uttered,
or more precisely, which could be substituted329.
This position is abolitionist: the subject ceases to be characterized, it ceases to be
necessary. For this author, and for most linguists of Japanese, the nearest notion
presented as stable in this language is the topic-focus opposition, but this does not
'salvage' the grammatical subject: it covers the descriptive and theoretical need
differently.
In Basque, the subject is put in doubt by Martinet in favour of a different mechanism:
André Martinet330 proposes a new and clever interpretation of the ergative construction
in Basque. For him, Basque belongs to a language type which ignores the subject-predicate
syntagm and regularly constructs its utterances by the successive
determinations of an existence predicate331.
More recently Rebuschi (1997) also negates the subject in Basque and proposes
functions specific to that language. He sees five possible properties for the subject, and
an appropriate analysis of Basque must distinguish them: i) noun-verb agreement, ii)
"zero rank complement", that is, a complement mandatory to the construction of a
conjugated sentence, even an "impersonal" one, iii) one-place construction with
neutralization of any opposition between any semantic roles, iv) unique NP governing
several transformations, and v) thematic subject of a sentence332. He identifies in Basque
two 'polar roles': the agentive, and the objective; two additional roles: a dative and an
instrumental, plus a few secondary roles and concludes:
Depending on the case, Basque favours either the /+animated/ feature or the /+acting/
feature. Calling on the concepts of subject and object seems insufficient to analyse the
phenomena associated with transitivity, ergativity, and thematization333.
Like Martinet, Rebuschi makes for Basque theoretical propositions which are
substitutive because they cover differently the descriptive and theoretical need that the
grammatical subject was supposed to meet. Coyos334 does the same thing by negating
the notions subject and object and proposing that of actualisateur, generalized in the
case of the Basque absolutive, non-generalized for the ergative.
328
Maes 1980, p. 214.
329
And it should be added that, in Japanese, there are also sentences with several noun phrases, for which
we are without criterion to tell which one could be the subject.
330
Martinet La construction ergative et les structures élémentaires de l'énoncé in Journal de psychologie
normale et pathologique 1958, p. 377-392.
331
Lafon 1960, p. 613.
332
It is interesting to compare this list with the different acceptations of "head" according to Zwicky
(1985).
333
Rebuschi 1997, p. 2.
334
Coyos 1999, p. 309.
This position, near-consensual, is the same as the one we identified in the linguistics of
Japanese.
Facing the question of the subject, a second type of position has been announced above
as 'formalist retreat'; it conserves the subject but with a definition calling on one
criterion only. For an increased universality this criterion must become formal. Gross
and Milner follow this track, each with a different criterion. For the subject, Gross
proposes a definition based on agreement with the verb and on that alone:
It will be possible to define the subject as the term of the sentence which agrees with
the verb. A definition of that sort limits itself to introducing a terminological precision
which connects the formalized combinatorial description to school grammar335. This
preserves the usefulness of the didactic definition. But it excludes that it be the unique
concept bearing on the varied and complex set of phenomena […]. Therefore, from a
theoretical standpoint, the notion 'subject' has a very restricted place in syntactic
description. So is it from a semantic standpoint, the coexistence in a language of forms
like:
John likes moving pictures.
Moving pictures interest John.
very well shows that the semantic role of the subject is almost null, since its
inversion with the object plays no part in the interpretation of the signifier chain
in these two sentences336.
Milner, for his part, proposes to salvage the subject by adopting an indirect viewpoint,
dependent on other, more structural postulations:
One may define the subject as the N which c-commands the content of S and which has
S as its domain337.
However, it should be noted that the notion of c-command depends on that of VP, which
assumes that, among the NPs surrounding the verb, one has already been distinguished
as the subject: this condition is necessary for the VP to comprise the other NPs, and
precisely to exclude the subject. So Milner's proposition appears circular.
The third type of theoretical position regarding the grammatical subject consists of
separating its properties and noting that subsets of these are to be observed, without their
having to be the same in all cases.
Creissels (1995), in order to rescue the subject – which is a question in African
languages – "depackages" its definition. In this book, the section La notion de sujet (p.
217) presents the various definitions which may be given for "subject": nominal
argument which commands the inflection of the predicate, mutual presupposition with
the predicate, etc. For Creissels, the subject in a language is that which has a set of
syntactical properties globally comparable, as for their underlying principle, to those of
the Latin or French subject, even if they are not identical in the details (p. 219). In view
335
But it is operatory only in languages in which the verb undergoes agreement, which is not verified in
numerous languages, notably some Asian languages (Gross' footnote).
336
Gross 1996.
337
Milner 1989, p. 669.
of that, a subject must not be refused to the Japanese language (p. 220). The nominal
constituent recognized as subject must manifest its nature by a specific transformational
behaviour which, in its details, may vary cross-linguistically, but which globally must
fall under a hierarchy of argumental functions, the top of which is occupied by the
subject (p. 221).
Along similar lines, Langacker renounces a narrow definition of the subject:
I propose that subject and object status ultimately reduces to a kind of focal prominence
conferred on participants in a profiled relationship. In particular subject and object
nominals are identified as respectively specifying the trajector and the landmark of a
profiled relationship338.
He distinguishes properties which are "preferred" for the subject, with prevalence for
semantic ones:
A subject is more likely than other nominals to be the controller for verb agreement, the
antecedent for reflexivization and pronominalization, the pivot for relativization, the
controller for complement-subject deletion, the source of floated quantifiers, the
understood subject of adverbs and subjectless adverbial clauses etc… There are
obvious problems in trying to define the notion subject by means of such properties.
But from my standpoint, this effort misses the point in any event. The trajector / subject
notion is not at root syntactic, but rather semantic, and its attendant grammatical
correlates are not criterial, but rather symptomatic of the special salience that trajectors
(in particular clausal subjects) have by virtue of their roles as relational figures 339.
On which Croft comments:
For Langacker, there is a semantic basis to subjecthood, but it is not causation: a
subject is a trajector, that is, a profiled figure. […] A subject is not just a figure but also
in profile. Langacker's conception of the semantic structure of linguistic units is
essentially an adaptation of Fillmore's frame semantics analysis. The meaning of a
linguistic unit is not only what the unit denotes (profiles), but also the frame (base in
Langacker's terminology), that is, the additional concepts presupposed in the profiles
part of the meaning, which are therefore present in the "background" or 'base' 340.
The position of Langacker evokes, and anticipates, Optimality Theory which, by
stressing constraints at the expense of essential properties, also "depackages" the subject
in its own way. This theory will not be developed here, but an example of an optimalist
treatment of the grammatical subject was met p. 230.
At the end of this survey of the criticisms made of the grammatical subject, it finally
appears that this notion, when not simply negated, can be rescued only by a formalistic
definition, which is very impoverishing, or by dissociating its properties; and these happen
to be found together only as subsets, or only occasionally. If something might then be a
candidate for universality and form a theoretical base, it would be these properties, but
not the notion of subject itself, which is now disqualified.
338
Langacker 1998, p. 26.
339
Langacker 1987a, p. 231.
340
Croft 1993, p. 34.
10.2.3. Functions or organs?
A general limit of the functional viewpoint is to be seen in this passage of Newmeyer
commenting on Givón:
… Even if it were correct that all structure is ultimately artifactual, the conclusion that
it is therefore misguided to characterize formal systems independent of the functional
factors that shaped them is false. This point can be illustrated by developing further an
analogy Givón himself introduces early in "On Understanding Grammar". He writes:
Imagine an anatomist describing the structure of the human body without reference to the
functions of various organs. But this is precisely what happened in transformational-generative linguistics: by fiat, a priori, and with no visible empirical justification, an attempt
has been made to describe the structure of human language, both syntax and phonology,
without reference to natural explanatory parameters.
Givón is apparently unaware that there are anatomists – histologists for example – who
do precisely what he finds so unthinkable; they describe the structure of the human
body without reference to the functions of the various organs. And they have good
reasons for doing so. First because they show that similar structures can perform very
different functions and that many anatomical functions are performed by diverse
histological structures. Some structures (the appendix, for example) serve no useful
function at all, while others (the gallbladder, for example) have phylogenetically been
adapted to novel functions. And second because some anatomical structures serve no
known function. Clearly it would be unreasonable to postpone their study until their
function is known. The point is that the organs, tissues and so forth, of the human body
form structural systems that interact with the functional systems of the body (digestion,
reproduction, etc.) in extremely intricate ways (p. 121). This would have no serious
consequence if it turned out that there were in language a one to one match between
syntactic structure and communicative function341.
Givón puts forward "natural explanatory parameters" in support of the functional
viewpoint in linguistic theory: it would not be reasonable for anatomists to ignore – as
the Generativists do – the functions of the organs that they describe. Newmeyer finds
this argument a weak one because the functions of the organs are not always clear and,
when we understand them, the organ-function mapping is seldom simple.
This analogical argument indeed reveals a limit of the functional approach in linguistics.
In languages, organs (structures) are to be found for which the functional mapping is not
simple: expletives, agreement phenomena implying redundant marks, etc. This means that
the structures themselves must not be neglected (a grammar or linguistic theory which would be
functional only would not suffice), but it does not allow us to ignore the functional
viewpoint; both are necessary. What was shown so far is that even the coupling of both is not
sufficient if they are taken categorically – this comes in addition to the Newmeyer-Givón argument.
341
Newmeyer 1983 p. 122.
10.3. A brief reminder of rules refutation
As for rules, the situation is not more satisfying than for lexical or functional
categories. Their refutation is the subject of an abundant literature. Let us sample it
briefly, beginning with Skousen, for whom rules give rise to undecidable attribution
conflicts, whereas speakers cope very easily with the multiple cases which rules do not
address well:
[…] there is empirical evidence from language behavior that the boundaries between
different types of behavior are not well-defined. I consider [in my book] a number of
examples from English: children's use of the indefinite article (a/an), misspellings,
morphological extensions, pronunciation of nonce spellings, experiments with voicing
onset time, and Labov's semantic experiments. In addition, there are some conceptual
problems with rule approaches. One particular difficulty is the indeterminacy that
occurs when either no rule or more than one rule is applicable. Yet evidence from
language usage clearly demonstrates that speakers can readily deal with cases of
missing information and ill-formed contexts. In addition, rule approaches have
difficulty in dealing with redundancy342.
Laks, for his part, writing in 1993 during the 1989-1994 debate between
connectionism and the symbolist approach, cast doubt on the explanatory
power of rules as operating devices, particularly in phonology:
Writing rules is a synthetic mode of description which allows us to embrace facts in a
single sight, whence their obvious heuristic value. Even if, from another viewpoint their
explanatory value is a question, it may be the case that, at the end of any explanatory
analysis, there remains a residue of regularities that no properly phonological constraint
active in synchrony will be able to explain: the historical and social character of
languages does not let itself forget so easily. Finally, we must not forget that
universalizing rules as parametered principles leads necessarily to particularize the
systems in which these principles bear effects. A certain amount of arbitrariness is
inevitable and it is not sure that it is entirely on a single side, that of the systems.
Nevertheless, the language of the rule as such is fundamentally unable to provide an
explanation of properly phonological phenomena. Formal rules work on symbols which
are not conceived of as objects with a substantial reality. Only after a translation can
these symbols be related with properties of the substance. Thus the rule asserts
regularity without providing a ratio to it, or it does so with circularity. Making a rule is
assessing a regularity; the regularity is not a consequence of the rule, but the rule of the
regularity. We predict, but without being able to explain343.
10.4. Conclusion: a descriptive approximation but not a theoretical base
In sum, however diverse and inventive the kinds of rules and categories proposed to account
for the diversity of linguistic facts might be, limits and residues are always met. It
is not that categorization effects do not occur – on the contrary, they are quite manifest –
but we have to conclude that their mass cannot be circumscribed in any categorial frame
342
Skousen 1989, introduction.
343
Laks 1993, p. 8.
applicable to all cases. Even if we found one, it would still have to be shown how it is
learnable and how it copes with variation and evolution.
This leads to renouncing categories and rules, which is what many linguists already do:
Chomsky himself344, finally refusing rules and "categorial labels", Dryer (1997), Croft
(2001), to say nothing of the connectionists, since in connectionist models categories are
inherently absent.
344
The P&P approach maintains that the basic ideas of the tradition, incorporated without great change
in early generative grammar, are misguided in principle – in particular, the idea that a language consists
of rules for forming grammatical constructions (relative clauses, passives, etc.). Chomsky 1995/1997a
(Minimalist Program), p. 5
11. Appendix: The slot-filler schema,
a historical picture
In order to complement the discussion on the slot-filler schema made in Chap. 1 and to
clarify it, this appendix presents a table of the aspects that this question took in the
history of linguistic thought. Then the form taken by the slot-filler schema in
construction grammars is discussed in greater detail: they add some flexibility into it,
but they conserve the schema.
11.1. Table of some figures of the slot-filler schema
The table below presents, in chronological order, figures taken by the slot-filler schema
for different authors. It is not exhaustive; the intent is to suggest how a constant theme is
recurrently dressed up in different fashions.
The slot-filler schema first appears rudimentarily in Aristotle who differentiates onoma
and rhema, and says that a sentence is made up of both. There isn't yet a clear distinction
between the slot-defining structure, the names of the slots, and the conditions under
which potential occupiers qualify.
Thereafter and for long, grammarians will adopt a point of view centred on the 'parts of
speech'. In Arnauld and Lancelot (1660/1997), the slot-filler question is only faintly and
indirectly visible.
The question arises again in Tesnière, in the 1950s, with the 'valence formulae' (Fr.
formules valentielles).
In generativism, firstly, the derivational rule both defines the slots and provides for
their occupation. Secondly, the transformation rule plays a part in the modification of
this economy. This theory is characterized by non-occupied slots: either empty terms in
the generation process, or the traces which are left behind after a transformation has
been performed.
Milner (1989) develops a post-generativist conception which differentiates between
'place' (defined in the overt form) and 'position' (with a more syntactic character). The
occupation may be coincident or distorted. Something more will be said on this in
another appendix, p. 308. In particular, I stress there how weak the definition of
coincidence is.
Author, school of
thinking, domain
Structure defining
the slots
Name of a slot
Name of an
occupier
Aristotle
[not named]
onoma, rhema
onoma, rhema
Arnauld & Lancelot
+/- rection,
prescriptive rules
case, régime
[not named]
Tesnière
formule valentielle
valence
actant
Martinet (for
example)
Chomsky (Aspects)
(not thematized)
function (subject,
object, etc.)
word, syntagm
Generativism with
transformations
derivational rule,
transformation rule
e.g. NP, VP, etc.
syntagm,
lexical item
Government &
Binding
in addition to
the latter:
thematic structure ,
theta grid
thematic role,
morphological case vs.
abstract case, structural
case vs. inherent case
argument
to saturate a
thematic role
terms
constructed
from notions
a term instanciates a place,
becomes the argument of a
predicate
Fillmore
case, semantic case
Pottier
conceptual case /
linguistic case
Culioli 1982, p. 19.
Culioli 1990, p. 49.
schéma de lexis
Shaumjan 1987
predicate frame,
operator,
Givón, McClelland
case frame
place
a syntagm has a
function
term acting as
an operand
case-role
Tanenhaus 1988
gap position
filler
Milner
position, place
argument
Kerleroux
Predicate of the
occupation
syntactic skeleton
Lemaréchal
to saturate a
valence
Creissels
predication scheme
valence, argumental
function
Langacker
valence relation
valence?
Vergnaud, Kaye,
Lowenstamm
(phonology)
skeleton of positions
position
protagonist
Table 21 Figures of the slot-filler schema in linguistics
11.2. Table of the slot-filler schema in neighbouring fields
A similar slot-filler schema can be found in mathematics, in computer science, and in
cognitive science, as the table below shows. This table is presented although it does not
bear on linguistics, because between linguistics and neighbouring domains theories
interfere and cross-fertilize; so much so that the lexicon in columns 2, 3 and 4 is
partly identical to that of the table above.
Author, school of thought, domain | Structure defining the slots | Name of a slot | Name of an occupier | Predicate of the occupation
Mathematics | operator | | operand |
Mathematics | function | variable, parameter | value | a variable takes a value
Informatics: subprogram call (von Neumann) | invocation schema, argument schema, operator | argument | variable or value provided as a parameter | provide a parameter, give an argument a value
Informatics: naming space mapping, addressing space mapping | | symbol, symbolic address, reference | "real" address = defined in the target space | resolution/binding of a reference/of an address
Prolog (Colmerauer) | Prolog clause | goal | atom | satisfy a goal
Cognitive science | n-ary predicate | variable | | bind a variable
Table 22 Figures of the slot-filler schema in neighbouring domains
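To make the informatics column concrete, here is a minimal, purely illustrative sketch (in Python; it is not part of the model defended in this work, and all names in it are hypothetical) of the slot-filler schema as it appears in a subprogram call or a frame: slots are named places, fillers occupy them, and the predicate of the occupation is the act of binding.

    # Illustrative sketch only: a frame whose named slots are filled by occupiers.
    def make_frame(slot_names):
        # A frame maps each slot name to its occupier; None marks an empty slot.
        return {name: None for name in slot_names}

    def fill(frame, slot, occupier):
        # Predicate of the occupation: bind an occupier to a slot of the frame.
        if slot not in frame:
            raise KeyError("unknown slot: " + slot)
        frame[slot] = occupier
        return frame

    # The actor-action construction seen as a two-slot frame:
    clause = make_frame(["actor", "action"])
    fill(clause, "actor", "John")
    fill(clause, "action", "likes moving pictures")
    print(clause)  # {'actor': 'John', 'action': 'likes moving pictures'}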
11.3. The slot-filler schema in construction grammars
With the construction grammars, a progress towards less categoriality is made: beside
the lexicon which is their first component, they define a set of constructions (cf.
Goldberg 1995 infra):
C is a construction iff C is a form-meaning pair such that some aspect of meaning or
some aspect of form is not strictly predictable from component parts or from other
previously established constructions (p. 4).
The construction determines slots (even if the notion of slot is not explicit in Goldberg)
and no drift seems possible: in all the instantiations of a construction,
its semantic contribution is the same. This limit is softened by the possibility of creating
all the necessary constructions: there is no limit to their number.
A verb has systematic difference of meaning in different constructions and this is
attributed to the constructions (p. 4).
Since the set of constructions may now proliferate, it is important to maintain its coherence
and preserve the economy of the theory. A structure is proposed to this end:
constructions are arranged into a lattice:
The entire collection of constructions forms a lattice, with asymmetric inheritance
links, which accounts for generalizations among them. It captures related form and
systematically related meaning. … Inheritance holds among constituents internal to
constructions and so grasps generalization about the internal structure of constructions.
Multiple inheritance applies to an instance which may be motivated by two distinct
constructions. … A highly recurrent motivation link is analogous to a rule (conclusions
of chap. 3, summarized).
The verb is demoted from the central place of the element par excellence – if not the
only one – which assigns positions to other ones.
The syntax and semantics of a clause is not projected exclusively from the main verb
(p. 219).
We also note that the notion of head is absent from the theory.
A few questions aside (for example, how the selectional restrictions which apply
to the possible participants in a construction are to be expressed), this theory calls
principally for two remarks.
The first and most important one is that the theory maintains a conception of semantics
as form-meaning association. Even if we are now equipped to detail the constructions as
necessary, there is no room made for interpretation and it is not clear how figures,
metonymy for example, can be accounted for.
The second remark is that, among its goals, this theory does not try to account for
occurrential linguistic acts; it says nothing about them, nothing about the analysis process
for example345. In the same line, there is no place to say how the success of an occurrential
act may slightly modify the linguistic knowledge. Nor is anything said about inter-speaker
variation. However, although this is not directly apparent in the text, one may suppose
that a slightly variant configuration of the lexicon and of the construction lattice might
yield inter-speaker variation; it seems this theory has such a potential.
Inheritance among constructions is fostered by Goldberg (1995, p. 2). This requires
the things to be inherited to be formalized, that is, symbolized. As with HPSG, we
stay too close to abstractions.
Four types of inheritance links are provided: polysemy, metaphorical extension, subpart,
and instance. One may suspect that this categorical quadripartition carries the
risk of being unable to respond to the need for graded and intermediate effects that the facts
will reveal, other than with ad hoc responses.
A "normal mode" opposed to a "complete mode" renders inheritance less intractable but
"normal mode is designed to allow for sub-regularities and exceptions". This theorydestructive proposition omits assigning a place in the theory to sub-regularities and
exceptions and to specify how they are acknowledged; what is needed on the contrary is
this to be done in a manner which is inherent, constitutive, and homogeneous with the
rest.
The facts that suggest inheritance are much better treated, as we saw, by a dynamic
computation, seen as a process applied to the linguistic knowledge, rather than by a
structure (lattice with inheritance) described as static.
345
A mechanism like unification, for example, is mentioned (p. 14) for unification grammars, but neither
this one nor another one is adopted for the theory defended in the book.
12. Appendix: Specification of the plexus
12.1. Plexus: introduction
In the Analogical Speaker, linguistic computations operate on permanent inscriptions
which are the static side of the linguistic knowledge: the plexus.
A plexus is made up of records consisting of sites (four currently). Each site may be
occupied by a term. Between records, paradigmatic links may be established. Terms at
the same sites in linked records are said to be homologous. Thus a plexus is a graph of
records for the relation 'paradigmatic link'. A connected part of the graph is a paradigm.
A plexus captures structural analogies and systemic analogies, both interfering in
specified modes.
Within a paradigm, connectivity is important: proximality matters within paradigms.
With proximality, proximal categorization effects are obtained, and the regularization of
linguistic facts onto one another is proximal. The graph of the records linked with
paradigmatic links thus has a connectivity motivated by linguistic and cognitive reasons.
This aside, a paradigm's connectivity as a graph may be diverse: linear, star-shaped,
with many or few cycles, long or short cycles, etc.
12.2. Term
12.2.1. Definition of 'term'
The notion of term was defined and discussed on p. 193. Terms are exclusively either formal
or private.
Formal terms have a linguistic form, cross the interface between speakers and
participate in utterances. There are no homonyms, that is, all formal terms have different
forms.
Private terms have no linguistic form; they do not cross the interface and do not overtly
participate in utterances. Private terms are a postulation which is felt to be necessary but
they are little developed currently (cf. p. 262).
Formal terms and private terms occupy sites in records and they take part together in
linguistic computations.
12.2.2. Is a 'table of terms' needed? How far can the lexicon be downgraded?
Formally, the model contains a 'table of terms'346, a kind of impoverished lexicon in
which each term has its individual place. This table is a question in itself; it is not
certain that it is functional and necessary in all respects. Let us examine, one by one,
various possible reasons for having a table of terms; this is clarifying because it helps
in understanding what a term is.
In general, a table is made to methodically record data about different items. In
linguistics, the intent is to have available a locus to record linguistic terms; a 'table of
terms' would fulfil that need. This is what is always done in natural language processing
systems; this is also what linguistic theories do: generativism has a lexicon, Mel'cuk and
Shaumjan have words (even if they don't agree to distinguish homonyms or to melt them
into one term), HPSG has very rich lexical descriptions. I showed why I refused the
assumption that terms might have properties. Terms must be discrete, identifiable,
re-identifiable in their recurrences, but they must stay 'body-less', property-free,
non-essential. They take their efficacy only from their occurrences and from their mutual
copositionings in these occurrences. Consequently, this reason – recording properties – is
not a valid justification for a table of terms.
Let us assume this. "We need anyhow to distinguish what needs to be distinguished: if
we have two homonyms, it is convenient to store each in a different entry of the table.
Doing so, even if properties are refused for terms, we know at least what distinctions are
made." Now it begins to be known that dictionary practices in this respect are variant,
and that there is no way to give them a solid foundation. This problem is not one just for
lexicographers; it is a theoretical question in the first place. We also know that the
choice between the different entries thus created is itself intractable. In natural language
processing for example – a domain in which lexical categories are usually recognized –
it causes a proliferation of analysis paths which then must be reduced347 using methods
to which it is not possible to give a sound foundation. Finally, I showed section 6.1.2.
Homography, accidental homonymy, syncretism (p. 160), on the example Fr. été, how
the previous distinction between the season été and the past participle été was not
necessary: it not having been done does not hamper the success of the computations
because the context provides for it. Therefore, this reason – differentiate homonyms – is
not either a good reason to motivate a table of terms.
There remains in the model the functional need to associate the orthographical form of a
term with its occurrences in plexus records. The model comprises an organ – which is a
table – that does the association. It is used by various agents (CATZ, B2, B3) for
supporting the access function. It may be viewed as a very lean lexicon; the model
comprises then a residual lexicon the only function of which is to support some of the
access to linguistic data (cf. also section 12.6. Access, p. 298).
346
The phrase 'table of terms' is deliberately chosen in spite of its culinary connotations, against the word
'lexicon', to stress that, contrasting with lexical entries in other theories, terms in the Analogical Speaker
have no properties. So terms are questioned, and the access to terms in particular, we investigate the
opportunity to make leaner an apparatus, still suspected to be too rich, but we stress with 'table of terms'
that it is already much leaner than previous visions of lexicons.
347
In that field, they say "desambiguate".
This is where the model stops in the critique of the terms. It maintains a table of terms
for a reason which is not directly linguistic but which is not without implementational
implications and plausibility implications: it is doubtful that brains support access in this
manner. A next step in the critique of terms still remains to be done.
At this point, one cannot help taking another look at connectionism: our terms
are symbols – in a sense which will be defined shortly – and one of the main
effects of the debate which, after PDP (1988), kept the profession busy was to
understand that connectionist models succeeded (when they did) because they
substituted for symbols a different, more flexible and more adequate apparatus; this
debate reassessed preceding approaches as 'symbolist', for example the linguistic theories
that were popular at the time, and are still defended today. At that time, among
connectionists – I recalled this p. 241 – the word was deconstructed, the lexical entry
was negated. Later, this deconstructive and negative route appeared not to suffice, and
novel techniques (self-organizing maps) were found to make models able to represent
lexical entries; this was presented as a condition for overcoming the performance barriers
behind which the models had been blocked before. If we believe this, it should be kept in
mind, and it may not be the right move to try to get rid of the table of terms at any
cost.
12.3. Record
A record is an organic (therefore implementational) unit of the model which has a type
and four sites. Sites are occupied by terms. The precise meaning of site occupation
depends on the record type and is specified below.
Why four sites and not three or five? For a ternary constructor, three constituents are
needed plus an assembled form, which makes four altogether. It did not seem urgent to
consider quaternary constructors. If they had to be added, the only changes would be technical.
12.4. A-type record
About A-type records, abundant introductory material was provided in Chap. 3. This
allows us to be brief and dogmatic here.
An A-type record contains a pair of terms. That pair is likely to be involved in systemic
analogies. This happens when the record has a paradigmatic link with another.
Technically, the terms occupy sites 1 and 4; in an A-type record, sites 2 and 3 are not
occupied.
12.5. C-type record
About C-type records much was written in Chap. 4, which makes it possible to stay
concise and stipulative here.
A C-type record defines an exemplar of concatenative construction.
Sites 1 and 2 are occupied by terms which are constituents.
Site 3 may be occupied by a term which is a third constituent. In such a case, the
constructor is ternary, otherwise, it is binary.
Site 4 is occupied by the assembly, which is the concatenation of the constituents.
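As a reading aid, the two record types can be transcribed into a minimal code sketch (Python); this is only a restatement of the definitions above, not the model's actual implementation, and the names Record, a_record and c_record are assumptions of the sketch.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Record:
        # A record has a type ('A' or 'C') and exactly four sites;
        # sites[0..3] stand for sites 1..4, None marks an unoccupied site.
        rtype: str
        sites: List[Optional[str]]

    def a_record(term1: str, term4: str) -> Record:
        # A-type record: a pair of terms occupying sites 1 and 4; sites 2 and 3 stay empty.
        return Record('A', [term1, None, None, term4])

    def c_record(c1: str, c2: str, assembly: str, c3: Optional[str] = None) -> Record:
        # C-type record: constituents in sites 1 and 2 (and optionally 3),
        # the assembly occupying site 4.
        return Record('C', [c1, c2, c3, assembly])

    # Exemplars:
    r_plural = c_record("horse", "-s", "horses")   # exemplar of concatenative construction
    r_gender = a_record("goose", "geese")          # analogical pair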
This simple, literal vision is not sized to treat phonosyntax, which does not mean that
the model is globally incapable of it. On the contrary, the exemplarist option is a
favourable factor for this; but the track was not explored within this work.
The reported experiments are based on plexii with orthographical coding. Its being
orthographical is not obligatory; any other coding one may wish, for example a
phonological one, may be adopted without consequence for the principles of the model.
There is however a consequence on the description cost, since a coding less familiar than
the orthographical one costs more to the descriptor. There is another consequence: a
different coding distributes homonymies differently. However, the model treats
homonymy and ambiguity in general terms, without being affected by the coding which
causes them, be it orthographical or phonological.
Whatever the selected coding, the only requirement bearing on terms is that they be
identifiable in their recurrences, that is, coded identically.
12.6. Access
"Access" collectively refers to the means whereby the elements of the plexus (records,
terms) are reached during the computations, either from the elements which specify a
linguistic act (or a linguistic task), or from other elements of the plexus, already reached
during the course of the computation.
Access in this model consists of three complementary devices: i) the index of term
occurrences, ii) the index of analogical pair occurrences, and iii) the crossing of
paradigmatic links.
12.6.1. Index of term occurrences (unary access)
The index of term occurrences accepts a linguistic form and returns the occurrences of
the corresponding term in the plexus; if the argument form is not a term known to the
model, the returned list is empty.
An element in the returned list is a term occurrence. It consists of: i) a record identifier
and ii) the indication of the site which the term occupies in the record. This is because a
term is said to occur in the plexus when it occupies a given site in a given record.
The current implementation of the index of term occurrences is a randomization by hash
coding; it might be a b-tree or any equivalent technique. The technical option is not
important and might be changed. The point is that this index is a function which maps
terms onto their occurrences as terms in the plexus.
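For concreteness, here is a minimal sketch of such an index (Python); a plain dictionary stands in for the hash coding, and the names TermIndex, add_record and lookup are assumptions of the sketch, not the model's vocabulary.

    from collections import defaultdict

    class TermIndex:
        # Maps a linguistic form onto its occurrences in the plexus;
        # an occurrence is a pair (record identifier, site number).
        def __init__(self):
            self._occurrences = defaultdict(list)

        def add_record(self, record_id, sites):
            # 'sites' lists the four site contents of a record; None marks an empty site.
            for site_number, term in enumerate(sites, start=1):
                if term is not None:
                    self._occurrences[term].append((record_id, site_number))

        def lookup(self, form):
            # An unknown form yields an empty list, as specified above.
            return self._occurrences.get(form, [])

    index = TermIndex()
    index.add_record("R1", ["horse", "-s", None, "horses"])
    print(index.lookup("horse"))   # [('R1', 1)]
    print(index.lookup("zebra"))   # []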
There is much to say about the plausibility of this device and about its position in the
theory. It is dependent on the sustainability of the radical exemplarist option, see the
discussion p. 268.
In fact, in addition to the parsing process (cf. p. 365), the index of term occurrences is
used only by the CATZ agent (cf. p. 95), which ensures the function of similarity
suggestion in a single-argument mode; this is questionable and was criticized above.
This index and the CATZ agent are suspect because they are invoked with one
argument only; the pretension of being able to designate one thing in isolation would be a
residue of essentialism.
The index of occurrences of analogical pairs is a proposition to correct this defect.
12.6.2. Index of analogical pair occurrences (binary access)
The index of analogical pair occurrences accepts a pair of terms and returns the list of
the occurrences in the plexus where the pair participates in a systemic analogy, that is: i)
all pairs in A-type records, and ii) in C-type records, the pairs formed by terms which
bear the A mark – the function of the A mark in C-type records is exactly to distinguish
the terms which participate in a systemic analogy, cf. p. 67 where the inscription
methods are defined.
A returned occurrence is the indication of a record plus the sites occupied in the record
by the two argument terms.
The index of analogical pair occurrences is used by agent ANZ, which is the base of the
dynamics of systemic productivity, and from it indirectly, by agent AN2, which is a
client of agent ANZ and solves analogical tasks with two-term syntax.
Contrasting with the index of term occurrences, which is single-argument, and
suspected for that reason, the index of analogical pair occurrences, which has two
arguments, allows us to construct processes that observe copositionings. It is
positionally more correct, if one dares say so.
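Under the same assumptions as the previous sketch, the binary index may be sketched as follows; reducing the A mark of C-type records to an explicit registration of marked pairs is a simplification of the sketch, not the model's inscription method.

    from collections import defaultdict

    class PairIndex:
        # Maps an unordered pair of terms onto the occurrences where the pair
        # participates in a systemic analogy: (record identifier, site, site).
        def __init__(self):
            self._occurrences = defaultdict(list)

        def add_pair(self, record_id, term_x, site_x, term_y, site_y):
            self._occurrences[frozenset((term_x, term_y))].append(
                (record_id, site_x, site_y))

        def lookup(self, term_x, term_y):
            return self._occurrences.get(frozenset((term_x, term_y)), [])

    pairs = PairIndex()
    pairs.add_pair("R2", "goose", 1, "geese", 4)   # the pair of an A-type record
    print(pairs.lookup("geese", "goose"))          # [('R2', 1, 4)]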
Is it plausible? It must be if we recall the Saussurean intuition of "eternally negative
differences". What is the feeling of those who their position make more familiar than us
with the brain? Consider again for a while Edelman's paper already quoted in Chap. 2
(1998, Representation is representation of similarities). The title contains similarity,
and this invites us to read difference transparently behind it, which the rest of the paper does
not fail to confirm:
Obviously, a shape, a colour, or some other quality considered in isolation can be
represented in any manner whatsoever; it is the introduction of other objects that makes
representation challenging. […] It may be more productive to consider quale such as
"redness versus greenness" and "pear-shape versus apple-shape" as primitive and
redness or pear-shape as derived (p. 466).
It's clear: here again we must regard the differential oppositions as primary. But why
should we have to derive the quale redness and the quale pear-shape at all? All
that will do is concentrate one more time on the form [red] and on the form [pear]. Back
now to linguistics, we may for instance undertake to clarify the noun-verb opposition,
but if we do, the most urgent thing is not to dig into what the properties and essence of
the noun would be on one side, and those of the verb on the other; the only thing that interests
speakers is to successfully carry out dynamics in which nouns and verbs contrast, and
what interests us is to understand these dynamics.
12.6.3. Crossing a paradigmatic link
The two indexes we just saw accept terms and return occurrences. They may be viewed
as ensuring an access function within the model, that is, the base mechanisms of
circulation, the elementary support of the computation processes. As such, another
elementary mechanism complements them: the move from a record R1 to one of its
neighbours, by crossing a paradigmatic link between them. There is little to say about it,
except that, here again, we have a unary variety (from a term of R1, move to its homolog
in R2) and a binary one (from a pair of terms of R1, move to its homologous pair in R2).
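Reusing the Record sketch given earlier (an assumption of these sketches), the two varieties of crossing can be written as follows, under the simple convention in which sites with the same number are homologous (cf. 12.7.6 below).

    def homolog(linked_record, site_number):
        # Unary crossing: from the term at 'site_number' in R1, the paradigmatic link
        # leads to its homolog, the term at the same site in the linked record R2.
        return linked_record.sites[site_number - 1]

    def homolog_pair(linked_record, site_a, site_b):
        # Binary crossing: from a pair of terms of R1, move to the homologous pair in R2.
        return (linked_record.sites[site_a - 1], linked_record.sites[site_b - 1])

    # Example with r_ox = c_record("ox", "-en", "oxen"):
    # homolog_pair(r_ox, 1, 4) returns ('ox', 'oxen').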
12.7. Paradigmatic link, paradigm
12.7.1. Paradigmatic links and paradigms viewed formally
Between two records, a paradigmatic link may be established.
Between two A-type records: R1 (X, Y) and R2 (A, B), the paradigmatic link means "X
is to Y as A is to B"; this is a systemic analogy.
Between two C-type records: R1 (a1+a2 → a) and R2 (b1+b2 → b), the paradigmatic
link means that these two constructions by concatenative assembly are constructionally
the same; this is a structural analogy.
A plexus is then a graph348 whose nodes are the records and whose edges are the paradigmatic links. A
plexus paradigm is then a connected part of this graph.
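The graph reading lends itself to a short sketch: paradigms are the connected components of the graph of records under the relation 'paradigmatic link'. Representing links as pairs of record identifiers is an assumption of the sketch.

    from collections import defaultdict

    def paradigms(record_ids, paradigmatic_links):
        # Returns the paradigms of a plexus, each as a set of record identifiers:
        # the connected parts of the graph whose nodes are the records and whose
        # edges are the paradigmatic links.
        neighbours = defaultdict(set)
        for r1, r2 in paradigmatic_links:
            neighbours[r1].add(r2)
            neighbours[r2].add(r1)
        seen, result = set(), []
        for start in record_ids:
            if start in seen:
                continue
            component, stack = set(), [start]
            while stack:
                node = stack.pop()
                if node in component:
                    continue
                component.add(node)
                stack.extend(neighbours[node] - component)
            seen |= component
            result.append(component)
        return result

    # paradigms(["R1", "R2", "R3"], [("R1", "R2")]) yields [{'R1', 'R2'}, {'R3'}]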
12.7.2. Paradigm in the plexus and linguistic paradigm
Classically, since structural linguistics, a paradigm is a set of forms which are
substitutable at the same place or at the same places.
A paradigm, as I define it in a plexus – I mentioned this already – remains a collection of
elements which share something of the order of place, but which are no longer
isolated forms. They are pairs of forms: either the so-called 'analogical pairs', or
assembled exemplars. As can be seen, this is a slight step aside with respect to the
canonical definition of 'paradigm', and this is how the principle of obligatory
contextuality is implemented in the model.
12.7.3. How many neighbours?
A record has a few neighbours, typically from two to six.
A record with only one neighbour is possible. For example, it is the frequent situation of
a less familiar record which gets to be known via only one other record, more familiar
than it. It is computable, a little awkwardly, and not very productive.
For a record, too high a number of neighbours is not reasonable: it is conjectured to
contradict the anatomical constraints bearing on neuron connectivity. However, even if
we limit the immediate connectivity, for example to six, it is possible to constitute a
348
The terms used are those of Claude Berge, La théorie des graphes et ses applications, Paris, Dunod,
1968.
small number of records into a small-diameter set, with strong internal connectivity,
which may connect to a high number of other records. In this way, it is easy to constitute
kinds of quasi-prototypic kernels which tend to act as centres in a computationally
efficient way, positively influencing a much wider area. Then there isn't one single
object acting as prototype, but rather a prototype effect which the descriptor may choose
to make concentrated or diffuse.
12.7.4. About isolated records
In a plexus, an isolated C-type record is formally possible349. It is a syntactic hapax. An
isolated record is little useful; it cannot contribute to similarity suggestion, which is
based on paradigmatic links, but it may be a licensing record in an analysis (agents B2
or B3 may use it), so it may contribute to abductively license an unknown form.
In an extension of the model which would encompass learning, the analysed unknown
form would cause a new record to be inscribed and a paradigmatic link to be set
between the latter and the licensing record, so far an isolated one, which would put an
end to its isolation; the set thus formed would see its utility increase much more than
linearly, as was explained above.
The interesting question is: why should an isolated record arise in the model? It touches
on the question of bootstrapping, of initial learning, which is not addressed in the current
perimeter, but the orienting conjecture is that an initial structure is an analogy
(structural or systemic) right from the start; it involves at least two records and a
paradigmatic link between them. Following this, an isolated record would be a sheer
artefact: the model as it is can include one, but it does not provide a linguistic or
cognitive interpretation for it.
One may then wonder what the model does with the syntactic hapaxes which can be met
in languages. There are none to be found, or nearly so, by definition in a way: where
nothing is comparable, there can be no syntax and therefore no syntactic hapax either;
where nothing is comparable, everything is a hapax but nothing is syntax. The fact that
there are no syntactic hapaxes is very congruent with this work, which grants analogy the
fundamental role. The question of the syntactic hapax is discussed by Kerleroux. She
found one. At least she found one utterance that would have that status if it were not so
problematic: the French utterance La ferme! 350 (Shut up!, literally: it shut; the regular
French construction ought to be Ferme-la!, that is, shut it!).
349
It is a matter of fact which is not changed because it has practical advantages. If it had to be prohibited,
it would be easy to do.
350
[…] a unique exemplar, a sort of syntactic hapax. But what can the status of the exception be in the
theoretical frame defined by generative grammar, and more generally in any syntax? If we assume that
sentences are the result of the interaction of a number of principles and rules, belonging to several orders
or modules, since the form exists, we are led to think that the form is possible, and to try accounting for its
possibility, that is, to consider new analyses, since the description just proposed casts the problem into an
endless contradiction: (1) [La ferme!] is impossible, and yet (1) exists and belongs to French. How
can we solve the contradiction between an impossibility in the language and an occurrential possibility?
Kerleroux 1996, p. 209 (…) What deserves to be noted is that everyone analyses (1) as a sentence in the
imperative, at the very high expense of postulating a syntactic hapax, an assumption that might be hosted
in a fantastic linguistics only. ibid. p. 220.
This sentence is a hapax versus other sentences, that is, if considered separately from a situation: it then has no unarguable analog. But it ceases to be a hapax if taken as an utterance, that is, if regarded in a situation, because:
La ferme! : [situation 1] :: La ferme! : [situation 2] :: Tais-toi! : [situation 2]
It is then homologous to Tais-toi! (Stop talking) or Ta gueule! (Shut up!). Then, for speakers nowadays who do not know the linguistic history of this form, it does not matter much that the grammatical analysis of La ferme! is difficult and discordant, however we take it, if the enunciative analysis can be done; and in this case it is comparatively easy: someone asks you to stop talking and he/she subordinates the elegance of the expression to its illocutionary force. The form is conventionalized as a whole with the associated situation, entrenched as Langacker would say.
The idea then would be that a syntactic hapax does not actually happen, provided that, in the 'syntax', that is, in structural analogies, we reinstate the situation (that would build on private terms). This takes us a little beyond the perimeter within which this model may be regarded with some confidence, but if this prospect were validated, it would make it possible to extend the validity of the analogical stance; in any case, it would not contradict it.
12.7.5. A-type and C-type records coexisting in a paradigm: mixed paradigms
Because geese is to goose as horses is to horse, but horses is built regularly from the singular and geese is not, we would like to be able to write something like:

     S1      S2    S3    S4
C    horse   -s          horses
A    goose               geese

that is, we would like to set a paradigmatic link between an A-type record and a C-type record. The model allows that: it allows paradigms with mixed record types. This increases the productivity of inscriptions while reducing redundancy. However, an A-type record which intervened between two C-type records would hinder access from one to the other for syntax-oriented processes, because the trace of one constituent (-s here) gets lost when paradigmatic link crossings are chained. When planning the connectivity of a mixed paradigm, this risk of loss must be anticipated. It leads to placing A-type records out of the paths linking C-type records.
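The precaution just stated can be illustrated by a minimal Python sketch (the record and link structures are again hypothetical simplifications): it checks whether one C-type record can be reached from another without crossing an A-type record, for a layout that respects the precaution and for one that does not.

from collections import deque

# Hypothetical mini-paradigms: each record has a type ('A' or 'C') and links.
good_layout = {
    "C1": {"type": "C", "links": ["C2", "A1"]},   # A-type record kept off the C-C path
    "C2": {"type": "C", "links": ["C1"]},
    "A1": {"type": "A", "links": ["C1"]},
}
bad_layout = {
    "C1": {"type": "C", "links": ["A1"]},          # A-type record sits between the C-type records
    "A1": {"type": "A", "links": ["C1", "C2"]},
    "C2": {"type": "C", "links": ["A1"]},
}

def c_path_exists(plexus, start, goal):
    """True if goal can be reached from start without crossing an A-type record."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in plexus[node]["links"]:
            if nxt not in seen and plexus[nxt]["type"] != "A":
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(c_path_exists(good_layout, "C1", "C2"))   # True: the constituent trace is preserved
print(c_path_exists(bad_layout, "C1", "C2"))    # False: the A-type record blocks syntax-oriented access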
12.7.6. Homology, deflectors
Between two records with a paradigmatic link between them, the simplest is for sites 1 of both records to be made homologous, and likewise for the other site numbers. However, we may have to represent homologies between sites with differing numbers. Assume for example, in an English plexus, an already numerous gender-oriented paradigm in which the feminine is in site 4. It would be difficult to complement it with the pair (bride, bride-groom), and the following inscription would be false:

     S1        S2       S3    S4
A    husband                  wife
C    bride     -groom         bride-groom

The convention that sites with the same number are homologous cannot represent this analogy, because bride-groom, being an assembly, must be at site 4 (that is the rule for assemblies). Rewriting the whole gender paradigm is expensive if it is numerous, and not doable if two conditions like this one bear on it contradictorily, which is the case, as many feminine nouns in English are assemblies built on the masculine (waitress, she-cat). In order to describe a plexus comfortably in all cases, it must be possible to make sites with different numbers homologous, which is what the crossing lines suggest below:

     S1        S2       S3    S4
A    husband                  wife
C    bride     -groom         bride-groom

(with crossing lines making site 1 of the A-type record homologous to site 4 of the C-type record, and site 4 of A to site 1 of C)
The model contains this feature; it is named "deflectors". The device has no real
linguistic import; simply the intricacies of languages and the comfort of the descriptor
require it.
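A minimal Python sketch of this device follows (class names and structures are illustrative assumptions, not the model's implementation): a paradigmatic link carries an optional site mapping; the default is identity, and a deflector is simply a non-identity mapping such as {1: 4, 4: 1}.

# Hypothetical sketch of a paradigmatic link carrying an optional "deflector":
# a mapping between site numbers of the two records it joins.

class Record:
    def __init__(self, name, sites):
        self.name = name
        self.sites = sites          # dict: site number -> term (or None)

class ParadigmaticLink:
    def __init__(self, rec_a, rec_b, deflector=None):
        self.rec_a = rec_a
        self.rec_b = rec_b
        # By convention, sites with the same number are homologous;
        # a deflector overrides this for the listed site numbers.
        self.deflector = deflector or {}

    def homolog(self, site_in_a):
        """Site of rec_b homologous to the given site of rec_a."""
        return self.deflector.get(site_in_a, site_in_a)

husband_wife = Record("husband/wife", {1: "husband", 2: None, 3: None, 4: "wife"})
bride_groom  = Record("bride/bride-groom", {1: "bride", 2: "-groom", 3: None, 4: "bride-groom"})

# Deflector: site 1 of the first record (masculine) is homologous to site 4 of
# the second (the assembly bride-groom), and conversely.
link = ParadigmaticLink(husband_wife, bride_groom, deflector={1: 4, 4: 1})

for s in (1, 4):
    print(husband_wife.sites[s], "<->", bride_groom.sites[link.homolog(s)])
# husband <-> bride-groom
# wife <-> bride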
12.7.7. Analogies in constructions
The subject is covered above p. 299.
12.8. Familiarity orientation
This section covers in detail familiarity orientation; this topic was introduced p. 62. Less
familiar things are understood with the help of more familiar ones; utterances containing
less familiar elements are built from more familiar precedents. This platitude, although it has been stressed by some linguists, notably cognitive linguists, has thus far remained unexploited by precise, operable models which try to account for linguistic productivity.
12.8.1. The more familiar makes the less familiar understood
An early development state of the model had the following character: the heuristics used the paradigms in all directions and tended to use them exhaustively when the number of phases granted to the computation increased. This approach has the inconvenience that it renders the computations sensitive to plexus size, on average in a polynomial way. This is a practical inconvenience if we care about the duration of the computations, and it is a defect of the theory, because we do not speak more slowly because we know more words or more constructions. So there was a need to reconsider this isotropic indifference and to orient the heuristics towards what would be more promising.
The track adopted consisted in taking into consideration Aristotle's view that, in a metaphor (and consequently in the analogy which underlies it), a less familiar tenor is understood with the help of a more familiar vehicle and not the other way round. The cup is to Dionysus as the shield is to Ares because the relation between Ares and his shield is assumed to be well established between the interlocutors, whereas the relation between Dionysus and the cup would be less well established.
This venerable theme is taken up without variation in the renewed research on metaphor:
Each metaphor has a source domain, a target domain and a source-to-target mapping. The metaphor is natural in that it is motivated by the structure of our experience. (276) The metaphor PURPOSES ARE DESTINATIONS … from the time we can first crawl, we regularly have as an intention getting to some particular place. In such cases we have a purpose – being at that location – that is satisfied by moving our bodies … and at the final state, the purpose is satisfied. Schemas that structure our bodily experience preconceptually have a basic logic. Preconceptual structural correlations in experience motivate metaphors to map that logic onto abstract domains. Thus, what has been called abstract reason has a bodily basis in our everyday physical functioning. It is this that allows us to base a theory of meaning and rationality on aspects of bodily functioning351.
The greater familiarity of bodily and spatial experience is explicitly made the cause of
the elaboration of "superordinate concepts":
Meaning is not a thing; it involves what is meaningful to us. Nothing is meaningful in
itself. Meaningfulness derives from the experience of functioning as a being of a certain
sort in an environment of a certain sort. Basic-level concepts are meaningful to us
because they are characterized by the way we perceive the overall shape of things in
terms of PART-WHOLE structure and by the way we interact with things with our
bodies. Image schemas are meaningful to us because they too structure our perception
and bodily movements, though in a much less detailed way. Natural metaphorical
concepts are meaningful because they are based on a) directly meaningful concepts and
b) correlations in our experience. And superordinate and subordinate concepts are
meaningful because they are grounded in basic-level concepts and extended on the
basis of such things as function and purpose352.
Familiarity orientation is even empirically verified by the psychologists; for example:
Gholson found that third and fourth grade children were able to use the farmer's dilemma as a source and transfer the solution to the missionaries' problem. However, the
converse was not true; i.e. the missionaries' problem was not successfully used as a
source. This has also been found to be the case for adults353.
12.8.2. Amérique, ô ma Norvège!354
Along these lines, the model is enhanced with a "familiarity orientation" which consists
of two complementary measures: one in the plexus and one in the computations.
In the plexus the paradigmatic links between records (whatever their type) are oriented:
one of the two linked records is supposed to be more familiar. Occasionally they may have equal
familiarity.
In the computations which require them, paradigmatic link crossings take place towards
a more familiar record or one with equal familiarity but not towards a less familiar
record.
351 Lakoff 1987, p. 276.
352 Lakoff 1987, p. 292.
353 Eliasmith 2001, p. 269.
354 America, O my Norway! The French poet Dominique Fourcade.
The familiarity orientation holds for links between all record types. So, for a C-type record, we must be able to say that its familiarity is lesser or greater than that of its neighbours. This is not self-evident, because a C-type record consists of: i) terms (two or three constituents plus an assembly), and ii) the exemplarist construction itself. The terms each have their familiarity, and the exemplarist construction also has a familiarity attached to it. For example, as was already suggested, a construction in which a term presents a "categorial distortion" (following Milner) sounds less familiar than a coincident construction. There is no reason for these familiarities to be the same, so what should the familiarity of a record be when its elements have diverse familiarities? The question should not worry us too much: we can rely on an overall judgement of the descriptor – this is not the only such case – or say that a record's familiarity is the lowest among those of its elements.
To establish the relative familiarity of two records, a criterion among others is
morphological anomaly: anomalous formations are often more familiar. This is because
frequency is antagonistic to the 'analogical repair' of the forms. A frequently used form is less likely to let a competitor arise that would follow an analogy other than its own; it tends rather to perpetuate its own frequency. This criterion must however be used with discernment: in English, brethren cannot be said to be more familiar than brothers
and in French cailloux or genoux certainly are familiar but not more than trous.
As an example, here is an analogical paradigm associating country names with names of
inhabitants. The more familiar is at the bottom of the drawing and the less familiar at the
top. The topmost elements are understood with the help of those below, but not in the
reverse way.
This paradigm belongs to the knowledge of a defined speaker about how country names are associated with inhabitant nouns; it may well be only a part of that knowledge. He is a Frenchman; for him, England and Germany are less familiar than France, Norway and Sweden a little less still, and so on. Finland is apprehended via Iceland: he is not a very good geographer but he has this particularity; maybe he travelled through Iceland. But
why after all should mental inscriptions be subordinate to an academic geographical
knowledge?
Dynamically, the idea is that the heuristic processes, when they exploit the paradigms,
cross the links from the less familiar to the more familiar (or towards equal familiarity).
Thus for example, Portugal is known through Spain, then transitively through France,
but France is not known through Portugal – again for the defined speaker of whom this
is the model.
[Figure 38 shows the paradigm as a set of A-type records pairing country names with inhabitant names (Finlande°°°Finlandais, Islande°°°Islandais, Danemark°°°Danois, Suède°°°Suédois, Norvège°°°Norvégien, Espagne°°°Espagnol, Portugal°°°Portugais, Italie°°°Italien, Tunisie°°°Tunisien, Iran°°°Iranien, Cuba°°°Cubain, Porto Rico°°°Portoricain, Mexique°°°Mexicain, Colombie°°°Colombien, Argentine°°°Argentin, Chili°°°Chilien, Pérou°°°Péruvien, Brésil°°°Brésilien, Chine°°°Chinois, Taiwan°°°Taiwanais, Japon°°°Japonais, Allemagne°°°Allemand, Angleterre°°°Anglais, France°°°Français), laid out with the more familiar at the bottom and the less familiar at the top.]
Figure 38 Country names and inhabitant names
The progression towards greater (or equal) familiarity applies when a paradigm is exploited by crossing a paradigmatic link. It does not apply upon resetting, even if the resetting operates within a single paradigm, for a good reason: in case of resetting, increase or decrease of familiarity cannot be defined; cf. notably section 13.4.7, What turns out with familiarity orientation after transposition (p. 327). More generally, familiarity
is not defined in a plexus as a measure; more weakly, it is only a partial order on the
records of a paradigm. It should also be noted that the difference of familiarity is not
defined for the terms themselves. Thus a same term may occur in various records each
with very diverse familiarity hierarchisations.
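A minimal Python sketch of this orientation (the link list is a hypothetical fragment of the paradigm of Figure 38, and the data structures are simplifications): a crossing may follow a link only towards the record of greater or equal familiarity.

# Hypothetical sketch: oriented paradigmatic links in the country/inhabitant
# paradigm. Each pair (a, b) means "a is understood via b": b is at least as
# familiar as a, so a crossing may go from a to b but not from b to a.
oriented_links = [
    ("Portugal/Portugais",  "Espagne/Espagnol"),
    ("Espagne/Espagnol",    "France/Français"),
    ("Finlande/Finlandais", "Islande/Islandais"),
    ("Norvège/Norvégien",   "Suède/Suédois"),
    ("Suède/Suédois",       "France/Français"),
]

def reachable(start):
    """Records reachable from start by crossings towards equal or greater familiarity."""
    out = {}
    for a, b in oriented_links:
        out.setdefault(a, set()).add(b)
        out.setdefault(b, set())
    seen, stack = {start}, [start]
    while stack:
        for nxt in out[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen - {start}

print(reachable("Portugal/Portugais"))  # {'Espagne/Espagnol', 'France/Français'}
print(reachable("France/Français"))     # set(): France is not known through Portugal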
12.8.3. Proximality and contingency of familiarity orientation
Familiarity orientation is proximal and has value instantaneously in a speaker's history.
Its configuration varies and adapts along the linguistic and cognitive history of the
subject, or of the views generally accepted at a given moment in a society of subjects.
Early in the 20th century, Bohr's atom was made understood with the help of the solar system and not the contrary, but in 2002:
This [the disturbance of G7 meetings by street action] shows to what point the electrons
of the public opinion may influence international affairs355.
355 Bertrand Badie, Institut d'Etudes Politiques, Paris, speaking on the radio, 10-12-2002.
The "less familiar" Bohr's atom has now become familiar enough to help explain something else. So it is for the linguistic terms and the inscriptions in which they occur. It is not granted that the relative familiarities which instantaneously apply in a young learner will remain stable in the future. Their change plays a part in the change of his linguistic knowledge.
12.8.4. Familiarity orientation alleviates computations without sterilising them
It is interesting to appreciate the incidence of familiarity orientation on the model's
behaviour. The same test set as in Chap. 6 is used; each test form is analysed twice:
without and with familiarity orientation.
The table below displays i) the computation phase for which a first abductive attestation
is obtained for the whole form, ii) the number of agents required, without familiarity
orientation and with it, iii) the number of products, including in these figures all
computation intermediates. The numbers of agents and of products may be taken as
indications of the computation cost.
test form                                         phase          nb of agents          nb of products
                                                  without/with   without/with   gain   without/with   gain
1 un très grand jour                              2/2            393/311        20%    337/287        15%
2 une très grande maison                          4/5            1693/1443      15%    1665/1576      5%
3 séjour de vacances                              2/4            445/547        -23%   423/674        -59%
4 bon séjour en France                            18/18          2996/1613      46%    3582/2083      42%
5 elle est arrivée avec son homme                 6/4            1765/1044      40%    1870/1112      40%
6 elle est arrivée avec son homme et son cheval   10/7           3034/1898      38%    3225/2000      38%
Table 23 Compared tests, without and with familiarity orientation
From these results, two conclusions and a conjecture are drawn:
a) When a solution was found without familiarity orientation, one is still found with the orientation. As the orientation amounts to suppressing certain resources of the plexus, since certain links can no longer be crossed, it was wise to check that its introduction does not impoverish productivity. This impoverishment does not happen, which means that, before the orientation, new utterances already tended to be licensed by 'more familiar' inscriptions. The plexus' descriptor (who is the author of this work) had this good intuition even before orientation was thematized.
b) With orientation, the computation which finds the solution is cheaper by 20 to 40% depending on the case. In one case only is the performance less good: an exemplarist inscription was missing, which required longer resolution paths. Such a phenomenon may happen in a plexus such as the one used here, which may feature weak coverage of certain linguistic facts.
c) It is conjectured that this economy, significant but modest after all, becomes critically
more important, first with the increase of utterance size (about ten morphemes only in
these tests) and then, with the increase of plexus size.
The latter point is very important. If a paradigm is seen as a disk, the part of it that a computation uses with the orientation approaches a radius; without it, it tended to be the entire surface of the disk. The cost function, polynomial before (maybe cubic), now becomes only linear (maybe even logarithmic); the ideal would be a constant.
12.8.5. Familiarity orientation, coincidence and distortion
It is fortunate that familiarity orientation has good effects, since it reduces the computation cost; but above all it is cognitively founded and gives a sound vision of a certain asymmetry356 in linguistic dynamics.
Perhaps it also provides a theoretical reception to a question raised by Milner: what he
calls coincidence and distortion (distortion is non-coincidence). The question holds an
important place in Milner 1989.
In the theoretical apparatus, inherited from the first generativism, with modifications,
which Milner adopts for syntax, two notions play a central role: categorial label and
positional label. The "individus de langue" have a categorial label; the syntactic
positions have a positional label. When the position is occupied by an "individu de
langue" with a categorial label that is compatible with its positional label, the
occupation is coincident; otherwise it is non-coincident: it is a distortion.
In certain positions, certain categories are expected. Only with respect to this
expectancy may there be distortion357.
The lag may be graded: there are degrees of distortion. Marandin renames distortion
"heterocategoriality".
Heterocategoriality (distortion) constitutes a general organizational principle. Its modes
of realization vary across languages (English and French differ much in this respect),
across different states of a language, and in all likelihood across language levels358.
The 'positional paradox' is doubled with an 'argumental paradox':
When, in direct positional paradox, a term presents positional properties in a position
which does not ascribe positional properties, in the argumental paradox (or indirect
356 Linguists have used "asymmetry" in several different meanings: 1. asymmetry of speech organs (Martinet 1955), asymmetry of auditory and articulatory organs (Laks 1993, p. 15-16); 2. asymmetry in the sense that A determines B without B determining A, for example, an adverb requires a verb but a verb does not require an adverb (Bazell 1949), which is also the autonomy-dependency asymmetry (A-D asymmetry) of Langacker: In a grammatical construction, the relationship between an autonomous component and a dependent component. (Langacker 1987a, p. 485). In [UN-DRESS], [UN] is dependent and [DRESS] is autonomous. ibid. p. 313; 3. finally, cognitive asymmetry, that of Aristotle and that of the psychologists, for example of Eliasmith already quoted. It is this third type of asymmetry which is envisaged here. We shall see that it is also that of Milner.
357 Milner 1989, p. 369.
358 Marandin 1997, p. 156.
positional paradox), a term presents argument properties in a position which normally
receives no argument359.
How is this intuition of coincidence to be founded, what is its anchoring point, what is going to set it in a relation of mutual necessity with other terms of the theory? The response is as follows:
The options taken by a theory for determining what structures are coincident depend on
empirical decisions360; in a given language, one may consider that the descriptions may
roughly agree on what they will consider as 'normal' structures and analyses,
distinguished from 'marked' ones361.
Coincidence is not associated with any other reason or foundation. What one agrees to find normal is coincident; the rest will be marked, that is, a distortion.
In the strictly non-categorial approach defended here, nothing of all this should cause
too much worry: without the assumption of categories, positions – if positions at all –
have no category, and hence there is no coincidence or distortion either.
However, phenomena which the coincidence-distortion theory attempts to account for
are to be observed: there is a difference between le parler vrai (literally: the true
speak(ing)) and le discours sincère (literally: the sincere discourse), and simultaneously
a similarity. They present a constructional similarity but speakers will agree to find the
latter constructionally more familiar and the former less so.
The proposition is, soberly, to acknowledge this judgement, shared by speakers of
French, with two C-type records:
(C1)  le + parler + vrai → le parler vrai
(C2)  le + discours + sincère → le discours sincère
between which a link makes C1 less familiar than C2. In order to understand the infinitive construction (C1) the computations may call on the nominal construction (C2) but not the other way round. The model of this speaker ratifies this fact, which most speakers of French probably share today: that the exemplarist infinitive construction le parler vrai is possible, that its meaning effect is the same as that of the exemplarist nominal construction le discours sincère, but that it is less common and less familiar362.
One should also note that the terms discours and sincère are not mandatory in C2; exactly
the same effect might be obtained with:
359 Milner 1989, p. 450.
360 What is an 'empirical decision'?
361 Milner 1989, p. 551.
362 I indulged myself in writing "nominal construction" and "infinitive construction". The alert reader has of course corrected this: the difference of familiarity does not hold between abstract constructions but, here as elsewhere in this work, between exemplars. In another area of the plexus, the condition may be the opposite one. For example, for a given speaker, le manger may be more familiar than la nourriture, le laisser-aller than la négligence, the former licensing le boire and the latter le laisser-faire. Here we touch the question of "semi-productivity" (Jackendoff 2002, p. 157-162); acknowledging the locality and the contingency of inscriptions and computations would be a way to account for this.
(C3)  le + comportement + honnête → le comportement honnête
(C4)  le + comportement + maffieux → le comportement maffieux
(C5)  la + tendance + dure → la tendance dure
etc.
provided that elsewhere in the plexus, other inscriptions provide for the necessary co-categorizations.
The orientation, the asymmetry, which is advocated here does not hold for all viewpoints simultaneously. This can be illustrated on the same example: (C2) is more familiar than (C1) and so acts as a "kernel" for (C1); however, parler is more familiar than discours, and vrai is more familiar than sincère; therefore, what acts as a kernel construction-wise and what acts as a kernel term-wise are not the same things (in this particular case, it is exactly the contrary), whence there is not in a plexus a centre which would be central in all respects. It is possible to select utterances which present a maximum familiarity in all respects: certain books for the pedagogy of foreign languages try to do this in the first lessons for the comfort of students, sometimes painstakingly, and the result is often not appealing. But in real language practice things are different: in a same utterance, the different elements have in general quite diverse familiarity orientations.
12.9. Overall properties of a plexus
So far, individual or local properties of records and paradigms have been presented. In a plexus
there are also more global properties which concern an entire paradigm or several
paradigms together.
12.9.1. Plexus: volume, representativity, validity
Formally, a plexus is a set of A-type records and C-type records among which
paradigmatic links are established. From the nature of paradigmatic links as defined in
Chap. 3, it follows that a plexus may also be seen as a set of analogies, systemic
analogies (A-type records) or structural analogies (C-type records).
To provide a base of appreciation, here are the volumes of a few plexii that were used
(in bold, those which support the experiments reported in chapters 4 to 7).
A plexus is the static model of a linguistic knowledge. The linguistic knowledge
inscribed in a plexus is supposed to be that of a speaker, so one expects to have several
plexii of a same language: frenchSpeaker1 and frenchSpeaker2 for example. This would
allow us to demonstrate variation in the realization of the same linguistic acts. This is
not yet done: validating and improving the general schemas of inscription and
computation was deemed a higher priority; for example in order to treat more
adequately agreement and other long-distance dependencies, or group effects, as was
reported above. However, I am confident that, when a sufficient level of functionality is generally acquired, it will be quite simple to alter the detail of inscriptions to obtain variant behaviours. This variation, which is a hard question for category- and rule-based theories that take a language as their object, is inherent and easy in this model.
The plexii in the table have limited sizes. In order to approach the real knowledge of a speaker, if we limit ourselves to day-to-day language, excepting speciality jargon, and if we start from 5,000 lexical bases, 20,000 to 30,000 terms are needed, because the bases must be complemented with morphemes, semi-lexicalized forms, inflected forms, derived forms, conjugated forms (some of them only, the others being built following the model's productivity), longer syntagms, etc. These forms are as many terms. The records will be in the range 15,000-25,000 if we extrapolate the ratio which seems to emerge as a trend in the table. It appears then that, among the plexii that were used, the French plexus alone has a beginning of numerical representativity. Must we fear risks or methodological biases in working with too small plexii? In one sense, no: a plexus, even small, is a source of experience useful for testing and improving the model. No also because there is a compartmenting effect owing to the proximality of the inscriptions
and of the dynamics. Rules by contrast are dangerous because they are too powerful and
their applicability is too far-reaching. Those who use rules complain about their fragility
and the instability of rule-based systems over a certain size.
                  number of terms   number of records   number of paradigmatic links
French plexus     1863              1270                2151
Japanese plexus   401               304                 410
English plexus    188               96                  158
German plexus     77                31                  52
Italian plexus    42                25                  28
Basque plexus     18                11                  8
Table 24 Statistics of some plexii
Yet there are reasons for caution owing to size effects: we must verify on voluminous plexii that performance does not collapse as size increases. Moreover, occurrentialism and the making of compartments also play in the reverse sense: we may find phenomena in time adverbs which are to be found neither in other adverbs nor in other linguistic devices associated with temporality, like verbal tense. In short, we must remain cautious with validation on samples that are too limited.
12.9.2. Pluridimensional systems and single-dimensional inscriptions
12.9.2.1. Linguistic paradigm, system, dimension
The question of linguistic systems (in the precise sense of pluridimensional tables) was
introduced p. 129 in the context of the systemic productivity and of its explanation. This
question is now addressed again in view of how these systems can be inscribed in a
plexus with restricted means (analogy is single-dimensioned in a sense which we will
see), despite their richer structure (they are pluridimensional).
The morphology of the French verb – this also holds in the Romance languages, in German, in Russian, etc. – is a three-dimensional system: tense-mode363 × {singular, plural} × {person 1, person 2, person 3}364. In effect, an inflected form, if we limit ourselves to the dominant canonical frame and discard the syncretism of the forms (which is addressed p. 160), the infinitives and the imperatives, is determined by the conjunction of these three data. This is what three-dimensional system means: it is subject to three independent determinations.
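To illustrate what 'subject to three independent determinations' means, here is a small Python sketch which merely enumerates the slots of such a system as a Cartesian product (the tense-mode list is truncated and purely illustrative).

from itertools import product

# Illustrative only: a few tense-modes, not the full French inventory.
tense_modes = ["présent", "imparfait", "futur", "subjonctif présent"]
numbers = ["singular", "plural"]
persons = ["person 1", "person 2", "person 3"]

# Each inflected-form slot is determined by exactly three independent data:
slots = list(product(tense_modes, numbers, persons))
print(len(slots))   # 4 * 2 * 3 = 24 slots in this truncated inventory
print(slots[0])     # ('présent', 'singular', 'person 1')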
The construction Det + N → NP in French, if we limit it to defined determinants (articles le, la, les), may be seen as an analogy between N and NP. The notion of dimension can be extended to this system, and it has to be seen as one-dimensional because the determined form, the NP, is entirely identified by giving the N, that is, by one datum only. This extends the notion of dimension to the one-dimensional case; if we consider this construction in isolation, speaking of dimension about it is not very interesting, but accepting to do so nicely generalizes the notion of dimension, and we will need this below.
An agglutinative morphology in turn can be analysed along these lines as a system with
a high number of dimensions. In the Japanese verbal system, there are nine to twelve
depending on the possible variations of the analysis and on the extension given to the
'verb phrase'. It seems that in the case of Turkish (cf. Hankamer for example) there are
even more, but the morphology of Turkish comprises in a same system not only the verb
but also other lexical classes because Turkish presents a morphology of the 'translation'
(in the sense of Tesnière, that is of the change of lexical class) which is very productive
and systematic: a verb can be inflected, then nominalized by affixation to this first
result, and to this second result case or derivational affixes may be appended, giving a
new result which may in turn undergo 'translation', etc.
This vision of the dimensionality of morphological systems is compatible with that of
Demarolle (1990) already quoted. Recognizing dimensionality in this way is useful because it helps in understanding the question of multiple analogical ratios. A pair of terms, a candidate to enter into analogies, will be subject to as many analogical ratios as there are dimensions in the system to which the terms of the pair belong.
12.9.2.2. Multiple ratios
I introduced p. 63 the idea that a pair of terms may have several analogical ratios
associated with it. This is the case in verbal paradigms, be they integrative or
agglutinative, as has just been seen, and also in the articles in French, in morphological
systems with double marking (gender and number for example), etc.
Now, the inscription structure postulated in this model (the paradigm, which will be called "plexus paradigm" to distinguish it from the "linguistic paradigm") is not directly pluridimensional. This parsimony of the base model is intended; it is rooted in the
363 I adopt Maurice Gross's conception: … our utilization of the traditional terminology for the different tenses and modes makes reference to the morphological properties only. In fact we found no base which would allow us to establish, for the different verbal forms, a distinction between tenses and modes; we call them all tense-mode, or more simply tense. Gross 1968, p. 10.
364 The × symbolises the Cartesian product operation.
presumption that neurons can implement analogies between couples of oppositional
pairs, that is, similarities of differences, but they cannot directly implement
pluridimensional structures. Plexus paradigms are one-dimensional chains (which is a different matter from their possibly having ramifications or cycles) and not pluridimensional structures.
The a priori refusal to reify the pluridimensional analysis frame, assigning instead the rendering of multiple-dimension effects to analogy, not only makes a step toward plausibility, but also favours a better account of the accidents of the frames: defectivity, syncretism, "parochial" subsystems with "collapsing" of entire areas in the verbal paradigms of Walmatjari365, etc.
365 Lemaréchal 1998, p. 61 et seq.
There is another claim: the model also has the potential to render these effects in their contingency. To that end, it has to:
i) sample the linguistic paradigms diversely, by plexus paradigms which are integrative;
ii) make the most of the plexus paradigms through computational mechanisms which are able to integrate them.
Point ii) is implemented principally by agent ANZ which was introduced in Chap. 5 and
is specified formally in an appendix below.
Point i) was introduced with an example p. 139 and the rest of this section shows
different sets of such integrative plexus paradigms, in the case of a linguistic paradigm
with multiple dimensions. The example is taken from the Japanese verbal syntagm, which is richer in this respect than an Indo-European verb.
The following sections display schemas which suggest how it is possible to set integrative plexus paradigms to account for pluridimensional linguistic paradigm
effects.
For clarity, the same pair is always used, that is, the same vehicle; it takes place within
two plexus paradigms the ratios of which are different, the ratio in question being
determined by the vehicle plus a third term. In practice, however, when describing a plexus, it is not obligatory that the same pair literally occur in two such paradigms, because other integrative properties may make this unnecessary.
12.9.2.3. Pair da-desu
The pair opposition is non-polite-polite in both cases.
First plexus paradigm: the base varies (copula, miru, yomu), the aspect is constant (non-accomplished).

     non polite   polite
     da           desu        be, copula
     miru         mimasu      look
     yomu         yomimasu    read
Second plexus paradigm: the aspect varies (non-accomplished, accomplished), the base
is constant (copula).
                   non accomplished   accomplished
     non polite    da                 datta
     polite        desu               desita
12.9.2.4. Pair da-datta
The pair opposition is non-accomplished-accomplished in both cases.
First plexus paradigm: the base varies (copula, taberu, iru), the politeness is constant
(non-polite).
     non accomplished   accomplished
     da                 datta          be, copula
     taberu             tabeta         eat
     iru                itta           be there, stand (animate)
     etc.
Second plexus paradigm: politeness varies (non-polite, polite), the base is constant
(copula).
                   non accomplished   accomplished
     non polite    da                 datta
     polite        desu               desita
12.9.2.5. Pair desu-desita
The pair opposition is non-accomplished-accomplished in both cases:
First plexus paradigm: the base varies (copula, kau, taberu, yasui).
     non accomplished   accomplished
     desu               desita            be, copula (polite)
     kaimasu            kaimasita         buy (polite)
     tabemasu           tabemasita        eat (polite)
     yasui desu         yasukatta desu    be easy (polite)
Second plexus paradigm: the copula is present in both cases; what varies is that it is alone in the first three records and receives a prefixed adjective in the fourth one. The morpheme of the accomplished is borne by the adjective366. In the fourth record the copula bears the morpheme of the polite register.
     non accomplished   accomplished
     desu               desita               be, copula (polite)
     omosiroi desu      omosirokatta desu    be interesting (polite)
Third plexus paradigm: what varies is the base but with a change in lexical category
(copula in the first record, na-Adj in the second one).
366 In Japanese adjectives are conjugated.

     non accomplished   accomplished
     desu               desita           be, copula (polite)
     sizuka desu        sizuka desita    be quiet (polite)
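To make the inscription strategy concrete, here is a minimal Python sketch (a hypothetical simplification: each one-dimensional plexus paradigm is reduced to a mapping holding one ratio, under invented names polite_of and accomplished_of): the politeness and aspect ratios around the copula are inscribed only as one-dimensional chains, and a 'diagonal' form is reached by chaining one crossing in each.

# Each one-dimensional plexus paradigm is sketched as a mapping holding a
# single ratio between pairs of terms (never a two-dimensional table).
polite_of       = {"da": "desu", "miru": "mimasu", "yomu": "yomimasu"}
accomplished_of = {"da": "datta", "desu": "desita", "taberu": "tabeta"}

def chain(term, *ratios):
    """Chain one crossing per one-dimensional paradigm; None if a pair is missing."""
    for ratio in ratios:
        term = ratio.get(term)
        if term is None:
            return None
    return term

# The 'diagonal' form desita is never inscribed together with da, yet it is
# reachable by chaining the two one-dimensional ratios:
print(chain("da", polite_of, accomplished_of))   # -> desita
# The other order fails here because the pair (datta, desita) happens not to be
# inscribed as a politeness pair in this small sample: which chains succeed
# depends on how the linguistic paradigm was sampled.
print(chain("da", accomplished_of, polite_of))   # -> None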
12.10. Topology, connectivity, influenced proximality
12.10.1. Plexus paradigm topology
Paradigms have no centre, no privileged record. A paradigm has nothing coming close
to reification: there isn't a representative of which it might be said, for instance, here is where the notion of number (in English or in German) is concentrated, or here is where the construction Subject-Verb-Object is concentrated. On the contrary, each of its records is linked with other records by a small number of paradigmatic links: from one to six, to give an order of magnitude. The mean value and the variance of these numbers are a question of tuning the model, and maybe not a very important one.
Paradigms do not encompass a centre; however, since the paradigmatic links are oriented by the familiarity orientation, it is possible to arrange a paradigm so that a group of records plays a central role in it: they are accessed much from other records and, conversely, starting from them the dynamics do not often reach other records. In a cognitive perspective, and particularly in an acquisitional one, these records are the analogs of the primordial acquisitions. They may form a quasi-centre, but a diffuse one: a quasi-prototypical area.
So, the records of a plexus paradigm form a graph; in it, some pairs of records are close, others are distant. Whether a linguistic paradigm has to be echoed by a single plexus paradigm (a connected graph) or on the contrary by several ones (several internally connected parts, without links among them) is an open question, and, it seems to me, not a very important one: counter-intuitively in some measure, the ability of a plexus to serve abductive computations does not depend on the complete connectedness of the plexus paradigms; this is principally because of the integrativity of the model. Moreover, as was explained at the beginning of Chap. 5, since a verb system, for example, is called into question as an antecedent linguistic structure, it is not even desirable that it be echoed by a plexus paradigm that would be single and systematic.
12.10.2. Influences determining proximality
In a plexus paradigm, questions of closeness and remoteness matter because this is how
the proximality of the model – introduced and defined in Chap. 1 then complemented in
Chap. 3 – is implemented. All records having among them paradigmatic links belong to
a same paradigm but some are proximal to each other and others are not. This
notion is particular to this model and is not to be found in numerous other approaches,
except in some connectionist models which may be said to encompass it in a way.
Paradigmatic proximality may be influenced by conditions that are different from those
commanding the placement of the record in the paradigm. The influences may have
diverse natures. In the gender paradigm in French, we expect to find pairs (le, la) and
(un, une) close together, and close also to the determinants (ce, cette). Elsewhere, the
grouping may favour records concerning animals, motion, abstract terms, lexemes or
expressions concerning the same address level, words the plural of which is not marked,
etc. The influence on the grouping is an influence only and the grouping logic may
change within a same paradigm.
Preferred groupings (for example those of the articles in a gender paradigm)
complement but do not replace other means, more 'structural', whereby the 'category'
article is implemented in the plexus: i) analogies which are proper to these words and ii)
their distribution as manifested by the C-type records. These are orders of facts
external to the paradigm which influence the proximality in it.
The processing of a linguistic task that encompasses number is accelerated when the
exploited paradigm has a proximality influenced by this category. The paths to be taken
are shorter; the reinforcement effects quicker, better synchronized and therefore stronger
and more prevailing. These favoured paths produce winning results. In common
language experience, the most common tasks benefit from this influence and so are
economical for the speaker. A less common task benefits less from them; it does execute, however, but its execution is more expensive.
An extreme case of influence is that which was encountered in section 4.3. John is too
stubborn to talk (p.112); the influence in this case is rather a negative and dissociating
one: two different – and unlinked – paradigms are made, with records that discord on
agentive roles, even if they might seem to be connectable if we were to satisfy ourselves
with a vision of their similarity that would be formal only.
Examples on how to accommodate syntactic ambiguity and multiple analysis in the
plexus are now going to be provided.
12.11. Syntactic ambiguity: example
In the case of syntactic ambiguity it is appropriate to make one C-type record per
interpretation and to place each in a paradigm with other records. The other records had
better not be all ambiguous because if they were, the model would have no base to
behave in a differentiated manner in the computations.
Example: Pierre m'a parlé de lui367
The ambiguity resides in the fact that lui may refer to Pierre or may be a deictic or an
anaphoric referring to some other person.
Pierre m'a parlé de lui.  (= Pierre m'a parlé de Pierre.)
Pierre m'a parlé de lui.  (= Pierre m'a parlé de X.)
To accommodate this, make the following two paradigms:
367 Ducrot 1972, p. 360.
C    Pierre    m'a parlé    de lui          Pierre m'a parlé de lui
C    Elle      se parle     à elle-même     Elle se parle à elle-même
Peter talked to me about himself / She speaks to herself

C    Pierre    m'a parlé    de lui          Pierre m'a parlé de lui
C    On        m'a parlé    des affaires    On m'a parlé des affaires
C    Ma banquière veut me parler de placements      Ma banquière veut me parler de placements
C    Tout      me parle     de toi          Tout me parle de toi
Peter talked to me about him / Someone talked to me about the affairs / My banker wants to talk to me about investments / Everything reminds me of you
In a production task, among the two paradigms, one only will be used: that which is
more activated by the data which specify the utterance to be produced.
In a reception task, both might be used (along with other paradigms, foreign to this syntactic ambiguity) and one only will license the winning interpretation; it will be the one which is more congruent with the complementary data, if any is available and if it is discriminating. Depending on this, the ambiguity may persist further.
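A minimal Python sketch of this arrangement (records, paradigms and 'cues' are hypothetical simplifications of the activation just described): one C-type record per interpretation, each in its own paradigm, and a toy selection that keeps whichever paradigm is more congruent with the complementary data, letting the ambiguity persist when no such data is available.

# Hypothetical sketch: one C-type record per reading, each in its own paradigm.
paradigms = {
    "coreferential": {
        "records": ["Pierre m'a parlé de lui (lui = Pierre)",
                    "Elle se parle à elle-même"],
        "cues": {"reflexive", "self-reference"},
    },
    "deictic/anaphoric": {
        "records": ["Pierre m'a parlé de lui (lui = X)",
                    "On m'a parlé des affaires",
                    "Tout me parle de toi"],
        "cues": {"other-reference", "prior-mention"},
    },
}

def select(available_cues):
    """Keep the paradigm(s) most congruent with the complementary data, if any."""
    scored = {name: len(p["cues"] & available_cues) for name, p in paradigms.items()}
    best = max(scored.values())
    winners = [name for name, s in scored.items() if s == best]
    return winners if best > 0 else list(paradigms)   # no cue: ambiguity persists

print(select({"prior-mention"}))   # ['deictic/anaphoric']
print(select(set()))               # both paradigms: the ambiguity persists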
12.12. Multiple analysis: examples
Even in the absence of syntactic ambiguity, a term may have several analyses. It is good
to give a term several analyses when the first one maps it onto certain records and the
second onto other ones. This amounts to recognizing the term as pertaining to several
constructions. In other words, the speaker processing this term is able to make several
structure mappings (Gentner 1989). Examples will show this more clearly.
Example with factitive: Fr. il fait marcher ses affaires
First analysis:

     S1          S2        S3             S4
C    il fait     marcher   ses affaires   il fait marcher ses affaires
C    je laisse   aller     les choses     je laisse aller les choses

He runs his business / I let things go

Second analysis:

     S1                S2             S3        S4
C    il fait marcher   ses affaires   (empty)   il fait marcher ses affaires
C    il mène           sa barque      (empty)   il mène sa barque

He runs his business / He manages his affairs well
Example: En. unlawfully
In the example unlawfully,
[[un[[law]ful]]ly]368".
"most
analysts
would
bracket
as
follows:
The proposition is that lawful certainly must be assembled first, but then, for un- and -ly,
each may be assembled first and then the other, or both at the same time; altogether
there are three possibilities and the brother records with which the paradigmatic
mapping may take place suggest why each may be interesting.
First analysis: modification of an already assembled adverb:
     S1      S2           S3        S4
C    very    fast         (empty)   very fast
C    very    explicitly   (empty)   very explicitly
C    un-     lawfully     (empty)   unlawfully
C    most    decently     (empty)   most decently
Second analysis: adverbial derivation of an adjective which is negative already, or detrimental:

     S1         S2    S3        S4
C    unlawful   -ly   (empty)   unlawfully
C    coward     -ly   (empty)   cowardly
Note that the adjective is negative either intrinsically (coward) or by derivation from a
positive one (unlawful).
Third analysis: modification and adverbial derivation in a single construction:

     S1        S2          S3      S4
C    un-       lawful      -ly     unlawfully
C    non-      explicit    -ly     non-explicitly
C    very      explicit    -ly     very explicitly
C    counter   clock       -wise   counter clockwise
C    counter   intuitive   -ly     counter intuitively
The reasons to choose one or several of these analyses are flexible ones: a) they are not
absolute, b) they may vary from lexical entry to lexical entry, and c) they may vary from
plexus to plexus.
Another theory might consider that it must make a uniform choice in this. A generative grammar, for example, would have to decide what generation tree rules the formation of "unlawfully" and similar words; there should be only one and it should apply to all
members of the same lexical category. Here, so rigid and so uniform a prescription has
no reason to be.
368 Langacker 1987, p. 307.
13. Appendix: Specification of the abductive movements
This appendix bears on the bases of the dynamics, which are the four abductive movements. It complements the data provided in Chap. 3.
13.1. Abductive movement by transitivity
For the sake of completeness only: the subject is covered in section 3.6.2. (p. 83).
13.2. Abductive movement by constructability transfer
The notion was introduced in section 3.6.3, Abductive movement by constructability transfer (p. 84). It is now going to be formalized and criticized.
13.2.1. Semi-formalization of constructability transfer
Let C1 be a constructor plexus paradigm, P1 one of its positions, and T1 the set of terms occupying P1. Likewise C2, P2, T2, and let t be a term belonging both to T1 and T2 (the 'bi-occurrent' term).
Definition of the abductive movement by constructability transfer: because t is bi-occurrent, any element of T1 may abductively occur in P2 of C2.
In other words: if two positions P1 and P2 of two paradigms are occupied by a same term, any homolog of the term in one of the positions may also abductively be its homolog in the other position.
In other words again: when a term occupies a position in a constructor paradigm and another position in another paradigm, its homologs in the former may abductively become its homologs in the latter.
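This definition transcribes almost directly into a minimal Python sketch (the representation of a constructor paradigm as a mapping from positions to sets of occupying terms is a simplifying assumption, and the example terms are illustrative only):

# Hypothetical sketch of constructability transfer.
# A constructor paradigm is represented as {position: set of occupying terms}.
C1 = {"P1": {"marcher", "aller", "courir"}}
C2 = {"P2": {"marcher", "dormir"}}

def constructability_transfer(c1, p1, c2, p2):
    """Terms abductively licensed in position p2 of c2 because some term
    occupies both p1 of c1 and p2 of c2 (the bi-occurrent term)."""
    t1, t2 = c1[p1], c2[p2]
    if t1 & t2:                      # at least one bi-occurrent term
        return t1 - t2               # its homologs in c1 become candidates in c2
    return set()

# 'marcher' is bi-occurrent, so 'aller' and 'courir' are abductively licensed
# to occur in P2 of C2 (possible, not necessarily needed or felicitous).
print(constructability_transfer(C1, "P1", C2, "P2"))   # {'aller', 'courir'}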
For a given bi-occurrent term, what determines whether this possibility will actually be realized, whether the constructability transfer takes place or not? Firstly, it is a question of need: not for every homolog in C1 of the bi-occurrent term does the need to be built into C2 arise. That a constructability transfer is possible does not entail that it is necessary or useful. The push to operate the transfer is subjective, that is, it proceeds from the speaker and from no other instance: this is how far it has to go; it is not enough to say semantic, pragmatic or cognitive. But when we say 'subjective', we comprise these three things and others in addition.
Secondly, it is a question of the quality of the result: that a constructability transfer is possible does not entail that its result is felicitous in all cases. The construction must in addition be free of defects of all kinds (phonological, semantic, garden paths369, interpretation difficulties). The quality of the result is computed before enunciation, the speaker anticipating by simulation the possible effect of the planned utterance on the interlocutor; alternatively, the issuer realizes it afterwards, by perceiving how (un)successful the utterance was.
Constructability transfer is used by process B2-B3 which analyses a linguistic form and
was exposed in Chap. 4.
13.2.2. Critique of constructability transfer
The potential objection is evident: this abductive movement is too loose, it can let almost anything happen, and it will be easy to bring out examples among those which initiated the question of sub-categorization. But it must be remembered that this question holds in a categorial frame only.
In the adopted frame, the response cannot be a 'demonstration' of the adequacy or not of
constructability transfer thus defined, because it will not be possible to 'characterize' its shortcomings in the first place.
First, we shall answer that speakers spend their time producing 'deviating' utterances;
they spend their time soliciting the resources of their linguistic knowledge, that is,
pulling them a little aside from attested uses. That a very deviating utterance gets analysed by the model is not, finally, important if it is never produced in any situation. We are not
going to begin bounding grammaticality.
We will also ask to take account of proximality, against the centralism of the rule (and
the resulting efficiency loss). The rule (with the category) deprives proximal processes
of the benefits of proximality, that is, of the freedom to do the best with portions of the
knowledge that are most congruent with the terms of the task. When these latitudes are
reinstated, many difficulties disappear which are just side effects of regularism.
We shall also observe that, within the current perimeter of the model which covers
meaning very little and pragmatics not at all, some accidents happen which are due to
this lack of coverage. They would be corrected upon an extension of the model.
Finally, we will grant that the current C-type record follows too simple a schema which
does not yet capture enough similarity, or not always the right one.
13.3. Abductive movement by expansive homology
The abductive movement by expansive homology was introduced p. 85.
369 A garden path is the situation in which a syntactic ambiguity leads to opting for the interpretation which looks the simplest at first sight (for example following the minimal attachment clause of Janet Fodor), thus minimizing the cognitive load, but this analysis is then contradicted by the rest of the utterance, which imposes a different interpretation, the one which was less preferred initially. Example in spoken French: Jean qui va passer son baccalauréat à la fin de l'année… gâchée par les révisions (the example is from Ligozat 1994, p. 20).
Expansive homology is not a primitive: if an axiomatic and deductive approach were taken – this is a pure counterfactual, as the general character of this work is not such – we would deduce it from the movement by transitivity and from the movement by constructability transfer, in the inscription configurations which lend themselves to it, that is, those in which a term and its expansion (or some of their distributionally similar terms) are homologous. In this, expansive homology is different from the three other abductive movements: none of those can be made a consequence of the others. Whether expansive homology is made a 'theorem' or is taken as an 'axiom', it has sufficient importance and dignity to be described in particular. This is why the phrase 'abductive movement by expansive homology' is used. Because of its importance, this abductive movement was described in some detail in the section quoted above and there is no further complement to add.
13.4. Abductive movement by transposition
The abductive movement by transposition was introduced p. 87. Here, we will
investigate in detail the validity of analogy transposition, find it imperfect, fail in an
attempt to characterize the imperfections, assess the incidence of this imperfection on
the abductive movement by transposition, and conclude that the movement is
dependable nevertheless.
13.4.1. Principle of analogy transposition
The abductive movement by transposition is schematized by:
X : Y :: A : B  ⇒  X : A :: Y : B.
From the former analogy the latter is abducted, which is the 'transposed' analogy (terms
Y and A are simply swapped).
Currently, transposition is used by agent ANZ which was described in Chap. 5 (p. 129).
Therefore it is important to examine when analogy transposition is valid. The
investigation of a set of examples showed that sometimes it is the case and sometimes
not. Seeking a mathematical demonstration or refutation would be moot: analogy is
underspecified and cannot be defined mathematically.
Thus the abductive movement by transposition works most often but not always. For
example it works well in the French articles and in the verb paradigms of Indo-European
languages. It also works well in the Japanese verbal syntagm. It therefore accommodates an integrative morphology and an agglutinative one as well.
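Operationally the schema is trivial to state; a minimal Python sketch follows (the felicity of the result is deliberately not checked, since, as the rest of this section shows, it cannot be decided formally):

def transpose(analogy):
    """X : Y :: A : B  =>  X : A :: Y : B (terms Y and A are simply swapped)."""
    (x, y), (a, b) = analogy
    return ((x, a), (y, b))

# The French articles transpose well:
print(transpose((("le", "la"), ("un", "une"))))   # (('le', 'un'), ('la', 'une'))

# The operation is purely formal: whether the result is still a good analogy
# (for instance for the mereological or 'cognitive' cases below) is another matter.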
13.4.2. Transposition of "linguistic" analogies
Informally, and for local purposes, 'linguistic analogies' are ones in which the placement of forms in language paradigms prevails over the referent or the meaning effect; the opposite cases are called 'cognitive analogies' below. This is just a classifying measure and the demarcation between the two classes is not sharp.
13.4.2.1. French articles → transposable
(a)   le : la :: un : une
(a')  le : un :: la : une
In (a) the vehicle and the topic are the ratio that grammarians analyse as masculine-feminine. In (a') the vehicle and the topic are the ratio that grammarians analyse as defined-undefined. This transposition operates well.
13.4.2.2. le : la :: homme : femme → transposable
(a)   le : la :: homme : femme
      the (masc.) : the (fem.) :: man : woman
(a')  le : homme :: la : femme
As previously, in (a) the vehicle and the topic are the grammatical gender. In (a') the
vehicle and the topic are the move from the definite article to the name of the exponent
of the human species with the same grammatical gender as the article, and conversely.
Curious as this clause may sound, these ratios are precise and good, they are biunivocal
(cf. p. 63); the proportional fourth is well determined in all senses and this transposition
operates well.
13.4.2.3. l'un : l'autre :: celui-ci : celui-là → transposable
(a)   l'un : l'autre :: celui-ci : celui-là
      one : the other :: this one : that one
(a')  l'un : celui-ci :: l'autre : celui-là
In (a) the vehicle and the topic are a relation of rank or of proximity. In (a') the vehicle is not simple to express. One may say that "l'autre" is a quasi-synonym of "celui-là", but this is not a very good expression of the ratio because "le second" is another quasi-synonym and yet:
(a'')  l'un : celui-ci :: l'autre : le second
       the one : this :: the other : the second/the latter
is not an acceptable analogy, in any case not like (a'), which contains something much more precise. Even if the analysis of the ratios is difficult, speakers will often feel that (a') is good, and will often accept it. Overall, it is a mixture of meaning effect and of formal variation which is almost a suppletion; it operates in these two dimensions, which interact well. So the transposition is good in this case.
13.4.2.4. je : je souhaite :: tu : tu veux → not transposable
(a)   je : je souhaite :: tu : tu veux
      I : I wish :: you : you want
(a')  je : tu :: je souhaite : tu veux
In (a) the vehicle and the topic are a part-whole relation, a mereologic relation. (a) is an
acceptable analogy.
In (a'), for the leftmost pair, the ratio is 1S : 2S whereas in the rightmost pair, it is not
this.
Analogy (a), "What is to je souhaite as tu is to tu veux?", is good because the answer is univocally determined: it can only be je. In the transposition (a'), "What is to tu as je souhaite is to tu veux?", there is no determination that would be near univocal. This analogy infringes the bijection constraint already discussed p. 63. Thus, analogy (a) does not transpose at all. So it is for all mereological analogies when the part, in the role that it plays, does not determine the whole.
13.4.3. Transpositions of arithmetic analogies
13.4.3.1. Arithmetic, sum → transposable
(a)   12 : 9 :: 6 : 3
(a')  12 : 6 :: 9 : 3
In (a) the vehicle and the topic are the addition of 3. In (a') the vehicle and the topic are the addition of 6. This analogy transposes.
All similar analogies, interpreted as arithmetic sum, transpose. It is so because (a) can
be "interpreted" by X - Y = A - B whence it follows that X - A = Y - B, which is the
"interpretation" of (a').
I just wrote:
analogy (a) is "interpreted" by X - Y = A - B
that is:
a given concrete, exemplarist analogy is "interpreted" by a given categorical (and
symbolic) proposition.
Such a move is not self-evident: "What! You pretend to expel categories and make a
symbol-free theory, and you allow yourself this negligence, which contradicts your approach and compromises it. This is not acceptable". The reader may note that "interpret" has quotation marks around it. The intent is momentary only: in an effort to assess the scope of transposability, one might have listed two or three pages of such examples – you would have skipped them – and suggested by abduction that numerous other examples 'worked as well'. Up to where? Up to the point where a subject with a moderate ability in arithmetic nevertheless has a naive arithmetic knowledge which covers his ordinary needs and, like everybody, finds it difficult to cope with large figures: a fuzzy and variable frontier. As you are supposed to be educated in arithmetic, this shortcut was proposed, but it must be seen as a shortcut only and connivance is asked for it. In
was proposed but it must be seen as a shortcut only and connivance is asked for it. In
particular, the vision which is proposed for interpretation – or understanding – in this
theory is not to map exemplarist linguistic forms onto categorical and propositional
knowledge; this vision is described elsewhere as an 'immersion' process, cf. p. 263.
13.4.3.2. Arithmetic, product → transposable
(a)
18 : 9 :: 6 : 3
(a')
18 : 6 :: 9 : 3
In (a) the vehicle and the topic are multiplication by 2. In (a') the vehicle and the topic
are multiplication by 3. This analogy transposes.
All similar analogies, interpreted as arithmetic product, transpose. It is so because (a) is
"interpreted" by X / Y = A / B whence it follows that X / A = Y / B, and this is the
interpretation of (a').
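The following small sketch (mine, not part of the dissertation's apparatus) merely restates this point in executable form: under a sum or a product "interpretation", the direct analogy and its transposition hold or fail together. The function names are invented for illustration.

def holds_as_sum(x, y, a, b):
    # x : y :: a : b read as an arithmetic difference
    return x - y == a - b

def holds_as_product(x, y, a, b):
    # x : y :: a : b read as an arithmetic ratio (written without division)
    return x * b == a * y

def transpose(x, y, a, b):
    # swap the inner terms: x : y :: a : b  ->  x : a :: y : b
    return x, a, y, b

assert holds_as_sum(12, 9, 6, 3) and holds_as_sum(*transpose(12, 9, 6, 3))
assert holds_as_product(18, 9, 6, 3) and holds_as_product(*transpose(18, 9, 6, 3))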
13.4.3.3. Arithmetic, exponentiation → non transposable
(a)
25 : 5 :: 9 : 3
(a')
25 : 9 :: 5 : 3
In (a) the vehicle and the topic are exponentiation by 2, but (a') is uninterpretable. It is
not an analogy. Analogy (a) does not transpose.
13.4.4. Transposition of "cognitive" analogies
Informally and locally, "cognitive" refers to analogies for which the referent of the
meaning effect prevails over the possible placement of forms in a paradigm of the
language.
13.4.4.1. Motherhood → non transposable
(a)
my mother : me :: your mother : you
(a')
my mother : your mother :: me : you
In (a) the vehicle and the topic both are the relation between mother and child. First
attempt: in (a') my mother and your mother may be sisters, then you and I are cousins
(Fr. cousins germains). If we remember that in Catalan 'brother' is 'german', (a') may be
accepted if absolutely necessary, but the fact that my mother and your mother are sisters
is an assumption proper to (a') and it is not at all necessary to (a). So this transposition
holds poorly and only at the expense of an assumption foreign to the direct analogy.
Second attempt: in (a'), it is possible to interpret the vehicle and the topic as 'being of
the same generation' or, more precisely, as 'being at the same generation lag'. At this price,
(a') is an analogy. But the price is rather high or, to put it better, the yield of this
interpretation is poor because (a'), understood in this way, is devalued compared with (a),
which is much more precise. Third attempt: if you and I are enemies, our
mothers are enemies. Maybe, but numerous other ratios are equally possible; more
context would be necessary to single out this interpretation among many others, which
amounts, as in the first attempt, to making an assumption which is foreign to the direct
analogy. Finally, in this case, the transposition is bad.
13.4.4.2. cup : Dionysus :: shield : Ares → non transposable
(a)
cup : Dionysus :: shield : Ares
(a')
cup : shield :: Dionysus : Ares
In (a) the vehicle and the topic are both the relation from representer to represented,
or from signifier to referent, or from signifier to signified, or of conventional attribute, as
you prefer.
In (a') the vehicle is not clear: what is the relation between Dionysus and Ares, except
that both belong to the Pantheon – unless one is the other's father-in-law, which we would
have to look up in a reference book; but here again, doing that would be introducing data foreign
to the initial analogy. The topic (cup : shield) is generally no clearer. This analogy
does not transpose. This case does not seem to differ from the next one.
13.4.4.3. Capital cities and countries → non transposable
(a)
Caracas : Venezuela :: Roma : Italy
(a')
Caracas : Roma :: Venezuela : Italy
In (a) the vehicle and the topic both are the relation from capital city to country. In (a')
the vehicle is a pair of countries between which the relation is not clear (different
continents? They are not the only ones in that case. A Latin language is spoken in both?
Not characterizing, and somewhat poor. And so on.) The topic is a pair of cities
between which the relation is no clearer. So an interpretation through the ratios gives
nothing. In order to make meaning out of (a'), we can try to profile along possible
attributes shared by countries and large cities: population, pleasure of living,
violence, etc. Then it becomes possible to understand things like 'Caracas is 25% less
rich than Rome as Venezuela is a quarter less rich than Italy', but numerous other
propositions of that sort are also possible. This transposition is bad.
13.4.4.4. "siblinghood" → non transposable
Assume Alex is my brother and Bea your sister.
(a)
I : Alex :: you : Bea
(a')
I : you :: Alex : Bea
(a'')
Alex : Bea :: I : you
In (a) the vehicle and the topic are both the siblinghood relation.
In (a') the vehicle may be the interlocution relation: you and I are talking to each
other. It might also be any other relation between you and me that would be well established
between us, but once again this would be calling on data foreign to the proper data of (a).
The topic (Alex : Bea) is subject to the same discussion and to the same doubt. This
analogy does not transpose.
13.4.5. Characterizing transposability
We have just seen that certain analogies transpose and others do not. Is it possible
to characterize this, to find a criterion for it? A characterization effort is always
interesting: if it succeeds, it points to something new, a possible structure which
authorizes a local reconstruction that is more interesting, descriptively and theoretically.
13.4.5.1. 'Linguistic' analogies vs. 'cognitive' analogies
In the analogies which were investigated, the first candidate generalization is that
'linguistic' analogies transpose well while 'cognitive' ones, that is, those which derive their
value from the properties of their referents (including their virtual referents370), do not.
This is true most often.
It is false for arithmetic sum and arithmetic product: these analogies are "cognitive" but
transposable.
13.4.5.2. Determination of the proportional fourth
A more general criterion would be the possibility for three terms to determine the fourth,
to determine the proportional fourth.
This phenomenological qualification is not very powerful: it is not far from tautological
and cannot be connected with anything else, but it would be in the spirit of analogy.
Now this criterion is false: in the case capital cities-countries, in the direct analogy the
proportional fourth is very well determined and yet this analogy does not transpose.
13.4.5.3. Bijection
As there is in analogy something of the order of unique determination, one may think of
bijection. But the idea falls short because a bijection is a mapping of one set onto another
or of a set onto itself, whereas in analogy, whatever set we may define, there are two
mappings. This apart, each needs to be quasi-bijective only, not strictly
bijective.
13.4.5.4. Bidimensionality
The generalization which is then suggested is: an analogy is transposable when there is
some sort of underlying bidimensionality in it. The four terms are in a bidimensional
system, which sometimes is analysed into two features.
In the French articles, the two dimensions are gender and definiteness; usually, both are
described by features in formalized grammars.
In the case le : la :: homme : femme, one of the dimensions is the gender feature. The
other is not clear at all; there is something, but we do not know very well how to speak
about it.
In the case of l'un : l'autre :: celui-ci : celui-là, even if they are not usually described by
features, the two oppositional dimensions are indeed present but, here again, they are
difficult to characterize371.
370 In the terminology of Milner (1982), actual reference denotes the term's referent and virtual
reference its lexical meaning. A referential term has a virtual reference independently of its usage, but it
has an actual reference only in usage context. Only when appearing in an utterance produced by a speaker
can one ascribe a referent, an actual reference, to a referential term. Moeschler 1994b, p. 349-350.
371 The case l'un : l'autre :: celui-ci : celui-là also illustrates the fact that dimensions may span few
exemplars, contrasting for example with the dimensions of verbal paradigms, which span thousands of
forms.
In the case 'capital cities and countries', one oppositional axis is precisely the capital city–country axis, but for the second one it is not possible to find anything very clear. It is
the absence of a second axis which inhibits the transposition.
It must be noted that bidimensionality may be an excerpt from an underlying system
with more than two dimensions. This is the case in verbal morphologies, be they
integrative (Indo-European type) or agglutinative.
So, the bidimensionality criterion is not a bad idea, but it is not always possible to
characterize the second dimension.
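As a rough illustration of the bidimensionality idea – a sketch under assumed feature values, not a claim about how the plexus encodes anything – the four French articles can be placed on a two-feature grid; the direct analogy and its transposition are then both consistent:

FEATURES = {                     # assumed features: (definiteness, gender)
    "le":  ("def",   "masc"),
    "la":  ("def",   "fem"),
    "un":  ("indef", "masc"),
    "une": ("indef", "fem"),
}

def transition(p, q):
    # per-dimension change from term p to term q (None means unchanged)
    return tuple(None if x == y else (x, y)
                 for x, y in zip(FEATURES[p], FEATURES[q]))

def consistent(a, b, c, d):
    # a : b :: c : d holds when both pairs realize the same change
    return transition(a, b) == transition(c, d)

assert consistent("le", "la", "un", "une")    # direct analogy
assert consistent("le", "un", "la", "une")    # its transposition

When no second dimension can be exhibited, as in the capital cities–countries case, no such grid exists and the transposition fails.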
Finally, we found it hard to characterize analogy transposability univocally. This is not a
surprise: analogy being underspecified, it does not lend itself to theorizing in itself.
What can be approached, on the contrary, is a theorization of its operation within an overall
dynamics in which it operates with other elements and other mechanisms.
13.4.6. Transposability and movement by transposition
The analogy transposability which has just been surveyed needs to be considered together with
the transposition movements that take place in agent ANZ (the base element accountable for
systemic productivity).
In agent ANZ, the transposition movement is equated with (concomitant with)
positioned resetting, that is, with the swapping of the roles of the three running terms,
that is again, with the reassignment of their positions. The transposition movement
makes a substitution between horizontal pair and vertical pair, trying all possibilities and
thus causing the recruitment of a commissioner, when this proves possible. This
constitutes positioned resetting.
If the current analogy – that which is associated with the recruiting agent – is such that
its transposed analogy is good, this legitimates the transposition movement. What if the
current analogy does not transpose? The worry is the risk of transposing wrongly, and that
ensuing heuristic branches might lose the relation with the initial terms of the task after
such a questionable movement. The questionable movement is indeed attempted, but it
leads to nothing because the pair that characterizes it finds no echo in the plexus: there
is no entry for it in the index of analogical pair occurrences. The vision of this index (cf.
p. 299), which coindexes only the pairs that play a part in systemic analogies, amounts to
this: all transpositions may well be attempted; they are productive only for analogical
pairs, since only these are coindexed.
Consequently, the transposition movement is immune to non-transposability: all
transpositions are tried, even the bad ones, but the only successful ones are those for
which the new pair is coindexed. This de facto filtering discards bad transpositions
immediately after their attempt. Agent ANZ is thus functionally very strict despite
analogy under-determination and the restrictions on its transposition.
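A minimal sketch of this de facto filtering (the data and the choice of the characterizing pair are illustrative assumptions, not the model's actual index):

PAIR_INDEX = {                       # pairs occurring in systemic analogies
    frozenset(p) for p in [("le", "la"), ("un", "une"),
                           ("le", "un"), ("la", "une")]
}

def transpose_if_coindexed(a, b, c, d):
    # the transposition a : c :: b : d is attempted in any case, but it is
    # productive only if its characterizing pair (a, c) is coindexed
    if frozenset((a, c)) in PAIR_INDEX:
        return (a, c, b, d)
    return None

print(transpose_if_coindexed("le", "la", "un", "une"))                  # ('le', 'un', 'la', 'une')
print(transpose_if_coindexed("my mother", "me", "your mother", "you"))  # None: filtered out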
13.4.7. What becomes of familiarity orientation after transposition
Before a transposition movement, the recruiting agent is located on a plexus record, and
after it, it is located on a new plexus record. What about the mutual familiarities of these
two records? The underlying question being: does it make sense that the transposition
movements should observe familiarity orientation? Or: can transposition movements
take advantage of familiarity orientation?
The question is licit in principle, but the device's organization means that it does not arise,
or cannot be posed. It is so because familiarity orientation is defined on
paradigmatic links only; now, by definition, the transposition movement is a resetting
and, like any resetting, it consists in something other than crossing a paradigmatic link.
Therefore, the familiarity of the commissioner agent is impossible to relate with that of
the client agent. Of a transposition movement, it cannot be said that it moves towards
higher, lower, or equal familiarity.
13.5. Solidarity between the plexus and the dynamics
In this model it is not possible to assess the value of a plexus separately from the
computations which use it and, symmetrically, it is not possible to assess the
computations separately from a plexus to which they apply; between the two there is complete
solidarity. This places a constraint on the elaboration of the model: it is not possible to
work separately on one or the other.
Likewise, it would be pointless to give a static description of the plexus and its paradigms
without telling how the computations use it. Doing so, one cannot motivate
why one link rather than another, or why a given record. It is more convincing to
show dynamic effects. For example, agent CATZ, used in the suggestion of similarities,
demonstrates proximal and flexible categorization.
The best that could be done to break the solidarity between the plexus and the
computations was to formulate the four analogical abductive movements372 and a
functional notion: the expansive gate. To some extent, they make it possible to reason
on either the plexus or the computations while confining the impacts. For
example, it is possible to question whether an area of a plexus has the necessary
expansive gates. Symmetrically, when designing the computations, it is possible to take
for granted in principle that the four abductive movements will be possible without
defect in the plexii to which they will be applied, and this can be checked separately in a
distinct operation.
But this confinement is far from complete; there remain many effects which still
demonstrate interactions that are not entirely circumscribed by the four movements. The
abductive movements themselves, as has been explained at length, are subject to
comments, restrictions, precautions of application, long-distance effects or delayed
effects which make them impure.
As already mentioned, the computation in the model has two functions: a) to model the
linguistic computation that takes place in the brain with high parallelism, and b) to
provide a serial equivalent of this process – to serialize it – so that it can be run on an
ordinary computer. The modelling function has scientific relevance, whereas
serialization is technical and artifactual only. Ideally, both ought to be separated. If a
372 As a reminder, the four analogical abductive movements are: by transitivity, by constructability
transfer, by expansive homology, and by transposition.
parallel processor some day became available, the modelling function, being neatly
separated, would be the component to implement directly on this processor, and
serialization would disappear, absorbed by the hardware.
Unfortunately, in the current state of the proposition, they could not be separated,
neither conceptually nor organically. This constitutes a track for future research, but it
may also be the case that the separation is not achievable.
14. Appendix: Specification of the dynamics
14.1. Position and function of ABS in the model
Agent-based solving (ABS) is a possible implementation of the dynamic side of the
model. It was not the only one: there existed a first implementation which was less
architectured and harder to make evolve … but with better performance in its limited
scope of application. On the other hand, ABS is not the ultimate solution: this
component might eventually be replaced with a functionally equivalent one but with a
better design, or broader coverage, or any other desirable quality. What matters is not
the particular architecture of ABS, but what can be concluded from the experiments it
supports. There is no claim that ABS is a reasonable model of brain operation when
accomplishing linguistic tasks. ABS is rather a tool to explore questions like:
- the overall integrativity, which potentiates and empowers fragmentary, heterogeneous linguistic data,
- the value of the proposition of proximality (against totalism),
- can linguistic knowledge be limited to the inscription of similarities of differences, that is, does analogy suffice to structure linguistic knowledge or is something more needed,
- does a purely exemplarist and occurrential memory suffice, and if it does not, what other model can be proposed for gradient and flexible abstraction and categorization, etc.
ABS may be viewed as performing two functions: a) modelling: ABS implements an
inherent model, which is a model of linguistic processing by the brain; abundant and
converging evidence shows that this processing is highly parallel; and b) serialization: ABS converts
the inherent model into a serial equivalent, which is indispensable for it to be run on a
von Neumann machine. One might like to see the two functions sharply separated, but this is
not the case currently; the separation did not arise on its own and was not deliberately
sought; I do not know whether it is possible.
An element which arose on its own in the course of the design was the distinction
between recruiting process and edification process.
This appendix is followed by a few more which treat each agent separately, so it limits
itself to the common architecture and the general processes. The description is formal
enough to provide for the reproduction of the results; the formality may be at the
expense of pedagogy, for which introductory material and examples were provided
abundantly in chapters 3, 4 and 5.
14.2. Requirements for the architecture of the dynamics
The computation must solve linguistic tasks without being limited to a predefined set of
tasks: the architecture must be open because it is a research enterprise. In the general
case, a task is implemented by the cooperation of agents of several types; the computing
architecture must ensure the interworking of the various types of agents, here again in an
open-ended approach: adding an agent must be possible at marginal cost, without incurring
a completely new design.
The computation must be integrative: it must integrate the effects of several agents, and
it must integrate plexus inscriptions which are sparse and heterogeneous.
The computation must be abductive because it is assumed that linguistic dynamics are
abductive.
The products of the computation – intermediate products in particular – are required to
be multiple, concurring or competing, thus acknowledging the conclusions of the
optimalist current in linguistics373 – and those of the connectionists – and providing
them with an operable support. The products therefore have strengths.
It is also necessary that the computations be time-sensitive, to reproduce the time-sensitiveness of real linguistic acts. For example, certain utterances, not necessarily the
longer ones, are more difficult to understand than others; the cognitive costs differ
depending on the case, and the processing time is longer. The conjecture is that the
linguistic knowledge is mobilized piecewise and gradually.
Finally, the design must be able to evolve, even at the expense of performance, because
this is a research tool and it must be possible to explore different ideas.
14.3. ABS is indebted to Copycat
The elaboration of ABS is indebted to Douglas Hofstadter, and to the conception
of the workspace in Copycat in particular. That text was decisive in a design effort that was
resisting. Although ABS is finally very different, it encompasses several ideas freely
reinterpreted and adapted. This is an explicit acknowledgement. The following
paragraph summarizes the source text374; the ideas of Copycat which have an echo in
ABS are marked with a plus (+), and those which do not with a minus (-).
A construction yard where several teams are at work (+). Several structures of different
sizes are simultaneously under elaboration (+). Any structure can be undone to leave
room for new ones (-). Initially, the process receives raw data without links between them
(+). Small agents (codelets) patrol, seeking features of various sorts (+). Items acquire
descriptions and are linked following different perceptual structures (-). The salience of
an object in the workspace depends on its importance and its unhappiness; this
determines the degree of attention which it receives (+/-, the activity control mechanism
could be compared). Salience depends on the workspace (here the heuristic structure)
and on the slipnet (?, there is no recorded knowledge in Copycat, so nothing analogous
to the plexus of ABS). An object is more important if its description is richer and has
more active nodes (+). An object is unhappy if it has few connections to the workspace
(the squeaky wheel gets the grease). Reification (of pairs of neighbouring
objects) is the creation in the workspace of links between objects (links of similarity, of
consecutivity, of precedence). Links have strengths which vary dynamically: conceptual
depth and corresponding activation in the slipnet, plus prevalence of similar links in the
neighbourhood (-, because in ABS there is no analog of the Copycat link, unless the
channel might be seen as a possible analog, but this is not very striking).
373 Smolensky 1999 for a summary of the principles; Kager 1999 for a more systematic exposition.
374 Copycat, The workspace, Hofstadter 1995, p. 216 et seq.
14.4. Solving with agents
A solution which satisfies the requirements above was adopted; it is based on agents375.
The computation is carried out by the cooperation of a number of agents which belong
to defined types. Each agent is vested with a duty. An agent recruits more agents and
assigns them a duty derived from its own. Agents may be of different types. The model
evolves easily by a) adding a new agent type, and b) modifying an existing one.
Provided these evolutions comply with the specifications of other agents, complexity is
controlled and so is the evolution cost.
This architecture is called 'agent-based solving', in short 'ABS'.
ABS integrates the effects of agents of different types: an agent may recruit
commissioners of the same type or of types different from its own. Numerous examples
in chapters 4 and 5 illustrate integration effects.
Beside agents are channels. An agent recruiting another agent does so via a channel
when the contribution called by the recruitment is syntagmatically determined.
The set of agents and channels such as it develops at a given instant to support a
computation is the 'heuristic structure'. In the simplest cases its form is a tree and a
lattice in more complex ones.
Schematically, each branch of the heuristic structure is strictly exemplarist: it
encompasses a limited number of terms which are exemplars. These terms are strictly
copositioned with respect to one another. New terms follow one another at different
positions.
The general operation of ABS ensures the preservation of positionality, that is, of the
copositionings of the terms which follow one another at defined positions in the course
of a computation.
The simplest schema of copositioning conservation is walking through a single
paradigm using paradigmatic links. A schema beyond the latter is positioned resetting.
375 The metaphor of economic or administrative agents is deliberate. In addition to Hofstadter, the notion
of agent in ABS also owes to the agents of Minsky (1986): as with the latter, ABS agents are numerous,
autonomous, specialised, simple and short-sighted.
Positioned resetting was described in detail in section 7.3.5. Positioned resetting (p.
206).
14.5. Agents
14.5.1. Agent
An agent is a short-sighted entity: it has a limited intelligence and a limited perception
of its contribution to the process that uses it. It has a duty assigned. To fulfil its duty, an
agent considers in the plexus the linguistic data that matches its duty and a) the agent
may identify a coincidence and perform a settlement, and b) it may recruit more agents –
its commissioners – to help fulfil its duty or prolong its effect.
An agent has exactly one delivery point, which is a channel.
Redundancy control: there may not be two agents with the same type, with the same
duty, and delivering at the same delivery point. This clause ensures that, when
exploiting a paradigm, a single route will be taken in the paradigm, that is, from the graph
which this paradigm is, a tree will be extracted, without cycles and without the same
record being reused twice in the same way. However, two agents with the same type and
the same duty are possible if their delivery points are different.
14.5.2. Agent duty
An agent has a duty which specifies what is expected from it. An agent duty is made up
of one to a few elements; six at most in the current implementation but this limit is
contingent. These elements are:
- either term identifiers, that is, term numbers,
- or term occurrence identifiers, consisting of a record identifier plus the site where the term occurs in the record,
- or field data: so far field data are the start and the end of spans in a linguistic form under analysis, that is, the rank in this form of the first character and of the last character of segments of this form.
All the components of an agent duty are implemented as numbers referring to the plexus
or to a linguistic form under analysis. This is an implementation decision but different
ones would be possible, in particular, in a different implementation, segments of
linguistic form could be part of an agent duty.
The agent duty is used to watch for settlements, that is, coincidence between it and the
data in the plexus that best matches the duty.
It is also used to produce duties for potential commissioner agents. Commissioners with
such duties are actually recruited if the non-redundancy condition is observed: no two
agents with same type, same duty and same delivery point.
These are general clauses. They are particularized for each agent type, please refer to the
particular agent schemas in the appendixes below.
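By way of illustration only – the names and types below are assumptions, not the dissertation's data structures – an agent duty and the triple used by redundancy control might be represented as follows:

from dataclasses import dataclass
from typing import Tuple, Union

TermId = int                                # a term number in the plexus

@dataclass(frozen=True)
class TermOccurrence:                       # record identifier plus site
    record: int
    site: int

@dataclass(frozen=True)
class FieldSpan:                            # start and end of a span in the form
    start: int
    end: int

DutyElement = Union[TermId, TermOccurrence, FieldSpan]
Duty = Tuple[DutyElement, ...]              # one to a few elements

@dataclass(frozen=True)
class AgentKey:
    # the triple that the non-redundancy clause compares
    agent_type: str
    duty: Duty
    delivery_point: int                     # identifier of a channel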
14.5.3. Life cycle of agents
Agents are created either by recruitment, or by the edification process.
In the case of creation by recruitment, upon creation, that is, in the same phase or in the
immediately following one, they take, depending on their type and the plexus data
matching their duty, one or several of the following actions:
i) unconditional raising of a finding, ii) production of a finding conditioned by a
settlement, iii) recruitment of more agents. After this, they cease to be directly useful to
the computation and might disappear, but they are conserved for the following three
reasons:
a) redundancy control: lest a useless and harmful redundant heuristic
structure build up, two agents with the same type, the same duty, and the same delivery point
must not be allowed; this condition could not be enforced if agents disappeared.
b) explanation: the analysis of the processes must be possible after their completion;
this requires investigating the agents in detail and therefore conserving them.
c) measurement: the total number of agents created by a process is a measure of its
cost; upon process completion, it must be possible to count the agents.
In case an agent was created within an edification process, the agent may still serve
several phases after its creation. So deleting agents is even less advisable in this case.
Making agents persistent in this way obviously raises questions of plausibility. It
would also have an implementation impact if the model were extended to more than a
limited linguistic task. This is not directly faced in this research.
14.6. Channels: syntagmatic positions
14.6.1. Notion of channel
An agent may have channels. When an agent recruits a commissioner, it may do so
directly or via a channel. Channels are an instrument to reconcile a great architectural
flexibility with the rigorous observance of positionality. Channels are an important
organ for positionality observance.
Channels are delivery points: they are where results are delivered by agents. That is, the
merging process merges findings together giving results that belong to channels.
A channel does not have a duty; only agents do.
A channel may be created by the recruitment process; then it has no field data and it has
exactly one client agent.
It may also be created by the edification process. Then it has field data and none, one, or
several client agents.
14.6.2. Channel usage
The first usage of channels is the case in which an agent attempts to solve with results
from two or more sources which are syntagmatic with one another. For each
syntagmatic position, a channel is created. Thus for example agents B2 and B3: they
accept terms at different channels and try to locate their cooccurrences in plexus C-type
records. Agent B2 has two channels and agent B3 has three channels.
Coreference and anaphora seem amenable to treatment by channels. A channel would be
open for the anaphoric term, and another for the antecedent, each accepting private
terms which would be their interpretants. The settlement would consist in the same private
term occurring at both channels. The same suggestion may also be applied to
relativization. A limit of this schema is that it is referentialist and extensional only. It
would work only in cases presenting this character and would not generalize. A different
solution to the problem of coreference, no doubt more adequate, requires a revision of
the C-type record which would enhance its expressive power.
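A toy sketch of the first usage of channels described above (the record list and the class are invented for illustration): an agent with two channels, one per syntagmatic position, settles when a record contains the terms delivered at both positions.

C_RECORDS = [("trop", "grand"), ("beaucoup", "trop")]     # stand-ins for C-type records

class TwoChannelAgent:
    def __init__(self):
        self.channels = {0: set(), 1: set()}              # results per syntagmatic position

    def deliver(self, position, term):
        self.channels[position].add(term)

    def settlements(self):
        # records whose two sites are filled by terms delivered at the two channels
        return [r for r in C_RECORDS
                if r[0] in self.channels[0] and r[1] in self.channels[1]]

agent = TwoChannelAgent()
agent.deliver(0, "trop")
agent.deliver(1, "grand")
print(agent.settlements())            # [('trop', 'grand')]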
14.7. Conventional forward-rearward orientation
In ABS, heuristic structures have a conventional orientation along a forward-rearward
axis. This orientation arises from the need a) to differentiate a rearward process and a
forward process (more on this below) and b) to express that recruitment develops the
heuristic structure rearwards whereas edification develops it forward. In a first approach,
the forward-rearward orientation is viewed merely as a convention. Later, it is granted
an interpretation.
When heuristic structures are presented on figures, conventional fore is on the left and
the conventional rear on the right. In the internals of the development and in certain
appendixes specifying agents (infra), the rear is abbreviated by RW (rearwards) and the
fore by FW (forward).
14.8. Development of the heuristic structure by recruitment
14.8.1. Recruiting process
The heuristic structure may develop by recruitment when the linguistic task is entirely
defined by a few terms. It is then possible to initiate the process at a unique point, the
root, which is a channel and where all results will be delivered. One agent or a few
agents are appended to the root at the initialization of the process. These agents (then
clients) recruit more agents (then commissioners) to which they assign a duty. The
commissioners recruit in turn, then behaving as clients and so on.
The recruitment of agents develops rearwards (RW); it encompasses no field data,
contrasting with the edification process. Both are contrasted below, p. 341.
The recruiting process is used in simple tasks like for example the analogical task (agent
ANZ) or the suggestion of similarities (agent CATZ).
14.8.2. Duty assignment upon recruitment
Recruitment is commanded by the duty of the client and the corresponding data of the
plexus. An agent that recruits knows how it uses these two data to assign duties to its
commissioners. This belongs to its prerogative and depends on the agent type; see the
ensuing appendixes per agent type.
14.8.3. Agent tree
After several such recruitments, agents end up forming a network which is a tree (fig.
below).
This figure assumes that there are always channels between agents: all recruitments in it
are opaque (cf. below) which is not always the case.
Several more trees, less readable because they are produced mechanically, but
illustrating model processes with better precision, appear in chapters 4 and 5.
[Figure: at the conventional fore (left), the root channel, where the linguistic task is posed and where the results are delivered after several phases of computation; rearwards (right), agents recruiting further agents via channels ('can.').]
Figure 39 Agents recruit more agents and end up forming a tree
14.8.4. Transparent recruitment
Recruitment may be transparent (agent-agent) or opaque (agent-channel-agent).
In transparent recruitment, the client agent determines the to-be-recruited
commissioners, that is, it determines their duties. It then commands the recruitment of
these commissioners. Recruitment is subordinated to the non-redundancy clause: if an
agent of this type, with this duty, and delivering at this delivery point already exists,
recruitment does not take place. Otherwise, the commissioner is created and two
relations are made.
An RC relation (recruitment) is installed between client and commissioner; it supports
explanation and the analysis of the model's operation which may be ordered after the
computation's end; the model itself, for its own ends, does not use the RC relation.
A DL relation (delivery) is installed between delivery point and commissioner; it will be
used to merge onto the delivery point the findings which may arise at this commissioner.
Note that in transparent recruitment, the forward target of the RC relation and the
forward target of the DL relation are different; for details, please refer to the agent
diagrams in forthcoming appendixes.
14.8.5. Opaque recruitment
Some agents have a syntagmatic vision: they need commissioners which bring results at
distinct positions. In the ABS architecture, each of these positions is embodied by a
distinct channel.
The agent recruits the necessary channels, according to its needs which are inherent in
the agent's design. It recruits two at least because there is no syntax with one position
only, by definition. An agent is the sole owner of its channels: channels are not shared,
that is, a channel cannot have more than one client agent. This would have to be
amended if coreference and anaphora were to be treated using channels, cf. above p. 335.
For each channel, its client agent determines the appropriate commissioners. It
commands their creation which is, as above, subordinated to the non-redundancy
condition. If non-redundancy is verified, the commissioner agent is created and two
relations are made.
An RC relation (recruitment) is installed between channel and commissioner; it supports
explanation and the analysis of the model's operation which may be ordered after the
computation's end; the model itself, for its own ends, does not use the RC relation.
A DL relation (delivery) is installed between channel and commissioner; here again, it
will be used to merge onto the delivery point the findings which may arise at this
commissioner.
Note that in opaque recruitment, the forward target of the RC relation and the forward
target of the DL relation are identical.
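The contrast can be sketched as follows (a simplification with invented names; in particular the registry standing for the central redundancy-control service and the tuple-based relations are assumptions):

registry = set()        # posted (type, duty, delivery point) triples
relations = []          # (kind, forward target, commissioner), kept for explanation
_channel_count = 0

def create_agent(agent_type, duty, delivery_point):
    # create an agent unless the non-redundancy clause forbids it
    key = (agent_type, duty, delivery_point)
    if key in registry:
        return None
    registry.add(key)
    return key

def recruit_transparent(client, agent_type, duty):
    # the commissioner delivers at the client's own delivery point;
    # the RC and DL relations then have different forward targets
    delivery = client[2]
    commissioner = create_agent(agent_type, duty, delivery)
    if commissioner:
        relations.append(("RC", client, commissioner))
        relations.append(("DL", delivery, commissioner))
    return commissioner

def recruit_opaque(client, agent_type, duty):
    # a fresh channel is created first; RC and DL then both target that channel
    global _channel_count
    _channel_count += 1
    channel = ("channel", _channel_count, client)
    commissioner = create_agent(agent_type, duty, channel)
    if commissioner:
        relations.append(("RC", channel, commissioner))
        relations.append(("DL", channel, commissioner))
    return commissioner

root = ("ANZ", ("t1", "t2", "t3"), "root")
print(recruit_transparent(root, "ANZ", ("t1", "t2", "t4")))
print(recruit_transparent(root, "ANZ", ("t1", "t2", "t4")))   # None: redundant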
14.8.6. Transparent recruitment and opaque recruitment compared
The main contrasts between transparent recruitment and opaque recruitment are
summarized in the table below:
Channel: transparent recruitment – no channel creation; opaque recruitment – creation of a channel.
Delivery point: transparent – that of the client agent; opaque – the created channel.
Type of the created agent: transparent – the type of the commissioner is necessarily that of its client; opaque – the type of the commissioner is not necessarily that of its client.
Duties: transparent – the duty of the commissioner differs from that of the client by values only; opaque – the duty of the commissioner differs from that of the client also by its nature.
Prototypical uses: transparent – stepwise exhaustion of a paradigm, resetting without settlement (ex. CATZ), resetting with settlement delegation (ex. ANZ); opaque – building of analyses (B2-B3), non-delegable settlement (ex. ANX).
Table 25 Transparent recruitment and opaque recruitment
14.8.7. Interaction between heuristic structure and plexus
Whatever the linguistic task, and at each moment, the computation depends on the
plexus content on the basis of exemplars. This property is not occasional, nor valid for
some agents only; it applies to all agents and at any point in the computation.
The productions (findings, then results) depend on the plexus, but the development itself
of the structure, that is, the determination of agents and channels planned for creation,
whether by recruitment or by edification, is also narrowly subordinated to the plexus. It
is so in association with the data of the linguistic task. At no moment is any decision
based on general reasons (for one thing, the model contains no disposition to express
general reasons); the mechanisms at play are always exemplarist mechanisms. The
mechanism is exemplarist and only that.
In addition, it is copositioned. That is, the terms are always involved several at a time, at
least as pairs, with preservation of copositionings between them all along the progress of
the computation. This holds ideally, and in actuality only in most cases, because in the
current development status of the model there is an exception: agent CATZ, which is
single-argument and, by this alone, escapes the copositionality constraint. This is felt to
be a drawback and is made responsible for certain limits. It defines a possible track for
evolution and improvement, cf. p. 353.
14.8.8. Rearward process, forward process
The rearward process develops from the conventional fore to the conventional rear, that
is, from left to right on the figures; it ensures:
- pending recruitments: agents which have to recruit do so,
- redundancy control: there may not be two agents with the same type, the same duty, and the same delivery point,
- finding production by direct raising,
- settlement detection and production of the corresponding finding.
The forward process develops from the conventional rear to the conventional fore, that
is, from right to left on the figures; it ensures:
- merging, that is, the consolidation of findings into results,
- keeping result strengths up to date.
14.9. Agent redundancy control and resource reuse
In a heuristic structure the non-redundancy clause forbids two agents with same type,
duty and delivery point.
The clause above is necessary because, without it, short-sighted mechanisms – which is what
agents are, and this short-sightedness is explicitly wanted – are liable to do and redo
the same actions endlessly. This accident happened in a first development: the model
suffocated after five or six computation steps, the computation resource was saturated
with void redundancies whose number exploded and, very quickly, nothing useful
was taking place. Recovering performance and relevance took two routes very different
in scope and nature: first redundancy control and, much later, familiarity orientation
(cf. section 12.8. Familiarity orientation).
The redundancy control clause for agents, as stated at the head of this section, may be
implemented by different techniques. Its current implementation is a central shared
service, a sort of registration office able to respond to questions of the type: is an agent
with such type, such duty, and such delivery point already in the heuristic structure? In
computing jargon this is "posting" a condition. The condition "an agent with such type,
such duty, and such delivery point is created" is "posted", which later allows the
computation to avoid redundant creations. The implementation is not difficult. The
problem with this solution is that it has a null plausibility. To quote Kayser again, a
model may have an overall plausibility without all its details being plausible. No doubt,
but detail plausibility would be an additional advantage.
One may strive for better plausibility by tagging the plexus elements that the
computation already used. If the plexus is a model of the linguistic knowledge in the
brain, and if proximality in the former is an analog of the anatomy of the latter, tagging
plexus elements may well be the analog of activations in the brain and this would set
certain parts of it in a "busy" status; therefore they would not be immediately reusable.
This track, "laying marks in the plexus", was indeed evaluated but it was not followed
because it appeared that the condition which had to be posted was not "such part of the
plexus is already used" but rather "such part of the plexus is already used in a defined
way, with defined copositionings, for the benefit of a defined part of the task". The non-redundancy clause "not two agents with the same type, duty, and delivery point" contains
two parts which prevent interpreting it as laying marks in the plexus: a) agent duties are
not plexus elements but copositioned sets of such elements, and b) the sub-clause "same
delivery point" has no possible interpretation in the plexus because channels are foreign
to it: they belong to the heuristic structure, which is something other than the plexus.
Posting then takes place in a space which is not that in which the inscriptions
constituting the linguistic knowledge are deployed; it is a much richer space.
Whether this view is right or not, it has at least two applications in the documented
behaviours of the model; that is, we have already seen above two cases in which the
same plexus data are used twice in the same linguistic task:
1. In the case of the Bavarians, cf. Figure 24 Route followed by the computation in the
paradigm (p. 150) and the associated text, the same plexus records are reused, but
with different copositionings each time.
2. In the case C'est beaucoup trop grand, cf. Figure 12 c'est beaucoup trop grand
(p. 105), the same expansive gate was used twice in the analysis of the utterance:
the record [trop]+[grand] → [trop grand] was used twice as settlement
condition, that is, as licensing record; but it was each time for a different
channel.
Redundancy control implemented as marks in the plexus would have prevented reuse in
these two cases. The question of course is whether we want this.
Either we want the model, as it does today, to reuse on a short horizon the same resources
in different ways or for different parts of a task (this may be called the "remobilization"
option), or something with a better implementational plausibility has to be found.
Or we think that neurons generally do not have this capability; then, in a strict
exemplarist approach such as this one, we must show how the same exemplars cannot be
reused twice in the short term, observing a latency or recuperation delay before a second
solicitation. But then we also have to show how, for example, the recursivity of syntax
succeeds in mobilizing different expansive gates in case of reapplication of what other
traditions would view as the same rule.
It is not simple to respond to the remobilization question today and I shall stop here, but
it is a very interesting one because it is posed at the hinge of the symbolist option and
the connectionist option: a resource which is "obliging" enough to let itself be reused
very quickly with other data, or with the same data but with different argumental
positions, actually acquires certain characteristics of rules and abstractions; the machine
tends to become von Neumannian, since doing this boils down to something which begins
to look like an operator being put in a somewhat fluid functional relation with things
which begin to look like typed data.
14.10. Development of the heuristic structure by edification
In syntactic analysis with the B2-B3 process, the elaboration of the heuristic structure
follows a mechanism different from that of recruitment. It develops towards the
conventional fore (towards the left on the figures) and does not emanate from a single
root. It was named edification376. Edification will now be presented, and then contrasted
with recruitment.
376 To refer to this second mechanism, the more natural term to use would have been construction.
However, this lexeme is already loaded: it denotes syntactic constructions as defined by Fillmore in
particular. As construction is also used with this meaning in this work, the term edification was preferred
to refer to the second mechanism whereby the heuristic structure builds up.
14.10.1. Edification
This process is more open than recruitment. Unlike recruitment, at no moment does it have
available the entirety of the data; in the course of the reception of an utterance, for
example, at a given instant, a part of the analysis work is already carried out while the
rest of the utterance is still being received. The process 'edifies' heuristic structures
which do either or both of the following: a) accept more field data (the rest of an utterance,
ensuing non-linguistic perceptions), and b) from the elaborations already made, carry
on the elaboration process abductively. Edification works towards the conventional fore,
counter to recruitment, which operates towards the conventional rear.
Edification encompasses channel creation. In recruitment, channels are optional
(depending on agents) and rare on average, whereas in edification channels lie between
all agents and are obligatory. Agent structures set up by edification may in turn initiate a
local sub-process operating by recruitment; the recruiting sub-process is a sort of
subcontractor to the edification process.
Edification is used in complex tasks like for example utterance analysis (cf. below
details on agents B2 and B3).
Tasks: recruiting process – simple tasks (e.g. the analogical task) defined by few terms which hold in an agent duty; edification process – complex tasks (e.g. receive an utterance, produce an utterance, analyse a scene).
Field: recruiting – no field; edification – there is a field, and field data participate in the definition of channels and agents.
Development: recruiting – rearward (RW); edification – forward (FW).
Mechanism: recruiting – a client agent recruits commissioner agents rearward; edification – channels whose fields are adjacent are paired forward into an agent, and when the latter produces a finding, a channel is created forward.
Channels: recruiting – transparent or opaque depending on the case (channels are optional); edification – always opaque (channels are obligatory).
Extremes: recruiting – a single root (= a single maximum) where the task is entirely defined; edification – minima installed by a third-party process, multiple maxima.
Shape: recruiting – tree; edification – lattice.
Level: recruiting – low-level, unconscious process, serializes parallelism; edification – low-level, unconscious process serializing parallelism, and higher-level, conscious process.
Table 26 Recruitment and edification
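As an illustration of the edification row of the table above (a sketch with invented names; the licensing test stands in for settlement against plexus records such as [trop]+[grand] → [trop grand]):

from dataclasses import dataclass
from typing import List

@dataclass
class Channel:
    start: int                 # field data: span covered in the utterance
    end: int
    results: List[str]

def edify_step(left, right, licensed):
    # pair two channels whose fields are adjacent; if a finding is produced,
    # a new channel covering both spans is created forward
    if left.end != right.start:
        return None                       # spans not adjacent: no pairing
    findings = [a + " " + b for a in left.results for b in right.results
                if licensed(a, b)]
    if findings:
        return Channel(left.start, right.end, findings)
    return None

c1 = Channel(0, 4, ["trop"])
c2 = Channel(4, 9, ["grand"])
print(edify_step(c1, c2, lambda a, b: (a, b) == ("trop", "grand")))
# Channel(start=0, end=9, results=['trop grand'])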
A heuristic structure obtained by edification does not have a single root (it is not a tree).
For example, in the analysis of an utterance, instead of a single root channel, several
maxima are to be found; they are channels. At a given instant, they collect the partial
analyses made so far. The heuristic structure is a lattice, since the partial order
(conventional) fore-rear relation holds between its elements. Its maxima are, on the
figures, the leftmost channels, and they are the best abductions that could be made in the
treatment of the task, that is, those which engage – in mutual conjunction, in
conjunction with the plexus, and in the best possible way overall – the greatest number
of the task's terms.
14.10.2. What recruitment and edification share
However different they may be, recruitment and edification coexist and cooperate in the
ABS architecture where they share the following functions: agent redundancy control,
the settlement-merging mechanism which forwards products to the fore, and the
mechanism of strengths and reinforcement which applies to agents and products. All
these constitute the general ABS framework which hosts agents of different types and
rules their interworking.
In addition to this, as already stated, an edification process may, at one of its points,
initiate a recruiting sub-process to fulfil a function which is limited and independent of
field data. Such a "subcontracting" is very common.
14.10.3. Field and field data
Field is informally defined as that which, in the perception of the world, is within the
subject's scope when he is busy performing a linguistic task. Field data are indexes on
elements of the situation: linguistic form exclusively so far. When extending the model's
application, the field would extend to perceived elements which are not linguistic form.
A recruitment process does not encompass field data. There are necessarily field data in
an edification process. For example, in the reception of an utterance, certain field data
are the place, in the received string, of the parts (segments or constituents, possibly
syntagms) addressed by sub-processes. In this case, field data stand, in the heuristic
structure, for places in the organization of the string being received and processed377.
More generally, in an extension of the model to non-linguistic perceptual data, field data
are bound to index the spatial determinations, the temporal determinations and the
perceptive channels of the elements subjected to the computation.
Defining the field in this way is not fully satisfying and might be criticized. The notion
of field is an intuition arising from concrete work about the settlement architectures
which are appropriate to obtain the required effects.
14.10.4. Questioning the recruitment-edification duality
The coexistence between recruitment and edification is not self-evident. The actual
development work encompassed an important number of trials which cannot be reported
here, and it is a selection, a Darwinian one in a way, which finally retained, concurrently
and complementarily, processes of these two natures. This was the result of the method
followed in this work, which consists of having some general directions about what is
desirable, some more directions about what must be rejected a priori, and leaving a broad
span of possibilities in which multiple attempts are made and finally evaluated according to
their results.
377 This organization is currently assumed to be unilinear for simplicity, but this is not a postulation
inherent in the Analogical Speaker.
It is this method which led to the stabilization of these two types of processes. It then appeared that
they might constitute a useless non-minimality and that, if this work leads towards a
theory, and if it is accepted that a theory must be minimal, it might be desirable to unify
them into a single mode of constitution of the heuristic structure.
A unification track was followed for a while. It involved questioning the conventional
forward-rearward orientation which, itself, was a result of the same method but was not
supported by a very foundational argument. A suspicion also bore on the notion of root
channel, which seemed somewhat ad hoc: in a model of the speaking and knowing
subject which strives for some plausibility, what could be the analog of this miraculous
origin from which everything emanates? A revision which would reduce the recruitment-edification duality and, on the same occasion, could also improve the treatment of these
two questions would have been welcome.
The mechanism of result processing (raising, settlement, finding merging, delivery to
delivery points with possible reinforcements) was judged to obligatorily require an
orientation. A partial order relation is necessary in the heuristic structure – calling it
'forward-rearward' or using any other convention is unimportant. Without this order, all
the good qualities of the model, including its integrativity and the gradation of its
responses with the congruence or otherwise of the task's data with plexus data,
would be lost. Whatever the architecture revision, the heuristic structure had to remain
oriented.
On this axis, which is therefore necessary, certain agents, according to their own 'logic', continue
recruiting rearwards commissioners which report results – forward – to the delivery
points of their recruiters. Simultaneously, a process like syntactic analysis, as envisaged
with agents B2 and B3, causes the creation of agents and channels which are
result of the 'logic' of a single existing agent but on the contrary associates several of
them depending on the contiguity of their spans, that is, it involves field data. And the
agents and channels to be created, far from having to report results to the already-there
elements which motivate them, are on the contrary elements to which the already-there
elements will have to report their own results. In recruitment, the causes of the
recruitment are the beneficiaries of future results whereas in edification, the causes of
edification will be result providers; they build structure pieces as sorts of assumptions to
the abductive validation of which they may contribute, now or later. Some will be
validated by some of these agents, not all of them by all agents. In writing this, I realize how
little these metaphors commit anyone to anything, in particular how little they oblige the
reader to adhere. On the other hand, it is not possible to say: 'this is how it is made and it works',
but currently I can do nothing better. At any rate, it is because of this dissymmetry that
processes with opposite orientations were allowed to coexist, and that recruitment and
edification were finally not unified.
Is this so worrying? Not that much: firstly because, en passant, the justification of the
forward-rearward orientation happened to be somewhat consolidated; secondly because
the notion of root channel, which used to be poorly motivated, is now reinterpreted. An
edification process itself does not have a root channel; it has maxima instead, several in
general. It may – it often does – initiate recruiting sub-processes. The point at which it
initiates a recruiting sub-process is the point which used to be viewed as a root channel.
Now there may be as many 'roots' as there are starting points of such sub-processes, so
they cease ipso facto to be tree roots, and what was felt to be arbitrariness and lack
of justification falls away, since the insertion of the sub-process (e.g. the suggestion of
similarities) in a larger process, itself better founded (e.g. syntactic analysis), confers
on them a better one.
14.10.5. Plausibility and scope of recruitment and of edification
Edification processes are given as models of whole linguistic tasks. To put it better, the
model of a linguistic task which can be defined externally with some autonomy
necessarily encompasses edification.
Contrasting with this, a process which does only recruitment is not plausible if
considered in autonomy, because of the doubts about the notion of root channel and the
fact that linguistic acts cannot be defined by the few terms contained in an agent duty.
So it is not claimed that any real linguistic task might be adequately modelled by a
process which would only recruit. On the other hand, we have just seen how an edification
process, itself less implausible, may require the contribution of sub-processes that
only recruit.
The division of labour would therefore be as follows. Edification is bound to model tasks with
a certain complexity, involving field data, in particular tasks with an autonomous
external definition. Whereas recruitment applies to sub-tasks of the former, therefore
ones which are dependent and do not encompass field data. This does not mean that the
latter are necessarily small; a recruiting-only process may occasionally involve a large
number of agents and channels. There are limits to these numbers but they are of the
order of plausibility, of computations remaining 'reasonable' (if really too hard, then give
up) and are not inherently associated with the fact that the process only recruits.
Does the edification-recruitment opposition coincide with the conscious-unconscious
opposition? On this point, for several reasons, only opinions can be stated. Recruitment
processes are certainly entirely and always unconscious. As instantiated
so far, they are akin to a simulation of neuronal parallelism and remote from conscious
mental mechanisms. An edification process, on the other hand, has unconscious parts and
perhaps conscious ones, and the share of each depends on the case. In the analysis
of a received utterance, for example, edification processes which stay small, like lexical
evocation, morphological analogy, or agreement between neighbouring morphemes,
are nearly all unconscious. Processes affecting longer spans, more complex syntagms,
anaphora close to ambiguity, etc. are conscious in proportion to their complexity and
difficulty, up to the resolution of garden paths upon syntactic ambiguity, which may
involve elaborate conscious thinking. Finally, the conscious-unconscious opposition
appears to be associated with conjunctions of factors and it does not seem that the
opposition edification-recruitment might be held as a model of it.
14.11. Phase management
The ABS computation develops in phases, which are a means to ensure the overall
coordination of numerous elementary processes and to serialize their operation. This is a
model: it is not claimed that mental processes are phased in this way, but it is certain that
they have a temporal development. In ABS, phasing is a model of the temporal
development of mental processes, in particular of the linguistic ones.
The model is such that each elementary action leaves elements marked as requiring the
attention of the phase management process. Phase management is the general engine in
ABS. It is responsible for the triggering of all required actions; it is the general
controller of the computation.
A phase encompasses the execution of the rearward process then the execution of the
forward process. Each agent is implemented by a rearward process and a forward
process. They are particular to an agent type and each is embodied in a computer program. If
there are 12 agent types, there are 24 such programs: for each agent type, a program for the
rearward process and, in general, another one for the forward process378. When the
phase management process finds an agent of a given type marked as requiring attention
(for its rearward process and / or for its forward process), phase management triggers
the corresponding computer program for this agent.
Phase management also ensures the forwarding of products: a finding just produced at
an agent is marked as requiring attention. A finding requiring attention is merged, that is,
it is projected as a result at the agent's delivery point (which is a channel), and the
result in question is in turn marked as requiring attention: at the next phase, it will be
considered as a candidate to participate in settlements.
Thus, the different elements marked as requiring the attention of phase management are
finally the following ones:
a) an agent created in this phase requires attention at the next phase, to activate its
rearward process (and its forward process if applicable for the relevant agent
type).
b) some agents created before this phase, but on which a particular condition
occurred in this phase, require attention at the next phase, to activate their forward
process.
c) a finding new in this phase at an agent requires the attention of the merging
process, so as to be merged into a result at the agent's delivery point.
d) a finding the strength of which has varied in this phase requires the attention of
the merging process, so that the strength change is forwarded onto the corresponding result.
e) a result which is new at a channel requires attention as a candidate to participate
in a possible settlement at the client agent of this channel.
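Purely as an illustration, the marking-and-triggering logic just described can be sketched in a few lines of Python. The names used (PhaseManager, the attention sets, the rearward and forward callables) are assumptions made for the example and are not those of the actual ABS implementation.

class PhaseManager:
    # Hypothetical sketch of phase management; names are illustrative only.
    def __init__(self):
        self.new_agents = set()        # (a) agents created during the current phase
        self.triggered_agents = set()  # (b) agents on which a condition occurred
        self.new_findings = set()      # (c) and (d) findings to be merged or re-merged
        self.new_results = set()       # (e) results that may participate in settlements

    def run_phase(self):
        # Snapshot what was marked; anything marked from now on waits for the next phase.
        agents = self.new_agents | self.triggered_agents
        findings, results = self.new_findings, self.new_results
        self.new_agents, self.triggered_agents = set(), set()
        self.new_findings, self.new_results = set(), set()

        # A phase: the rearward processes, then the forward processes.
        for agent in agents:
            agent.rearward(self)       # may mark new agents, findings, results
        for agent in agents:
            if agent.has_forward:
                agent.forward(self)

        # Forwarding of products: merge findings into results at their delivery points.
        for finding in findings:
            result = finding.agent.delivery_point.merge(finding)
            self.new_results.add(result)

        # New results become candidates for settlement at the client agents of their channel.
        for result in results:
            for client in result.channel.client_agents:
                self.triggered_agents.add(client)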
14.12. Strength management
14.12.1. Mechanism of strengths in ABS
A result has a strength tag; it is a number between zero and one that marks its relative
importance. At a channel of the heuristic structure, candidate results compete and the
strongest are the winners.
378 A few agents, the simpler ones, have the rearward process only and no forward process.
[Figure 40 (diagram): the strength mechanism. A root channel (strength 1) transfers strength to recruited commissioner agents through channels, with a damping factor of typically 0.9; a finding is raised with its agent's strength; settling applies a rule depending on agent type; merging applies the strength combination function to new and existing findings, yielding final results and strength revisions of existing results.]
Figure 40 Mechanisms of strengths
The mechanism of strengths is summarized in the above figure. As for many other
points in ABS, the implementation is only partly motivated: the detail might differ from
this one; all that is needed is an overall mechanism which behaves roughly like this one.
An agent is recruited with a determined strength. A client agent recruiting a
commissioner agent assigns to it its own strength reduced by a damping factor, typically
0.9. So agents have strengths which decrease exponentially with the phase in which they
were recruited.
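For illustration only: with a damping factor of 0.9, an agent recruited one step away from a root agent of strength 1 is assigned strength 0.9, an agent recruited two steps away 0.9 × 0.9 = 0.81, and an agent recruited three steps away about 0.73.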
An agent raising a finding assigns to it its own strength.
A result obtained by merging a single finding takes that finding's strength. When two findings are
merged into one result, their strengths are combined according to a combination function
to give the result's strength.
Here are two views of the combination function. The function is S, it combines
strengths x and y.
[Figure 41 (two 3D surface plots of S as a function of x and y, both ranging over 0 to 1, showing the camber of the surface).]
Figure 41 The strength combination function
S is a simple quadratic function of two variables, chosen to present the obviously required
values at the limits, to be approximately associative, and to be efficiently
computable. S is as follows:
delta = K² + x² + y² + 2Kx + 2Ky - 2xy - 4Kxy
S = (x + y - K + √delta) / 2
K = 0.10 = camber factor
Other technical options would be possible for S; this one is just a good compromise.
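For illustration, here is a minimal Python sketch of the combination function as written above; the function name and the sample values are assumptions made for the example, not part of the actual implementation.

import math

def combine_strengths(x: float, y: float, K: float = 0.10) -> float:
    # Combination function S: symmetric, S(x, 0) = x, S(1, 1) = 1,
    # approximately associative; K is the camber factor.
    delta = K*K + x*x + y*y + 2*K*x + 2*K*y - 2*x*y - 4*K*x*y
    return (x + y - K + math.sqrt(delta)) / 2

# Illustrative values:
#   combine_strengths(0.5, 0.0)  -> 0.5    (a lone finding keeps its strength)
#   combine_strengths(0.5, 0.5)  -> ~0.62  (two concurring findings reinforce each other)
#   combine_strengths(1.0, 1.0)  -> 1.0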
14.12.2. What selection schema
A general question is to understand how, among the elements in paradigmatic position
in a broad sense, one will end up being singled out. Two schemas are possible.
In the first schema, of which the mechanism just described is an example, the competing
elements each have a strength tag, which may evolve over time, and the winner is the one
which ends up with the highest strength. Each competitor increases its strength in
isolation from the others. Call this the election schema.
In a second schema, a mechanism between the competitors makes them thwart each
other: one can increase its strength only at the expense of its neighbours' and
competitors' ones. The point is no longer to be the best, but, in order to rule, to kill the
other pretenders. Call this the Shakespearian selection schema.
The latter schema is adopted in certain connectionist models. Thus, in the already quoted
model by MacWhinney, which involves the emergence of lexical items in a Kohonen
'self-organizing feature map' (SOFM), emerging representations of a given item
may concurrently arise at several points of the map, but the ensuing process ensures
that a single one will survive after killing off all the others.
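Purely to make the contrast concrete, the two schemas can be caricatured in a few lines of Python; the update rules below are invented for the example and are not those of ABS, nor of Kohonen maps.

def election_step(strengths, evidence):
    # Election: each candidate accumulates strength independently as evidence
    # arrives; the provisional winner is the strongest, and nothing in the
    # schema itself ever terminates the competition.
    for k, e in evidence.items():
        strengths[k] = strengths[k] + e * (1.0 - strengths[k])   # stays in [0, 1]
    return max(strengths, key=strengths.get)

def shakespearian_step(strengths, evidence, inhibition=0.5):
    # 'Shakespearian' selection: the strength gained by one candidate is taken
    # from its competitors; after a while all competitors but one are dead.
    for k, e in evidence.items():
        gain = e * (1.0 - strengths[k])
        strengths[k] += gain
        for other in strengths:
            if other != k:
                strengths[other] = max(0.0, strengths[other] - inhibition * gain)
    return [k for k, s in strengths.items() if s > 0.0]          # the survivors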
The election schema has a weakness: the winner being the element which ends up with the
highest strength, the schema does not state when this end takes place. This is because a
computation may always be prolonged and the relative strengths may always evolve,
with more remote inscriptions coming into play. The criterion "the relative order of
strengths ceases to evolve" is not a good one, because it does not specify for how
long the strengths have to be stable before concluding. The Shakespearian schema is clearer:
after a while, the competitors are dead. The weakness of the election schema can be felt
in current ABS: there is no very firm stance on the term (the end point) to be set for processes
and sub-processes, and this term does not set itself. The dispositions taken for activity
control (cf. next section) are an attempt to fill the gap, but they have an engineering
flavour and lack naturalness.
On the other hand, Shakespearian selection presupposes a metric; this is true at
least of Kohonen maps. Linguistic paradigmatics, for its part, does not have this property:
it is not very obvious how to arrange for paradigmatic competitors to watch one
another so as to thwart one another, nor even, to begin with, how they might simply be
aware of one another.
Finally, the option taken in ABS, namely election rather than Shakespearian selection,
is not very well motivated and might be revised, but today there is no firm basis for
changing it while understanding well what is being done.
A possible direction is to adopt a resource viewpoint. In the current election mechanism,
the computational resource is not bounded: agents may be added without limit to the
heuristic structure. In a real system, the computational resource is necessarily bounded.
Any new resource request should then be compensated by restitution: stripping off the
less useful or less promising areas of the computation, deactivating the areas
with low activity.
14.13. Length of computation paths
The computation does not seem to need to involve long paths in the plexus: long paths would
contradict both intuition and the results of psycholinguistics. Computation paths are
typically three to ten steps long. However, the computation is parallel and branches
somewhat: the categorization and regularization effects which are sought depend precisely
on such branchings. Some paths get invalidated very quickly, other paths remain active
longer, and still other paths, weakly activated initially, later have to be awoken (syntactic
ambiguity resolution, cf. below). Finally, the computation may occasionally become
heavy, and the treatment of a linguistic task may involve a thousand agents or more.
14.14. Activity control
As we have just seen, the election schema for paradigmatic selection does not by itself
clearly set an end to the computation: if nothing is done, longer and longer heuristic
paths develop, and they may modify the acquired results, often with little significance,
only occasionally with some relevance. For a complex task, and as the number of
phases increases, the heuristic structure then proliferates out of proportion with the
marginal benefit.
More technically put, though this amounts to the same thing, agents B2 and B3 have no
settlement criterion whereby the inflow of plexus data would naturally dry up. This is
because B2 and B3 use the CATZ agent, which is productive without limit provided the
plexus is abundant. CATZ, lacking a drying-up settlement criterion, tries everything,
even very far afield, even involving very improbable categorial drifts379.
This is not satisfactory in practice. The presence of familiarity orientation in the model,
because it greatly increases the efficiency of the heuristics, already makes this "waste"
less critical since it reduces its incidence. But the situation remains theoretically
unsatisfactory because, in a long linguistic task such as the analysis of a long text, beyond a
certain point, when the beginning of the text has been analysed and understood, and when this
temporary result has played its role in the interpretation of the ensuing text, the heuristic
activity concerning this beginning should be stopped. 'Controlling' the computational
activity in this way would direct the computational resource towards useful tracks
instead of wasting it on spurious ones.
The point currently reached in this research does not make it possible to take full
advantage of this remark, because we do not know how to interpret "the beginning of
the text is analysed and understood". Provisionally, it may be substituted with "the
beginning of the text is analysed syntactically", but we must watch for the biases this
substitution may cause. This question is not a secondary one.
In order to control activity, the first thing which comes to mind is to switch off the
operation of a channel deemed to have served enough. To interpret "has served enough",
the simplest option is to say that a channel has served enough if it has produced enough, that is, if the
number of its results has reached a threshold. One then applies the extinction clause EC0
(read EC zero):
(EC0)
switch off a channel whose number of results is beyond a threshold
This approach is justified: when a channel has enough results, the presumption is that,
from it onwards, the abductive analysis process may be pursued without accident towards
the conventional forward direction. The few result exemplars obtained at the channel are expected to
open enough abduction opportunities for the next assembly level, and their number being beyond
a threshold is assumed to cover the risk that one of them be unproductive. This is most
often the case: in a balanced plexus, extinction on a result-threshold condition adequately controls
the computational demand without hindering the yielding of final results.
Extinction has to bear on the channel and on the part of the heuristic structure which
depends on it (rearwards). On all these elements, the activities of recruitment, raising,
settlement, and result merging are suspended.
This method was tried and showed an improvement in most cases, with one defect
however. Such a (commissioner) channel may have produced, for example, four results,
and become extinct if four is the threshold, without any of its client channels having
been able to do anything with these four results (this may be a consequence of a local
property of the plexus which, in itself, is not necessarily a defect), whereas a fifth result
of the commissioner would have allowed the client to settle and therefore the analysis to
progress. Extinction was too short-sighted.
379 ANZ does not have this defect: it has a drying-up settlement criterion. This must be related to the
already mentioned fact that ANZ, working with several arguments, is copositionally constrained, which
single-argument CATZ is not. This is one more reason to dislike agent CATZ and to place it in the first line for
future revision of the model.
Before switching a channel off, then, one should take account of its clients and of the
productivity, within those clients, of the commissioner's results. The extinction clause
would then rather be:
(EC1)
switch off a channel which has produced beyond a threshold and at least one
of whose clients was able to take advantage of its productions.
Instead of counting all results, count only those which settle at the next level (in blue in the
figures of Chap. 4). This modification amounts to making control decisions with one level
of anticipation. Computationally it costs a little more, but the results thus obtained are
better: the process crosses the barren areas of the plexus more easily without going into
saturation in the fertile ones.
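A minimal sketch of the two extinction clauses in Python; the attribute names (results, settled_at_client) are assumptions made for the example.

def ec0_extinguish(channel, threshold):
    # EC0: switch off a channel whose number of results is beyond a threshold.
    return len(channel.results) >= threshold

def ec1_extinguish(channel, threshold):
    # EC1: one level of anticipation; count only the results which a client
    # channel was able to take advantage of (those which settle at the next
    # assembly level).
    productive = sum(1 for r in channel.results if r.settled_at_client)
    return productive >= threshold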
Concerning anticipation, why one level only (EC1) and not two (EC2) or n
(ECn)?
The question coming into discussion here is that of garden paths, that is, the cases in
which ambiguity (syntactic ambiguity so far) leads the analysis onto a track which is
contradicted after two or more subsequent analysis levels. The dilemma is as follows.
Without extinction, all tracks are pursued concurrently380; then, however remote the
decisive data stands, the appropriate track is still available, the alternative ones are
contradicted, and the garden path is passed; but, in order to get there, the computational
resource saturates, so the process is often not even given a chance to reach that point.
With an n-level extinction, on the contrary, n-level garden paths are passed but those
which resolve over more than n levels fail: when the decisive data comes under
consideration, the lack of intermediate data is then felt, data which would have formed the basis of a
belated abduction, a weak one maybe, but one which would make it possible to carry on
the analysis.
The question does not in principle have a simple answer. Syntactic ambiguities have
varied lengths. For some of them, a substantial effort of conscious deduction is
necessary and many speakers fail at it. There is no categorical limit to the phenomenon.
A possible idea would be the general awakening of all extinct channels in the heuristic
structure. It might not be a good idea because it is very expensive. Moreover, it
implicitly assumes that the garden path recovery process is homogeneous with the
unconscious and systematic first analysis process. Now there are reasons to think that
this homogeneity does not hold: it seems on the contrary that, in certain cases, garden
path recovery is a conscious and selective process. If so, i) it becomes
legitimate to control the activity of B2-B3 by systematically extinguishing the channels
which have already produced beyond a threshold, for example with an EC1 or EC2 extinction
clause, but ii) for difficult garden paths, dispositions of another nature would have to be
taken.
380 On this occasion, a word must be said about proliferation factors, and those in this model must be
compared with those arising in category-based syntactic analysers. The latter are exposed to artificial
ambiguities due to the homonymies incurred by lexical categories. This inconvenience does not occur
in the Analogical Speaker. But the latter has an endemic proliferation, the one described in connection with agent
CATZ, which of course has no analog in categorial theories.
This closes the general discussion of the model's dynamics. The following appendixes
now present separately and in detail the specification of each agent type.
15. Appendix: Simple similarity suggestion
(agent CATZ)
In section 3.7.7. Similarity suggestion (p. 95) we saw, within agent-based solving, the
need for a function called 'similarity suggestion'. Similarity suggestion is the substitute
for lexical categories and allows us to view similarity in a dynamic and exemplarist
mode. It is one of the devices which help account for linguistic productivity while
eschewing categorical rigidities.
Following the general idea that linguistic productions regularize onto one another, when
uttering or receiving a new utterance, account is taken of similar facts already met.
Then, given a linguistic unit, we need to be able to retrieve from the plexus the
precedents which are similar to it in one way or another.
In the same section, we established that similarity suggestion may be simple or
copositioned. Simple similarity suggestion is implemented by agent CATZ and is the
subject of this appendix.
From an argument term, this agent produces those of the other terms which are most
similar to it in different respects. It was mostly found useful to do so according to
distribution. Another viewpoint, according to constituency, will also be presented but it
is little used in the model.
15.1. Distributional similarity
In two C-type records connected by a paradigmatic link, homolog terms which are in
constituent positions are distributionally similar by definition. In the following records:
C    categories        + are rejected    categories are rejected
C    lexical classes   + are refused     lexical classes are refused
terms 'categories' and 'lexical classes' are distributionally similar, and so are terms 'are rejected'
and 'are refused'. This definition obviously derives from structural linguistics
but with an adaptation: the requirement here is not that left and right distributions be the
same, but that homology hold in two C-type records which are set in paradigm.
Reminder: two C-type records may be set in paradigm when they are perceived as
syntactically similar, that is, as constructionally similar. The perception of similarity is
rooted in the plexus descriptor's intuition. Doing so protects against the kind of accident
whose prototypes are the well-known examples: John is easy to please / John is eager
to please (Chomsky 1960, p. 532) or J'ai promis à Pierre de venir / J'ai permis à Pierre
de venir (Milner 1989).
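As an illustration of the definition, here is a minimal Python sketch of the retrieval of distributionally similar terms from paradigm-linked C-type records; the method names on the plexus objects (paradigm_neighbours, homolog_of, term_at) are assumptions made for the example.

def distributional_similars(record, site):
    # Terms occupying, in the records set in paradigm with this record, the
    # site homologous to the given constituent site.
    similars = []
    for neighbour in record.paradigm_neighbours():
        homolog_site = neighbour.homolog_of(site)
        similars.append(neighbour.term_at(homolog_site))
    return similars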
15.2. Constitutional similarity
Similarity may also be defined according to constitution. In two C-type records
connected by a paradigmatic link, homolog terms which are in assembly positions are
constitutionally similar by definition. In the example above, term categories are rejected
and term lexical classes are refused are constitutionally similar.
In theoretical terms, constitutional similarity may be connected with the remark made by
Chomsky381 that a linguistic form does not count just for itself but that its 'derivational
history' also counts. However, it can also be shown that adding the derivational
history is not in itself sufficient if we limit ourselves to a narrowly syntactic, that is,
purely formal viewpoint. This demonstration will not be made here.
Several tests were made in which constitutional similarity was added to distributional
similarity (one of them is reported below in section 16.8. Performance with the type of
similarity, p. 369). They never showed that adding constitutional similarity improved the results:
the dynamics was only penalized by additional agents, and the results were neither better nor
obtained faster. This finding has not been further elaborated.
So constitutional similarity is a possibility for similarity suggestion which remains
available in principle (agent CATZ offers it), but which, for the moment, stays
without use in the model.
15.3. Similarity on request
Initially, distributional similarity is envisaged as in structural linguistics but its scope
and development are different. Harris for example, even if he grants:
If we seek to form classes of morphemes such that all the morphemes in a particular
class will have identical distributions, we will frequently achieve little success 382.
he maintains however:
We seek to reduce the number of elements in preparation for the compact statement of
the composition of utterances … Considerable economy would be achieved if we could
replace [identical or almost identical statements of distribution] by a single statement
applying to the whole set of distributionally similar morphemes383.
381 Chomsky 1957/1969, Structures syntaxiques, p. 42.
382 Harris 1951, p. 244.
383 Ibid., p. 243.
In the Analogical Speaker, stress is placed on distribution, but the target is not
statements which would apply with economy to sets of distributionally similar elements.
Instructed as we are of the deficiencies of class-, category- and abstraction-based
approaches, having adopted an exemplarist option and the notion of proximality, and
having traded the notion of a priori grammaticality against the dynamics, it now becomes
possible to view the suggestion of similarities as operating on request. This option
contrasts with the notion of lexical category in two ways.
Firstly, the suggestion of similarities is triggered for a defined term, its argument, which
is the subject of an occurrential need at a defined point in a defined computation. It is
exactly a term, that is, a precise exemplar. The point is to suggest terms similar to
that term, not to build classes with any degree of generality or permanence.
Secondly, the process is expected to produce similar terms in successive phases,
inasmuch as its operation is allowed to proceed. So there may be few or many,
depending on the argument term, on the plexus, and on the computation phase. It is
specified that the terms most similar to the argument are produced first. Those coming
later are still similar, but perhaps a little less so. If the process is allowed to carry on to
excess, it may produce terms with weaker similarity, thereby suggesting more adventurous
abductions. This is one of the threads whereby the escalation principle
(Chap. 3) is implemented.
15.4. Agent CATZ
Agent CATZ (this name is arbitrary) accepts a term as its argument and produces a list
of terms which are most similar to it, each with a strength tag. There may be none, one,
or several terms in the list; additional terms may be added to the list as the number of
computation phases increases, and their strengths may evolve.
As an option, CATZ produces distributionally similar terms, constitutionally similar
terms, or both. A client, depending on its needs, may in principle recruit a CATZ agent
with any of these three possibilities, although only the first has been used so far (cf.
above). The two options, distributional similarity and constitutional similarity, are no doubt
not the last word on this variety of viewpoints.
As any other agent, agent CATZ produces its results in successive phases. In this, it
simply complies with the general constraints bearing on any agent in ABS.
Successively, we will see the technical architecture of the agent, then two examples of
its operation.
15.5. Technical architecture of agent CATZ
Agent CATZ accepts a single argument (a term) and produces in successive
phases the terms which best categorize with the argument. More precisely, the argument
of CATZ is not exactly a term but rather a term occurrence, that is, a site of
a record which must be occupied by a term. This option allows a better definition of
the running conditions (the recursivity of the successive recruitments is more
easily expressed) and it simplifies the design, but it requires a slight complication when
initializing a CATZ process.
As any other agent in ABS, a CATZ agent always has a delivery point (which is a
channel) and one only.
[Figure 42 (diagram): agent CATZ and its categorisands. The agent's duty is a term X occupying site S of a C-type record R; its delivery point is a channel, where its results (new categorisands of the term occurring at (R, S)) are merged. At each step the agent raises an occupier finding for (R, S), operates a change of paradigm (more occurrences of T), and takes one step in the same paradigm (homolog sites), each step recruiting further CATZ agents through the RC (recruitment) and DL (delivery) relations.]
Figure 42 Diagram of the CATZ agent
A CATZ agent receiving its argument – which is a C-type record R at a site S, the latter
being occupied by a term T – does the three things indicated on the diagram:
1. it raises a finding the content of which is T. The finding will be merged into a
result at the delivery point by phase management which is a general process of
ABS, and was described above (merging does not belong to the strict functional
perimeter of agent CATZ).
2. it operates resetting. To that end, with term T as argument, it invokes the unary
index, which delivers all the other occurrences of T in the plexus. Here again, any
record thus reached gives birth to a new CATZ agent, commissioner of the
former agent.
3. it carries on the search in the current paradigm, the one which contains record R. It
thus reaches records R' (immediate neighbours of R) at that one of their
sites which is homologous to S. Each new record thus reached causes the
recruitment of another CATZ agent.
An ABS computation phase involves one recruitment step only. After several phases, a
CATZ agent has recruited a structure of commissioners which is a tree, of which it is the
root. The recruited commissioners are assigned a delivery point which is that of their
client agent.
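As an illustration, one CATZ recruitment step can be sketched in Python as follows; the method names on the agent and plexus objects (raise_finding, other_occurrences, paradigm_neighbours, homolog_of, recruit_catz) are assumptions made for the example, not the actual implementation.

def catz_step(agent, phase):
    # One computation phase of a CATZ agent whose argument is the occurrence
    # of a term at site S of record R.
    R, S = agent.record, agent.site
    T = R.term_at(S)

    # 1. raise a finding whose content is T; its merging into a result at the
    #    delivery point is done later by phase management.
    agent.raise_finding(T, strength=agent.strength)

    # 2. resetting: every other occurrence of T in the plexus (unary index)
    #    gives birth to a commissioner CATZ agent.
    for R2, S2 in phase.plexus.other_occurrences(T, excluding=(R, S)):
        agent.recruit_catz(R2, S2)

    # 3. one step in the current paradigm: the immediate neighbours of R, taken
    #    at the site homologous to S, each recruit another CATZ agent.
    for R3 in R.paradigm_neighbours():
        agent.recruit_catz(R3, R3.homolog_of(S))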
As things stand at this point, several different CATZ agents might be recruited bearing
on the same plexus point. Such redundancy does not happen, because the general
redundancy control mechanism prevents it (see the appendix above which describes ABS
in general). In the particular case of CATZ, this means that there can be
no two CATZ agents for the same record, the same site and the same delivery point.
Thus, CATZ avoids walking in circles through the same paradigm, and it also avoids
re-using an already used paradigm under the same position conditions.
CATZ contains a mode parameter which allows the recruiter to order the production of:
a) distributionally similar terms only (terms occurring as constituents),
b) constitutionally similar terms only (terms occurring as assemblies),
c) both.
The effects of either option were exposed above.
15.6. Examples of distributional similarity
With a French plexus, the model is requested to produce the terms distributionally most
similar to term 'le' which, in French, is the definite, masculine article. The table below
shows the results of the first three computation phases. For each term, line "ph 1"
displays the strength tags associated with the results at phase 1, likewise for lines "ph 2"
and "ph 3". Some strengths increase with the phase number: the agent finds new reasons
for similarity.
similar term    English equivalent    ph 1    ph 2    ph 3
le              the msc.              1.00    1.00    1.00
la              the fem.               .91     .94     .97
une             a fem.                 .83     .86     .93
un              a msc.                 .83     .83     .96
ma              my fem.                .73     .73     .76
des             ind. plur.             .73     .82     .86
les             the plur.              .73     .82     .84
l'              l'                     .73     .73     .76
ce              this                    -      .59     .71
son             his                     -       -      .48
mon             my msc.                 -       -      .48
cet             this                    -       -      .48
chaque          each                    -       -      .48
certain         certain                 -       -      .48
Table 27 Terms distributionally similar to Fr. le
Articles are produced first and units with different traditional categories (for example:
ma) follow: the model of this speaker, in its own way, recognizes the category of articles
and it also recognizes the category of determiners. This illustrates its categorial underdetermination.
Later, the term certain is produced. In French, it is both a determiner and an adjective. If
the process were allowed to continue, many more adjectives would be found, and even
nouns after them. This property is general: in this model, processes produce highly
expected results in the first phases and stranger ones in the ensuing phases. The possibility
of strange results is a corollary of the fact that the model has no reified category. This is
a virtue, because it is necessary for flexible operation. However, excessive strangeness
would be meaningless. Strange results are produced in a decontextualized task like this
one, when the number of phases is deliberately forced; this condition is artificial and
experimental. In a more contextually determined task, this should not happen: non-strange
results occur first, and this tends to extinguish the processes which would produce
excessively strange results (cf. section 14.14. Activity control, p. 349).
The following table shows the same thing for argument term avec (En. with). More
acute reinforcement effects can be observed in it.
similar term    English equivalent    ph 1    ph 2    ph 3
avec            with                  1.00    1.00    1.00
dans            in                      -      .81     .93
à               at                      -      .81    1.00
en              -                       -      .81     .99
sans            without                 -      .81     .97
pour            for                     -      .81    1.00
sur             on                      -       -      .73
hors de         out of                  -       -      .73
de              of                      -       -      .73
Table 28 Terms distributionally similar to Fr. avec
These results are not results about French in general; they are produced with a particular
plexus. They are typical of the model's response facing this plexus. They cannot be used
to draw the slightest conclusion on French prepositions or French determiners but solely
to give a concrete indication on the similarity suggestion process.
15.7. Deconstructing categoriality and prototypicity
The similarity suggestion approach deconstructs lexical category is in several respects: i)
it becomes occurrential and is triggered on request, ii) it is guided by the proximality of
inscriptions and iii) it depends on time via the number of phases reached by the
computation. Finally, it is modulated by the congruence between the terms of the task
and the plexus content.
In fact, lexical categories or the "categorial labels" of syntagms become entirely obsolete
since it is no longer necessary to pre-establish them: a 'categorial computation' replaces
them entirely. This may be backtracked to a pretheoretical intuition: the family
resemblance of Wittgenstein.
Such a vision also replaces prototypicity. As there are no categories, the question of a
category's prototype also falls, which is happy given the difficulties that it creates. The
'categorial computation' operates exactly where the computation stands in the plexus, to
cover an exemplarist and occurrential need. What acts as a centre then is the term
argument of the computation and nothing else is necessary.
15.8. Adequacy (or not) of CATZ for similarity suggestion
So far, similarity suggestion is distributional384 only. Such a vision may well be a partial
and provisional one; it is likely to be complemented, in particular in the direction of
meaning, and it would then involve private terms. CATZ is used by several other agents
with a morphosyntactic orientation (please refer to the summary of agents below). It just
so happens that its clients are the ones yielding the most questionable results. For example,
B2-B3 is short on agreement.
In the vision of heuristic processes which consists in separating the suggestion of
similarities from their validation by settlements, the heuristic is all the more efficient
as it is monotonic, i.e. as settlement can operate closer to the suggestion, because the
heuristic structure may then be pruned and focused before it proliferates much; in this
line of thought, strict compositionality of meaning is strict monotonicity, and we know
what it turns out to be in languages: it is only partial. CATZ would thus be sub-optimal for
similarity suggestion because it mixes up too many grounds of similarity: even when constrained
to distributional similarity only, for example, for a similar term to be produced,
it suffices that it be a homolog of the argument in a single paradigm; and there may be
many such paradigms, which constitute too many different constructions.
Seen another way, what arises here is also the inadequacy of taking things one at a
time (unfortunately, CATZ takes things one at a time: it is single-argument) and the
superiority of two- or three-argument processes, which can exert and propagate
positional viewpoints385, positional constraints. Such n-argument processes are not easy
to design and implement386, but we must strive for them, because this is how we can hope
to realize, in an operable construction, the promise of analogy, namely that terms
have value only by their différences éternellement négatives387 and that pairs of
terms have value only by similarities of differences.
384 We have seen that a constitutional similarity is possible, but it is currently without application.
385 Recall the simple idea that mutual positioning, i.e. copositioning, can only be defined between at least two
terms.
386 Agent ANZ is however, as we have seen, an example of a three-argument process which observes
copositionings and propagates them to its commissioners.
387 Saussure.
16. Appendix: Analysis (agents B2 and B3)
16.1. Process B2-B3, specification and overall design
Agent B2 alone will be discussed in detail: agent B3 derives from B2. For an overall
functional presentation and an introduction to the mechanisms, please refer to the
relevant sections of Chap. 4.
The figure below is an excerpt of the heuristic structure which analyses Fr. un très
grand jour pour elle (En. a very great day for her). It displays a few channels, a few
agents and the relations between them: recruitment relation (RC), delivery relation (DL),
and agent-channel relation (AC).
[Figure 43 (diagram): excerpt of the heuristic structure for Fr. un très grand jour pour elle, showing B2 agents with spans such as 'un + très grand jour', 'très grand + jour', 'très + grand jour', 'très grand jour + pour' and 'très grand jour + pour elle', B2 channels with spans 'très grand jour' and 'pour elle', CATZ feeder lines ('dure semaine', 'petit plaisir'), and the RC, DL and AC relations between them. Annotations: several agents may be clients of the same channel, each seeing the channel's span as a subspan of its own; a B2 agent has one client channel only, because the span must be the same and there is only one channel per span; several commissioner agents of a same channel segment its span differently; the recruiter of a feeder is a finding produced by a settlement, which ipso facto recruits a feeder.]
Figure 43 Connectivity of agent B2
The (DL) relation has a critical importance: it supports the consolidation of results and
the update of their strengths.
The (RC) relation is less important: it is used for explanation purposes only.
16.2. Heuristic structure for agents B2 and B3
16.2.1. B Channel
The term 'B channel' is used although channels are not explicitly typed; agents are
explicitly typed but channels are not.
Is a B channel a channel with one or several B agents as its commissioners? This
definition is not sufficient because installation creates installation channels which are B
channels but do not have commissioners and will never have any.
Is a B channel a channel with one or more B agents as its clients? This definition is also
insufficient because an installation channel is, upon its creation, without client and may
never have any.
Actually, in a B process, there are B channels only. No case has arisen which
requires mixing B channels with channels of another type.
16.2.2. Field data and span of a B channel
The pair <L-R> (L for Left, R for Right) defines the span of the channel. The span is the
fraction of the form being analysed (inputStr) on which the channel bears. It includes its
boundaries.
16.2.3. Connectivity and existence of B channels
Rearwards, a B channel has zero, one or several B agents. A commissioner agent of a B
channel segments the channel's span into two parts.
Rearwards, a B channel may or may not have a feeder line of CATZ agents. It has a feeder line if
either or both of the following two conditions hold: 1. it is an installation channel; 2. it is not
an installation channel (it is then an assembly channel) and one of its agents has settled
and has created feeders for it.
Forward, a channel has zero, one or several client B agents.
16.2.4. Field data of a B agent
A B agent uniquely segments a defined span of inputStr.
Conventions: LL is the leftmost boundary of the leftmost term of the segmentation. LR
is the rightmost boundary of the leftmost term of the segmentation. Likewise RL, RR.
LL-LR + RL-RR is the segmentation (RL=LR+1).
LL-RR is the span of the B agent.
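These conventions can be summarized in a small sketch (the class names are assumptions made for the example):

from dataclasses import dataclass

@dataclass(frozen=True)
class BChannelSpan:
    L: int   # leftmost boundary of the span in inputStr
    R: int   # rightmost boundary of the span (boundaries included)

@dataclass(frozen=True)
class B2Segmentation:
    LL: int  # leftmost boundary of the leftmost term
    LR: int  # rightmost boundary of the leftmost term
    RL: int  # leftmost boundary of the rightmost term (RL = LR + 1)
    RR: int  # rightmost boundary of the rightmost term

    def span(self) -> BChannelSpan:
        # The span of the B agent is LL-RR.
        return BChannelSpan(self.LL, self.RR)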
16.2.5. B agent as commissioner
A B agent (LL-LR + RL-RR) is a commissioner for one and only one B channel: the one
which has the span LL-RR. A problem could arise: there might be several such channels if installation
created as many channels with the same span as there are terms matching this span in the
plexus. This problem disappears once homonymous terms are expelled from the
plexus.
16.2.6. Possible ambiguity of a B agent
The segmentation which characterizes a B agent specifies two constituent spans, but it
does not otherwise specify their occupying terms. It may be the case that diverse
findings of this agent give different interpretations of this segmentation, in the
sense that, facing the characteristic segments of the agent, they place licensing terms
with "different derivational histories". In other words, puns are possible in the model.
This is the case for katta in the figure below: the Japanese form katta, which in the
utterance under analysis sinakatta (have not done, didn't do, not having done) can only
be the katta morpheme (negative, non-polite past), is interpreted by B2 agent 142 as
katta (bought, I bought, having bought). This is a pun.
One such ambiguity will resolve, or not, at the next assembly level (N+1). It may be the
case that the assembly possibilities at level N+1 eventually disqualify some of the
interpretations made at level N. In the example in the next figure, the pun katta is
disqualified at level N+1 because this interpretation cannot assemble with sina on its
left.
It may also be the case that several interpretations at level N remain qualified at level
N+1: the ambiguity then bears on a span larger than that of level N, and level N+2 will
possibly disqualify some of them.
16.2.7. Origins of results at a B channel
At a channel, the first result (it is the first one in time, it is the strongest in the beginning
but may not remain so afterwards) is an installation result or a settlement result. The
creation of the channel is motivated by that installation or settlement result. Then, terms
which are distributionally similar to this first result will join at this channel. They are
produced by the CATZ agents of the feeder line which is attached to this channel.
So a channel (e.g. channel 4) may have:
1. installation results. They are either direct results, installed directly upon installation
(e.g. a result [sina], which is not on the figure, would be a direct installation result),
or indirect results (e.g. [oisi-], or [katta] as the morpheme of the accomplished).
Installation results (direct and indirect) come through the feeder line388 (CATZ agents)
linked rearwards to the channel by (RC) links, or a chain of such links. They are
distributionally similar to the channel's span (i.e. to the installation term) in its
entirety and therefore do not presuppose any segmentation of it. The installation
results (direct and indirect) do not assume a segmentation of the channel's span
and therefore are not associated with a B2 agent.
388 The phrase "feeder line" is adopted by analogy with industry. In mass production organization, for
example in the automotive industry, a main assembly line is fed by secondary lines, the 'feeder lines',
which bring the sub-assemblies to it. Here, the line of B agents and B channels is the main line, and the
lines of CATZ agents, providing distributional similars, are the feeder lines.
[Figure 44 (diagram): installation products and merging products on the sinakatta example. Shown are channel 4 (katta), channel 11 (sina) and channel 15 (sinakatta, 'not have done'), B2 agents 142 (ka)(tta), 146 (sina)(katta) and 155 (sin)(a), CATZ feeder lines for [sina-] (a base of 'to do'), [oisi-] (being good) and [katta] (mark of the accomplished / bought), setup findings (S) and recruiting findings (R), the merging of CATZ findings into results such as [oisikatta] (having been good), ambiguous forms, and puns which will remain sterile.]
Figure 44 Installation products and merging products on the sinakatta example
2. direct settlement results (ex. [katta] bought) produced by the merging of findings
(ex. finding 138), the latter being produced by settlement. Settlement results
arrive by B2 agents (ex. agent B2 142). It is equivalent to say that they
presuppose a segmentation of the channel's span.
3. abducted results (or indirect, or distributionally similar to the two latter types)
which are produced in the feeder line, by commissioners of the line's head (the
origins of the head being direct: either direct installation results or settlement
results).
More precisely, these characterizations apply rigorously to the findings merged at this channel.
Quite often a result at a channel is obtained by merging a single finding; then, by
metonymy, it may be said to be direct or indirect, installation or settlement, depending
on the sole finding which produces it. A result at a channel may also be the merging of
several findings with different characterizations; then, it cannot be said to be direct or
indirect, installation or settlement.
16.3. Parsing of the argument form
A particular organ ensures parsing: it accepts the form to analyze and tries to find in it
plexus terms.
In the form to analyze, seen as a character string, this function seeks all the possible
substrings matching a plexus term, whatever their length, from one character only up to
a limit set at 20 characters.
This limit, 20 characters, reflects a double condition which may be ironically
paraphrased in this way: in a plexus, we may find terms which are comparatively long,
but not too long. There is no fixed criterion: a long term in a plexus is always possible
because of the minimality suspension principle, but it would be rare, and its matching
with an input string even more so (cf. section 7.2.6. Terms should be simple and
commonplace, p. 200). Setting the limit too low would then miss the long term at
parsing time and the analysis would not take advantage of it; conversely, setting the
limit too high burdens the process quadratically for the least marginal utility. This
fiddling with a technical flavour is the symptom of a theoretical blindness; it is true
that this particular point has not been researched. An extension of the model, still to be
made, in the direction of phonology should substantially improve this area, but investing
here within the current perimeter (morphology and syntax) was not very promising.
Any substring found to match a plexus term produces a 'term notification'. A term
notification is made up of this term and of the span, in the form under analysis, where
the match was found. The span itself consists of the rank of the first character and the
rank of the last character of the occurrence of this term in the form.
A term notification thus produced is delivered to the installation process.
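A minimal Python sketch of this substring matching; the representation of the plexus as a set of term strings is an assumption made for the example.

def parse_form(input_str, plexus_terms, max_len=20):
    # Notify every substring of input_str (1 to max_len characters) which
    # matches a plexus term; spans are 1-based and include their boundaries.
    notifications = []
    n = len(input_str)
    for start in range(n):
        for length in range(1, min(max_len, n - start) + 1):
            candidate = input_str[start:start + length]
            if candidate in plexus_terms:
                notifications.append((candidate, (start + 1, start + length)))
    return notifications

# With 'a' and 'la' known as terms, parse_form("avala", {"a", "la"}) notifies
# term a at spans (1,1), (3,3), (5,5) and term la at span (4,5), as in the
# avala example discussed below.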
Actually, the parser operates under control of the installation process. When invoking
the parser, the latter may either order the parsing of the entire form or it may order the
parsing of increments (a given number of characters). This incremental mode allows the
tuning of reception time versus analysis time. The assumption is that speakers mostly
analyze faster than they receive, and that this has effects. Moreover, delays in the
reception process, like prosodic breaks, have effects on the analysis.
In a sense, the parser overplays its role: as the space between words is not treated as a
particular sign389, for example in form avala (En. swallowed) it makes three
notifications of term a, at spans (1,1), (3,3) and (5,5), if the French plexus against which
it works contains term a, which is to be expected. For the same reason, it also notifies term
la at span (4,5). These notifications give birth to installation structures (installation
findings, channels, and feeders) which dry up quickly because the corresponding
segments can be assembled neither at their right nor at their left.
389 Reminder: this is a consequence of the minimality suspension principle, and the model does not define
a notion of word.
16.4. Installation process
The installation process receives term notifications from the parser.
Each term notification received creates an installation finding.
Then the installation process installs feeder line heads, which are CATZ agents: one
CATZ agent per occurrence of this term in the plexus. These agents are marked as
recruited by the finding just created; this marking serves only to explain the history of the
heuristic structure later and has no direct function in the ensuing analysis process. See
for example the findings marked SR in Figure 44 Installation products and merging
products on the sinakatta example above.
These CATZ agents deliver at the channel with the span of the notified term. Either this
channel pre-exists, or it is created on the occasion.
Installation CATZ agents (feeder line heads) will recruit more CATZ agents, thus
progressively constituting a feeder line; this structure will deliver results at the channel.
The agents of the feeder line raise, in successive phases, findings which will merge at the
channel in question. This channel thus gradually receives terms distributionally similar
to the installation term. In the simplest case, it receives these results only, but in the
general case it may, concurrently and complementarily, also receive installation results
and settlement results; see the previously referenced figure.
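A sketch of the installation step in Python; the object and method names (install_finding, find_or_create_channel, occurrences_of, recruit_catz) are assumptions made for the example.

def install(term_notification, plexus, heuristic_structure):
    # Install one term notification: create the installation finding, ensure
    # the channel for the notified span, and install the CATZ feeder-line heads.
    term, span = term_notification

    # The installation finding records that this term was recognized at this span.
    finding = heuristic_structure.install_finding(term, span)

    # The channel with the span of the notified term pre-exists or is created.
    channel = heuristic_structure.find_or_create_channel(span)

    # One CATZ feeder-line head per occurrence of the term in the plexus; each
    # delivers at the channel and is marked as recruited by the finding (this
    # marking serves explanation purposes only).
    for record, site in plexus.occurrences_of(term):
        feeder = heuristic_structure.recruit_catz(record, site, delivery=channel)
        feeder.recruited_by = finding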
This closes the discussion of the installation process. The rest of the analysis process is
understood as the operation of agents B2 and B3, that is, the mechanism of the
edification of these agents and of the associated intervening channels.
16.5. Agent B2, edification procedure
The edification mechanism consists of the recursive phasing of 'edification cycles'. An
edification cycle is performed in one computation phase; its description consists of six
steps:
(1) The triggering event is a result (hereafter: argument result) arising at a channel
(hereafter: argument channel). In initial conditions, the argument result is an
installation result, in the subsequent course of the process it is a merging result
(this clause ensures the recursivity of the whole). The rest of the edification cycle
does not depend on the origin (installation or merging) of the argument result.
(2) The process then considers any channel (L channel) left-adjacent to argument
channel. It also considers any right-adjacent channel (R channel), but the rest of this
description will be limited to left-adjacency.
(3) For all left-adjacent channels, an agent is created, which is characteristic of this pair
of channels. It is vested with the duty of watching the possible settlements between the
channels' products. The watching starts immediately.
(4) A settlement happens when (i) a result already present at L channel and (ii) the
argument result are both constituents in a binary C-type record. This record is the
'licensing record' and the term in assembly position in it is called the 'licensing term'.
Settlement then creates a settlement finding whose content is the licensing term.
(5) The merging of this finding gives a result bearing on the same licensing term.
(6) If it does not already exist, a channel is created for this result. It is allocated a span
which is the catenation of the spans of (i) argument channel, and (ii) L channel.
366
Delivering at this channel are recruited CATZ agents the argument of which is the
licensing term. These agents are heads of feeder lines. They will report to the channel
distributionally similar terms of the settlement term, hoping that these participate in
settlements at next level.
[Figure 45 (diagram): the edification cycle of agent B2, numbered as in the text. (1) The triggering finding (argumentResult) at the argument channel (argumentCh); (2) consideration of adjacent channels on the left (leftAdjCh) and on the right (rightAdjCh); (3) creation, if non-existent, of agents B2-L and B2-R (assemblyAgent); (4) settling against a result at the adjacent channel, in a record which has the attesting term; (5) a finding referring to the attesting term, merged into a result at a channel; (6) creation of an assembly channel (assemblyChannel). The edification of the structure goes from left to right.]
Figure 45 Agent B2, edification mechanism
This completes an edification cycle: the product obtained at step (5) in this instance of
edification cycle acts at next computation phase as the trigger of a new instance of
edification cycle, but one assembly level higher.
16.6. Agent B2, edification procedure in pseudo-code
Exactly the same cycle is described below more formally.
function ABS_B2_BUILD (argumentResult)
In ABS, the BUILD procedure for agent B2.
Input arguments:
    argumentResult: a product number which is a result at a channel having (at least one) B2 client agent.
Program logic:
    Let argumentCh be the channel of argumentResult.
    For every existing channel left-adjacent to argumentCh (leftAdjCh):
        With their field data, make the putative field data of a B2 agent, which is the pair (span of
        leftAdjCh, span of argumentCh).
        If an agent with that field data does not exist, create it. In any case, call it 'assemblyAg'.
        Make assemblyAg a client (FW) of argumentCh (RW) and of leftAdjCh (RW).
            Analysis note: assemblyAg MUST be created because without it, it is difficult to
            watch the solvings. Because of that, there will exist some B2 agents which
            never solve and therefore never receive any finding.
            This is no big concern: the potential proliferation of the heuristic structure
            stops with them: such an agent will never have a channel, because channel
            creation is subordinated to effective solving (and is simultaneous with the transparent
            recruitment of CATZ agents).
        Try to match rightAdjTerm (the term in argumentResult) with the term (leftAdjTerm) of
        every result existing at the partner channel leftAdjCh.
        Successful matching consists of leftAdjTerm and rightAdjTerm being adjacently
        attested within a C record (the solving record). This constitutes a solving (a settlement).
        Upon solving, collect, in the solving record, the term in the assembly position (let
        'attestingTerm' be that term).
        Create a finding with duty = term 'attestingTerm' and owner = assemblyAg.
        This is the 'recruitingFinding'.
        Connect the recruitingFinding (FW) through an FR link to argumentResult and to the result
        in which leftAdjTerm was found.
            Analysis note: the mere construction of the heuristic structure might do
            without explicitly creating the recruitingFinding. However, the
            recruitingFinding is mandatory for the explanation paths, so it HAS to be created
            explicitly.
        Ensure the existence of / create assemblyCh (it will be the delivery point of 'feeder').
        Connect assemblyCh-RC-assemblyAg.
        Connect assemblyCh-DL-assemblyAg.
        Recruit/make a CATZ agent (the 'feeder').
        Connect recruitingFinding-RC-feeder.
        Connect assemblyCh-DL-feeder.
            Analysis note: the 'feeder', flagged for attention in the next phase, will in turn
            recruit more CATZ agents, thus forming a feeder line. The feeder line will
            produce findings which will be taken over by the ensuing phases of the
            computation, thus generating more opportunities for solving.
    This is all for left adjacency.
    Do the same thing for right adjacency.
This completes a cycle of edification.
16.7. Agent B3, edification procedure
Proceed as for agent B2, but take channels three at a time.
16.8. Performance with the type of similarity
Agent CATZ suggests similarities in the service of the B2-B3 process; doing so, it opens
up heuristic tracks which then either lead to settlements, or remain unproductive. Now
CATZ may produce similar terms in different modes. (cf. 15. Appendix: Simple
similarity suggestion (agent CATZ)). It is interesting to assess how the different modes
impact the behaviour of the B2-B3 process.
Measurements were made with a set of four utterances: Fr. elle est arrivée avec son
homme, Fr. très très grand homme, Fr. reprendre la route, and Jap. sinakatta.
With the distributionally similar terms alone, the production is the same as when the
constitutionally similar terms are added, the number of agents is smaller and the computation
time is shorter.
With both the distributionally similar terms and the constitutionally similar terms, about 10%
more agents are required, and successful completion of the task is never faster than with the
distributionally similar terms alone.
These results may depend on properties of the plexus used.
16.9. Productivity of agent B2
16.9.1. Why the question is important
A B-type channel has a commissioner agent (therefore of type B2) which attempts a
segmentation of its span. It is important to report whether this segmentation is useful,
that is, whether something could be done with it at the client channel.
During the exposition of a B2 channel, one of its commissioner agents will be exposed
only if it is productive.
16.9.2. Productivity of an agent with respect to its client channel
The productivity of a B2 agent is considered with respect to its client channel (which is
unique, cf. Figure 43 Connectivity of agent B2, p. 361). If the agent is considered in
itself, it is not possible to assess its productions; these productions are distributionally
similar to the span, but at this stage their relevance is not defined; they are just similar.
The agent becomes productive with respect to its client channel when one of its findings
is delivered at the channel and then merged into a result which itself settles.
Channels are full of results which do not settle. Only settlement results count,
because only they signal the agent's productivity.
It should be noted that the findings at stake are not the findings of the B2 agent (which
are not delivered but rather recruit a feeder head). The findings at stake are those of the
CATZ agents in the feeder lines, because only these findings are delivered at the client
channel of the B2 agent which recruited the CATZ feeders.
16.9.3. Method for assessing an agent's productivity
From the agent, via the (RC) relation, exhaust the transitive closure of the CATZ agents
in the feeder line. Each CATZ agent has a term (pick it up either in its duty or in its
unique finding, ad lib.).
Consider the result at the channel which has this term. If this result settles, then the
agent is productive at its channel.
It suffices that this be the case for only one of the CATZ feeders.
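In Python, the assessment could be sketched as follows; the traversal helper and the attribute names are assumptions made for the example.

def transitive_closure(agent, relation="RC", agent_type="CATZ"):
    # Walk the recruitment (RC) relation rearwards, yielding the commissioners
    # of the requested type (helper assumed for the sketch).
    stack, seen = [agent], set()
    while stack:
        a = stack.pop()
        for c in a.commissioners(relation):
            if c not in seen:
                seen.add(c)
                stack.append(c)
                if c.agent_type == agent_type:
                    yield c

def is_productive(b2_agent):
    # A B2 agent is productive at its (unique) client channel if at least one
    # CATZ feeder in the closure of its RC relation produced a term whose
    # result at that channel settles.
    channel = b2_agent.client_channel
    for catz in transitive_closure(b2_agent):
        term = catz.term          # taken from its duty or its unique finding
        result = channel.result_for(term)
        if result is not None and result.settles():
            return True
    return False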
16.10. Result of a B2-B3 analysis
The new interpretation in this model of 'analysis of a received utterance' was already
explained at the beginning of Chap. 4. It is now possible to provide a more technical
paraphrase building on what was presented in this appendix.
16.10.1. Licensing channel, licensing finding, licensing result
The fact that a channel has been created whose span entirely embraces the form
submitted to analysis (inputStr) reflects the fact that a settlement took place at an
agent (a commissioner of the channel). This means that the argument utterance is now
analysed.
It may be the case that the licensing result has not yet arrived at the channel at this phase:
this is normal; it will get there at the next phase, after the finding has been merged. However,
an exposition query issued at the channel must nevertheless display the licensing finding.
16.10.2. Result of a B2 analysis
The result of an analysis is not restricted to the sole final licensing result.
Firstly because the analysis is done before the licensing result is produced as the
merging of the licensing finding, as we just saw.
Secondly because there may be several licensing findings that merge into as many
different licensing results.
And finally (this is the most important) because a B2 analysis is made up not just of the
sole final result but of the network of the findings, agents and results which pile up and
lead to the final licensing and justify it level-wise. This is so because it is at each level of
this network that some meaning may be built and because the meaning of an utterance is
not a monadic object but the result of a levelled construction process.
Therefore, a purely formal B2 analysis is not in itself an autonomous achievement: it must be viewed as supporting a process which is subsequent to it (but interleaved with it): interpretation. This supposes the semantic side of the model, which is not yet developed. So today, a B2 analysis must be regarded cautiously:
a) it may seem good now but later prove ill-suited to supporting interpretation,
b) conversely, it may seem painstaking today but in the future turn out to be much facilitated by the introduction of the semantic dimension with the private terms.
17. Appendix: Binary branching,
ternary branching
17.1. The question and its history
A construction may be binary: it assembles two constituents. Can it also be more than
binary (assembling three constituents or more), or perhaps also, less than binary? To be
rigorous, n-arity in generativism bears on derivational rules and therefore on the phrase
marker390, whereas in the Analogical Speaker it bears on exemplarist C-type records
since there is no abstraction here. Yet, the parallel remains possible and it is interesting.
The n-arity of branching was discussed in the context of the X bar391 theory and the
conclusions are summarized by Chametzky392.
About whether branching may be less than binary, that is (p. 33), whether a node in the analysis trees that phrase markers are may have only one son, Chametzky concludes that no well-behaved phrase structure theory ought to have such a relation, and this for a) a conceptual reason: constituency is a part-whole relation, and claiming that a whole with one part is in the same relation to that part as a whole with two (or more) parts is to its parts is to make a non-obvious, quite plausibly spurious claim; and b) an analytical reason, namely an examination of the actual range of cases of nonbranching domination in the literature. Four cases occur in the literature: the first one concerns level zero, just below the phrase marker393, of which it cannot be said that it is a dominance relation; a second one is the relation of the type X''-X' or X'-X, for which it is suggested to ascribe a multiple label to the same node; the two remaining cases are exocentric labelling and the utilization of "functional" labels such as "subject" and "topic". Chametzky concludes finally: if nonbranching domination is conceptually unsound, then there ought to be no clear and compelling instances of it – and there are not. Since less-than-binary branching does not occur in the Analogical Speaker and, as we just saw, appears in the X-bar theory only for reasons that are side effects of theoretical options far removed from ours, we will not be concerned with it further.
390 The generativist culture calls 'phrase marker' the tree which analyses an utterance. Its nodes are either terminal nodes, in which case they match the ultimate constituents picked up in the lexicon, or non-terminal nodes, in which case they stand for assemblies of the latter and/or of themselves. Each node has a categorial label. The edges of the phrase marker are constituency relations between the linguistic entities represented by the nodes. The phrase marker is produced by applying derivational rules.
391 X-bar theory was proposed by Chomsky in 1970 (Remarks on Nominalization), then complemented by Jackendoff in 1977 and Gazdar in 1982. It governs the constitution of noun phrases, verb phrases, and adjectival phrases. It says nothing about sentence syntax. The different expansions of the head are denoted by none, one, two, or three superscript bars or, more conveniently, by primes, seconds and thirds (X, X', X", X"').
392 Chametzky 2000, p. 33-34.
393 Chametzky here uses a metonymy for "just below the root of the phrase marker".
What, now, is the position of X-bar theory on more than binary branching? The reasons depend on the authors and, following Chametzky's survey, they are the following:
1. Restricting branching to binarity amounts to an analytical restriction (or an acquisitional one); the restriction is therefore desirable and presupposed by the theory.
2. Quite often, the empirical reasons to think that branching may be more than binary are not known. Williams 1994 notes, however, that these facts do not require that the effects of branching on the locality requirement bearing on the relation "argument of" go beyond binarity; otherwise said, if a predicate has more than one argument, it cannot be the case that all the arguments are brothers of this predicate. Williams seems to prefer rejecting more than binary branching to weakening the locality condition, because he finds an autonomous justification for the latter.
3. Kayne (1994) demands that branching be no more than binary in order to satisfy another syntactic relation (his linear correspondence axiom), rather than stipulating binarity for its own sake.
4. Chomsky (1995/1997a, chap. 4) seems to think that branching must be limited to binarity by "virtual conceptual necessity".
5. Chametzky (1996) sees the restriction to binarity as an empirical generalization which moreover favours the analysis of adjuncts.
A generativist theory has to make a choice: before the application of transformations, the phrase markers result from the application of derivational rules, and a derivational rule has to be binary or ternary. Are the reasons for choosing good ones? The reader will answer according to his preferences, in the light of the reminder above.
Since the model of the Analogical Speaker is rule-less, the reasons for opting for binarity or ternarity in it need much less to depend on general principles: they may be associated with particular cases, that is, constructional similarities attached to only a few exemplars, possibly only two.
Turning now to the organic implementation in the model, an option about n-arity consists of deciding whether C-type records are limited to two constituents or whether they may have three or four. The assembling agents must have a compatible design: for binary C-type records a binary assembling agent is needed (B2 currently); for ternary ones, a ternary agent (B3); etc.
In addition, a principle of homogeneity must be observed: a B2 agent cannot, in a ternary record, make an excerpt limited to two constituents and attempt to use it in this way. This is because a) doing this is most often unproductive in its consequences, and b) when it is productive, it leads to bad productions. This principle was found useful after several trials and meanderings, and it is observed in the current implementation of the B2-B3 process.
17.2. Exemplarist reasons
What facts and needs lead one to be satisfied with binarity or, on the contrary, to want ternarity? For clarity, a case which would be an error needs to be discarded first: n-arity cannot be invoked for treating the morphology of Semitic languages; cf. section 8.1. Non-concatenative morphologies (p. 248), where reasons are provided.
First, particular reasons will be reviewed.
A ternary construction allows us, as in the case of Fr. ne … pas, to constrain the occurrences of non-contiguous morphemes that are (quasi-)systematically coupled. In the ne … pas construction, there is really no reason to impose ((ne parle) pas) [En.: don't talk] against (ne (parle pas)). The ternary formula ((ne) (parle) (pas)) seems more apt to impose the cooccurrence of ne and pas.
Concerning now the treatment of agreement with n-ary constructions, this seems to be an artefact, both efficient and partial. Still, it is an artefact: the scope of agreement phenomena is a whole expansion, and a better adapted structure is preferable. The intuition is that the solution to agreement lies elsewhere, in the direction of agent AN2 and perhaps a revision of the inscription structure which would provide "feature effects" more directly.
Conjunctive constructions are a very obvious example in which ternarity is useful:
- l'Etat + et + la société
- dix + - + sept, trente + - + deux
- est + - + ce, est + - + il
Here again, there isn't any reason to bracket right or left.
In rouge et noir vs. le rouge et le noir, allowing ternary assemblies is a very economical way of preventing zeugmas like rouge et le noir and le rouge et noir; this does not exclude also allowing, in a controlled manner, the latter form to be licensed by other records, but that is a distinct paradigm.
Considering now the case of N-N or NP-NP juxtaposition:
- malentendu + mère + fille (mother-daughter misunderstanding),
- ligne + Bordeaux + Genève (Bordeaux-Geneva line).
"mère fille" is productive only with rare N on its left (here malentendu). Likewise for "Bordeaux Genève". A ternary construction eases this sort of sub-categorization.
Let us now move to general reasons.
When facing a "requirement for ternarity", as in the examples above, if a limiting binarist option were taken, making two binary levels instead of a single ternary one would always be possible. Such a makeshift is an occasion of leakage (not demonstrated, but conjectured). It is also an occasion of poorer performance (not demonstrated, but conjectured).
The prototypical binarist argument in Chomsky394 is about the case S = NP + VP. For Chomsky, a two-level model:
S = SubjectNP + VP and
VP = V + ObjectNP
must be preferred to a one-level ternary rule:
S = SubjectNP + V + ObjectNP
where the SubjectNP is distinguished from the other NPs surrounding the V. These other NPs are associated with the V into a VP, while the SubjectNP is not. Why is that so? Because in the transformations the VP has to be treated as a unit: it is moved as a whole.
Firstly, this argument is not applicable to languages without a subject. Secondly, the model which I propose does not have transformations, and so there is no need to state that the VP is moved as a whole, because no VP is ever moved. Finally, the conservation of the identity of the VP in the analogies which motivated transformations, or its movement as a whole, is not self-evident; and the ObjectNP has very numerous behaviours of cohesion and movement that are similar to those of the SubjectNP: Je vois la mer. Vois-je la mer? La mer, je la vois. C'est la mer que je vois. etc. It is not striking that a structure must treat them differently. That the subject is (possibly) obligatory and the object (possibly) optional is no more of a criterion. So the prototypical argument for binarism does not seem very strong in general, and its strength seems even weaker within the options of this model.
In sum, there are arguments for ternarity and, so far, nothing which compels us to reject it. Maybe a cost argument is more decisive.
17.3. Cost reasons
In a B2 agent, settlement consists of considering the Cartesian product395 of the results appended at the two channels which this agent assembles. Such a Cartesian product is two-dimensional (it has two channels). For a B3 agent, it is three-dimensional; for a B4 agent, it would be four-dimensional. Is the computation cost thus moving from N² to N⁴? This must not be feared: in an assembly with more than two constituents, there is almost always a position occupied by a term with few distributionally similar terms396. Example: demande + à + voir would be an open-closed-open B3; another example: il + ne + ment + pas is an open-closed-open-quasi_closed B4 (pas, plus, jamais, presque pas, pas toujours).
Even in a case in which open-class terms accumulate without any 'empty word' between them (examples in En.: Tokyo Stock Exchange, or summer season holiday plan forecast figures397), the analysis process will proceed, rather than by B3 or B4 agents, through successions of levelled B2 agents, applying between two successive B2s the expansive homology abductive movement.
394 As late as in Chomsky 2000, p. 58.
395 It must be remembered that the Cartesian product is potential only: it is actually built only in part, as long as a settlement hasn't occurred.
396 Empty words, closed lexical classes, or grammemes in other theories.
397 Thanks to Robert Freeman.
We see finally that the cost argument does not favour low n-arities. We keep the liberty to choose n so as to favour a better grasp of dependencies; we are free to choose it separately for each case or, better said, for each paradigm.
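A small illustrative computation of this point (the candidate counts are invented for the example, not measured): the number of combinations a settlement may have to consider is roughly the product of the per-channel candidate counts, so a closed or quasi-closed position caps the product far below the all-open worst case.

from math import prod

# Hypothetical candidate counts per channel (invented for illustration).
open_open_open   = [50, 50, 50]   # an all-open B3: worst case
open_closed_open = [50, 3, 50]    # e.g. demande + à + voir

print(prod(open_open_open))       # 125000 potential combinations
print(prod(open_closed_open))     # 7500: the closed position caps the product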
17.4. Choice of n-arity
The proposed model makes binary assemblies (agent B2) and ternary ones (agent B3). Quaternary assemblies pose no difficulty in principle or in the implementation; simply, the need has not arisen, and quaternary assemblies are not currently implemented.
18. Appendix: Analogical task (agent ANZ)
18.1. Agent ANZ, specification and overall design
Given an analogy in the classical wording:
X is to Y as A is to B
or, conventionally:
X : Y :: A : B
Y, A and B being given, an analogical task is defined as:
find X, which is to Y as A is to B.
Depending on the case, the task may have no solution, have one, or several. Following
the general principles of the dynamics established in Chap. 3, the model rewords this as
follows: after a given number of computation phases, this task produces none, one, or
several results, each with a strength tag. At the next phase, more results may obtain, and the
strengths of the existing ones may change.
The model implements this task by means of an agent which is named ANZ.
To implement the task, an agent (then the client of the one which interests us) recruits an ANZ agent. The latter then recruits more ANZ agents, and so on. In the successive recruitments, the agents substitute for one another analogies abductively equivalent to that of the task to be solved. The abductive movements used are transitivity and transposition.
ANZ uses systemic analogy, that is, the analogical pairs of the plexus:
a) in A-type records, the pair of terms in the record,
b) in C-type records, the pair of the terms bearing A marks.
In short, ANZ uses the coindexed pairs (recall that coindexation bears exactly on the pairs defined by the two clauses above).
The operation of agent ANZ is described with the four following steps (cf. Chap. 5 for
examples and in particular Figure 19 The mechanism of agent ANZ ):
1. Priming. The terms of the task being given: Y, A and B, find in the plexus a
record where a pair of these terms occurs. For this, use the binary index. The pair
then becomes the current pair and it is located in the plexus in defined sites of a
defined record. This record belongs to a paradigm. The remaining term will be
the spare term. An ANZ agent is then recruited and assigned the duty consisting
of the current pair and the spare term.
2. Step in paradigm. A move is made from the current record to directly linked
records, which identifies a new current pair. A new ANZ agent is then recruited
with a duty made up of the new current pair and the spare term (the recruitment
is conditioned by the agent non-redundancy clause). After a sufficient number of
phases, the paradigm may in this way be exhausted.
3. Positioned resetting. The ANZ agent checks whether its duty may be transposed (see the discussion of analogy transposition above): a tentative new current pair is made by involving the spare term. The attempt succeeds if the new current pair is coindexed in the plexus. Then this ANZ agent recruits another one, assigning it this pair as its duty (again, the recruitment is conditioned by the agent non-redundancy clause).
4. Settlement. When, in its duty, the agent ANZ finds the spare term equal to a term
of the current pair, then the settlement condition is detected and the third term in
the duty is an X, that is, a result, as specified by the analogical task; a finding is
raised. The finding is later merged by ABS at the agent's delivery point, into a
result.
In short, the ANZ agent recruits a systematic tree of possibilities by exhausting its
current paradigm and by performing resetting when analogy transposition allows it to do
so398.
The settlement condition is the coincidence of two terms, one of which is the spare term.
398 Such a systematic and exhaustive search may be questioned: do we have reasons to think that the brain operates in this way? What is not doubtful is that numerous activations happen in parallel. The way ABS organizes the search must rather be seen as the serialization of a parallel process. Its detail may not be plausible while its overall effect may be. The general adequation and plausibility of the model do not require all its components to be adequate and plausible. (Daniel Kayser, pers. comm.)
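Under the simplifying assumption that the plexus is reduced to paradigms of coindexed pairs (records, sites, familiarity orientation and strength tags are all left out), the search can be sketched in Python as follows; the function name and data layout are illustrative, not those of the ABS implementation, and for brevity the transposition move is folded into priming:

def anz_solve(y, a, b, paradigms, max_phases=3):
    """Toy ANZ: return the set of X such that X : y :: a : b.

    paradigms -- list of lists of coindexed pairs (systemic analogies);
                 the pairs of one list are homologous (same paradigm).
    """
    results = set()
    # Priming: find a paradigm attesting a pair made of two of the task's
    # terms; the remaining term becomes the spare term. The transposed
    # settlement base (spare = a, pair built on y and b) is primed here too.
    duties = []                                 # (spare term, current pair, paradigm index)
    for i, paradigm in enumerate(paradigms):
        for pair in paradigm:
            if set(pair) == {a, b}:
                duties.append((y, pair, i))
            elif set(pair) == {y, b}:
                duties.append((a, pair, i))
    seen = set()
    for _ in range(max_phases):
        next_duties = []
        for spare, pair, i in duties:
            if (spare, pair, i) in seen:        # agent non-redundancy clause
                continue
            seen.add((spare, pair, i))
            # Settlement: the spare term coincides with one term of the
            # current pair; the remaining term is an X (a finding).
            if spare in pair:
                results.add(pair[0] if pair[1] == spare else pair[1])
            # Step in paradigm (abductive movement by transitivity): one new
            # duty per homologous pair of the same paradigm.
            for new_pair in paradigms[i]:
                if new_pair != pair:
                    next_duties.append((spare, new_pair, i))
        duties = next_duties
    return results

For instance, with paradigms = [[('le', 'la'), ('un', 'une'), ('ce', 'cette')]], anz_solve('une', 'le', 'la', paradigms) yields {'un'}, that is, un : une :: le : la.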
18.2. Rearward procedure for agent ANZ, in pseudo-code
function ABS_RW_ANZ (argAgNum)
    Implicit arguments = components of argAgNum's duty which are relevant, that is:
        a) term Y, implemented in ABSagT1
        b) the cooccurrence (ABSagRA(argAgNum), ABSagSA1(argAgNum), ABSagSA2(argAgNum))
    Assumption: these sites are occupied by existing terms
    Let A = term (ABSagRA(argAgNum), ABSagSA1(argAgNum))
    Let B = term (ABSagRA(argAgNum), ABSagSA2(argAgNum))
    Raising:
        none
    Solving:
        if Y == B then produce A as a finding
    Positioned resetting:
        let newY = A
        in the plexus, pick up the coindexed occurrences of (Y, B) → cooccurrences (RZ, S1Z, S2Z)
        Term (RZ, S1Z) is interpreted as newA; term (RZ, S2Z) is interpreted as newB.
        For each cooccurrence found, recruit ANZ (newY, RZ, S1Z, S2Z);
        the settlement base is transposed, the record belongs to another paradigm.
    One step in same paradigm = abductive movement by transitivity:
        Starting from RA, take a step in paradigm,
        yielding homologous cooccurrences (RZ, S1Z, S2Z).
        For any such cooccurrence found, recruit ANZ (Y, RZ, S1Z, S2Z);
        the settlement base is conserved,
        the record is a homologous one in the same paradigm.
    return
end ABS_RW_ANZ
18.3. Forward procedure for agent ANZ
None. No forward procedure is necessary for agent ANZ.
18.4. Discussion of agent ANZ: under-productive priming
For agent ANZ, priming is the initial process which accepts the terms of the task and
yields current operation conditions399. Priming accepts the three terms defining an
analogical task (e.g. le, une, un) and distributes them into: a) a spare term, and b) a
current pair, the latter being located in a defined plexus record.
In the current implementation, priming requires a pair of the task's terms to be directly
coindexed. However, intuition suggests that a less explicit, more diffuse attestation
should suffice. Current priming may then be seen as too rough and under-productive in
certain cases.
399 "Priming" is also used in experimental psychology and in psycholinguistics, with a different meaning.
A more productive design, which would succeed in priming in less favourable cases, is algorithmically possible, but it is heavy and not very plausible, which is why it was not implemented. A more parallel and efficient processor, as the brain certainly is, may do things differently, and one of the propositions below then applies:
- it produces a more diffuse priming,
- the structuring mechanisms which the brain uses are compatible with those of the model but more flexible,
- the structuring mechanisms which the brain uses differ from those of the model,
- after all, men also find it difficult to process analogy when the conditions relating their terms are not favourable enough and the model is not overall much worse than us.
In the current absence of a basis for a more precise statement, this point is left as it is.
19. Appendix: Analogical task with two
constituents (agent S2A)
19.1. Agent S2A, specification and overall design
Agent S2A solves an analogical task of the type "find X which is to Y//Z (the sign // denotes concatenation) as A is to B". Where there used to be a single term (Y) in agent ANZ, we
now have the concatenation of two terms (Y//Z) in agent S2A. Thus S2A is productive
in cases in which ANZ is not.
Agent S2A:
- finds Y' = ANZ (Y :: A : B), producing results at channel C1,
- finds Z' = ANZ (Z :: A : B), producing results at channel C2,
- then adapts its operation depending on whether C1 and C2 are productive. The cases are: neither is productive, C1 alone is productive, C2 alone is productive, C1 and C2 are both productive.
If C1 alone is productive, let Y' be its production; then S2A produces X = Y'//Z as its result.
If C2 alone is productive, let Z' be its production; then S2A produces X = Y//Z' as its result.
If C1 and C2 are both productive, S2A produces X = Y'//Z' as its result.
This notation is a convention: actually, Y' must be understood as the set of the terms which arise in successive phases at channel C1, and likewise for Z'. But this happens only seldom. The results of the agent are therefore sets in principle only; most often they contain zero or one element, less often two or three. When C1 and C2 are both productive, all the Y' are concatenated with all the Z' (a Cartesian product effect).
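A minimal sketch of this case analysis (everything about channels, phases, findings and strength tags is left out; solve_anz stands for an analogical solver such as the anz_solve sketch above, and the ANC and COL commissioners of the actual agent are bypassed, their combined effect being computed directly):

def s2a_solve(y, z, a, b, solve_anz, concat='//'):
    """Toy S2A: find X such that X is to y//z as a is to b."""
    y_analogs = solve_anz(y, a, b)          # productions at channel C1
    z_analogs = solve_anz(z, a, b)          # productions at channel C2
    if y_analogs and not z_analogs:         # C1 alone is productive
        return {yp + concat + z for yp in y_analogs}
    if z_analogs and not y_analogs:         # C2 alone is productive
        return {y + concat + zp for zp in z_analogs}
    # Both productive: Cartesian product effect; neither productive: empty set.
    return {yp + concat + zp for yp in y_analogs for zp in z_analogs}

For example, solve_anz may be built from the earlier sketch as lambda t, u, v: anz_solve(t, u, v, paradigms).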
19.2. Architecture of agent S2A
The diagram below shows the architecture of agent S2A.
Figure 46 Diagram of agent S2A (diagram not reproduced in text form; it shows the duty "X is to Y//Z as A is to B", the ANZ sub-tasks launched for Y and for Z with their channels chYP and chZP, the three productive cases (Y alone has analogs, Z alone has analogs, Y and Z both have analogs) handled through ANC and COL commissioners, and the delivery point yielding X = YP // Z, X = Y // ZP or X = YP // ZP; legend: MYP = assembly of YP, Y+ = successor of YP, Z- = predecessor of ZP)
19.3. Limits of agent S2A
S2A has the following limits. It:
- inherits from ANZ the latter's limit at priming,
- requires pre-analysed constituents; this limit is minor: it is easy to wrap S2A in a client which ensures the analysis, which is what agent AN2 does,
- requires the constituents to be attested terms (it does not process unknown terms); this limit is not critical in a two-term-only vision, but the absence of the 'unknown term' function becomes more sensitive when processing a longer form,
- is limited to two terms, and the architecture is difficult to extend to more.
20. Appendix: Limited syntax with
agreement (pseudo-agent AN2)
20.1. Definition of pseudo-agent AN2
AN2 treats the analogical task (find X which is to Y as A is to B) by combining two approaches, sketched below:
a) it tries to solve it directly by recruiting ANZ,
b) it tries to segment Y into two attested terms and then recruits S2A.
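A minimal sketch of this wrapper, under the same toy assumptions as the previous sketches (solve_anz and solve_s2a stand for the ANZ and S2A solvers, attested for the set of terms recorded in the plexus; all names are illustrative):

def an2_solve(y, a, b, solve_anz, solve_s2a, attested):
    """Toy AN2: direct ANZ attempt plus segmentation of y followed by S2A."""
    results = set(solve_anz(y, a, b))          # a) direct solving via ANZ
    # b) segmentation attempts: split y at every position, keep only splits
    #    into two attested terms, and recruit S2A on them.
    for i in range(1, len(y)):
        left, right = y[:i], y[i:]
        if left in attested and right in attested:
            results |= solve_s2a(left, right, a, b)
    return results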
20.2. Merits and limits of pseudo-agent AN2
The current implementation is a "wrap-up" which modestly compensates for the absence of syntax in the analogical task (or the absence of structural analogy in B2-B3, which is the same thing). The operation is heavy and not very plausible. AN2 should disappear upon 'syntactization' of the analogical task or 'analogization' of the analysis process.
AN2 is "pseudo" in that it has no code of its own: no rearward function, no forward function (there is no ABS_RW_AN2 or ABS_FW_AN2). Its implementation is reduced to a Matlab triggering/initialization function. With an implementation like this one, AN2 could not act as a commissioner for another agent. This is contingent: if it had to, a cleaner packaging would be easy to make, but this was not invested in because it is not promising.
Within its current limits, AN2 is the best that can be shown doing a little syntax while observing agreement, still without lexical categories, without syntactic features, and without rules.
AN2, because it is a client of CATZ, which neglects positionality, should not perform this well. However, the tests are good: the agent responds less often than one might wish (it is deemed to be under-productive), but when it does, the results are always good. This is a happy effect which should be explained but has not yet been.
21. Appendix: Summary of agents
The general picture below indicates which agents use which other agents.
Agents B2 and B3 are shown merged to denote that they form a solidary whole. For example, B2 is its own client, B2 is a client of B3, B3 is its own client, and B3 is a client of B2.
Figure 47 Client agents (on the left) and commissioner agents (on the right). (Diagram not reproduced in text form; it relates B2, binary assembly, and B3, ternary assembly, to CATZ, suggestion of similarity; S2A, analogical task with two constituents, which uses a variety of B2; AN2, virtual, limited syntax with agreement; ANZ, analogical task. CATZ neglects copositionings; ANZ observes copositionings.)
Agent CATZ neglects copositionings. The consequence must be that its clients (its left
transitive closure: B2, B3, S2A, AN2) also neglect copositionings.
B2-B3 for example, which is client of CATZ, does not observe agreement.
Strictly, agent ANZ is the only one to observe copositionings. However, its client AN2 observes them too, as tests show, despite its using CATZ (cf. section 5.7. Grammatical agreement with AN2, page 153). This favourable effect is a surprise in the model. It is welcome but has not been explained.
References400
Abney 1996 (Steven) Statistical methods and linguistics, in Judith Klavans and Philip Resnik,
eds. The balancing act, MIT Press, 1996, http://www.sfs.nphil.uni-tuebingen.de/~abney/
Altmann 1990 (Gerry T.M. ed.) Cognitive models of speech processing Bradford Books / MIT
Press.
Aristotle 1980, La Poétique , Text, transl and notes, Roselyne Dupont-Roc and Jean Lallot,
Paris, Seuil.
Arnauld 1660/1997 (Antoine, and Claude Lancelot) Grammaire générale et raisonnée, Editions
Allia, Paris 1997, 1st publ. 1660.
Atkinson 1968 (R. L., and R. M. Shifrin) "Human memory, a proposed system and its control
processes" in K.W. Spence and J.T. Spence, eds., The psychology of learning and
motivation: advances in research and theory, vol. 2 New York, Academic Press.
Auroux 1989 (Sylvain, at al.) Histoire des idées linguistiques, tome 1, la naissance des
métalangages. Mardaga, Bruxelles.
Auroux 1991 (Sylvain) La linguistique est une science normative in Meschonnic 1991.
Auroux 1994 (Sylvain) La révolution technologique de la grammatization, Mardaga, Liège.
Auroux 1996 (Sylvain, and al., éditeurs) Histoire et grammaire du sens Armand Colin, Paris.
Auroux 1998 (Sylvain) La raison, le langage et les normes, PUF, Paris.
Barlow 2000 (Michael and Suzanne Kemmer editors) Usage-based models of language, CSLI
Publications, Stanford.
Barner 2002 (David, and Alan Bale) No nouns, no verbs: psycholinguistic arguments in favor
of lexical underspecification, Lingua 112 (2002) 771-791.
Bazell 1949 (C.E.) On some asymmetries of the linguistic system, Acta Linguistica, Vol 5, fasc.
3, 1945-1949.
Bechtel 1991/1993 (William) & Adele Abrahamsen Le Connectionnisme et l'Esprit La
Découverte Paris 1993 (Ed. orig. en ang., 1991).
400 When two dates are provided, separated by a slash, as for example in:
Saussure 1915/1975 (Ferdinand de) Cours de linguistique générale (ed. Tullio de Mauro) Payot, Paris, 1975, first publ. in 1915.
the first one (e.g. 1915) is that of the original publication; it is provided for precedence reasons or to locate the publication in history; the second one (e.g. 1975) is that of a more recent publication, a more accessible one, occasionally a translation. In the quotations, the page indication is a reference to the more recent publication.
Benveniste 1950 (Emile) La phrase nominale, Bull. de la Société de Linguistique, 46, 1950, p.
19-36.
Benveniste 1966 (Emile) Problèmes de Linguistique Générale 1, Gallimard Collection Tel
Paris 1966.
Bloomfield 1933/1970 (Leonard) Le Langage, Payot 1970 (1st publ. New York 1933).
Bohas 1993a (Georges) Diverses conceptions de la morphologie arabe, in Bohas Georges, (ed.)
Développements récents en linguistique arabe et sémitique. Institut Français de Damas,
Damas 1993.
Boltanski 2002 (Jean-Elie), La révolution chomskyenne et le langage, L'Harmattan, Paris, 2002.
Bourdeau 2000 (Michel) Locus logicus L'Harmattan Paris, coll. Commentaires philosophiques.
Bransford 1971 (J.D. and J.J. Franks) The abstraction of linguistic ideas, Cognitive
Psychology, 2, 331-380.
Bresnan 2001 (Joan) Lexical Functional Syntax. Blackwell.
Brunot 1887/1961 (Ferdinand Brunot and Charles Bruneau), Précis de grammaire historique de
la langue française, Masson, 5th ed. 1961 (1st ed. 1887).
Burloud 1948 (Albert) Psychologie Hachette, Paris.
Cajétan 1498/1987, De l'analogie des noms, in Pinchard 1987.
Caravedo 1991 (Rocío) La competencia lingüística, crítica de la génesis y del desarrollo de la
teoria de Chomsky, Editorial Gredos, Madrid.
Chafe 1996a (Wallace) How consciousness shapes language, Pragmatics and cognition, 4: 1, p.
35-64.
Chametzky 1996 (Robert A.) A theory of phrase markers and the extended base, Sunny Press,
Albany, United States of America.
Chametzky 2000 (Robert A.) Phrase structure, Blackwell.
Chauviré 1989 (Christiane) Ludwig Wittgenstein, Seuil, Paris.
Chevrot 2001 (J.-P., and M. Fayol) Acquisition of French liaison and related child errors, in
M. Almgren, A. Bareña et alii, eds, Research on child language acquisition: Proceedings
of the International Association for the Study of Child Language, vol. 2, Cascadilla
Press, 560-774.
Chomsky 1957/1969 (Noam) Structures syntaxiques, Seuil, Paris, coll. Points (1st publ. in English 1957).
Chomsky 1960 (Noam) Explanatory models in linguistics, in E. Nagel, P. Suppes and A.
Tarski, eds, Logic, Methodology and the Philosophy of Science: proc. of the 1960
International Conference, Stanford 1962.
Chomsky 1964 (Noam) Current issues in linguistic theory, in J.A. Fodor & J.J. Katz (dir.), The
structure of language, Readings in the philosophy of language Englewood Cliffs,
Prentice Hall, 51-118.
Chomsky 1965/1971 (Noam) Aspects de la théorie syntaxique, Seuil, Paris, 1965.
Chomsky 1966b (Noam) Cartesian linguistics, Harper and Row, New York (French transl. La linguistique cartésienne, Seuil 1969).
Chomsky 1972/1975 (Noam) Questions de Sémantique Paris, Seuil, trad. d'un original en
anglais de 1972.
Chomsky 1974/1975 (Noam) Problèmes et mystères dans l'étude du langage humain in
Chomsky 1977/1975.
Chomsky 1975 (Noam) The logical structure of linguistic theory, Plenum Press, New York,
London.
Chomsky 1981/1984 (Noam) La connaissance du langage: ses composantes et ses origines,
Communications 40, Seuil 1984; déjà paru en anglais in Philosophical transactions of
the Royal Society of London 1981.
Chomsky 1986b (Noam) Barriers, MIT Press, Cambridge Mass.
Chomsky 1990b (Noam) On formalization and formal linguistics, Natural Language and
Linguistic Theory, 8, 143-147.
Chomsky 1995/1997a (Noam) The minimalist program, MIT Press (1st publication 1995).
Condillac 1973 Essai sur l'origine des langues, ed. Galilée, Paris.
Cook 1988 (Vivian J., and Mark Newson) Chomsky's Universal Grammar, an introduction
Blackwell.
Cori 1998 (Marcel and Jean-Marie Marandin) Héritage de propriétés dans les grammaires
d'arbres polychromes, Lynx vol. 39 n° 2, 1998, Université de Paris 10, Nanterre.
Coulon 1976 (D., D. Kayser, A. Bonnet, JM. Lancel, M. Monfils), Essai de compréhension
d'un texte à l'aide d'un réseau sémantique de procédures, Congrès AFCET-Informatique,
Gif s/Yvette, Actes pp.113-122, 3-5 Novembre 1976).
Creissels 1991 (Denis) Description des langues négro-africaines et théorie syntaxique Ellug,
Univ. Stendhal, Grenoble.
Creissels 1995 (Denis) Eléments de syntaxe générale Paris, PUF.
Croft 1993 (William) The semantics of subjecthood, in Yaguello 1994.
Croft 2001 (William) Radical Construction Grammar, Oxford University Press.
Cruttenden 1986 (Alan) Intonation, Cambridge University Press.
Culioli 1982 (Antoine) Rôle des représentations métalinguistiques en syntaxe, Communication, 23. congrès des Linguistes, Tokyo 1982. Département de Recherches Linguistiques, lab. de Linguistique Formelle (ERA 642), Univ. Paris 7, Paris.
Culioli 1990 (Antoine) Pour une Linguistique de l'Enonciation, opérations et représentations,
tome 1 Orphys, Paris.
De Mauro 1969 (Tullio) Une introduction à la sémantique, Payot, Paris (trad. par JL. Calvet
d'un original italien de 1966).
Demarolle 1990 (Pierre) Réflexions sur l'analogie; formes et lieux dans l'étude du verbe en
français moderne, périodique Le français moderne, 58ème année, octobre 1990, n° 3 / 4.
Edité par le CILF, 21 bis rue du Cardinal Lemoine, 75005, Paris
Dokic 2000 (Jérôme & Joëlle Proust, eds.) Simulation and knowledge in action, Bibliothèque
du CREA, Imprimerie de l'Ecole Polytechnique, F-92128 Palaiseau, dec. 2000.
Douay 1991 (Françoise, and Jean-Jacques Pinto), Analogie / Anomalie in Vandeloise 1991.
Ducrot 1972 (Oswald and Tzvetan Todorov), Dictionnaire Encyclopédique des Sciences du
Langage Seuil Paris.
Edelman 1998 (Shimon), Representation is representation of similarities, Behavioral and Brain
Sciences (1998) 21, 449-498.
Eliasmith 2001 (Chris, and Paul Thagard) Integrating structure and meaning: a distributed
model of analogical mapping Cognitive Science 25.
Elithorn 1984 (A. & Banerji P., eds.) Artificial and human intelligence, NATO Publications,
Brussels.
Elman 1998 (Jeff) Generalization, simple recurrent networks, and the emergence of structure
Proceedings of the 20th annual conference of the Cognitive Science Society. Mahway, NJ:
Lawrence Erlbaum Associates.
Engel 1996 (Pascal) Philosophie et psychologie Gallimard, coll. Folio/Essais, Paris.
Fauconnier 1997a (Gilles) Mappings in thought and language, Cambridge University Press.
Fehr 2000/1997 (Johannes) Saussure entre linguistique et sémiologie, Presses Universitaires de
France, (traduit de l'allemand, publication originale 1997).
Fillmore 1990 (Charles J.) Construction Grammar. Course reader for Linguistics 120A,
Université de Californie, Berkeley.
Fillmore 1992 (Charles J.) Constituency v. dependency in Madray-Lesigne 1995.
Fodor 1983 (Jerry) Modularity of mind, The MIT Press.
Fodor 1988 (Janet Dean Fodor) Thematic roles and modularity in Gerry T.M. Altman (ed).,
Cognitive models of speech processing Bradford Books / MIT Press, 1990, p. 434.
Forbus 2001 (Kenneth D.) Exploring analogy in the large in Gentner 2001a
Fradin 1999 (Bernard) Syntaxe et morphologie, in Histoire, épistémologie, langage, tome 21,
fascicule 2, 1999, SHESL, PUV.
Fradin and Marandin 1997 (eds.) Mots et grammaire, Didier, Paris.
Freeman 2000 (Robert J.) Example-based Complexity--Syntax and Semantics as the Production
of Ad-hoc Arrangements of Examples, Proceedings of the ANLP/NAACL 2000
Workshop on Syntactic and Semantic Complexity in Natural Language Processing
Systems, pp. 47-50.
French 2002 (Robert M.) The computational model of analogy-making Trends in Cognitive
Sciences Vol.6 No.5 Mai 2002.
Frei 1954 (Henri) Critères de délimitation in Word, 1954, p. 136-145.
Fuchs 1993 (Catherine) (and coll.) Linguistique et Traitement Automatique des Langues,
Hachette, Paris.
Galban 1907 (A.) Nouvelle grammaire espagnole-française, Garnier, Paris.
Galmiche 1991 (Michel) Sémantique linguistique et logique, PUF. Paris.
Ganascia 2000 (J.-G.) Logique et induction, un vieux débat, in Kodratoff 2000.
Garnier 1985 (Catherine) La Phrase Japonaise, Structures Complexes en Japonais Moderne
Publications Orientalistes de France (INALCO) Paris 1985.
Gayral 1996 (Françoise , Daniel Kayser, François Lévy) Logique et sémantique du langage
naturel: models et interprétation, in Intellectica 1996/2, 23, 203-325.
Gazdar 1985 (Gerald, Ewan Klein, Geoffrey Pullum, Ivan Sag) Generalized phrase structure
grammar, Basil Blackwell.
Gentner 1983 (Derdre) Structure Mapping: A Theoretical Framework for Analogy. Cognitive
Science 7:2. pp 155-170.
Gentner 1989 (Derdre) The mechanisms of analogical learning, in S.Vosniadou & A. Ortony
(eds.), Similarity and analogical reasoning, Cambridge University Press.
Gentner 2001a (Derdre; Holyoak; Keith J., & Konokiv, Boicho N.) The analogical mind,
perspectives from cognitive science, MIT Press, Cambridge Massachusetts, London.
Gineste 1997 (Marie-Dominique) Analogie et cognition, étude expérimentale et simulation
informatique, PUF.
Givón 1979 (Talmy) On understanding grammar, Academic Press.
Goldberg 1995 (Adele) Constructions: a construction grammar approach. The University of
Chicago Press.
Goldsmith 2001 (John A.) Unsupervised learning of the morphology of a natural language.
Association of Computational Linguistics.
Goodman 1951 (Nelson) The structure of appearance, The Bobbs-Merryl Company inc.
Indianapolis, 1952.
Gross 1986-1 (Maurice) Grammaire transformationnelle du français, 1-Syntaxe du verbe, Ed.
Cantilène, Paris, réédition d'un ouvrage publié en 1968 par Larousse.
Gross 1996 (Maurice) Remarques sur le notion de sujet in Auroux 1996.
Habert 1997 (Benoît, Adeline Nazarenko and André Salem) Les Linguistiques de Corpus
Armand Colin, Paris.
Hacking 1975/2002 (Ian) L'émergence de la probabilité Seuil, Paris.
Haegeman 1991 (Liliane) Government and binding theory, Blackwell.
Hagège 1976 (Claude) La grammaire générative, réflexions critiques, PUF, Paris, 1976.
Hagège 1999 (Claude) La structure des langues, PUF.
Harris 1951 (Zellig S.) Methods in structural linguistics (renommé Structural linguistics dans
des éditions ultérieures) The University of Chicago Press.
Harris 1991 (Zellig S.) A theory of language and information. A mathematical approach
Oxford University Press.
Hjelmslev 1933/1985 (Louis) Structure générale des corrélations linguistiques, in Hjelmslev
1985 (Louis) Nouveaux essais gathered and presented by François Rastier, PUF, Paris, p.
25 sq.
Hofstadter 1995 (Douglas) Fluid concepts and creative analogies, Basic Books (Perseus).
Hopper 1993 (Paul J. and Elisabeth Closs Traugott) Grammaticalization, Cambridge textbooks
in linguistics.
Houdé 1998 (O. et al.) Vocabulaire des Sciences Cognitives Presses Universitaires de France,
Paris, 1998.
Householder 1971 (Frederick W.) Linguistic speculations, Cambridge University Press.
Hummel 1997 (J.E. and Holyoak K. J.) Distributed representations of structure: a theory of
analogical access and mapping, Psychological Review 104 (3), 427-66.
Itkonen 1997 (Esa & Jussi Haukioja), A rehabilitation of analogy in syntax (and elsewhere), in:
A. Kertesz (ed.): Metalinguistik im Wandel: die kognitive Wende in Wissenschaftstheorie
und Linguistik. Frankfurt a/M: Peter Lang, 1997, pp. 131-177).
Itkonen 2003 (Esa), Analogy: within reality; between reality and language; between mind and
language; within language, en cours de publication.
Jackendoff 1975 (Ray) Morphological and semantic regularities in the lexicon, Language 51,
639-71
Jackendoff 1993 (Ray) Patterns in the Mind, Language and human nature, Harvester
Wheatshef, Hemel Hempstead, Royaume uni, 1993.
Jackendoff 2002 (Ray) Foundations of language, Oxford University Press.
Jakobson 1963 (Roman) Essais de Linguistique Générale, Les Fondations du Langage Ed. de
Minuit, Paris.
Johnston 1997 (Jason C.) Systematic Homonymy and the Structure of Morphological
Categories: Some Lessons from Paradigm Geometry. PhD thesis, Université de Sydney,
Australie, disponible à l'adresse: http://www.astadhyayi.net/
Kager 1999 (René) Optimality Theory, Cambridge University Press.
Kerleroux 1996 (Françoise) La coupure invisible, Etudes de syntaxe et de morphologie
Septentrion, Paris 1996.
Kirchner 2002 (R. M.) (en preparation) Preliminary thoughts on "phonologization with an
exemplar-based speech processing system (ROA-320-0699), Rutgers Optimality Archive,
http://ruccs.rutgers.edu/ROA.
Kodratoff 2000 (Y., E. Diday, P. Brito, M. Moulet, eds.) Induction symbolique numérique à
partir de données, Cépaduès-Editions 11, rue Nicolas Vauquelin, F 31100 Toulouse
France
Koenig 1999a (Jean-Pierre) Lexical Relations Stanford monographs in linguistics, Centre for
the Study of Language Publications, Stanford Ca.
Lacan 1953 (Jacques) Fonction et champ de la parole et du langage, in Ecrits, Seuil, Paris,
1966.
Lafon 1960 (René) L'expression de l'auteur de l'action en basque in Lafon 1999 (René)
Vasconiana Iker-11 Real Academia de la Lengua Vasca, Ezkualtzandia, Plaza Barria, 15
48005 Bilbao
Lakoff 1987 (George) Women, fire and dangerous things, Chicago University Press (BNF).
Laks 1993 (Bernard, and Marc Plénat) De natura sonorum, essais de phonologie, Presses
Universitaires de Vincennes, Paris, 1993.
Laks 1996 (Bernard) Langage et Cognition, Hermès, Paris, 1996.
Lamb 2000 (Sydney M.) Bidirectional processing in language and related cognitive systems, in
Barlow 2000.
Langacker 1987 (Ronald), Foundations of cognitive grammar I, Theoretical Prerequisites,
Stanford University Press.
Langacker 1988b (Ronald) A usage-based model, in Rudzka-Ostyn 1988.
Langacker 1998 (Ronald) Conceptualization, symbolization and grammar in Tomasello 1998.
Langacker 2000 (Ronald) A dynamic-usage based model, in Barlow 2000.
Lemaréchal 1989 (Alain) Les parties du discours PUF.
Lemaréchal 1997 (Alain) Zéro(s) PUF.
Lepage >= 1996 (Yves) Solving analogies on words: an algorithm. Internet, Google (lepage
yves analogy).
Ligozat 1994 (Gérard), Représentation des Connaissances Linguistiques, Armand Colin, Pais,
1994.
Lima 1994 (Susan D. ed.) The reality of linguistic rules John Benjamins
Livet 1995 (Pierre) Connectionnisme et fonctionnalisme, Intellectica, 21, 1995.
Ludwig 1997 (Pascal) Le langage, textes choisis, Garnier Flammarion, Paris.
Mac Whinney 1992 (Brian) The dinosaurs and the ring in Lima 1994.
Mac Whinney 1998 (Brian) Models of the emergence of language in Annual review of
Psychology 1998 49: 199-227.
Mac Whinney 2000 (Brian) Connectionism and language learning in Barlow 2000.
Maes 1980 (Hubert) Thème et propos, récurrence de la structure sujet-prédicat en japonais. In
Bernard Franck (ed.) Mélanges offerts à M. Charles Hagenauer, Etudes Japonaises
L'Asiathèque, Paris 1980.
Magnani 2000 (Lorenzo) Abduction, reason and science. Kluwer.
Mandelbrot 1954 (Benoît) Structure formelle des textes et communication, Word, vol 10, 1954.
Manning 2002 (Christopher D.) Probabilistic syntax, to appear in Bod, Hay and Jannedy (eds), Probabilistic linguistics, MIT Press.
Manning & Sag 1995 (Christopher D. Manning and Ivan A. Sag) Dissociations between
argument structure and grammatical relations, in Webelhuth 1999.
Marandin 1997 (Jean-Marie) "Pas d'entité sans identité": l'analyse des groupes nominaux
DET+A, in Fradin and Marandin 1997.
Marcus 2001 (Gary) The algebraic mind Cambridge, MIT Press.
Martinet 1955 (André) Economie des changements phonétiques, Berne, Francke.
Martinet 1958 (André) La construction ergative et les stuctures élémentaires de l'énoncé in
Journal de psychologie normale et pathologique 1958, p. 377-392.
Martinet 1970 (André), Eléments de linguistique générale, Armand Colin Paris, 1965.
Martinet 1979 (André), Grammaire fonctionnelle du français, Didier, Crédif, 1979.
Martinet 1985 (André), Syntaxe Générale, Armand Colin, Paris 1985.
McClelland 1986 (J.L and Rumelhart, D.E). Parallel Distributed Processing, Explorations in
the Microstructure of Cognition, Volume 1 et 2: Psychological and Biological Models,
Cambridge Mass.
McClelland 1986sp (J. and Kawamoto) A. Mechanisms of sentence processing: assigning roles
to constituents in McClelland 1986.
Meillet 1922/1934 (A.) Introduction à l'étude comparative des langues indo-européennes, 1st
publ. Paris 1922, reprinted in 1964 in fac-simile by the University of Alabama Press,
from the 7th publication (1934).
Melamed 2001 (Dan) Empirical Methods for Exploiting Parallel Texts, MIT Press.
Meschonnic 1991 (Henri) Le langage comme défi, Presses Universitaires de Vincennes, Saint-Denis.
Milner 1982 (Jean-Claude) Ordre et raisons de langue. Seuil, Paris.
Milner 1989 (Jean-Claude) Introduction à une Science du Langage Seuil, Paris
Milner 1991 (Jean-Claude) Géométries in Le gré des Langues No 2, L'Harmattan, Paris
Minsky 1986 (M. L.) The society of mind. Simon and Schuster, New York. A society of
relatively simple and autonomous agents.
Moeschler 1994b (Jacques and A. Reboul) Dictionnaire encyclopédique de pragmatique, Seuil,
Paris.
Nagao 1984 (M.) A framework of a mechanical translation between Japanese and English by
analogy principle in Elithorn 1984.
Neelman 2001 (Ad & Fred Weerman) Flexible syntax, or A theory of case and arguments,
Kluwer, Studies in natural language and linguistic theories, Volume: 47.
Newmeyer 1983 (Frederick J.) Grammatical Theory The Chicago University Press, Chicago
and London.
Nicolas 1999 (David) La distinction entre noms massifs et noms comptables. Aspects
linguistiques et conceptuels Thèse de doctorat, Ecole Polytechnique, 1999 Dir. Daniel
Andler.
Normand 1990 (C., sous la dir. de) La quadrature du sens, PUF, 1990.
Page 2000 (Michael) Connectionist modelling in psychology: a localist manifesto, Behavioral
and Brain Sciences (2000) vol. 23, 443-512.
Paveau 2003 (Marie-Anne and Georges-Elia Sarfati) Les grandes théories de la linguistique, de
la grammaire comparée à la pragmatique Armand Colin, Paris.
Pei 1969 (Mario & Franck Gaynor), Dictionary of linguistics, Littlefield, Adams and Co, New
Jersey.
Pereira 2000 (Fernando) Formal grammar and information theory: together again?
Philosophical Transactions of the Royal Society, London, A, (2000) 358, 1239-1253.
Pierrehumbert 2002 (Janet B.) (sous presse) Exemplar dynamics: word frequency, lenition and
contrast, in J. Bybee & P. Hopper (eds) Frequency effects and emergent grammar, John
Benjamins, ou http://www.ling.nwu.edu/~jpb/
Pinchard 1987 (Bruno) Métaphysique et sémantique, autour de Cajétan, étude et traduction de
"Nominum Analogia", Vrin, Paris.
Pinker 1991 (Pinker Steven and Prince Alan) Regular and irregular morphology and the
psychological status of rules of grammar. Proceedings of the 17th annual meeting of the
Berkeley linguistics society, 230-51, Berkeley, CA, BLS.
Planck 1995 (Frans ed.) Double case, Agreement by suffixaufnahme, Oxford University Press.
Pollard 1987 (C. and Sag I.) Information-based syntax and semantics. CSLI series, University of
Chicago Press.
Pollock 1997 (Jean-Yves) Langage et cognition, Introduction au programme minimaliste de la
grammaire générative. PUF, Paris.
Putnam 1960 (H.) "Minds and machines" in S. Hood ed. Dimensions of mind, New York, New
York University Press.
Rebuschi 1997 (Georges) Essais de Linguistique Basque Universidad del País Vasco Bilbao /
Diputación Foral de Gipuzkoa, San Sebastián.
Rastier 1991 (François) Sémantique et recherches Cognitives Paris, PUF.
Rastier 1996 (François) Problématiques du signe et du texte, Intellectica 1996/2, 23, pp.11-52.
Rastier 1998a (François) Le problème épistémologique du contexte et l'interprétation dans les
sciences du langage, Langages, 129, pp. 97-111.
Rastier 2002d (François) Les critères linguistiques pour l'identification des textes racistes Eléments de synthèse, in Valette, Mathieu, éd., Projet européen Princip.net, Plate-forme
pour la Recherche, l'Identification et la Neutralization des Contenus Illégaux et
Préjudiciables sur l'internet. Rapport 2002-1, Inalco, pp. 84-98.
Rey 1973 (Alain) Théories du signe et du sens, Klincksiek, Paris.
Ricœur 1969 (Paul) Le conflit des interprétations, Paris, Seuil.
Rissanen 1989 (Jorma) Stochastic complexity in statistical inquiry, Singapour, World Scientific
Publishing Company.
Rudzka-Ostyn 1988 (Brygida, ed.) Topics in cognitive linguistics Amsterdam, John Benjamins.
Rumbaugh 1991/1997 (James et al.), Modélisation et conception orientées objet, Masson,
Paris 1997, original en anglais Object oriented Modelling, Prentice Hall, Englewood
Cliffs, 1991.
Ruwet 1967 (Nicolas) Introduction à la grammaire générative, Plon, Paris.
Sadler 1989 (V.) Working with analogical semantics. Dordrechts: Foris.
Sadock 1991 (Jerrold M.) Auto lexical syntax: a theory of parallel grammatical
representations, Chicago, Londres, University of Chicago Press.
Sadock 2000 (Jerrold M.) Morphologie dérivationnelle en quatre dimensions, kallaallisut,
groenland occidental, in Tersis 2000, p. 183.
Sanctius 1587/1982 (Franciscus) La Minerve, Presses Universitaires de Lille.
Saussure 1915/1975 (Ferdinand de) Cours de linguistique générale (éd. Tullio de Mauro)
Payot, Paris, première éd. en 1915.
Saussure 2002 (Ferdinand de) Ecrits de linguistique générale, Gallimard, Paris.
Shaumjan 1966 (Sebastian), La cybernétique et la langue, in Benveniste 1966.
Shaumjan 1987 (Sebastian), Semiotic theory of language, Indiana University Press.
Shiver 1987 (Bruce & Peter Wegner eds.) Research directions in object oriented programming,
MIT Press.
Skousen 1989 (Royal), Analogical modelling of language, Kluwer.
Smolensky 1999 (Paul) Grammar-based connectionist approaches to language in Cognitive
Science Vol 23 num. 4, 1999.
Soutet (Olivier), 2000, Le subjonctif en français, Orphys, Paris.
Spinoza 1661/1984 (B. de) Traité de la réforme de l'entendement et de la meileure voie à suivre
pour parvenir à la vraie connaissance des choses, Text, translation and notes by par A.
Koyré, Vrin, Paris.
Swiggers 1997 (Pierre), Histoire de la pensée Linguistique, PUF, Paris.
Tager-Flusberg 1996 (H.), Language acquisition: grammar, in Brown (Keith & Jim Miller,
eds.) Concise encyclopedia of syntactic theories, Pergamon, 1996.
Tanenhaus 1988 (Michael K. et alii), Combinatory lexical information and language
comprehension in Altmann 1990, p. 386.
Tersis 2000 (Nicole and Michèle Terrien eds.) Les langues eskaléoutes, Sibérie, Alaska, Canada, Groënland, CNRS Editions, 2000.
Tomasello 1998 (Michael, ed.) The new psychology of language, Erlbaum, London.
Turenne 1999 (Nicolas) Apprentissage d'un ensemble pré-structuré de concepts d'un domaine
in Math. inf. sc. humaines num. 148 1999, p. 41.
van Gelder 1998 (T. J.) The dynamic hypothesis in cognitive science. Behavioral and Brain
Sciences, vol. 21, num. 5, 1-14.
van Vallin 1997 (Robert D.) Structure, meaning and function, Cambridge University Press,
1997.
Vandeloise 1990 (Claude) Règles ou listes, l'arbitrage de la morphologie Le français moderne,
octobre 1990.
Veale 1998 (Tony & Diarmuid O'Donoghue), How to blend concepts and influence people:
Computational models of conceptual integration,
http://www.compapp.dcu.ie/~tonyv/metaphor.html
Wallon 1945 (Henri) Les origines de la pensée chez l'enfant, PUF, réédition PUF, col.Quadrige
1989.
Weil 1966 (Simone) Sur la science, Gallimard, Paris.
Wilks 1973 (Y.) Understanding without proof, Proceedings of the third international joint
conference on artificial intelligence, pp. 255-261, Stanford 1973.
Williams 1994 (Edwin), Thematic structure in syntax, MIT Press, Cambridge, Ma, United
States of America.
Yaguello 1994 (Maria, ed.) Subjecthood and subjectivity, The status of the subject in linguistic
theory, Proceedings of the colloquium held in London 19-29 march 1993, Orphys, Paris,
1994.
Yvon 1994 (François) Paradigmatic cascades: a linguistically sound model of pronunciation
by analogy. Proc. of ACL-EACL-97, Madrid 1994, p. 428-435.
Zwicky 1985 (A.) Heads In Journal of linguistics 21 pp 1-29.
Glossary
A2 'A2 analogy' plays between two terms. Saying "X and Y are analogous" is saying that they are similar without specifying in which way. This is poorer than A4 analogy. A2 analogy, diverging from the best philosophical, semiotic, and linguistic tradition, is a popular vision. It is close to the association of associationist psychology. For most of the 20th century, many scientists perceived in analogy its A2 variety only, which contributed to the discredit of A4 analogy.
A4 'A4 analogy' plays between four terms as in "X is to Y as A is to B". It is the analogy of
Aristotle, Varro, Saussure, Bloomfield, Gentner and many more.
ABS Agent-Based Solving (ABS) is a possible implementation of the dynamics in the
Analogical Speaker. It is based on agents and channels.
agent ABS consists of agents (and channels). An agent is an organ which contributes to the
computation of a linguistic act or linguistic task. It has a duty which is made up of a few
terms copositioned with respect to one another. To fulfil its duty, an agent uses the
plexus data matching the terms of its duty, it recruits more agents, its commissioners,
and assigns them a duty in turn, derived from its own. An agent is 'short-sighted': it does not have an entire vision of the task to which it contributes. A linguistic task may involve ten to a few thousand agents.
Analogical Speaker The name of the model defended in this work.
analogies which motivated transformations. These are linguistic facts like John sees
Jane, Jane is seen by John, or she speaks, she is the one who speaks. To account for such
systematicities, generativism postulated transformations. The model defended in this
thesis does not make that postulation. Cf. section 4.2. About non-transformation (p.
107).
copositioning. The computation which is proposed to dynamically account for linguistic
acts is founded mainly on analogy. Instead of simply saying 'analogy', sometimes
'copositioning' is used: a) to insist precisely on copositionality because not everyone
shares this vision of analogy, and b) to make it possible for one or several mechanisms
other than analogy, but presenting this same property to establish copositionings, to
come later and complement the apparatus.
channel ABS uses channels (and agents). A (client) agent recruits other (commissioner)
agents via a channel (the recruitment is then 'opaque') when it needs to see the results
which will accrue to it as associated with different positions. The most common usage
of channels is the syntagmatic situation: each constituent of an assembly corresponds to
a channel.
client In ABS, a client is an agent which recruits other agents, which are then its commissioners.
commissioner In ABS, an agent is commissioner for the agent which recruited it. The
latter is its client.
concrete 'Concrete' is opposed to 'abstract' or 'categorial'. A concrete theory is one which
does not call on categories or abstractions. It is based on exemplars and occurrences. Likewise for a model. The Analogical Speaker is a concrete model.
delivery point In ABS an agent has a channel to which its findings are merged into results.
This is the agent's delivery point. It is obligatory and unique for any agent.
edification One of the two processes whereby the heuristic structure is elaborated (the
other one is recruitment). Channels, when adjacent, get federated (as an assumption),
giving rise to an agent which manifests this hypothetical federation. The agent is vested
with a duty derived from the field data of the channels. The agent eventually settles once
or several times. Settlement confirms the agent, and motivates the creation of a channel.
The latter sanctions the success of the federation, thus far hypothetical. The term
'edification' is chosen to avoid confusion with 'construction', which is left for linguistic
constructions in the sense of Fillmore.
exemplar, exemplarist 'Exemplar' is opposed to 'category'. 'Exemplarist' is opposed to
'abstract'. In this model, records are exemplarist. 'Exemplar' is also opposed in another
way, to 'occurrence'.
expansive gate In the plexus, set of records which gives expansive homology an occasion
to operate. Cf. section 3.6.4.2. Expansive gate (p. 86).
familiarity orientation A paradigmatic link bears a familiarity orientation. This is the
indication that one of the records of the link is more familiar than the other, or that they
have equal familiarity. The computation goes from less familiar to more familiar (or as familiar) but not the other way.
field, field data The field is defined as that which the speaking subject has at hand when
he is performing a linguistic task. Field data are indexes on elements of situation:
linguistic form or elements which are perceived but which are not linguistic form. For
example in the reception of an utterance, certain field data are the place, in the received
form, of the various parts (segments, constituents, syntagms) being processed. In this
case, field data are indexes in the unilinear organization of the received form. More
generally, when extending the model to encompass non linguistic perceptual data, field
data are bound to index determinations of space, of time, and of perceptual channel
(hearing, vision, etc.), of the elements submitted to the computation. In a heuristic structure which makes recruitment only, there are no field data. There necessarily are in a process by edification.
finding In ABS, a finding is produced by an agent. When it settles, an agent raises a
finding. Findings are merged into results at the agent's delivery point.
form In this work, 'form', unless mentioned otherwise, is used to refer to linguistic form,
opposing it then to meaning. When something else is meant, for example a Gestalt, then
it is explicitly said so.
FW (forward) In the conventional orientation of the heuristic structure, the forward
direction.
heuristic structure The set of agents and channels that it takes to carry out a linguistic task
and find results for it. The heuristic structure is elaborated by recruitment and possibly
by edification.
immersion The process (the procedure) which is proposed to account for the reception of
an utterance: an utterance is received (and finally interpreted, understood) when its
immersion has been able to take place. Instead of one-to-one mappings, an immersion establishes
copositionings of several terms at a time, between terms as perceived and terms in the
plexus. Cf. section 8.5.4. What is receiving an utterance, what is understanding (p.
263).
isonomy The fact of following reasons associated with the objects themselves, without
having to rest on their properties. Is opposed to partonomy. Cf. section 3.6.7.
Partonomy and isonomy (p. 89).
local cf. proximal.
macroscopic determinism This term is from D. Hofstadter. Macroscopically equal
observables may be the effects of mechanisms which differ in their detail. Here,
macroscopic determinism is obtained by linguistic knowledge being exemplarist, by the
possibility of producing a same finding by different settlements, by the multiplicity of
recruitment and edification paths, by the mechanism of merging the findings into
results, by any elementary resource being potentially useful without any being
indispensable, by the general integrativity property which empowers fragmentary and
heterogeneous resources, etc.
merging In ABS, different agents deliver at a same channel (their delivery point); their
findings are 'merged' into results at the delivery point. The principle is that findings with
the same content (but each belonging to different agents) merge at the delivery point
into a same result. Merging contributes to implement macroscopic determinism.
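The merging principle lends itself to a small sketch: findings delivered at one channel are keyed by their content, and identical contents collapse into a single result while the contributing agents remain distinct. The agent identifiers and finding contents below are hypothetical.

```python
# Sketch of merging at a delivery point (illustrative identifiers and contents).
from collections import defaultdict

def merge_findings(findings):
    """findings: (agent_id, content) pairs delivered at one channel.
    Returns one result per distinct content, keeping the contributing agents."""
    results = defaultdict(list)
    for agent_id, content in findings:
        results[content].append(agent_id)
    return dict(results)

findings = [
    ("agent_12", "copositioning of positions 2-4 with record r17"),
    ("agent_35", "copositioning of positions 2-4 with record r17"),  # same content
    ("agent_41", "copositioning of position 5 with record r08"),
]
results = merge_findings(findings)
assert len(results) == 2   # the first two findings merge into a single result
```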
minimality suspension Of a term, it is not required that it be minimal or elementary,
contrary to requests made by most theories or descriptive traditions. Terms may be at
different grains and terms may overlap. Cf. section 7.2.3. Minimality suspension for
terms (p. 194). However, quasi-general plateaus are the empiry, and they can be
described by elementarities of various orders (morphemes, phonemes, etc.); the theory
must explain why they arise, and also why they do not have to be entirely general.
occurrence In the experience of a subject, an occurrence is an exemplar occurring at a date
and in a context. 'Occurrence' is opposed to 'exemplar'.
opaque In ABS, the recruitment of agents (then commissioners) by another agent (then
client) is opaque when a channel is installed between them (otherwise it is transparent).
orientation Cf. "familiarity orientation".
paradigmatic link Link between two plexus records. Between two A-type records a
paradigmatic link is a systemic analogy. Between two C-type records the paradigmatic
link is a constructional similarity, that is, a structural analogy. A paradigmatic link bears
a familiarity orientation.
partonomy The characterization of linguistic objects by their properties. Is opposed to
isonomy. Cf. section 3.6.7. Partonomy and isonomy (p. 89).
performance For a process, a machine, or a program, the manner in which it responds,
behaves, performs efficiently, uses resources, and delivers the expected results. Good
performance, bad performance.
plexus In the Analogical Speaker, the static side of the linguistic knowledge of a speaker.
A plexus consists of A-type and C-type records with paradigmatic links between them.
It encodes systemic analogies and structural analogies. The word 'plexus' is chosen, after
Saussure, because it is an entangled mesh.
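As a rough illustration of this static side, the sketch below encodes a plexus as records carrying a type and a list of paradigmatic neighbours. The names, the restriction of links to records of the same type (inferred from the two cases described under 'paradigmatic link'), and the omission of the familiarity orientation are all simplifying assumptions.

```python
# Sketch of a plexus as an entangled mesh of A-type and C-type records
# joined by paradigmatic links (illustrative structure and names).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Record:
    ident: str
    kind: str                                          # 'A' or 'C'
    neighbours: List["Record"] = field(default_factory=list)

def link(r1: Record, r2: Record) -> None:
    """Install a paradigmatic link: a systemic analogy between two A-type
    records, or a structural analogy between two C-type records."""
    assert r1.kind == r2.kind
    r1.neighbours.append(r2)
    r2.neighbours.append(r1)

plexus = [Record("a1", "A"), Record("a2", "A"), Record("c1", "C")]
link(plexus[0], plexus[1])   # a systemic analogy between the two A-type records
```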
private term A private term is a term which is not linguistic form. Linguistic form is
"public" because it crosses the interface between speakers. By contrast, a private
term does not cross the interface. Cf. section 8.5.3. Formal terms and private terms (p.
262).
product In ABS, 'product' collectively denotes findings and results.
proximal/proximality/proximalist Inscriptions (elements of linguistic knowledge) are
proximal when one of them can be reached from the other with low cost. A process is
proximal when it solicits inscriptions gradually, according to their proximality.
Proximality is central in this model. 'Proximal' is different from 'local': a) 'local' in the
sense in which segments, constituents, syntagms or terms are local when they are
neighbours in the linearity of the form; it is so understood in n-gram approaches, or in
generative grammar in relation with the notions c-command, barrier and island, b)
'localist' in the sense of McClelland 1986: a connectionist network is localist when the
representation of a problem's object (word, morpheme, phoneme, etc.) is assigned to a
defined cell (or group of cells) of the connectionist network; the representation is
'distributed', on the contrary, when there is no such assignment; it then behaves much in
the manner of a hologram.
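One way to picture a proximal process is as a best-first expansion that solicits inscriptions in order of increasing reachability cost. The sketch below is only such a generic expansion, offered as an illustration and not as the dissertation's actual procedure; the cost function and record names are assumptions.

```python
# Sketch of a proximal process: inscriptions are visited gradually, from the
# cheapest to reach outward (a plain best-first expansion, for illustration only).
import heapq
from itertools import count

def proximal_order(start, neighbours):
    """neighbours(record) -> iterable of (step_cost, record).
    Yields (cost, record) pairs from the most proximal inscription outward."""
    seen = set()
    tie = count()                                # breaks ties between equal costs
    frontier = [(0.0, next(tie), start)]
    while frontier:
        cost, _, rec = heapq.heappop(frontier)
        if rec in seen:
            continue
        seen.add(rec)
        yield cost, rec
        for step_cost, nxt in neighbours(rec):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + step_cost, next(tie), nxt))

graph = {"r0": [(1.0, "r1"), (3.0, "r2")], "r1": [(1.0, "r2")], "r2": []}
print(list(proximal_order("r0", lambda r: graph[r])))
# [(0.0, 'r0'), (1.0, 'r1'), (2.0, 'r2')]  -- r2 is reached via r1, the cheaper path
```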
recruitment One of the processes of the heuristic structure elaboration (the other is
edification). An agent, depending on its duty, and on the matching plexus data, either
finds itself sterile, or performs a settlement and/or recruits more agents (which then
become its commissioners). It assigns them a duty which is a function of its own and of the
matching plexus data. This prolongs the heuristic paths. Recruitment may be transparent
(no intervening channel between client and commissioner) or opaque (the commissioner
is recruited via a channel).
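The recruitment step can be pictured as in the sketch below, a deliberately simplified reading of this entry: the dictionary keys and duty contents are hypothetical, and the transparent/opaque distinction is left out.

```python
# Sketch of one elaboration step by recruitment (illustrative names only):
# depending on the plexus data matching its duty, an agent is sterile, performs
# a settlement (raising a finding), and/or recruits commissioners with derived duties.

def recruitment_step(agent, plexus_matches):
    """plexus_matches: plexus data matching the agent's duty.
    Returns the findings raised and the commissioners recruited at this step."""
    findings, commissioners = [], []
    for match in plexus_matches:
        if match.get("settles"):                    # a settlement: raise a finding
            findings.append((agent["id"], match["content"]))
        if match.get("derived_duty"):               # prolong the heuristic path
            commissioners.append({"id": agent["id"] + "." + str(len(commissioners)),
                                  "duty": match["derived_duty"],
                                  "client": agent["id"]})
    return findings, commissioners                  # both empty: the agent is sterile

agent = {"id": "agent_7", "duty": "coposition the segment at positions 2-4"}
matches = [{"settles": True, "content": "copositioning with record r17"},
           {"derived_duty": "coposition the segment at positions 5-6"}]
findings, commissioners = recruitment_step(agent, matches)
```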
resetting Resetting takes place when a computation branch is pursued by some other
means than just crossing a paradigmatic link. This may involve a) a change of paradigm,
or b) staying in a same paradigm but reallocating the roles of the current computation
terms. To preserve the integrity of the computation, resetting must be positioned.
result A result is a product at a channel (products at agents are findings). A result comes
from the merging of findings. A result may come from the merging of one finding only.
RW (rearwards) In the conventional orientation of the heuristic structure, the rearward
direction.
settlement The computation of a linguistic task is seen as a heuristic process involving
jointly the data of the task and that of the plexus. The process encompasses a number of
parallel paths. Settlement is an event in such a process: two paths (three in the case of
ternary branching) are found coincident upon the discovery of a favourable datum in the
plexus (settlement data). A settlement is made by an agent and its effect is the
production of a finding at the agent.
term A term is that which is singled out to participate in an analogy (structural analogies
and systemic analogies). Analogy A : B :: C : D holds between the four terms A, B, C and
D. A term is reidentifiable as "the same" in its recurrences. The most widely received
frameworks of thinking lead one to reify linguistic objects, to see them as having properties
and as entering into relations with one another. This must not be so. On the contrary, a term
needs only be seen as reidentifiable in its recurrences. Cf. section 7.2. Individuality of
terms (p. 193).
transparent In ABS, an agent (then a client) recruits other agents (then commissioners)
transparently when no channel is installed between them (otherwise the recruitment is
opaque). The typical use of transparent recruitment is walking through paradigms by
crossing paradigmatic links.
French-English lexicon
This lexicon is provided to facilitate cross-reference with related publications in French.
ABS → ABS, agent-based solving
agent → agent
amorçage → priming
arrière (orientation vers l'arrière) → rearward orientation (RW)
avant (orientation vers l'avant) → forward orientation (FW)
calcul des copositionnements → computation of copositionings
canal → channel
champ → field
charge (d'agent) → duty (agent duty)
client (agent client) → client agent
commissaire → commissioner
copositionnement → copositioning
donnée(s) de champ → field data
édification → edification
enregistrement → record
escalade (principe d' -) → escalation (principle)
évoquer (une trouvaille) → raise (to - a finding)
fusion → merging
FW (orientation vers l'avant) → FW (forward orientation)
immersion → immersion
installation → setup
livraison (d'un résultat) → delivery (of a result)
Locuteur Analogique → Analogical Speaker
mouvement (abductif) → movement (abductive -)
opaque → opaque
orientation de familiarité → familiarity orientation
parsing → parsing
phore (d'une analogie) → vehicle (in an analogy)
plexus → plexus
point de livraison → delivery point
productivité (linguistique) → productivity (linguistic -)
recrutement → recruitment
rendement (d'un modèle) → efficiency (of a model)
reprise → resetting
reprise positionnée → positioned resetting
résolution par agents (ABS) → agent-based solving (ABS)
résultat → result
résultat installation → setup result
résultat de résolution → solving result
RW (orientation vers l'arrière) → RW (rearward orientation)
structure heuristique → heuristic structure
suspension de minimalité → minimality suspension
terme → term
terme privé → private term
thème (dans une analogie) → tenor
thème (opposé à rhème) → topic
transparent → transparent
trouvaille → finding
Index
A2 ................................................................... 38
as a phenomenon, not in the theory .......... 192
definition .................................................. 397
A2 analogy.................................................... 190
A4 ........................................................... 38, 190
definition .................................................. 397
A4 analogy.................................................... 189
abduction ........................................................ 81
abductive mechanisms ............................... 82
by expansive homology ............................. 86
its role in the linguistic dynamics ............... 82
abductive movement ..................................... 110
by constructability transfer......... 84, 101, 317
by expansive homology ............. 85, 101, 318
by transitivity ............................................. 83
by transposition .......................... 88, 144, 319
effective in linguistic form and among private
terms.................................................... 204
abductive path............................................... 209
ablative ........................................................... 27
Abney ........................................ 15, 16, 224, 254
ABS
definition .......................................... 331, 397
integrating two paradigms ........................ 140
intension-extension .................................. 215
introduction ................................................ 94
merging findings ...................................... 378
abstractions refused ...................................... 267
access ............................................................ 296
crossing a paradigmatic link .................... 298
ACME .......................................................... 186
acquisition
acquisitional restriction ............................ 372
incremental .............................................. 255
initial ........................................................ 205
parametric theory ..................................... 208
activation propagation .................................. 223
A-D asymmetry............................................. 306
adaptation of model behaviour ..................... 150
adjunct .................................................. 183, 372
adverbs and prepositions .............................. 277
affix .............................................................. 310
agent ............................................................. 332
agent structure .......................................... 331
agents cooperating .................................... 210
client ......................................................... 334
commissioner .......................................... 334
definition .................................................. 397
life cycle ................................................... 333
number of agents .............................. 305, 343
redundancy ............................................... 338
agent AN2 ..................................................... 154
as controller process ................................. 182
definition .................................................. 383
agent ANZ..................................................... 154
as direct process ....................................... 182
mechanism shown on an example ............ 142
specification ............................................. 377
used by AN2 ............................................. 157
uses the binary index ................................ 297
uses transposition ..................................... 319
vs non transposable analogy ..................... 325
agent B2 ................................................ 101, 372
definition .................................................. 359
does not dry up on its own........................ 347
productivity .............................................. 367
agent B3 ........................................................ 374
definition .................................................. 367
agent CATZ .................................................. 353
does not dry up on its own........................ 347
doubtful .................................................... 205
single-argument ........................................ 357
suspected .................................................. 357
technical architecture................................ 353
agent redundancy control .............................. 341
agent S2A
as suppletion process ................................ 182
definition .................................................. 381
agent tree ....................................................... 378
agent-based solving definition ...................... 331
agentive orientation ....................................... 113
agglutination for Saussure ............................... 31
agglutinative .................................................. 325
agglutinative morphology ............................... 65
agreement
addressed by Skousen? ............................. 190
neglected by B2-B3 .................................. 106
treated by agent AN2................................ 154
unconscious? ............................................ 343
without syntactic head .............................. 183
allomorphy............................................ 167, 200
alternation ..................................................... 200
amalgamation in French........................ 122, 195
ambiguity ...................................................... 296
syntactic - contradicts longest match
principle .............................................. 100
analogical copositioning ............................... 207
analogical cosegmentation .................... 196, 198
analogical pair .............................................. 297
analogical set ................................................ 189
analogical task ...................................... 108, 334
not vulnerable to allomorphy ................... 169
analogists ...................................................... 181
analogy
A2 ............................................................ 192
A4 ............................................................ 189
abductively equivalent ............................. 377
addressing anomaly and regularity ............ 189
analogical change....................................... 36
analogy-anomaly ........................................ 21
arithmetic ........................................... 65, 321
as the base of operation............................ 186
bidimensional ........................................... 324
central in this work .................................... 11
class A.................................................. 60, 67
class AC ..................................................... 61
class C .................................................. 60, 68
cognitive .................................................. 324
contrasted with 'agglutination' .................... 31
corrects defects of associationism ............ 212
despised by linguistic theories ................... 10
dismissed ................................................... 37
does not establish relations ...................... 194
does not imply categories........................... 39
elides the predicate .................................... 62
eschews overspecification ........................ 200
false............................................................ 62
for Bloomfield ........................................... 33
for Chomsky .............................................. 37
for Householder ......................................... 34
for Itkonen ............................................... 190
for Popper .................................................. 44
for psychologists and cogniticians ............. 11
for Saussure (old usage restored) ............... 39
for Skousen .............................................. 188
history schematized .................................... 38
iconic ......................................................... 43
implementing contingency ......................... 80
limited because powerful ........................... 63
linguistic .................................................. 324
morphological .................................... 43, 343
motivates transformations ........................ 108
not determined by a pair alone ................... 64
phonological .............................................. 43
positional exploitation ................................ 83
powerful because limited ............................ 63
quasi-synonym of morphology ................... 32
reanalysis of somnolent ............................ 258
repairing ................................................... 190
repairing - with syntactic effect ................. 32
set up ........................................................ 210
structural analogy 60, 61, 68, 70, 74, 88, 126,
310
structural analogy analogy .......................... 71
systemic analogy ... 45, 60, 61, 68, 69, 71, 73,
74, 78, 88, 126
not provided by corpus techniques ...... 255
three terms determine the 4th ....................... 63
traditional - too permissive ....................... 188
transposable ................................................ 35
underlying a metaphor ........................ 62, 301
underspecified .......................................... 325
analogy which motivated transformations ..... 191
analysis
self-analysis .............................................. 256
multiple ............................ 105, 123, 125, 315
multiple self-analysis ................................ 257
of a long text............................................. 348
syntactic -, its function ............................ 106
analysis tree ................................................... 105
anaphor ......................................................... 222
anomalists ..................................................... 181
anomalous term ............................................. 182
anomaly ......................................................... 189
anomalist conception of linguistics ............ 39
anomaly-analogy ........................................ 26
morphological .......................................... 303
Arabic ........................................................... 246
argumental paradox ....................................... 306
arithmetic
analogy in ................................................. 321
Aristarchos ...............................................