Cognitive Computing 2012 - The Computer and the Mind
6. Dreyfus and Dreyfus
Prof Mark Bishop

Overview

The paper opens with a quote from Wittgenstein:

"Nothing seems more possible to me than that people someday will come to the definite opinion that there is no copy in the ... nervous system which corresponds to a particular thought, or a particular idea, or memory." (Ludwig Wittgenstein, 1948; Last Writings on the Philosophy of Psychology, Volume 1.)

And one from Rumelhart (and Norman) on distributed representation:

"Information is not stored anywhere in particular. Rather it is stored everywhere. Information is better thought of as 'evoked' than found." (Rumelhart & Norman, 1981.)

Albeit that such a 'distributed representation' remains spatially located.

Symbols and connections

In the late 1940s and early 1950s, people realised that computers could do more than just perform arithmetic. In 1950 Turing published his seminal paper on Artificial Intelligence (AI), 'Computing Machinery and Intelligence', which took seriously the notion that one day 'we would speak of machines thinking without fear of contradiction'.

From the 1950s two schools of (artificial) intelligence emerged:
- Symbolic intelligence: easiest to instantiate a formal world model; philosophically rationalist and reductionist.
- Connectionist intelligence: easiest to model the mind (as interactions of neurons); philosophically holistic; linked to cognitive neuroscience.

Symbols and intelligence

Symbolists view minds and computers as physical-symbol systems (Newell and Simon's hypothesis). This idea is philosophically grounded upon the work of Frege, Russell and (early) Whitehead, which is in turn heir to the 'atomistic reductionist' tradition in philosophy.

Atomism and reductionism: Descartes and Hobbes

From Descartes: understanding as the forming and manipulation of appropriate 'mental representations', built from primitive elements, 'naturae simplices' ('simple natures'), such that all phenomena are understood as combinations of these.

From Hobbes: we deduce that these 'simple elements' were formal and related by purely syntactic, rule-based operations; hence all reasoning is reducible to calculation - ratiocination.

The symbolist A.I. research programme

From Leibniz: to avoid an infinite regress there must, at base, be simple elements which represent things in the world and which are mixed together to define complex objects - an alphabet of human thought.

From (early) Wittgenstein: the picture theory of meaning (atomistic, as defined in the Tractatus):
- "The world is the totality of facts, not of things";
- such 'facts' are logically described via (syntactic) pictures;
- the elements of such pictures are combined in definite ways to represent the ways things are combined;
- the Tractatus was seen as the culmination of the rationalist tradition.

Symbolic AI is the attempt to find such primitive elements; Newell and Simon's hypothesis effectively turns Wittgenstein's early vision into an empirical claim and bases a research programme on it.

The opposing tradition

... took inspiration from neuroscience, not philosophy:
- e.g. Hebb's early work on machine learning - Hebbian learning (see the sketch below);
- and Rosenblatt, who conceived that it would be easier to formalise the brain and then investigate its behaviour, rather than attempt to formalise behaviour and then design an axiomatic system to implement it.
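Hebb's proposal - that the connection between two units is strengthened when they are active together - can be stated in a few lines of code. The sketch below is an illustrative reconstruction, not material from the lecture; the learning rate and the outer-product form of the update are standard textbook choices rather than anything Hebb specified.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.01):
    """One step of plain Hebbian learning: connections between co-active
    units are strengthened, delta_w[i, j] = learning_rate * post[i] * pre[j]."""
    return weights + learning_rate * np.outer(post, pre)

# Toy usage: two input units repeatedly co-active with one output unit.
w = np.zeros((1, 2))
pre = np.array([1.0, 1.0])    # both input units firing
post = np.array([1.0])        # output unit firing
for _ in range(10):
    w = hebbian_update(w, pre, post)
print(w)  # both weights have grown equally, to [[0.1 0.1]]
```

Note that under the plain rule the weights grow without bound whenever units keep firing together; later variants add normalisation, but unbounded growth is a property of Hebb's original proposal, not of this sketch in particular.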
A declaration of 'war'!

It is possible to consider the symbolists as people building 'problem-solving machines', whereas the connectionists wanted to build systems that 'generate their own behaviour'. Initially both approaches appeared successful: as early as 1958 Herbert Simon claimed he had a machine that 'can think', and by 1959 Rosenblatt publicised much the same opinion of his own machines.

Problems for A.I.

Following the publication of Minsky and Papert's monograph 'Perceptrons', the symbolists seemed set to 'win the war' of A.I. Yet both traditions had their detractors:
- connectionism, via Minsky and Papert's 1969 book, 'Perceptrons';
- symbolism, via the 1973 UK Lighthill Report (James Lighthill, 'Artificial Intelligence: A General Survey', in Artificial Intelligence: a paper symposium, [UK] Science Research Council) and Dreyfus, Hubert (1972), What Computers Can't Do, MIT Press.

And the detractors essentially made much the same point: that extant A.I. systems only work on 'toy' problems.

A philosophical crusade

Minsky and Papert's attack was seen by some as a philosophical crusade. Their 1969 analysis was only of single-layer perceptrons (see the sketch below), and yet it succeeded in virtually stopping all research into connectionism:
- reductionism was being challenged by 'evil' neural holism;
- atomists need 'hidden nodes' to refer to symbolic [micro-]features of the environment; connectionists are not so committed.

Rosenblatt's neural research was discredited, and connectionism was not even mentioned in the early edition of Margaret Boden's seminal text on AI, 'Artificial Intelligence and Natural Man'. However, there are other reasons for this prejudice:
- with only limited computing power, early symbolists could do more;
- there is a persistent belief that thinking and pattern recognition are separate, and that 'thinking' is more important.
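For concreteness, the limitation Minsky and Papert analysed can be shown directly: a single-layer perceptron can only realise linearly separable functions, so Rosenblatt's learning rule masters AND but can never master XOR. The sketch below is illustrative rather than taken from the lecture; the epoch count and learning rate are arbitrary assumptions.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Rosenblatt's single-layer perceptron learning rule."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi   # adjust weights only on error
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = {"AND": np.array([0, 0, 0, 1]),   # linearly separable: learnable
           "XOR": np.array([0, 1, 1, 0])}   # not linearly separable: never learnt

for name, y in targets.items():
    w, b = train_perceptron(X, y)
    preds = [1 if xi @ w + b > 0 else 0 for xi in X]
    print(name, preds, "target", list(y))
```

AND converges to the correct outputs; XOR never does, whatever the epoch count, and it is this class of result that Minsky and Papert generalised. Adding a hidden layer removes the limitation, which is one reason the 'new connectionism' discussed later revived the approach.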
'Atomism' in A.I.

The underlying philosophical idea - from Plato through Leibniz - is that understanding a domain entails having a 'context-free' theory of that domain, enabling [relatively] easy knowledge transfer from one domain to another. Winograd famously described A.I. as an attempt to find a formalism for knowledge, to identify the atoms from which it is built and the forces that act upon them. At the time no one specifically argued for atomism in AI; there was simply an implicit assumption that, because it works in other domains, it will work here.

Problems for simple atomism / reductionism

However, the conclusions of the later Wittgenstein tell against this: after publishing the Tractatus, Wittgenstein spent several years doing 'phenomenology', looking for 'base atoms of meaning', yet ended up abandoning rationalist philosophy altogether.

So does the work of the early Heidegger. For Heidegger, traditional philosophy is defined from the start by its focus on facts (in the world) while "passing over" the world as such. That is: for Husserl, an act of consciousness (noesis) does not grasp the object itself (which is famously left 'bracketed') but rather the noema, an 'abstract form' (effectively a hierarchical representation of 'facts' [of the world]) correlated with the act [of directed consciousness]. Heidegger, by contrast, reasoned that it is fundamentally impossible to find such context-free elements of meaning - facts [of the world] - because depriving any element of its context - passing over the world - deprives it of the very organisation that makes possible its [veridical] use. And this boded ill for simple reductionism!

Husserl's 'Phenomenology'

For Husserl:
- an act has 'directedness' only because of the intellectual reasons that 'give it meaning';
- one's 'predicate senses' somehow pick out an object's 'atomic properties' [facts], which are subsequently hierarchically combined to form complex descriptions of objects in the world;
- at the top level there is effectively a 'rule' defining all the features and properties that can possibly be part of this type of object - a system analogous to Marvin Minsky's concept of 'frames', a knowledge-representation scheme in classical AI (see the sketch below).
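A Minsky-style frame is, in essence, a named concept with slots for its expected features, default fillers, and a pointer to a more general frame from which unfilled slots inherit their values. The toy sketch below is a reconstruction for illustration only; the class name, slot names and inheritance rule are invented for the example rather than taken from Minsky or from the lecture.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A toy frame: a concept with slots (expected features) and defaults."""
    name: str
    parent: "Frame | None" = None
    slots: dict = field(default_factory=dict)

    def get(self, slot):
        # Use the local filler if present, otherwise fall back to the
        # parent frame's default - the frame-system notion of inheritance.
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

room = Frame("room", slots={"has_walls": True, "has_door": True})
office = Frame("office", parent=room, slots={"contains": ["desk", "chair"]})

print(office.get("contains"))  # ['desk', 'chair']  (filled locally)
print(office.get("has_door"))  # True               (inherited default)
```

The hierarchical combination of 'predicate senses' described above is what the slide compares to such frames: the top-level frame plays the role of the 'rule' listing everything that could possibly be a feature of this type of object.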
Heidegger

In contrast to Husserl, Heidegger suggests there are ways of encountering things other than as 'objects' defined by a set of 'predicates' (context-free 'facts') - the mode of engagement Heidegger calls the 'present-at-hand'.

For Heidegger, in [trouble-free] use, 'smooth coping', everyday objects - hammers, door knobs etc. - are defined by a context of normative social roles ("the manifold assignments of 'in-order-to'"), and the "'sight' with which they accommodate themselves is 'circumspection'" (Heidegger):
- this mode of engagement Heidegger calls the 'ready-to-hand';
- when things go awry (e.g. the nail breaks, or the wall is too solid and requires a heavier hammer), Heidegger refines the mode of engagement to the 'un-ready-to-hand'.

I.e. in the everyday 'ready-to-hand' use of a hammer we actualise a skill, with no clear division of subject and object (represented in the mind), in the context of a socially organised nexus of equipment, purposes and human roles.

A context-free theory of the world?

Can there be a context-free theory of the world, or is the common-sense background rather an 'impenetrable' ensemble of skills, practices and judgements which cannot be explained in terms of rules?

Husserl sought to answer this by asserting that 'the background' is just the interaction of millions upon millions of rule-based axiomatic beliefs (which have truth conditions, and are 'facts' if true): so we can characterise the world by 'detachment' from it and then enumerating all such beliefs, completing the reductionist, atomistic philosophical task begun by Socrates. (There is some evidence Minsky felt similarly with regard to his 'frame' knowledge representation.)

However, by the age of 75 Husserl had concluded that 'phenomenology was an infinite task', as he had to include more and more of a subject's common-sense understanding of the everyday world in order to describe it.

The naiveté of A.I. workers regarding (contemporary) philosophical research led Hubert Dreyfus to predict trouble for A.I. (Dreyfus, 1972, 'What Computers Can't Do'), a book [until recently] generally ignored by the AI community.

Three stages of (symbolic) AI

1. Representation and search - via, for example, 'means-ends' analysis.
2. Facts and rules - but toy 'micro-worlds' did not prove scalable.
3. Common-sense knowledge - A.I. believes common sense is formalisable, but need it be? Perhaps 'common sense' is nothing more than a vast set of 'special cases'. Is even a 'naive physics' formalisable, or does a child perhaps simply learn to discriminate (deploy 'neural judgement' over) a large set of special cases?

Dreyfus asserts that "the rationalist tradition has been put to an empirical test and failed", justifying Rosenblatt's 'connectionist' approach (re. his intuition that rationalism would be difficult). Now even Terry Winograd has 'lost faith' in A.I. and teaches 'continental' philosophy.

The new connectionism

Frustrated A.I. workers flock to connectionism (e.g. my own experience at the Oxford Experimental Psychology conference). NB: if the connectionists are correct, then philosophers will have to give up the atomistic, logicist, rationalist tradition. Albeit that neural-net researchers, influenced by the symbolists, have tried to find features of reality in hidden nodes; this is only true in a trivial sense (by assigning an invented name): uni-variate neural codes; compare with SDPs' bi-variate data. Yet the connectionists' successes are also only with limited models; perhaps connectionism is simply getting a [deserved] chance to fail, like symbolism...

Common sense and connectionism

The 'common-sense knowledge problem' in neural computing is that of generalisation. But what counts as a successful generalisation? The neural-net designer has in mind what a successful generalisation is, but how is this defined? If a classifier network produces an output of an unexpected type, it might merely have 'learnt' a different definition of the type to that the designer intended.

For a neural net to be useful it must be able to 'generalise': ideally, in generalising, the network can either interpolate between points or extrapolate beyond them. In reality an infinite number of curves can go between and beyond such points (cf. intelligence tests; see the short sketch at the end of these notes). Hence in engineering applications the neural-network designer determines an architecture that restricts the possible transformations, such that the net behaves appropriately for the application in mind.

Conclusion

If the early Heidegger and the later Wittgenstein are correct, then intelligence is much more holistic (and social) than either connectionism or symbolism imply. As with symbolism, perhaps connectionist systems need to be properly embedded in a social reality to have any chance of making intelligent progress...
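The point above about generalisation - that any finite set of training points is consistent with infinitely many curves - can be made concrete with the short sketch below, referred to from the 'Common sense and connectionism' notes. It is illustrative only; the choice of y = x as the 'intended' rule and of the particular rival polynomial are arbitrary assumptions.

```python
import numpy as np

# Five training points sampled from the simple rule y = x.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = x_train.copy()

def intended(x):
    """The generalisation the designer has in mind: the line y = x."""
    return x

def rival(x):
    """A different curve that also passes through every training point:
    y = x plus a polynomial that vanishes at 0, 1, 2, 3 and 4."""
    return x + 0.5 * x * (x - 1) * (x - 2) * (x - 3) * (x - 4)

# Both functions fit the training data perfectly ...
assert np.allclose(intended(x_train), y_train)
assert np.allclose(rival(x_train), y_train)

# ... yet they disagree as soon as we extrapolate beyond it.
print(intended(5.0), rival(5.0))  # 5.0 versus 65.0
```

Nothing in the data alone decides between the two curves; it is the designer's restriction of the admissible transformations (here, the form of the curve; in a network, the architecture) that makes the system behave appropriately for the application in mind.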