Towards Computational Models of Artificial Cognitive Systems That Can, in Principle, Pass the Turing Test

Jiri Wiedermann
Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague
SOFSEM 2012, January 21-27, 2012, Spindleruv Mlyn
Partially supported by GA CR grant No. P202/10/1333
(Title image: Monkey Before the Skeleton (Ecce simia), Gabriel von Max, Prague painter, 1840-1915)

"I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 % chance of making the right identification after five minutes of questioning. The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

From a discussion between Turing and one of his colleagues (M. H. A. Newman, professor of mathematics at the University of Manchester):
Newman: I should like to be there when your match between a man and a machine takes place, and perhaps to try my hand at making up some of the questions. But that will be a long time from now, if the machine is to stand any chance with no questions barred?
Turing: Oh yes, at least 100 years, I should say.

Three heretic ideas:
• We already have sufficient knowledge to understand the working of interesting minds achieving high-level cognition.
• Achieving higher-level AI is not a matter of a fundamental scientific breakthrough but rather a matter of exploiting our best theories of artificial minds, and a matter of scale, speed, and technological achievements.
• It is unlikely that thinking machines will be developed by purely academic research, since it is beyond its power to concentrate the necessary amount of manpower and technology.

Approaches to mind understanding:
• Understanding by philosophizing
• Understanding by designing (specifying)
• Understanding by constructing

Outline
1. Current state: Watson the Computer vs. humanoid robotic systems
2. Winds of Change
   – Escaping the Turing Test
   – Escaping Biologism
   – Internal World Models
   – Mirror Neurons
   – Global Workspace Theory
   – (Dis)solving the Hard Problem of Consciousness
   – Episodic Memories
   – Real-Time Massive Data Processing
   – Comprehensive and Up-To-Date Models of Cognitive Systems
3. HUGO: A Non-Biological Model of a Conscious Agent System
4. Conclusions – lessons from what we have seen

Watson – an AI system capable of answering questions stated in natural language

Jeopardy! (in the Czech Republic, the TV game "Riskuj!") – given an answer, one has to guess the question. E.g.: 5280 (how many feet are in a mile), or 79 Wistful Vista (the address of Fibber and Molly McGee).

Category: General Science
Clue: When hit by electrons, a phosphor gives off electromagnetic energy in this form.
Answer: Light (or Photons)

Category: Rhyme Time
Clue: It's where Pele stores his ball.
Subclue 1: Pele ball (soccer)
Subclue 2: where store (cabinet, drawer, locker, and so on)
Answer: soccer locker

Category: Lincoln Blogs
Clue: Secretary Chase just submitted this to me for the third time; guess what, pal. This time I'm accepting it.
Answer: his resignation

Category: Head North
Clue: They're the two states you could be reentering if you're crossing Florida's northern border.
Answer: Georgia and Alabama

Source: AI Magazine, Fall 2010
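To make the "guess the question from the clue" task concrete, here is a deliberately toy Python sketch of the generate-candidates-and-score-by-evidence idea; it is not IBM's DeepQA pipeline, and the mini-corpus, the keyword heuristic, and the scoring rule are invented purely for this illustration.

    # Toy illustration (not IBM's DeepQA): candidates are scored by how much
    # evidence text associated with them overlaps with the clue's keywords.
    import re
    from collections import Counter

    CORPUS = {  # invented mini-corpus of evidence passages per candidate answer
        "Georgia": "Georgia lies along the northern border of Florida in the southeastern United States.",
        "Alabama": "Alabama also touches the northern border of Florida, to the west of Georgia.",
        "Ohio":    "Ohio is a midwestern state bordering Lake Erie, far from Florida.",
    }

    def keywords(text):
        """Lower-case word set, dropping very short function words."""
        return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

    def score(candidate, clue):
        """Keyword overlap between the clue and the candidate's evidence passage."""
        return len(keywords(clue) & keywords(CORPUS[candidate]))

    clue = ("They're the two states you could be reentering "
            "if you're crossing Florida's northern border.")
    ranking = Counter({c: score(c, clue) for c in CORPUS})
    print(ranking.most_common())   # Georgia and Alabama outscore Ohio

The point of the toy is only that imperfect, evidence-weighing heuristics can still rank the right answers first, which is the "powerful even if imperfect" insight quoted below.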
Winds of Change

New trends in theory:
• escaping biologism
• escaping the Turing Test
• strengthening the position of embodiment: a common sensorimotor basis for phenomenal and functional consciousness
• evolutionary priority of phenomenal consciousness over functional consciousness
• internal world models, mirror neurons
• global workspace theory
• episodic memory

Technological progress:
• maintenance of supercritical volumes of data, and
• searching and retrieval of data at supercritical speed

A shift in popular thinking about artificial minds: people generally accept that computers can think (albeit in a different sense than some philosophers of mind would like to see).

John Searle: "Watson Doesn't Know It Won on 'Jeopardy!' IBM invented an ingenious program—not a computer that can think."

Noam Chomsky: "Watson understands nothing. It's a bigger steamroller. Actually, I work in AI, and a lot of what is done impresses me, but not these devices to sell computers."

What these gentlemen failed to see is the giant leap from the formal rules of chess to the informality of the Jeopardy! rules…

R. J. Lipton: Big insight – a program can be immensely powerful even if it is imperfect.

A new trend: escaping biologism

Why should we think only about the human brain when designing artificial minds?

Rodolfo Llinas (a prominent neuroscientist): "I must tell you one of the most alarming experiences I've had in pondering brain function.... that the octopus is capable of truly extraordinary feats of intelligence… most remarkable is the report that octopi may learn from observing other octopi at work. The alarming fact here is that the organization of the nervous system of this animal is totally different from the organization we have learned is capable of supporting this type of activity in the vertebrate brain.... there may well be a large number of possible architectures that could provide the basis of what we consider necessary for cognition and qualia...."

Many possible architectures for cognition.

A new trend: escaping the Turing test

The Turing test is explicitly anthropomorphic. Russell and Norvig: "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".

(Figure: the space of all minds, encompassing human minds, animal minds, artificial minds, and alien minds.)

A new trend: Internal World Models

Internal world models (IWMs) are mechanisms situating an agent in its environment; they determine the syntax and the semantics of the agent's behavior and perception in its environment. An IWM captures a "description" of that (finite) part of the world, and of that part of the self, which has been "learned" through the agent's sensorimotor activities. An IWM is fully determined by the agent's embodiment and is built automatically during the agent's interaction with the real world. It provides a virtual inner world in which an agent can think.

(Figure: the agent's body, containing a finite control, a world model, and sensory-motor units, processing an (infinite) stream of inputs generated by sensorimotor interaction with the real world.)
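A minimal Python sketch of the internal world model just described, under the assumption that the model can be represented as recorded (percept, action) -> next-percept transitions; the EmbodiedAgent class and the toy environment are illustrative inventions, not part of the talk's formal model.

    from collections import defaultdict

    class EmbodiedAgent:
        def __init__(self):
            # internal world model: (percept, action) -> next percepts observed so far
            self.world_model = defaultdict(set)

        def act_and_learn(self, environment, percept, action):
            """One step of real sensorimotor interaction; the model is built as a side effect."""
            next_percept = environment(percept, action)
            self.world_model[(percept, action)].add(next_percept)
            return next_percept

        def imagine(self, percept, action):
            """'Thinking': query the virtual inner world instead of acting in the real one."""
            return self.world_model.get((percept, action), set())

    def toy_environment(position, action):
        """A toy body/world: a position on a line, moved by 'left'/'right' commands."""
        return position + (1 if action == "right" else -1)

    agent = EmbodiedAgent()
    p = 0
    for a in ["right", "right", "left", "left"]:    # "education": embodied interaction
        p = agent.act_and_learn(toy_environment, p, a)
    print(agent.imagine(0, "right"))                # {1}, recalled without touching the world

The model is fully determined by the toy embodiment (what the agent can sense and do) and is built automatically as a by-product of acting, which is the sense in which it "situates" the agent.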
A new trend: Mirror neurons – a mechanism for "mind reading" of other subjects

Mirror neurons are active when a subject performs a specific action as well as when the subject observes another, similar subject performing a similar action (Rizzolatti, 199x).

"the discovery of mirror neurons in the frontal lobes of monkeys, and their potential relevance to human brain evolution is the single most important 'unreported' (or at least, unpublicized) story of the decade. I predict that mirror neurons will do for psychology what DNA did for biology: they will provide a unifying framework and help explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments" – V. S. Ramachandran

A new trend: Global Workspace Theory

A simple, very high-level cognitive architecture developed by B. J. Baars towards the end of the last century to explain the emergence of a conscious process from large sets of unconscious processes in the human brain. The GWT can successfully model a number of characteristics of consciousness, such as its role in handling novel situations, its limited capacity, its sequential nature, and its ability to trigger a vast range of unconscious brain processes.

Interesting: Watson the Computer works according to the GWT.

A new trend: an evolutionary approach to phenomenal consciousness (Inman Harvey)

A naive "incremental" approach to creating phenomenal consciousness:
1. Create a "zombie" with functional consciousness (the easy problem).
2. Add the extra ingredient that gives it phenomenal consciousness (the hard problem).

"The evolutionary approach allows emulation without comprehension."

A new trend: a common sensorimotor basis for phenomenal and functional consciousness

A sensorimotor interaction with the environment involving corporality, alerting capacity, richness, insubordinateness, and the self. Instead of thinking of the brain as the generator of feel, feel is considered a way of interacting with the world.

Source: How to build a robot that feels. J. Kevin O'Regan, talk given at CogSys 2010 at ETH Zurich (drawing by Ruth Tulving).

A new trend: Episodic Memory

Episodic memory is what people "remember", i.e., the contextualized information about autobiographical events (times, places, associated emotions) and other contextual knowledge that can be explicitly stated. An agent without episodic memory is like a person with amnesia. Episodic memory systems allow "mental time travel" and can support a vast number of cognitive capabilities based on inspecting memories from the past that are "similar" to the present situation, such as:
• noticing novel situations,
• detecting repetitions,
• virtual sensing (being reminded by some recall),
• future action modeling,
• planning ahead,
• environment modeling,
• predicting success/failure,
• managing long-term goals, etc.

Efficient management of and retrieval from episodic memories is a case for real-time massive data processing technologies.

A new trend: intelligence might be a matter of scale and speed

Maintaining supercritical volumes of data and searching and retrieving them at supercritical speed (cf. episodic memories). A lesson from Watson the Computer: intelligence might not only be a matter of suitable algorithms, but also, and mainly so, of the ability to accumulate (e.g., via learning and the storing of episodic memories), organize, and exploit large data volumes representing knowledge at a speed matching the timescale of the environmental requirements (real-time data processing). Watson's hardware scaling is shown in the table below.
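A minimal Python sketch of similarity-based recall from an episodic memory, illustrating the "inspecting past memories similar to the present situation" capabilities listed above; encoding episodes as feature sets and ranking them by Jaccard similarity are assumptions made only for this example.

    from dataclasses import dataclass

    @dataclass
    class Episode:
        features: frozenset      # contextual cues: place, objects, actions, emotions...
        outcome: str             # what happened (success/failure, reward, ...)

    def jaccard(a, b):
        """Similarity between two feature sets (0.0 .. 1.0)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    class EpisodicMemory:
        def __init__(self):
            self.episodes = []

        def store(self, features, outcome):
            self.episodes.append(Episode(frozenset(features), outcome))

        def recall(self, situation, k=3):
            """Return the k stored episodes most similar to the present situation."""
            situation = frozenset(situation)
            return sorted(self.episodes,
                          key=lambda e: jaccard(e.features, situation),
                          reverse=True)[:k]

    memory = EpisodicMemory()
    memory.store({"kitchen", "hot-plate", "touch"}, "pain")
    memory.store({"kitchen", "fridge", "open"}, "food")
    memory.store({"garden", "ball", "kick"}, "play")
    # A novel situation is judged by the most similar past episode ("virtual sensing"):
    print([e.outcome for e in memory.recall({"kitchen", "hot-plate", "approach"}, k=1)])

With millions of stored episodes, the naive linear scan above is exactly what real-time massive data processing has to replace with indexed, parallel retrieval.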
Element                     | Number of cores | Time to answer one Jeopardy! question
Single core                 | 1               | 2 hours
Single IBM Power 750 server | 32              | < 4 minutes
Single rack (10 servers)    | 320             | < 30 seconds
IBM Watson (90 servers)     | 2 880           | < 3 seconds

~1 000 000 lines of code; 5 years of development (a team of 20); memory: 20 TB; 200 million pages (~1 000 000 books).

A new trend: Comprehensive and up-to-date models of cognitive systems

An urgent need for situatedness via embodiment. An embodied cognitive agent is a robot, i.e., an embodied computer: a computer equipped with sensors through which it "perceives" its environment and with effectors through which it interacts with its environment.

(Illustrations: J. A. Comenius, Orbis pictus, 1658; the Nuremberg funnel, from Georg Philipp Harsdörffer: Poetischer Trichter, Nuremberg 1648-1653.)

HUGO: a Non-Biological Model of an Embodied Conscious Agent

(Figure: HUGO's components – a semantic world model, a global workspace, a syntactic world model, a mirror net, and an episodic memory.)

From: J. Wiedermann: A High Level Model of an Embodied Conscious Agent, IJSSCI, 2, 2010.

A high-level schema of a robot

(Figure: the body, containing a finite control (a computer), a world model, and sensory-motor units, processing an (infinite) stream of inputs generated by sensorimotor interaction with the real world.) Mechanisms situating the agent in its environment must be considered: internal world models.

The central idea: Educating and Teaching a Robot

The purpose of educating and teaching an agent is to build its internal world model. The internal world model gives a "description" of that (finite) part of the world (including the agent's self) which has been "learned" by the agent's sensorimotor activities. The model is fully determined by the agent's embodiment and is built automatically during the agent's interaction with the real world.

The idea of two cooperating world models in cognitive systems

• Dynamic world model: sequences of sensorimotor information; controls the agent's behavior ("cognition", "action").
• Static world model: elements of coupled sensory-motor information; responsible for situating the agent ("perception").

(Figure: the two models exchange motor instructions and perception with the sensory-motor units and, through them, with the real world.)

An architecture of an embodied cognitive agent

(Figure: a control unit operating on abstract concepts at the symbolic level is grounded, via a mirror net, in embodied concepts – units of S-M information, the world's "syntax" – at the sub-symbolic level; multimodal information flows up from the S-M units, while motor instructions flow down through the body to the environment.)

The task of the syntactic world model:
• Coupling the motor instructions with the perception information into so-called multimodal information;
• Learning frequently occurring multimodal information from the coupled input streams (one coming from the dynamic model and one from the S-M units);
• Associative retrieval: a partial, "damaged", or previously "unseen" piece of incoming multimodal information gets completed so that it corresponds to the "most similar" previously learned information; the result captures the agent's instantaneous situation.

The task of the semantic world model:
• Learning (mining) and maintaining knowledge from the data stream of multimodal information delivered by the static (syntactic) world model;
• Realizing intentionality: with each unit of multimodal information a sequence of actions (motor commands) – habits – gets associated which can be realized in the given context.

Implementing the syntactic world model: Mirror neurons

Mirror neurons are active when a subject performs a specific action as well as when the subject observes another, similar subject performing a similar action (Rizzolatti, 199x). A generalization: a set of neurons which are active when a subject performs any frequent action as well as when only partial information related to that action is available to the subject at hand.

(Figure: visual, aural, haptic, and proprioceptive information combined into multimodal information by the mirror net.)

The mirror net:
• learns frequently occurring conjunctions of related input information;
• gets activated even when only partially excited (by one or several of its inputs);
• works as an associative memory, completing the missing input information;
• forms and stores (pointers to) episodic memories.

It is the basis for understanding imitation learning, language acquisition, thinking, and consciousness. A minimal completion sketch follows below.
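The completion sketch announced above: a minimal Python illustration of a mirror net acting as an associative memory over multimodal units; the dictionary-of-modalities encoding and the overlap-counting match rule are assumptions of this example, not part of HUGO's specification.

    class MirrorNet:
        def __init__(self):
            self.patterns = []   # learned multimodal units of S-M information

        def learn(self, pattern):
            """pattern: dict mapping a modality ('visual', 'motor', ...) to a value."""
            self.patterns.append(dict(pattern))

        def complete(self, partial):
            """Return the stored pattern agreeing with the partial input on the most modalities."""
            def overlap(stored):
                return sum(1 for m, v in partial.items() if stored.get(m) == v)
            return max(self.patterns, key=overlap) if self.patterns else None

    net = MirrorNet()
    net.learn({"visual": "hand-grasps-cup", "motor": "grasp", "haptic": "cup-in-hand"})
    net.learn({"visual": "hand-waves",      "motor": "wave",  "haptic": "empty-hand"})
    # Observation alone (no own motor or haptic signal) activates the full grasping unit:
    print(net.complete({"visual": "hand-grasps-cup"}))

The same mechanism covers both readings of "mirroring": a unit fires when the full pattern is produced by the agent itself and when only the observed (visual) fragment of it is available.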
What knowledge is mined and maintained in a dynamic world model:
• often occurring concepts
• resemblance of concepts
• contiguity in time or place
• cause and effect

An algebra of thoughts… (cf. David Hume, 1711-1776)

Cognitive tasks:
1. Simple conditioning
2. Learning of sequences
3. Operant conditioning (by rewards and punishment)
4. Imitation learning
5. Abstraction forming
6. Habit formation, etc.

"Hume's test" for intelligence.

Implementing the dynamic world model

A cogitoid: an algorithm building a neural net for knowledge mining from the flow of multimodal information (Wiedermann 1999).

(Figure: multimodal information activates concepts connected by excitatory and inhibitory links; newly activated concepts affect passive, currently activated, and previously activated concepts, as well as emotions; habits are often-followed chains of concepts.)

What both world models jointly do for an agent:
• A mechanism enabling imitation of the activities of other agents (without understanding)
• A germ of awareness – a mechanism for distinguishing between one's own action and that of an observed agent
• A mechanism of empathy
• A substrate for a mechanism for predicting the results of an agent's own or observed actions via their "simulation" in the virtual model of the known part of the real world
• Understanding: an agent "understands" its actions in terms of their embodiment, i.e., in terms of habits (and thus of S-M actions plus associated emotions)
• Phenomenal consciousness (according to O'Regan) as a habit of conscious awareness of performing one's own skills

Humanoid Robot Mahru Mimics a Person's Movements in Real Time

A person wears a motion-tracking suit while performing various tasks. The movements are recorded, and the robot is then programmed to reproduce the tasks while adapting to changes in the space, such as displaced objects.

The birth of communication and speaking
• By indicating a certain action, an agent broadcasts visual information which is completed, by the empathy and prediction mechanism of an observing agent, into the intended action
• Formation of the self concept
• Possibility for emotions to enter the game
• The birth of the body language
• Adding of articulation (vocalization) and the tempering of gesticulation
• The verbal component of the language gets associated with the motor of the speech organs and prevails over gesticulation
• Development of episodic memory management and retrieval mechanisms

The birth of thinking (Wiedermann 2004)
• Beginning of thinking as a habit of speaking to oneself
• Subsequent decay of whatever motor activity (of the vocal organs)
• Perception suppressing
• Switching off the realization of motor instructions
• Mirror neurons complete motor instructions with the missing perception learned from experience

(Figure: the cogitoid and the mirror neurons exchanging multimodal information and motor instructions.)

An agent operates similarly as before, albeit it processes "virtual" data. It works in an "off-line" mode; it is virtually situated. A minimal sketch of this off-line mode follows below.
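The off-line mode sketch announced above, in Python: the same control step either acts in the real world (learning the expected perceptions) or, with motor realization suppressed, fills in the missing perception from experience; the class and the toy environment are illustrative assumptions, not the talk's formal construction.

    class OfflineCapableAgent:
        def __init__(self):
            self.experience = {}          # (percept, motor command) -> expected next percept

        def step(self, percept, command, environment=None):
            if environment is not None:                  # on-line: act in the real world
                next_percept = environment(percept, command)
                self.experience[(percept, command)] = next_percept
            else:                                        # off-line: "thinking"
                # motor realization suppressed; perception completed from experience;
                # unknown situations default to "nothing changes"
                next_percept = self.experience.get((percept, command), percept)
            return next_percept

    def world(percept, command):          # toy environment: counting steps taken
        return percept + 1 if command == "step" else percept

    agent = OfflineCapableAgent()
    p = 0
    for _ in range(3):
        p = agent.step(p, "step", environment=world)    # embodied experience: 0 -> 3
    print(agent.step(2, "step"))                        # imagined continuation: 3, no movement

The control loop is unchanged; only the source of perception and the destination of motor commands are switched, which is what makes the agent virtually rather than physically situated.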
The birth of functional consciousness

Agents are said to possess artificial functional consciousness iff their communication abilities reach such a level that the agents are able to fable on a given theme. More precisely, the conscious agents can:
• communicate in a high-level language;
• verbally describe past and present experience, and the expected consequences of future actions, of themselves or of other agents;
• realize a certain activity given its verbal high-level description;
• explain the meaning of notions;
• learn new notions and new languages.

"Consciousness is a big suitcase." – M. Minsky

A sketch of the evolutionary development of cognitive abilities, consciousness included

(Figure: a timeline of the evolutionary development of cognitive abilities, with phenomenal consciousness preceding functional consciousness. From: J. Wiedermann: A High Level Model of an Embodied Conscious Agent, IJSSCI, 2, 2010.)

A thinking machine: a de-embodied robot

A robot's thinking mechanism (the cogitoid and the mirror neurons) running in a computer: a brain in a vat.

Lessons from what we have seen
• Achieving a higher-level artificial intelligence no longer seems to be a matter of a fundamental scientific breakthrough but rather a matter of exploiting our best algorithmic theories of thinking machines, supported by our most advanced robotic and real-time data processing technologies.
• An artificial cognitive system is quite a complex system with only a few components, none of which could work alone and none of which could be developed separately.
• It is unlikely that thinking machines will be developed by purely academic research, since it is beyond its power to concentrate the necessary amount of manpower and technology.
• This cannot be accomplished by large international research programs either, since a dedicated, long-term, open-ended effort of many researchers concentrated on a single, practically non-decomposable task is needed.
• It seems to be a unique strategic opportunity for giant IT corporations.
• The road towards thinking machines glimmers ahead of us, and it is only a matter of money whether we set off on a journey along this road.

(Closing image: Caspar David Friedrich, Giant Mountains, ca. 1830.)