PH19510 - Chaos, Communication and Consciousness
Lecture #2: "The Brain Versus the Computer"
11th Oct 2007, 12:10-13:00 in Room B34
Web Site: http://users.aber.ac.uk/atc/ph19510/lecture2.ppt
Dr Tony Cook   [email protected]
[Title slide image: from the movie 2001]

The Previous Lecture
What is Consciousness? How do we map the brain?
• EEG - Electroencephalograms
• PET - Positron Emission Tomography
• CAT - Computerized Axial Tomography
• MRI - Magnetic Resonance Imaging
Geography of the Brain
• Cerebrum
• Cerebellum

Today's Lecture
1. Comparisons between the Brain and the Computer
2. What is Artificial Intelligence?
3. Neural Networks

1. The Computer
[Image: close-up of Fig 5-4 from Tanenbaum (2001)]

1. The Brain vs. the Computer
• The human brain has about 100,000,000,000 (100 billion) neurons - in effect a massive multiprocessor.
• 1 neuron ~ 1/1,000 MIPS, so the whole brain ~ 100 million MIPS.
• A neuron's state is on/off/excitable; its function is to process and transmit information.
• The mind = software and the brain = hardware.
• The brain is an electrochemical piece of "wetware"; a computer is a fully electronic (mechanical) piece of hardware.
• The brain is good at visual recognition and multi-tasking.
• It is bad at arithmetic and remembering.
• You cannot replace parts as you can on a computer.

1. The Storage Capacity of the Human Brain
• How do we make a computer have something akin to a human brain?
• It must have similar processing power - there are supercomputers approaching the speed estimated on the previous slide.
• It must have similar memory storage capacity.
• Each neuron is connected to ~5,000 other neurons through synapses.
• Say each synapse can hold 256 levels (voltages), i.e. one byte: the storage capacity is then about 100,000,000,000 × 5,000 bytes = 500 terabytes.
• But it does not work quite like this.

1. The Storage Capacity of the Human Brain
• Another back-of-the-envelope calculation...
• Say someone lived for 90 years and had a photographic memory capable of recording a 640×480 image every 10 seconds - they could document their whole life!
• If the brain did some data compression then, like a digital camera JPEG image, each image might be, say, 100 kilobytes.
• 90 years = 90 × 365.25 × 24 × 60 × 60 = 2,840,184,000 seconds.
• Divide by 1 image per 10 seconds: 284,018,400 images.
• Convert to bytes: 284,018,400 × 100 KB ≈ 27 terabytes, or about 135 × 200 GB drives.
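These back-of-envelope figures can be checked in a few lines of code. This is only a sketch of the arithmetic on the slides above: the neuron count, synapse count, MIPS-per-neuron figure, image size and frame interval are the rough assumptions quoted there, not measured values.

```python
# Back-of-envelope estimates from the slides above (all inputs are rough assumptions).

NEURONS = 100e9             # ~100 billion neurons in the human brain
SYNAPSES_PER_NEURON = 5000  # each neuron connects to ~5,000 others
MIPS_PER_NEURON = 1 / 1000  # ~1/1,000 MIPS per neuron

# Processing power: ~100 million MIPS
brain_mips = NEURONS * MIPS_PER_NEURON
print(f"Brain processing power: {brain_mips / 1e6:.0f} million MIPS")

# Storage: one byte (256 levels) per synapse -> ~500 terabytes
brain_bytes = NEURONS * SYNAPSES_PER_NEURON * 1   # 1 byte per synapse
print(f"Synaptic storage: {brain_bytes / 1e12:.0f} TB")

# Photographic-memory estimate: one ~100 KB JPEG every 10 s for 90 years
seconds = 90 * 365.25 * 24 * 60 * 60
images = seconds / 10
memory_bytes = images * 100e3
# ~28 TB decimal (the slide's "about 27 terabytes"; the gap is just rounding
# and decimal-vs-binary units)
print(f"Lifetime of snapshots: {memory_bytes / 1e12:.1f} TB")
```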
2. Artificial Intelligence
• Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems - that is, systems that exhibit the characteristics we associate with intelligence in human behaviour: understanding language, learning, reasoning, solving problems, and so on.
• Expert Tasks: Given the necessary knowledge base, AI is very successful in applications in engineering design, medical diagnosis, scientific simulation and financial analysis. In human terms these tasks are usually regarded as the most sophisticated and intellectual.
• Formal Tasks: Compared to humans, computers excel at solving numerical problems (which is what they were first invented for) as well as other logic-based problems, such as those encountered in games - chess, backgammon etc. The algorithms need to be provided with the rules first.
• Mundane Tasks: Where AI has been directed at basic perception, simple language use, common-sense reasoning, robotics etc., attempts at mimicking human intelligence have only been modestly successful.

2. Artificial Intelligence cont.
Two schools of thought:
• Conventional A.I. mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. Also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). Examples:
  - Expert systems
  - Case-based reasoning
  - Bayesian networks
  - Behaviour-based A.I.
• Computational Intelligence involves iterative development or learning (e.g. parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Examples:
  - Neural networks
  - Fuzzy systems
  - Evolutionary computation

2. Intellectual Issues
• Mechanism vs Teleology (1640-1945)
• Natural Biology vs Vitalism (1800-1920)
• Symbols vs Numbers (1955-1965)
• Power vs Generality (1965-1975)
• Replacing vs Helping Humans (1960-)
• Procedural vs Declarative Representation (1970-1980)
• Psychology vs Neuroscience (1975-)

2. Uses of A.I.
• Simple A.I. used in web search sites such as Ask Jeeves
• Pattern recognition
• Games
• Robotic control
• Used a lot in fiction:
  - Computers: HAL 9000, the Matrix
  - Robots: C-3PO, Data

2. A.I. Comprehension
What A.I. can understand (Case 1 - what AI can readily comprehend):
A man went into a restaurant and ordered a steak. When the steak arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying the bill or leaving a tip.
A man went into a restaurant and ordered a steak. When the steak arrived he was very pleased with it, and as he left the restaurant he included a large tip when he paid his bill.
Did the man eat the steak in each case? Although not explicitly stated, the man did not eat the steak in the first story but did eat it in the second. AI can assimilate such inferences, but it costs in terms of associated memory.

2. A.I. Comprehension cont.
What A.I. can't understand (Case 2 - what AI cannot comprehend yet):
Saturday morning Mary went shopping. Her brother tried to call her then, but he couldn't get hold of her.
Why couldn't Mary's brother reach her? The answer "Because she wasn't at home" requires knowing that a person cannot be in two places at once, and then deducing that Mary could not have been at home because she was out shopping instead.
The basic dilemma of AI is that, in order to handle more than toy problems, the system needs a lot of knowledge. This is particularly true in the case of language.

2. Difference Engine No. 2
• Babbage designed it in 1847-49.
• More elegant than DE1; it benefited from work on the Analytical Engine (AE).
• Designs were offered to the Government in 1852.
• Nothing happened until 1985, when the Science Museum began building it for the bicentenary of Babbage's birth.

2. Augusta Ada Lovelace (1815-1852)
• Babbage published little on his Engines.
• Lovelace's notes on Menabrea (1842) are the best contemporary description of the proposed Engine.
• "The Enchantress of Numbers".
• The first programmer, inventor of AI and computer music: a true visionary making a significant individual contribution?
• Daughter of Byron - pushed towards mathematics and science by her mother (Annabella Milbanke) in an effort to make sure she did not turn out like him!

2. Augusta Ada Lovelace
• Ada Lovelace's father - Lord Byron (1788-1824).
• Poet, writer, bit of a rogue - but very well known in Britain at the time.
• Someone once described him as "mad, bad and dangerous to know".
• He ended up being buried at the Church of St Mary Magdalene in Hucknall after being refused by Westminster Abbey.
• Ada is buried next to him.

2. Lovelace's Notes
• In 1840 Babbage lectured on the Analytical Engine in Turin, seeking international support for his work.
• A "sketch" of the AE was written in French in 1842 by an engineer called Luigi Menabrea. It was late and disappointing (to Babbage).
• Babbage and Lovelace had met at one of his soirees in 1834; she was fascinated by his models of DE1 and took on the job of translating Menabrea's sketch at the suggestion of Charles Wheatstone, the publisher.
• She and Babbage were close friends and carried on a lengthy correspondence; he encouraged her to do more than a translation.

2. Lovelace's Ideas on AI (1843)
From Note G...
"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing...."

2. The Turing Test
• Computing Machinery and Intelligence (1950) asked: can machines think?
• Turing changed the question: can a computer hold a sustained conversation in a manner indistinguishable from a human being?
• An examiner, a human and a computer.
• Any program would require human traits - the ability to deceive, emotion.
• The publication of Turing's paper is seen by many as the birth of Artificial Intelligence.
• No machine has yet passed the Turing Test!

2. Turing's Test
How Turing thought a computer might need to respond in order to simulate "understanding":
Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Computer: It wouldn't scan.
Interrogator: How about "a winter's day". That would scan all right.
Computer: Yes, but nobody wants to be compared to a winter's day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Computer: In a way.
Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
Computer: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.
No computer has yet passed the Turing Test. Some think none ever will.

2. The Dartmouth Conference
The Rockefeller Foundation funded a summer school at Dartmouth College in 1956. This is seen by others as the start of AI. From the proposal:
"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire.
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves....."

2. The Dartmouth Conference cont.
• John McCarthy - coined the term Artificial Intelligence at the Dartmouth meeting, designed LISP, started major AI programs at MIT & Stanford.
• Herbert Simon & Allen Newell - GPS (General Problem Solver), the Logic Theorist, set up the Carnegie Mellon University laboratory.
• Marvin Minsky - frames, the society of mind, Director of the MIT AI Lab.

Power vs Generality (1965-75)
Early AI programs took a single well-defined task (e.g. chess, IQ analogy tests, symbolic integration) and demonstrated that a machine could perform it, usually using some kind of search.
In the mid 60s there was a shift towards generality in the form of systems exhibiting "common sense" reasoning - small puzzles and artificial problems were used to demonstrate components of (hopefully) more general abilities.
By 1975 most of these systems had failed to generalise to real problems, and emphasis swung back to powerful systems using knowledge to solve real problems - Expert Systems.

What is an Expert System?
• An attempt to encode human expertise and knowledge into a computer.
• This is done mostly by defining an explicit set of rules, extracted from human experts.
• The people who extract the rules from experts are known as knowledge engineers.
• Expert systems often use Bayesian probability to assign conclusions/decisions.
• An early expert system was MYCIN (1970s) - a medical diagnosis system with a 65% correct diagnosis rate, better than human physicians (who were not experts).
• The Microsoft Windows troubleshooting guide is an example (perhaps a poor one) of an expert system.

2. The Chinese Room
• John Searle (1980) invented the Chinese Room to counter the assertion that an algorithmic computer passing the Turing Test would have understanding.
• He imagines that he takes the place of the computer and operates from a locked room.
• He painstakingly follows the algorithm required to communicate intelligently, to the satisfaction of the interrogator, just as the other (human) participant does.
• The only proviso: all communication has to be in Chinese, but a symbolic translation is provided, e.g. a card system with one card per translated word.

2. The Chinese Room cont.
• If the AI algorithm is worked through properly, even in symbolic Chinese, then the correct answer "yes" or "no" (in Chinese) will result.
• Searle, however, does not understand a word of Chinese, so he passes the Turing Test without being conscious of, or ever knowing about, his success.
• The mental state of understanding is supposedly bound up in the algorithm, yet the computer has no intention or purpose.
• Whilst it might be artificially conscious, it does not possess human intelligence.
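To make Searle's point concrete, here is a toy sketch of the purely mechanical rule-following he describes: the "room" simply matches incoming symbol strings against a rule book and hands back the prescribed reply, with no access to what any symbol means. The rule book and phrases are invented for illustration only, not part of Searle's original paper.

```python
# Toy "Chinese Room": the operator blindly matches the input symbols against a
# rule book and returns the prescribed output symbols. Nothing in the code
# represents the *meaning* of any symbol - only shapes in, shapes out.
# The rules and phrases below are invented placeholders, not a real dialogue system.

RULE_BOOK = {
    "你吃了牛排吗？": "吃了。",        # if you see this string, hand back that one
    "你喜欢这家餐厅吗？": "喜欢。",
}

def chinese_room(symbols: str) -> str:
    """Look the symbols up in the rule book; the operator understands none of them."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default card for unmatched input

print(chinese_room("你吃了牛排吗？"))  # a fluent-looking reply, produced with zero understanding
```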
2. Dualism and a Definition for Consciousness
Searle considers that the algorithm is separate from the mechanics of running it. This recalls the philosophical concept of dualism - mind and matter - of Descartes (Cogito ergo sum).
Descartes, "Discourse on Method" (1637):
"While I could pretend that I had no body, that there was no world.... I could not pretend that I was not.... from the fact that I thought of doubting the truth of other things.... it followed I existed.... from this I recognised that I was a substance whose essence or nature is to think and whose being requires no place and depends on no material things."

2. Dualism and a Definition for Consciousness cont.
In the present scenario the mind would be analogous to software and the material brain to hardware. Where might middleware fit in?
Dualism distinguishes the mind from the brain, in which case consciousness could be the essential linkage. If the mind encompasses the internal model of reality constructed by the brain, consciousness could be defined as the envelope of capacities of the brain to form subjective representations of reality.

3. Neurons, Synapses and Neural Networks
Sensory information and motor commands are transmitted by neurons. These are cellular structures which are elongated, branching and fractal in geometry. Neurons can be connector, sensory or motor in type. Each comprises a nucleus surrounded by a star-like cell body called a soma, out of which an axon protrudes; the far end of the axon splits into many branches, each terminated by a synaptic knob. Once it fires, a neuron transmits electrical signals from the cell body along the axon to the synaptic knobs, which release neurotransmitter chemicals onto the dendrites of adjacent neurons. In vertebrates, sensory and motor neurons are coated in fatty Schwann cells, generating a myelin sheath.

3. Neurons, Synapses and Neural Networks cont.
• Nerve fibres contain a mixed solution of potassium chloride (KCl) with some sodium chloride (NaCl). When inactive, the interior of a nerve fibre is slightly electrically negative, because there is a slight excess of chloride ions compared to potassium and sodium ions, i.e. N(Cl) > N(K) + N(Na).
• Electrical impulses are slightly positive and travel down the nerve fibre by transverse exchange of sodium and potassium ions. The approaching signal causes sodium gates to open, pumping sodium ions into the fibre and converting the negative charge to positive, whilst in the wake of the impulse potassium is released through potassium gates, restoring the negative charge. Finally, ion pumps regenerate the preponderance of potassium within the fibre.
• For sensory and motor neurons, ion exchange only occurs at gaps in the myelin sheath - the Nodes of Ranvier. Signal transfer is then very rapid, ~120 metres per second.

3. Neurons, Synapses and Neural Networks cont.
The neurotransmitters released from the synaptic knobs can be excitatory or inhibitory - acetylcholine, dopamine, serotonin etc. The next neuron fires if the combination of impulses is above a certain threshold - summation. This is an all-or-nothing action, which has attracted the analogy with the digital bit.

3. Neurons, Synapses and Neural Networks cont.
Within the brain, the interconnection of neurons is colossal. Each brain cell typically has 5,000 input lines, so feedback is massive. Individual neurons generally fire 3 or 4 times a second, even when there is no perceptible brain activity in that area. When areas become activated the activity increases, and trains of peaks can be detected, which encouraged the search for a neural code analogous to computer code.
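The "summation up to a threshold, then all-or-nothing firing" behaviour described above is exactly what the simplest artificial neuron models copy. A minimal sketch follows; the weights, inputs and threshold are arbitrary illustrative numbers, not physiological values.

```python
# Minimal threshold ("all-or-nothing") neuron: weighted inputs are summed and the
# unit fires only if the total exceeds a threshold, mimicking synaptic summation.
# Weights, inputs and threshold are arbitrary illustrative values.

def fires(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs exceeds the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Excitatory (positive) and inhibitory (negative) "synapses":
weights = [0.9, 0.4, -0.7]
print(fires([1, 1, 0], weights, threshold=1.0))  # 1.3 > 1.0 -> fires (1)
print(fires([1, 1, 1], weights, threshold=1.0))  # 0.6 < 1.0 -> silent (0)
```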
3. Neurons, Synapses and Neural Networks: Is the Brain Hardwired?
The phenomenon of brain plasticity argues against the idea of a hard-wired computer model. Whilst the general map of neurons in the brain is laid out at birth, the interconnections between the synaptic knobs and the dendritic spines on the dendrites of neighbouring neurons are constantly changing. The separation is only around 0.025 thousandths of a mm (~25 nm), so connections can be made, lost and re-made with very little movement, offering the prospect of a time-dependent neural network. In this way it is possible for memories to be laid down by using different synaptic connections to store the necessary information. Particular correlations between the action of synapses between different neurons (Hebb synapses) would afford the rudiments of a process of learning. Further versatility in mental activity could come through the "leaking" of neurotransmitters to more distant neurons - neurochemistry.

3. Computer Neural Networks
• In 1951 Marvin Minsky built the first neural network machine, called SNARC.
• A kind of neural network, the Perceptron, was invented by Frank Rosenblatt at Cornell in 1957.
• It initially seemed promising, but it had limitations: it could recognise a specific pattern but was not good at recognising many classes of patterns.
• Work thus halted for many years on this line of research (a toy perceptron sketch is given after the lecture summary below).

Lecture Summary
• How the brain compares to a computer
  - Information processing
  - Pros and cons
• A.I.: the Turing Test and the Chinese Room
  - Experimental setup
  - What does it demonstrate?
  - What characteristics are required to pass?
• Neural Networks
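As flagged above, the sketch below shows perceptron-style learning in the spirit of Rosenblatt's single-layer unit: a toy example that learns the logical AND function. The training data, learning rate and epoch count are arbitrary illustrative choices, not a reconstruction of SNARC or of the original Perceptron hardware.

```python
# Toy single-layer perceptron trained on the AND function.
# The data, learning rate and epoch count are arbitrary illustrative choices.

def predict(x, w, b):
    """All-or-nothing output: fire (1) if the weighted sum plus bias is positive."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x, w, b)        # perceptron learning rule
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(AND)
print([predict(x, w, b) for x, _ in AND])  # [0, 0, 0, 1] once training has converged
```

A single unit like this can only separate patterns with a straight line (AND, OR), which is exactly the limitation, noted above, that stalled this line of research for many years.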