sb.hyper.afrl - Minds & Machines Home
... (2001) “Are We Evolved Computers? A Critical Review of Steven Pinker’s How the Mind Works” Philosophical Psychology 14.2:227-243. Paper Exhibiting Human Infinitary Reasoning: Bringsjord, S. & van Heuveln, B. (2003) “The Mental Eye Defense of an Infinitized Version of Yablo’s Paradox,” Analysis 63.1: ...
the machinery of the mind
... Probability measures "how often" an event occurs Principle of incompatibility (Pierre Duhem) The certainty that a proposition is true decreases with any increase of its precision The power of a vague assertion rests in its being vague (“I am not tall”) A very precise assertion is almost never certai ...
What is AI? - BYU Computer Science Students Homepage Index
... Play a decent game of table tennis Drive along a curving mountain road Drive in the center of Cairo Buy a week's worth of groceries at Berkeley Bowl Buy a week's worth of groceries on the web Play a decent game of bridge Discover and prove a new mathematical theorem ...
Can Computers Think?
... of an outside observer, you behave exactly as if you understood Chinese, but all the same you don't understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any ...
Philosophy and History of AI
... • Objections are basically of two forms: – “No computer will ever be able to pass this test” – “Even if a computer passed this test, it wouldn’t be intelligent” ...
Slides
... • Objections are basically of two forms: – “No computer will ever be able to pass this test” – “Even if a computer passed this test, it wouldn’t be intelligent” ...
History of AI - School of Computer Science
... First, it is difficult to take a problem that is presented informally and transform it into the formal terms required by logical notation. This is particularly true when the knowledge you are representing is less than one hundred percent certain. Second, solving even small problems (those with onl ...
Will Your Smartphone Ever Love You?
... sending electrical activity from one neuron to thousands of others. Paradoxically, ...
artificial intelligence
... Intelligence is the faculty of understanding “Intelligence is not to make no mistakes but quickly to understand how to make them good” ...
powerpoint - School of Computer Science
... • If the Turing Test were passed, Turing would conclude that the machine was intelligent • In 1980 John Searle devised a thought experiment which he called the Chinese Room (Searle, 1980) – Searle, J.R. 1980. Minds, Brains, and Programs. Behavioral and Brain Sciences, 3: 417-457 ...
Lecture Notes CS405 Introduction to AI What is Artificial Intelligence
... evolve and have higher level emergent behavior. For example, ants, bees, etc. ...
assign2a
... The developments raise a natural question: If computer processing eventually apes nature's neural networks, will cold silicon ever be truly able to think? And how will we judge whether it does? More than 50 years ago British mathematician and philosopher Alan Turing invented an ingenious strategy to ...
paper-topics-phl-220 - Barbara Gail Montero
... https://www.ted.com/talks/dan_dennett_on_our_consciousness?language=en 4. Can Computers think and understand? Question: Some have thought that if a computer can pass the Turing test, then it can think and has understanding of language. What is the Turing test? And is passing the Turing test sufficien ...
Industrial and commercial uses of artificial intelligence
... Hypothetically, what if a computer could perform the same number of calculations a human brain can? The mind can process information, but then again so can a basic calculator. The next question to ask is “Would the computer actually know what it was doing?” This idea leads to the theory known as th ...
Beyond the Turing Test - Stanford Vision Lab
... premium on stock answers and other ruses. “It’s a parlor trick,” Marcus says. “There’s no sense in which that program is genuinely intelligent.” ... To discern objects, researchers at places such as Google and Facebook are developing algorithms that can guide a self-driving car ... The new Turing Championship ...
Introduction - Stockton College
... • “Will emotions be explicitly programmed into a machine? No. That is ridiculous. Any direct simulation of emotions cannot approach the complexity of human emotions, which arise indirectly from the organization of our minds. Programs or machines will acquire emotions in the same way: as by-products ...
lecture
... 3) If functionalism is true there is no requirement that functional states have associated qualia. Thus mental states need not have qualia. (Qualia Zombies) 4) But the presence of qualia is paradigmatic of consciousness. And consciousness is a mental state. ------------------------------------------ ...
2013-11-18-CS10-L20-..
... – (B) Behaves similarly to people – when it makes errors, those errors are similar to people’s errors – (C) Carries out the same type of processing (mental representations) people do – i.e., thinks like people ...
Foundations of AI
... Whatever intelligence is, it cannot be achieved by a machine! Machines might be able to simulate (fake) intelligent behavior, but it is not acting because of (real) intelligence So, AI is doomed to failure – if AI is understood in the strong sense, namely, if we want to make machines intellige ...
- ePrints Soton - University of Southampton
... more than the sum of the parts. As an interesting aside, a part of Mays’ argument is based on what we now recognize as the symbol grounding problem (Harnad, 1990). Mays writes: “if we grant that these machines [i.e., digital computers] are complex pieces of symbolism, … it is clear that in order to ...
13.1 only
... At a time when the first computers were just being built, to suggest that they might soon be able to think was quite radical. ...
Slides
... Like neural networks, statistical models of how speech (or other intelligent behavior) is formed tend to give us little insight into how intelligence works But they work! ...
The Philosophical Approach: Enduring Questions
... • Other theories postulate the existence of a corticothalamic circuit in which information is passed recurrently between the cortex and thalamus. ...
Chinese room
The Chinese room is a thought experiment presented by the philosopher John Searle to challenge the claim that it is possible for a computer running a program to have a "mind" and "consciousness" in the same sense that people do, simply by virtue of running the right program. The experiment is intended to help refute a philosophical position that Searle named "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

To contest this view, Searle writes in his first description of the argument: "Suppose that I'm locked in a room and ... that I know no Chinese, either written or spoken". He further supposes that he has a set of rules in English that "enable me to correlate one set of formal symbols with another set of formal symbols", that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions – who do understand Chinese – are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in a written language, the computer executing the program would not understand the conversation either.

The experiment is the centerpiece of Searle's Chinese room argument, which holds that a program cannot give a computer a "mind", "understanding", or "consciousness", regardless of how intelligently it may make it behave. The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols.
Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers and does not apply to machines in general. This kind of argument against AI was described by John Haugeland as the "hollow shell" argument.

Searle's argument first appeared in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since.
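The purely syntactic rule-following that Searle describes – correlating one set of formal symbols with another without attaching meaning to either – can be sketched as a toy lookup table. This is a hypothetical illustration only; the rulebook entries are invented, and a real conversational program would of course be far more elaborate, which is precisely Searle's point: more rules do not add understanding.

```python
# A toy "Chinese room": the operator matches incoming symbol strings
# against a rulebook and emits the prescribed output symbols.
# Nothing in this process interprets what the symbols mean.
RULEBOOK = {
    "你好吗": "我很好",      # hypothetical rule: question symbols -> reply symbols
    "你会说中文吗": "会一点",  # hypothetical rule
}

def chinese_room(question: str) -> str:
    """Purely syntactic lookup: no meaning is attached to input or output."""
    return RULEBOOK.get(question, "我不明白")  # default reply symbols

print(chinese_room("你好吗"))  # emits the paired reply, yet nothing here "understands" Chinese
```

To an outside observer the replies look competent, but the function manipulates uninterpreted strings, which is all Searle grants the computer in his argument.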