Patrick Davis
English 1010
Colin Hull
June 21, 2012

Issue Summary

In Gordon Young's book Colors he shows one view of a possible future universe for mankind. He backs up his ideas with references to how our technologies will take certain paths. Some of these technologies include the future of mobile devices, transportation, and the use of robotics. Of these technologies the most controversial is robotics: how to approach the idea of creating artificial intelligence, and how this new intelligence might affect us. Artificial intelligence is an issue because it is not only a future possibility but also has the opportunity to change a great many things. These things include religious complications and the possibility of creation; directions for creating an AI, benefits of an existing AI, and fears of negative outcomes; and human views on what existence is and how those concepts may change. The creation of artificial intelligence not only poses many potential pros and cons but also needs to be looked at in a religious context, since it has been compared to the creation of life. In order to explain the controversy, there are some terms that need to be understood. These terms, stated by Sean Robsville in his article "Objections to Computationalism and Arguments Against Artificial Intelligence," are: Philosophical AI (sometimes also known as Strong AI) is the view that all human mental activities are reducible to algorithms, and could therefore be implemented on a computer. Computationalism is an essential tenet of materialism, which states that there is no need to assume any spiritual or non-algorithmic aspect to existence. Technological AI is a set of techniques (reducible to algorithms) for simulating some aspect of human intelligence in a machine. Technological AI runs up against the Buddhist belief in a "Mother of all Algorithms" (Robsville). This is one of the issues when dealing with such a multisided topic.
The side of the argument that there is more to the human mind than can be recreated is a common one. Robsville's article states that Buddhists believe these algorithms are existential ones that cannot be quantified by technological AI and cannot be recreated. Though Robsville's article has a Buddhist lean, he still holds that a strong AI will someday be created. However, there are alternative views on whether AI is even possible, and one view against its possibility is posed by Roger Penrose in his book The Emperor's New Mind, published by Vintage in 1990. Penrose suggests that "insight" is not an algorithm that can be mathematically calculated, and that it is necessary for the creation of a strong AI. "Penrose's authority as one of the greatest living mathematicians to address these things is unique," says Oxford University Press's summary of his book. While Robsville's article is informative about a religious view of AI, Penrose has more credentials in the field, having even co-written a book with Stephen Hawking called The Nature of Space and Time (Princeton University Press, 1996). This imbalance of credentials gives Penrose greater ethos and therefore makes his argument more believable, given his standing in the fields of mathematics and science. Though the two men disagree on AI's future existence, they give reasonably similar accounts of what strong AI is, and both hold that proof of its existence would be not only its ability to understand and learn, but to create. Even though the more credible source doubts that a strong AI will ever exist, many still debate whether it would be good for us as a society and how it might affect us. Depending on the way robots are created and what they are created for, the way they affect us may change.
According to the article "Artificial Intelligence," published in the Economist in March 1992, there are three different approaches researchers are taking toward the creation of artificial intelligence. The first group are the humanists, who believe that in order to create a strong AI you need to analyze the human brain from a psychological standpoint. Another group are the logicists, who believe research should be directed toward "programs on formal logic" (Economist). The last group are the structuralists, who believe a simulation of the mind's neural connections needs to be developed in order to build a working AI. While structuralists and humanists sound as if they are trying to create a more humanlike robot, the logicists' formal logic sounds more like inputting common sense than emotion. Emotion embedded in programming scares some people, because a thinking creation may open the way to a stronger and more intelligent form of life. Nick Bostrom, a member of the philosophy faculty at Oxford University, posted reasons for some of these fears in his article "Ethical Issues in Advanced Artificial Intelligence," a slightly revised version of a paper published in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, which gives several examples of why we may fear the creation of artificial intelligence. One of these examples is that "artificial intellects need not have humanlike motives." Even if we program them to obey us, a true strong AI may, like humans, think of itself as an autonomous agent and not be willing to be subjugated or treated differently. This could pose problems for those who created them to operate solely for their creators. Bostrom also states that not only will the emergence of AI be rapid, but that after creation an AI can be easily duplicated.
This vision of a rapid increase of new autonomous agents can either help or hinder our future, depending on their compliance with humans on a friendly level. Bostrom lists "initial motives" as a key factor in how a created AI will interact with humans: we need to make sure AIs are instilled with a constant understanding that humans are friends and are to be treated as such. This idea of Bostrom's suggests that a strong AI will either have problems out of the gate, or problems will never occur, depending on whether the AI understands its relationship to humans as one of friendship. However, even though Bostrom explains how the creation of artificial intelligence could be a quick fall for humankind, it also has many possible positive outcomes. Bostrom's article says that AI's creation will lead to quick technical advancements and take us further than we could reach on our own. Not only would the advancement be a large help in many areas of life, but AI would also not make mistakes; its margin of mathematical and knowledge-based error would be zero. This matters in the medical field: "It is no good being brilliant on 96% of your patients if you stupidly kill the other 4%," says the Economist article "Artificial Intelligence." One of both the benefits of and fears about the creation of artificial intelligence is that these "superintelligences," says Bostrom, "will only create more powerful superintelligences." If they create not only all future advancements for us but also advances to themselves, then most future intelligent inventions will come not from man, but from an invention man created. These AIs will be able to answer many of our questions, but among the questions men will pose, given the creation of a newly autonomous agent, will be "How do we categorize existence?" and "Can these robots extend the existence of humanity?"
On Wednesday, July 11th at 10 PM, an episode of Through the Wormhole with Morgan Freeman entitled "Can We Resurrect the Dead?" tackled the concept of man's extended existence through robotics. In this episode a team of "extreme lifeloggers" capture moments of their lives on tape into what they describe as "black boxes." These boxes are intended to carry the memories of the individuals they are attached to, in order to possibly bring back their personalities after they pass away. In the episode Ken Hayworth, a neuroscientist at Harvard University, predicts that by 2110 uploading your mind into a silicon-based operating system will be commonplace. The episode even says that these operating systems will be programmed into AI bodies called replicants, which are being made in Japan by roboticist Hiroshi Ishiguro. The information acquired by logicists, structuralists, and humanists alike in their research toward strong AI will prove useful for understanding the way the mind operates and for the ability to map it out. These views suggest that humanity may not only create another form of life with strong AI but may also move toward a more robotic form of life itself. These robotic views range from arguments for and against AI's future existence, pros and cons of its creation, and views on how it should be researched and developed, to its impacts on our future forms of existence. From books to articles to television, this is a controversy well discussed in the fields of technology and neuroscience. Ideas on the topic are viewed from all angles, ranging from religious standpoints to technical and hypothetical ones. Strong AI's future existence will be discussed, theorized, and hypothesized about until its creation becomes a fact, or is proven inconceivable. I believe that AI's creation is very possible in the future, though I have no estimate of when it will exist.
It will be a technology that changes everything, of that I'm sure. In Gordon Young's book Colors he looks far into the future, and not even that future has fully functioning AI. Young believed that AI would not be created because of the fear of it taking over jobs that humans should do; in his view, future robots should only be created to help us do specific things and not be limitless in their capacity for knowledge and learning. I do not know how the world would view another intelligent form of life, what problems or benefits would arise, or whether some would even consider it life, but it would most certainly be a marvel and a triumph of man's ingenuity.

Works Cited

Bostrom, Nick. "Ethical Issues in Advanced Artificial Intelligence." www.nickbostrom.com. N.p., 2002. Web. 19 Jul 2012. <http://www.nickbostrom.com/ethics/ai.html>.

"Artificial Intelligence." Economist 14 Mar 1992: 5+. SIRS Issues Researcher. Web. 21 Jul 2012.

Freeman, Morgan, perf. "Can We Resurrect the Dead?" Through the Wormhole. Science Channel: July 11, 2012. Television.

Penrose, Roger. The Emperor's New Mind. London: Vintage, 1990. Print.

Robsville, Sean. "Objections to Computationalism and Arguments Against Artificial Intelligence." Transcultural Buddhism. seanrobsville.blogspot.com. N.p., 2009. Web. 23 Jul 2012.

Young, Gordon. Colors. Raleigh: Lulu.com, 2006. Print.