ARTIFICIAL INTELLIGENCE

In 1956, an American computer scientist named John McCarthy coined the term “Artificial Intelligence” (AI). He envisaged an age of intelligent machines that he thought would be a reality within a decade. Fast forward to the early part of the 21st century and true AI still seems a very long way off. Or is it?

While the idea of artificial intelligence (AI) goes back to the mid-1950s, Isaac Asimov was writing about robot intelligence in 1942 (the word “robot” comes from a Czech word often translated as “drudgery”). A generally accepted test for machine intelligence, the Turing test, also dates back to the 1950s, when the British mathematician Alan Turing suggested that we would have AI when it was possible for someone to talk to a machine without realizing it was a machine. The Turing test is problematic on some levels, though. First, a small child is generally intelligent, but most small children would probably fail the test. Second, if something artificial were to develop consciousness, why would it automatically let us know? Perhaps it would keep this to itself and refuse to participate in childish intelligence tests.

The 1960s and 1970s saw a great deal of work on AI, but the breakthroughs failed to come. Instead, scientists and developers focused on specific problems, such as speech and text recognition and computer vision. However, we may now be less than a decade away from seeing the AI vision become a reality.

AI on the way

In 2008 a personal computer was able to handle around 10 billion instructions per second. That sounds like a lot, but it’s roughly the same as the brain of a small fish. By around 2040 machine brains should, in theory, be able to handle around 100 trillion instructions per second. That’s about the same as a human brain. So what happens when machine intelligence starts to rival that of its human designers? Before we go down this rabbit hole, we should first split AI in two.
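The processing-power projection above can be sanity-checked with a few lines of Python. The doubling period of roughly 2.4 years is an assumption chosen to illustrate the point (a Moore’s-law-style growth rate), not a figure from the text; the two performance numbers are the chapter’s own.

```python
import math

ops_2008 = 1e10   # ~10 billion instructions/s (a 2008 PC, per the text)
ops_brain = 1e14  # ~100 trillion instructions/s (rough human-brain figure, per the text)

growth_needed = ops_brain / ops_2008   # a 10,000-fold increase
doublings = math.log2(growth_needed)   # ~13.3 doublings of performance

# Assume performance doubles roughly every 2.4 years (illustrative assumption):
crossover_year = 2008 + doublings * 2.4

print(round(doublings, 1))    # ~13.3
print(round(crossover_year))  # ~2040, matching the chapter's estimate
```

With that doubling rate, the crossover lands right around the chapter’s ~2040 estimate; a faster or slower rate simply shifts the date by a few years either way.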
“Strong AI” is the term generally used to describe true thinking machines. “Weak AI” (sometimes known as “Narrow AI”) is intelligence intended to supplement rather than exceed human intelligence. So far most machines are pre-programmed or taught logical courses of action. But in the future, machines with strong AI will be able to learn as they go and respond to unexpected events. The implications? Think of automated disease diagnosis and surgery, military planning and battle command, customer-service avatars, artificial creativity and autonomous robots that predict and then respond to crime (a “Department of Future Crime”).

Are these examples realistic? Some experts might say yes. Ray Kurzweil, an American futurist and inventor, has made a public bet with Mitchell Kapor, the founder of Lotus software, that a computer will pass the Turing test by 2029. Other experts say no. Bill Calvin, an American theoretical neurophysiologist, suggests the human brain is so “buggy” that computers will never be able to emulate it or, if they do, machines will inherit our foibles and emotional inadequacies along with our intelligence. Think of the computer called HAL in the film 2001: A Space Odyssey.

But perhaps we are all looking in the wrong direction. The internet is already fostering an unanticipated form of self-organizing chaos: a highly efficient marketplace for ideas, reputations and information known as collective intelligence, from which AI may emerge. Adam Smith suggested that buyers and sellers, each pursuing his own interest, would together produce more goods, more efficiently, than under any other arrangement. The same is happening with online suppliers and potentially with customers too. Wikipedia, for instance, can create more knowledge, with less bias and over a wider span of disciplines, than any group of experts ever could.

What next?

There is little doubt that AI will progress significantly in the years ahead.
Some commentators say that AI today is roughly where personal computing was in around 2004, so progress could be astonishing. Historically, our approach to AI has been brute force, but once parallel computing techniques become established (quantum or DNA computing, for instance – see Chapter 17) true AI could be achieved very rapidly.

Nevertheless, two big questions remain. First, is the human brain essentially just a machine with a bunch of wiring and some chemistry and electricity thrown in, or is there much more to it than that? If the human brain is simply a collection of atoms, then surely it can be only a matter of time before we design machines that can match and possibly exceed human capabilities. If this happens, we are presumably on the cusp of a new era of human evolution where we start to merge with our machines and gain a level of immortality. Indeed, perhaps this exists already in the sense that the ‘real’ us is our DNA, and our bodies are merely temporary packages to carry it around. Second, even if machines do not reach this level of sophistication, it’s likely that they’ll become very smart indeed, so what happens to the people who previously did the things that machines will do in the future?

Welcome to the future. It’s metallic and uses lots of batteries. Hopefully, it’s not angry and it won’t work out a way to enslave the human race.

The condensed idea: The machines wake up