Artificial Intelligence: Chess and the Singularity
Ethan Gottlieb
December 6, 2012
There is no doubt that artificial intelligence has come a long way in the last couple of
decades. In 1997, a chess-playing system called Deep Blue beat the world chess champion,
Garry Kasparov. In 2007, a team of engineers produced a car that drove 55 miles in an urban
environment without breaking any traffic laws. In 2011, IBM's question-answering machine,
Watson, greatly outperformed two of the world's best players in a game of Jeopardy!. ("Artificial
Intelligence") These accomplishments highlight the power of artificial intelligence and
stimulate questions about its future. The strength of current chess playing engines is significantly
higher than the strength of the best chess players in the world, showing that perhaps computers
can strategize and "think" better than humans. Watson's victory in Jeopardy! shows that perhaps
computers "know" more than us. And the recent feats in autonomous cars show that computers
and robots can "move" faster and with fewer errors than humans. Throughout this essay, I will be
discussing artificial intelligence and trying to speculate about the future of the human race.
Darwin's theories say that the ability to adapt is of the utmost importance in evolution.
As of now, humans are undoubtedly more adaptable or "smarter" than computers and robots because
of the range of activities that humans can complete. However, computers can now outperform
humans in a wide range of specific activities. Moore's law states that the number of transistors on
integrated circuits doubles every two years, essentially meaning that computers are becoming
twice as powerful every two years. With this in mind, it does not take much of a stretch of the
imagination to figure that at some point, computers will be designed that combine the strengths
of these specific computers to make robots that are better than humans in many ways. These
robots could be physically stronger than humans, but also smarter. There is debate as to whether
these computers will be able to think in the same way that humans do and whether it is possible
for them to become self-aware. In the end, however, it may not matter, because as robots surpass
humans in usefulness, humans may become obsolete. Even if robots never become self-aware or conscious, if they are able to perform almost identical functions, then the theory of
functionalism says that they are essentially the same. It is not too hard to imagine teaching a
computer to learn and adapt to its environment, as that has already been done countless times
even in programs as simple as auto-correct. It would not be too hard to teach robots to "fear"
death by teaching them to avoid dangerous situations. We could even teach robots to reproduce
by creating other robots and conceivably even teach the robots to adjust their own source code,
to make themselves smarter. And as robots and computers reach "superintelligence", meaning
the state of being more intelligent than existing human beings, then it is not hard to imagine a
technological singularity, in which robots become exponentially smarter than humans by using
their "superintelligence" to become even smarter. Many scholars believe that this singularity will
happen in the 21st century and there is no way to predict what will come afterwards. The world
is changing rapidly and could be unrecognizable in a century. There is no doubt that
technological improvements have helped the human race, but as we accelerate towards an
unknown future, there should be a point where we question the direction we are headed, and
whether we really want to enter an age dominated by robots. We should be wary not only of
robots, but also of the movement towards transhumanism, which could transform part of the
human race into cyborgs who adapt better and leave unmodified humans behind. We are really
not so far off from this with the recent advances in prosthetics such as artificial hearts and limbs.
Enhancements to the human brain have not yet been implemented, but they seem very possible.
I will begin my discussion of artificial intelligence by analyzing chess engines and what
their emergence says about our future. Chess is known as a very intellectual game. Benjamin
Franklin wrote in his essay, "The Morals of Chess" (1786), that "The Game of Chess is not
merely an idle amusement; several very valuable qualities of the mind, useful in the course of
human life, are to be acquired and strengthened by it, so as to become habits ready on all
occasions; for life is a kind of Chess, in which we have often points to gain, and competitors or
adversaries to contend with, and in which there is a vast variety of good and ill events, that are,
in some degree, the effect of prudence, or the want of it." This quote tells us about the "valuable
qualities of the mind" that are acquired and strengthened by the game of chess. Benjamin
Franklin even goes so far as to say that "life is a kind of Chess." I find it amazing how an
extremely complex and thought-provoking game such as chess can be dominated by computers.
This topic was discussed by Shannon in his paper, "Programming a Computer for Playing Chess"
(1950), where he says that "The chess machine is an ideal one to start with, since: (1) the
problem is sharply defined both in allowed operations (the moves) and in the ultimate goal
(checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution;
(3) chess is generally considered to require "thinking" for skillful play; a solution of this problem
will force us either to admit the possibility of a mechanized thinking or to further restrict our
concept of "thinking"; (4) the discrete structure of chess fits well into the digital nature of
modern computers." The fact that computers are able to beat the top players in the world at chess
implies that computers are able to "think" better than us. ("Chess") Although the computers
might think differently than humans, they are still able to beat us at one of the most intellectual
games that humans have created. It is assumed that there is a link between chess prowess and
intelligence, so does the fact that computers can outperform humans in chess mean that they have
superior intelligence?
In order to analyze this form of "thinking" and intelligence that chess engines use, I will
discuss in some detail the way that they are designed. The most obvious way is for the computer
to look at every possible move in a position, which ranges from 0 to 218 possible moves but
usually is around 30 moves, and to look at the positions resulting from the move and then the
counter-move and then the next move, going as deep as possible until the computer can no
longer handle it. This method gets complicated very fast because with approximately 30 possible
moves per turn, the computer would have to analyze about 1,000,000,000 positions to look only
three moves ahead for both sides (6-ply). The time it would take to analyze that many positions
would be over 15 minutes in the 1990's, and much faster now. Another problem with this brute
force method is that many times there are lines or continuations that only result in a benefit after
a certain number of moves. For example an exchange of pieces. With the above method, the
computer may stop analyzing in the middle of an exchange of pieces, in which case it could
evaluate the position as the white side being down a rook and therefore not consider the original
move as a good candidate move, even though the white side may be able to capture a queen back
the next move. In order to improve the above method of analyzing an entire tree to a certain
depth, there have been several refinements. For example, the quiescence search analyzes more
interesting or volatile positions in greater depth than quiet positions: when the computer
reaches the end of the tree after three moves, it checks for high piece activity, hidden traps,
imminent captures, and so on, to decide whether it should analyze that variation further.
Another pair of techniques that newer engines use is extension and pruning.
Pruning is what happens when the computer reaches a position that is obviously bad, so it can
stop and eliminate that variation immediately instead of wasting time by looking further into
it. Similarly, an extension checks whether a node of the tree leads to an especially interesting
position, in order to decide whether that variation should be examined further; for example, a
passed pawn on the 7th rank that is likely to be promoted to a queen. However, the computer
needs to be careful with both techniques, because unnecessary extensions can cost a lot of
extra time, and overly aggressive pruning can cut out potentially good moves like sacrifices.
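The scheme described above, a fixed-depth minimax search that keeps extending "noisy" positions via a quiescence check, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not a real engine: the game interface (legal_moves, evaluate, is_volatile) is a hypothetical stand-in, and a toy tree of scores takes the place of actual chess positions.

```python
# Minimal sketch of fixed-depth minimax with a quiescence check.
# All names here (ToyGame, is_volatile, etc.) are illustrative
# assumptions, not part of any real chess engine.

class ToyGame:
    """A position is either a number (its static score) or a dict with
    a list of successor positions and an optional 'volatile' flag."""

    def legal_moves(self, pos):
        return pos["moves"] if isinstance(pos, dict) else []

    def evaluate(self, pos):
        return pos if not isinstance(pos, dict) else pos.get("score", 0)

    def is_volatile(self, pos):
        return isinstance(pos, dict) and pos.get("volatile", False)


def search(pos, depth, maximizing, game):
    children = game.legal_moves(pos)
    if not children:
        return game.evaluate(pos)      # terminal node: static evaluation
    if depth == 0 and not game.is_volatile(pos):
        return game.evaluate(pos)      # quiet position: stop here
    # Volatile positions at depth 0 fall through: the quiescence
    # extension keeps searching until the position calms down.
    pick = max if maximizing else min
    return pick(search(c, max(depth - 1, 0), not maximizing, game)
                for c in children)
```

With roughly 30 legal moves per position, each extra ply multiplies the work by about 30, which is why the quiescence extension must be applied selectively rather than everywhere.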
The actual analysis of the leaf nodes that give a score to each position is fairly complex
and uses an algorithm called the "evaluation function". As expected, the material on the board is
very important and is fairly easy to calculate. Each piece is given a standard point value with
pawns being the least valuable (1 point) and queens being the most valuable (9 points), other than
the kings, which are often given an extremely high value (200 or even 1,000,000) to ensure that
the computer treats the king as the most important piece. As a chess player, I very often
use these point values to evaluate positions during games. I may look ahead a couple moves and
realize that I will be down 1 point, in which case I may choose not to play that variation.
However, there are many other forms of compensation. For example, initiative, piece activity,
piece centralization, king safety, pawn structure, etc. are all forms of compensation that can more
than outweigh a material deficit. Many players will play gambits, which involves a small
material sacrifice (usually a pawn), in exchange for other advantages such as initiative or piece
activity that may not be immediately apparent. Advantages such as initiative and piece activity
are important because they can be used to mount an attack or even just restrain the opponent in
order to play for an even larger future advantage. King safety is also an obvious element that
should be added to the evaluation function because the goal of the game is to checkmate the
opponent's king. Gaining material is often not worth a loss of king safety, because
checkmate ends the game, no matter how much material each side has. Pawn structure is also
very important because it can result in a permanent weakness, such as isolated or doubled pawns
which are subject to attack, or a loss in mobility of pieces. Also pawn structure can determine
whether a side is cramped or has lots of space. These advantages are much harder to assign point
values to and so one of the main tasks of the chess engine designers is to find an optimal
algorithm that weighs all of these possible advantages correctly. Finding the correct balance has
been a struggle for programmers and has resulted in chess engines being notoriously bad at
evaluating sacrifices, especially positional sacrifices, because often the computer will value the
material on the board too highly. The computer is also able to memorize patterns seen in
previous games, a method that humans also use, in order to analyze positions more quickly. A
computer can also ponder during the opponent's move by guessing a response and calculating a
counter-response; however, this is only effective if the computer guesses the
opponent's response correctly. In some cases, the position to be examined will be checkmate or
stalemate, in which case the evaluation is infinitely positive/negative or 0 for stalemate. In the
end, the computer decides on a score to give to the position based on the evaluation. This
evaluation can take the form of an equation similar to the one shown below.
f(E) = 1000(K-K') + 9(Q-Q') + 5(R-R') + 3(B-B'+N-N') + (P-P') - 0.5(D-D'+S-S'+I-I') + 0.1(M-M') + ...
In the equation above, f(E) is the final point evaluation of the position. The letters
without apostrophes are the number of a certain piece or feature that white has while the letters
with apostrophes represent the number of the features in black's position. Above, K=king and is
weighted with a factor of 1,000 because it is the most important part of the game. Q=queen and
is the next most important, weighted with a factor of 9. R=rook, B=bishop, N=knight, P=pawn.
And now we get on to more complicated aspects which are not weighted as heavily as the
material. For example, D=doubled pawn, S=backward pawn, I=isolated pawn. All of these are
given a weight of -0.5 because they are all weaknesses. M=mobility, which can be
represented by the number of legal moves. ("Evaluation Function") Because chess engines are
very materialistic, they are known for being very good at tactical chess, but not as good at
positional chess.
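The formula above translates almost directly into code. The sketch below is only the Shannon-style scoring step; the feature counts (material, doubled/backward/isolated pawns D/S/I, mobility M) are assumed to be supplied by a separate board-analysis step, which is not shown.

```python
# Shannon-style evaluation: weights taken from the formula above.
# Positive scores favor White, negative scores favor Black.
WEIGHTS = {
    "K": 1000,                                # king, kept at all costs
    "Q": 9, "R": 5, "B": 3, "N": 3, "P": 1,   # material values
    "D": -0.5, "S": -0.5, "I": -0.5,          # doubled/backward/isolated pawns
    "M": 0.1,                                 # mobility (number of legal moves)
}

def evaluate(white, black):
    """f(E) for two feature-count dicts, one per side; missing features count as 0."""
    return sum(w * (white.get(f, 0) - black.get(f, 0))
               for f, w in WEIGHTS.items())
```

For instance, a side that is up one pawn but has acquired a doubled pawn in the process nets +1 - 0.5 = +0.5, which matches the intuition that the extra pawn is worth keeping but the structural damage eats into the gain.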
Generally, a positive score means that white has an advantage while a negative score
means that black has an advantage. The magnitude of the score tells the size of the advantage. It
is worth noting that chess is a zero-sum game, because every advantage that a player gains is
taken away from the opponent. The computer only needs to evaluate the scores of the positions
of the leaf node and not the moves that lead up to the leaf node. The diagram below shows the
game tree and is explained in more detail below:
[Game-tree diagram from Stef Luijten's tutorial: http://www.sluijten.com/winglet/14search01.htm]
The leaf nodes are shown in the bottom row of the tree above. Each of the 9 positions
uses the evaluation function explained above in order to give a positive or negative score. As
mentioned above, if one of the variations is particularly interesting, the computer may extend the
number of moves that it looks at using an extension or the quiescence search. So the scores
originate from these 9 leaf nodes and then are passed up to the parent nodes using a mini-max
algorithm. The computer uses a mini-max algorithm so that the minimum possible loss of a worst
case scenario is returned. This is because the computer is only interested in what the position will
look like if the opponent plays the best possible move. So with black to move in node#1, black
has 3 possible moves (overly simplified) and will choose the move which gives white the worst
possible score which is -4, so that score is passed up. Similarly, with white to move in node#0,
white wants to pick the move which gives black the worst score (remember that positive
numbers are good for white) which is +2, which is then passed up. You can imagine how fast
this tree would get complicated with a few more moves added in. Engines now use a
time-saving search technique called alpha-beta pruning, which minimizes the number of
positions evaluated by stopping analysis once it is discovered that a move leads to a worse
outcome for the computer than the worst-case scenario of a previously
evaluated position. For example, in the diagram above, if the computer evaluates the leaf
nodes from left to right, then it does not need to evaluate node #11 or node #12, because node #10
was already discovered to be -3, which is lower than the value of node #5, so White to move at the
beginning will never pick the variation leading to node #9.
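The cutoff just described can be demonstrated on a small toy tree of leaf scores. The tree and its values below are invented for illustration (they are not the nodes from the diagram); the point is that one leaf is never evaluated at all once a cutoff is found.

```python
# Minimax with alpha-beta pruning over a toy tree: a node is either a
# leaf score (number) or a list of child nodes. `evaluated` records
# which leaves were actually scored, to show the pruning at work.
def alphabeta(node, alpha, beta, maximizing, evaluated):
    if isinstance(node, (int, float)):
        evaluated.append(node)
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, evaluated))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent already has a better option: cut
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, evaluated))
        beta = min(beta, value)
        if beta <= alpha:       # we already have a better option: cut
            break
    return value

# White to move at the root (maximizing); Black replies (minimizing).
tree = [[3, 5], [2, 9]]
seen = []
best = alphabeta(tree, float("-inf"), float("inf"), True, seen)
# best == 3, and the leaf 9 is never evaluated: after seeing 2 in the
# second branch, Black can already hold White below the 3 found earlier.
```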
The method above has been slightly tweaked over time in order to create the most
efficient chess engine. Along with improvements in the algorithms, growth in technology over
the years has allowed computers to search much deeper and therefore more fully evaluate
positions. Some of these engines required supercomputers in the past in order to have the power
to evaluate deeply, whereas nowadays small handheld computers are powerful enough to beat
top players. Interestingly, the method above is not close to how humans go about thinking about
chess. What separates some of the best chess players in the world from intermediate players is
not the total number of positions they calculate before a move, but rather which
variations they choose to calculate. Beginners may look at 5 or 10 candidate moves a couple of moves
deep, while masters may look at only 2 or 3, eliminating the others right away, and are therefore able
to look much deeper into the smaller number of variations. One of the main reasons that these
masters are able to eliminate certain variations is because of pattern recognition skills from
experience. The best chess engines are successful because of their speed and depth of variations
calculated, rather than imitating the way that the chess masters think. ("Computer Chess")
Nowadays the evaluation for endgames is a bit different because engines now use
endgame databases which have solved every endgame that involves six pieces or less. So if the
game reaches an endgame involving few pieces, the engine just uses the stored database to
determine the quickest path to victory or the most resilient path if it is losing. As computing
speeds increase, the endgame database is constantly expanding and will probably include all
endgames with 7 or fewer pieces soon. However, as more pieces are added to the position, there
are more possible moves and so it gets increasingly hard to solve. Potentially, these databases
could expand to include all 32 pieces on the board, in which case the game of chess would be
solved. However, the positions with that many pieces would get extremely complex and there is
debate as to whether chess will ever be solved. There are about 10^43 possible board positions and more
than 10^120 possible game variations. The game of checkers was solved in 2007, but it is
significantly less complicated than chess (containing approximately the square root of the
number of possible positions in chess). Jonathan Schaeffer, the man who worked for over a
decade to solve checkers, believes that solving chess will require a massive
breakthrough such as quantum computing; however, he also added that he had learned "to never
underestimate the advances in technology." ("Solving Chess")
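The endgame lookup described above amounts to a table probe followed by a comparison of outcomes. Real tablebases use compact encodings and precise distance-to-mate metrics; the position keys and values in this sketch are purely hypothetical stand-ins for illustration.

```python
# Toy endgame "tablebase": maps a position key to (outcome from the
# probing side's perspective, distance to mate). The keys are invented
# labels, not a real tablebase format.
TABLEBASE = {
    "pos-a": ("win", 1),   # mate in 1
    "pos-b": ("win", 3),   # mate in 3
    "pos-c": ("draw", 0),
}

def probe_best(candidate_positions):
    """Among positions reachable in one move, pick the one that wins
    fastest; failing that, draws; failing that, resists longest."""
    def rank(pos):
        outcome, dist = TABLEBASE[pos]
        order = {"win": 0, "draw": 1, "loss": 2}[outcome]
        # shorter mates first when winning, longer resistance when losing
        return (order, dist if outcome == "win" else -dist)
    return min(candidate_positions, key=rank)
```

This mirrors the behavior described in the text: with few enough pieces, the engine stops searching entirely and simply follows the stored quickest path to victory, or the most resilient defense when losing.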
I have explained in depth how computers use algorithms and computations to beat
humans in chess. Although humans' thought processes when playing chess are somewhat different,
both computers and humans analyze the variations a certain number of moves deep and then
make some evaluation of the position. Does this mean that the brain also uses a series of
algorithms and computations like a computer does? What is the difference between our brain and
a digital computer?
There is still much discussion about whether or not a computer can actually think. The
main difference in arguments is about the definitions of intelligence and thinking. Two
prominent experiments take different sides in this debate. A thought experiment called the
"Chinese Room" argues that a program cannot give a computer the ability to understand or be
conscious. The analogy is that a human who speaks only English could execute the instructions
of a program that carries on an intelligent conversation in written Chinese, and would thereby
appear to converse fluently in written Chinese. However, the point is that the human
cannot understand the conversation. Similarly, the computer executing the program does not
actually "understand" the conversation either. This argument is in opposition to the "Turing Test"
which claims that if a human and a computer can both carry on a text conversation with
another human and that human cannot tell the difference between the two responders, then the
computer is capable of thinking. Both of these arguments have supporters. I believe that if two
objects function the same way, it doesn't really matter if one can think or not. There is no doubt
that a computer and a human will get to a complex answer in different ways, but the point is that
both methods work. I think that the main reason that this concept is important is when thinking
about the future of artificial intelligence and whether or not robots will be able to become
sentient or self-aware.
We still do not fully understand how our brain functions but scientists have already
modeled parts of the human brain and expect to have fully modeled the human brain in the next
couple of decades. Likely, we will be able to model and build artificial brains of small mammals
like rats in the near future. Once we have a model of the human brain, we will be able to better
understand how to create a digital replica and it will be a huge step forward in artificial
intelligence. There is no doubt that more improvements will be made as technology grows.
Perhaps before a fully digital brain is built, researchers will be able to design enhancements to
the brain. Prosthetic limbs have already been created that can interact with a human's brain in
order to coordinate movement and even use electrical impulses to relay feeling from the limbs
back to the brain. We are getting closer to a cyborg future. I believe that some time in the next
century, the issue of digital enhancements to the human body will be forefront in world politics.
With these sorts of improvements, the average life expectancy will surely rise and
overpopulation will certainly be a problem. Resources will become scarce. Robots will both be
able to replace us physically, but also mentally. There is even a possibility that robots could
become sentient. In that case, are they considered equals? Could they vote? Robots could be
taught to avoid danger. They could be taught to reproduce. They could be designed to learn.
Even existing technology like the Kinect is able to view the 3D world around it and learn from it.
Computers and robots will surely overtake us in prowess in many human activities. Should we
ever give robots power over us? Are digital enhancements beneficial for the human race?
There have been many interpretations of the future of robots and digital enhancements.
Movies like Terminator show the dangers that we are heading towards. In the movie, humans
allow a computer military defense system, SkyNet, to take over command because it provides
many benefits. However, the computer becomes self-aware and creates an apocalypse. Even if
this scenario in Terminator is far-fetched, I can think of many other ways in which the evolution
of AI could harm humankind. One is that humans could become extremely lazy. Robots and
computers have already taken over many human jobs. For example, in factories and in
agriculture, we now employ far fewer humans because machines can do the work for us. In
many cases, employers decide to lay off workers because machines are a cheaper alternative to
human labor. It is not hard to imagine robots taking over other jobs from humans such as truck
drivers, soldiers, stockbrokers, and so on. It is likely that in the future, most human jobs will be
filled by robots. Two fields that will probably become more sought after are computer
science and engineering, because the people in those fields will be building the robots and AI. It is
possible that eventually we have so many robots replacing us that there won't be jobs left for
many humans. In this case, many humans could be unemployed and become lazy, as they have
no incentive to work anymore. It is also possible that new technologies could make the world a
more dangerous place. New advanced weaponry is being developed and even new types of
warfare. Cyber warfare is becoming a bigger threat as the infrastructure of countries become
more reliant on computers. The United States has already demonstrated the effectiveness of
cyber warfare in its "Stuxnet" attack on Iranian nuclear plants. If technology doesn't grow at a
fast enough rate to fix many of the world's problems like resource scarcity and global warming,
then some countries may begin to panic. War may break out, and with the new technologies, the
next world war could be much more devastating than any previous wars. The movie Limitless
depicts a drug that enhances the human brain and grants unprecedented
intelligence, but it also comes with setbacks. I fear that when enhancements are developed
for the human body, which will surely come, humans who decide against becoming "cyborgs"
will be left behind. The next stage of evolution may happen very fast.
I have largely taken a pessimistic view of the future of artificial intelligence and have
mentioned many possible negative scenarios. I gave a survey on the future of technology to my
peers to see if their views of the future were more pessimistic or optimistic. The results show that
most people are very optimistic about the future. I asked the survey takers to rank six different
scenarios in order from most likely to least likely. Out of those six scenarios, I made two of the
choices clearly optimistic (B and D) and two of the choices clearly pessimistic (A and E).
Survey Question:
Which scenarios do you think are most likely to occur by the year 2062 (50 years from now)?
Survey Choices:

A) Robots and artificial intelligence will threaten the well-being of human kind.

B) Robots and artificial intelligence will only enhance the well-being of human kind.

C) Humans and robots/artificial intelligence will become more connected creating a new race of
"cyborgs".

D) Most of the world's problems such as resource scarcity will be solved by future technology
and the world will be better off.

E) Technology will not grow at a fast enough pace to solve the world's problems and the world
will be worse off.

F) Robots and artificial intelligence will surpass humans in prowess in almost all activities (both
mental and physical).
Survey Results:
Interestingly, the results showed that survey takers thought that the two optimistic
scenarios were the two most likely scenarios and the two pessimistic choices were two of the
three least likely scenarios. I believe that the choice that was decided to be 2nd least likely also
has negative connotations because it implies a kind of end to the pure human race. Conversely,
the choice that was decided to be 3rd most likely has positive connotations because it implies
that technology and artificial intelligence will go through huge improvements in the next 50
years.
The results are based on 12 respondents, and the order given below was found using ranked
pairs. The sorted MOV (margin-of-victory) matrix is also shown below, which shows that D (Most of the world's
problems such as resource scarcity will be solved by future technology and the world will be
better off.) is a Condorcet winner and wins by all the methods we have discussed including
Borda, IR Borda, Ranked Pairs, etc. However, the rankings of some of the other choices change
depending on which counting method is chosen.

1) D) Most of the world's problems such as resource scarcity will be solved by future technology
and the world will be better off.

2) B) Robots and artificial intelligence will only enhance the well-being of human kind.

3) F) Robots and artificial intelligence will surpass humans in prowess in almost all activities
(both mental and physical).

4) E) Technology will not grow at a fast enough pace to solve the world's problems and the
world will be worse off.

5) C) Humans and robots/artificial intelligence will become more connected creating a new race
of "cyborgs".

6) A) Robots and artificial intelligence will threaten the well-being of human kind.
Choice    D    B    F    E    C    A
  D       0    2    2    4    8    6
  B      -2    0    2    4   10    4
  F      -2   -2    0    6    8    8
  E      -4   -4   -6    0    4    6
  C      -8  -10   -8   -4    0    2
  A      -6   -4   -8   -6   -2    0
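The claim that D is a Condorcet winner can be checked mechanically from the margin matrix. The sketch below rebuilds the matrix as a dict (MOV[x][y] is the margin of x over y on the 12 ballots) and scans for a choice with a positive margin against every rival.

```python
# The margin-of-victory matrix above, rebuilt row by row.
choices = ["D", "B", "F", "E", "C", "A"]
rows = [
    [0, 2, 2, 4, 8, 6],
    [-2, 0, 2, 4, 10, 4],
    [-2, -2, 0, 6, 8, 8],
    [-4, -4, -6, 0, 4, 6],
    [-8, -10, -8, -4, 0, 2],
    [-6, -4, -8, -6, -2, 0],
]
MOV = {x: dict(zip(choices, r)) for x, r in zip(choices, rows)}

def condorcet_winner(mov):
    """A Condorcet winner beats every other choice head-to-head."""
    for x in mov:
        if all(mov[x][y] > 0 for y in mov if y != x):
            return x
    return None
```

Running `condorcet_winner(MOV)` returns "D", confirming the result stated above: D beats every other choice pairwise, so it wins under any of the Condorcet-consistent counting methods mentioned.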
I decided to pick 50 years from now for the survey because it is close enough to the
current day that most of the survey takers will still be alive and can imagine that time, but far
enough away that there could be huge advances in technology.
There are many different opinions about how artificial intelligence and technological
advances will play out in the future; however, one thing is certain: the rapid growth of
technology will carry the next century through unprecedented changes in humankind and
the world. The growth of artificial intelligence has helped humankind significantly, but
we must be careful where we are headed. Will there ever be a point where we should stop the
development of artificial intelligence? The development of unbeatable chess engines is only the
beginning of a huge surge in artificial intelligence. The next century will be momentous and we
will face tough decisions about the future of humanity. And I hope for the sake of humanity, that
we make the right choices.
Bibliography
"Chess." Wikipedia. Wikimedia Foundation, 11 Aug. 2012. Web. 08 Nov. 2012.
<http://en.wikipedia.org/wiki/Chess>.
Luijten, Stef. "Writing a Chess Program in 99 Steps." Winglet, Writing a Chess Program in 99
Steps. N.p., n.d. Web. 08 Nov. 2012. <http://www.sluijten.com/winglet/14search01.htm>.
"Solving Chess." Wikipedia. Wikimedia Foundation, 16 Oct. 2012. Web. 08 Nov. 2012.
<http://en.wikipedia.org/wiki/Solving_chess>.
"Evaluation Function." Wikipedia. Wikimedia Foundation, 16 Oct. 2012. Web. 08 Nov. 2012.
<http://en.wikipedia.org/wiki/Evaluation_function>.
"Computer Chess." Wikipedia. Wikimedia Foundation, 11 July 2012. Web. 08 Nov. 2012.
<http://en.wikipedia.org/wiki/Computer_chess>.
"Artificial Intelligence." Wikipedia. Wikimedia Foundation, 11 Sept. 2012. Web. 09 Nov. 2012.
<http://en.wikipedia.org/wiki/Artificial_intelligence>.
"Strong AI." Wikipedia. Wikimedia Foundation, 11 July 2012. Web. 09 Nov. 2012.
<http://en.wikipedia.org/wiki/Strong_AI>.
"Technological Singularity." Wikipedia. Wikimedia Foundation, 11 Sept. 2012. Web. 09 Nov.
2012. <http://en.wikipedia.org/wiki/Technological_singularity>.
"Transhumanism." Wikipedia. Wikimedia Foundation, 11 Sept. 2012. Web. 09 Nov. 2012.
<http://en.wikipedia.org/wiki/Transhumanism>.
"Brain-computer Interface." Wikipedia. Wikimedia Foundation, 11 Apr. 2012. Web. 09 Nov.
2012. <http://en.wikipedia.org/wiki/Brain-computer_interface>.
"Chinese Room." Wikipedia. Wikimedia Foundation, 11 Feb. 2012. Web. 09 Nov. 2012.
<http://en.wikipedia.org/wiki/Chinese_room>.
"Turing Test." Wikipedia. Wikimedia Foundation, 11 July 2012. Web. 09 Nov. 2012.
<http://en.wikipedia.org/wiki/Turing_test>.