The History of Chess AI
Intro
Jacob Roberts
March 2014
Science fiction has toyed with the idea of robots achieving human intelligence
since the invention of programmable computers. I, Robot was published in 1950,
just six years after the first programmable computer, and the very same year Claude
Shannon published the first paper on computer chess. Asimov and Shannon shared a
similar vision for how robots would emulate human intelligence; each envisioned
their machine applying domain knowledge heavily in an effort to very closely
simulate the human thought process. Asimov created the three laws of robotics to
govern behavior, whereas Shannon believed a successful chess artificial intelligence
(AI) would use chess-specific knowledge to avoid searching pointless variations.1
These are just two indicators of the human inclination to anthropomorphize other
forms of intelligence, projecting our own intelligence onto robots the same way it is
projected onto pets. However, chess AI has come to be viewed as a massive defeat
for those interested in emulating human intelligence.2 Despite the massive
expectations placed on chess AI programmers to prove computers could emulate
human intelligence,3 chess AI quickly evolved past any desire to replicate the human thought
process; when Deep Blue defeated Garry Kasparov in the famous 1997 match, the machine had been
intentionally designed to evaluate positions at only an amateur level.4 The evolution of
chess AI and its methodology may prove to be indicative of a larger trend in AI.
Chess
It’s important to understand why chess has been the poster child for game
AI development since the field first existed. The first published paper on chess AI,
written by Claude Shannon in 1950, outlined objective reasons why chess would be
a good test case for game AI. Chess is clearly defined in both the allowed operations
and its end goal; a move is either legal or not legal, and the game ends under
specific, pre-defined conditions. The size of the board, the number of the pieces, and
the qualities of the pieces are all defined. Finally, chess strikes a fine balance between
simple and complex; it is not trivially solvable, but simple enough to create
satisfactory solutions.5 All of these traits make chess an ideal game for a computer to
represent, but they also apply to a great many other games.
The true reason chess was adopted so easily for AI came from subjective
biases on the part of early programmers and the western world. Either because of
coincidence or a shared skill set, many early programmers happened to be avid
amateur chess players.6 Additionally, the widely adopted Elo rating system for
chess would allow them to objectively measure their creation’s strength. Outside of
the bubble of programming, the popular consciousness of the western world held that
chess skill paralleled general intelligence. It was therefore reasoned that when
computers could play chess, they would have achieved intelligence.7 International chess was
just beginning to form cohesively under FIDE, increasing worldwide interest at the
same time as computer chess was making major breakthroughs. The objective
reasoning provided by Shannon, favoritism from early programmers, and the
reputation of chess in the western world all combined to make it clear that chess
was the ideal candidate for beginning the process of designing intelligence.
Shannon initially proposed two types of potential chess AI. Type-A would use
a simple brute-force search to a pre-defined depth and return the best move
according to an evaluation function. Type-B would use specific chess knowledge to
calculate forcing variations, evaluating when the position reached stability. Shannon
reasoned that type-B would be the superior algorithm because it could evaluate
fewer positions than a brute-force search, letting it play faster and search to a
greater depth.8 Shannon’s reasoning was sound: a complete brute-force search, even
with modern technology, would likely fall behind an intelligent AI.
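To make Shannon’s type-A idea concrete, here is a minimal sketch of a fixed-depth minimax search that falls back on a static evaluation function once it reaches its depth limit. The position interface it assumes (legal_moves, make_move, is_terminal, evaluate) is hypothetical, invented only for illustration; it is not the API of Shannon’s proposal or of any real engine.

# A minimal sketch of Shannon's type-A approach: search every line of play to
# a fixed depth, then score the resulting positions with a static evaluation
# function. The position object and its methods are assumptions for
# illustration only.

def minimax(position, depth, maximizing):
    """Return the value of `position` searched to `depth` plies."""
    if depth == 0 or position.is_terminal():
        return position.evaluate()            # static evaluation at the horizon
    values = []
    for move in position.legal_moves():
        child = position.make_move(move)      # assumed to return a new position
        values.append(minimax(child, depth - 1, not maximizing))
    return max(values) if maximizing else min(values)

def best_move(position, depth=4):
    """Root of a type-A search: try every legal move to the same fixed depth."""
    return max(position.legal_moves(),
               key=lambda m: minimax(position.make_move(m), depth - 1, False))

Every position within the depth limit is visited, which is exactly why Shannon expected a knowledge-guided type-B search to outpace this approach.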
What Shannon didn’t anticipate was the invention of alpha-beta pruning in
1958.9 With it, type-A programs could massively reduce the number of positions
they searched, while also avoiding the positional errors that type-B programs tend
to carry. What Shannon should have anticipated was that the modular design of
minimax search would allow type-A programs to adapt to new innovations in search
much more easily than type-B. Minimax is an algorithm that allows most of its
components, such as move ordering and position evaluation, to be changed
independently of each other, a luxury type-B programs don’t have. Minimax also
scales very well with computing power, as opposed to a type-B program, which
would scale with human knowledge of chess. Considering these factors, it’s no
surprise that the invention of alpha-beta pruning was considered deadly to type-B
algorithms. The final blow was struck when the AI Chess 4.0, based on minimax and
alpha-beta pruning, won the Association for Computing Machinery (ACM) computer chess
championships in the early 1970s,10 demonstrating the superiority of type-A
algorithms.
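The sketch below adds alpha-beta pruning to the same hypothetical minimax skeleton from the earlier sketch. It returns the same value as a plain minimax search while skipping branches that provably cannot change the result, which is why it undercut the main argument for type-B programs.

# Alpha-beta pruning layered onto the earlier minimax sketch (same hypothetical
# position interface). The alpha/beta window records the best outcomes already
# guaranteed to each side; branches falling outside that window are cut off.

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    if depth == 0 or position.is_terminal():
        return position.evaluate()
    if maximizing:
        value = float("-inf")
        for move in position.legal_moves():
            value = max(value, alphabeta(position.make_move(move), depth - 1,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # the opponent already has a better line: prune
                break
        return value
    else:
        value = float("inf")
        for move in position.legal_moves():
            value = min(value, alphabeta(position.make_move(move), depth - 1,
                                         alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

Note how the evaluation function and the order in which legal_moves() yields moves plug in from outside the search itself; that separation is the modularity described above.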
After Chess 4.0 fixed the course of chess AI, it was just a matter of hardware
until the inevitable. In 1996, the chess AI Deep Blue won a single game against Garry
Kasparov, marking the first win by a computer over a reigning world champion under
standard tournament conditions.11 In the famous 1997 rematch, Deep Blue bested
Kasparov in a 6-game series 3.5-2.5. With the score tied 2.5-2.5 going into the last
game, Kasparov shockingly resigned after just 62 minutes.12 Grandmaster John
Fedorowicz said of the loss, “Everybody was surprised that he resigned because it
didn't seem lost. We've all played this position before. It's a known position.”13
When asked why he resigned, Kasparov said, “I lost my fighting spirit”.14 Such an
issue did not affect Deep Blue.
When Chess 4.0 made its mark on the AI landscape, it revealed just how
important the speed of hardware would be. Its developers believed its playing
strength was between 1400 and 1600, but when the same code was moved to a
supercomputer, it was able to win a tournament among players rated between 1800
and 2000.15 Deep Blue did nothing to change that trend. Its board evaluation
function only operated at an amateur level, but by searching on average 12 moves
ahead,16 Deep Blue was able to defeat the world champion. This wasn’t a novel
concept; chess AI programmers had long known that very simple evaluation
functions tend to outperform complex ones because a deeper search returns better
results than a shallower but more comprehensive one.17 To illustrate how important
a deep search is, increasing the maximum depth of a search by a single ply is said to
increase playing strength by 200 to 300 rating points.18 Deep Blue had 30 processors, each containing 16
customized chips designed to perform specific chess functions.19 Its success was not
a matter of broadening human understanding of chess or inventing new and more
efficient algorithms to evaluate a position; it was a matter of searching a lot more
positions.
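For a sense of what “amateur level” evaluation means in practice, the sketch below scores a position by nothing more than counting material, a deliberately crude heuristic of the kind the text describes. It is an illustration only, not Deep Blue’s actual evaluation function, and the board representation is an assumption.

# A deliberately simple static evaluation: sum piece values for the side to
# move and subtract the opponent's. Crude heuristics like this become strong
# only when paired with a deep search. (Illustration only; not Deep Blue's
# evaluation function.)

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_eval(pieces, side_to_move):
    """`pieces` is assumed to be an iterable of (piece_letter, color) pairs."""
    score = 0
    for letter, color in pieces:
        value = PIECE_VALUES[letter.upper()]
        score += value if color == side_to_move else -value
    return score

# Example: white to move with an extra rook evaluates to +5.
print(material_eval([("K", "w"), ("R", "w"), ("K", "b")], "w"))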
Go
Originally neglected because of its lack of popularity and prestige in the
western world,20 the board game go began to grow in popularity among AI
enthusiasts as a direct response to Deep Blue’s victory. Deep Blue’s programming
looked nothing at all like human intelligence, which angered many early adopters
who supported type-B programs. They believed that the more complex game of go
would not lend itself to the type-A programs that dominated chess.21 At the time
Deep Blue defeated Kasparov, the best go programs could not defeat a novice.22
Compared to chess, go presents a vastly larger game tree to be
searched; an average chess game lasts 84 total moves with 38 legal moves per
position,23 whereas an average game of go has 200 turns with 200 legal moves per
turn.24 That’s not the only problem that type-A go programs face. Determining a
board’s score in go requires searching many moves ahead to resolve forced patterns
of stones. A human can “cache” this result, but computers have so far been unable to do the
same.25 Finally, go endgames have been proven to be PSPACE-hard, and some
other aspects of the game are NP-hard.26 All of this has factored into the historical
tendency to dismiss brute-force solutions to go AI.27 If brute-force approaches to go
cannot succeed, then the only alternative would be to create an AI with significant
domain knowledge, potentially emulating human intelligence and succeeding where
some believe chess AI failed.
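The rough arithmetic below, using only the per-game figures quoted above, shows why the go game tree dwarfs the chess one; the numbers are back-of-the-envelope estimates, not a claim about either game’s true complexity.

import math

# Rough game-tree size: branching_factor ** game_length, reported as a power
# of ten. Figures are the averages quoted in the text (38 legal moves per
# position over 84 moves for chess, 200 legal moves per turn over 200 turns
# for go).

def tree_size_exponent(branching_factor, game_length):
    return game_length * math.log10(branching_factor)

print(f"chess: roughly 10^{tree_size_exponent(38, 84):.0f} positions")    # ~10^133
print(f"go:    roughly 10^{tree_size_exponent(200, 200):.0f} positions")  # ~10^460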
As more resources have been devoted to go AI in the wake of the machine
defeating man in chess, prospects for brute-force appear to be bright, setting go up
to disappoint those who were hoping for human intelligence. Minimax has been
eschewed in favor of Monte Carlo tree search,28 with very promising results.
Recently, the Monte Carlo-based AI Crazy Stone defeated professional go player
Ishida Yoshio, a dominant player of the 1970s, while receiving a four-stone handicap. This
places Crazy Stone at a very strong amateur level,29 a massive step forward since
Deep Blue defeated Kasparov. Go AI has not yet reached its peak in either software or
hardware. The lead programmer of Deep Blue believes that go programmers can use
recursive null-move pruning combined with alpha-beta pruning to reduce the
number of positions a typical AI must search to roughly its fourth root.30 Should this be
done, he estimates that by 2017 a go AI running on the modern equivalent of Deep Blue’s
hardware could play at Deep Blue’s level, well enough to beat any human.31
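For readers unfamiliar with the approach, the sketch below shows the core loop of a Monte Carlo tree search with the common UCT selection rule: select, expand, simulate with random playouts, and backpropagate the result. It reuses the hypothetical position interface from the earlier sketches (plus an assumed winner() method), credits wins from a single player’s point of view for brevity, and is far simpler than the playout policies an engine like Crazy Stone actually uses.

import math
import random

class Node:
    def __init__(self, position, parent=None, move=None):
        self.position = position
        self.parent = parent
        self.move = move
        self.children = []
        self.untried = list(position.legal_moves())
        self.wins = 0.0
        self.visits = 0

    def uct_child(self, c=1.4):
        # Balance exploitation (win rate) against exploration (rarely visited children).
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_position, player, iterations=1000):
    root = Node(root_position)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes using UCT.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one previously untried move as a child.
        if node.untried and not node.position.is_terminal():
            move = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.position.make_move(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves until the game ends.
        position = node.position
        while not position.is_terminal():
            position = position.make_move(random.choice(list(position.legal_moves())))
        result = 1.0 if position.winner() == player else 0.0
        # 4. Backpropagation: update win/visit counts back up to the root.
        while node is not None:
            node.visits += 1
            node.wins += result
            node = node.parent
    # Play the move that was explored the most.
    return max(root.children, key=lambda ch: ch.visits).move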
Conclusion
The success of type-A chess AI was a blow to those looking for human
intelligence in machines. Should go AI continue on the path it’s on, it will keep
shedding the ties it historically had to emulating human intelligence and end up in a
position similar to chess, where domain knowledge is very limited and evaluation is
done at the level of an amateur, if the search even ends before reaching a terminal node.
When Shannon originally proposed the two types of chess AI, he did not take a
subjective stance on which one he wanted to succeed. Instead, he believed that
chess AIs would either emulate thought or they would stray away from it entirely,
and that either result would have very interesting implications.32 With type-A
approaches winning not just chess but go as well, those implications can be expanded upon.
Although Deep Blue, Crazy Stone, and other game AIs have very limited domain
knowledge, one could hardly tell without knowing in advance that a computer was
playing, because deep searching produces a very similar result. When Isaac Asimov
wrote Robbie to act hurt when accused of cheating in hide-and-seek, it’s unlikely
that he intended Robbie to be searching possible reactions, pruning ones that
achieved nothing, and selecting the one that had the best outcome for Gloria. If the
course of AI continues on the path set by the test cases of board games, then that’s
what a future Robbie will be doing, and to eight-year-old Gloria, Robbie will be
indistinguishable from a human.
Endnotes
1. Shannon, C. "Programming a Computer for Playing Chess." Philosophical Magazine 41 (314), 1950.
2. Ensmenger, Nathan. "Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm." Social Studies of Science 42, no. 1 (February 1, 2012): 5–30. doi:10.1177/0306312711424596.
3. Ibid.
4. Hsu, Feng-Hsiung. "Cracking GO." IEEE Spectrum, 1 Oct. 2007. <http://spectrum.ieee.org/computing/software/cracking-go>
5. Shannon, C. "Programming a Computer for Playing Chess." Philosophical Magazine 41 (314), 1950.
6. Ensmenger, Nathan. "Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm." Social Studies of Science 42, no. 1 (February 1, 2012): 5–30. doi:10.1177/0306312711424596.
7. Ibid.
8. Shannon, C. "Programming a Computer for Playing Chess." Philosophical Magazine 41 (314), 1950.
9. Ensmenger, Nathan. "Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm." Social Studies of Science 42, no. 1 (February 1, 2012): 5–30. doi:10.1177/0306312711424596.
10. Ibid.
11. Silver, Alexandra. "Top 10 Man-vs.-Machine Moments." Time, 15 Feb. 2011. <http://content.time.com/time/specials/packages/article/0%2C28804%2C2049187_2049195_2049261%2C00.html>
12. Fine, Josh. "MSNBC - Deep Blue Wins in Final Game of Match." MSNBC, 1997. <http://www9.georgetown.edu/faculty/bassr/511/projects/letham/final/Chess.htm>
13. "Deep Blue Defeats Garry Kasparov in Chess Match." History.com, This Day in History, 5/11/1997. <http://www.history.com/this-day-in-history/deep-blue-defeats-garry-kasparov-in-chess-match>
14. Ibid.
15. Hsu, Feng-Hsiung. "Cracking GO." IEEE Spectrum, 1 Oct. 2007. <http://spectrum.ieee.org/computing/software/cracking-go>
16. Ibid.
17. Ensmenger, Nathan. "Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm." Social Studies of Science 42, no. 1 (February 1, 2012): 5–30. doi:10.1177/0306312711424596.
18. Hsu, Feng-Hsiung. "Cracking GO." IEEE Spectrum, 1 Oct. 2007. <http://spectrum.ieee.org/computing/software/cracking-go>
19. Ensmenger, Nathan. "Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm." Social Studies of Science 42, no. 1 (February 1, 2012): 5–30. doi:10.1177/0306312711424596.
20. Ibid.
21. Ibid.
22. Hsu, Feng-Hsiung. "Cracking GO." IEEE Spectrum, 1 Oct. 2007. <http://spectrum.ieee.org/computing/software/cracking-go>
23. Ensmenger, Nathan. "Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm." Social Studies of Science 42, no. 1 (February 1, 2012): 5–30. doi:10.1177/0306312711424596.
24. Hsu, Feng-Hsiung. "Cracking GO." IEEE Spectrum, 1 Oct. 2007. <http://spectrum.ieee.org/computing/software/cracking-go>
25. Ibid.
26. Sensei's Library. "Computer Go." Wiki article, 2013, viewed 17 March 2014. <http://senseis.xmp.net/?ComputerGo>
27. Hsu, Feng-Hsiung. "Cracking GO." IEEE Spectrum, 1 Oct. 2007. <http://spectrum.ieee.org/computing/software/cracking-go>
28. Sensei's Library. "Computer Go." Wiki article, 2013, viewed 17 March 2014. <http://senseis.xmp.net/?ComputerGo>
29. Jing. "Crazy Stone Computer Go Program Defeats Ishida Yoshio 9 Dan with 4 Stones." Go Game Guru, n.d. <http://gogameguru.com/crazy-stone-computer-go-ishida-yoshio-4stones/>
30. Hsu, Feng-Hsiung. "Cracking GO." IEEE Spectrum, 1 Oct. 2007. <http://spectrum.ieee.org/computing/software/cracking-go>
31. Ibid.
32. Shannon, C. "Programming a Computer for Playing Chess." Philosophical Magazine 41 (314), 1950.