Let’s first look at an analogy similar to Dijkstra’s: the car races that used to be held in the early days of the car manufacturing industry. Cars would be pitted against human runners, and later against horses, to see which was actually faster. Regardless of the fact that history has come full circle, so that in many urban areas it’s much faster to walk than to drive, most of us would agree that such comparisons are useless as sports; the competitors have the same goal – reaching the finish line first – but they aren’t using the same method and don’t have the same resources or constraints. Similarly, most of us would agree that a serious comparison of the act of walking to the act of driving is not intellectually interesting, even though comparing the results of these activities could well be interesting from such perspectives as environmental and economic impact, physical fitness, and social interactions.
So, did Dijkstra mean that a computer can’t think? No, he was telling us that this question traps us into looking for clear-cut definitions (“what is thought?”). Instead, we could be devoting our time to making a submarine that performs its “swimming” as well as it can, even if we end up having to put quotes around “swimming”. Indeed, those who write about computers, and about artificial intelligence in particular, have come to use these quotes every time they mention a computer “thinking”, “reasoning”, “deciding” etc.
In a way, however, this misses a deep and central theme of artificial intelligence: many of its practitioners are drawn to this field precisely because of the opportunity to learn about how we humans actually think. The founding document of modern artificial intelligence, the proposal that led to the Dartmouth Summer Research Conference on Artificial Intelligence in 1956, made it quite clear: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This proposal, by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, went on to explicitly mention quite a few human activities associated with intelligence, including learning, problem-solving, and creativity.
Note the use of the word “simulate”. It can be understood in at least two ways. One way is similar in spirit to Dijkstra’s claim – humans “think”, computers “simulate thinking”, and we’d better not spend too much time arguing whether these two terms are similar or different. The other way, of course, is to see the similarity or difference as the whole reason for investigating how well machines can simulate thought – so that we can gain a better understanding of thought, intelligence and consciousness. At the risk of oversimplification, let’s call the first interpretation the “engineering path”, since it’s concerned with how to make better machines, and the second interpretation the “philosophy path”, since it dates back at least to the Greek philosophers’ advice “know thyself”, carved into the temple of Apollo at Delphi.
Alan Turing – who among other things was also an engineer and an occasional philosopher – defined his famous “Turing Test”. This test was featured in his 1950 paper “Computing Machinery and Intelligence”, one of the most frequently cited papers in modern philosophical literature. The claim of the paper is often understood to be the following: if a machine can simulate thought to the point where it is indistinguishable from a human, then we should remove the quotes and grant that it really thinks. This is not accurate. In fact, Turing was trying to sidestep the emotionally charged question “can machines think?” and replace it with the question of whether we can distinguish human thought from whatever the computer does as it simulates thought. This isn’t too far from Dijkstra’s opinion that the question of whether a computer can think is not interesting. Yet, Turing emphasizes an important issue that Dijkstra didn’t address: Turing designed his test to draw “a fairly sharp line between the physical and intellectual capacities of a man”. For example, we’d probably not be too interested in any project that would mechanically simulate human swimming to the extent where it is indistinguishable from human swimming: even if we build a swimming robot, it is not generally interesting to design tests that would detect whether the swimmer is human or robotic, or to design robots that would pass such tests. As Turing said, human intellect is different from other human capabilities, so while Turing’s test is meaningless for swimming, it is meaningful for thought. Philosopher Daniel Dennett gave a different example of the difference between simulating physical events and simulating thoughts: A computer simulation of a hurricane does not drench the office. It has none of the physical effects of a real hurricane (fortunately!).
However, computers that have simulated thought to at least some extent have proven mathematical theorems, generated music that was appreciated by human audiences, solved difficult problems etc. – so the computations made by these machines had effects comparable to the same tasks performed by humans.
Now, let’s turn to the game of chess. Top human players of the “game of kings” have been admired for over a thousand years as being among the best at this unique human activity – thinking. As described in the chess section of the web site for the Computer History Museum (a recent TFOT “site of the week”), chess has been a favorite for artificial intelligence research. As chess programs became better, interest in chess competitions between humans and machines grew. Arguably, worldwide interest reached its peak in 1997, during the famous match between then-top-ranking grandmaster Garry Kasparov and IBM’s Deep Blue machine. While Kasparov’s defeat was quite possibly due to limitations in the way the competition was set up (for example, Kasparov didn’t have the opportunity to learn the machine’s “style” before the match, which is unheard-of in competitions between top human players), it was clear that this event demonstrated the emergence of extremely strong machine players. Later matches of this kind, including the match in late 2006 between world champion Vladimir Kramnik and the program called “Deep Fritz” (a decisive win for the machine in a high-quality match, marred by an unprecedented error by Kramnik in one of the games), drew less popular interest. This loss of interest may have obscured the fact that in the past decade, just about all high-level matches between computers and humans have ended in either draws or – more frequently – machine victories.
How should we react to this? See if your reaction matches one or more of the possibilities below. I’ve named these possibilities after some movies, sometimes due to a similarity in the movie’s plot and sometimes just by free-association:
1. The skies are falling. If computers defeat us in chess, they’ll soon defeat us in everything else. The age of human dominance on this planet is ending, and machines will soon replace us (“Terminator”).
2. We’ll win because we have more options and fewer limitations. Even if they can defeat us in chess, we can always pull their plug (“Space Odyssey”).
3. We humans are a resourceful bunch. We’ll learn how to defeat these computers even in chess (“Rocky”).
4. So what? The fact that cars move faster than people does not mean we won’t have 100-meter dashes anymore. Like running, chess is a human activity, and the performance of machines is irrelevant to this activity (“Forrest Gump”).
5. We know how these programs work. They do nothing but extremely quick examinations of potential moves, the moves that may be played in response, and so on. This is not how humans think, and it does not teach us anything about ourselves, so let’s not call it thinking (“Drowning by Numbers”).
6. Something deeply meaningful has happened, and it will impact the way we understand ourselves, but the impact will take a long time to unfold (“Artificial Intelligence”).
First, I’d like to dismiss the first two reactions. The “Terminator” reaction is way off – and I doubt that many people really had that reaction. Deep Blue did not feel any motivation to defeat or humiliate Kasparov. Its victory was a victory for its human creators. If we ever enter a violent confrontation with self-motivated machines, it will not be as a result of the developments in computer chess. The “Space Odyssey” reaction is actually a different aspect of the same motif – yes, machines are out to get us, and they may be stronger than us in many things, but we know their weaknesses. Again, Deep Blue and current chess programs aren’t “out to get us”, and there is nothing in the development of these machines that would bestow them with such motivations. The question of whether other types of Artificial Intelligence research could pose a threat to humanity is much wider, and at least one leading technologist has issued a grave warning, but it’s outside the scope of this column (briefly, my opinion is that while dangers exist in any technology research, Artificial Intelligence research may be one of the areas with the highest potential rewards and the lowest potential for disaster).
Will a new breed of human champions defeat the new, undefeatable machine champions (“Rocky”)? My friends in the world of chess software point at the new generation of human chess players, who have grown up and honed their art with the constant presence and support of chess software. They learn from machines, they use chess software to grow their intuitive grasp of chess positions and to extend their analytical skills, and they use their computers to examine far more potential developments of any position that they consider. They also learn, as Kramnik did in his match against Deep Fritz, that machine players are unforgiving in exploiting any error made by their opponents. The class of players that emerges from such training may actually be qualitatively different from any generation of past players. If any human can defeat today’s computer software, it makes sense that it would be this generation. However, if they do grow up to defeat today’s machines, would they also be able to defeat tomorrow’s software? Probably not.
We’re left with three options. Two of them consider the whole issue irrelevant. One of them (“Forrest Gump”), looking at the human side, says that chess, as a human activity, should not be mixed up with anything computers do. Well, computer chess is already mixed into the life of all professional chess players. They use chess software to prepare for tournaments, to evaluate and develop new strategies, to train themselves and to create training for others. This is not cheating, just as asking other chess players for advice or studying books is not considered cheating. Cheating using chess machines is possible, of course. During the recent Candidates Matches for the 2007 Chess World Championship, held in Elista, Kalmykia, there were half-humorous allegations that candidates were taking restroom breaks to consult with well-hidden computer screens. In addition, the developers of a leading chess program were denied entry to the hall where one match was taking place (it was feared that they might use their software to help one of the contenders). So, human chess has become intertwined with technology, following the example of many other sports: returning to the swimming analogy, today’s swimmers use advanced materials in their clothing to reduce friction with the water, as well as complex analysis of their movements.
The other reaction that considers computer chess irrelevant to human thought (“Drowning by Numbers”) examines the machine’s side, saying that computer chess is nothing more than intensive number-crunching. Since this is nothing like our experience of thinking about chess or about anything else, it does not teach us about ourselves. This argument has been applied time and again to debates on Artificial Intelligence, and in a sense it is impossible to refute: As long as we don’t define what thinking is, we can’t tell whether computers meet the requirements. And if we say “I know what thinking about chess feels like, and it certainly doesn’t feel like ranking millions of positions every second” – well, do we really have privileged access to the mechanisms of our own thoughts? A century of experimental psychological research seems to indicate otherwise. It is also important to note that computer chess is not confined to brute-force evaluation of as many positions as possible. It has often been the case that cheaper hardware prevailed in chess competitions against hardware capable of evaluating a much higher number of positions per second. As human players would tell you, it’s important to decide which positions to consider in detail, and the same goes for computers: examining all possible positions for even eight moves ahead is beyond the capabilities of the fastest computer hardware. For this, you need a way to rapidly rank intermediate positions and decide whether each of them is worth exploring further. This “evaluation function” may be described as a guess regarding the ultimate future of positions that would result from the player’s possible moves, from the opponent’s possible replies to each such move, to the player’s counter-response for each such reply, and so on.
If this guess indicates that the position leads to losing the game, the computer’s limited time will be invested in exploring the evolution of the game starting from other moves that the computer can make. Competitions between chess programs are more often determined by the quality of the evaluation function than by the raw speed of evaluation. If this sounds like how we would describe intuitive judgment, I don’t think the similarity is coincidental.
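The search-plus-evaluation scheme described in the last two paragraphs can be sketched in a few lines of code. The snippet below is a hypothetical, minimal illustration – not the code of any real chess engine – showing a depth-limited “negamax” search over a toy game tree, where the evaluation function supplies the “guess” once the depth budget runs out. The tree, the move generator and the scores are all invented for the example.

```python
def negamax(position, depth, moves, evaluate):
    """Best achievable score for the side to move at `position`.

    `moves(position)` yields (move, next_position) pairs;
    `evaluate(position)` is the heuristic guess (the "evaluation
    function") used when the depth budget runs out or no moves exist.
    Scores are always from the point of view of the side to move.
    """
    children = list(moves(position))
    if depth == 0 or not children:
        return evaluate(position)
    best = float("-inf")
    for _, nxt in children:
        # The opponent's best reply is our worst case, hence the negation.
        best = max(best, -negamax(nxt, depth - 1, moves, evaluate))
    return best

# Toy game tree (hypothetical): White chooses at "root", Black replies.
tree = {
    "root": [("a", "A"), ("b", "B")],
    "A": [("a1", "A1"), ("a2", "A2")],
    "B": [("b1", "B1"), ("b2", "B2")],
}
# Heuristic scores at the leaves, from the side to move there (White).
leaf_scores = {"A1": 3, "A2": -1, "B1": 5, "B2": -4}

def moves(pos):
    return tree.get(pos, [])

def evaluate(pos):
    return leaf_scores.get(pos, 0)

# White's best guaranteed outcome: move "a", since Black can hold White
# to -1 there, versus -4 after move "b".
print(negamax("root", 2, moves, evaluate))  # → -1
```

Real programs layer alpha-beta pruning, move ordering and vastly richer evaluation functions on top of this skeleton, but the division of labor – deep search guided by a fast heuristic ranking – is the same.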
As you might guess, I arranged it so that we’re left with the option with which I feel most comfortable (“Artificial Intelligence”): At least as far as chess is concerned, computers do not think in the same way as humans do, but there are enough similarities to justify leaving out the scare-quotes when talking about machine thinking. Among these similarities are the following:
(A) Human chess players can reconstruct the plans and expectations of other players according to the moves they make on the chess board (“Ah – he’s moving his knight now because he’s expecting an attack on queen’s side, and it seems he’s willing to risk his exposed pawn”). Significantly, they can do this whether the player is human or machine, and in either case they can not only describe the thinking behind good moves but also offer some explanations for the errors in thinking that lead to bad moves. It’s not important whether these reconstructions match the real thoughts (or, whether the computer indeed had any thoughts at all) – the point is that if human players can’t avoid the same type of analysis, and the same descriptions, for human and machine players, why should anybody else force a distinction between human thinking and computer “thinking”?
(B) Computers learn from their own games and from other games.
(C) Each chess-playing program has its own discernible “style” of playing, which chess experts find easy to detect but sometimes hard to describe. Interestingly, even the software’s designers sometimes find it hard to change the overall style of playing without making extensive changes and risking the quality of play. This is one reason why each program retains the general “feel” of its gameplay, even after extensive cycles of software enhancements. The other reason, of course, is that the software reflects the preferences and personalities of its lead developers, and these change far more slowly than computer code.
(D) Computers often make chess moves that human experts consider “beautiful”, “creative”, “daring” etc. – all an indication that regardless of what goes on inside the machine, its outside behavior is that of a deep and innovative thinker. For example, Kasparov commented, regarding one of the games between the chess programs “Deep Junior” and “Deep Fritz”, that he found one of Junior’s maneuvers so creative and beautiful that he never expected to see such a maneuver in a game between human players. Kasparov said that humans come up with such ideas only when composing chess problems (where they are concerned with beauty and originality and not with winning, and where they can set the position to suit their needs).
Still, if we admit both entities into the class of “thinkers”, we can now also ask in what way their thinking is different. Not surprisingly, it turns out that there are also substantial differences. As mentioned above, humans use chess software in their training, because they need the different viewpoints that only machines can provide. At least some of the chess programs are well-known for their preferences for “strange positions”, such as unbalanced ones. For example, one type of unbalanced position occurs when one side exchanges several pawns for a bishop and a stronger attack. Human players tend to avoid such situations, since they do not appear often in inter-human competitive play and therefore aren’t as well-covered in chess literature as the more familiar situations. The difference between computer and human play is so distinctive that Shay Bushinsky, who, along with Amir Ban, developed “Deep Junior”, one of the world’s best chess programs, has also developed software that reviews the moves made in any chess game and determines, with high probability, whether each side was human or machine. The success of this software hints that chess computers won’t pass a chess-oriented Turing Test, at least if the judges for this test could use Bushinsky’s software. Ironically, sometimes the difference is that it is the machine that plays the more daring, creative and innovative games: It is the human who plays more like what we might expect from a machine.
There’s still a lot to do in computer chess, even if we accept that it has far surpassed human chess (and I’ve given some reasons why this assumption may be wrong). In many ways, it has sometimes been pursued as a hobby, or as a way of gaining publicity, more than in the interests of scientific and philosophical inquiry. As Artificial Intelligence pioneer John McCarthy wrote in 1997, “In 1965 the Russian mathematician Alexander Kronrod said, ‘Chess is the Drosophila of artificial intelligence.’ However, computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila. We would have some science, but mainly we would have very fast fruit flies.” In the decade since these words were written, computer chess has made substantial progress in directions other than speed (most notably in the development of evaluation functions and in pattern recognition for situations similar to those seen before), but McCarthy’s criticism should still be heeded by future researchers.
Having seen some hints supporting machine thinking, and other hints that such thinking differs from human thinking, you may still argue that all we have is hints and subjective judgments. Yet, if we accept both as true, their joint implication is that there’s a great future for both kinds of minds working together on chess, as is actually happening everywhere across the world where chess is taken seriously. If both humans and machines can contribute to each other’s success, in chess or in any other field, it will only be because both have minds, and because they are different kinds of minds.
About the author: Israel Beniaminy has a Bachelor’s degree in Physics and Computer Science, and a Graduate degree in Computer Science. He develops advanced optimization techniques at ClickSoftware technologies, and has published academic papers on numerical analysis, approximation algorithms and artificial intelligence, as well as articles on fault isolation, service management and optimization in industry magazines.