AI Chess Machine Learns by Itself

Deep learning researchers are combining neural networks with the computing power of GPUs to reduce the training time required to produce good results. But this merely accelerates “brute force” learning, which tries everything and tests for success or failure. Human intelligence uses filters to eliminate choices that make no sense. By narrowing the branches it evaluates, a deep learning program can accomplish the same thing.
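The pruning idea can be sketched in a few lines. This is a toy illustration, not Giraffe's actual code: the game tree, the cheap heuristic, and the leaf scores are all made up for the example. It contrasts exhaustive search over every branch with a search that keeps only the top few branches ranked by an evaluation function:

```python
# Toy sketch (not Giraffe's code): exhaustive search vs. pruned search
# that only explores the branches a cheap evaluator ranks highest.
# The "game tree" is nested lists; leaves are static position scores.

def brute_force_best(tree):
    """Exhaustively score every branch and return the best leaf value."""
    if isinstance(tree, (int, float)):          # leaf: a static evaluation
        return tree
    return max(brute_force_best(child) for child in tree)

def pruned_best(tree, evaluate, keep=2):
    """Search only the `keep` children ranked highest by a cheap
    evaluation function: the branch-narrowing idea described above."""
    if isinstance(tree, (int, float)):
        return tree
    # Rank children with a quick heuristic, then recurse into the top few.
    promising = sorted(tree, key=evaluate, reverse=True)[:keep]
    return max(pruned_best(child, evaluate, keep) for child in promising)

def first_leaf(tree):
    """Toy heuristic: estimate a subtree by its first reachable leaf."""
    while not isinstance(tree, (int, float)):
        tree = tree[0]
    return tree

game = [[3, 5], [2, 9], [0, 1], [4, 4]]  # 4 candidate moves, 2 replies each
print(brute_force_best(game))                  # examines all 8 leaves -> 9
print(pruned_best(game, first_leaf, keep=2))   # examines only 2 branches -> 5
```

Note that the pruned search here returns 5 rather than the true best leaf, 9: with a weak heuristic, pruning can discard the best line. That is why the quality of the evaluation function, the part Giraffe learns with a neural network, is the hard problem.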

Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level – [technologyreview.com]

In a world first, an artificial intelligence machine plays chess by evaluating the board rather than using brute force to work out every possible move.

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.

Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.

This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

Neural network chess computer abandons brute force for selective, ‘human’ approach – [thestack.com]

A chess computer has taught itself the game and advanced to ‘international master’ level in only three days by adopting a more ‘human’ approach to the game. Matthew Lai, an MSc student at Imperial College London, devised a neural-network-based chess computer dubbed Giraffe [PDF] – the first of its kind to abandon the ‘brute force’ approach to competing with human opponents in favour of a branch-based approach, in which the AI evaluates which of the move branches it has already calculated are most likely to lead to victory.

Most chess computers iterate through millions of moves in order to select their next position, and it was this traditional ‘depth-based’ approach that led to the first ground-breaking machine-over-human chess victory in 1997, when IBM’s Deep Blue beat reigning world champion Garry Kasparov.

Lai sought instead to create a more evolutionary, end-to-end AI, building and improving on previous efforts to leverage neural networks, which paid performance penalties and faced the problem of deciding which of the potentially millions of ‘move branches’ to explore efficiently.
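As a loose illustration of what evaluating a position and ranking branches means, the sketch below uses the standard textbook material-count evaluator as a stand-in for a learned network. Everything here is an assumption for the example: the piece encoding, the weights, and the toy positions are not from Giraffe.

```python
# Hedged sketch: a hand-written material evaluator standing in for the
# learned position evaluation a neural network would provide.
# Positions are toy strings of piece letters; lowercase = opponent.

# Standard piece values in centipawns (a common chess-engine convention).
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def evaluate(position):
    """Score a position as material balance: our pieces minus theirs."""
    score = 0
    for piece in position:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

def rank_moves(candidates):
    """Order candidate resulting positions by the evaluator, so the
    search explores the most promising branches first."""
    return sorted(candidates, key=evaluate, reverse=True)

positions = ["QPPp", "RPPp", "NPPq"]     # toy positions after 3 moves
print(rank_moves(positions))             # best material balance first
```

Swapping this hand-coded `evaluate` for a trained network is, in spirit, the substitution the articles describe: the search machinery stays, but the judgment of which branches are worth exploring is learned rather than programmed.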

