
Go-playing Google DeepMind AlphaGo computer defeats human champion

In what they called a milestone achievement for artificial intelligence, scientists have created a computer program that beat a professional human player at the complex board game called Go, which originated in ancient China.

Game from ancient China is far more complex than chess and 'pinnacle of game AI research'

AlphaGo's victory in the ancient Chinese board game is a breakthrough for artificial intelligence, showing the program developed by Google DeepMind has mastered one of the most creative and complex games ever devised. (Google/YouTube)

You can chalk it up as another victory for the machines.

In what they called a milestone achievement for artificial intelligence, scientists said on Wednesday they have created a computer program that beat a professional human player at the complex board game called Go, which originated in ancient China.

The feat recalled IBM supercomputer Deep Blue's 1997 match victory over chess world champion Garry Kasparov. But Go, a strategy board game most popular in places like China, South Korea and Japan, is vastly more complicated than chess.

"Go is considered to be the pinnacle of game AI research,"said artificial intelligence researcher Demis Hassabis of Google DeepMind, the British company that developed the AlphaGoprogram. "It's been the grand challenge, or holy grail if youlike, of AI since Deep Blue beat Kasparov at chess."

DeepMind was acquired in 2014 by Google.



AlphaGo swept a five-game match against three-time European Go champion and Chinese professional Fan Hui. Until now, the best computer Go programs had played only at the level of human amateurs.

Facebook also has Go AI

AlphaGo is one of two Go-playing computers whose successes were announced this week. Facebook CEO Mark Zuckerberg wrote in a post Wednesday that Facebook's artificial intelligence team was "getting close" to building an AI that can win at Go. A paper published online by Facebook researchers says their computer, darkfores2, won third place in the January KGS Go Tournament, which takes place online. The paper has not yet been published in a peer-reviewed journal.

In Go, also called Igo, Weiqi and Baduk, two players place black and white pieces on a square grid, aiming to take more territory than their adversary.

"It's a very beautiful game with extremely simple rules thatlead to profound complexity. In fact, Go is probably the mostcomplex game ever devised by humans," said Hassabis, a formerchild chess prodigy.

Scientists have made artificial intelligence strides in recent years, making computers think and learn more like people do.

Hassabis acknowledged some people might worry about the increasing capabilities of artificial intelligence after the Go accomplishment, but added, "We're still talking about a game here."

Millions of practice games

While AlphaGo learns in a more human-like way, it still needs far more practice games than a human expert needs to get good at Go, millions rather than thousands, Hassabis said.

The scientists foresee future applications for such AI programs including: improving smartphone assistants (Apple's Siri is an example); medical diagnostics; and eventually collaborating with human scientists in research.

Hassabis said South Korea's Lee Sedol, the world's top Go player, has agreed to play AlphaGo in a five-game match in Seoul in March. Lee said in a statement, "I heard Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win, at least this time."

The findings were published in the journal Nature.

With a file from CBC News