Chinese scientists have developed an AI that plays strategic war games like a human. In fact, according to its creators, the AlphaWar AI has passed the Turing test. For those unaware, the Turing test assesses whether an AI can fool humans into believing it is another person. Various military strategists played war games with and against AlphaWar, yet none of them suspected they were playing against a machine.
The scientists announced that the AI had passed the Turing test in a paper published in the journal Acta Automatica Sinica in February. They named AlphaWar after Google DeepMind’s AlphaGo, the first AI to beat top players of the Chinese board game Go.
The war-game-playing AI was developed at the Institute of Automation, Chinese Academy of Sciences in Beijing. The team was led by Professor Huang Kaiqi, who says the AI first passed the Turing test in 2020.
AlphaWar learns by playing against itself
The AI learns both from playing with and competing against military strategists and from playing against itself. For now, AlphaWar cannot beat top human strategists in areas such as weapon use and coordination between units. According to the team responsible for AlphaWar, creating realistic battle simulations is challenging even for the most powerful computers.
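The article does not describe AlphaWar's training method in detail, but "playing against itself" refers to self-play learning, where two copies of the same policy compete and the shared policy is reinforced from each game's outcome. The toy sketch below illustrates the idea on a trivial made-up game (higher number wins); all names and the update rule here are illustrative assumptions, not AlphaWar's actual code.

```python
import random

# Hypothetical illustration of self-play learning, not AlphaWar's method.
# Two copies of one policy play a toy game: each side picks a move 0-2,
# and the higher move wins. The shared preference table is reinforced
# toward whichever move won each self-play game.

def play_self_play_games(n_games, seed=0):
    rng = random.Random(seed)
    prefs = [1.0, 1.0, 1.0]  # shared policy: one weight per move

    def pick():
        # Sample a move proportionally to its current weight.
        total = sum(prefs)
        r = rng.uniform(0, total)
        for move, weight in enumerate(prefs):
            r -= weight
            if r <= 0:
                return move
        return len(prefs) - 1

    for _ in range(n_games):
        a, b = pick(), pick()  # both sides use the same policy
        if a == b:
            continue  # tie: no learning signal
        winner = max(a, b)  # higher move wins this toy game
        prefs[winner] += 0.1  # reinforce the winning move

    return prefs

prefs = play_self_play_games(2000)
```

After a few thousand self-play games, the policy has shifted its weight toward the dominant move, without ever being told the game's rules explicitly; real systems replace the preference table with a neural network and the toy game with a full simulation.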
AI also still struggles to account for the human factor. Human error can turn the tide of battle, as can a soldier pulling off an incredible feat. To surpass top human strategists, AlphaWar would therefore need to learn to better assess human behaviours and instincts, something Professor Huang’s team says is exceptionally challenging for an AI.
China isn’t the only country to experiment with AI in war games. In 2007, the US Defense Advanced Research Projects Agency launched Deep Green, a program to develop a war game AI. However, the program was scrapped in 2020, although a new program dubbed Game Breaker took its place.