How does artificial intelligence play StarCraft II?

   A great achievement in machine learning, made possible by the complexity of StarCraft!

   DeepMind has announced a new milestone in artificial intelligence: training an AI to master StarCraft II. AlphaStar, a project of Google's AI research lab, has reached a level that surpasses 99.8% of flesh-and-blood players.

   Not only that, DeepMind says it tried to make AlphaStar play as much like a human as possible. For example, it trained the system on all three of the game's races (Terran, Protoss, and Zerg), which raises the difficulty considerably. In addition, AlphaStar's view of the map was restricted to a camera much like the one human players see, and its action rate was capped at 22 non-duplicated actions per five seconds of play, to keep its speed comparable to a real player's.
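An action cap like the one described can be pictured as a sliding-window rate limiter. The sketch below is a hypothetical illustration, not DeepMind's actual code; only the numbers (22 actions per 5 seconds) come from the reported limit:

```python
from collections import deque

class ActionRateLimiter:
    """Caps an agent's action rate with a sliding time window,
    e.g. at most 22 actions in any 5-second span."""

    def __init__(self, max_actions=22, window_seconds=5.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()  # times of recently allowed actions

    def try_act(self, now):
        """Return True if an action is allowed at time `now` (seconds)."""
        # Drop timestamps that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False
```

With this scheme the 23rd action attempted inside one five-second window is simply refused, and capacity frees up again as old actions age out of the window.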

   Despite these heavy handicaps, AlphaStar still reached Grandmaster rank in StarCraft II, the tier reserved for the very best players. AlphaStar officially became the first AI system to achieve this feat.

   DeepMind sees AlphaStar's new achievement as proof that artificial intelligence's learning capabilities have reached new heights, and that today's success could lay the foundation for self-driving cars, autonomous robots, and more advanced image-recognition systems.

   “The complexity of this video game is far higher than chess, since a player controls hundreds of units; it is more complicated than Go, with some 10^26 possible actions at every step; and players have very little information about their opponent. Even in poker you know more about the other side.”

   In January, AlphaStar beat top StarCraft players but still lost a match to Grzegorz “MaNa” Komincz. After that defeat, DeepMind spent January through June improving AlphaStar, then set it loose against ranked players around the world. Those matches took place this past July and August.

   The results wowed AlphaStar's fans (and dismayed those who fear a bleak future): AlphaStar became one of the highest-ranked StarCraft II players in the world, just short of superhuman. Only 0.2% of players can beat AlphaStar at this point, and with further optimization it may soon take the throne outright.

   This historic milestone is on par with the success of another AI research company, San Francisco-based OpenAI. In April, OpenAI's artificial intelligence system defeated the strongest Dota 2 team in the world (perhaps machines are immune to trash talk, so they can keep their spirits up), after narrowly losing to two professional teams the year before.

   Both of these video-game achievements demonstrate the superior gaming capabilities of machines.

   The effort does not simply demonstrate what AI can do; it suggests that with enough resources, time, and effort, an artificial intelligence system can surpass humans at almost any intellectual challenge, whether classic mind games or video games. It is also a new milestone for reinforcement learning, a branch of machine learning that combines massive computing power with training techniques such as simulated environments, letting a system learn by trial and error rather than from labeled data.
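The core reinforcement-learning loop can be shown on a toy problem. The sketch below is a minimal tabular Q-learning example on a made-up five-cell corridor, nothing like the deep networks AlphaStar uses, but it illustrates learning purely from interaction with a simulated environment:

```python
import random

# Toy simulated environment: a 1-D corridor of cells 0..GOAL.
# The agent starts at 0 and earns a reward only on reaching the goal.
GOAL = 4
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: action values learned from trial and error alone.
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

random.seed(0)
for episode in range(300):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy heads straight for the goal (+1 everywhere).
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

No labeled data is ever supplied: the reward signal alone shapes the value table, which is the essential idea that scales (with very different machinery) up to systems like AlphaStar.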

   Like OpenAI, DeepMind trains its AI by pitting it against itself; sheer processing speed lets the machine accumulate hundreds of years of playing experience in a few months. That is how it can hold its own against veteran players of extremely complex games.

   However, machines are still just machines: they remain narrow specialists and can hardly multitask. No matter how much an AI plays chess, it cannot pick up Dota right away, because its learned behavior does not transfer that easily. That is why DeepMind and other AI labs use reinforcement learning to let machines discover how to play on their own, and why they produce so many "weird" strategies that even flesh-and-blood players have started learning from.
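The self-play idea can be sketched in miniature. The example below is a toy stand-in, not DeepMind's actual league training: fictitious play on rock-paper-scissors, where an agent repeatedly best-responds to the empirical mix of its own past moves. Playing against itself pushes it toward the unexploitable uniform strategy:

```python
# Fictitious self-play on rock-paper-scissors (toy illustration only).
MOVES = ["rock", "paper", "scissors"]
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

counts = {m: 1 for m in MOVES}  # history of the agent's own play

for _ in range(30000):
    # The "opponent" is the agent's own historical strategy.
    likely = max(MOVES, key=lambda m: counts[m])
    # Best-respond: play the move that beats the likeliest move.
    counts[COUNTER[likely]] += 1

total = sum(counts.values())
mix = {m: counts[m] / total for m in MOVES}
# mix converges toward {rock: 1/3, paper: 1/3, scissors: 1/3}
```

Any lopsided habit gets punished by the agent's own best response, so the strategy self-corrects toward balance, which is the same pressure that makes self-play training produce robust, hard-to-exploit play.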

   “AlphaStar is a curious player who doesn't follow conventional rules: it has the reflexes and speed of top players, but its own strategies and playstyle. The way AlphaStar was trained produced an unexpected style of play; it opens up a new dimension, suggesting there are many paths gamers have not yet explored,” said Diego "Kelazhur" Schwimer, a top Brazilian StarCraft player ranked 19th in the world.

   DeepMind does not intend to apply superhuman machine learning only to video games. The most obvious application of such a system would be training robots to help with housework and factory work; with further progress, machines could safely drive cars through city traffic.

   It is thanks to complex games like StarCraft and Dota that artificial intelligence technology is advancing as fast as it is today.