What is: DeepMind AlphaStar?
Year | 2019 |
Data Source | CC BY-SA - https://paperswithcode.com |
AlphaStar is a reinforcement learning agent for tackling the game of StarCraft II. It learns a policy $\pi_{\theta}\left(a_{t} \mid s_{t}, z\right)$, represented by a neural network with parameters $\theta$, that receives observations $s_{t}$ as inputs and selects actions $a_{t}$ as outputs. Additionally, the policy conditions on a statistic $z$ that summarizes a strategy sampled from human data, such as a build order [1].
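As a rough illustration of this interface, the toy sketch below shows a policy that conditions on both the observation $s_t$ and the human-strategy statistic $z$. The feature sizes are made up, and a simple linear-softmax model stands in for the actual deep network:

```python
# Toy sketch (assumed shapes, not AlphaStar's actual network): a policy
# pi_theta(a_t | s_t, z) that conditions on both the observation s_t and a
# human-strategy statistic z by concatenating them before a linear layer.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def policy(theta, s_t, z):
    """Return a distribution over actions given observation s_t and statistic z."""
    features = np.concatenate([s_t, z])   # condition on both inputs
    logits = theta @ features             # stand-in for the deep network
    return softmax(logits)

rng = np.random.default_rng(0)
s_t = rng.normal(size=8)                  # toy observation features
z = rng.normal(size=4)                    # toy build-order statistic
theta = rng.normal(size=(5, 12))          # toy parameters, 5 actions
print(policy(theta, s_t, z))              # pi_theta(. | s_t, z)
```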
AlphaStar combines several architectures to handle the different kinds of input features. Observations of player and enemy units are processed with a Transformer. Scatter connections are used to integrate spatial and non-spatial information. The temporal sequence of observations is processed by a core LSTM. Minimap features are extracted with a residual network. To manage the combinatorial action space, the agent uses an autoregressive policy and a recurrent pointer network.
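A condensed sketch of how these components could fit together is given below. The layer sizes, module wiring, and class names are assumptions made for illustration, not the paper's exact architecture (in particular, the real action heads are considerably more elaborate):

```python
# Assumed wiring: Transformer over unit entities, residual CNN over the
# minimap, a scatter connection placing unit embeddings onto the spatial
# plane, an LSTM core, and an autoregressive pointer-style head over units.
import torch
import torch.nn as nn

class AlphaStarLikeNet(nn.Module):
    def __init__(self, entity_dim=32, d_model=64, map_ch=8, hidden=128, n_action_types=10):
        super().__init__()
        self.entity_in = nn.Linear(entity_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.entity_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # minimap trunk; extra d_model channels receive the scattered entities
        self.map_in = nn.Conv2d(map_ch + d_model, 32, 3, padding=1)
        self.map_res = nn.Sequential(nn.ReLU(), nn.Conv2d(32, 32, 3, padding=1))
        self.core = nn.LSTM(d_model + 32, hidden, batch_first=True)
        self.action_type = nn.Linear(hidden, n_action_types)
        self.query = nn.Linear(hidden + n_action_types, d_model)  # pointer query over units

    def forward(self, entities, minimap, coords, state=None):
        # entities: (B, U, entity_dim), minimap: (B, map_ch, H, W), coords: (B, U, 2) int
        e = self.entity_encoder(self.entity_in(entities))          # (B, U, d_model)
        B, U, D = e.shape
        H, W = minimap.shape[-2:]
        # scatter connection: place each unit's embedding at its map location
        plane = torch.zeros(B, D, H, W)
        for b in range(B):
            for u in range(U):
                y, x = coords[b, u]
                plane[b, :, y, x] += e[b, u]
        m = self.map_in(torch.cat([minimap, plane], dim=1))
        m = torch.relu(m + self.map_res(m))                        # residual block
        summary = torch.cat([e.mean(dim=1), m.mean(dim=(2, 3))], dim=-1)
        core_out, state = self.core(summary.unsqueeze(1), state)   # temporal LSTM core
        h = core_out.squeeze(1)
        # autoregressive heads: the chosen action type feeds the unit-pointer head
        action_logits = self.action_type(h)
        chosen = nn.functional.one_hot(action_logits.argmax(-1), action_logits.shape[-1]).float()
        q = self.query(torch.cat([h, chosen], dim=-1))
        unit_logits = torch.einsum("bd,bud->bu", q, e)             # attend over unit embeddings
        return action_logits, unit_logits, state

net = AlphaStarLikeNet()
action_logits, unit_logits, _ = net(torch.randn(2, 5, 32),
                                    torch.randn(2, 8, 16, 16),
                                    torch.randint(0, 16, (2, 5, 2)))
```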
The agent is first trained with supervised learning on human replays. Its parameters are subsequently trained with reinforcement learning to maximize the win rate against opponents. The RL update is based on a policy-gradient algorithm similar to actor-critic. Updates are performed asynchronously and off-policy; to correct for this, a combination of TD($\lambda$) and clipped importance sampling (V-trace) is used, together with a new self-imitation algorithm (UPGO).
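For concreteness, the sketch below computes V-trace off-policy corrected value targets in the style of the IMPALA paper, which this kind of asynchronous actor-critic update builds on. The function name, toy trajectory, and hyperparameters are illustrative, not taken from AlphaStar's implementation:

```python
# Minimal sketch of V-trace targets: importance ratios between the learner
# policy pi and the behaviour policy mu are clipped and used to correct
# off-policy TD errors, accumulated backwards along the trajectory.
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos, gamma=0.99,
                   rho_bar=1.0, c_bar=1.0):
    """Compute V-trace targets v_s for a trajectory gathered off-policy.

    rewards, values, rhos are length-T arrays; `rhos` are the importance
    ratios pi(a_t|s_t) / mu(a_t|s_t) between learner and actor policies.
    """
    T = len(rewards)
    next_values = np.append(values[1:], bootstrap_value)
    clipped_rho = np.minimum(rho_bar, rhos)
    clipped_c = np.minimum(c_bar, rhos)
    deltas = clipped_rho * (rewards + gamma * next_values - values)
    vs = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):                 # backwards recursion
        acc = deltas[t] + gamma * clipped_c[t] * acc
        vs[t] = values[t] + acc
    return vs

# toy example: 4-step trajectory ending with a terminal win reward of +1
print(vtrace_targets(rewards=np.array([0.0, 0.0, 0.0, 1.0]),
                     values=np.array([0.1, 0.2, 0.3, 0.5]),
                     bootstrap_value=0.0,
                     rhos=np.array([0.9, 1.2, 1.0, 0.8])))
```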
Lastly, to address the game-theoretic challenges of StarCraft, AlphaStar is trained with league training, which approximates a fictitious self-play (FSP) setting: cycles are avoided by computing a best response against a uniform mixture of all previous policies. The league of potential opponents includes a diverse range of agents, including the policies of both current and previous agents.
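The snippet below is a minimal sketch of the fictitious self-play idea only (training each new best response against a uniform mixture of past policy snapshots); the class and variable names are invented, and the actual league training is richer than plain uniform FSP:

```python
# Sketch of FSP opponent sampling: instead of always playing the latest agent
# (naive self-play, which can cycle: A beats B beats C beats A), each new best
# response is trained against a uniform mixture over all previous policies.
import random

class FictitiousSelfPlayLeague:
    def __init__(self):
        self.past_policies = []          # snapshots of earlier agents

    def add_snapshot(self, policy):
        self.past_policies.append(policy)

    def sample_opponent(self):
        # uniform mixture over all previous policies
        return random.choice(self.past_policies)

# toy usage: train a sequence of agents, each versus the uniform mixture
league = FictitiousSelfPlayLeague()
league.add_snapshot("supervised_agent_v0")
for generation in range(3):
    opponent = league.sample_opponent()
    new_policy = f"best_response_gen{generation}"  # stand-in for RL training
    league.add_snapshot(new_policy)
```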
Image Credit: Yekun Chai
References
- Chai, Yekun. "AlphaStar: Grandmaster level in StarCraft II Explained." (2019). https://ychai.uk/notes/2019/07/21/RL/DRL/Decipher-AlphaStar-on-StarCraft-II/
Code Implementation
- https://github.com/opendilab/DI-star