Selecting Robust Strategies in RTS Games via Concurrent Plan Augmentation

AAMAS 2015

The multifaceted complexity of real-time strategy (RTS) games forces AI systems to break policy computation into smaller subproblems: strategic planning, tactical planning, reactive control, and others. To further simplify planning at the strategic and tactical levels, state-of-the-art automatic techniques for this task, such as case-based planning, produce deterministic plans for what is inherently an uncertain environment and fall back on replanning when the game situation diverges from the constructed plan. A major weakness of this approach is its lack of robustness: repairing a failed plan is often impossible, or infeasible under real-time computational constraints, causing a game loss. This paper presents a technique that selects a robust RTS game strategy by borrowing ideas from contingency planning and by exploiting the action concurrency these games allow. Specifically, starting with a strategy and a linear tactical plan that realizes it, our algorithm identifies the plan’s failure modes from available game traces and adds concurrent branches to the plan so that these failure modes are mitigated. In this manner, our approach may train an army reserve concurrently with an attack on the enemy, as a defense against a possible counterattack. After augmenting each strategy from an available library (e.g., one learned from human demonstration), our approach picks the one with the most robust augmented tactical plan. An extensive evaluation on the popular RTS games StarCraft and Wargus, whose engine is shared by several other games, shows that concurrent augmentation significantly improves the win rate and lets the agent prevail in scenarios where baseline strategy selection consistently leads to a loss.
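The selection loop sketched in the abstract can be illustrated in code. The sketch below is a heavily simplified toy model, not the paper's implementation: plans are lists of step names, game traces are per-step outcome dictionaries, and the mapping from a failure mode to a mitigating concurrent branch (e.g., training a reserve to absorb a counterattack) is assumed to be given. All function and key names are hypothetical.

```python
def failure_modes(plan, traces):
    """Steps of a linear plan observed to fail in recorded game traces."""
    return {step for trace in traces for step in plan
            if trace.get(step) == "failed"}

def augment(plan, traces, mitigations):
    """Attach a concurrent branch for every observed failure mode that has
    a known mitigation (e.g. train_reserve mitigating a failed attack)."""
    branches = [mitigations[m] for m in failure_modes(plan, traces)
                if m in mitigations]
    return {"main": plan, "concurrent": branches}

def robustness(aug_plan, traces):
    """Fraction of traces with no unmitigated step failure."""
    mitigated = {b["mitigates"] for b in aug_plan["concurrent"]}
    ok = sum(all(trace.get(s) != "failed" or s in mitigated
                 for s in aug_plan["main"])
             for trace in traces)
    return ok / len(traces)

def select_strategy(library, traces, mitigations):
    """Augment every strategy's plan, then pick the most robust one."""
    return max(library,
               key=lambda plan: robustness(augment(plan, traces, mitigations),
                                           traces))
```

For example, given two one-step plans and traces in which "attack" sometimes fails but has a mitigating branch while "expand" fails unmitigated, `select_strategy` returns the attack plan because its augmented version survives every trace.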