Towards Better Turn-Based Strategy Planning Agents: Turn-Based Evolutionary Tree Search

Date

2025-08-28

Publisher

Virginia Tech

Abstract

While AI agents surpass human performance in classical board games such as Chess, Go, and Shogi, many complex turn-based strategy (TBS) games, particularly those involving multiple actions per player turn, remain challenging due to their enormous branching factors and long planning horizons. Unlike single-action-per-turn games, TBS games like Civilization or TUBSTAP require sequencing multiple unit-level decisions per turn, resulting in state spaces where conventional algorithms like Monte Carlo Tree Search (MCTS) and Rolling Horizon Evolutionary Algorithms (RHEA) struggle to scale. First, we investigate state-of-the-art deep-learning-based models for game agents in TUBSTAP. Then, we propose Turn-Based Evolutionary Tree Search (TBETS), a novel hybrid algorithm that combines the depth-oriented selection of MCTS with the population-based variation of evolutionary algorithms. Unlike standard MCTS, TBETS treats each tree node as a turn-start state and each branch as a full multi-action sequence. To manage the wide but shallow tree, TBETS applies evolutionary operators, mutation and crossover, to select which child nodes to search, enabling adaptive exploration in high-dimensional action spaces. In experiments conducted on the TUBSTAP platform, TBETS outperformed state-of-the-art baselines, including M-UCT (i.e., MCTS-based Upper Confidence Bound applied to Trees), RHEA, and Flexible Horizon Evolutionary MCTS (FH-EMCTS). Notably, TBETS achieved a win rate more than 20% higher than RHEA's on large 10-unit maps, and surpassed M-UCT by over 50% on 8-unit scenarios. These results demonstrate that TBETS is a scalable and effective approach for TBS games, particularly as complexity increases.
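The core idea in the abstract, a shallow-but-wide tree whose nodes are turn-start states and whose edges are full multi-action turn sequences, with mutation and crossover choosing which children to explore, can be illustrated with a toy sketch. Everything below (the `Node` class, operator probabilities, and the toy state/evaluation functions) is an illustrative assumption for exposition, not the thesis's actual implementation:

```python
import random

class Node:
    """A tree node holding a turn-start state; edges are whole turn sequences."""
    def __init__(self, state):
        self.state = state
        self.children = {}   # action sequence (tuple of unit actions) -> Node
        self.visits = 0
        self.value = 0.0

def mutate(seq, n_actions):
    """Randomly replace one unit-level action within a turn sequence."""
    seq = list(seq)
    i = random.randrange(len(seq))
    seq[i] = random.randrange(n_actions)
    return tuple(seq)

def crossover(a, b):
    """One-point crossover between two parent turn sequences."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def propose_sequence(node, n_units, n_actions):
    """Evolve a candidate multi-action turn from the node's existing children."""
    parents = list(node.children)
    if len(parents) >= 2 and random.random() < 0.5:
        return mutate(crossover(*random.sample(parents, 2)), n_actions)
    if parents and random.random() < 0.5:
        return mutate(random.choice(parents), n_actions)
    # No parents yet (or exploration roll): sample a fresh random turn.
    return tuple(random.randrange(n_actions) for _ in range(n_units))

def tbets_search(root_state, step, evaluate, n_units, n_actions, iters=300):
    """Toy single-level TBETS loop: propose turns via EA operators,
    expand/evaluate the resulting children, return the best turn found."""
    root = Node(root_state)
    for _ in range(iters):
        seq = propose_sequence(root, n_units, n_actions)
        child = root.children.get(seq)
        if child is None:
            child = Node(step(root.state, seq))
            root.children[seq] = child
        child.visits += 1
        child.value += evaluate(child.state)
    best_seq, _ = max(root.children.items(),
                      key=lambda kv: kv[1].value / kv[1].visits)
    return best_seq
```

As a usage illustration with a deliberately trivial "game" (state is a number, a turn adds the sum of its actions, and reward is closeness to a target), `tbets_search(0, lambda s, a: s + sum(a), lambda s: -abs(s - 6), n_units=3, n_actions=4)` returns a 3-action turn. The full algorithm additionally descends deeper than one turn and maintains statistics along the tree; this sketch only shows how evolutionary variation can replace uniform child enumeration at a single wide node.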

Keywords

Turn-Based Strategy, TUBSTAP, Game AI, MCTS, Evolutionary Algorithms
