ABSTRACT

In games with a small number of possible actions, such as chess or checkers, a successful approach to decision making is look-ahead search, that is, simulating the effects of action sequences and choosing those that maximize the agent's utility. This chapter presents an approach that adapts this process to complex video games, reducing the number of action choices by means of scripts that expose choice points to look-ahead search. The approach requires scripts that are able to play a full game. For simplicity, the chapter uses the Negamax version of the minimax algorithm. Once the search produces an answer in the form of decisions at every choice point applicable in the current game state, those decisions can be executed in the game. Because the search rarely reaches terminal states in complex games, leaf nodes are scored with a heuristic evaluation function. The parameters of that function can then be optimized either by supervised learning methods on a set of game traces, or by reinforcement learning, letting the agent play against itself.
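As a concrete illustration of the idea, the following is a minimal sketch of depth-limited Negamax over a script's exposed choice points, not the chapter's implementation. The interfaces are hypothetical: it assumes a game state with clone(), apply(), and is_over() methods, a script object whose choices(state) enumerates candidate decision assignments for the choice points applicable in that state, and a caller-supplied evaluate(state) heuristic that scores a state from the perspective of the player to move.

```python
from typing import Any, Callable, Optional, Tuple

def negamax(state: Any,
            script: Any,
            evaluate: Callable[[Any], float],
            depth: int) -> Tuple[float, Optional[Any]]:
    """Depth-limited Negamax over a script's exposed choice points.

    Returns (value, decisions), where `decisions` is the best assignment
    of decisions to the choice points applicable in `state`. Uses the
    Negamax sign-flip convention value(node) = -value(child), which
    assumes `evaluate` scores states for the player to move.
    """
    if depth == 0 or state.is_over():
        return evaluate(state), None

    best_value = float("-inf")
    best_decisions: Optional[Any] = None
    for decisions in script.choices(state):   # one candidate assignment per choice point set
        child = state.clone()
        child.apply(script, decisions)        # let the script play the move out
        value = -negamax(child, script, evaluate, depth - 1)[0]
        if value > best_value:
            best_value, best_decisions = value, decisions
    return best_value, best_decisions
```

After the search returns, the winning `decisions` assignment would be handed back to the script for execution in the current game state, matching the process described above.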