ABSTRACT

This chapter proposes a theoretical framework for learning in risk-sensitive dynamic robust games. It focuses on hybrid and combined risk-sensitive strategic learning for general-sum stochastic dynamic games with incomplete information and action-independent state transitions, and it provides both convergence and non-convergence results. It discusses the case of noisy and time-delayed payoffs, describes the model of a non-zero-sum dynamic robust game, and presents different learning patterns, with the aim of developing hybrid and delayed learning schemes in a noisy, dynamic environment. If the payoffs in a game are utilities, then risk is in a sense irrelevant: the expected value of an action is just its mean utility. If, however, the payoffs are rewards, then the fact that utility is not linear in rewards makes a difference, and risk matters. The chapter also examines mixed actions in game theory and their applications.
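The utility-versus-reward distinction can be made concrete with a small sketch. The snippet below (illustrative only, not taken from the chapter) compares a risk-neutral valuation of a mixed action with the standard entropic, i.e. exponential-utility, criterion commonly used in risk-sensitive games; the function names and the risk parameter `lam` are assumptions for this example.

```python
import math

def expected_value(rewards, probs):
    """Risk-neutral valuation: the mean reward of a mixed action."""
    return sum(p * r for p, r in zip(probs, rewards))

def entropic_risk_value(rewards, probs, lam):
    """Risk-sensitive valuation via the entropic (exponential-utility)
    criterion: -(1/lam) * log E[exp(-lam * R)].
    lam > 0 is risk-averse; as lam -> 0 this recovers the mean."""
    return -(1.0 / lam) * math.log(
        sum(p * math.exp(-lam * r) for p, r in zip(probs, rewards))
    )

# Two mixed actions with the same mean reward but different spread.
safe = ([1.0, 1.0], [0.5, 0.5])    # always pays 1
risky = ([0.0, 2.0], [0.5, 0.5])   # pays 0 or 2 with equal chance

# Risk-neutral: the two actions are indistinguishable.
assert expected_value(*safe) == expected_value(*risky) == 1.0

# Risk-sensitive (lam = 1): the risky action is strictly worse,
# because utility is nonlinear (concave) in rewards.
print(entropic_risk_value(*safe, lam=1.0))   # 1.0
print(entropic_risk_value(*risky, lam=1.0))  # below 1.0
```

Under the risk-neutral criterion both actions are equivalent; under the entropic criterion the spread of the risky action is penalized, which is exactly the sense in which "risk matters" when payoffs are rewards rather than utilities.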