Chapter 7
Learning under Random Updates
By Hamidou Tembine

In a situation such as a dynamic environment under uncertainty, one would like to have a learning and adaptive procedure that does not require any information about the other players' actions or payoffs, and that uses as little memory as possible (a small number of parameters in terms of past own-experiences and observed data). In the previous chapters, we have called such a rule fully distributed. In a dynamic unknown environment, such fully distributed learning schemes are essential for applications of game theory and distributed optimization. In dynamic scenarios with a random set of active players, where the traffic, the topology, and the states of the environment may vary over time, and where the communications between players are difficult and may be noisy and delayed,
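To make the notion of a fully distributed rule under random updates concrete, the sketch below implements a payoff-based (Boltzmann-Gibbs) strategy update in which each player is active only at random times and relies on nothing but its own realized payoff and a small internal state. The payoff function, activity probability, temperature, and step sizes are illustrative placeholders, not the specific schemes developed in this chapter.

```python
import numpy as np

# Minimal sketch of a fully distributed, payoff-based learning rule under
# random updates. The environment, activity probability, and step sizes are
# illustrative assumptions, not the book's specific scheme.

rng = np.random.default_rng(0)

num_players, num_actions, horizon = 3, 2, 5000
activity_prob = 0.6                                       # a player is updated only with this probability
strategies = np.full((num_players, num_actions), 1.0 / num_actions)  # mixed strategies
payoff_estimates = np.zeros((num_players, num_actions))              # small per-player memory


def noisy_payoff(player, joint_action):
    """Placeholder environment: each player observes only its own noisy payoff."""
    base = 1.0 if joint_action.count(joint_action[player]) == 1 else 0.2  # reward anti-coordination
    return base + 0.1 * rng.standard_normal()


for t in range(1, horizon + 1):
    active = rng.random(num_players) < activity_prob       # random set of active players
    actions = [int(rng.choice(num_actions, p=strategies[i])) for i in range(num_players)]
    lam = 1.0 / t                                          # vanishing step size
    for i in range(num_players):
        if not active[i]:
            continue                                       # inactive players keep their state unchanged
        a = actions[i]
        r = noisy_payoff(i, actions)                       # own action and own payoff only
        payoff_estimates[i, a] += lam * (r - payoff_estimates[i, a])
        # Boltzmann-Gibbs (softmax) strategy update from own payoff estimates
        weights = np.exp(payoff_estimates[i] / 0.1)
        strategies[i] = weights / weights.sum()

print(np.round(strategies, 3))
```

Random activity enters only through the `active` mask: an inactive player simply keeps its current strategy and payoff estimates, which is the sense in which the updates here are random. No player ever sees another player's action or payoff, and the per-player state is just two small vectors.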