ABSTRACT

In this chapter, we shall expand our focus to large-scale interconnected systems governed by linear dynamics. These systems are composed of multiple subsystems, each of which is governed by drift dynamics, dynamics due to control inputs, and interconnection dynamics that model the effect of interactions/coupling between the subsystems. For such large-scale systems, we shall first extend the Q-learning algorithm described in Chapter 3 to develop a distributed learning algorithm for synthesizing distributed control policies. This distributed learning algorithm, called the hybrid Q-learning algorithm and introduced in Narayanan and Jagannathan (2015, 2016), shall be used to design a linear adaptive optimal regulator for a large-scale interconnected system with event-sampled input and state vectors. We shall see that extending Q-learning-based controllers to such large-scale systems with event-triggered distributed control execution introduces significant challenges in data sampling and communication protocol design due to network losses. To accommodate these losses, we shall utilize a stochastic dynamic modeling approach (Xu et al., 2012) for the large-scale system and use this model to design the Q-learning algorithm. We shall see that embedding iterative parameter-learning updates within the event-sampled instants, along with the time-driven Q-learning algorithm introduced in Chapter 3, considerably improves the efficiency of the optimal regulator.