ABSTRACT

Recently, a new class of algorithms combining deep learning (DL) with reinforcement learning (RL), called deep reinforcement learning (DRL), has been applied to handle high-dimensional inputs such as camera images and large state vectors. The security and privacy of DRL must be fully investigated before DRL is deployed in critical real-world systems. DRL has recently been shown to be vulnerable to adversarial attacks, in which attackers insert perturbations into the input of a DRL model to cause decision errors. DRL relies on a deep neural network (DNN) model to achieve high prediction accuracy, but DNNs are not robust against input perturbations: even a small change in the input may lead to dramatic oscillations in the output. This makes a comprehensive understanding of the types and features of adversarial attacks necessary. This chapter introduces the security issues in DRL and the current defensive methods against adversarial attacks. We discuss the structure of DRL, the security problems it faces, and the existing attack targets and objectives in DRL.