ABSTRACT

Deep learning is a prominent research area that aims to enable computers to automatically learn from data and solve realistic data analysis problems through deep artificial neural networks. However, deep learning models can be fooled into producing incorrect results: an attacker can craft a series of adversarial attacks to mislead a model. In this chapter, we introduce and summarize a variety of adversarial attacks on deep learning, organized by different classifications: the degree of model access (e.g., white-box versus black-box attacks) and the attack medium (e.g., images, text, and data). We also point out future research trends.