ABSTRACT

Deep neural networks are vulnerable to adversarial examples: inputs that are virtually indistinguishable from genuine data yet are labeled incorrectly by the network. Indeed, recent findings suggest that susceptibility to adversarial attacks may be an intrinsic flaw of deep learning (DL) models. This chapter aims to develop a countermeasure against the misuse of deep generative models for DeepFake (DF) creation, utilizing adversarial attacks to craft subtle perturbations that cause the generative algorithm to fail to produce the fake image in the first place.
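
As a concrete illustration of how such a perturbation can be crafted (the specific attack method is an assumption here; the abstract does not name one), the well-known fast gradient sign method (FGSM) of Goodfellow et al. produces an adversarial input in a single step:

$$x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\!\big(\nabla_{x}\, J(\theta, x, y)\big)$$

where $J(\theta, x, y)$ is the model's loss on input $x$ with label $y$ under parameters $\theta$, and the budget $\epsilon$ bounds the perturbation so that $x_{\mathrm{adv}}$ remains visually indistinguishable from $x$.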