ABSTRACT

Moving object detection is often the first stage in computer vision applications that use static or moving cameras. The separation of foreground from background is then typically carried out using a background subtraction approach. The state of the art indicates that machine learning and signal processing methods offer promising results against the challenges posed by test video sequences. However, the fundamental objective is for background subtraction algorithms to be usable in real applications within the IoT context. In this chapter, we analyze the background subtraction strategies developed over time that achieve efficient results in real-time applications. First, we survey Graphics Processing Unit (GPU) implementations, embedded implementations in smart cameras and systems, and dedicated architectures built with digital signal processing (DSP), field-programmable gate array (FPGA), and very large-scale integration (VLSI) technologies. Second, we review the fog computing and edge computing strategies employed for global video systems in the IoT context. Finally, we provide perspectives.