ABSTRACT

This chapter explores data-driven approaches to presentation attack detection for three biometric modalities: face, iris, and fingerprint. Its primary aim is to show how pretrained deep neural networks can be used to build classifiers that distinguish authentic images of faces, irises, and fingerprints from their static imitations. The most important publicly available benchmarks, representing various attack types, were used in a unified presentation attack detection framework in both same-dataset and cross-dataset experiments. Pretrained VGG neural networks, the core of this solution, fine-tuned independently for each modality and each dataset, achieve almost perfect accuracy for all three biometric techniques. In turn, the low classification accuracies achieved in cross-dataset evaluations show that models based on deep neural networks are sensitive not only to features specific to biometric imitations, but also to dataset-specific properties of the samples. Such models can therefore provide a rapid solution in scenarios in which the properties of imitations can be predicted but appropriate feature engineering is difficult; however, they will perform worse when the properties of the imitations to be detected are unknown. The chapter also includes a literature review summarizing up-to-date data-driven solutions to face, iris, and fingerprint liveness detection.