ABSTRACT

Deploying automated biometric solutions that use cues from multiple modalities enhances the reliability and robustness of authentication. In this chapter, a single-sensor multimodal system is discussed that captures a palm print and three finger knuckles simultaneously from an input hand image. This setup not only makes it easier for users to provide biometric samples belonging to multiple modalities but can also be built at low cost. However, low-contrast images and varying illumination seriously degrade any such system. These challenging issues are therefore addressed by presenting a novel recognition framework. First, the original image is segmented using the proposed segmentation algorithms, and fixed-size regions of interest (ROIs) of the palm and knuckles are cropped. Each ROI image is enhanced using an improved Grünwald–Letnikov (G-L) fractional differential to obtain more robust representations of the lines, skin folds, and texture features of palm print and finger-knuckle images. The images are then transformed into an illumination-invariant representation using LLBP to minimize the effects of non-uniform brightness, so that a well-distributed intensity image is realized. Two kinds of feature extraction techniques are applied to design the system so that they complement each other for efficient recognition results. In particular, a single concatenation algorithm combining UR-SIFT (local) and BLPOC (global) features, which can overcome these fundamental challenges, is presented. Next, the scores computed for each modality by the local and global methods are individually combined by score-level fusion. Finally, a decision-level fusion rule is applied to consolidate the outputs of the palm print and finger knuckles, revealing the actual identity of the individual.
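The "improved" G-L fractional differential of the chapter is not detailed in this abstract; the following is only a minimal sketch of the standard truncated Grünwald–Letnikov mask commonly used for texture enhancement, applied along one direction. The function names and the three-term truncation are assumptions for illustration.

```python
def gl_coeffs(v, n):
    """First n G-L coefficients c_k = (-1)^k * binom(v, k), via the
    recurrence c_k = c_{k-1} * (k - 1 - v) / k."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - v) / k)
    return c

def gl_fractional_diff(signal, v, terms=3):
    """Apply the truncated G-L fractional derivative of order v to a 1-D
    row of pixel intensities. In image enhancement the mask is typically
    applied along the 8 compass directions and the responses combined;
    a single direction suffices to show the operation."""
    c = gl_coeffs(v, terms)
    return [sum(c[k] * signal[i - k] for k in range(terms) if i - k >= 0)
            for i in range(len(signal))]
```

For 0 < v < 1 the first three coefficients are 1, -v, and (v² - v)/2, so high-frequency detail such as palm lines and skin folds is amplified while smooth regions are attenuated less than with an integer-order derivative, which is why fractional differentials suit low-contrast hand images.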
The publicly available IIT Delhi Contactless Palm print database and an in-house hand database have been used for performance evaluation in terms of Equal Error Rate (EER), Decidability Index (DI), Correct Recognition Rate (CRR), and speed.
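The abstract does not name the specific score-level and decision-level fusion rules; as a hedged illustration, the weighted-sum rule (score level) and majority voting (decision level) are common choices in multimodal biometrics. All function names, the weight `w`, and the threshold below are assumptions, not the chapter's stated method.

```python
def score_fusion(local_score, global_score, w=0.5):
    """Combine the UR-SIFT (local) and BLPOC (global) match scores of one
    modality with a weighted-sum rule; scores are assumed to be already
    normalized to [0, 1], e.g. by min-max normalization."""
    return w * local_score + (1.0 - w) * global_score

def decision_fusion(modality_scores, threshold=0.5):
    """Consolidate the fused scores of the palm print and three knuckles:
    accept the claimed identity if a majority of the modalities score at
    or above the threshold (majority-vote decision rule)."""
    votes = sum(1 for s in modality_scores if s >= threshold)
    return votes > len(modality_scores) / 2
```

With four modalities (palm print plus three knuckles), at least three must agree before the identity claim is accepted, which makes the final decision tolerant of one poorly imaged modality.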