Face verification, though an easy task for humans, remains a long-standing open research problem. This is largely due to challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance for recognizing/verifying disguised faces. Performance is also evaluated with respect to observer familiarity and the match/mismatch between the ethnicity of the observer and the subject. The findings of this study are used to develop an automated algorithm to verify faces presented under disguise variations. We use automatically localized feature descriptors that can identify disguised face patches and account for this information to achieve improved matching accuracy.
We also explore the feasibility of face verification under disguise variations using multi-spectrum (visible and thermal) face images. Our framework classifies the local facial regions of both visible and thermal face images into biometric (regions without disguise) and non-biometric (regions with disguise) classes. The biometric patches are then used for facial feature extraction and matching. The experimental results suggest that the proposed framework improves performance compared to existing algorithms. We also provide a human performance evaluation for matching disguised faces. However, there is a need for more research to address this important covariate.
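The patch-based matching idea described above can be illustrated with a minimal sketch: only patches classified as biometric (disguise-free) in both images contribute to the match score. The function name, the use of cosine similarity, and the score aggregation by averaging are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def match_faces(patches_a, patches_b, biometric_a, biometric_b):
    """Illustrative sketch (not the paper's exact method): compare two
    faces using only the patches flagged as biometric in BOTH images.

    patches_a, patches_b: lists of per-patch feature vectors (np.ndarray)
    biometric_a, biometric_b: lists of booleans, True = disguise-free patch
    Returns the mean cosine similarity over usable patches (0.0 if none).
    """
    usable = [i for i in range(len(patches_a))
              if biometric_a[i] and biometric_b[i]]
    if not usable:
        return 0.0  # no disguise-free patch in common
    sims = []
    for i in usable:
        a, b = patches_a[i], patches_b[i]
        sims.append(float(np.dot(a, b) /
                          (np.linalg.norm(a) * np.linalg.norm(b))))
    return sum(sims) / len(sims)
```

A gallery/probe pair would thus be scored only on regions where neither image is occluded by disguise accessories, which is the intuition behind discarding non-biometric patches before matching.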
We prepared a dataset containing images of 75 subjects with different kinds of disguise variations. Version 1 of the dataset consists of images captured in the visible spectrum and is named the IIIT-Delhi Disguise Version 1 face database (ID V1). The IIITD In and Beyond Visible Spectrum Disguise database (I2BVSD) consists of face images captured in both the visible and thermal spectra. Thus, ID V1 is a subset of I2BVSD.