04 Sep 2018

Safeguarding Face Recognition Systems Against Spoof Attacks

Apple released the iPhone X on 3 November 2017, which includes Face ID for authentication. A week later, hackers cracked Face ID with a composite mask of 3D-printed plastic and simple paper cutouts, which together tricked an iPhone X into unlocking.

This seriously undermines the costly security engineering of the iPhone X. Making a face recognition system robust is therefore essential to safeguard it against such attacks, which can be carried out using printed pictures or replayed videos of the user that are easily obtained from social networks.

In fact, this problem was already addressed three years ago in the field of biometrics. I proposed a technique using artificial intelligence (AI), “Detection of face spoofing using visual dynamics”, published in the IEEE Transactions on Information Forensics and Security in 2015. Interested readers can find the article at https://lnkd.in/e6WdwBH.

Please cite the paper if you find my work interesting:

Tirunagari, Santosh, et al. “Detection of face spoofing using visual dynamics.” IEEE Transactions on Information Forensics and Security 10.4 (2015): 762–777.

I briefly discuss the approach below.

Key points

  • Making a face recognition system robust is vital in order to safeguard it against spoof attacks.
  • A key property for distinguishing valid accesses from spoofed ones is the dynamics of the video content, such as blinking eyes, moving lips, and other facial dynamics.
  • The solution pipeline consists of dynamic mode decomposition (DMD), local binary patterns (LBP), and a support vector machine (SVM).
  • The effectiveness of the methodology was demonstrated on three publicly available databases: 1) Print-Attack; 2) Replay-Attack; and 3) CASIA-FASD.

Attack Types

For face recognition, there are three types of presentation attacks: print attacks, cut photo attacks, and replay attacks. A print attack refers to facial spoofing carried out by presenting a printed photo to a camera. This attack is very easy to carry out because getting hold of a target’s photo is extremely simple. In a cut photo attack, the printed photo has cut-outs at the eyes and mouth, so that eye blinking and mouth movement can be imitated as if by a real user. Replay attacks, on the other hand, are carried out by replaying a previously recorded video of the target user’s face in order to spoof the biometric system. The video can be replayed easily using a hand-held mobile device or tablet.

 

Figure: Categorisation of specific methodologies existing in the literature.

Proposed Solution

We present the pipeline of our method, which consists of DMD, local binary pattern (LBP) histograms, and a kernel-based support vector machine (SVM) classifier. The overall process pipeline is shown in Figure 3. First, a video is processed with the DMD algorithm to produce dynamic mode images, from which we select a single dynamic mode image. Second, LBP histogram features are computed for this dynamic mode image. Finally, the resulting LBP code is fed into a trained SVM classifier to decide whether the processed video is a valid access or a spoof. The Half Total Error Rate (HTER) is used as the performance measure.
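To make the pipeline concrete, here is a minimal Python sketch of the three stages, using NumPy for DMD, scikit-image for LBP, and scikit-learn for the SVM. The rank truncation, mode selection rule, LBP parameters, and kernel choice are illustrative assumptions, not the exact settings used in the paper.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def first_dmd_mode(frames):
    """Return a single dynamic mode image of a grayscale video.

    frames: ndarray of shape (n_frames, height, width).
    """
    n, h, w = frames.shape
    snapshots = frames.reshape(n, -1).T.astype(float)    # pixels x frames
    X, Y = snapshots[:, :-1], snapshots[:, 1:]           # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(10, len(s))                                  # rank truncation (assumed)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W       # exact DMD modes
    top = np.argmax(np.abs(eigvals))                     # pick the most persistent mode (assumed rule)
    return np.abs(modes[:, top]).reshape(h, w)

def lbp_histogram(image, P=8, R=1):
    """Uniform LBP histogram of a single dynamic mode image."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Training and prediction (dataset loading is assumed to happen elsewhere):
# features = np.vstack([lbp_histogram(first_dmd_mode(v)) for v in train_videos])
# clf = SVC(kernel="rbf").fit(features, train_labels)    # 1 = valid access, 0 = spoof
# label = clf.predict(lbp_histogram(first_dmd_mode(test_video)).reshape(1, -1))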

 

 

Figure: Examples of valid access and spoof attacks in controlled and adverse scenarios for cropped face regions. The top row shows original images of a valid access, a photo attack, and a video attack (left to right). The middle row shows their corresponding first DMD mode and the bottom row their corresponding first PCA mode.

Figure: Examples of valid access and spoof attacks in CASIA-FASD (high resolution, HR). The top row shows original images of a valid access, a photo attack, a cut photo attack, and a video attack (left to right). The middle row shows their corresponding first DMD mode and the bottom row their corresponding first PCA mode.

 

Performance

This study shows the significance of DMD as a preprocessing technique: coupled with LBP and an SVM, it effectively detects spoof samples. The pipeline was applied to 1,200 video clips of photo and video attacks on 50 clients under different lighting conditions from the Replay-Attack dataset; 400 video clips of photo attacks from the Print-Attack dataset; and 600 video clips from the CASIA-FASD dataset. The results are highly promising in tackling the photo, cut photo, and video attack challenges.
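For reference, the HTER reported on these datasets is simply the average of the false acceptance and false rejection rates. The helper below is an illustrative sketch of that metric, not the original evaluation code.

import numpy as np

def hter(y_true, y_pred):
    """Half Total Error Rate; labels: 1 = valid access, 0 = spoof attack."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    far = np.mean(y_pred[y_true == 0] == 1)   # false acceptance rate: spoofs accepted
    frr = np.mean(y_pred[y_true == 1] == 0)   # false rejection rate: valid accesses rejected
    return 0.5 * (far + frr)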

 

 

Overall, the DMD + LBP + SVM pipeline proves to be efficient, convenient to use, and effective. In fact, only the spatial configuration of the LBP needs to be tuned.
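By "spatial configuration" I mean how the LBP histogram is laid out over the image, for example splitting the dynamic mode image into a grid of blocks and concatenating the per-block histograms. The 3x3 grid below is only an assumed example of such a configuration, not the setting reported in the paper.

import numpy as np
from skimage.feature import local_binary_pattern

def blocked_lbp_histogram(image, grid=(3, 3), P=8, R=1):
    """Concatenated per-block uniform LBP histograms over a spatial grid."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2
    bh, bw = codes.shape[0] // grid[0], codes.shape[1] // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            h, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(h)
    return np.concatenate(hists)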

biometrics • machine learning
