
Fast And Secure Biometric Identification: Skin Patch-based Face Recognition Anti-spoofing Technique



3 main points
✔️ Proposes a face anti-spoofing method that uses only skin patches rather than entire facial images
✔️ No sensitive information is transmitted or stored, eliminating the need for encryption and decryption and significantly speeding up the entire pipeline
✔️ Demonstrated on an Android device, accurately detecting spoofing attacks with less than 100 ms of latency

Enhancing Mobile Privacy and Security: A Face Skin Patch-Based Anti-Spoofing Approach
written by Qiushi Guo
(Submitted on 9 Aug 2023)
Comments: Published on arxiv.
Subjects: Computer Vision and Pattern Recognition (cs.CV)



The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

Advances in deep learning have brought facial recognition technology into many areas, including online identity verification (eKYC) and secure login on electronic devices. Face recognition is a biometric authentication technology that demands a high level of security, and Face Anti-Spoofing (FAS) has been introduced to improve its reliability. However, existing methods have several practical problems. Deploying the FAS model on the server side, where multiple components are integrated, raises privacy and security concerns: the risk of privacy violations increases when a user's facial image is transmitted over a network and stored on a server. In addition, sending images is time-consuming and degrades the user experience.

To address these issues, this paper proposes a new face anti-spoofing (FAS) model that uses patches of skin from facial images. Whereas conventional methods send the entire facial image to the server, risking privacy violations, the proposed model sends only specific skin regions. Moreover, while conventional methods spend most of their processing time encrypting and decrypting the images to be sent, the proposed model sends only skin patches, which contain no personally identifiable information and therefore require no encryption or decryption.
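To make the difference concrete, here is a minimal client-side sketch in Python. The endpoint URLs, the encrypt() helper, and the JPEG encoding are illustrative assumptions rather than details from the paper; the point is only that the conventional flow must protect a personally identifiable face image, while the proposed flow can send feature-free skin patches as they are.

```python
# Illustrative client-side comparison (hypothetical endpoints and helpers).
import cv2
import requests

def send_face_conventional(face_bgr, encrypt):
    # Conventional FAS: the full face image is sensitive, so it is
    # encrypted before transmission (and decrypted again on the server),
    # which dominates the end-to-end latency.
    ok, buf = cv2.imencode(".jpg", face_bgr)
    return requests.post("https://example.com/fas/full-face",
                         data=encrypt(buf.tobytes()))

def send_patches_proposed(skin_patches):
    # Proposed FAS: skin patches carry no identifiable facial features,
    # so they are sent as-is, with no encryption or decryption step.
    files = {f"patch_{i}": cv2.imencode(".jpg", p)[1].tobytes()
             for i, p in enumerate(skin_patches)}
    return requests.post("https://example.com/fas/skin-patches", files=files)
```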

To evaluate the effectiveness and robustness of the new method, we conducted experiments covering accuracy and latency and found that it detects spoofing with high accuracy while maintaining low latency. Demonstrated on an Android device, it accurately detects spoofing attacks with a latency of less than 100 ms. The results are shown in the tables below.

Proposed Method

As noted above, the main risk in conventional face anti-spoofing stems from the process of transmitting and storing images. For example, one model spends approximately 240 ms on encryption and decryption, while the core step, convolutional neural network (CNN) inference, takes only 20 ms. Transmitting an entire facial image causes significant latency and also raises privacy concerns because the data is stored.

Previous research has examined methods for separating structure and texture in facial images for face anti-spoofing. Patch-based face spoofing detection algorithms follow two main approaches. One extracts patch images of specific facial features (eyes, nose, mouth, etc.) as input features, as shown in the figure below. The other divides the face into multiple sub-patches and uses them as input features. However, both approaches pose a privacy risk. In addition, the first approach requires a separate model for each facial part (four models in total), which can impose a considerable computational burden.

Since previous research has shown that facial patches can be effectively used as input for a variety of deep learning classification tasks, the hypothesis is that facial skin patches may also be applicable to facial anti-spoofing.

This study proposes the following method.

We define face anti-spoofing as a classification task that distinguishes real faces from fake ones: a model produces a score from the input image, and a threshold decides whether the face is real or fake. Training such a model requires a large amount of face image data, and this paper uses CelebA-Spoof, a large, high-quality image dataset. In each iteration, RetinaFace is used to crop the face region from the original image, and the cropped face is fed into the facemesh model to obtain facial landmarks. These landmarks are used to locate patches of skin that do not overlap with facial features, so the extracted patches contain no personally identifiable information. Ultimately, more than 10,000 patches are collected, covering both genuine samples and various types of attack instances.

Patch extraction is performed by the Patch-Extracted Module (PEM). Guided by the facemesh model, PEM detects landmarks on the input face image, aligns the face, and extracts high-quality skin patches. Candidate regions are chosen from areas that lack salient facial features, specifically the left cheek, right cheek, and chin. A CNN then extracts features from the extracted patches, and the model combines two different patches to make a more accurate decision. A rough sketch of the patch-extraction step is given below.
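The sketch below illustrates the patch-extraction idea, assuming MediaPipe's FaceMesh as the "facemesh" landmark model and a face crop already obtained with a detector such as RetinaFace. The landmark indices for the cheeks and chin and the 64x64 patch size are illustrative assumptions; the paper only states that the left cheek, right cheek, and chin regions are used.

```python
import cv2
import mediapipe as mp

PATCH_SIZE = 64  # illustrative patch size, not the paper's exact setting

# Approximate cheek/chin landmark indices in MediaPipe's 468-point face mesh.
# These indices are illustrative; the paper does not specify which points it uses.
CANDIDATE_LANDMARKS = {"left_cheek": 205, "right_cheek": 425, "chin": 152}

def extract_skin_patches(face_bgr):
    """Crop fixed-size skin patches around feature-free facial regions."""
    h, w = face_bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as fm:
        result = fm.process(cv2.cvtColor(face_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return {}  # no face mesh found
    landmarks = result.multi_face_landmarks[0].landmark
    patches, half = {}, PATCH_SIZE // 2
    for name, idx in CANDIDATE_LANDMARKS.items():
        # Landmarks are normalized to [0, 1]; convert to pixel coordinates.
        cx, cy = int(landmarks[idx].x * w), int(landmarks[idx].y * h)
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        patch = face_bgr[y0:y0 + PATCH_SIZE, x0:x0 + PATCH_SIZE]
        if patch.shape[:2] == (PATCH_SIZE, PATCH_SIZE):  # skip truncated crops
            patches[name] = patch
    return patches
```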

Experiment

Three metrics are used to evaluate model performance in face anti-spoofing: Attack Presentation Classification Error Rate (APCER), Bona Fide Presentation Classification Error Rate (BPCER), and Average Classification Error Rate (ACER). These metrics are calculated as follows:

  • APCER = FN / (TP + FN)
  • BPCER = FP / (TN + FP)
  • ACER = (APCER + BPCER) / 2

Note that TP (True Positive) represents a fake face image correctly classified as fake, TN (True Negative) represents a real face image correctly classified as real, FP (False Positive) represents a real face image incorrectly classified as fake, and FN (False Negative) represents a fake face image incorrectly classified as real.
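As a quick sanity check of these definitions, here is a minimal Python sketch that computes the three metrics from confusion-matrix counts; the example numbers are made up purely for illustration.

```python
# Positive class = fake/attack, following the definitions above.
def fas_metrics(tp, tn, fp, fn):
    apcer = fn / (tp + fn)        # attacks misclassified as real
    bpcer = fp / (tn + fp)        # real faces misclassified as attacks
    acer = (apcer + bpcer) / 2    # average of the two error rates
    return apcer, bpcer, acer

# Example: 1000 attack samples (30 missed), 1000 real samples (20 flagged).
print(fas_metrics(tp=970, tn=980, fp=20, fn=30))  # (0.03, 0.02, 0.025)
```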

In evaluating the performance of the proposed method, three test datasets are used: Rose-Youtu, MSU, and Mobile-Replay. The results are shown in the table below. The proposed model performs strongly compared with various algorithms across the three datasets. In particular, although CDC performs best on all datasets, the model presented in this paper is lightweight compared to CDC, making it far more convenient and practical to deploy in back-end infrastructure.

Furthermore, the proposed model achieves results comparable to state-of-the-art models such as FaceDs and FASNet and significantly outperforms traditional algorithms such as LBP and Color Texture. These results indicate that the proposed model is effective in addressing the face anti-spoofing challenge.

In addition, we evaluate latency. The total processing time for anti-spoofing comprises image transmission, encryption and decryption, and model inference. We measured the latency of the conventional model (ResNet-34) and the proposed model (two-stream ResNet-34); the results are shown in the table below.

Although the proposed model's transmission and inference times are slightly longer than the conventional model's, its overall latency is only about 28% of the conventional model's. By omitting the encryption and decryption steps, the overall process becomes much faster.
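For reference, the "two-stream ResNet-34" mentioned above can be sketched as two ResNet-34 branches, one per skin patch, whose features are concatenated before a small classification head. The following PyTorch sketch is a minimal illustration under that assumption; the feature dimension and head are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class TwoStreamPatchFAS(nn.Module):
    """Illustrative two-stream classifier: one ResNet-34 per skin patch."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.stream_a = resnet34(num_classes=feat_dim)   # e.g. cheek patch
        self.stream_b = resnet34(num_classes=feat_dim)   # e.g. chin patch
        self.head = nn.Linear(2 * feat_dim, 2)           # real vs. fake logits

    def forward(self, patch_a, patch_b):
        feats = torch.cat([self.stream_a(patch_a),
                           self.stream_b(patch_b)], dim=1)
        return self.head(feats)

# Example: two batches of 64x64 RGB patches.
model = TwoStreamPatchFAS()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```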

Summary

This paper proposes a new face anti-spoofing model that uses facial skin patches as input features. The method transmits no facial images and requires no encryption or decryption. Compared with conventional methods, it eliminates the risk of personal information leakage and reduces the anti-spoofing processing time to roughly one-quarter.

[Source of image used]
・RBB TODAY "Mai Shiraishi reveals 'I prefer sexy people,' ideal marriage partner and proposal," May 20, 2021, https://www.rbbtoday.com/article/img/2021/05/20/188830/700078.html.
・TOKYO HEADLINE "Mai Shiraishi: 'I want to get a license and go to a big supermarket,'" December 12, 2021. https://www.tokyoheadline.com/587349/

Takumu
I have worked as a Project Manager/Product Manager and Researcher at internet advertising companies (DSP, DMP, etc.) and machine learning startups. Currently, I am a Product Manager for new business at an IT company. I also plan services utilizing data and machine learning, and conduct seminars related to machine learning and mathematics.

If you have any suggestions for improving the content of this article, please contact the AI-SCHOLAR editorial team through the contact form.
