
Implementation of Face Recognition “Access Control System” Using NVIDIA Jetson Nano - PART I

  By: Leadtek AI Expert

At present, the most widely used applications of deep learning are related to image recognition: self-driving cars, security surveillance, industrial defect detection, factory safety monitoring, and medical image analysis. Among them, face recognition has been under development for a long time and has a wide range of applications.

Face recognition can be combined with a variety of applications across industries. For example, it can be paired with a company's personnel database to build an access control system, integrated with a criminal database to build a security surveillance system, or combined with a shopping mall's customer data for market analysis. Once face recognition is tied to a database and can identify specific people, integrating it with other control systems opens up many further applications.


Although face recognition technology has a wide range of applications, it also has its difficulties. One reason is that facial structure is quite similar from person to person; even two unrelated people may look very alike, which is why some people have celebrity lookalikes. This similarity makes it easy to detect that something is a face, but much harder to distinguish whose face it is. How to identify people accurately and quickly with limited hardware resources is the problem researchers most want to solve.


The main process of face recognition is divided into three steps:

  • Face Detection
  • Face Alignment
  • Feature Representation


Traditional machine-learning face recognition can generally be divided into two steps: high-dimensional hand-crafted feature extraction (for example, LBP or Gabor features) and dimensionality reduction (for example, PCA or LDA). With deep learning, however, the face representation can be learned directly from the original image. The three major steps are as follows:


STEP 1. Finding all the Faces

The most commonly used face detection method was developed by Navneet Dalal and Bill Triggs in 2005 and is called the Histogram of Oriented Gradients (HOG). A short detection sketch follows the figures below.


 

Figure: Finding all the faces

Figure: The generated HOG face pattern
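As a minimal sketch of this step, dlib's frontal face detector, which is built on HOG features and a linear SVM, can find every face in an image. The file name people.jpg is a placeholder, and dlib and OpenCV are assumed to be installed.

```python
# A minimal sketch of HOG-based face detection with dlib's frontal face
# detector (HOG features + linear SVM). "people.jpg" is a placeholder image.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

image = cv2.imread("people.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The second argument upsamples the image once to help find smaller faces.
for rect in detector(gray, 1):
    cv2.rectangle(image, (rect.left(), rect.top()),
                  (rect.right(), rect.bottom()), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", image)
```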


STEP 2. Posing and Projecting Faces

After finding the location of the face, the next step is to locate the feature points of each part of the face. The method used here was developed by Vahid Kazemi and Josephine Sullivan in 2014. It estimates 68 specific anchor points (landmarks) that exist on every face, such as the corners of the eyes, the tip of the nose, and the line of the jaw. Even though the contour of each person's face is different, these 68 points can usually be found, and if the model is trained to locate them accurately, the relative position of each part of the face can be found no matter how the face pose changes. A landmark-detection sketch follows the figure below.

  

Figure: Posing and projecting faces
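A minimal sketch of locating the 68 landmarks with dlib's shape predictor is shown below. It assumes the pre-trained model file shape_predictor_68_face_landmarks.dat (downloadable from the dlib model zoo) sits next to the script, and face.jpg is a placeholder image.

```python
# A minimal sketch of 68-point facial landmark detection with dlib.
# "shape_predictor_68_face_landmarks.dat" must be downloaded separately;
# "face.jpg" is a placeholder image.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):
    shape = predictor(gray, rect)            # the 68 landmark points
    for i in range(shape.num_parts):
        p = shape.part(i)
        cv2.circle(image, (p.x, p.y), 2, (0, 0, 255), -1)

cv2.imwrite("landmarks.jpg", image)
```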


STEP 3. Encoding Faces

This part works like training a convolutional neural network (CNN), but instead of training it on whole faces directly, we train a model so that each face generates 128 measurements (an embedding). During training, the network looks at three images at a time: a face image of a known person, another face image of that same person, and a face image of a completely different person. It learns to produce 128 measurements that are nearly identical for the same person and far apart for different people. Because the measurements from any images of the same person should be approximately the same, the system can compare and identify faces quickly. This method (FaceNet) was developed by Google researchers Florian Schroff, Dmitry Kalenichenko, and James Philbin in 2015. An encoding sketch follows the figure below.

 

Figure: Face encoding comparison
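As a sketch of how the 128 measurements can be compared in practice, the open-source face_recognition library (a wrapper around dlib's deep metric model) does it in a few lines. The file names known.jpg and unknown.jpg are placeholders, each assumed to contain exactly one face.

```python
# A minimal sketch of 128-d face encodings and comparison with the
# face_recognition library. "known.jpg" and "unknown.jpg" are placeholders,
# each assumed to contain exactly one face.
import face_recognition

known_image = face_recognition.load_image_file("known.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# A distance below roughly 0.6 is commonly treated as the same person.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
print(f"distance = {distance:.3f}, same person = {match}")
```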


The above briefly introduces the concept and theory of face recognition.


Next, a simple implementation will show that face recognition technology is not difficult to put into practice.


Here are all the materials. You will also need to prepare your own monitor; the display output of the NVIDIA Jetson Nano is HDMI.

 

Item                                        Quantity
NVIDIA Jetson Nano                          1
5V 4A power supply (for the Jetson Nano)    1
House model with a door                     1
PWM control board (PCA9685)                 1
SG90 servo motor                            1
Logitech camera (C310)                      1
DuPont cables (female to female)            6
Keyboard and mouse                          1
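Before wiring everything up, it can help to confirm that the Logitech C310 is recognized by the Jetson Nano. The following is a minimal sketch using OpenCV; device index 0 (/dev/video0) is an assumption and may need adjusting.

```python
# Quick sanity check that the USB camera is visible to OpenCV on the Jetson Nano.
# Device index 0 (/dev/video0) is an assumption; adjust it if the camera
# enumerates differently on your system.
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Camera not found; check the USB connection.")

ret, frame = cap.read()
if ret:
    cv2.imwrite("camera_test.jpg", frame)   # save one frame as a test image
    print("Captured frame with shape:", frame.shape)

cap.release()
```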


Except for the house model, the other items are easy to obtain, so feel free to get creative when building your own model. The assembled product is shown in the picture below. I have also installed an extra button to make switching the power on and off easier.

 


The motor is installed at the lower door hinge, but you can get creative with the placement.


For example, if the motor shaft is mounted at the mouth of a skull model, the mouth can open and close instead.
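As a sketch of how the PCA9685 board can drive the SG90 to open and close the door, the code below uses the adafruit-circuitpython-servokit package. This assumes the board is wired to the Jetson Nano's I2C pins and the package is installed; channel 0 and the 0/90-degree angles are illustrative values, not the exact setup used here.

```python
# A minimal sketch of driving the SG90 through the PCA9685 PWM board.
# Assumes the adafruit-circuitpython-servokit package is installed and the
# board is connected to the Jetson Nano's I2C bus. Channel 0 and the angles
# below are illustrative and should be tuned to your door mechanism.
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)      # the PCA9685 provides 16 PWM channels

def open_door():
    kit.servo[0].angle = 90      # rotate the SG90 to the "open" position

def close_door():
    kit.servo[0].angle = 0       # rotate back to the "closed" position

open_door()
time.sleep(3)                    # keep the door open for three seconds
close_door()
```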

 

After the assembly is completed, the next step is to load the face recognition model onto the Jetson Nano. I will introduce not only the face recognition model but four models in total, covering face recognition, emotion, age, and gender. The reference material for the models is as follows:

You can refer to the URL above.

In the next article, I will explain how to set up and import the four models for identification.

