IAES International Journal of Artificial Intelligence (IJ-AI)

Received May 15, 2022; Revised Jan 1, 2023; Accepted Jan 10, 2023

Recognition systems have received a lot of attention because of their many uses in daily life, for example in robotic intelligence, smart cameras, security surveillance, and even criminal identification. Determining the similarity of faces under different facial variations requires robust algorithms. In this paper, we compare two facial recognition techniques according to recognition rate and average authentication time, with the aim of increasing the accuracy rate and decreasing the processing time. The first technique extracts features with two algorithms, principal component analysis scale-invariant feature transform (PCA-SIFT) and speeded up robust features (SURF), then uses the random sample consensus (RANSAC) technique to remove outliers; face recognition is finally established on the basis of a proximity measure. The second technique associates a support vector machine (SVM) classifier with the key-point extraction technique. Our experiment is validated on two sets of data. The results obtained by the second technique are better for both databases: the recognition rate reaches 98.1258% on the Olivetti Research Laboratory (ORL) database and 97.28515% on the Grimace database, and its average authentication time does not exceed 300 ms.


INTRODUCTION
Facial recognition is a technology that identifies or verifies a subject using a facial image, video, or any audiovisual element of the subject's face. It appears in many applications, such as security, alongside other biometrics such as voiceprints. Each face is unique and has inimitable characteristics, which facial recognition systems, programs, or software compare using recognition algorithms. Machine-learning approaches to this problem rest on two steps: feature extraction and classification.
Feature extraction is the first step of the authentication process and is performed by robust techniques such as principal component analysis scale-invariant feature transform (PCA-SIFT), speeded up robust features (SURF), and three-patch local binary patterns (TP-LBP); the second step is performed by classifiers (distance measures and the support vector machine (SVM)). Over the past few decades, many features have been developed; the most well-known and popular are the local binary patterns (LBP) [1], three- and four-patch local binary patterns (TP-LBP, FP-LBP) [2], the complete local binary pattern (CLBP) [3], [4], the principal component analysis scale-invariant feature transform (PCA-SIFT) [5], and speeded up robust features (SURF) [6].

ISSN: 2252-8938 
New approach to similarity detection by combining technique three-patch local binary … (Ahmed Chater)

The properties of these techniques include a simple calculation that facilitates real-time facial analysis in real applications, and robustness to changes in grayscale caused, for example, by variations in lighting. This work builds on our publications [7], [8]. We use three techniques to extract the descriptor vector, TP-LBP, PCA-SIFT [9], and SURF [6], and determine the similarity between the training and test bases with classifiers based on distance metrics and a linear SVM [10], to avoid parameter sensitivity when measuring the recognition rate. To validate our experiment, we use two databases of facial images, ORL [11] and Grimace [12]. The method combining TP-LBP with SVM gives good results in terms of recognition rate: the similarity rate reaches 98.1258% and the processing time does not exceed 300 ms, which makes it applicable to real-time applications, e.g., in security and robotics.

FEATURE EXTRACTION
In this section, we deal with feature extraction by three techniques: three-patch LBP (TP-LBP), PCA-SIFT, and SURF. The choice of a key-point extraction technique is based on statistical measurements of the key points. We use key points because they describe regions of the image where the information is important. This approach is generally used to recognize objects [13] and in facial and biometric recognition algorithms [6]. Many techniques exist for computing the descriptor vector in a neighborhood, such as the scale invariant feature transform (SIFT) [14], shape contexts [15], and speeded up robust features (SURF) [6], to name a few.
Among these techniques, SIFT, proposed by Lowe [14], is retained for two main reasons. First, the SIFT algorithm is efficient under scaling and 2D rotation. Second, a comparative study [16] of different descriptors shows that SIFT is the most efficient. The SIFT algorithm was also used by Berretti et al. [17] for 3D facial recognition. We then use the TP-LBP descriptor, which is based on the comparison of square patches, as described in the next section [18].

Three-patch local binary patterns (TP-LBP)
Three-patch local binary patterns (TP-LBP) extend the LBP operator by allowing multi-scale (multi-resolution) processing of an image [4]. The TP-LBP code of a pixel is computed by comparing three patches at a time, each comparison determining one binary value of the code assigned to it. For each pixel, a patch is centred on the pixel itself and additional patches are placed on a circle of radius r pixels around it.
The LBP detector, on the other hand, is based on choosing a circle radius and then comparing neighboring pixels. In TP-LBP, the comparison is made between pairs of the S windows along the ring: each window is compared with the window α patches away, and every comparison activates a single bit of the code. Specifically, the TP-LBP code of a pixel p that captures the relevant features is given by (1),

TPLBP_{r,S,w,α}(p) = Σ_{i=0}^{S−1} f(d(C_i, C_p) − d(C_{i+α mod S}, C_p)) 2^i  (1)

which Figure 1 illustrates. The TP-LBP technique determines the important descriptors in the images; their extraction depends on a small set of parameters, namely the patch size w, the deviation angle α, and the ring radius r around the central patch, as shown in Figure 1. With S = 8 and α = 2, the code of (1) expands into (2),

TPLBP_{r,8,w,2}(p) = f(d(C_0, C_p) − d(C_2, C_p)) 2^0 + f(d(C_1, C_p) − d(C_3, C_p)) 2^1 + f(d(C_2, C_p) − d(C_4, C_p)) 2^2 + f(d(C_3, C_p) − d(C_5, C_p)) 2^3 + f(d(C_4, C_p) − d(C_6, C_p)) 2^4 + f(d(C_5, C_p) − d(C_7, C_p)) 2^5 + f(d(C_6, C_p) − d(C_0, C_p)) 2^6 + f(d(C_7, C_p) − d(C_1, C_p)) 2^7  (2)

where C_i and C_{i+α mod S} are two patches along the ring and C_p is the central patch. The function d(·,·) is any distance function between two patches (for example, the Euclidean norm of their grayscale differences), and f(x) is defined by (3): f(x) = 1 if x ≥ τ and 0 otherwise. We take a value τ slightly greater than zero to obtain better stability in homogeneous areas [9]. In practice, we retrieve patches with nearest-neighbor sampling instead of interpolation, which speeds up processing with little or no loss of performance. Finally, the features so obtained are aggregated into a histogram.
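As a concrete illustration, the TP-LBP code of (1) can be sketched in NumPy. The function name, the parameter defaults (r = 2, S = 8, w = 3, α = 2, τ = 0.01), and the ring geometry are our own assumptions for the sketch, not the authors' implementation; the nearest-neighbour patch sampling follows the description above.

```python
import numpy as np

def tplbp_code(img, x, y, r=2, S=8, w=3, alpha=2, tau=0.01):
    """Sketch of the TP-LBP code of Eq. (1) for the pixel (x, y).

    S patches of size w x w are sampled on a ring of radius r around the
    central patch C_p; patch i is compared with patch i + alpha (mod S)
    through their distances to C_p, and each comparison sets one bit.
    """
    half = w // 2

    def patch(cx, cy):
        # Nearest-neighbour sampling of a w x w patch (no interpolation).
        cx, cy = int(np.rint(cx)), int(np.rint(cy))
        return img[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)

    def d(p, q):
        # Euclidean norm of the grey-level differences between two patches.
        return np.linalg.norm(p - q)

    def f(v):
        # Threshold function of Eq. (3); tau slightly above zero gives
        # stable codes in homogeneous regions.
        return 1 if v >= tau else 0

    centre = patch(x, y)
    code = 0
    for i in range(S):
        a_i = 2 * np.pi * i / S
        a_j = 2 * np.pi * ((i + alpha) % S) / S
        c_i = patch(x + r * np.cos(a_i), y + r * np.sin(a_i))
        c_j = patch(x + r * np.cos(a_j), y + r * np.sin(a_j))
        code += f(d(c_i, centre) - d(c_j, centre)) * 2 ** i
    return code
```

With S = 8 the code is one byte per pixel, so the codes of a region can be pooled into a 256-bin histogram as described above.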

Detector principal component analysis - scale invariant feature transform (PCA-SIFT)
The PCA-SIFT detector [5], [9], like SIFT [14], measures descriptor similarity with the Euclidean distance. It is based on four essential steps: detection of scale-space extrema, localization of key points, orientation assignment, and descriptor construction. The scale space of an image is defined as a function L(x, y, σ) generated by the convolution of a variable-scale Gaussian G(x, y, σ) with an input image I(x, y), written as (4),

L(x, y, σ) = G(x, y, σ) * I(x, y)  (4)

where * is the convolution operator in x and y and G(x, y, σ) is the Gaussian kernel. To retain stable key points, the Gaussian difference between two neighboring scales separated by a factor k is computed as (5),

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)  (5)

A Hessian matrix is then used for thresholding in order to retain only the relevant key points; the drawback of the extrema-detection step is that it yields a large number of interest points. The Hessian matrix is defined as (6),

H = [D_xx D_xy; D_xy D_yy]  (6)

and from it the threshold metric of (7) can be determined,

Tr(H)² / Det(H) < (r + 1)² / r  (7)
Here Det(H) is the determinant and Tr(H) the trace of the Hessian matrix; the test of (7), governed by the parameter r, keeps only points of interest with high intensity variation in both directions. A key point is characterized by five parameters (x, y, σ, θ, u): the pair (x, y) corresponds to its position in the original image, the pair (σ, θ) describes its scale and orientation, and u is its descriptor vector, which is obtained from its neighborhood. The neighborhood is split by a 4 × 4 grid; the gradient is then calculated at each of the 16 grid locations and quantized according to an 8-orientation histogram. The concatenation of these features gives a descriptor vector with 128 elements. Figure 2 represents the concatenation of descriptor vectors and the extraction of key points by the SIFT technique.
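The scale-space construction of (4)-(5) and the Hessian-based test of (6)-(7) can be sketched as follows. The separable Gaussian filter, the kernel radius of 3σ, and the finite-difference derivatives are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian filter; the kernel radius of 3*sigma is an assumption.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog(img, sigma, k=np.sqrt(2)):
    # Difference of Gaussians, Eq. (5): D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma).
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)

def passes_edge_test(D, x, y, r=10.0):
    # Hessian test of Eqs. (6)-(7) with finite-difference derivatives:
    # keep the point only if Tr(H)^2 / Det(H) < (r + 1)^2 / r.
    dxx = D[y, x + 1] - 2 * D[y, x] + D[y, x - 1]
    dyy = D[y + 1, x] - 2 * D[y, x] + D[y - 1, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1]
           - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4
    tr, det = dxx + dyy, dxx * dyy - dxy**2
    return det > 0 and tr**2 / det < (r + 1) ** 2 / r
```

An isolated blob passes the test (its intensity varies in both directions), while a point on a straight edge is rejected, which is exactly the pruning behaviour described above.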
The SIFT detector is then associated with the principal component analysis (PCA) technique. PCA [19], [20] is a well-known dimensionality-reduction technique that transforms 2D images into 1D column vectors and projects the high-dimensional data into an affine subspace. First, the 2D image is transformed into a 1D vector, written in the form of (8).
The second step is the normalization of the input images, subtracting from each image the average of all the training images according to (9),

x̄_i = x_i − m, where m = (1/P) Σ_{i=1}^{P} x_i  (9)

Third, the vectors are stacked side by side to obtain a matrix of size P × N, where P is the number of training images and N the size of the image vector. Fourth, the covariance matrix is calculated according to (10).
The eigenvalues and eigenvectors of the covariance matrix are then calculated. The different steps are summarized in the following algorithm:

Algorithm: Determination of PCA
Input: matrix X
Outputs: mean value, eigenvectors, and eigenvalues
1. Normalize the input images using (9).
2. Compute the covariance matrix using (10).
3. Compute the eigenvalues and eigenvectors of the covariance matrix.
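The algorithm above admits a minimal NumPy sketch, assuming the P × N data layout described earlier (one vectorized image per row); the function name and component count are illustrative.

```python
import numpy as np

def pca(X, n_components):
    """PCA of a P x N matrix X (P training images as 1-D row vectors):
    normalise by the mean image (Eq. (9)), form the covariance matrix
    (Eq. (10)), then keep the leading eigenvectors.
    """
    m = X.mean(axis=0)                 # mean image over the P samples
    Xc = X - m                         # Eq. (9): subtract the mean
    C = (Xc.T @ Xc) / X.shape[0]       # N x N covariance matrix, Eq. (10)
    vals, vecs = np.linalg.eigh(C)     # eigh: C is symmetric
    order = np.argsort(vals)[::-1]     # sort by decreasing eigenvalue
    return m, vals[order][:n_components], vecs[:, order][:, :n_components]
```

A new image x is then projected into the subspace as (x − m) @ vecs, which yields the reduced descriptor used for matching.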

Detector speeded up robust features (SURF)
The SURF detector [6] proposes a new method for the local description of interest points. Strongly influenced by the SIFT approach [14], it couples a recording step of the analysis area with the construction of a histogram of oriented gradients. The computational technique establishes rotation invariance for each local interest-point filtering window.

CLASSIFICATION

Support vector machine (SVM)
This section describes how two feature vectors are compared and gives a brief overview of the classifiers used: SVM and distance measurement. After the extraction of the characteristics of each face image by SIFT-PCA, SURF, or TP-LBP, classification is carried out as the last step, as indicated in our method proposed in Figure 3. SVM is a powerful statistical learning technique generally used to solve pattern-detection problems. Initially, SVM is a binary classification technique based on a two-set problem: the binary SVM tries to optimize the hyperplane dividing the set into two subsets, maximizing the margin between the hyperplane and the two sets labeled −1 and 1.
Suppose D is a dataset in which the x_i, i = 1, …, K, are the k-dimensional features extracted from the faces and the y_i are the labels; the training set can be written as (11),

D = {(x_i, y_i) | x_i ∈ R^k, y_i ∈ {−1, 1}}  (11)

and the separating function of the linear technique can be expressed by (12),

f(x) = sign(w·x + b)  (12)

In the nonlinear case, (12) is not valid as such; the input features are instead mapped into a high-dimensional space through a kernel function, which helps to improve the accuracy of the classification [25], [26]. The linear and polynomial kernels give good results, as does the radial basis function (RBF) kernel, in face recognition applications [27]. The SVM classifier is then used to address the multi-class challenge; multi-class SVM techniques divide into two families, one-versus-all and one-versus-one [28]. In our work, we used the Gaussian kernel to match the test base against the training base. To evaluate the performance of our approach and compare it with other existing techniques, it is tested on two databases; each database is divided into two sets with different percentages. The test of our approach is done by (12).
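Equations (11) and (12) can be made concrete with a small worked example. The four points, the labels, and the hyperplane parameters w = (1, 0), b = −1 below are illustrative values solved by hand for this toy set, not data from the ORL or Grimace experiments.

```python
import numpy as np

# Toy two-class training set in the form of Eq. (11): feature vectors x_i
# with labels y_i in {-1, +1}.
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [2.0, 1.0]])
y = np.array([-1, -1, 1, 1])

# For this linearly separable set the maximum-margin hyperplane can be
# written down by hand: w = (1, 0), b = -1, i.e. the boundary x1 = 1.
w, b = np.array([1.0, 0.0]), -1.0

def decide(x):
    # Linear decision function of Eq. (12): f(x) = sign(w . x + b).
    return int(np.sign(w @ x + b))

# The margin constraints y_i (w . x_i + b) >= 1 hold with equality on the
# support vectors (the points closest to the hyperplane).
margins = y * (X @ w + b)

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian (RBF) kernel used in the nonlinear case:
    # k(x, z) = exp(-gamma * ||x - z||^2).
    return np.exp(-gamma * np.sum((x - z) ** 2))
```

Here every point sits exactly on the margin (all margins equal 1), so all four are support vectors; in the nonlinear case the dot product w·x is replaced by kernel evaluations such as rbf_kernel.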

Classification by distance measurement
The similarity between vectors is based on a distance measure, which serves to compute the minimum distance between two vectors. Among these distances are the Euclidean distance and the Manhattan distance. A distance on a set E is a map satisfying [29], [30]:

d: E × E → R+
∀x, y ∈ E, d(x, y) = d(y, x)
∀x, y ∈ E, d(x, y) = 0 ⇔ x = y

Here d(u, v) designates a function that returns a scalar value representing the similarity of two vectors u, v ∈ R^n. Usual distances include the Manhattan distance (or 1-distance) and the Euclidean distance (or 2-distance); similarity is measured by (13) and (14),

d_1(u, v) = Σ_{i=1}^{n} |u_i − v_i|  (13)

d_2(u, v) = √(Σ_{i=1}^{n} (u_i − v_i)²)  (14)

The classification rate is determined by measuring the minimum distance between two vectors, using the 1-distance and the 2-distance.
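The minimum-distance classification of (13) and (14) reduces to a few lines of NumPy; the function names and toy descriptors are illustrative.

```python
import numpy as np

def d1(u, v):
    # Manhattan (1-distance), Eq. (13): sum of absolute differences.
    return np.abs(u - v).sum()

def d2(u, v):
    # Euclidean (2-distance), Eq. (14): root of the summed squared differences.
    return np.sqrt(((u - v) ** 2).sum())

def nearest(train, labels, probe, dist=d2):
    # Classify a probe descriptor by its minimum distance to the training set.
    i = int(np.argmin([dist(t, probe) for t in train]))
    return labels[i]
```

The recognition rate is then the fraction of test descriptors whose nearest training descriptor carries the correct label.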

THE ORL AND GRIMACE FACIAL EXPRESSION DATABASES
The choice of database for evaluating our contribution to facial expression recognition or detection is important. For validation, we use two databases: ORL [11] and Grimace [31]. The ORL database contains 40 subjects with 10 variations each, and the Grimace database contains 18 subjects with 20 poses and variations in image lighting and expression, as shown in Figure 4.

Proposed method
Our technique is based on the work referred to in [7], [8]. First, we split the main database into a training database and a test database with different percentages, for example an 80%/20% split or a 60%/40% split between the two bases. In the second step, we extract the key points using three techniques: PCA-SIFT, SURF, and TP-LBP. Finally, classification by the statistical methods (SVM, Manhattan, and Euclidean) determines the recognition rate. The steps of our technique are summarized in the flowchart of Figure 5.
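The first step of the flowchart, partitioning the base into training and test sets with a chosen percentage, can be sketched as follows; the function name and the shuffling seed are our own assumptions for illustration.

```python
import numpy as np

def split_database(images, labels, train_fraction, seed=0):
    """Split the full base into training and test bases (e.g. an 80/20 or
    60/40 split), as in the first step of the flowchart.
    """
    rng = np.random.default_rng(seed)        # illustrative fixed seed
    idx = rng.permutation(len(images))       # shuffle subjects' samples
    cut = int(train_fraction * len(images))  # boundary between the two bases
    tr, te = idx[:cut], idx[cut:]
    return (images[tr], labels[tr]), (images[te], labels[te])
```

Feature extraction (PCA-SIFT, SURF, or TP-LBP) is then run on both bases, and the classifier is fitted on the training base and scored on the test base.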
The figures represent some results of our simulation applied to the ORL database. Figure 6 shows the determination of similar points by the PCA-SIFT and SURF detectors combined with RANSAC; the result of our simulation shows face authentication under different variations based on distance measurement. Figure 7 shows some simulations of our results, reported in Table 1 for the ORL database [10]. The results of our simulations show that the second technique is more accurate in terms of similarity rate than the first technique, and our results exceed those reported in the literature.
Tables 2 and 3 represent the simulation results of the two techniques applied to the Grimace database [31]. We evaluate our results on two parameters: the similarity rate and the processing time. The simulated results show that the second technique is more accurate in calculating the similarity rate than the first; its authentication time does not exceed 0.560 s on the ORL database and 0.675 s on the Grimace database. This technique therefore offers good results in terms of recognition rate with an acceptable processing time. The first technique, on the other hand, gives good results in terms of processing time, but its recognition rate is lower. In future work, we will test our technique on the public CK and Oulu-CASIA databases [31], which contain more variation than the other databases.

CONCLUSION
In this article, we proposed two approaches and measured their classification rate and average processing time. The first technique (PCA-SIFT and SURF with RANSAC) was evaluated in terms of recognition rate; the simulations applied to the two databases show that the Euclidean distance metric gives better results than the Manhattan metric. The second technique extracts the characteristics by TP-LBP with SVM and then measures the decision rate. Our technique was validated on two databases with several variations, each partitioned into a test database and a training database with different percentages. Simulation results show that the second technique gives the better recognition rate on both databases, with an average processing time that does not exceed 300 ms.

Figure 1. The three-patch LBP code


Int J Artif Intell, Vol. 12, No. 4, December 2023: 1644-1653

The authors propose applying Haar wavelets to the integral image in order to decrease the processing time. This technique is based on computing the derivative along the horizontal and vertical axes. The wavelet responses can be used to trace the gradient orientation and determine the deviation angle from the initial image. Compared with other techniques, it is robust to different face changes and gives a good result in terms of processing time [21]-[23]. The SURF technique, which extracts the points of great variation, is based on the following steps:
− Interest points based on the Hessian matrix.
− Localization of interest points.
− Description of the interest point.
− Descriptor components.
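The speed of the Haar-wavelet responses comes from the integral image, on which any box sum costs four lookups regardless of box size. A minimal sketch follows; the horizontal Haar filter layout (left box minus right box around the centre column) is our own illustrative choice.

```python
import numpy as np

def integral_image(img):
    # S[y, x] = sum of img over the rectangle [0..y, 0..x].
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(S, y0, x0, y1, x1):
    # Sum of img[y0..y1, x0..x1] in at most four lookups, independent of
    # the box size; this is what makes Haar responses cheap to evaluate.
    total = S[y1, x1]
    if y0 > 0:
        total -= S[y0 - 1, x1]
    if x0 > 0:
        total -= S[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += S[y0 - 1, x0 - 1]
    return total

def haar_x(S, y, x, half):
    # Horizontal Haar response at (y, x): right box minus left box, each
    # `half` columns wide and 2*half + 1 rows tall (illustrative layout).
    left = box_sum(S, y - half, x - half, y + half, x - 1)
    right = box_sum(S, y - half, x, y + half, x + half - 1)
    return right - left
```

On a constant image the response is zero, while at a vertical intensity step it is proportional to the step height, which is the derivative behaviour described above.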


Figure 4. Some variations of the faces belonging to the ORL and Grimace databases

Figure 7. Descriptor similarity measurement by our technique: application on the Grimace base

Table 1. Performance evaluation on the ORL database

Table 2. Performance evaluation on the Grimace database

Table 3. Estimation of the average processing time for each variation: application to the ORL and Grimace databases (columns: Database, PCA-SIFT, SURF, the proposed method)