
# D3.2: A study on PKI and biometrics

## Appendix 2: The Eigenface Approach

As an example, the eigenface approach [TUR91] offers the advantage of efficiency combined with simplicity. It classifies faces based on general facial patterns rather than just the specific features of the face. Each facial image is interpreted as a two-dimensional pattern of light and dark areas, and the characteristic patterns extracted from a set of such images are the eigenfaces. Eigenfaces decompose a human face the same way Fourier analysis decomposes a signal: both the trained face images and new face images can be represented as a linear combination of the eigenfaces. The initial steps followed by the system are the following:

• The system is fed with the images of the humans to be identified (training images). Suppose each image is N × N pixels; flattened, the training images are the N² × 1 vectors Γ_1, Γ_2, …, Γ_M.

• The eigenvalues and eigenvectors are computed (from the average face vector and the covariance matrix). The mean of the training images is the average face:

Ψ = (1/M) ∑_{i=1}^{M} Γ_i.

The difference of each image from the average face is: Φ_i = Γ_i − Ψ.

The scalars λ_i and the vectors u_i are the eigenvalues and eigenvectors of the covariance matrix of the face images: C = (1/M) ∑_{n=1}^{M} Φ_n Φ_nᵀ = A Aᵀ, with A = [Φ_1 Φ_2 … Φ_M].

Given the fact that A Aᵀ is an N² × N² matrix, the direct computation of its eigenvalues and eigenvectors is rather impractical. Taking into account that the matrices A Aᵀ and Aᵀ A (an M × M matrix) have the same non-zero eigenvalues and that their eigenvectors are related as u_i = A v_i, where v_i is an eigenvector of Aᵀ A, the matrix used for their computation is Aᵀ A, and the resulting eigenvectors u_i are normalised.

• The M′ eigenvectors (or eigenfaces) with the largest eigenvalues, which can be used without important loss of information, are selected. The first three eigenfaces mainly capture luminance rather than face information, whereas eigenfaces beyond M′ offer no valuable information about the face.

• Each one of the training faces minus the average face is represented as a linear combination of these eigenfaces; in other words, it is projected onto this basis using ω_k = u_kᵀ Φ_i, with these weights forming the vector Ω = [ω_1, ω_2, …, ω_{M′}]ᵀ.
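The training steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the implementation referred to in the deliverable; the function name and the choice of retained components are assumptions.

```python
import numpy as np

def train_eigenfaces(images, num_components):
    """Compute eigenfaces from a stack of M grayscale training images (M, N, N)."""
    M = images.shape[0]
    gamma = images.reshape(M, -1).astype(float)   # each Gamma_i flattened to an N^2 vector
    psi = gamma.mean(axis=0)                      # average face Psi
    phi = gamma - psi                             # Phi_i = Gamma_i - Psi
    A = phi.T                                     # A = [Phi_1 ... Phi_M], shape (N^2, M)
    # Eigen-decompose the small M x M matrix A^T A instead of the huge N^2 x N^2 one.
    eigvals, v = np.linalg.eigh(A.T @ A / M)      # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    u = A @ v[:, order]                           # u_i = A v_i
    u /= np.linalg.norm(u, axis=0)                # normalised eigenfaces
    weights = phi @ u                             # Omega vector of each training face
    return psi, u, weights
```

Note the A^T A trick from the list above: for M training images and N² pixels per image, the eigenproblem shrinks from N² × N² to M × M.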

Each time the face detection and localisation module passes input to the face recognition module, the face image is processed in the same way as the training images (normalisation, projection onto the eigenspace and representation by a vector Ω), and then the distance between the input image and the face space (the Ω of each training image) is computed and compared to a threshold. If this distance is smaller than the threshold, the face is recognised as one of the training images. This distance is often the Euclidean distance; however, some studies and results have shown that the Mahalanobis distance performs very well.
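The recognition step can likewise be sketched; the helper below is hypothetical, uses the Euclidean distance mentioned above, and leaves the threshold value to the caller.

```python
import numpy as np

def recognise(face, psi, eigenfaces, gallery_weights, threshold):
    """Project a probe face into the eigenspace and match it against the gallery.

    face: flattened N^2 probe vector; psi: average face;
    eigenfaces: (N^2, M') basis; gallery_weights: (M, M') training Omega vectors.
    Returns the index of the closest training face, or None when the smallest
    distance is not below the threshold (face not recognised).
    """
    omega = eigenfaces.T @ (face - psi)                      # probe weight vector
    dists = np.linalg.norm(gallery_weights - omega, axis=1)  # Euclidean distances
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None
```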

Feature-based face recognition uses a priori information about the face in order to find local face features or significant local geometries of the face. The initial steps followed by the system are the following:

• The system is provided with training images.

• Individual features of the face (such as eyes, mouth, head outline, hair and nose) are identified. Quite often the head outline is the first feature to search for since it encloses most of the features.

• A priori information is used to refine the result of the above search in order to locate pre-defined face features.

The same process is applied to the images arriving from the face detection and localisation module, and the results are compared to those of the training images in order to determine the detected individual.

Instead of using a priori information to extract face features, these can be located by finding important geometries in the face. Manjunath et al. [TSE03] presented a method that performed face recognition in three steps. In the first step, features were extracted from the faces through the use of Gabor wavelets. The second step involved representing the extracted face features in a graph-like data structure, of which there were two types: one for the training images (the face gallery set) and one for the images to be processed. The third step involved matching the input face graphs against the face gallery set by calculating the distance between them and thereby determining the degree of similarity.
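The Gabor-wavelet feature extraction in the first step can be illustrated with a single real-valued Gabor kernel, a Gaussian envelope modulated by a cosine wave. This is only a sketch with illustrative parameters; the cited method uses a full bank of wavelets at several scales and orientations.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine wave.

    size: odd kernel side length; wavelength: cosine period in pixels;
    theta: orientation in radians; sigma: envelope width; gamma: aspect ratio.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame to the filter orientation.
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_r / wavelength)

def gabor_response(patch, kernel):
    """Filter response of one image patch (same shape as the kernel)."""
    return float(np.sum(patch * kernel))
```

Responses of such kernels at selected face points are what the graph nodes in the second step would carry.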

The suitability of a classifier for classifying the produced feature vectors depends on their characteristics. Nearest neighbour and neural network classifiers are among the most popular. The first stores the average coefficients for each individual and classifies each new face as belonging to the person with the closest average. In the case of neural network classifiers, the input face is projected onto the eigenface space (if eigenfaces are used) and a new descriptor is derived. This descriptor is provided as input to each person’s neural network, and the network with the maximum output is selected, provided that this output is above the predefined recognition threshold.
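The nearest-neighbour scheme described above (average coefficients per person, classification to the closest average) can be sketched as follows; the function and variable names are illustrative:

```python
import numpy as np

def nearest_average_classifier(gallery_weights, labels):
    """Nearest-neighbour classifier over per-person average coefficient vectors.

    gallery_weights: (M, M') Omega vectors of the training faces;
    labels: person identifier for each training face.
    Returns a function mapping a new weight vector to the closest person.
    """
    labels = np.asarray(labels)
    people = sorted(set(labels.tolist()))
    # Average the coefficient vectors of each individual.
    centroids = np.array([gallery_weights[labels == p].mean(axis=0) for p in people])

    def classify(omega):
        dists = np.linalg.norm(centroids - omega, axis=1)
        return people[int(np.argmin(dists))]

    return classify
```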

 Denis Royer 39 / 40