Real Time Face Recognition Using Minimum Measurements when at Least Two Thirds of the Face is Present in the Image


Manishankar Mondal, G. M. Atiqur Rahaman, Debashish Tarafder
Department of Computer Science and Engineering, Khulna University, Khulna, Bangladesh
mani_ku_cse_01@yahoo.com, atiq_cse_ku@yahoo.co.in, deb_ku_cse_01@yahoo.com

Abstract
A real-time face recognition architecture using a minimum number of face measurements is presented in this paper. The measurements we take are: neck width, eyebrow width, distance between eyebrow and eyeball, eyeball width, the number of black points in the eyebrow and eyeball regions, and the amount of forehead. We present an algorithm that can recognize the faces of moving persons, with different expressions, when horizontally at least two thirds of the face is present in the image frame. The method uses no color measurements, because color measurements sometimes depend heavily on operating systems, graphical resolutions, etc., and cause problems in face recognition. The measurements we take are expression invariant. For a training face these measurements are determined and stored, and recognition is performed based on the weighted sum of the errors of these measurements. This method is faster than most existing methods and has a high recognition rate.
Keywords: Face Recognition, Laplacian Operator, Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Weighted Modular Principal Component Analysis (WMPCA).

I. INTRODUCTION

Image processing covers a vast area of research, and pattern recognition is one of its strongest branches; face recognition falls within it. A great many research papers address face recognition, many algorithms have been implemented for detecting and recognizing faces, and several well-developed systems exist. Face recognition is mainly of two kinds: static face recognition and dynamic, or real-time, face recognition. In the first, the face image of the person to be recognized is taken while the person holds a fixed or predefined pose. The second is more complex: the person keeps moving in a video sequence, some of his images are captured, and the face must be extracted from them and recognized. Many techniques proposed and developed for static face recognition give good results, but it is very difficult to recognize faces in images taken from video sequences of moving persons. The existing algorithms are not robust enough when the face appears at an angle, because then the full face is not available in the image. Different papers present different mechanisms for face recognition, and in almost all of them the core mechanism is PCA; some proposed systems also use ICA, LDA, etc. PCA and ICA fit best in static face recognition systems, where the face image always lies in a frame of fixed size.
Dominique Valentin et al. [3] integrate psychological and physiological information for recognizing faces. Their recognition system is view based, and recognition performance increases with the number of views. Taking views of a face at different orientations, they found that their system performs very well when 10 views per face are used at learning time. Their point is that the recognition or identification of a face from a new orientation can be achieved efficiently by interpolating between two extreme orientations.
Bruce A. Draper et al. [1] compare PCA with two architectures of ICA. They show that PCA decorrelates the input data using second-order statistics and generates compressed data with minimum mean-squared reprojection error, whereas ICA minimizes both second-order and higher-order dependencies in the input. In some cases PCA gives better results than ICA, and in other cases ICA performs better; they present ICA as a generalization of PCA. For facial identity detection, ICA architecture II performs best, while for recognition PCA performs better.
A. Pavan Kumar et al. [4] use Weighted Modular PCA, dividing a face into sub-regions consisting of the forehead, eyes, nose and chin. They apply PCA to each sub-region, determine its eigenvectors, and keep the S most significant ones; each sub-region is then expressed as a linear combination of these S eigenvectors. When a face is to be recognized, the eigenvectors for all the sub-regions of the training face are compared with the corresponding ones of the test face.
In [2] a method is presented that is both frame-size invariant and expression invariant. A second-order Laplacian operator is used for face detection; the eyebrow and chin lines of the face are then detected, and second-order polynomial equations are fitted to these lines. The coefficients of these equations are stored for recognition. The method also considers the color values of three regions (the eye, nose and lip regions), and the average red, green and blue values are stored for a training face.
Yongbin Zhang et al. [5] use a subspace projection method based on the idea that a test face may show expressions not present in the training images of the same face. They separate different portions, or subspaces, of a face image and weight these subspaces so that those that change little across expressions get greater weights than those that change a lot. They use PCA, ICA and LDA for projecting the subspaces and show that LDA is the most promising of the three.
PCA is a blind method that does not consider the actual shape of the face. In this paper we present an algorithm that can recognize the faces of moving persons when horizontally at least two thirds of the longitudinal face image is present in the frame. From a training face we extract seven feature values and store them for later recognition. The features we consider are eyebrow width, neck width, eyeball width, the distance between eyebrow and eyeball, the number of black points in the eyeball and eyebrow regions, and the amount of forehead. All these features are expression invariant. We do not consider the contribution of color, because in different situations lighting changes the apparent color of the face; we consider only the expression-invariant portions of the face for recognition.
The rest of the paper is organized as follows: Section II describes edge detection with the Laplacian convolution mask, Section III describes the architecture of face recognition, Section IV describes the recognition mechanism, Section V elaborates performance considerations, and Section VI concludes.


II. EDGE DETECTION BY LAPLACIAN
CONVOLUTION MASK



Here we use the popular second-order Laplacian operator to detect edges. The Laplacian of a function f(x, y) is denoted by \nabla^2 f(x, y) and is defined as:

\nabla^2 f(x, y) = \frac{\partial^2 f(x, y)}{\partial x^2} + \frac{\partial^2 f(x, y)}{\partial y^2}    (1)

We can use discrete difference approximations to estimate the derivatives and represent the Laplacian operator as the following 3 x 3 convolution mask:



0 1 0
1 -4 1
0 1 0

Fig. 1 Convolution Mask

However, there are disadvantages to using second-order derivatives. First-derivative operators already exaggerate the effects of noise, and second-order derivatives exaggerate noise roughly twice as much; moreover, they give no directional information about the edge.
The mask in Fig. 1 is convolved over all pixels of an image, changing the color of the pixel under its center. The operator triggers along the discontinuities, or edges, in the image. A face and the face detected from it by matrix convolution are shown in Fig. 2.



Fig. 2 Main Face, Detected Face

In the second picture of Fig. 2 the blue line determines
the boundary of the face.
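To make the convolution concrete, here is a minimal sketch in Python, assuming the input is a grayscale image held in a NumPy array; the threshold value and the function name are illustrative assumptions, not parameters from the paper.

    import numpy as np
    from scipy.ndimage import convolve

    # The 3 x 3 Laplacian convolution mask of Fig. 1.
    LAPLACIAN_MASK = np.array([[0,  1, 0],
                               [1, -4, 1],
                               [0,  1, 0]], dtype=float)

    def detect_edges(gray, threshold=30.0):
        # Slide the mask over every pixel; the response is large where the
        # intensity changes abruptly, i.e. along discontinuities (edges).
        response = convolve(gray.astype(float), LAPLACIAN_MASK, mode="nearest")
        return np.abs(response) > threshold  # boolean edge map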


III. ARCHITECTURE OF FACE
RECOGNITION

Face recognition is a mammoth task. The process by which we store the values from a training face is elaborated here as an architecture. It consists of detecting the face area, the eye region, the eyebrow region and the forehead region; calculating the required measurements, such as the width of the neck, the width of the eyebrow, the distance between eyebrow and eyeball, the amount of forehead, and the number of black points in the eyebrow; and storing these measurements in the database. Training values are extracted from only one sample image per person. We extract feature values from horizontally two thirds of the longitudinal face image, starting from either side of the image and including the eye region. The overall task of extracting and storing values is shown in Fig. 3.

Fig. 3 Architecture of Face Recognition
A. Detect Face Area

We detect the face area by edge detection with the second-order Laplacian operator stated above. First the outer edges of the input image are detected; then the head-top and neck positions are detected. This gives the upper and lower boundaries of the face and the boundaries of its two sides. Two program-generated examples with blue face boundaries are shown in Fig. 4; the first includes the full face, while the second does not.

Fig. 4 Main Face, Detected Face (for a full face, and for a face which is not complete)

B. Region Detection
After face detection, the y-coordinates of the head top and the neck position are determined in order to calculate the perpendicular face length. From a great many faces we have determined that the eyes, nose and lips sit at definite distance ratios from the top of the head: multiplying the perpendicular face length by 0.49 gives the distance of the eyes from the head top, and multiplying it by 0.77 gives the distance to the lips. For the eye region we determine two horizontal lines, one at a distance of (face length * 0.33) and the other at a distance of (face length * 0.53) from the head top; these two lines exactly cover the eye region. The upper and lower boundaries of the forehead-and-hair region of the face are, respectively, the horizontal line through the y-coordinate of the head top and the upper horizontal boundary of the eye region.
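As a minimal sketch of these fixed-ratio boundaries, assuming the y-coordinates of the head top and the neck have already been detected (the function name and the returned keys are ours, not the paper's):

    def region_boundaries(head_top_y, neck_y):
        # Perpendicular face length, from the head top to the neck position.
        face_length = neck_y - head_top_y
        return {
            "eye_line":   head_top_y + 0.49 * face_length,  # distance of the eyes
            "lip_line":   head_top_y + 0.77 * face_length,  # distance of the lips
            "eye_top":    head_top_y + 0.33 * face_length,  # upper eye-region line
            "eye_bottom": head_top_y + 0.53 * face_length,  # lower eye-region line
            # Forehead/hair region: from the head top down to the eye region.
            "forehead_top":    head_top_y,
            "forehead_bottom": head_top_y + 0.33 * face_length,
        }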
C. Feature Extraction
We extract features mainly from the forehead and eye regions. From the forehead region the amount of forehead in pixels is determined, and a ratio measurement is then calculated by dividing the amount of forehead by the face length. In the eye region we detect the eyeball and the eyebrow; the diameter of the eyeball and the perpendicular width of the eyebrow are then calculated in pixels. The features we take are listed here, with a code sketch after the list:
1. (Neck Width / Total Face Length) * 100
2. (Eyeball Width / Total Face Length) * 100
3. (Eyebrow Width / Total Face Length) * 100
4. Amount of Forehead / Total Face Length
5. (Distance Between the Upper Boundaries of Eyebrow and Eyeball / Face Length) * 100
6. (Number of Black Points in the Eye Region / Total Face Length) * 10
7. (Number of Black Points in the Eyebrow Region / Total Face Length) * 10
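A minimal sketch of assembling this feature vector, assuming the raw pixel measurements of Sections III-D to III-F are already available (the dictionary keys are illustrative names of ours):

    def feature_vector(m):
        # m: raw pixel measurements; "face_length" is the perpendicular
        # face length in pixels.
        L = m["face_length"]
        return [
            m["neck_width"]        / L * 100,  # 1. neck width
            m["eyeball_width"]     / L * 100,  # 2. eyeball width
            m["eyebrow_width"]     / L * 100,  # 3. eyebrow width
            m["forehead_amount"]   / L,        # 4. amount of forehead
            m["brow_eye_distance"] / L * 100,  # 5. eyebrow-to-eyeball distance
            m["eye_black_points"]  / L * 10,   # 6. black points in eye region
            m["brow_black_points"] / L * 10,   # 7. black points in eyebrow region
        ]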
Ratios are taken to make face recognition independent of frame size. Our process can also recognize a face when only horizontally two thirds of the longitudinal face image is present, beginning from either side of the face, because this still yields an eye region. This is illustrated in Fig. 5, where a portion of the face on the right side is absent, so we take features from the left portion of the face, including the left eye. For this face we determine a right boundary, shown as the larger blue vertical line in the program-generated image in Fig. 5, and we extract features from the left side of this boundary. The smaller vertical line is the left boundary of the eye region; the larger vertical line is its right boundary. Table 1 shows the extracted feature values for two persons.

Fig. 5 Eye Region Boundaries and Neck Width

Table 1 Feature Values for Two Persons

Feature                                  Person 1    Person 2
Neck Width                               51.44444    48.77778
Eyeball Width                            9           10
Eyebrow Width                            0           7
Distance Between Eyebrow and Eyeball     5.714286    8.847584
Black Points in Eye Region               2.040816    4.869888
Black Points in Eyebrow Region           0           11.22677
Amount of Forehead                       14.72245    12.4461

D. Determine Neck Width
The neck width is determined as the horizontal distance between the two sides of the face at the y-coordinate of the neck position. For the face in Fig. 5 the neck width is the length of the horizontal blue line generated by our program. This horizontal line extends to the blue boundary line and does not cross it, because the blue boundary line defines the limit of the face region.

E. Determination of Eye-Related Measurements
Eye-related measurements are calculated after face detection using the following algorithm (a code sketch follows the steps):
1. Detect the eye region by determining its upper, lower, left and right boundaries.
2. Determine the RGB color values of the blackest point in the eye region.
3. Convert the points whose color values are very near those of this blackest point into fully black points by setting their RGB values to (0, 0, 0).
4. Determine the vertical beginning and ending points of the eyeball by checking the black points of the eye region, starting from its bottom limit. This gives two horizontal limits indicating the y-coordinates of the vertical beginning and ending of the eyeball; the distance between these two limits is the eyeball width.
5. Proceeding further from the bottom of the eye region, the eyebrow region comes after the eyeball region. As in step 4, determine the two horizontal limits of this region; the distance between them is the eyebrow width.
6. Determine the distance between the upper limits of the eyebrow and the eyeball.
7. Count the total number of black points in the eye region (including eyeball and eyebrow), and also the number in the eyebrow region alone.
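A rough sketch of steps 2 to 7, assuming the eye region is available as an H x W x 3 RGB NumPy array; the darkness tolerance is our assumption, since the paper does not give an exact "very near" criterion.

    import numpy as np

    def eye_measurements(eye_region, tolerance=40):
        intensity = eye_region.sum(axis=2)                 # summed RGB per pixel
        black = intensity <= intensity.min() + tolerance   # steps 2-3
        has_black = black.any(axis=1)                      # rows with black points
        # Scan rows from the bottom of the region upward, collecting contiguous
        # runs of black rows: the first run is the eyeball, the next the eyebrow.
        runs, start = [], None
        for y in range(black.shape[0] - 1, -1, -1):
            if has_black[y] and start is None:
                start = y                                  # bottom of a run
            elif not has_black[y] and start is not None:
                runs.append((y + 1, start))                # (top, bottom) of run
                start = None
        if start is not None:
            runs.append((0, start))
        (ball_top, ball_bot), (brow_top, brow_bot) = runs[0], runs[1]
        return {
            "eyeball_width": ball_bot - ball_top,          # step 4
            "eyebrow_width": brow_bot - brow_top,          # step 5
            "brow_eye_distance": ball_top - brow_top,      # step 6 (upper limits)
            "eye_black_points": int(black.sum()),          # step 7, whole region
            "brow_black_points": int(black[brow_top:brow_bot + 1].sum()),  # step 7
        }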
Fig. 6 and Fig. 7 show program-generated image sequences for two faces. The eyebrow width of the first face is greater than that of the second, while the ratio-based forehead amount of the second face is greater than that of the first.






Fig. 6 Eyebrow Boundaries, Eyeball Boundaries, Eyebrow Black Points, Eyeball Black Points, Black Points in the Hair Region Above the Eye Region






Fig. 7 Eyebrow Boundaries, Eyeball Boundaries, Eyebrow Black Points, Eyeball Black Points, Black Points in the Hair Region Above the Eye Region

F. Determination of Forehead Amount
The forehead amount is determined by the following algorithm:
1. Detect the forehead region of the face; its right boundary is that of the eye region. Determine the RGB color values of the blackest point in this region.
2. Make black all points of the region whose RGB values are very near those of the blackest point, by setting their RGB values to (0, 0, 0).
3. Count all the non-black points; this count gives the forehead amount.
Fig. 6 and Fig. 7, generated by our program, show the totally black hair region of the head. The region above the eye region, excluding the black points, is the forehead region; counting the points in this region yields the measurement.
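A minimal sketch of this count, under the same assumptions as the eye-region sketch (an RGB NumPy array and an illustrative darkness tolerance):

    def forehead_amount(forehead_region, tolerance=40):
        # forehead_region: pixels between the head top and the upper
        # eye-region boundary, bounded on the right by the eye region.
        intensity = forehead_region.sum(axis=2)
        black = intensity <= intensity.min() + tolerance  # steps 1-2: hair points
        return int((~black).sum())                        # step 3: non-black count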
IV. RECOGNITION
We perform recognition using a weighted-error mechanism. As stated above, we take seven measurements on the face area for recognition. The measurements that matter most are: (1) neck width, (2) black points in the eye region, (3) black points in the eyebrow region, (4) forehead amount, (5) eyebrow width, and (6) distance between eyebrow and eyeball. The remaining measurement, the eyeball width, helps only a little. The weighted error is calculated as the sum of the absolute differences between the stored features of a face and the extracted features of the face to be recognized. The weighting factor of the six most important features is 1 and that of the eyeball width is 0.5; that is, the absolute differences of the six features above are multiplied by 1, the difference for the eyeball width is multiplied by 0.5, and these weighted differences are added together to obtain the weighted error. The stored face with the lowest total weighted error is selected as the recognized face.
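A minimal sketch of this matching rule; the feature order follows the list in Section III-C, so the 0.5 weight falls on the eyeball width (feature 2).

    # Weight 1 for the six most important features, 0.5 for the eyeball width.
    WEIGHTS = [1.0, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0]

    def weighted_error(stored, probe):
        # Weighted sum of absolute feature differences.
        return sum(w * abs(s - p) for w, s, p in zip(WEIGHTS, stored, probe))

    def recognize(database, probe):
        # database: person id -> stored feature vector; the lowest error wins.
        return min(database, key=lambda pid: weighted_error(database[pid], probe))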
V. PERFORMANCE CONSIDERATIONS

Most previous face recognition methods are PCA based. PCA is blind in the sense that it considers no shape-related knowledge; it performs point-to-point correspondence for recognition, which is of little worth in real-time face recognition. In our method we consider a minimum number of shape-related features, and it is the fastest of all the existing methods.
A. Time and Storage Complexities
PCA-based calculations are very time consuming because they are matrix related. The method in [2] is also time consuming because of the difficulty of curve extraction and polynomial regression. Our process involves no matrix-related calculations, and as we calculate only seven values we need very little storage. Ours is the fastest of all existing processes.
B. Recognition Efficiency
We used images of 100 persons, with more than 5 samples per person. We implemented the PCA and WMPCA methods and the method described in [2], using three samples per person to determine the eigenvalues for PCA and WMPCA. PCA and WMPCA are not frame-size invariant, but the method in [2] is independent of frame size. The comparison for images of the same size is shown here:
(Chart: recognition rates of PCA, WMPCA, the method in [2], and our method for images of constant frame size.)

Fig. 8 Comparison Among Four Methods When Frame
Size is Constant

Our process can recognize different sizes of faces of the
same person in different frames.








Fig. 9 Faces With Different Parts and Different Expressions

In the figure above there are 7 face images: the first 4 are of one person and the remaining 3 are of another. The faces have different sizes and expressions, and their frame sizes also differ; two of the frames include only a portion of the total face. Our face recognition program recognizes these faces efficiently. When the sizes of the faces differ, the PCA and WMPCA methods do not give correct results at all. The comparison when the face sizes are different is shown here:
(Chart: recognition rates of PCA, WMPCA, the method in [2], and our method when the face sizes differ.)

Fig. 10 Comparison Among Four Methods When Face Size is Different


VI. CONCLUSION

The proposed face recognition method is expression invariant and gives a high recognition rate. It needs very little storage, because only seven feature values are extracted from a face and stored, and its time complexity is also very low. Our process extracts only shape knowledge and excludes color-based knowledge. We use a second-order edge detection operator, the Laplacian, for face detection, since gradient-based edge detection operators introduce too much complexity. Our process is an improvement over current real-time face recognition techniques.
REFERENCES

[1] W. S. Yambor, B. A. Draper, R. J. Beveridge. “Analyzing PCA-based Face Recognition Algorithms: Eigenvector Selection and Distance Measures”. Available at: http://www.cs.colostate.edu/evalfacerec/papers/eemcvcsu.pdf

[2] Manishankar Mondal, Md. Maitur Rahman, Md. Almas Hossain, Kamrul Hasan Talukder. “Real-Time Face Recognition Using Polynomial Regression and Sub-region Color Component Distribution”. Proc. ICCIT 2005, pp. 1177-1182.

[3] D. Valentin, H. Abdi, B. Edelman. “What Represents A Face: A Computational Approach for the Integration of Physiological and Psychological Data”. Perception, vol. 26, 1997.

[4] P. A. Kumar, V. Kamakoti, S. Das. “An Architecture for Real Time Face Recognition Using WMPCA”. Available at: http://vplab.cs.iitm.ernet.in/publi_journal/frac.pdf

[5] Y. Zhang, A. M. Martínez. “Recognition of Expression Variant Faces Using Weighted Subspaces”. Available at: http://www.ece.osu.edu/~aleix/icpro4.pdf

[6] G. Rajkiran, K. Vijayan. “An Improved Face Recognition Technique Based on Modular PCA Approach”. Pattern Recognition Letters, vol. 25, no. 4, pp. 429-436, 2004.

[7] T. E. de Campos, R. S. Feris, R. M. Cesar Junior. “A Framework for Face Recognition from Video Sequences Using GWN and Eigenfeature Selection”. Available at: http://www.nd.edu/~flynn/papers/xin_thesis.pdf

[8] B. A. Graf, F. A. Wichmann. “Gender Classification of Human Faces”. Available at: http://www.cns.nyu.edu/~graf/my_papers/proceedings/grawico2.pdf

[9] D. S. Turaga, T. Chen. “Face Recognition Using Mixtures of Principal Components”. Available at: http://amp.ece.cmu.edu/Publication/Deepak/icip2002_deepak.pdf

[10] L. Torres, L. Lorente, J. Vila. “Automatic Face Recognition of Video Sequences Using Self-Eigenfaces”. Available at: http://gpstsc.upc.es/GTAV/Torres/Publications/ISIC00_Torres_Lorente_Vila.pdf

[11] C. Hesher, A. Srivastava, G. Erlebacher. “Automated Face Tracking and Recognition”. Available at: http://etd.lib.fsu/theses/available/etd_09172003_205355/unrestricted/dissert.pdf

[12] R. Qahwaji, R. Green. “Improving the Recognition Performance for PCA”. Available at: http://digitalimaging.int.brad.ac.uk/publication/Catee3_f.pdf