
17 Nov 2013


AUTOMATIC FACE
RECOGNITION SYSTEM FROM
SURVEILLANCE VIDEOS


Project guide: Mrs. S. Abirami




S.Nisha

Pepsi 20062340



Problem statement

Face localization in a video frame.

Extracting facial features such as face contour, eyes, nose and lips contour.

Comparing features with the standard templates to recognize faces of human beings.

Application

Automatic Attendance Marking of students in a classroom through videos.

Description

This project has two phases:

1. Training phase (Training Images)
   1. Feature vectors
   2. Classifier Model

2. Testing phase (Input Video)
   1. Distance Measurement
   2. Results


High Level Design

Stages of Processing - Explanation

Frame Extraction from video:


This stage involves extracting clear video frames from the input video such that they cover the maximum number of faces.

Software used: X2X Video Capture


Modules

Face Localization

Face Contour Extraction

Facial Features Extraction


1. Detecting eyes

2. Detecting mouth

3. Detecting nose

Module - 1


Face Localization in the Frame.


The faces contained in the frame
are localized and extracted separately and
stored in the database.


Algorithm used: Box merge

Detailed Design of Face Detection

Algorithm - Explanation

Color segmentation:

Color segmentation locates skin-color regions in an input image. The input image is transformed from RGB into YCbCr format:

Y  =  0.299R + 0.587G + 0.114B

Cb = -0.169R - 0.332G + 0.500B

Cr =  0.500R - 0.419G - 0.081B
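The transform above can be sketched directly in NumPy. The Cb/Cr skin-color threshold ranges below are an assumption for illustration; the slides do not give numeric thresholds.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) float RGB image (0..255) to Y, Cb, Cr planes,
    using the coefficients from the slides."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.332 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def skin_mask(rgb, cb_range=(-40, 10), cr_range=(10, 45)):
    """Binary mask of pixels whose Cb/Cr fall in a skin-like range
    (range values are assumed, not from the slides)."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

For a neutral gray pixel the two chroma planes are near zero, so it is correctly rejected as non-skin.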

Algorithm - Explanation

Image Segmentation:

Separate the image blobs in the color-filtered binary image into individual regions.

Remove the black isolated holes and small isolated regions.

Algorithm - Explanation

Separate integrated images into individual faces using Roberts cross edge detection.

Finally, the edge-detected image is integrated with the previous image, and erosion is applied.
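Roberts cross uses two 2x2 diagonal difference kernels. A minimal NumPy sketch of that edge step, plus a simple 3x3 binary erosion (border pixels are left untouched in this sketch; both helpers are illustrative, not the slides' implementation):

```python
import numpy as np

def roberts_cross(img):
    """Gradient magnitude of a float grayscale image using the
    2x2 Roberts cross kernels."""
    gx = img[:-1, :-1] - img[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = img[:-1, 1:] - img[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    return np.sqrt(gx ** 2 + gy ** 2)

def erode(mask):
    """3x3 binary erosion: an interior pixel survives only if its
    whole 3x3 neighbourhood is set."""
    out = mask.copy()
    out[1:-1, 1:-1] = (mask[:-2, :-2] & mask[:-2, 1:-1] & mask[:-2, 2:] &
                       mask[1:-1, :-2] & mask[1:-1, 1:-1] & mask[1:-1, 2:] &
                       mask[2:, :-2] & mask[2:, 1:-1] & mask[2:, 2:])
    return out
```

On a vertical step edge, the response is large along the step and zero in the flat regions.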

Algorithm - Explanation

Based on the color segmentation process,
square boxes are drawn on the skin color
regions and they are merged.
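The "box merge" step can be sketched as repeatedly merging any two overlapping axis-aligned boxes into their common bounding box. The (x1, y1, x2, y2) box format and the code are assumptions; the slides only name the algorithm.

```python
def overlaps(a, b):
    """True if axis-aligned boxes a and b (x1, y1, x2, y2) intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def merge_boxes(boxes):
    """Repeatedly replace overlapping boxes with their bounding box
    until no two boxes overlap."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        out = []
        while boxes:
            box = boxes.pop()
            i = 0
            while i < len(boxes):
                if overlaps(box, boxes[i]):
                    other = boxes.pop(i)
                    box = (min(box[0], other[0]), min(box[1], other[1]),
                           max(box[2], other[2]), max(box[3], other[3]))
                    merged = True
                else:
                    i += 1
            out.append(box)
        boxes = out
    return boxes
```

Two overlapping boxes collapse into one; a distant box is left alone.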


Algorithm - Explanation

Finally, based on the information, the images are cut into squares and normalized for average brightness.


Output of Face Detection

Module - 2

This algorithm makes use of Voronoi properties and Delaunay triangulations (DT).

The Voronoi region for a point p is defined as the set of all points that are closer to p than to any other point.

The DT is constructed by connecting any two sites whose Voronoi polygons share an edge.
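The Voronoi definition above can be checked directly: a query point belongs to the region of its nearest site. A pure-NumPy sketch (no computational-geometry library assumed):

```python
import numpy as np

def voronoi_site(sites, q):
    """Index of the site whose Voronoi region contains query point q,
    i.e. the site nearest to q."""
    d = np.linalg.norm(sites - q, axis=1)
    return int(np.argmin(d))

sites = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
# A point near (0, 0) lies in site 0's Voronoi region.
```

A full Delaunay triangulation would then connect site pairs whose regions share a border; this sketch only illustrates region membership.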


Algorithm

Finding local minima:

Input: Image

Steps:

1. Generate the host image histogram.

2. Generate VD/DT to obtain the list of vertices and get the 2 peaks.

3. Set all points below the first peak to zero and all points beyond the second peak to zero.

4. Set all points that are equal to zero to the argmax(peak1, peak2).




Contd..


5. Derive the new point:

   val_new(x) = |val(x) − max(val(x))|

Output: One-dimensional vector (V) of minima points.

Algorithm

Segmentation:

Input: Host gray-scale image ψ, vector V generated by the previous procedure.

Steps:

1. Initialise a new vector d = [ ].

2. Set all pixels in ψ smaller than V(1, 1) to black (0).

3. Set all pixels in ψ greater than V(length(V(:)), 1) to black (0).

for i = 1 to length(V(:)) − 1 do
    if i == 1 then
        set all (>= V(i) AND <= V(i + 1)) to V(i + 1)
        d = [d; V(i + 1)]
    else
        set all (> V(i) AND <= V(i + 1)) to V(i + 1)
        d = [d; V(i + 1)]
    end if
end for

Output: Segmented gray-scale image
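The segmentation pseudocode maps each pixel to the upper threshold of the band it falls in. A runnable sketch, assuming the image is a NumPy array and V is a 1-D sequence of minima thresholds:

```python
import numpy as np

def segment(psi, V):
    """Quantize grayscale image psi into bands bounded by thresholds V,
    following the pseudocode: out-of-range pixels go to 0, in-band
    pixels take the band's upper threshold."""
    out = psi.astype(float).copy()
    out[out < V[0]] = 0            # step 2: below first threshold -> black
    out[out > V[-1]] = 0           # step 3: above last threshold -> black
    d = []
    for i in range(len(V) - 1):
        if i == 0:
            band = (out >= V[i]) & (out <= V[i + 1])
        else:
            band = (out > V[i]) & (out <= V[i + 1])
        out[band] = V[i + 1]
        d.append(V[i + 1])
    return out, d
```

With thresholds [20, 60, 100], pixels in [20, 60] become 60, pixels in (60, 100] become 100, and everything outside [20, 100] becomes 0.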

Extraction of face region

This is based on the fact that faces present
elliptical shapes with minor variations in
eccentricity.

Euclidean distance is used to separate the face region from its background.

An ellipse is drawn by connecting components from the above stage.


DisT = √((x₂ − x₁)² + (y₂ − y₁)²)




Detecting eyes

Input: Original image, possible feature points, eye binary template model (TM)

Construct: Voronoi Diagram and derive the Delaunay Triangles

For i = 1 : length(Triangles_matrix) do
    Calculate the area of each triangle using Heron's formula
    If area > 4 then
        Delete (Triangle)
    End if
End for

Use the remaining triangles to form clusters

For each cluster do
    Combine it with each of the other clusters simultaneously
    Calculate the corresponding correlation with (TM)
    Get the maximum likelihood of a pair of eyes
End for

For each of the two eye blobs do
    Fill the blob's pixels with their gray values from the original image
    Search for the darkest pixel's coordinates
End for

Output: The two eyes' coordinates (x, y)
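The area test inside the loop uses Heron's formula. A standalone sketch (the vertex-tuple format is an assumption):

```python
import math

def triangle_area(p1, p2, p3):
    """Area of a triangle from its three vertices via Heron's formula."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    s = (a + b + c) / 2            # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))
```

A 3-4-5 right triangle gives area 6, as expected; triangles with area above the threshold (4 in the slides) would be deleted.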

Detecting nose and mouth

Input: Two eyes' coordinates (x, y)

The ellipse is drawn using:

Centre of the ellipse (X0, Y0): the midpoint of the line between the two eyes.

Minor axis length (a): the distance between the two eyes, where both eye centres lie on each side of the ellipse.

Major axis length (b): 2D, where D denotes the distance between the two eyes.

Angle: the ellipse must have the same orientation as the detected face. The face orientation can be determined from the angle made by the baseline (a line connecting both eyes) and the horizontal x-axis.
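The ellipse parameters above follow directly from the two eye coordinates. A sketch (variable names follow the slides; the code itself is an assumption):

```python
import math

def face_ellipse(eye_left, eye_right):
    """Ellipse parameters from the two eye centres, per the slides:
    centre = midpoint, minor axis = inter-eye distance D,
    major axis = 2D, angle = baseline angle vs. the x-axis."""
    (x1, y1), (x2, y2) = eye_left, eye_right
    x0, y0 = (x1 + x2) / 2, (y1 + y2) / 2
    D = math.dist(eye_left, eye_right)
    a = D                                   # minor axis length
    b = 2 * D                               # major axis length
    theta = math.atan2(y2 - y1, x2 - x1)    # face orientation
    return (x0, y0), a, b, theta
```

Horizontal eyes at (0, 0) and (4, 0) give centre (2, 0), axes 4 and 8, and angle 0.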

Contd..

D: the distance between the two eye centers, and the vertical distance between the two eyes and the center of the mouth.

Euclidean distance is calculated from center of
ellipse to its border points.

Two maxima regions are found.

Those regions are correlated with the mouth template to find the mouth.


Detection of nose

The circle is drawn using the midpoint of the line that connects the centers of the two eyes as its center and D/2 as its radius.



Data Structures Used

V1 - D (distance between the centers of the two eyes)

V2 - Distance between the midpoint of the line that connects the centers of the two eyes and the nose tip.

V3 - Distance between the midpoint of the line that connects the centers of the two eyes and the mouth center.

Performance Measures

Time taken for processing each image.

Precision - the number of face regions correctly identified out of the total faces identified.

Identification Rate - the number of people identified correctly out of the total number of people.
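The two ratio measures above, written out as plain functions (a minimal sketch; the counts are assumed to be simple integers):

```python
def precision(correct_faces, total_detected):
    """Face regions correctly identified / total face regions identified."""
    return correct_faces / total_detected

def identification_rate(correct_people, total_people):
    """People identified correctly / total number of people."""
    return correct_people / total_people
```

For example, 8 correct out of 10 detections gives precision 0.8, and 18 of 20 people identified gives an identification rate of 0.9.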