Facial Recognition


Nov 17, 2013



CSE 391

Kris Lord


Face recognition is one of the fundamental
problems in pattern analysis

Difficulties arise due to large variation in
facial appearance, head size, and orientation,
and changes in environmental conditions

Computerized face recognition systems still
cannot achieve completely reliable performance

Main Issues

Often in practical situations, recognition must be
achieved in real time, so efficiency and speed are crucial

Variance in lighting, angles, and other environmental
factors makes recognition more of a problem to deal with

May be hard to obtain a complete database of a
population’s faces in optimal posture/lighting
False positives/inability to recognize a face still common
in current state of algorithms

Storage space is a large issue, especially when dealing
with matrix-based algorithms (the more detailed the
picture, the larger the storage space needed)
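The storage point above can be made concrete with some rough arithmetic; the 100×100 image size below is an assumed example, not a figure from the slides:

```python
# Rough, illustrative arithmetic for matrix-based storage growth.
# The 100x100 image size is an assumed example.
width, height = 100, 100            # a modest grayscale face image
pixels = width * height             # each face becomes a 10,000-dimensional vector
covariance_entries = pixels ** 2    # a full pixel covariance matrix: 10^8 entries
print(pixels, covariance_entries)   # 10000 100000000
```

Doubling the resolution quadruples the vector length and grows the covariance matrix sixteen-fold, which is why dimensionality reduction matters for these algorithms.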

3 Main Steps

Face detection


Facial area is singled out and extracted

from a noisy image for processing

Face normalization


Facial image is processed to counteract posture issues such as tilt,
angle, lighting, and other environmental noise

Face verification/recognition


Facial features are analyzed via a recognition algorithm to
determine a match with an existing face in a database
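The three steps above can be sketched as a minimal pipeline. Every function body here is a simplistic stand-in (a fixed center crop for detection, intensity rescaling for normalization, nearest-neighbor pixel distance for recognition), not the actual algorithms:

```python
import numpy as np

def detect_face(image):
    """Stand-in detector: assume the face occupies the central region."""
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def normalize_face(face):
    """Stand-in normalization: zero-mean, unit-variance intensities."""
    return (face - face.mean()) / (face.std() + 1e-8)

def match_face(face, database):
    """Stand-in recognition: nearest neighbor by pixel distance."""
    dists = [np.linalg.norm(face - known) for known in database]
    return int(np.argmin(dists))

def recognize(raw_image, database):
    face = detect_face(raw_image)            # 1. detection: single out the facial area
    normalized = normalize_face(face)        # 2. normalization: counteract lighting etc.
    return match_face(normalized, database)  # 3. verification against the database
```

Real systems replace each stage with far more robust components, but the data flow (detect, then normalize, then match) stays the same.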

“Eigenfaces” Approach

Patterns, in the domain of facial recognition could be the
presence of some objects (eyes, nose, mouth) in a face
as well as relative distances between these objects.
These characteristic features are called
the facial recognition domain (or
principal components
generally). They can be extracted out of original image
data by means of a mathematical tool called
Component Analysis



Each eigenface represents only certain features of the
face. If a feature is present in the original image to a
higher degree, the share of the corresponding eigenface
in the ”sum” of all eigenfaces should be greater.

In order to cut down on large computational processing,
only the eigenfaces with the highest eigenvalues (the most
characteristic facial features) are kept for processing
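A minimal sketch of this extraction, assuming the faces are same-sized grayscale images flattened into row vectors; PCA is performed here via SVD, and all names and sizes are illustrative:

```python
import numpy as np

def compute_eigenfaces(images, k):
    """images: (n_faces, n_pixels) array of flattened grayscale faces.
    Returns the mean face and the k most characteristic eigenfaces."""
    mean_face = images.mean(axis=0)
    centered = images - mean_face              # subtract the average face
    # SVD of the centered data yields the principal components
    # (eigenfaces), sorted by how much facial variation each captures.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]                   # keep only the top k

# Tiny synthetic example: 6 random "faces" of 4x4 = 16 pixels each.
rng = np.random.default_rng(0)
faces = rng.random((6, 16))
mean_face, eigenfaces = compute_eigenfaces(faces, k=3)
print(eigenfaces.shape)                        # (3, 16)
```

Keeping only the top k components is exactly the pruning the slide describes: the discarded components carry the least characteristic facial variation.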

Common “Eigenface” Algorithm

A set of training data (pictures of faces) are
transformed into a set E of Eigenfaces

Afterwards, the weights are calculated for each
image of the training set and stored in the set W

Upon observing an unknown image X, the weights
are calculated for that particular image and stored
in the vector WX. Afterwards, WX is compared
with the weights of images that are known for
certain to be faces (the weights of the training
set W), e.g. via the average distance between
WX and those weight vectors

If this average distance exceeds some threshold,
then the weight vector of the unknown image
lies too ”far apart” from the weights of
the faces. In this case, the unknown image is
considered to not be a face. If it is considered to be a
face, its weight vector WX is stored for later
classification, where it can be tested against
specific images and their eigenfaces.
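The weight computation and threshold test described above can be sketched as follows; the synthetic data, threshold value, and function names are illustrative assumptions:

```python
import numpy as np

def project_weights(image, mean_face, eigenfaces):
    """Weight vector: coordinates of the centered image in eigenface space."""
    return eigenfaces @ (image - mean_face)

def classify(image, mean_face, eigenfaces, train_weights, threshold):
    wx = project_weights(image, mean_face, eigenfaces)
    dists = np.linalg.norm(train_weights - wx, axis=1)  # distance to each W_i
    if dists.mean() > threshold:
        return None                  # too "far apart" from all known faces
    return int(np.argmin(dists))     # index of the closest training face

# Synthetic setup: 5 flattened training "faces", eigenfaces via SVD.
rng = np.random.default_rng(1)
train = rng.random((5, 16))
mean_face = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean_face, full_matrices=False)
eigenfaces = vt[:3]
train_weights = (train - mean_face) @ eigenfaces.T   # the set W

print(classify(train[2], mean_face, eigenfaces, train_weights, threshold=10.0))
```

In practice the threshold is tuned on held-out data: too low and real faces are rejected, too high and non-faces slip through to the classification step.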

Success rate?

Some algorithms are
much more successful
than others

Success rate depends
greatly on database of
faces used

Rate can vary
considerably if
databases are combined
(“eigenface” success rate
drops considerably, to
66%, with combined
databases)
Practical Applications

Combat Terrorism/Airport Security

Large event (e.g. Super Bowl) security: ability to
scan the crowd with a video camera and match
against a database of criminal records
Eliminate fake IDs

Eliminate identity theft (ATMs)

Casino security

Tailored (personalized) advertisements
Online dating profiling

Current State of the Art

Neural Net algorithms

Elastic matching algorithms

NEC developed a 3D face recognition
algorithm with an over 96.5% recognition rate
under poor environmental conditions