FaceMatch Final Report


Nov 30, 2013


Ariel Brown, Lucy Zhang, and Ravi Yegya

ORF 401


Introduction



Facial recognition technology is generally categorized as one of three types: holistic methods, feature-based methods, or hybrid methods.

A prominently used holistic method is Principal Components Analysis (PCA), which decomposes facial structure into orthogonal components known as eigenfaces.

Each face may be represented as a weighted sum of the eigenfaces, which are stored in a one-dimensional array.
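As a sketch of the eigenface idea, the snippet below projects a face onto a set of eigenfaces and reconstructs it from the resulting weights. The 4-pixel "eigenfaces", mean face, and sample face are made-up toy values, not the output of a real PCA:

```javascript
// Dot product of two equal-length vectors.
function dot(a, b) {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

// Represent a face as a weighted sum of eigenfaces: the weights are
// the projections of the mean-subtracted face onto each eigenface.
function projectFace(face, meanFace, eigenfaces) {
  const centered = face.map((p, i) => p - meanFace[i]);
  return eigenfaces.map(ef => dot(centered, ef));
}

// Reconstruct an approximation of the face from its weights.
function reconstructFace(weights, meanFace, eigenfaces) {
  return meanFace.map((m, i) =>
    m + weights.reduce((sum, w, k) => sum + w * eigenfaces[k][i], 0));
}

// Hypothetical orthonormal "eigenfaces" over 4 pixels (toy data).
const eigenfaces = [
  [0.5, 0.5, 0.5, 0.5],
  [0.5, -0.5, 0.5, -0.5],
];
const meanFace = [100, 100, 100, 100];
const face = [110, 90, 110, 90];

const weights = projectFace(face, meanFace, eigenfaces);
console.log(weights); // [0, 20]
```

With orthonormal eigenfaces, reconstructing from the weights recovers the original face exactly; with fewer eigenfaces than pixels, the reconstruction is an approximation, which is what makes the weight vector a compact face representation.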

Another holistic method is Linear Discriminant Analysis (LDA), which aims to maximize between-class (i.e. across-user) variance and minimize within-class (i.e. within-user) variance.

Among feature-based methods, a primary method is Elastic Bunch Graph Matching (EBGM), which is modeled after the human visual system.

This method works by first determining the pose of the face. Then each stored image is formed by picking an elastic grid of points and storing values at these nodes.

Each node on this elastic grid stores a Gabor jet, which describes the image behavior around a given pixel.

Recognition is based on the similarity of the Gabor filter response at each Gabor node.
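The similarity step can be illustrated as below, comparing two grids of jets with a normalized dot product. The jet values are invented placeholders rather than real Gabor filter outputs, and practical EBGM similarity functions often also use the phase of the filter responses:

```javascript
// Normalized dot product (cosine similarity) of two jets, where each
// jet is an array of Gabor filter response magnitudes at one node.
function jetSimilarity(jetA, jetB) {
  const dotProduct = jetA.reduce((s, a, i) => s + a * jetB[i], 0);
  const normA = Math.sqrt(jetA.reduce((s, a) => s + a * a, 0));
  const normB = Math.sqrt(jetB.reduce((s, b) => s + b * b, 0));
  return dotProduct / (normA * normB);
}

// Overall graph similarity: average the node-wise jet similarities
// over corresponding nodes of the two elastic grids.
function graphSimilarity(jetsA, jetsB) {
  const sims = jetsA.map((jet, i) => jetSimilarity(jet, jetsB[i]));
  return sims.reduce((s, x) => s + x, 0) / sims.length;
}

// Toy grids with two nodes each; identical grids score 1.
const jets1 = [[1, 2, 3], [4, 5, 6]];
console.log(graphSimilarity(jets1, jets1)); // 1
```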

Hybrid methods combine the basic principles of the holistic and feature-based approaches.

The facial recognition technology used by face.com is a proprietary algorithm, which is likely a hybrid of various approaches.

A username and password are required for access to any of FaceMatch’s pages. The system is set up so that users from the class can log in with their netID, and the default password is set as ‘orf401’.

FaceMatch keeps track of the netID of the logged-in user through the use of cookies.

Adding Photos
(source: developers.face.com/docs)

One of the main features of FaceMatch is the ability of users to add pictures of themselves to FaceMatch’s database. Added pictures are displayed on the user’s profile page, but more importantly, adding photos allows for more accurate facial recognition. FaceMatch offers two ways to add photos: users can add a photo by providing the URL of a picture, or they can add photos through their webcam. The only difference between the two methods is that in the webcam mode, the picture must be stored and the URL of its location generated to call the appropriate functions in face.com’s API.

Photos are added to face.com’s database via calls to three methods in face.com’s API: faces.detect, tags.save, and faces.train. When a call to face.com’s API is made, the response from their server takes the form of an XML document. After each call to the API, the XML response is parsed for information needed for subsequent API calls.

To add a photo, first the URL of the photo to be added is provided as the argument to the function faces.detect. Faces.detect scans the picture for faces, and returns temporary ID tags for each group of pixels detected as a face. When faces.detect scans a picture for faces, it does so from left to right. For every group of pixels it detects as a face, it returns a confidence level between 0 and 100 that the group of pixels is actually a face. A minimum threshold of 50% confidence is the default setting for a face to be assigned a temporary ID, so the face can then be associated with a user ID and saved to the database. If no faces are detected, an error message is displayed saying that no faces were detected.
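The thresholding step might be sketched as follows. The response object here is a hand-written stand-in for a parsed faces.detect XML reply, not face.com’s actual schema:

```javascript
// Hypothetical stand-in for a parsed faces.detect response: each
// detection has a temporary tag ID and a confidence between 0 and 100.
const detectResponse = {
  tags: [
    { tid: "temp_1", confidence: 92 },
    { tid: "temp_2", confidence: 34 }, // below threshold: discarded
    { tid: "temp_3", confidence: 61 },
  ],
};

// Keep only detections at or above the default 50% confidence threshold.
function facesAboveThreshold(response, threshold = 50) {
  return response.tags.filter(tag => tag.confidence >= threshold);
}

const faces = facesAboveThreshold(detectResponse);
console.log(faces.map(f => f.tid)); // ["temp_1", "temp_3"]
```

Only the surviving tags go on to be associated with user IDs; if the filtered list is empty, the “no faces detected” error is shown instead.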

After calling faces.detect, a call to the tags.save method must be made. Tags.save takes two arguments: the temporary ID tag returned by faces.detect and the user ID associated with the face. Tags.save associates the temporary ID tag with a permanent user ID (the user’s name, such as Ravi Yegya-Raman or Ariel Brown) so the face can be stored in the database under the proper name. Once the face is added to the database under the correct user ID, a call to faces.train must be made to update the facial recognition model of the user. Faces.train combines the feature vectors obtained from each face that has been added for a given user to generate a model of the user’s face. This model is the one used for facial recognition purposes. The more photos a user adds of himself, the more accurate the model of the person’s face, as noise in photos is masked and a more accurate feature vector is obtained.
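This three-call sequence can be sketched in JavaScript as follows. Here callApi and its canned replies are hypothetical stand-ins for the real HTTP requests to face.com’s API and the XML parsing step, and the parameter names are only illustrative:

```javascript
// Mocked stand-in for: HTTP request to a face.com API method, then
// parsing the XML reply into an object. Not the real API.
function callApi(method, params) {
  if (method === "faces.detect") {
    // One face found, above the default 50% confidence threshold.
    return { tags: [{ tid: "temp_42", confidence: 88 }] };
  }
  if (method === "tags.save") {
    return { status: "success", savedTag: params.tids };
  }
  if (method === "faces.train") {
    return { status: "success", trainedUser: params.uids };
  }
}

// The add-photo flow: detect -> tags.save -> faces.train.
function addPhoto(photoUrl, userId) {
  // 1. Detect faces; each detection gets a temporary tag ID.
  const detection = callApi("faces.detect", { urls: photoUrl });
  if (detection.tags.length === 0) {
    return { error: "No faces were detected." };
  }
  // FaceMatch only permits single-face uploads, so use the first tag.
  const tid = detection.tags[0].tid;

  // 2. Associate the temporary tag with the permanent user ID.
  callApi("tags.save", { tids: tid, uids: userId });

  // 3. Retrain the user's face model to include the new photo.
  return callApi("faces.train", { uids: userId });
}

const result = addPhoto("http://example.com/me.jpg", "ravi");
console.log(result.status); // "success"
```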

FaceMatch’s adding-pictures feature does not allow for adding a picture with multiple people. The reason is that faces.detect sometimes fails to recognize a face in a picture, or detects faces where none are present. Because pictures are processed from left to right, someone trying to add a picture with multiple people would have to supply the names of the people from left to right. However, if faces.detect detected too many or too few faces, then the user IDs associated with each face would be wrong. Furthermore, once faces are trained with the wrong user IDs, this impairs any subsequent recognition calls for the given face. Therefore, we decided that FaceMatch would only let users upload pictures with a single face, to avoid these complications.

Facial Recognition
(sources: developers.face.com/docs, code.google.com/p/jpegcam)

The main service that FaceMatch offers is the ability to recognize people in its database in real time, just by pointing a camera at the person. FaceMatch offers live as well as static recognition modes. The static recognition mode takes a URL of a picture, scans the picture for faces, then compares each face to the database of faces, and returns the top recognition hit for each face. If a user with a FaceMatch profile is recognized, then the browser is redirected automatically to the person’s profile page. If the person does not have a FaceMatch profile, the uploaded photo along with the person’s name (if recognition is successful, otherwise ‘Sorry, we could not recognize this face’) will be displayed on the page.

The more useful recognition mode is live recognition, via a user’s webcam. Interfacing with the webcam is done via jpegcam, which allows web applications to “capture JPEG webcam images…and submit to your server” using Flash and JavaScript. When the live recognition page is opened and the site is granted access to the computer’s webcam, you have the option to “Go Live”. Once this button is pressed, pictures are taken continuously by the webcam and passed to the faces.recognize method of face.com.

A call to faces.recognize consists of two parts. First, it scans the picture from left to right, and identifies groups of pixels that resemble a face, much in the same way the faces.detect method works. For each group of pixels deemed a face with a high enough confidence (a 50% threshold is the default), it compares this face to the faces stored in our database. For each face in the database, a confidence with which the detected face could be a match is returned. Faces with a 0% confidence result are not returned, but a list of all user IDs with some confidence between 1 and 100 is returned. FaceMatch parses the recognition results generated by faces.recognize and outputs the highest confidence match for each face.
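The highest-confidence selection can be sketched as below. The input mimics a parsed faces.recognize reply (each detected face carries candidate user IDs with confidences between 1 and 100), but it is invented example data, not the API’s real schema:

```javascript
// Stand-in for parsed faces.recognize results: one entry per detected
// face, each with candidate user IDs and match confidences.
const recognizedFaces = [
  { candidates: [{ uid: "lucy", confidence: 74 }, { uid: "ariel", confidence: 22 }] },
  { candidates: [] }, // this face matched nobody in the database
];

// Output the top-confidence user ID per face, or "Unknown" if the
// face had no candidates at all.
function topMatches(faces) {
  return faces.map(face => {
    if (face.candidates.length === 0) return "Unknown";
    return face.candidates.reduce((best, c) =>
      c.confidence > best.confidence ? c : best).uid;
  });
}

console.log(topMatches(recognizedFaces)); // ["lucy", "Unknown"]
```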

Facial recognition results are returned in the form of links to the profile pages of detected faces, or “Unknown” if the face is not recognized. Multiple users can be recognized at once, and the facial recognition results for a picture are displayed from left to right in the picture. Once a picture’s facial recognition results are returned, another picture is taken, providing the real-time aspect of the application. Because calls to face.com’s API require several server requests, average processing time for a photo is between 3 and 5 seconds, so there is a noticeable delay in recognition results. Ideally, facial recognition should be done on the client side for a real-time application. At any time, there is the option to halt live recognition by hitting the “Pause” button. This is especially useful if you obtain recognition results for a group of faces and want to access their FaceMatch profiles before another picture is taken that might be out of focus, thus possibly returning “Unknown” for many of the same faces that were previously recognized.
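The capture-recognize-repeat loop with a pause flag can be sketched like this. captureFrame and recognize are hypothetical stand-ins (jpegcam captured real JPEG frames, and the real recognize call hit face.com’s API with a 3-5 second round trip):

```javascript
// Pause flag: in the real app this is toggled by the "Pause" button.
let paused = false;
let frameCount = 0;
const results = [];

// Stand-in for jpegcam: pretend to capture a frame and return a URL.
function captureFrame() {
  frameCount += 1;
  return `frame_${frameCount}.jpg`;
}

// Stand-in for the faces.recognize round trip, which really took
// 3-5 seconds per frame due to the several server requests involved.
function recognize(frameUrl) {
  return [`match for ${frameUrl}`];
}

// Live loop: each new capture starts only after the previous
// recognition result comes back, so the loop's period is the API's
// processing time. maxFrames bounds the demo run.
function liveLoop(maxFrames) {
  while (!paused && frameCount < maxFrames) {
    const frame = captureFrame();
    results.push(recognize(frame));
  }
}

liveLoop(3);
console.log(results.length); // 3
```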

Rating System

The last feature that we added to FaceMatch was a rating feature, where users were able to rate a set of training images and then see how anyone in their network who had rated those images would rate other classmates.

The first step of the rating system was to gather a set of training images that would match up well with anyone in the network.

We decided on thirty images, so as not to be too time-consuming to rate, but also to get accurate results.

The user would then assign a rating of one through ten to all the training images, and we stored the ratings in a SQL table.

We allowed the user to choose the criteria by which he or she rated the images, so one user could rate them by attractiveness, while another user could rate them by perceived intelligence.

Once the thirty ratings were completed, the user was directed to a link where he or she could find out how other people who had rated the training images would rate the people in the network.

We did queries to the SQL tables storing the users that had rated people and the people in the network to provide a drop-down menu that would allow the user to ask, “How does person X rate person Y?”
Once the user decided on the two people, we ran facial recognition technology on the person being rated (the target) against the set of training images. We then selected the top three matches and their corresponding levels of confidence and used our rating algorithm to produce a rating for how person X would rate person Y.

The rating algorithm worked by taking the rater’s ratings of the target’s matches and weighting them according to the percent confidence.

For example, if the facial recognition technology said that the target was training image 5 with 30 percent confidence and training image 22 with 10 percent confidence, we would look at the rater’s ratings of training image 5 and training image 22. If the rater rated training image 5 as an 8 and training image 22 as a 4, we would weight training image 5 more heavily as it had a larger percent confidence.

The formula for the rating was:

Rating = Sum(Each Rating * Each Confidence) / Sum(Each Confidence).

In the example above, the rating would equal ( rating(training image 5) * confidence(training image 5) + rating(training image 22) * confidence(training image 22) ) / ( confidence(training image 5) + confidence(training image 22) ) = ( 8*30 + 4*10 ) / ( 30 + 10 ) = 280/40 = 7. So the rater would rate the target as a 7 in this example.
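The formula translates directly into code. The function name and input shape below are our own illustration, and the data reproduces the worked example from the report:

```javascript
// Confidence-weighted average of the rater's ratings of the target's
// top training-image matches: sum(rating * confidence) / sum(confidence).
function computeRating(matches) {
  const weightedSum = matches.reduce((s, m) => s + m.rating * m.confidence, 0);
  const confidenceSum = matches.reduce((s, m) => s + m.confidence, 0);
  return weightedSum / confidenceSum;
}

// The worked example: training image 5 rated 8 at 30% confidence,
// training image 22 rated 4 at 10% confidence.
const rating = computeRating([
  { rating: 8, confidence: 30 },
  { rating: 4, confidence: 10 },
]);
console.log(rating); // 7
```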

To further improve the rating system, we would allow users to rate by different criteria; for example, a user would be able to rate the training images according to attractiveness and then rate them again according to likability.

Additionally, we would improve our database of people in the network to include
their ratings of each person and then allow users to both sort by ratings and search
by ratings.

A user would then be able to make the following search: “2012 ORFE student rated over a 7 by me.”

This is especially useful in larger networks where the user does not have time to individually rate everyone in the network, so the training images and facial recognition technology allow the user to do queries much more efficiently.

This feature could also be useful for dating sites, where a user rates training images and then is matched up with people in his or her area according to the ratings he or she would assign to other people on the dating site.

Division of Work

Lucy Zhang


Created login system, which includes calling a JavaScript function upon loading each page. Upon successful login, a cookie is created which includes the user’s netID, so that the website can identify the user later. If login is unsuccessful, an alert indicates whether it is because the password was incorrect or because the user is not in the database.


Created and managed SQL database which contained information for all users in the class, including login credentials, contact information, and their profile picture. This information was manually taken from the Princeton residential college facebook.


Ravi Yegya


Original idea for FaceMatch and FaceMatch’s rating system


All functionality and design associated with the live detection, live recognition, static detection, and static recognition webpages. This entailed all of the JavaScript and PHP coding, as well as discovering face.com and jpegcam as the most viable methods to facilitate a web application that performs facial recognition through a webcam.


Server-side coding that allows FaceMatch profiles to display all pictures uploaded that contain the user’s face.


Server-side facial recognition coding for the ratings query between users on the ratings page.

Ariel Brown


Implementation of FaceMatch rating system, including creating pages to rate
images as well as query ratings between users.


Implementation of search feature for contacts


General layout and design of all pages on the site


Implementation of contacts page, including ability to sort by multiple fields





Link to site:


Login instructions: enter your netID as the username, and orf401 as the password.