EMS: Expression based Mood Sharing for Social Networks


EMS: Expression based Mood Sharing for Social Networks

Md Munirul Haque, Mohammad Adibuzzaman

md.haque@mu.edu
,
mohammad.adibuzzaman@mu.edu


Department of Mathematics, Statistics, and Computer Science

Marquette University

P.O. Box 1881, Milwaukee, WI 53201






Category of Submission: Research Paper

Contact Author:
Md Munirul Haque


Email:
md.haque@mu.edu





EMS: Expression based Mood Sharing for Social Networks

ABSTRACT

Social networking sites like Facebook, Twitter, and MySpace have become overwhelmingly powerful media in today's world. Facebook has 500 million active users, and Twitter receives 190 million visitors per month, with both figures growing every second. At the same time, the number of smartphone users has crossed 45 million. We focus on building an application that connects these two revolutionary spheres of modern technology, a combination with huge potential in different sectors. EMS, a facial expression based mood detection model, has been developed that captures images while users work with webcam-equipped laptops or mobile phones. Each captured image is analyzed to classify it into one of several moods, and the resulting mood information is shared on the user's Facebook profile according to the user's privacy settings. Several activities and events are also generated based on the identified mood.

Keywords: Mood detection, Facebook, Eigenfaces, Web service, Distributed application.

1. INTRODUCTION

Facebook is a social networking website used throughout the world by users of all ages. The website allows users to friend each other and share information, resulting in a large network of friends. This allows users to remain connected or reconnect by sharing information, photographs, statuses, wall posts, messages, and many other pieces of information. In recent years, as smartphones have become increasingly popular [38], the power of Facebook has become mobile. Users can now add and see photos, add wall posts, and change their status right from their iPhone or Android powered device.


Several lines of research are based on the Facial Action Coding System (FACS), first introduced by Ekman and Friesen in 1978 [18]. It is a method for building a taxonomy of almost all possible facial expressions, initially launched with 44 Action Units (AUs). The Computer Expression Recognition Toolbox (CERT) has been proposed [32, 33, 1, 36, 35, 2] to detect facial expression by analyzing the appearance of Action Units related to different expressions. Different classifiers such as Support Vector Machines (SVM), AdaBoost, Gabor filters, and Hidden Markov Models (HMM) have been used alone or in combination to gain higher accuracy. Researchers in [6, 29] have used active appearance models (AAM) to identify features for pain from facial expression. An Eigenface based method was deployed in [7] in an attempt to find a computationally inexpensive solution; later the authors included Eigeneyes and Eigenlips to increase the classification accuracy [8]. A Bayesian extension of SVM named the Relevance Vector Machine (RVM) has been adopted in [30] to increase classification accuracy. Several papers [28, 4] relied on artificial neural network based back propagation algorithms to derive classification decisions from extracted facial features. Many other researchers, including Brahnam et al. [34, 37] and Pantic et al. [16, 17], have worked in the area of automatic facial expression detection. Almost all of these approaches suffer from one or more of the following deficits: 1) reliance on a clear frontal image, 2) out-of-plane head rotation, 3) correct feature selection, 4) failure to use temporal and dynamic information, 5) a considerable amount of manual interaction, 6) noise, illumination, glasses, facial hair, and skin color issues, 7) computational cost, 8) mobility, 9) intensity of expression level, and finally 10) reliability. Moreover, there has not been any work regarding automatic mood detection from facial images in social networks. We have also done an analysis of the mood related applications of Facebook. Our model is fundamentally different from all these simple models: all of them require users to manually choose a symbol that represents their mood, whereas our model detects the mood without user intervention.

Our proposed system can work with laptops or handheld devices. Unlike other mood sharing applications currently in Facebook, EMS does not need manual user interaction to set the user's mood. Mobile devices like the iPhone have a front camera, which is perfect for EMS. The automatic detection of mood from facial features is the first of its kind among Facebook applications. This gives the system robustness but also brings several challenges. Contexts like location and mood are considered when sharing, in order to increase users' privacy. Users may or may not like to share their mood when they are in a specific location. Again, they may think differently about publishing their mood when they are in a specific mood. We have already developed a small prototype of the model and show some screenshots of our deployment.

The rest of the paper is organized as follows. In Section 2, we detail the related work with a comparison table, followed by the motivation for such an application in Section 3. In Section 4, we present the concept design of our approach with a high level architecture. Details of the implementation are provided in Section 5, with application characteristics in Section 6. Some of the critical and open issues are discussed in Section 7, and finally we offer our conclusions in Section 8.

2. RELATED WORK

Automatic facial expression detection models differ from one another in terms of subject focus, dimension, classifier, underlying technique, and feature selection strategy. Here we provide a high level view of the categorization.


Figure 1: Classification of automatic facial expression detection models

Bartlett et al. [1] proposed a system that can automatically recognize frontal views of the face from a video stream, in which 20 Action Units (AUs) are detected for each frame. Context-independent training has been used: one binary classifier has been trained for each of the 20 AUs to recognize the occurrence of that AU regardless of co-occurrence (a specific AU occurring alone or with others). They also compared the performance of AdaBoost and linear SVM; paired t-tests showed a slight advantage of AdaBoost over linear SVM. One interesting feature that the authors tried to measure is the intensity of specific AUs. They used the output margin of the system, which describes the distance to the separating hyperplane, as an interpretation of the intensity of the AU.

Braathan et al. [2]

address a natural problem with image collection and shift the paradigm from 2D to 3D
facial images.
Older

automated facial expression recognition systems

have
relied on posed images where
images clearly show the frontal view of the face. But this is

impra
ctical. Many times the head is

in an
out
-
of
-
image
-
plane
(turned
or
nodded) in spontaneous facial images.

Braathan et al. tackled t
hree
issues
sequentially in th
eir

project. First the face geometry
was

estimated. Thirty images
we
re taken for each
subject a
nd
the
position of
eight

special features
was

identified (ear lobes, lateral and nasal corners of the
eyes, nose tip, and base of the center upper teeth)
.
3D location
s

of these
eight

features
were

then
recovered. These eight 3D location points
were

fitted in the canonical view of the subject. Later a
scattered
data interpolation technique [3
]
was

used to generate and fit the other unknown points in the
face model. Second
,

a 3D pose estimation technique
known as
Markov Chain Monte
-
Carlo method or
par
ticle filtering method

was

used. This generate
d

a sequence of 3D poses of the head. Canonical face
geometry
was then
used to warp these images on to a face model and rotate

these into

a

frontal view.
Then this image
was
projected back to the image plane. F
inally
,

Braatthan et al.
used Support Vector
Machines (SVM) and Hidden Markov Models (HMM) for the training and learning of their spontaneous
facial expression detection system. Since normally the head is in
a slanted position

during severe pain
,

th
is

syst
em
is

especially useful for analyzing real time painful expression
s
.




Jagdish and Umesh proposed a simple architecture for facial expression recognition based on token finding and a standard error-based back propagation neural network [4]. After capturing the image from a webcam, they used the face detection technique devised by Viola and Jones [5]. In order to process the image, histogram equalization was done to enhance image quality, followed by edge detection and thinning. Tokens, which denote the smallest units of information, were generated from the resultant image and passed into the neural network. The network was trained with 100 samples and could classify three expressions. The provided report does not say anything about the number of nodes in the input and hidden layers of the network, nor does it give details about the training samples.

The Active Appearance Model (AAM) [6] has been proposed as an innovative way to recognize pain from facial expressions. AAMs were used to develop the automated machine learning based system; using a Support Vector Machine (SVM) with a leave-one-out procedure led to a hit rate of 81%. The main advantage of AAMs is their ability to decouple appearance and shape parameters from facial images. In an AAM, a shape s is expressed as a 2D triangulated mesh, and the locations of the mesh vertices are related to the original image from which the shape was derived. A shape s can be expressed as a combination of a base shape s0 and a set of shape vectors si. Three AAM-derived representations are highlighted: Similarity Normalized Shape (sn), Similarity Normalized Appearance (an), and Shape Normalized Appearance (a0). The authors developed their own representations, built on these AAM-derived ones, which they used for painful face detection. Temporal information was not used in the recognition of pain, although it might increase the accuracy rate.

Monwar et al. proposed an automatic pain expression detection system using Eigenimages [7]. Skin color modeling was used to detect the face from a video sequence, and a mask image technique was used to extract the appropriate portion of the face for detecting pain. Each resultant image from masking was projected into a feature space to form an Eigenspace based on training samples. When a new image arrived, its position in the feature space was compared with that of the training samples, and based on that a decision was drawn (pain or no pain). For this experiment, 38 subjects of different ethnicities, ages, and genders were videotaped for two expressions: normal and painful. First, chromatic color space was used to find the distribution of skin color, and a Gaussian model was used to represent this distribution; from it the probability of a pixel being skin was obtained. After segmenting the skin region, the meaningful portion of the face was detected using a mask image, and a bitwise AND operation between the mask image and the original image produced the resultant image. These resultant images were used as training samples for the Eigenfaces method, and the M Eigenfaces with the highest Eigenvalues were sorted out. When detecting a new face, the facial image was projected into the Eigenspace, and the Euclidean distance between the new face and all the faces in the Eigenspace was measured. The face at the closest distance was assumed to be a match for the new image. The average hit rate was recorded to be 90-92%. Later, the researchers extended their model [8] to create two more feature spaces, Eigeneyes and Eigenlips, using portions of the eyes and lips from the facial images. All possible combinations of Eigenfaces, Eigeneyes, and Eigenlips (alone or together) were used to find pain in images; the combination of all three provided the best result in terms of accuracy (92.08%). Here skin pixels were sorted out using chromatic color space. However, skin color varies widely with race and ethnicity, and a detailed description of the subjects' skin colors is missing in these studies.

The authors of [9] proposed a methodology for face recognition based on information theory. Principal component analysis and a feed forward back propagation neural network were used for feature extraction and face recognition, respectively. The algorithm was applied to 400 images of 40 different people taken from the Olivetti and Oracle Research Laboratory (ORL) face database. The training dataset is composed of 60% of the images, and the rest are left for the test database. The Artificial Neural Network (ANN) had three layers: input, output, and one hidden layer. Though this method achieved 97% accuracy, it has two major drawbacks. The method requires one ANN for each person in the face database, which is impractical in terms of scalability. Another issue is that the authors tested only with different images of the same people whose pictures were used in training the ANN, which is not practical for a real life scenario.

The authors of [10] proposed a modified methodology for Eigenface based facial expression recognition. They formed a separate Eigenspace for each of the six basic emotions from the training images. When a new image arrives, it is projected into each of the Eigenspaces and a reconstructed image is formed from each. A classification decision is taken by measuring the mean square error between the input image and each reconstructed image. Using the Cohn-Kanade and JAFFE (Japanese Female Facial Expression) databases, the method achieved a maximum of 83% accuracy for 'happiness' and 72% for 'disgust'. The paper does not say anything about facial contour selection, and the classification accuracy rate is also quite low.

The authors of [11] compared algorithms based on three different color spaces: RGB, YCbCr, and HSI. They then combined them to derive a modified skin color based face detection algorithm. An algorithm based on the Venn diagram of set theory was used to detect skin color. A face portion is separated from the original image by detecting three points: the midpoints of the eyes and the lips. This method showed a 95% accuracy rate on the IITK database. However, it only detects the facial portion of an image; it does not say anything about expression classification.

In Table I we provide a comparison table for the different models we have reviewed.

Table I. Features of several reported facial recognition models

Name | Number of Subjects/Images | Learning Model/Classifier | 2D/3D | Computational Complexity | Accuracy | Intensity
Eigenfaces [7] | 38 subjects | Eigenfaces / Principal Component Analysis | 2D | Low | 90-92% | No
Eigenfaces + Eigeneyes + Eigenlips [8] | 38 subjects | Eigenfaces + Eigeneyes + Eigenlips | 2D | Low | 92.08% | No
AAM [6] | 129 subjects | SVM | Both | Medium | 81% | No
Littlewort [32, 35] | 5500 images | Gabor filter, SVM, AdaBoost | Both | High | 72% | Somewhat
ANN [28] | 38 subjects | Artificial Neural Network | 2D | Medium | 91.67% | No
RVM [30] | 26 subjects, 204 images | RVM | 2D | Medium | 91% | Yes
Facial Grimace [31] | 1 subject, 1336 frames | High and low pass filter | 2D | Low | Cont. monitoring | Somewhat
Back propagation [29] | 100 subjects | Back propagation NN | 2D | Medium to high | NA | No

SVM - Support Vector Machine, RVM - Relevance Vector Machine, ANN - Artificial Neural Network, Cont. - continuous, NA - Not Available.


We have also done an analysis of the mood related applications in Facebook. Rimé et al. [12] argued that everyday emotional experiences create an urge for social sharing; the authors also showed that most emotional experiences are shared with others shortly after they occur. These research findings show that mood sharing can be an important area in social networks. At present there are many applications in Facebook which claim to do mood detection for the user. Here, we have listed the top 10 such applications based on the number of active users.

Table II. Mood applications in Facebook

Name of Application | No. of Users | Mood Categories
My Mood | 1,822,571 | 56
SpongeBob Mood | 646,426 | 42
The Mood Weather Report | 611,132 | 174
Name and Mood Analyzer | 29,803 | 13
Mood Stones | 14,092 | -
My Friend's Mood | 15,965 | -
My Mood Today! | 11,224 | 9
How's your mood today? | 6,694 | 39
Your Mood of the day | 4,349 | -
Patrick's Mood | - | -

3. MOTIVATION


In fact, there is no real time mood detection and sharing application in Facebook. Normally, applications of this sort request the user to select a symbol that represents his or her mood, and that symbol is published in the profile. We accepted the research challenge of building a real time mood detection system from facial features and of using Facebook as a real life application of the system. We chose Facebook due to its immense power of connectivity: anything can be spread to everyone, including friends and relatives, in a matter of seconds. We plan to use this strength of Facebook with our application to improve quality of life.




Scenario 1:

Mr. Johnson is an elderly citizen living in a remote place all by himself. Yesterday he was very angry with a banker who failed to process his pension scheme in a timely manner, but since he chose not to share his angry mood while using EMS, that information was not leaked. Today he started the day with a pensive mind, feeling depressed. His grandchildren are in his Facebook friend list, and they noticed the depressed mood. They called him and, using the event manager, sent him an online invitation to his favorite restaurant. Within hours they could see his mood status change to happy.

Scenario 2:

Dan, a college student, has just received his semester final result online. He feels upset because he did not receive the result he wanted. EMS takes his picture, classifies it as sad, and uploads the result to Dan's Facebook account. Based on Dan's activity and interest profile, the system suggests a recent movie featuring Tom Hanks, Dan's favorite actor, and shows the local theater name and show times. Dan's friends see the suggestion and they all decide to go to the movie. By the time they return from the movie, everyone is smiling, including Dan.

Scenario 3:

Mr. Jones works as a system engineer at a company. He is a bit excited today because the performance bonus will be announced. It comes as a shock when he receives a very poor bonus. His sad mood is detected by EMS, but because Mr. Jones is in the office, EMS does not publish his mood, due to its location aware context mechanism.

4. CONCEPT DESIGN

Development of the architecture can be divided into three broad phases. In the first phase, the facial portion has to be extracted from an image. In the second phase, the extracted image is analyzed for facial features, and classification follows to identify one of several mood categories. Finally, we need to integrate the mobile application with Facebook. Figure 2 depicts the architecture.


Figure 2. Architecture of EMS

4.1 Face Detection

Pixels corresponding to skin differ from other pixels in an image. Skin color modeling in chromatic color space [5] has shown that skin pixels cluster in a specific region. Though skin color varies widely across ethnicities, research [4] shows that skin pixels still form a cluster in the chromatic color space. After taking the image of the subject, we first crop the image and keep only the head portion. Then we use skin color modeling to extract the required facial portion from the head image.
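As a minimal illustration of this step (not the prototype's actual implementation), the following Python sketch thresholds pixels in normalized rg chromaticity space and crops the resulting skin region; the threshold ranges are hypothetical placeholders that would normally come from a trained Gaussian skin model.

```python
import numpy as np

def skin_mask(rgb_image, r_range=(0.36, 0.47), g_range=(0.28, 0.35)):
    """Boolean mask of likely skin pixels in normalized rg chromaticity space.
    The threshold ranges are illustrative placeholders, not fitted values."""
    rgb = rgb_image.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-6                  # avoid division by zero
    r = rgb[..., 0] / total                         # normalized red chromaticity
    g = rgb[..., 1] / total                         # normalized green chromaticity
    return (r >= r_range[0]) & (r <= r_range[1]) & \
           (g >= g_range[0]) & (g <= g_range[1])

def crop_skin_region(rgb_image):
    """Crop the tight bounding box around the detected skin pixels."""
    mask = skin_mask(rgb_image)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return rgb_image                            # no skin found; return unchanged
    return rgb_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

In practice a fitted skin-color distribution and a mask image, as in [7], would replace the fixed threshold box used here.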

4.2 Facial Expression Detection

For this part we plan to use a combination of the Eigenfaces, Eigeneyes, and Eigenlips methods based on Principal Component Analysis (PCA) [6, 7]. This analysis keeps only the characteristic features of the face corresponding to a specific facial expression and discards the others. This strategy reduces the number of training samples needed and helps keep our system computationally inexpensive. The resultant images will be used as samples for training the Eigenfaces method, and the M Eigenfaces with the highest Eigenvalues will be sorted out. When detecting a new face, the facial image will be projected into the Eigenspace, and the Euclidean distance between the new face and all the faces in the Eigenspace will be measured. The face at the closest distance will be taken as the match for the new image. A similar process will be followed for the Eigenlips and Eigeneyes methods. Here is a step by step breakdown of the whole process.

1. The first step is to obtain a set $S$ of $M$ face images. Each image is transformed into a vector of size $N^2$ and placed into the set, $S = \{\Gamma_1, \Gamma_2, \Gamma_3, \ldots, \Gamma_M\}$.

2. The second step is to obtain the mean image $\Psi = \frac{1}{M}\sum_{n=1}^{M} \Gamma_n$.

3. We find the difference $\Phi$ between each input image and the mean image, $\Phi_i = \Gamma_i - \Psi$.

4. Next we seek a set of $M$ orthonormal vectors $u_k$ which best describes the distribution of the data. The $k$-th vector $u_k$ is chosen such that

   $\lambda_k = \frac{1}{M}\sum_{n=1}^{M} \left(u_k^{T} \Phi_n\right)^2$

5. $\lambda_k$ is a maximum, subject to the orthonormality constraint $u_l^{T} u_k = \delta_{lk}$, where $u_k$ and $\lambda_k$ are the eigenvectors and eigenvalues of the covariance matrix $C$.

6. The covariance matrix $C$ is obtained as $C = \frac{1}{M}\sum_{n=1}^{M} \Phi_n \Phi_n^{T} = A A^{T}$, where $A = [\Phi_1\ \Phi_2\ \ldots\ \Phi_M]$.

7. Finding the eigenvectors of the covariance matrix directly is a huge computational task. Since $M$ is far smaller than $N^2 \times N^2$, we can instead construct the $M \times M$ matrix $L = A^{T} A$, with $L_{mn} = \Phi_m^{T} \Phi_n$.

8. We find the $M$ eigenvectors $v_l$ of $L$.

9. These vectors $v_l$ determine linear combinations of the $M$ training set face images to form the Eigenfaces $u_l$: $u_l = \sum_{k=1}^{M} v_{lk} \Phi_k$, for $l = 1, \ldots, M$.

10. After computing the eigenvectors and eigenvalues of the covariance matrix of the training images, the $M$ eigenvectors are sorted in order of descending eigenvalues and some top eigenvectors are chosen to represent the Eigenspace.

11. Each of the original images is projected into the Eigenspace to find a vector of weights representing the contribution of each Eigenface to the reconstruction of the given image.
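A minimal NumPy sketch of this training stage is given below. It is our own illustrative code, not the MATLAB implementation used in the prototype, and it assumes the training images have already been cropped and flattened into equal-length vectors.

```python
import numpy as np

def train_eigenfaces(images, num_components=None):
    """Build an Eigenspace from flattened training images (steps 1-11 above).

    images: array of shape (M, N*N), one flattened face per row.
    Returns the mean face, the chosen Eigenfaces, and the training weights.
    """
    X = np.asarray(images, dtype=np.float64)        # set S, shape (M, N^2)
    mean_face = X.mean(axis=0)                      # step 2: mean image Psi
    Phi = X - mean_face                             # step 3: difference images
    # Step 7: work with the small M x M matrix L = A^T A instead of A A^T.
    L = Phi @ Phi.T / len(X)
    eigvals, V = np.linalg.eigh(L)                  # step 8: eigenvectors of L
    order = np.argsort(eigvals)[::-1]               # step 10: descending eigenvalues
    V = V[:, order]
    U = Phi.T @ V                                   # step 9: Eigenfaces u_l in image space
    U /= np.linalg.norm(U, axis=0) + 1e-12          # normalize each Eigenface
    if num_components is not None:
        U = U[:, :num_components]                   # keep only the top Eigenfaces
    weights = Phi @ U                               # step 11: project training images
    return mean_face, U, weights
```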

When detecting a new face, the facial image will be projected into the Eigenspace, and the Euclidean distance between the new face and all the faces in the Eigenspace will be measured. The face at the closest distance will be taken as the match for the new image. A similar process will be followed for the Eigenlips and Eigeneyes methods. The mathematical steps are as follows:

- Any new image $\Gamma$ is projected into the Eigenspace to find its face-key: $\omega_k = u_k^{T}(\Gamma - \Psi)$, where $u_k$ is the $k$-th eigenvector and $\omega_k$ is the $k$-th weight in the weight vector $\Omega = [\omega_1, \omega_2, \ldots, \omega_M]$.

- The $M$ weights represent the contribution of each respective Eigenface. The vector $\Omega$ is taken as the 'face-key' for the face image projected into the Eigenspace.

- Any two face-keys are compared by a simple Euclidean distance measure, $\epsilon = \lVert \Omega_a - \Omega_b \rVert$.

- An acceptance (the two face images match) or rejection (the two images do not match) is determined by applying a threshold.
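Continuing the illustrative sketch above (again an assumption about how this matching could be coded, not the prototype's MATLAB code), a new image can be classified by comparing its face-key against the stored training weights:

```python
import numpy as np

def classify_face(new_image, mean_face, eigenfaces, train_weights, labels,
                  threshold=None):
    """Project a flattened image into the Eigenspace and match by Euclidean distance.

    labels: one mood label per training image, aligned with train_weights.
    threshold: optional rejection distance; None accepts the closest match.
    """
    phi = np.asarray(new_image, dtype=np.float64) - mean_face
    face_key = phi @ eigenfaces                     # weight vector Omega
    distances = np.linalg.norm(train_weights - face_key, axis=1)
    best = int(np.argmin(distances))
    if threshold is not None and distances[best] > threshold:
        return None                                 # rejection: no acceptable match
    return labels[best]                             # acceptance: closest training sample
```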




4.3 Integration to Facebook

The application uses the device camera to capture facial images of the user and to recognize and report the user's mood. The mobile version, for iPhone and Android powered devices, uses the device's built in camera to capture images and transfers each image to a server. The server extracts the face and the facial feature points, classifies the mood using a classifier, and sends the recognized mood back to the mobile application. The mobile application can then connect to Facebook, allowing the user to publish the recognized mood.
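The end-to-end flow could look like the following Python sketch. The detection endpoint URL, the response field name, and the token handling are hypothetical placeholders for illustration only; the actual prototype uses a PHP/Axis2 SOAP service and the Facebook publishing API of the time.

```python
import requests

DETECT_URL = "https://example.org/ems/detect"        # hypothetical EMS web service
GRAPH_URL = "https://graph.facebook.com/me/feed"     # Facebook feed publishing endpoint

def detect_and_share(image_path, fb_access_token, allow_publish=True):
    """Send a captured image to the mood service, then optionally publish the result."""
    with open(image_path, "rb") as f:
        reply = requests.post(DETECT_URL, files={"image": f}, timeout=30)
    reply.raise_for_status()
    mood = reply.json()["mood"]                       # e.g. "happiness", "sadness"
    if allow_publish:                                 # privacy settings decide this flag
        requests.post(GRAPH_URL,
                      data={"message": f"Current mood: {mood}",
                            "access_token": fb_access_token},
                      timeout=30).raise_for_status()
    return mood
```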

5. IMPLEMENTATION

The ultimate goal is to use the application from any device, mobile or browser. Because of the large computational power needed for the image processing behind facial expression recognition, we needed software like MATLAB for image processing and facial expression recognition. Hence the overall design can be thought of as the integration of three different pieces. First, we need MATLAB for facial expression recognition. Second, that MATLAB script needs to be called through a web service; that way, we ensure that the script is available from any platform, including handheld devices. Lastly, we need a Facebook application which calls the web service.

First we made a training database of eight persons with six basic expressions: anger, fear, happiness, neutral, sadness, and surprise. Initially we also took pictures for the expression 'depression', but we could not distinguish between the expressions of 'sadness' and 'depression' even with the naked eye, so we later discarded the 'depression' images from the database. Figure 3 is a screenshot of the training database.




Figure 3: Facial Expression Training Database

After training, we implemented the client side of the web service call using PHP and JavaScript. On the client side, we upload an image and, through the web service call, the expression is detected. On the server side we used the Apache Tomcat container as the application server with Axis2 as the SOAP engine. Using a PHP script we called that web service from a browser: the user uploads a picture from the browser and the facial expression is detected through the web service call. Figure 4 shows the high level architectural overview.



Figure 4: Web based Expression Detection Architecture (client: browser or mobile device issuing an HTTP call to a WAMP/PHP web server; server: Apache Tomcat container with the Axis2 SOAP/web service engine invoking the MATLAB expression detection script)
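Purely as an illustration of the same idea in a self-contained form, the sketch below shows a minimal HTTP endpoint (Python/Flask, our assumption rather than the actual Tomcat/Axis2/PHP stack) that accepts an uploaded image and returns the detected expression; detect_expression is a placeholder standing in for the MATLAB classifier.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def detect_expression(image_bytes):
    """Placeholder for the expression classifier (a MATLAB script in the prototype)."""
    # A real implementation would decode the image, extract the face, and
    # project it into the trained Eigenspace as described in Section 4.2.
    return "happiness"

@app.route("/ems/detect", methods=["POST"])
def detect():
    # The client uploads the captured image as multipart form data.
    uploaded = request.files["image"]
    mood = detect_expression(uploaded.read())
    return jsonify({"mood": mood})

if __name__ == "__main__":
    app.run(port=8080)
```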




Here we provide a screenshot of a sample user with the corresponding detected expression.

Figure 5: Facial Expression Recognition from a Web page.

6. CHARACTERISTICS

Our application has several unique features and research challenges compared to other such applications. Several important functionalities of our model are described here.


6.1 Real Time Mood to Social Media

Considerable research has been done on facial expression detection, but the idea of real time mood detection integrated with Facebook is novel. Using the 'extreme connectivity' of Facebook will help people distribute their happiness to others and, at the same time, help them overcome their sadness, sorrows, and depression, thereby improving quality of life. With the enormous number of Facebook users and its current growth, it can have a real positive impact on society.

6.2 Location Aware Sharing

Though people like to share their moods with friends and others, there are scenarios in which they do not want to publish their mood while in a specific location. For example, if someone is in the office and in a depressed mood after an argument with his boss, he would definitely not like to publish his depressed mood; if that information were published, he might be put in a false position. Our model takes location as a context in the publishing decision.

6.3 Mood Aware Sharing


There are moods that people like to share with everyone and moods that they do not. For example, one might like to share every mood but anger with everyone, or share specific groups of moods only with particular people. Someone may want to share only a happy face with kids (or people below a specific age) and all available moods with others. All these issues have been taken care of.
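As a rough illustration of how the location aware (Section 6.2) and mood aware sharing rules could combine into one publishing decision, consider the following sketch; the rule structure and field names are our own assumptions, not the prototype's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class SharingPolicy:
    """Per-user privacy rules for publishing a detected mood (illustrative only)."""
    blocked_locations: set = field(default_factory=lambda: {"office"})
    private_moods: set = field(default_factory=lambda: {"anger"})
    # Audience groups allowed to see each mood; "*" means every mood.
    audience_rules: dict = field(default_factory=lambda: {"kids": {"happiness"},
                                                          "friends": {"*"}})

def audiences_for(mood, location, policy):
    """Return the audience groups a detected mood may be published to."""
    if location in policy.blocked_locations:        # location aware sharing (6.2)
        return set()
    if mood in policy.private_moods:                # mood aware sharing (6.3)
        return set()
    return {group for group, moods in policy.audience_rules.items()
            if "*" in moods or mood in moods}

# Example: a sad mood detected at home is shown to friends but not to kids.
print(audiences_for("sadness", "home", SharingPolicy()))   # {'friends'}
```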

6.4 Mobility

Our model works for both fixed and handheld devices, which ensures mobility. Two different protocols are being developed for connecting to the web server from fixed devices (laptop, desktop, etc.) and from handheld devices (cell phone, PDA, etc.). Recent statistics show that around 150 million active users access Facebook from their mobiles [39]. This feature will let users access our mood based application on the fly.



6.5 Resources of Behavioral Research

An appropriate use of this model can provide a huge amount of user certified data for behavioral scientists. Billions of dollars are currently being spent on projects that try to improve quality of life. To do that, behavioral scientists need to analyze the mental state of different age groups and their likes and dislikes, among other issues. Statistical resources about the emotions of millions of users could provide them with invaluable information for reaching a conclusive model.

6.6 Context Aware Event Manager

The event manager suggests events that might suit the current mood of the user. It works as a personalized event manager for each user by tracking the user's previous records. When it is about to suggest an activity to make a person happy, it tries to find out whether there was any particular event that made the person happy before. This makes the event manager more specific and relevant to someone's personal wish list and likings.
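A toy sketch of such a mood driven suggestion rule is shown below; the history structure and the fallback suggestions are invented purely for illustration and are not part of the described prototype.

```python
from collections import Counter

# Hypothetical per-user history: (mood_before, activity, mood_after) triples.
HISTORY = [
    ("sadness", "watch a movie", "happiness"),
    ("sadness", "go for a walk", "sadness"),
    ("anger",   "listen to music", "neutral"),
]

FALLBACK = {"sadness": "watch a movie", "anger": "listen to music"}

def suggest_activity(current_mood, history=HISTORY, fallback=FALLBACK):
    """Suggest the activity that most often moved this user from the current
    mood to 'happiness'; otherwise fall back to a generic suggestion."""
    successes = Counter(activity for before, activity, after in history
                        if before == current_mood and after == "happiness")
    if successes:
        return successes.most_common(1)[0][0]
    return fallback.get(current_mood, "no suggestion")

print(suggest_activity("sadness"))   # "watch a movie"
```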

7. CRITICAL & UNRESOLVED ISSUES

7.1 Deception of Expression (suppression, amplification, simulation):

The degree of control people have over suppression, amplification, and simulation of a facial expression has yet to be sorted out for any type of automatic facial expression analysis. Galin and Thorn [26] worked on the simulation issue, but their results are not conclusive. In several studies, researchers obtained mixed or inconclusive findings in their attempts to identify suppressed or amplified pain [26, 27].

7.2 Difference in Cultural, Racial, and Sexual Perception:

Multiple empirical studies have demonstrated the effectiveness of FACS. Almost all of these studies selected individuals mainly based on gender and age, but facial expressions clearly differ among people of different races and ethnicities. Culture plays a major role in how we express emotion: it dominates the learning of emotional expression (how and when) from infancy, and by adulthood that expression becomes strong and stable [19, 20]. Similarly, the same pain detection models are being used for men and women, while research [14, 15] shows notable differences in the perception and experience of pain between the genders. Fillingim [13] believed this occurs due to biological, social, and psychological differences between the two genders. This gender issue has been neglected so far in the literature. In Table III we put 'Y' in the appropriate column if an expression detection model deals with different genders, age groups, and ethnicities.



Table III. Comparison Table Based on the Descriptive Sample Data

Name | Age | Gender | Ethnicity
Eigenfaces [7] | Y | Y | Y
Eigenfaces + Eigeneyes + Eigenlips [8] | Y | Y | Y
AAM [6] | Y | Y (66 F, 63 M) | NM
Littlewort [32, 35] | - | - | -
ANN [28] | Y | Y | Y
RVM [30] | Y (18 hrs to 3 days) | Y (13 B, 13 G) | N (only Caucasian)
Facial Grimace [31] | N | N | N
Back propagation [29] | Y | Y | NM

Y - Yes, N - No, NM - Not mentioned, F - Female, M - Male, B - Boy, G - Girl.

7.3 Intensity:

According to Cohn [23], occurrence/non-occurrence of AUs, temporal precision, intensity, and aggregates are the four reliabilities that need to be analyzed when interpreting the facial expression of any emotion. Most researchers, including Pantic and Rothkrantz [21] and Tian, Cohn, and Kanade [22], have focused on the first issue (occurrence/non-occurrence). The current literature has failed to identify the intensity level of facial expressions.

7.4 Dynamic Features:

Several dynamic features, including timing, duration, amplitude, head motion, and gesture, play an important role in the accuracy of emotion detection. Slower facial actions appear more genuine [25]. Edwards [24] showed the sensitivity of people to the timing of facial expressions. Cohn [23] related head motion to a sample emotion, the smile: he showed that the intensity of a smile increases as the head moves down and decreases as it moves upward and returns to its normal frontal position. These issues of timing, head motion, and gesture have been neglected, even though addressing them would increase the accuracy of facial expression detection.

8. CONCLUSION

Here we have proposed a real time mood detection and sharing application for Facebook. The novelties and their impact on society have been described. Several context awareness features make this application unique compared to other applications of this kind. A customized event manager has been incorporated for making suggestions based on user mood, which is a new trend in current online advertisement strategy. A survey result has also been attached to show the significance and likely acceptance of such an application among the population.

We have already built a small demo for detecting one expression (a happy face) and integrated it with web services. Currently we are working on detecting other moods. We are also trying to incorporate the intensity level of the facial expressions along with the detected mood.

There are still some open issues. There is no Facebook application that appropriately handles user privacy; Facebook also avoids its responsibility by putting the burden on the developer's shoulders. We plan to incorporate this issue, especially the location privacy issue, in our extended model. We also plan to explore other mood detection algorithms to find the most computationally inexpensive and robust method for Facebook integration.

9. REFERENCES

[1] Bartlett, M.S., Littlewort, G.C., Lainscsek, C., Fasel, I., Frank, M.G., Movellan, J.R., "Fully automatic facial action recognition in spontaneous behavior", In 7th International Conference on Automatic Face and Gesture Recognition, 2006, pp. 223-228.

[2] Braathen, B., Bartlett, M.S., Littlewort-Ford, G., Smith, E., and Movellan, J.R. (2002). An approach to automatic recognition of spontaneous facial actions. Fifth International Conference on Automatic Face and Gesture Recognition, pp. 231-235.

[3] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, "Synthesizing realistic facial expressions from photographs", Computer Graphics, 32 (Annual Conference Series): 75-84, 1998.

[4] Jagdish Lal Raheja, Umesh Kumar, "Human facial expression detection from detected in captured image using back propagation neural network", In International Journal of Computer Science & Information Technology (IJCSIT), Vol. 2, No. 1, Feb 2010, pp. 116-123.

[5] Paul Viola, Michael Jones, "Rapid object detection using a boosted cascade of simple features", Conference on Computer Vision and Pattern Recognition, 2001.

[6] A. B. Ashraf, S. Lucey, J. F. Cohn, T. Chen, K. M. Prkachin, and P. E. Solomon, "The painful face II - Pain expression recognition using active appearance models", In International Journal of Image and Vision Computing, 27(12):1788-1796, November 2009.

[7] Md. Maruf Monwar, Siamak Rezaei and Dr. Ken Prkachin, "Eigenimage Based Pain Expression Recognition", In IAENG International Journal of Applied Mathematics, 36:2, IJAM_36_2_1. (online version available 24 May 2007)

[8] Md. Maruf Monwar, Siamak Rezaei: Appearance-based Pain Recognition from Video Sequences. IJCNN 2006: 2429-2434.

[9] Mayank Agarwal, Nikunj Jain, Manish Kumar, and Himanshu Agrawal, "Face recognition using principle component analysis, eigenface, and neural network", In International Conference on Signal Acquisition and Processing, ICSAP, pp. 310-314.

[10] Murthy, G. R. S. and Jadon, R. S. (2009). Effectiveness of eigenspaces for facial expression recognition. International Journal of Computer Theory and Engineering, Vol. 1, No. 5, pp. 638-642.

[11] Singh, S. K., Chauhan, D. S., Vatsa, M., and Singh, R. (2003). A robust skin color based face detection algorithm. Tamkang Journal of Science and Engineering, Vol. 6, No. 4, pp. 227-234.

[12] Rimé, B., Finkenauera, C., Lumineta, O., Zecha, E., and Philippot, P. 1998. Social Sharing of Emotion: New Evidence and New Questions. In European Review of Social Psychology, Volume 9.

[13] Fillingim, R. B., "Sex, gender, and pain: Women and men really are different", Current Review of Pain 4, 2000, pp. 24-30.

[14] Berkley, K. J., "Sex differences in pain", Behavioral and Brain Sciences 20, pp. 371-80.

[15] Berkley, K. J. & Holdcroft, A., "Sex and gender differences in pain", In Textbook of Pain, 4th edition, Churchill Livingstone.

[16] Pantic, M. and Rothkranz, L.J.M. 2000. Expert system for automatic analysis of facial expressions. In Image and Vision Computing, 2000, pp. 881-905.

[17] Pantic, M. and Rothkranz, L.J.M. 2003. Toward an affect sensitive multimodal human-computer interaction. In Proceedings of the IEEE, September, pp. 1370-1390.

[18] Ekman, P. and Friesen, W. Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press, Palo Alto, CA, 1978.

[19] Malatesta, C. Z., & Haviland, J. M., "Learning display rules: The socialization of emotion expression in infancy", Child Development, 53, 1982, pp. 991-1003.

[20] Oster, H., Camras, L. A., Campos, J., Campos, R., Ujiee, T., Zhao-Lan, M., et al., "The patterning of facial expressions in Chinese, Japanese, and American infants in fear- and anger-eliciting situations", Poster presented at the International Conference on Infant Studies, Providence, RI, 1996.

[21] Pantic, M., & Rothkrantz, M., "Automatic analysis of facial expressions: The state of the art", In IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 2000, pp. 1424-1445.

[22] Tian, Y., Cohn, J. F., & Kanade, T., "Facial expression analysis", In S. Z. Li & A. K. Jain (Eds.), Handbook of Face Recognition, 2005, pp. 247-276. New York, New York: Springer.

[23] Cohn, J.F., "Foundations of human-centered computing: Facial expression and emotion", In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'07), 2007, Hyderabad, India.

[24] Edwards, K., "The face of time: Temporal cues in facial expressions of emotion", In Psychological Science, 9(4), 1998, pp. 270-276.

[25] Krumhuber, E., & Kappas, A., "Moving smiles: The role of dynamic components for the perception of the genuineness of smiles", In Journal of Nonverbal Behavior, 29, 2005, pp. 3-24.

[26] Galin, K. E. & Thorn, B. E., "Unmasking pain: Detection of deception in facial expressions", Journal of Social and Clinical Psychology (1993), 12, pp. 182-97.

[27] Hadjistavropoulos, T., McMurtry, B. & Craig, K. D., "Beautiful faces in pain: Biases and accuracy in the perception of pain", Psychology and Health 11, 1996, pp. 411-20.

[28] Md. Maruf Monwar and Siamak Rezaei, "Pain Recognition Using Artificial Neural Network", In IEEE International Symposium on Signal Processing and Information Technology, Vancouver, BC, 2006, pp. 28-33.

[29] A.B. Ashraf, S. Lucey, J. Cohn, T. Chen, Z. Ambadar, K. Prkachin, P. Solomon, B.J. Theobald, "The Painful Face - Pain Expression Recognition Using Active Appearance Models", In ICMI, 2007.

[30] B. Gholami, W. M. Haddad, and A. Tannenbaum, "Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging", In IEEE Trans. Biomed. Eng., 2010. Note: To Appear.

[31] Becouze, P., Hann, C.E., Chase, J.G., Shaw, G.M. (2007). Measuring facial grimacing for quantifying patient agitation in critical care. Computer Methods and Programs in Biomedicine, 87(2), pp. 138-147.

[32] Littlewort, G., Bartlett, M.S., and Lee, K. (2006). Faces of Pain: Automated measurement of spontaneous facial expressions of genuine and posed pain. Proceedings of the 13th Joint Symposium on Neural Computation, San Diego, CA.

[33] Smith, E., Bartlett, M.S., and Movellan, J.R. (2001). Computer recognition of facial actions: A study of co-articulation effects. Proceedings of the 8th Annual Joint Symposium on Neural Computation.

[34] S. Brahnam, L. Nanni, and R. Sexton, "Introduction to neonatal facial pain detection using common and advanced face classification techniques", Stud. Comput. Intel., vol. 48, pp. 225-253, 2007.

[35] Gwen C. Littlewort, Marian Stewart Bartlett, Kang Lee, "Automatic Coding of Facial Expressions Displayed During Posed and Genuine Pain", In Image and Vision Computing, 27(12), 2009, pp. 1741-1844.

[36] Bartlett, M., Littlewort, G., Whitehill, J., Vural, E., Wu, T., Lee, K., Ercil, A., Cetin, M., Movellan, J., "Insights on spontaneous facial expressions from automatic expression measurement", In Giese, M., Curio, C., Bulthoff, H. (Eds.), Dynamic Faces: Insights from Experiments and Computation, MIT Press, 2006.

[37] S. Brahnam, C.-F. Chuang, F. Shih, and M. Slack, "Machine recognition and representation of neonatal facial displays of acute pain", Artif. Intel. Med., vol. 36, pp. 211-222, 2006.

[38] http://www.Facebook.com/press/info.php?statistics

[39] http://metrics.admob.com/