Several studies are based on the Facial Action Coding System (FACS), first introduced by Ekman and Friesen in 1978 []. It is a method for building a taxonomy of almost all possible facial expressions, initially launched with 44 Action Units (AUs). The Computer Expression Recognition Toolbox (CERT) has been proposed [32] to detect facial expressions by analyzing the appearance of the Action Units related to different expressions. Classifiers such as the Support Vector Machine (SVM), AdaBoost, and the Hidden Markov Model (HMM), often combined with Gabor filter features, have been used alone or together to gain higher accuracy. Researchers in [] have used active appearance models (AAM) to identify features of pain from facial expressions. An Eigenface-based method was deployed in [] in an attempt to find a computationally inexpensive solution; later the authors included Eigeneyes and Eigenlips to increase the classification accuracy []. A Bayesian extension of SVM named the Relevance Vector Machine (RVM) has been adopted in [] to increase classification accuracy. Several papers [] relied on the back-propagation algorithm for artificial neural networks to reach classification decisions from extracted facial features. Many other researchers, including Brahnam et al. [] and Pantic et al. [7, 8], have worked in the area of automatic facial expression detection. Almost all of these approaches suffer from one or more of the following deficits: 1) reliance on a clear frontal image, 2) out-of-plane head rotation, 3) selection of the right features, 4) failure to use temporal and dynamic information, 5) a considerable amount of manual interaction, 6) noise, illumination, glasses, facial hair, and skin color issues, 7) computational cost, 8) mobility, 9) intensity of the pain level, and finally 10) reliability. Moreover, there has not been any work on automatic mood detection from facial images in social networks.
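The Eigenface approach mentioned above can be sketched briefly: principal components of a set of flattened training faces form the "eigenfaces", and a new face is classified by nearest-neighbour distance in that low-dimensional subspace. The sketch below is a minimal illustration on synthetic data, not the cited authors' implementation; the cluster means, image size, and "happy"/"sad" labels are made up for the example.

```python
import numpy as np

def eigenfaces(train, k):
    """Compute the mean face and top-k eigenfaces of a (n_images, n_pixels) matrix."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Rows of vt are the principal axes of the centered data (the eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Project a flattened face into the eigenface subspace."""
    return basis @ (face - mean)

def classify(face, mean, basis, gallery, labels):
    """Nearest-neighbour label in eigenface space (labels here are hypothetical)."""
    w = project(face, mean, basis)
    dists = [np.linalg.norm(w - project(g, mean, basis)) for g in gallery]
    return labels[int(np.argmin(dists))]

# Synthetic 8x8 "faces": two well-separated clusters standing in for two expressions
rng = np.random.default_rng(0)
happy = rng.normal(0.8, 0.05, size=(10, 64))
sad = rng.normal(0.2, 0.05, size=(10, 64))
train = np.vstack([happy, sad])
labels = ["happy"] * 10 + ["sad"] * 10

mean, basis = eigenfaces(train, k=5)
probe = rng.normal(0.8, 0.05, size=64)  # drawn from the "happy" cluster
print(classify(probe, mean, basis, train, labels))  # prints "happy"
```

Keeping only the top k components is what makes the method computationally inexpensive: classification happens in a k-dimensional space rather than on raw pixels.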
We have also analyzed the mood-related applications on Facebook. Our model is fundamentally different from all of these simple models: every mood-related Facebook application requires the user to manually choose a symbol that represents his/her mood, whereas our model detects the mood without user intervention.
Table I. Features of several reported facial recognition models
[Table content not fully recoverable; the reported classifiers include the Support Vector Machine (SVM), the Relevance Vector Machine (RVM), and Artificial Neural Networks, with some entries marked "Not Available".]
CRITICAL AND UNRESOLVED ISSUES
Deception of Expression (suppression, amplification, simulation):
The degree of control over suppression, amplification, and simulation of a facial expression is yet to be sorted out in any type of automatic facial expression detection. Galin and Thorn [] worked on the simulation issue, but their result is not conclusive. In several studies, researchers obtained mixed or inconclusive findings during their attempts to identify suppressed or amplified pain [].
Difference in Cultural, Racial, and Sexual Perception:
Multiple empirical studies performed to collect data have demonstrated the effectiveness of FACS. Almost all of these studies selected individuals mainly based on gender and age, yet facial expressions clearly differ among people of different races and ethnicities. Culture plays a major role in how we express emotions: culture dominates the learning of emotional expression (how and when) from infancy, and by adulthood that expression has become strong and stable [19, 20]. Similarly, the same pain detection models are being used for men and women, while research [14, 15] shows a notable difference in the perception and experience of pain between the genders. Fillingim [] believed this occurs due to biological, social, and psychological differences between the genders. This gender issue has been neglected so far in the literature. We have put 'Y' in the appropriate column if the expression detection model deals with different genders, age groups, and ethnicities.
Table II. Comparison Table Based on the Descriptive Sample Data
[Table content not fully recoverable; sample entries include "Y (66 F, 63 M)", "Y (18 hrs to 3 ...)", "Y (13 B, 13 G)", and "No / Not mentioned".]
According to Cohn [], the occurrence/non-occurrence of AUs, temporal precision, intensity, and aggregates are the four reliabilities that need to be analyzed when interpreting the facial expression of any emotion. Most researchers, including Pantic and Rothkrantz [] and Tian, Cohn, and Kanade [], have focused on the first issue (occurrence). The current literature has failed to identify the intensity level of facial expressions.
Several dynamic features, including timing, duration, amplitude, head motion, and gesture, play an important role in the accuracy of emotion detection. Slower facial actions appear more genuine []. Edwards [] showed people's sensitivity to the timing of facial expressions. Cohn [] related head motion to a sample emotion, the smile: he showed that the intensity of a smile increases as the head moves down and decreases as it moves upward and reaches its normal frontal position. These issues of timing, head motion, and gesture have been neglected, although considering them would have increased the accuracy of facial expression detection.
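The dynamic cues discussed above (timing, duration, amplitude) can be quantified from an expression-intensity time series. The sketch below uses a synthetic smile signal and illustrative thresholds of our own choosing (the `active` and `apex_frac` values are not taken from the cited literature) to measure onset time, apex duration, and peak amplitude; per the cited findings, a slower onset would suggest a more genuine expression.

```python
import numpy as np

def dynamics(intensity, fps=30.0, active=0.2, apex_frac=0.9):
    """Onset time, apex duration, and amplitude of one expression episode.

    `active` (activation threshold) and `apex_frac` (fraction of the peak
    counted as "apex") are illustrative values, not from the literature.
    """
    intensity = np.asarray(intensity, dtype=float)
    peak = intensity.max()
    on = np.flatnonzero(intensity >= active)
    if on.size == 0 or peak <= 0:
        return None  # no detectable expression episode
    start = on[0]                      # first frame above the activation threshold
    peak_idx = int(intensity.argmax())  # first frame reaching the maximum
    apex = np.flatnonzero(intensity >= apex_frac * peak)
    return {
        "onset_s": (peak_idx - start) / fps,        # rise time: activation -> peak
        "apex_s": (apex[-1] - apex[0] + 1) / fps,   # time spent near the peak
        "amplitude": peak,
    }

# Synthetic smile at 30 fps: ramp up over 15 frames, hold 10, decay over 15
signal = np.concatenate([np.linspace(0, 1, 15), np.ones(10), np.linspace(1, 0, 15)])
print(dynamics(signal))
```

A real system would apply the same measurement to per-frame AU or smile-intensity estimates rather than a synthetic ramp; the point is that timing and duration fall out of the time series almost for free once intensity is tracked per frame.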
Bartlett, M.S., …, I., Frank, M.G., …, J.R., “Fully automatic facial action recognition in spontaneous behavior”, in International Conference on Automatic Face and Gesture Recognition, 2006.

…, B., Bartlett, M.S., Ford, G., Smith, E. and …, J.R. (2002). “An approach to automatic recognition of spontaneous facial actions”, in Fifth International Conference on Automatic Face and Gesture Recognition, pp. 231–.

F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, “Synthesizing realistic facial expressions from photographs”, 32 (Annual Conference Series).

Jagdish Lal Raheja, Umesh Kumar, “Human facial expression detection from detected in captured image using back propagation neural network”, International Journal of Computer Science & Information Technology (IJCSIT), Vol. 2, No. 1.

Paul Viola, Michael Jones, “Rapid Object Detection using a Boosted Cascade of Simple Features”, Conference on Computer Vision and Pattern Recognition, 2001.

A. B. Ashraf, S. Lucey, J. F. Cohn, T. Chen, K. M. Prkachin, and P. E. Solomon, “The painful face II: Pain expression recognition using active appearance models”, International Journal of Image and Vision Computing.

Md. Maruf Monwar, Siamak Rezaei and Ken Prkachin, “Eigenimage Based Pain Expression Recognition”, International Journal of Applied Mathematics, 36:2, IJAM_36_2_1 (online version available 24 May 2007).

Md. Maruf Monwar, …, “…based Pain Recognition from Video Sequences”.

Mayank Agarwal, Nikunj Jain, Manish Kumar, and Himanshu Agrawal, “Face recognition using principle component analysis, eigenface, and neural network”, in International Conference on Signal Acquisition and Processing (ICSAP), pp. 310–.

Murthy, G. R. S. and Jadon, R. S. (2009). “Effectiveness of eigenspaces for facial expression recognition”, International Journal of Computer Theory and Engineering, Vol. 1, No. 5, pp. 638–.

Singh, S. K., Chauhan, D. S., Vatsa, M., and Singh, R. (2003). “A robust skin color based face detection algorithm”, Journal of Science and Engineering, Vol. 6, No. 4, pp. 227–.

…, and Philippot, P. (1998). “Social Sharing of Emotion: New Evidence and New Questions”, European Review of Social Psychology.

Fillingim, R. B., “Sex, gender, and pain: Women and men really are different”, Current Review of Pain, 4, 2000, pp. 24–.

Berkley, K. J., “Sex differences in pain”, Behavioral and Brain Sciences, 20, pp. 371–.

Berkley, K. J. & Holdcroft, A., “Sex and gender differences in pain”, in Textbook of Pain, 4th edition, Churchill Livingstone.

Pantic, M. and Rothkrantz, L.J.M. (2000). “Expert system for automatic analysis of facial expressions”, Image and Vision Computing, 2000, pp. 881–.

Pantic, M. and Rothkrantz, L.J.M. (2003). “Toward an affect-sensitive multimodal human-computer interaction”, September, pp. 1370–.

Ekman, P. and Friesen, W.V., Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press, Palo Alto, CA, 1978.

Malatesta, C. Z., & Haviland, J. M., “Learning display rules: The socialization of emotion expression in infancy”, 1982, pp. 991–.

Oster, H., Camras, L. A., Campos, J., Campos, R., Ujiee, T., Zhao-Lan, M., et al., “The patterning of facial expressions in Chinese, Japanese, and American infants in fear-eliciting situations”, poster presented at the International Conference on Infant Studies, Providence, RI, 1996.

Pantic, M., & Rothkrantz, L.J.M., “Automatic analysis of facial expressions: The state of the art”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 2000, pp. 1424–.

Tian, Y., Cohn, J. F., & Kanade, T., “Facial expression analysis”, in S. Z. Li & A. K. Jain (Eds.), Handbook of Face Recognition, pp. …–276, New York: Springer.

Cohn, J.F., “Foundations of human computing: Facial expression and emotion”, in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’07).

Edwards, K., “The face of time: Temporal cues in facial expressions of emotion”, Psychological Science, 9(4), 1998.

Krumhuber, E., & Kappas, A., “Moving smiles: The role of dynamic components for the perception of the genuineness of smiles”, Journal of Nonverbal Behavior, 29, 2005.

Galin, K. E. & Thorn, B. E., “Unmasking pain: Detection of deception in facial expressions”, Journal of Social and Clinical Psychology, 12 (1993), pp. 182–.

Hadjistavropoulos, T., McMurtry, B. & Craig, K. D., “Beautiful faces in pain: Biases and accuracy in the perception of pain”, Psychology and Health, 11, 1996, pp. 411–.

Md. Maruf Monwar and Siamak Rezaei, “Pain Recognition Using Artificial Neural Network”, in IEEE International Symposium on Signal Processing and Information Technology.

A.B. Ashraf, S. Lucey, J. Cohn, T. Chen, Z. Ambadar, K. Prkachin, P. Solomon, B.J. Theobald, “The Painful Face: Pain Expression Recognition Using Active Appearance Models”, in ICMI, 2007.

B. Gholami, W. M. Haddad, and A. Tannenbaum, “Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging”, IEEE Trans. Biomed. Eng., 2010 (to appear).

Becouze, P., Hann, C.E., Chase, J.G., Shaw, G.M. (2007). “Measuring facial grimacing for quantifying patient agitation in critical care”, Computer Methods and Programs in Biomedicine, 87(2), pp. 138–.

Littlewort, G., Bartlett, M.S., and Lee, K. (2006). “Faces of Pain: Automated measurement of spontaneous facial expressions of genuine and posed pain”, Proceedings of the 13th Joint Symposium on Neural Computation, San Diego, CA.

Smith, E., Bartlett, M.S., and …, J.R. (2001). “Computer recognition of facial actions: A study of co-articulation effects”, Proceedings of the 8th Annual Joint Symposium on Neural Computation.

S. Brahnam, L. Nanni, and R. Sexton, “Introduction to neonatal facial pain detection using common and advanced face classification techniques”, Stud. Comput. Intel., vol. 48, pp. 225–.

Gwen C. Littlewort, Marian Stewart Bartlett, Kang Lee, “Automatic Coding of Facial Expressions Displayed During Posed and Genuine Pain”, Image and Vision Computing, 27(12), 2009, pp. 1741–.

Bartlett, M., …, E., Wu, T., Lee, K., …, A., Cetin, M., …, J., “Insights on spontaneous facial expressions from automatic expression measurement”, in Curio, C., …, H. (Eds.), Dynamic Faces: Insights from Experiments and Computation, MIT Press, 2006.

S. Brahnam, C.-F. Chuang, F. Shih, and M. Slack, “Machine recognition and representation of neonatal facial displays of acute pain”, vol. 36, pp. 211–.