UNIVERSITY OF BRIDGEPORT
BRIDGEPORT, CT
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Fall 2009

CS 449 Senior Design

A.A.A. (Auto Attendance Assistant)

Advisor: Prof. Ausif Mahmood

Names: Sungho Cho
       Kazutaka Nakayama
       Rushin Parikh
       Khemmaraks Srey

Submit Date: Dec. 21, 2009


TABLE OF CONTENTS

Abstract
Introduction & Background
Proposed System
Hardware Design
Software Design
Testing & Results
Economic Analysis
Conclusion
References
Appendix
    Camera class
    Cluster class
    Eigenface class
    Evaluation class
    EvEvec class
    FaceRecogByEF class
    MyImage class
    NormalizeFuncs class


ABSTRACT

Face recognition is one of the most active and challenging biometric technologies. As the most natural and user-friendly identification method, automatic face recognition has become an important part of next-generation computing. The central challenge in face recognition lies in understanding the role that different facial features play in our judgments of identity, and accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which can alter face images strongly enough to change the apparent identity. Many systems available on the market are also costly and out of reach for individual users. This is the motivation for our project: to build a system that is accurate, reduces the problems caused by lighting, angle, and expression, and remains affordable for individual use as well as wider institutional use. In this report we discuss the hardware design of our system and the software design, including the algorithms used to detect the characteristic features of the human face.


INTRODUCTION & BACKGROUND

Face recognition has been one of the most important areas of machine vision for about a decade. Current systems are fairly accurate under controlled conditions, but extrinsic imaging parameters such as pose, illumination, and facial expression still cause considerable difficulty. To implement face detection and normalization, our group decided to use C# and write our own program that isolates the face from the background. Using nodal points such as the distance between the eyes, we normalize each picture so that it can be matched against our database. Because facial geometry is concentrated around these features, our system is built on detecting the eyes in the image first and then calculating the geometric characteristics of the face. The developed system identifies students for a class they have registered for by comparing captured images against the images stored in our database. The goal of this project is a system that recognizes a person walking into the classroom and marks that person present along with the time, avoiding the hassle of taking attendance manually. The same system can also be deployed outside the classroom, in any environment where a relatively small number of people must be identified for security purposes, for example company offices.

Because face recognition is such an appealing modern technology, many applications have been built around it. The United States maintains mugshot databases so that previously arrested criminals can be searched efficiently, and it operates one of the largest face recognition systems in the world, with over 75 million photographs, actively used for visa processing. The efficiency of these very large systems is questionable, however, because recognition accuracy drops dramatically once the number of images in the database exceeds a certain limit. FastAccess and VeriFace are face recognition applications that let users skip typing passwords when logging in to a computer: they log the user in automatically by matching the face against sample photographs taken beforehand. Because only a few people share the same machine, these applications work very well. Our system likewise maintains reasonable efficiency and accuracy, since it is used in environments with a small set of enrolled images. It eliminates the time wasted showing identification for security purposes or taking attendance in the classroom.


PROPOSED SYSTEM

[Figure: system block diagram - Infrared Transceiver, Infrared Receiver, Holding Circuit, Microcontroller, Microcontroller Module, Webcam, Webcam Module, Eye Detection Module, Normalization Module, Recognition Module, Face Recognition Database]

The flow through the system is as follows:

1. The infrared transceiver emits an IR beam across the doorway.
2. The infrared receiver sends a signal when the beam is broken.
3. The holding circuit keeps and sends the same value to the microcontroller until it is reset.
4. The microcontroller sends a signal to the microcontroller module (software) and sends a signal back to reset the holding circuit.
5. The microcontroller module keeps checking the signal from the microcontroller and informs the webcam module when the IR beam is broken.
6. A stream of video images is received from a webcam; the webcam module captures an image when it receives the signal from the microcontroller module that the IR beam is broken.
7. The eye detection module calculates the eye positions and sends this information to the normalization module.
8. The face is cropped out of the entire image based on the eye positions and passed to the recognition module.
9. The recognition module contacts the database and recognizes the normalized image by comparing it against the images stored in the database.
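To make the module interaction above concrete, the following is a minimal sketch of the main software loop, written in the project's language. The Camera.pCapture() call is taken from the Appendix; every other name (WaitForBeamBreak, TryDetectEyes, Normalize, Recognize, Database, ResetHoldingCircuit) is an illustrative placeholder for the corresponding module, not the actual implementation.

// Illustrative sketch of the A.A.A. pipeline; helper names are hypothetical placeholders.
public void RunAttendanceLoop(Camera webcam, Database db)
{
    while (true)
    {
        WaitForBeamBreak();                        // block until the microcontroller reports a broken IR beam
        Bitmap raw = webcam.pCapture();            // grab the current frame from the webcam
        Point leftEye, rightEye;
        if (!TryDetectEyes(raw, out leftEye, out rightEye))
            continue;                              // no eye pair found; wait for the next person
        Bitmap face = Normalize(raw, leftEye, rightEye);   // crop to 200x200 around the eyes
        string studentId = Recognize(face, db);    // Eigenface comparison against stored images
        if (studentId != null)
            db.MarkPresent(studentId, DateTime.Now);
        ResetHoldingCircuit();                     // re-arm the hardware for the next beam break
    }
}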


HARDWARE DESIGN

The hardware design is one of the most important aspects of our project, and we wanted the hardware to be small and compact. The goal is a sensor that triggers an event to snap a photo whenever a beam is broken, so we use an infrared beam to monitor the door passageway (or any other area). Once the beam is broken, a signal is sent to the microcontroller, which in turn causes a picture to be taken. The design consists of a transmitter, a receiver, and a microcontroller.

Control is handled by the microcontroller, which triggers only when the receiver module signals that the infrared beam has been broken. When the beam is broken, a relay is tripped, which sends a signal to the camera to take a photo. The design is suitable for detecting a person walking through the passage doorway, or anything oncoming.
















The hardware emits an IR beam, which is broken whenever a person walks through the door. A D flip-flop (DFF) and NAND gates are controlled by this IR barrier and send a signal to the microcontroller. The microcontroller is connected to our laptop and is programmed to trigger the digital camera to take a picture whenever a person walks across and breaks the IR beam. We then process the image and use face recognition to identify the person.

[Figure: hardware block diagram - IR barrier, DFF + NAND gates, microcontroller, laptop, camera]


The hardware consists of:

Microcontroller
An Intel 8051-clone microcontroller with 8 KB of RAM and a USB (SIE) peripheral interface. Headers allow external circuits to be interfaced to the board. Signals on the headers include a 16-bit parallel I/O port, VCC (+5 V), GND, and several other signals. We control the microcontroller from our program written in C#.

Transceiver
The IR LED is mounted vertically. The IR beam is strong; the special feature of this transceiver is that the range can be as much as 25 yards without giving false readings.

Receiver
The receiver consists of an IR receiver module that detects the incoming IR beam from the transmitter. The IR signal keeps a capacitor charged, which in turn holds the relay operated. When the beam is broken, the capacitor discharges and the relay releases.

Whenever the IR beam is broken, a high signal is sent to the microcontroller. The microcontroller then resets the clock so the circuit is ready to scan the next person in line. We use a 7474 dual D flip-flop, 7400 NAND gates, and LEDs.

[Figure: latch schematic - reset line, input controlled by the IR barrier, CLK, output to the microcontroller]


Our circuit uses the four NAND gates contained in one 7400 IC.

Using three of the NAND gates we construct an inverter and a latch. A NAND gate acts as an inverter if both inputs A and B are connected to the same signal:

    If A and B are low, then Y is high.
    If A and B are high, then Y is low.

Below is our schematic of an inverter and a latch built from NAND gates.

[Figure: inverter and NAND latch schematic - Set', Reset', IR input, 7474 DFF, output to the microcontroller (M/C)]

The signal from the IR receiver has to be inverted because the latch needs a low signal to be set; the microcontroller likewise sends out a low signal to reset the latch. Whenever the latch is set, it outputs a high signal.


We use the two D flip-flops in a 7474 DFF IC. Once the clock goes high, Q outputs the value of D.

An important aspect of the circuit design is preparing the floor plan and assigning the correct I/O pin to obtain an output. There are numerous approaches for distributing I/O pins, and the headers allow interfacing to the external board. The challenge is knowing the correct signal on the header of the 16-bit parallel I/O. We program the microcontroller from C# so that it sends a high or a low signal to the desired I/O ports. The IR transmitter and receiver successfully produce a high signal from the IR receiver when the beam is broken and a low signal while the beam is stable. Minimizing the size of the hardware while increasing its capability is our goal in the hardware design. During the design process each microcontroller I/O pin has its own responsibility, so it is crucial to understand each part of the design and test each pin individually. Our design is able to detect any person walking through the passage doorway; once the IR beam is broken it triggers a photo to be taken.


SOFTWARE DESIGN

Along with the hardware, the software has to be designed thoroughly to achieve the goal of face recognition. The majority of our software is designed and implemented from scratch. The software design can be divided into six major components: micro-controller, web camera, eye detection, normalization, face recognition, and database design. Each component is explained in detail below.

1. Micro-controller component

The micro-controller is one of the most important parts of the hardware design. Its main purpose is to let the software know that the infrared beam was interrupted, by detecting a signal from the holding circuit through its input ports. Another important task of the micro-controller is to reset the holding circuit by sending a certain combination of signals through its output ports. The holding circuit has to be reset after each infrared beam interruption so that the next interruption can be detected.

To perform these tasks, the micro-controller has to be programmed, and C# is used to program it. The micro-controller supports the "ActiveWire USB Simple I/O ActiveX Component," which provides a number of useful functions for controlling the device. Below is C# code that demonstrates the usage of the ActiveWire ActiveX Component functions.


axAwusbIO1.Open(0);              // Open the micro-controller
axAwusbIO1.EnablePort(16386);    // Enable I/O ports
ResetDevice();                   // Reset the device
Thread myThread = new Thread(new ThreadStart(this.checkIR));
myThread.Start();

The Open() function opens the connection with the micro-controller attached to the machine through a USB cable. The parameter passed to the function, 0, indicates that the first device will be enabled. The EnablePort() function enables whichever ports the user desires, selected by the parameter passed to the function. The micro-controller has a total of sixteen I/O ports, from I/O#0 to I/O#15. In this project two I/O ports are used: one receives a signal from the holding circuit as an input port, and the other sends a combination of signals to the holding circuit as an output port. The parameter 16386 is the decimal value of the binary sequence "0100 0000 0000 0010". This binary sequence indicates that the second and the fifteenth ports will be enabled, which are port numbers 1 and 14, respectively; all other ports are disabled. After enabling the necessary ports, the holding circuit is reset, and a separate thread is created to constantly check for the infrared beam interruption. The code for this is shown below.


// Reset the device
axAwusbIO1.OutPort(0);
Thread.Sleep(500);
axAwusbIO1.OutPort(16386);

// checkIR()
while (axAwusbIO1.InPort() != 65535) { ; }
Bitmap rawImage = null;
if (this.InvokeRequired == false)
{
    rawImage = myCamera.pCapture();
}
else
{
    CaptureDel cd = new CaptureDel(myCamera.pCapture);
    rawImage = (Bitmap)this.Invoke(cd);
}


Either a high or a low signal can be sent to each I/O port of the micro-controller by using the OutPort() function. The parameter 0 in this project indicates that a low signal will be sent to all I/O ports of the micro-controller. The parameter 16386, the decimal value of the binary sequence "0100 0000 0000 0010", indicates that a high signal will be sent to the second and the fifteenth ports, which are port numbers 1 and 14, respectively. The holding circuit is reset when this combination of a low and a high signal is applied.

Opposite of the OutPort() function, the InPort() function receives a signal from an outside source, which is the holding circuit in this project. The infrared beam interruption is detected by checking the value of the ports constantly until the value matches 65535, the decimal number for the binary sequence "1111 1111 1111 1111." Once the infrared beam interruption is detected, an image is captured from the web camera device and processed further for the face recognition.
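The port masks above can also be derived directly from the port numbers rather than written as magic constants. The following is a small sketch of that calculation; the PortMask helper is ours and is not part of the ActiveWire component.

// Build the 16-bit port mask from port numbers 0..15.
// Ports 1 and 14 give 0b0100_0000_0000_0010 = 16386, matching the constant used above.
static int PortMask(params int[] ports)
{
    int mask = 0;
    foreach (int p in ports)
        mask |= 1 << p;          // set the bit for each selected port
    return mask;
}

// Usage sketch: axAwusbIO1.EnablePort(PortMask(1, 14));   // same as EnablePort(16386)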


2. Web camera component

In order to capture a person's face image when the infrared beam is broken, the web camera component is implemented using the DirectShowLib-2005.dll library. The Camera class is a user-defined class built on this library; it provides functions to find connected video devices, set up the video device, play the video stream, capture an image, save the captured image, and so on. Below is the pCapture() function from the Camera class. This function is called after the infrared beam is broken.


public Bitmap pCapture()
{
    int bufSize = 0;
    IntPtr imgData;

    pSampleGrabber.GetCurrentBuffer(ref bufSize, IntPtr.Zero);
    if (bufSize < 1)
    {
        MessageBox.Show("Failed to get buffer size");
        return null;
    }
    imgData = Marshal.AllocCoTaskMem(bufSize);
    pSampleGrabber.GetCurrentBuffer(ref bufSize, imgData);

    // Save as Bitmap
    Bitmap bm = saveToJpg(imgData, bufSize, Video_Height, Video_Width);
    Marshal.FreeCoTaskMem(imgData);
    return bm;
}


3. Face detection and normalization component

For face recognition, only the face image is required, not the background. Therefore an implementation of face detection is required, which crops the face out of the entire image. There are many approaches to detecting faces. At the beginning we attempted to detect the face using skin detection; however, the accuracy of skin detection did not satisfy our requirements. Therefore an eye detection component is implemented to detect the face in this project. The component detects the eye positions and then crops out the face based on the distance between the two eye positions.

In the eye detection process, various filters implemented with GDI+ image processing are applied to the image captured by the webcam module. After all filters are applied, the expected outcome is the positions of the eyes. Once the eye positions are detected, the image can be normalized based on them; as a result only the face is cropped out of the entire image and the rest is discarded. The normalized image has a width and height of 200 pixels each.

The following are the necessary steps, with detailed explanations, to achieve the goal of face detection.


Step 1. Increase brightness

In this step, the brightness of every pixel in the image is increased. The maximum value of an RGB channel is 255. Based on experiments, a value of 170 is added to each RGB component of every pixel in the image. If any of the RGB components exceeds 255, all RGB components of that pixel are set to 255, in other words the pixel becomes white. The images below show the result of the brightness increase step.


Step 2. Change remaining pixels to black

After the first step, only the dark pixels of the original image remain. In this step all remaining non-white pixels are turned black by assigning a pixel value of 0 to any pixel that is not white. Below is the result after this step.
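Steps 1 and 2 together amount to a brightness shift followed by a binary threshold. The following is a minimal sketch of that combined operation, using the experimentally chosen offset of 170 from the text; it uses plain GetPixel/SetPixel for readability, whereas the actual implementation works on locked bitmap data through GDI+.

// Sketch of steps 1-2: brighten by 170, then binarize to pure black/white.
static void BrightenAndThreshold(Bitmap img, int offset = 170)
{
    for (int y = 0; y < img.Height; y++)
    {
        for (int x = 0; x < img.Width; x++)
        {
            Color c = img.GetPixel(x, y);
            // Step 1: any channel that would exceed 255 after the shift turns the pixel white.
            bool white = c.R + offset > 255 || c.G + offset > 255 || c.B + offset > 255;
            // Step 2: every pixel that did not become white becomes black.
            img.SetPixel(x, y, white ? Color.White : Color.Black);
        }
    }
}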




Step 3. Mark possible clusters

After the second step the eyes usually become black and form collections of black pixels, each of which is called a cluster. In this step every cluster is assigned a unique number and created as a Cluster object, so that each cluster can be distinguished from the others.

The image is scanned top to bottom and left to right, looking for black pixels. When a black pixel is encountered, its neighbor pixels are checked to see whether any of them are already numbered; if so, the encountered pixel is assigned the smallest number found among its neighbors. The neighbor pixels are then checked once more, and if a neighbor is black the same procedure is repeated for it. This recursive function effectively numbers all clusters and creates a Cluster object for each cluster in the image. The Cluster objects are then stored in a list. A Cluster object contains useful information about the cluster such as the cluster number, the center position, the number of pixels, and the width and height.

Below is the part of the CreateClusterObject() function where the Cluster objects are created.


if (p[1] == 0)    // if the pixel is black
{
    top = bottom = y;
    left = right = x;
    traverseCluster(bmData, x, y, p);
    width = right - left + 1;
    height = bottom - top + 1;
    center.X = (int)((left + right) / 2);
    center.Y = (int)((top + bottom) / 2);
    myCluster = new Cluster(center, width, height, numPixel, p[0]);
    eyeCandidates.Add(myCluster);
    numPixel = 0;
    recursion = 0;
    recursionLimit = false;
}


Step 4. Remove noise clusters

Once the list of Cluster objects is constructed, all clusters are examined and those that are obviously not an eye are removed from the image. A cluster that matches any of the following conditions cannot be an eye and is removed (a filtering sketch follows the list):

- The cluster's width is less than its height.
- The number of pixels is greater than 500 or less than 40.
- The cluster's Y position is less than 1/8 of the image's height or more than 4/5 of it.
- The cluster's width is longer than 1/11 of the image's width, or its height is less than 3 pixels.
- The cluster's density is less than 30%.

The first condition reflects the fact that the width of an eye cluster must be longer than its height. From the experiments, an eye cluster contains between 40 and 500 pixels; anything outside that range is probably hair, clothing, or background. Density is calculated from the cluster's area by subtracting the number of white pixels inside that area, i.e. it measures the fraction of the cluster region that is black. The iris is always black after the previous steps, so an eye cluster should have high density.

Below is the result after removing the noise clusters.
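The sketch below expresses the filtering pass as a single predicate, using the Cluster class from the Appendix. The density value and the ComputeDensity helper are assumptions (density is computed as described above but its exact implementation is not listed), and the cluster's Y position is interpreted as its center Y coordinate.

// Sketch of step 4: reject clusters that cannot be an eye.
static bool IsPossibleEyeCluster(Cluster c, double density, int imgWidth, int imgHeight)
{
    if (c.Width < c.Height) return false;                                        // eyes are wider than tall
    if (c.NumPixel > 500 || c.NumPixel < 40) return false;                       // outside the eye size range
    if (c.Center.Y < imgHeight / 8.0 || c.Center.Y > imgHeight * 4.0 / 5.0) return false;
    if (c.Width > imgWidth / 11.0 || c.Height < 3) return false;                 // too wide or too flat
    if (density < 0.30) return false;                                            // not dark enough to be an iris
    return true;
}

// Usage sketch (ComputeDensity is a hypothetical helper):
// eyeCandidates.RemoveAll(c => !IsPossibleEyeCluster(c, ComputeDensity(c), 640, 480));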






Step 5. Evaluate eye pairs

Any cluster remaining up to this point is called a possible cluster. Among these possible clusters, two clusters form an eye pair, and the clusters in an eye pair should have similar shapes. To find the eye pair among the possible clusters, the shapes and positions of all cluster combinations are evaluated with an evaluation equation defined from experiments:

eval = pixelDiff * 2 + widthDiff + heightDiff * 4 + angle * 5 + clusterDist * 10;

Here pixelDiff is the difference in pixel count between the two clusters of a pair, and widthDiff and heightDiff are the differences in width and height. The angle is the angle between the two clusters; an eye pair's angle should be near 0°. clusterDist is the difference between the average eye distance and the two clusters' distance; from experiments, the average eye distance is defined as Width / 8.6 = 76 for the 640x480 image size used in our project. The number multiplying each term is its weight.

As mentioned, the left and right eye should have similar shapes and a proper position relative to each other. The best candidate eye pair is the one with the smallest value of "eval" after the evaluation calculation.

The picture below shows the result of this step; a line is drawn between the left and right eye clusters to show the eye pair.
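The following is a minimal sketch of how that score could be computed for one candidate pair, using the Cluster class from the Appendix; the helper itself is ours, and the individual terms are derived from the cluster fields exactly as described above.

// Sketch of step 5: score one candidate pair with the weighted evaluation equation.
// A smaller eval means a more plausible left/right eye pair.
static double EvaluatePair(Cluster a, Cluster b, double avgEyeDist /* e.g. imageWidth / 8.6 */)
{
    double pixelDiff  = Math.Abs(a.NumPixel - b.NumPixel);
    double widthDiff  = Math.Abs(a.Width - b.Width);
    double heightDiff = Math.Abs(a.Height - b.Height);

    double dx = b.Center.X - a.Center.X;
    double dy = b.Center.Y - a.Center.Y;
    double dist  = Math.Sqrt(dx * dx + dy * dy);
    double angle = Math.Abs(Math.Atan2(dy, dx) * 180.0 / Math.PI);   // ideally near 0 degrees
    double clusterDist = Math.Abs(dist - avgEyeDist);

    return pixelDiff * 2 + widthDiff + heightDiff * 4 + angle * 5 + clusterDist * 10;
}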




Step 6. Evaluate eye and eyebrow pairs

In the fifth step the two most similar clusters are detected, and they are probably the eyes. However, an eyebrow pair also has a similar shape and relative position to an eye pair, so the system sometimes detects the eyebrows instead of the eyes. The evaluation method of the fifth step has no way to distinguish an eye pair from an eyebrow pair. A new approach is therefore needed to decide which pair is the eye pair when both the eyes and the eyebrows appear in the image at the same time; otherwise the eyebrow pair may be selected as the eye pair, which makes the eye detection fail.

If more than two candidate cluster pairs are detected in the fifth step, the system keeps at most five candidate pairs and then compares pairs two at a time, for all combinations, with the evaluation method.

If the following conditions are satisfied, the eye detection module concludes that both an eye pair and an eyebrow pair are present in the image, and it chooses the bottom pair as the eye pair instead of the top pair, regardless of the evaluation value:

- The angle between the eye pair and the eyebrow pair is almost parallel, i.e. close to 0°.
- The angle of the vertical line from the eye pair to the eyebrow pair is near 90°.
- The distance between the eye pair and the eyebrow pair is less than 1/12 of the image's height.




Once the locations of the two eye clusters are found, the Normalization() function is called to crop out the face image with a dimension of 200x200 pixels, based on the locations of the eye clusters. This size is enough to cover the face of most people. Below is the normalized image after eye detection and normalization.
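A minimal sketch of such a crop is shown below, assuming the eye centers found in the previous steps. The exact placement used by the real Normalization() function is not given in the text, so the margins here (eyes centered horizontally, at roughly 40% of the crop height) are illustrative assumptions.

// Sketch of normalization: cut a 200x200 face region positioned relative to the two eye centers.
static Bitmap CropFace(Bitmap source, Point leftEye, Point rightEye, int size = 200)
{
    int midX = (leftEye.X + rightEye.X) / 2;
    int midY = (leftEye.Y + rightEye.Y) / 2;
    var region = new Rectangle(midX - size / 2, midY - (int)(size * 0.4), size, size);

    Bitmap face = new Bitmap(size, size);
    using (Graphics g = Graphics.FromImage(face))
        g.DrawImage(source, new Rectangle(0, 0, size, size), region, GraphicsUnit.Pixel);
    return face;
}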





4. Face recognition component

There are many face recognition techniques, including Eigenfaces, Fisherfaces, three-dimensional face recognition, and skin texture analysis. Among these, the Eigenface technique is chosen because it is relatively simple and fast compared to the other approaches. The detailed steps of Eigenface recognition are described below.


Step 1. Preparation

Eigenfaces for all face images stored in the database have to be constructed before recognition. Images are stored as binary files in the database. These images are queried and stored as MyImage objects. Each MyImage object contains an array of bytes holding all grayscale pixel values of the image, and each MyImage object is stored in a list of MyImages. Maintaining this array of pixel values is very important, since it is used many times throughout the recognition process. The code below shows the core part of constructing a MyImage object.


// Constructing the MyImage object
for (int i = 0; i < origArray.Length;)
{
    blue = origArray[i];
    green = origArray[i + 1];
    red = origArray[i + 2];
    average = (byte)((blue + green + red) / 3);
    imageArray[k++] = average;
    i = i + 3;
}


Step 2. Find the average vector

Once the list of MyImage objects is constructed, the next step is to find the average face vector, called avgVector in this project. The average face vector can be computed with the following formula, where Γ_i is the array of grayscale pixel values of each MyImage object in the list and M is the number of images in the database:

    Ψ = (1/M) Σ_{i=1..M} Γ_i

The average face vector is simply the average of all image arrays: the pixel values at each position are added, and the total is divided by the number of images to find the average. This process is repeated to calculate the average pixel value for every position in an image. Below is the code for computing the average face vector.


private static void ComputeAvgVector()
{
    avgVector = new double[numPixels];
    for (int j = 0; j < numPixels; j++)
    {
        for (int i = 0; i < imageList.Count; i++)
        {
            MyImage auxim = imageList[i];
            avgVector[j] = avgVector[j] + auxim.imageArray[j];
        }
        avgVector[j] = avgVector[j] / imageList.Count;
    }
}


Step 3. Find the set vector

The next step is to find a set of vectors that represents the difference between the original image vectors and the average vector. These vectors are computed by subtracting the average face vector from each original image vector. This set of vectors, called setVector in this project, is given by

    Φ_i = Γ_i − Ψ

and below is the corresponding implementation of the formula.



private static void ComputeSetVector()
{
    setVector = new double[imageList.Count, numPixels];
    for (int i = 0; i < imageList.Count; i++)
    {
        MyImage auxim = imageList[i];
        for (int j = 0; j < numPixels; j++)
            setVector[i, j] = auxim.imageArray[j] - avgVector[j];
    }
}


Step 4. Find the covariance matrix

The covariance matrix C can be computed as

    C = (1/M) Σ_{n=1..M} Φ_n Φ_nᵀ = A Aᵀ,   where A = [Φ_1 Φ_2 ... Φ_M].

Assuming each image's dimension is N x N, the dimension of matrix A is N² x M, since each Φ_i has dimension N² x 1 and there are M of them. If C is computed as in the equation above, its dimension is N² x N², yielding N² Eigenvectors, which is a huge number. Therefore, instead of calculating the matrix C as described above, we consider the matrix AᵀA, which has dimension M x M and whose Eigenvectors are used in place of those of C. Below is the corresponding implementation of this step.


private static void ComputeCovVector()
{
    double sum;
    covVector = new double[imageList.Count, imageList.Count];
    for (int i = 0; i < imageList.Count; i++)
        for (int j = 0; j < imageList.Count; j++)
            if (i <= j)
            {
                sum = 0.0;
                for (int k = 0; k < numPixels; k++)
                    sum = sum + (setVector[i, k] * setVector[j, k]);
                covVector[i, j] = sum;
                covVector[j, i] = sum;
            }
}


Step 5. Find the Eigenvectors and Eigenvalues

From the covariance matrix C, whose dimension is M x M, the Eigenvectors and their corresponding Eigenvalues are calculated using an existing library called mattestdll.dll. The FindEigens() function of this library is used to compute the M Eigenvectors and M Eigenvalues of the covariance matrix C. These Eigenvector and Eigenvalue pairs are stored as EvEvec objects, which are in turn stored in a list of EvEvec objects called EVList in this project. Below is the FindEigens() call with the appropriate parameters for mattestdll.dll.

FindEigens(imageList.Count, covArray, Evals, EvecsArray);


Step 6. Find the Eigenfaces

The next step is to find an Eigenface for each Eigenvector computed from the covariance matrix. An Eigenface is computed with the equation

    u_l = Σ_{k=1..M} v_{lk} Φ_k,

where Φ is the set vector and v is the Eigenvector; in other words, an Eigenface is obtained simply by multiplying each Eigenvector into the set vectors. Each computed Eigenface is stored as an Eigenface object along with its Eigenvalue, and the same process is repeated for all M Eigenfaces. As a result, a list of Eigenfaces is constructed, containing all Eigenfaces and their corresponding Eigenvalues. Below is the code for finding the Eigenfaces.


private static void ComputeEigenFace(double[] ev, Eigenface ef)
{
    // Compute an Eigenface for one given Eigenvector
    double sum;
    for (int i = 0; i < numPixels; i++)
    {
        sum = 0.0;
        for (int j = 0; j < imageList.Count; j++)
        {
            sum = sum + (ev[j] * setVector[j, i]);
        }
        ef.EF[i] = sum;
    }
}


Step 7. Find the weight vectors

A weight vector can be computed from each Eigenface by multiplying the Eigenface with the transpose of the set vector. A total of M weight vectors are calculated, and together they form a weight matrix called weightVector in this project. Below is the code for computing the weight vectors.


private static void ComputeWeight()
{
    double sum;
    Eigenface ef;
    // ef.EF * transpose(setVector): (M by n^2) * (n^2 by M) = M by M
    weightVector = new double[imageList.Count, imageList.Count];
    for (int k = 0; k < imageList.Count; k++)
    {
        for (int i = 0; i < imageList.Count; i++)
        {
            sum = 0.0;
            ef = EigenfaceList[i];
            for (int j = 0; j < numPixels; j++)
            {
                sum = sum + (ef.EF[j] * setVector[k, j]);
            }
            weightVector[k, i] = sum;
        }
    }
}


Step 8. Find the weight vector of the target image

The weight vector of the target image is computed and compared to the images in the database. First, the target image is normalized by subtracting the average face vector from the target image vector, using the equation Φ_new = Γ_new − Ψ. Then the weight vector is computed by projecting this normalized probe onto the collection of Eigenfaces, using the equation ω_i = u_iᵀ Φ_new. As a result, a weight vector for the target image is created, called newImageWeight in this project. Below is the code for computing newImageWeight.


public static void ComputeNewImageWeight()
{
    double sum;
    Eigenface ef;
    newImageWeight = new double[imageList.Count];
    for (int i = 0; i < imageList.Count; i++)
    {
        sum = 0.0;
        ef = EigenfaceList[i];
        for (int j = 0; j < numPixels; j++)
        {
            sum = sum + (ef.EF[j] * newMeanVector[j]);
        }
        newImageWeight[i] = sum;
    }
}


Step 9. Compare distances

The next step is to compare the newImageWeight vector to the existing weight vectors of the images stored in the database. The Euclidean distance is used to compute the distance between two weight vectors:

    ε_i = ||Ω_new − Ω_i|| = sqrt( Σ_j (ω_new,j − ω_i,j)² )

The code below computes these distances.


public static void ComputeDistance()
{
    double sum;
    distance = new double[imageList.Count];
    for (int i = 0; i < imageList.Count; i++)
    {
        sum = 0.0;
        for (int j = 0; j < imageList.Count; j++)
        {
            sum = sum + ((newImageWeight[j] - weightVector[i, j])
                       * (newImageWeight[j] - weightVector[i, j]));
        }
        distance[i] = Math.Sqrt(sum);
    }
}


Step 10. Finalization

Once the distances are computed, the image with the smallest distance can be considered the recognized image. However, the target image is not always stored in the database; sometimes the target image is a completely new image. To determine whether the recognized image really is a match, a threshold value was chosen from experiments: the average distance is computed, and the distance of the recognized image is divided by the average distance. If this result exceeds 0.3, the threshold value, the target image is considered not to exist in the database. If the result is under the threshold, the image is considered recognized.
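A minimal sketch of that final decision is shown below, using the distance array produced in the previous step. The 0.3 threshold is the experimentally chosen value from the text; the helper name and the -1 "not found" convention are ours.

// Sketch of step 10: accept the closest match only if it is well below the average distance.
static int FindRecognizedImage(double[] distance, double threshold = 0.3)
{
    int best = 0;
    double avg = 0.0;
    for (int i = 0; i < distance.Length; i++)
    {
        avg += distance[i];
        if (distance[i] < distance[best]) best = i;
    }
    avg /= distance.Length;

    // A relative distance above the threshold means the face is probably not in the database.
    return (distance[best] / avg) > threshold ? -1 : best;
}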


5. Database design component

The database plays one of the most important roles in this project. Images of each individual are stored in the database and are queried whenever the application needs them. Although storing images is the main reason the database is maintained, it is also needed to store auxiliary information such as students, courses, and prerequisites, so that the whole project can demonstrate the use of face recognition in a classroom environment. Each database table design is explained below.


Students table

The Students table maintains the information of each individual student. It is very important to keep this information, especially the Picture column, because each individual's picture is stored in this table. Each student's picture is queried prior to the face recognition phase as preparation for recognition. The Student ID is required to identify the student, and the other columns hold the details of each student.


Courses table

The Courses table maintains information about each course registered at the university. The course number is the primary key, used to distinguish a course from the others. For students to register for a course, the course must be listed in this table. Besides the course number, the course title and credit hours are provided in this table.


CoursesTaken table

The CoursesTaken table maintains information about the courses completed by a particular student. This information is important because some classes require prerequisite classes to be completed before a course can be registered.


Prerequisites table

The Prerequisites table lists all prerequisite classes required before registering for a particular class. If the prerequisites are not satisfied, the student cannot register for the course.


Enrollment table

The Enrollment table records which student is registered for which classes. This information is used when displaying who is registered for which classes.


Attendances table

The Attendances table records which student was present in which class on which date. Whenever the face recognition process completes without errors, a record is added to this table, so the system can keep track of which student attended which class and when.
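To make the schema concrete, the following is a minimal sketch that creates two of the tables described above. The text does not specify the database engine, so SQL Server via ADO.NET is assumed here; every column name and type beyond those mentioned in the text (Student ID, Picture, course number, attendance date) is our own assumption, and the connection string is a placeholder.

// Illustrative schema sketch only; engine, column names, and types are assumptions.
using System.Data.SqlClient;

static void CreateCoreTables(string connectionString)
{
    const string ddl = @"
        CREATE TABLE Students (
            StudentID  INT PRIMARY KEY,
            FirstName  NVARCHAR(50),
            LastName   NVARCHAR(50),
            Major      NVARCHAR(50),
            Picture    VARBINARY(MAX)       -- face image stored as a binary file
        );
        CREATE TABLE Attendances (
            StudentID  INT REFERENCES Students(StudentID),
            CourseNum  NVARCHAR(10),
            AttendedOn DATETIME              -- filled in when recognition succeeds
        );";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(ddl, conn))
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}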





TESTING AND RESULTS

A total of 15 cases were tested under the same conditions. A light stand was placed in front of the person whose face picture was taken for the testing. First, an image of the person is captured from the webcam to insert a record into the face recognition database.

Once the record with the person's information is inserted into the database, the next step is to register for a particular class, CS101 in this testing. The information includes the person's first name, last name, address, major, and the face image as a binary file. After registration, the person's status can be checked through the enrollment table, where the user can select a course number to find out who is registered for which classes.

Once the person is registered for a class, an image of the person is taken one more time, and the newly taken image is recognized by comparing it against the images in the database. This time the image capture is triggered by the infrared beam interruption. Once the image is taken, it goes through the eye detection, normalization, and recognition processes. If the image is recognized, a new window pops up to confirm whether the recognition is valid. If the image is not recognized, a message box pops up to inform the user that the system could not find any matching image.

14 of the 15 cases successfully recognized the face image of the individual who participated in the testing. The failed cases failed in the recognition process, but even there the system was able to find the eye positions correctly and to detect the face when the individual's information was inserted into the database. As a result, the system achieved 93.3% accuracy. The result might change if the number of tested cases is increased.


ECONOMIC ANALYSIS

EXECUTIVE SUMMARY:

- Development of a face recognition system that is affordable for educational institutes, simple to operate, and has a low maintenance cost.
- An affordable system that can recognize a student walking into the classroom and mark that student as present.
- A.A.A is a system that takes a picture of a person walking through the door and matches it against the database of the institute's security unit to record the person's presence.
- The system has the ability to differentiate and match the picture taken on the basis of the distance between the facial nodes and the skin color.
- The main purpose of building this system is to provide a substitute for the attendance ritual and make the attendance count easy for the professor during class.
- No additional certification or approval is needed, as all the parts used in A.A.A already carry the required certifications.




OWNERS:

The people working on this project are students from the University of Bridgeport:

- Sungho Cho
- Khemmaraks Srey
- Kazutaka Nakayama
- Rushin Parikh






PRODUCT AND SERVICES:

A.A.A is designed to have the following features and functions:

- Ability to take a picture when a person breaks the IR beam.
- Ability to detect the face in the picture.
- Ability to analyze the face in terms of facial structure, distance between facial nodes such as the eyes and nose, skin color, etc.
- Ability to match the results against the database.
- Ability to identify the face.




GENERAL COMPANY DESCRIPTION:

- The central challenge in face recognition lies in understanding the role that different facial features play in our judgments of identity. Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which can alter face images strongly enough to change the apparent identity. Many systems available on the market are costly and out of reach for ordinary users. This is the motivation for our project: to build a system that is more accurate and more affordable for individuals, usable in many areas for wider use, for example in classrooms for attendance checking.
- A.A.A is an emerging company and promoter of a system for schools, colleges, and universities that provides an easy solution to taking class attendance without involving the professor. Our system provides all the features necessary to get the task done at an affordable price.




SALES AND MARKETING:

Market Niche

The target market for our system would be:

o Educational institutes that provide education at different levels across the country, such as schools, colleges, and universities where the professor takes attendance.
o Universities and colleges in the Northeast (CT, NJ, NY, MA):
  - CT: 28 total; universities such as Yale, UCONN, UB, and UNH.
  - NY: 102 total; universities such as NYU, Hofstra, and Pace.
  - NJ: 27 total; universities such as Rutgers, Princeton, and Stevens.
  - MA: 72 total; universities such as MIT, BU, Harvard, and NEU.

PUBLIC RELATIONS STRATEGY

o We send surveys to schools and institutions.
o We send brochures to schools and institutions.
o Executive marketing




SWOT ANALYSIS:

Strengths, weaknesses, opportunities, and threats of our product:

Strengths
o Good management skills
o Unique, high quality, and multifunctional
o Low price
o Excellent customer service
o No direct competition
o Adds value for end users

Opportunities
o Technology advancement
o Competitive pricing
o Market share
o First movers

Weaknesses
o Lack of capital investment
o Young image

Threats
o Failure to treat the initiative as a startup venture
o Lack of brand awareness




Competition:

Face Recognition is a field where people come up with new ideas every day. The
product which we are going to launch in the market is on a small scale

basis with
relevant features along with being affordable for nonprofit educational institutes.

The competitors for out A.A.A would be the students from different universities who
can build similar product.




Barriers to Entry:



Patents



Brand awareness



Stat
e of the art of the technology





31




MARKETING and SALES STRATEGY:

We are marketing our system in a world where many versions of the same kind of product already exist, with different features and prices. As developers we would take the following marketing strategies:

- Increase the number of customers: This is the first and most basic step in launching our product in the market. We can do this by approaching schools and colleges.
- Increase the frequency of repurchase: We can increase the frequency of repurchase by consistently offering customers more of what they want. Communicating frequently with past and present customers via telephone or mail generally increases the frequency of repurchase. This step will help us grow our business.
- Advertising and promotion: Our advertising channels will include
  o University and school advertising
  o Internet ads and radio advertising
  o Newspapers and magazines
  o Banners in the local streets near the schools
  o Conferences
- Distribution channels: We will sell our system
  o Directly to consumers via mail order
  o Through our own sales force
  o By bidding on contracts



Marketing Schedule:

[Table: 12-month marketing schedule (months 1-12) for the following channels: NYT, USA Today, newsletter, E-Tech Conference, International CEA, WWE, external advertising, and direct mail.]




FINANCE

Price per Unit = $500

Startup and Capitalization

Projected Cash Flow Statement            2010        2011        2012        2013        2014
Source of Funds
  Beginning Cash (Joint Venture,
    Investors, Personal)              $250,000    $255,000    $281,250    $356,750    $531,750
  Sales/Svc Income                    $200,000    $250,000    $437,500    $625,000    $950,000
  Available Cash                      $450,000    $505,000    $718,750    $981,750  $1,481,750
Use of Funds
  Technical Support                     12,500      11,250      12,000       7,500      10,000
  Salaries                             $70,000    $100,000    $137,500    $150,000    $162,500
  Operating Expenses                   $50,000     $50,000     $50,000     $57,500     $72,800
  Other Expenses                       $37,500     $25,000     $37,500     $42,500     $47,500
  Tax Payments                         $25,000     $37,500     $75,000    $100,000    $125,000
  R&D                                        -           -     $50,000     $92,500    $200,000
  Total Cash Out                      $195,000    $223,750    $362,000    $450,000    $617,800
Net Cash Flow                         $255,000    $281,250    $356,750    $531,750    $863,950

Sales Chart                               2010        2011        2012        2013        2014
Number of Units Sold                       400         500         875       1,250       1,900
Amount of Dollars                     $200,000    $250,000    $437,500    $625,000    $950,000

Expected ROI in 5 years is 36% = $82,200 (net cash minus beginning cash)

                              Share in Investment   Amount in Dollars (Return)   Initial Investment
University of Bridgeport                      50%                      $41,100             $125,000
Personal Investment                           20%                      $16,440              $75,000
Other Investors                               20%                      $16,440              $50,000
Reinvestment                                  10%                       $8,220                    -

Projected Balance Sheet                   2010        2011        2012        2013        2014
Current Assets:
  Cash                                $255,000    $281,250    $356,750    $531,750    $863,950
  Total Current Assets                $255,000    $281,250    $356,750    $531,750    $863,950
Fixed Assets:
  Office Furniture & Equipment         $50,000     $25,000     $37,500     $50,000     $56,250
  Less Accum. Depreciation              $2,000      $1,000      $1,500      $2,000      $2,250
  Total Fixed Assets                   $48,000     $24,000     $36,000     $48,000     $54,000
Total Assets                          $303,000    $305,250    $392,750    $579,750    $917,950
Liabilities:
  Accounts Payable                    $298,000    $279,000    $317,250    $404,750    $585,750
  Total Liabilities                   $298,000    $279,000    $317,250    $404,750    $585,750
Stockholder's Equity:
  Retained Earnings                     $5,000     $26,250     $75,500    $175,000    $332,200
Total (L + SE)                        $303,000    $305,250    $392,750    $579,750    $917,950




Exit Strategy:

- We can start our company by introducing a system that provides new and innovative features.
- We can sell our A.A.A to an existing company providing a similar system, or to a company that can use our system as part of a larger security system.


CONCLUSION

We were successful in developing the proposed system. The principal stages were evaluated, including registering the student in our database, taking the picture when the IR beam was broken, normalizing the picture, and finally recognizing the student by matching against the database. The picture was normalized in order to obtain better accuracy during the final recognition stage.

Initial experiments were carried out within the group, and later testing was done with random students of the University of Bridgeport. The developed Eigenface algorithm increased the accuracy of the recognition. Testing revealed 93.3% positive results. Overall, the product gave satisfactory results.



REFERENCES

1. Yue Gao. "Eyes Detection." February 1, 2008.
2. "A Histogram based New Approach of Eye Detection for Face Identification." http://cryptome.org/dprk/Eye_Detection.doc
3. Shubhendu Trivedi. "Face Recognition using Eigenfaces and Distance Classifiers." February 11, 2009. http://onionesquereality.wordpress.com/2009/02/11/face-recognition-using-eigenfaces-and-distance-classifiers-a-tutorial/


APPENDIX

1. Camera class

// File: Camera.cs
// Author: Sungho Cho

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using DirectShowLib;
using System.Drawing;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Drawing.Imaging;

namespace UniversityApp
{
    class Camera
    {
        public IBaseFilter theDevice = null;
        public IGraphBuilder pGraphBuilder = null;
        public IMediaControl pMediaControl = null;
        ICaptureGraphBuilder2 pCaptureGraphBuilder2 = null;
        IVideoWindow pVideoWindow = null;
        IBaseFilter pVideoRenderer = null;
        IBaseFilter pSampleGrabberFilter = null;
        ISampleGrabber pSampleGrabber = null;
        int Video_Width, Video_Height;
        Bitmap copy = null;
        Panel pnlCam = null;
        Form fmCam = null;

        public Camera(Form fmCam, Panel pnlCam)
        {
            this.fmCam = fmCam;
            this.pnlCam = pnlCam;
        }

        public void SetupGraph()
        {
            AMMediaType am_media_type = new AMMediaType();
            pSampleGrabber = (ISampleGrabber)new SampleGrabber();
            pSampleGrabberFilter = (IBaseFilter)pSampleGrabber;
            pVideoRenderer = (IBaseFilter)new VideoRenderer();
            pGraphBuilder = (IGraphBuilder)new FilterGraph();
            pMediaControl = (IMediaControl)pGraphBuilder;
            pCaptureGraphBuilder2 =
                (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
            pCaptureGraphBuilder2.SetFiltergraph(pGraphBuilder);

            am_media_type.majorType = MediaType.Video;
            am_media_type.subType = MediaSubType.RGB24;
            am_media_type.formatType = FormatType.VideoInfo;
            pSampleGrabber.SetMediaType(am_media_type);
            pGraphBuilder.AddFilter(theDevice, "WebCam Source");
            pGraphBuilder.AddFilter(pSampleGrabberFilter, "Sample Grabber");
            pGraphBuilder.AddFilter(pVideoRenderer, "Video Render");
            pCaptureGraphBuilder2.RenderStream(PinCategory.Preview,
                MediaType.Video, theDevice, pSampleGrabberFilter, pVideoRenderer);
            initWinDow(pnlCam);
            pSampleGrabber.GetConnectedMediaType(am_media_type);
            VideoInfoHeader pVideoInfoHeader =
                (VideoInfoHeader)Marshal.PtrToStructure(am_media_type.formatPtr,
                typeof(VideoInfoHeader));
            Video_Width = pVideoInfoHeader.BmiHeader.Width;
            Video_Height = pVideoInfoHeader.BmiHeader.Height;
            DsUtils.FreeAMMediaType(am_media_type);
            pSampleGrabber.SetBufferSamples(true);
        }

        public Bitmap pCapture()
        {
            int bufSize = 0;
            IntPtr imgData;
            pSampleGrabber.GetCurrentBuffer(ref bufSize, IntPtr.Zero);
            if (bufSize < 1)
            {
                MessageBox.Show("Failed to get buffer size");
                return null;
            }
            imgData = Marshal.AllocCoTaskMem(bufSize);
            pSampleGrabber.GetCurrentBuffer(ref bufSize, imgData);

            // Save as Bitmap
            Bitmap bm = saveToJpg(imgData, bufSize, Video_Height, Video_Width);
            Marshal.FreeCoTaskMem(imgData);
            return bm;
        }

        private Bitmap saveToJpg(IntPtr Source, int Size, int height, int width)
        {
            int stride = -3 * width;
            IntPtr Scan0 = (IntPtr)(((int)Source) + (Size - (3 * width)));
            Bitmap img = new Bitmap(width, height, stride,
                PixelFormat.Format24bppRgb, Scan0);
            copy = new Bitmap(img);
            return copy;
        }

        private void initWinDow(Control winPanel)
        {
            pVideoWindow = (IVideoWindow)pGraphBuilder;
            pVideoWindow.put_Owner(winPanel.Handle);
            pVideoWindow.put_WindowStyle(WindowStyle.Child |
                WindowStyle.ClipSiblings);
            Rectangle rect = winPanel.ClientRectangle;
            pVideoWindow.SetWindowPosition(0, 0, rect.Right, rect.Bottom);
        }

        public void CloseInterface()
        {
            if (pMediaControl != null)
            {
                pMediaControl.StopWhenReady();
            }
            if (pVideoWindow != null)
            {
                pVideoWindow.put_Visible(OABool.False);
                pVideoWindow.put_Owner(IntPtr.Zero);
            }
            if (pGraphBuilder != null)
            {
                Marshal.ReleaseComObject(pGraphBuilder);
                pGraphBuilder = null;
            }
            if (pMediaControl != null)
            {
                Marshal.ReleaseComObject(pMediaControl);
                pMediaControl = null;
            }
            if (pVideoWindow != null)
            {
                Marshal.ReleaseComObject(pVideoWindow);
                pVideoWindow = null;
            }
            if (pCaptureGraphBuilder2 != null)
            {
                Marshal.ReleaseComObject(pCaptureGraphBuilder2);
                pCaptureGraphBuilder2 = null;
            }
        }

        #region Device Option Setting
        [DllImport(@"oleaut32.dll")]
        public static extern int OleCreatePropertyFrame(
            IntPtr hwndOwner,
            int x,
            int y,
            [MarshalAs(UnmanagedType.LPWStr)] string lpszCaption,
            int cObjects,
            [MarshalAs(UnmanagedType.Interface,
                ArraySubType = UnmanagedType.IUnknown)]
            ref object ppUnk,
            int cPages,
            IntPtr lpPageClsID,
            int lcid,
            int dwReserved,
            IntPtr lpvReserved);

        public void DisplayPropertyPage(IBaseFilter dev)
        {
            ISpecifyPropertyPages pProp = dev as ISpecifyPropertyPages;
            int hr = 0;
            if (pProp == null)
            {
                IAMVfwCompressDialogs compressDialog = dev as IAMVfwCompressDialogs;
                if (compressDialog != null)
                {
                    hr = compressDialog.ShowDialog(VfwCompressDialogs.Config,
                        IntPtr.Zero);
                    DsError.ThrowExceptionForHR(hr);
                }
                return;
            }

            FilterInfo filterInfo;
            hr = dev.QueryFilterInfo(out filterInfo);
            DsError.ThrowExceptionForHR(hr);

            DsCAUUID caGUID;
            hr = pProp.GetPages(out caGUID);
            DsError.ThrowExceptionForHR(hr);

            IPin pPin = DsFindPin.ByDirection(dev, PinDirection.Output, 0);
            ISpecifyPropertyPages pProp2 = pPin as ISpecifyPropertyPages;
            if (pProp2 != null)
            {
                DsCAUUID caGUID2;
                hr = pProp2.GetPages(out caGUID2);
                DsError.ThrowExceptionForHR(hr);

                if (caGUID2.cElems > 0)
                {
                    int soGuid = Marshal.SizeOf(typeof(Guid));
                    IntPtr p1 = Marshal.AllocCoTaskMem((caGUID.cElems +
                        caGUID2.cElems) * soGuid);

                    for (int x = 0; x < caGUID.cElems * soGuid; x++)
                    {
                        Marshal.WriteByte(p1, x, Marshal.ReadByte(caGUID.pElems, x));
                    }
                    for (int x = 0; x < caGUID2.cElems * soGuid; x++)
                    {
                        Marshal.WriteByte(p1, x + (caGUID.cElems * soGuid),
                            Marshal.ReadByte(caGUID2.pElems, x));
                    }

                    Marshal.FreeCoTaskMem(caGUID.pElems);
                    Marshal.FreeCoTaskMem(caGUID2.pElems);
                    caGUID.pElems = p1;
                    caGUID.cElems += caGUID2.cElems;
                }
            }

            object oDevice = (object)dev;
            hr = OleCreatePropertyFrame(fmCam.Handle, 0, 0, filterInfo.achName, 1,
                ref oDevice, caGUID.cElems, caGUID.pElems, 0, 0, IntPtr.Zero);
            DsError.ThrowExceptionForHR(hr);

            Marshal.FreeCoTaskMem(caGUID.pElems);
            Marshal.ReleaseComObject(pProp);
            if (filterInfo.pGraph != null)
            {
                Marshal.ReleaseComObject(filterInfo.pGraph);
            }
        }

        public IBaseFilter CreateFilter(Guid category, string friendlyname)
        {
            object source = null;
            Guid iid = typeof(IBaseFilter).GUID;
            foreach (DsDevice device in DsDevice.GetDevicesOfCat(category))
            {
                if (device.Name.CompareTo(friendlyname) == 0)
                {
                    device.Mon.BindToObject(null, null, ref iid, out source);
                    break;
                }
            }
            return (IBaseFilter)source;
        }
        #endregion
    }
}


2. Cluster class

// File: Cluster.cs
// Author: Sungho Cho

using System;
using System.Collections.Generic;
using System.Text;
using System.Drawing;

namespace UniversityApp
{
    class Cluster
    {
        private Point center;
        public Point Center
        {
            get { return center; }
        }
        private int width;
        public int Width
        {
            get { return width; }
        }
        private int height;
        public int Height
        {
            get { return height; }
        }
        private int numPixel;
        public int NumPixel
        {
            get { return numPixel; }
        }
        private byte clusterNum;
        public byte ClusterNum
        {
            get { return clusterNum; }
        }

        public Cluster(Point center, int width, int height,
            int numPixel, byte clusterNum)
        {
            this.center = center;
            this.width = width;
            this.height = height;
            this.numPixel = numPixel;
            this.clusterNum = clusterNum;
        }
    }
}


3. Eigenface class

// File: Eigenface.cs
// Author: Sungho Cho

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace UniversityApp
{
    class Eigenface : IComparable, ICloneable
    {
        public double[] EF;        // Eigenface vector
        public int size;           // size of an image (number of pixels)
        public double EigenValue;  // Eigenvalue

        public Eigenface() { }

        public Eigenface(int sz)
        {
            EF = new double[sz];
            size = sz;
        }

        #region IComparable Members

        int IComparable.CompareTo(object obj)
        {
            Eigenface ef = (Eigenface)obj;
            return ef.EigenValue.CompareTo(this.EigenValue);
        }

        #endregion

        #region ICloneable Members

        object ICloneable.Clone()
        {
            Eigenface clone = new Eigenface();
            if (this.EF != null)
                clone.EF = (double[])this.EF.Clone();
            clone.EigenValue = this.EigenValue;
            clone.size = this.size;
            return clone;
        }

        #endregion
    }
}


4. Evaluation class

// File: Evaluation.cs
// Author: Sungho Cho, Kazutaka Nakayama

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;

namespace UniversityApp
{
    class Evaluation : IComparable<Evaluation>
    {
        private double eval;
        public double Eval
        {
            get { return eval; }
        }
        private Point p1;
        public Point P1
        {
            get { return p1; }
            set { p1 = value; }
        }
        private Point p2;
        public Point P2
        {
            get { return p2; }
            set { p2 = value; }
        }
        private double angle;
        public double Angle
        {
            get { return angle; }
        }
        private double clusterDist;
        public double ClusterDist
        {
            get { return clusterDist; }
        }
        private double dist;
        public double Dist
        {
            get { return dist; }
        }
        private double angleSign;
        public double AngleSign
        {
            get { return angleSign; }
        }

        public Evaluation(double eval, Point p1, Point p2, double angle,
            double clusterDist, double dist, double angleSign)
        {
            this