Joint and implicit registration for face recognition

Dr. Peng Li and Dr. Simon J.D. Prince
Department of Computer Science, University College London
{p.li, s.prince}@cs.ucl.ac.uk

14:00-15:00 Tuesday, 23 June 2009

The face recognition pipeline

Matching

Probe

Gallery

Keypoint
registration

Result

Detected face

Global approaches


Eigenfaces [Turk 1991]


Fisherfaces [Belhumeur 1997]

Local approaches


AAM [Cootes 2001]


ASM [Mahoor 2006]


EBGM [Wiskott 1997]

Distance
-
based approaches


Fisherfaces [Belhumeur1997]


Laplacianfaces [He2005]


KLDA [Yang2005]

Probabilistic approaches


Bayesian [Moghaddam 2000]


PLDA [Ioffe 2006, Prince 2007]

Feature
extraction

Face
recognition

Face
detection

Original
Image

The face recognition pipeline

Extract a Gabor jet around each keypoint
Generative probabilistic model
Independent term for each keypoint

[Pipeline as above: original image → face detection → keypoint registration → feature extraction → face recognition (probe matched against gallery) → result]

Hypothesis 1

H1: We can use the same probabilistic model for registration and recognition.

[Figure: a single probabilistic model drives both the keypoint registration and the face recognition stages of the pipeline.]

Hypothesis 2: Joint Registration

H2: We can use the gallery image to help find keypoints in the probe image.

[Figure: keypoints in the probe are located using the particular eye from the gallery image rather than a generic eye template.]

Hypothesis 3: Implicit Registration

H3: We do not need to make hard estimates of keypoint positions.

[Figure: the keypoint position t_p in the probe is treated as a hidden variable with a posterior distribution rather than a single hard estimate.]

Outline

Background
Hypotheses
Probabilistic face recognition
Frontal face recognition
  H1: Same model for registration and recognition
  H2: Joint registration
  H3: Implicit registration
Cross-pose face recognition
Conclusion

Probabilistic linear discriminant analysis (Prince & Elder, ICCV 2007)

x_ij = μ + F h_i + G w_ij + ε_ij

x_ij: image j of identity i (i indexes identity, j indexes image)
μ: mean
F h_i: signal, between-individual variation (F h_i = h_1 F(:,1) + h_2 F(:,2) + h_3 F(:,3) + …)
G w_ij: noise, within-individual variation (G w_ij = w_1j G(:,1) + w_2j G(:,2) + w_3j G(:,3) + …)
ε_ij: independent per-pixel Gaussian noise
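As an aside not in the original slides, the generative process above can be written out as a short sampling sketch. The sizes and names below (D, d_F, d_G, sample_same_identity) are illustrative assumptions, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the talk).
D = 50     # observed feature dimension, e.g. the length of one Gabor jet
d_F = 8    # between-individual (signal) subspace dimension
d_G = 8    # within-individual (noise) subspace dimension

# Model parameters: mean, signal basis F, noise basis G, pixel-noise std.
mu = rng.normal(size=D)
F = rng.normal(size=(D, d_F))   # columns span between-individual variation
G = rng.normal(size=(D, d_G))   # columns span within-individual variation
sigma = 0.1                     # std of the independent per-pixel Gaussian noise

def sample_same_identity(n_images):
    """Sample n_images of one identity: x_ij = mu + F h_i + G w_ij + eps_ij."""
    h_i = rng.normal(size=d_F)              # identity variable, shared by all images of person i
    images = []
    for _ in range(n_images):
        w_ij = rng.normal(size=d_G)         # per-image within-individual variable
        eps_ij = sigma * rng.normal(size=D) # per-pixel Gaussian noise
        images.append(mu + F @ h_i + G @ w_ij + eps_ij)
    return np.stack(images)

x = sample_same_identity(3)  # three images of the same synthetic identity
print(x.shape)               # (3, 50)
```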


Face recognition by model selection

Match model M_s: probe and gallery images x_p and x_g are generated from a single shared identity variable h, with separate within-individual variables w_p and w_g.
No-match model M_d: x_p and x_g have independent identity variables h_p and h_g, with separate w_p and w_g.

Observed variables: x_p (probe image), x_g (gallery image).
Hidden variables: the identity and within-individual variables h and w.

Choose the MAP model by comparing Pr(x_p, x_g | M_s) and Pr(x_p, x_g | M_d).
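A minimal sketch of this model comparison, assuming PLDA parameters mu, F, G and sigma of the kind defined in the previous sketch: once the hidden variables are marginalized out, the joint of gallery and probe is Gaussian under either model, so the two likelihoods can be evaluated and compared directly.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_lik_match(x_g, x_p, mu, F, G, sigma):
    """log Pr(x_g, x_p | M_s): gallery and probe share one identity variable h."""
    D = mu.size
    within = G @ G.T + (sigma ** 2) * np.eye(D)  # within-individual + pixel noise
    between = F @ F.T                            # between-individual, shared via h
    cov = np.block([[between + within, between],
                    [between, between + within]])
    mean = np.concatenate([mu, mu])
    return multivariate_normal.logpdf(np.concatenate([x_g, x_p]), mean, cov)

def log_lik_no_match(x_g, x_p, mu, F, G, sigma):
    """log Pr(x_g, x_p | M_d): independent identity variables h_g and h_p."""
    D = mu.size
    marginal_cov = F @ F.T + G @ G.T + (sigma ** 2) * np.eye(D)
    return (multivariate_normal.logpdf(x_g, mu, marginal_cov)
            + multivariate_normal.logpdf(x_p, mu, marginal_cov))

def same_identity(x_g, x_p, mu, F, G, sigma):
    """MAP model selection with equal priors: declare a match iff M_s wins."""
    return (log_lik_match(x_g, x_p, mu, F, G, sigma)
            >= log_lik_no_match(x_g, x_p, mu, F, G, sigma))
```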

Methodology

Four registration strategies are compared (see the sketch after this list):

1: Find keypoints in the probe image alone by MAP
2: Joint registration by MAP (using the gallery image as well)
3: Implicit registration using the probe image alone (posterior over the keypoint position t_p)
4: Joint and implicit registration

[Figure: graphical models for the four strategies; t_p denotes the keypoint position in the probe, handled either as a MAP estimate or via its full posterior.]
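The sketch below is not from the slides; it contrasts strategies 2 and 4 under the same assumptions as the earlier sketches (log_lik_match as defined above). patch_feature is a hypothetical stand-in for the Gabor jets used in the talk, and its output dimension must match the PLDA parameters.

```python
import numpy as np
from scipy.special import logsumexp

def patch_feature(image, t, half=3):
    """Flatten the (2*half+1) x (2*half+1) patch around t = (row, col)."""
    r, c = t
    return image[r - half:r + half + 1, c - half:c + half + 1].ravel()

def match_score_map(probe_img, x_g, candidates, mu, F, G, sigma):
    """Strategy 2 (joint, MAP): score each candidate probe keypoint position
    against the gallery feature x_g and commit to the single best one."""
    scores = [log_lik_match(x_g, patch_feature(probe_img, t), mu, F, G, sigma)
              for t in candidates]
    return max(scores)

def match_score_implicit(probe_img, x_g, candidates, mu, F, G, sigma):
    """Strategy 4 (joint, implicit): marginalize over the keypoint position t_p,
    i.e. average Pr(x_g, x_p(t) | M_s) over a uniform prior on the candidate
    grid; logsumexp performs the sum in log space."""
    scores = [log_lik_match(x_g, patch_feature(probe_img, t), mu, F, G, sigma)
              for t in candidates]
    return logsumexp(scores) - np.log(len(candidates))
```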

Experimental Setting: XM2VTS Database

Dataset
  Training: first 195 identities
  Test: last 100 identities
    Gallery data: 1st image of 1st session
    Probe data: 1st image of 4th session

Feature extraction: Gabor filter at all possible locations of 13 keypoints (a sketch of jet extraction follows)
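A minimal sketch of Gabor-jet extraction at given keypoint positions, using a hand-rolled real-valued Gabor bank; the kernel sizes, wavelengths and keypoint coordinates below are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter with given size, wavelength and orientation."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = xs * np.cos(theta) + ys * np.sin(theta)   # rotate coordinates
    y_t = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_jet(image, keypoint, kernels):
    """Responses of all Gabor kernels at one keypoint (row, col): one 'jet'."""
    r, c = keypoint
    jet = []
    for k in kernels:
        half = k.shape[0] // 2
        patch = image[r - half:r + half + 1, c - half:c + half + 1]
        jet.append(np.sum(patch * k))               # filter response at the keypoint
    return np.array(jet)

# Example bank: a few scales and orientations (illustrative values).
kernels = [gabor_kernel(size=21, wavelength=w, theta=t, sigma=w / 2)
           for w in (4, 8, 16)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]

image = np.random.rand(128, 128)            # stand-in for a detected face
keypoints = [(40, 45), (40, 83), (70, 64)]  # e.g. eyes and nose tip (illustrative)
features = np.stack([gabor_jet(image, kp, kernels) for kp in keypoints])
print(features.shape)                        # (3, 12): one jet per keypoint
```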

Experiment 1: finding keypoints using recognition model in probe alone

Recognition: first-match identification rate (higher is better).
Registration: average error over all keypoints (lower is better).

[Figure: correct identification rate vs. subspace dimension, using keypoints labeled manually vs. keypoints found by MAP; and normalized Euclidean distance vs. subspace dimension, comparing keypoints found by MAP with manual labelling by another subject.]

Experiment 2: joint registration

The gallery image helps find keypoints in the probe image.
Localization errors are close to human labelling.

[Figure: correct identification rate vs. subspace dimension, using the probe image alone vs. using both gallery and probe images; and normalized Euclidean distance vs. subspace dimension for probe alone, probe plus gallery, and manual labelling by another subject.]
Experiment 3: implicit registration

Marginalizing over the keypoint position is better than using the MAP keypoint position.

[Figure: correct identification rate vs. subspace dimension, MAP vs. marginalization.]

Experiment 4: joint and implicit registration

Joint and implicit registration performs best.
Comparable to using manually labeled keypoints.

[Figure: correct identification rate vs. subspace dimension for: keypoints labeled manually, both images by marginalization, probe image by marginalization, both images by MAP, probe image by MAP.]
Cross-pose face recognition using tied PLDA model (Prince & Elder, 2007)

Key idea: separate within-individual and between-individual variance at each pose.

Data: XM2VTS database, with 90° pose difference.
Gallery: frontal face. Probe: profile face.
Feature extraction: Gabor features for 6 keypoints.

x_ijk = μ_k + F_k h_i + G_k w_ijk + ε_ijk

k: pose index (k = 1 frontal image, k = 2 profile image). The mean μ_k and the bases F_k, G_k are pose-specific, while the identity variable h_i is shared across poses, which ties the frontal gallery and profile probe together.
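A minimal sketch of the tied model's generative process, again with illustrative sizes: each pose k has its own mean and bases, but the identity variable h_i is shared, which is what links a frontal gallery image to a profile probe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the talk).
D, d_F, d_G = 50, 8, 8
K = 2                                # pose index: k = 0 frontal, k = 1 profile

# Tied PLDA: pose-specific mean, signal and noise bases,
# but the identity variable h_i is shared across poses.
mu = [rng.normal(size=D) for _ in range(K)]
F = [rng.normal(size=(D, d_F)) for _ in range(K)]
G = [rng.normal(size=(D, d_G)) for _ in range(K)]
sigma = 0.1

def sample_image(h_i, k):
    """x_ijk = mu_k + F_k h_i + G_k w_ijk + eps_ijk for identity h_i at pose k."""
    w = rng.normal(size=d_G)
    eps = sigma * rng.normal(size=D)
    return mu[k] + F[k] @ h_i + G[k] @ w + eps

h = rng.normal(size=d_F)             # one identity...
frontal = sample_image(h, 0)         # ...seen at the frontal pose (gallery)
profile = sample_image(h, 1)         # ...and at the profile pose (probe)
```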

Experiment 5: Cross-pose face recognition and registration

Similar results to frontal face recognition and registration.
Comparable to using manually labeled keypoints.

[Figure: correct identification rate vs. subspace dimension for: keypoints labeled manually, both images by marginalization, probe image by marginalization, both images by MAP, probe image by MAP; and normalized Euclidean distance vs. subspace dimension for probe alone, probe plus gallery, and manual labelling by another subject.]
Concluding Remarks

Three hypotheses:
  Same model for both face registration and recognition.
  Joint registration for face recognition.
  Implicit registration for face recognition.

All work well for both frontal and cross-pose face registration and recognition.