Human Gait Recognition by Using Motion Silhouette Contour Templates and Static Silhouette Templates


Available ONLINE: www.visualsoftindia.com/vsrd/vsrdindex.html

VSRD-TNTJ, Vol. I (4), 2010, 207-221


____________________________
1 Associate Professor, Dean of Academics, HOD, Computer Science & Information Technology Department, Naraina Vidya Peeth Engineering & Management Institute, Kanpur, Uttar Pradesh, INDIA.
2 Vice Chancellor, Uttarakhand Open University, Haldwani, Uttarakhand, INDIA.
3 Teacher Fellow, Computer Science & Engineering Department, Harcourt Butler Technological Institute, Kanpur, Uttar Pradesh, INDIA.
*Correspondence: nirvikar_katiyar@rediffmail.com

RESEARCH COMMUNICATION

Human Gait Recognition by Using Motion Silhouette Contour Templates and Static Silhouette Templates

1 Nirvikar Katiyar*, 2 Vinay Kr. Pathak and 3 Rohit Katiyar

ABSTRACT

In this paper, we propose a gait recognition algorithm that fuses motion and static spatio-temporal templates of sequences of silhouette images: the motion silhouette contour templates (MSCTs) and the static silhouette templates (SSTs). MSCTs and SSTs capture the motion and static characteristics of gait, and are computed directly from the silhouette sequence. The performance of the proposed algorithm is evaluated experimentally using the SOTON data set and the USF data set, and compared with other published work on these two data sets. Experimental results show that the proposed templates are efficient for human identification in indoor and outdoor environments. The proposed algorithm has a recognition rate of around 85% on the SOTON data set, and around 80% on the intrinsic difference group (probes A-C) of the USF data set.

Keywords : Gait Recognition; Motion Silhouette Contour Templates; Static Silhouette Templates; Biometrics.

INTRODUCTION

Biometrics has received substantial attention from researchers. Biometrics is the recognition of a person according to physiological or behavioral characteristics. Gait, the manner of walking, is a biometric that differs from traditional biometrics. Early medical studies showed that individual gaits are unique, varying from person to person, and are difficult to disguise [1]. In addition, it has been shown that gaits are so characteristic that we recognize friends by their gait [2] and that a gait can even reveal an individual's sex [3].
Unlike other biometrics such as fingerprints and palm-prints, gait recognition requires no contact with a capture device, since a gait can be captured at a distance as a low-resolution image sequence. Gait recognition is basically divided into two types: model-based and model-free recognition [4]. In model-based recognition, researchers use information gathered from the human body, especially from the joints, to construct a model for recognition. In general, the model-based approach is view and scale invariant. Gathering this gait information requires high-quality gait sequences, so some model-based recognition systems require multiple cameras to collect the information. One of the classical model-based gait recognition experiments was undertaken by Johansson [5], who attached light bulbs to a human and then used the movement of the light bulbs to capture the subject's motion. In Ref. [6], the body contours are computed in each frame of the walking sequence and a stick model is then created from the body contours for recognition. Johnson and Bobick proposed a multi-view gait recognition algorithm which used static body parameters for recognition [7]; these parameters are the height of the silhouette, the distance between the head and the pelvis, the distance between the left and right foot, and the maximum distance between the pelvis and the feet. Lee and Grimson [8] proposed a similar approach for recognition. They used the silhouette images to compute seven feature vectors, such as the aspect ratio and centroid of the silhouette, and also proposed gait features based on spectral components [8]. Wagg and Nixon [9] presented an automated model-based method for gait extraction based on the mean shape and motion information of gait.


At present, most gait recognition research uses model-free (or holistic) recognition, which means using motion information directly without reconstructing a model. Model-free approaches usually use sequences of binary silhouettes, extracting the silhouettes of moving objects from a video using segmentation techniques such as background subtraction. Murase and Sakai [10] proposed a parametric eigenspace representation for moving object recognition. The eigenspace technique was originally used in face recognition [11], but Murase and Sakai [10] applied it to gait recognition and lip reading, projecting the extracted silhouette images onto the eigenspace using principal component analysis (PCA). The sequence of movement forms a trajectory in the eigenspace, a parametric eigenspace representation: the input image sequence is preprocessed to form a sequence of binary silhouettes, this binary sequence is projected to form a trajectory in the eigenspace, and the smallest distance between the input trajectory and a reference trajectory gives the best match. Huang et al. [12] applied a similar technique for gait recognition using linear discriminant analysis (LDA), also known as canonical analysis, which has the advantage of discriminating better between different classes. Three different types of temporal templates were proposed, all generated from optical flow; canonical analysis allows the temporal template sequence to be projected to form a manifold in the subspace.

Foster et al. [13] presented an area-based metric called gait masks. They masked each silhouette in an image sequence and then measured the unmasked area; differences in this information form a time-varying signal which can be used as a signature for automatic gait recognition. Hayfron-Acquah et al. [14] proposed a gait recognition method that uses a generalized symmetry operator exploiting the symmetry of human motion, applying the operator to the edge of the silhouette image to generate a symmetry map; a gait signature is then generated by applying a fast Fourier transform (FFT) to the mean of the symmetry map. BenAbdelkader et al. [15] had a similar idea and proposed a gait representation called the image self-similarity plot; these plots are then projected onto an eigenspace using PCA. Wang and Tan [16] proposed a transformation that reduces the dimensionality of the input feature space by unwrapping the 2D silhouette image into a 1D distance signal; the sequence of silhouette images is thus transformed into a time-varying distance signal, to which the eigenspace transformation is then applied. Liu and Sarkar [17] proposed another representation for gait recognition called the averaged silhouette, in which the silhouette sequence is transformed into a single image representation and Euclidean distance is adopted as the similarity measure.

In this paper, we propose a fast, robust and innovative gait recognition algorithm based on motion and static spatio-temporal templates. We propose to recognize gaits using two gait feature templates in combination: motion silhouette contour templates (MSCTs) and static silhouette templates (SSTs). MSCTs and SSTs embed critical spatial and temporal information, and their use reduces the computation cost and the size of the database. The efficacy of the proposed method in indoor and outdoor environments has been demonstrated on the SOTON [18] and USF [19] data sets. The rest of this paper is organized as follows: Section 2 describes the MSCT and SST in detail, Section 3 provides details of the proposed recognition algorithm, Section 4 presents the experimental results and Section 5 offers our conclusions.

FEATURE EXTRACTION

In this section, we describe the details of the motion spatio-temporal template, the MSCT, and the static spatio-temporal template, the SST.

Motivation

The main motivation of the proposed templates is to construct discriminative representations of the motion and static characteristics of the walking sequence for recognition. As mentioned in the previous section, there are different approaches to gait recognition, such as the holistic and model-based approaches. One common approach is to extract features from each frame of a walking sequence and then generate a sequence of features [10,12,13,16]. Unlike these methods, in this paper we propose a method which simply extracts two feature templates, an exemplar MSCT and an exemplar SST, from a sequence of silhouette images for recognition. We consider the motion characteristic of gait to be the parts of the body in motion during walking, such as the hands and legs; this motion characteristic is captured using the contour of the silhouette images. We take the static characteristic of gait to be the torso, which remains steady during walking; this static characteristic is captured using the silhouette images directly. Compared with existing research, instead of creating only one representation for recognition, two representations are constructed in our proposed algorithm. The proposed templates for gait recognition are simple and computationally efficient: they can be computed without generating a sequence of features or performing any transformations, and unlike the model-based approach, it is not necessary to construct any model for recognition. The template construction process is simple and the construction time is short, so the method is suitable for real-time gait recognition. The following section describes how to construct the exemplar MSCT and exemplar SST.

Motion Silhouette Contour Template (MSCT) and Static Silhouette Template (SST)

The basis of these templates is a sequence of silhouette images. An MSCT contains information about the movement characteristics of a human gait and an SST contains information about its static characteristics; the two templates are used together for gait recognition. First, the silhouettes are extracted and normalized to a fixed size. Then, the gait period is estimated from the silhouette sequence, and the sequence is divided into several cycles according to the estimated gait period. In each cycle, two templates, an MSCT and an SST, are computed, so there are a number of MSCTs and SSTs for each silhouette sequence. For ease of computation, an exemplar MSCT and an exemplar SST are obtained by averaging the MSCTs and SSTs of each sequence. Fig. 1 shows a flow diagram of the proposed gait recognition algorithm.

Preprocessing

In our proposed algorithm, silhouettes are the basis of gait recognition. The silhouettes are extracted by simple background subtraction and thresholding [20]. The binarization process renders the image in black and white: the background is black and the foreground is white. The bounding box of the silhouette in each frame is computed, the silhouette image is extracted according to the size of the bounding box, and the extracted image is resized to a fixed size (128 x 88 pixels). The purpose of this normalization is to eliminate the scaling effect. Fig. 2 shows examples of the normalized silhouette images.
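To make the preprocessing concrete, the sketch below illustrates background subtraction, thresholding, bounding-box cropping and resizing to 128 x 88 pixels. It is only a minimal sketch: the paper's experiments were implemented in Matlab, and the OpenCV/NumPy calls, the threshold value and the function name here are our own assumptions rather than the authors' code.

```python
# Minimal sketch of the preprocessing described above (assumed OpenCV/NumPy;
# the threshold value is illustrative, not taken from the paper).
import cv2
import numpy as np

def extract_silhouette(frame_gray, background_gray, threshold=30):
    """Return a binary silhouette normalized to 128 x 88 pixels."""
    # Simple background subtraction followed by thresholding [20]
    diff = cv2.absdiff(frame_gray, background_gray)
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # Bounding box of the white (foreground) pixels
    ys, xs = np.nonzero(binary)
    if xs.size == 0:                       # no foreground detected in this frame
        return np.zeros((128, 88), dtype=np.uint8)
    cropped = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Resize to 128 (height) x 88 (width) to eliminate the scaling effect
    return cv2.resize(cropped, (88, 128), interpolation=cv2.INTER_NEAREST)
```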




Gait Period Estimation

Human walking repeats its motion at a stable frequency. Since our proposed gait feature templates depend on the gait period, we must estimate the number of frames in each walking cycle. A single walking cycle can be regarded as the period in which a person moves from the mid-stance position (both legs closest together) to a double support position (both legs furthest apart), back to the mid-stance position, to the double support position again, and finally back to the mid-stance position. Fig. 3 shows samples of silhouette images in one cycle. The gait period P_gait can be estimated by counting the number of foreground pixels in each silhouette image [19]: in the mid-stance position the silhouette contains the smallest number of foreground pixels, and in the double support position the greatest number. However, because changes over the gait cycle are most obvious in the lower part of the body, the gait period estimation uses only the lower half of the silhouette image, with the gait period taken as the median of the distances between consecutive minima.
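A minimal sketch of this estimate is given below, assuming a list of normalized binary silhouettes as input. The three-frame smoothing is our own addition to cope with segmentation noise and is not described in the paper.

```python
# Sketch of the gait period estimation: count foreground pixels in the lower
# half of each silhouette, find the local minima (mid-stance frames) and take
# the median gap between consecutive minima as P_gait.
import numpy as np

def estimate_gait_period(silhouettes):
    """silhouettes: list of binary (128 x 88) arrays, foreground non-zero."""
    counts = np.array([np.count_nonzero(s[s.shape[0] // 2:, :])
                       for s in silhouettes], dtype=float)
    smoothed = np.convolve(counts, np.ones(3) / 3.0, mode='same')  # assumed smoothing

    # Local minima of the lower-half foreground count (mid-stance positions)
    minima = [t for t in range(1, len(smoothed) - 1)
              if smoothed[t] <= smoothed[t - 1] and smoothed[t] < smoothed[t + 1]]
    if len(minima) < 2:
        return None                        # too few frames to estimate P_gait
    return int(np.median(np.diff(minima)))
```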

Generating Exemplar MSCTs

An MSCT contains the motion information of the human gait and is constructed in three steps. First, the silhouette images are extracted and normalized to a fixed size of 128 x 88 pixels. Then, the sequence of silhouettes is used to estimate the gait period P_gait. Finally, the silhouette image sequence is divided into several cycles according to the estimated gait period P_gait, and the MSCTs are created from the image sequences and the gait period P_gait. The exemplar MSCT is the average of these MSCTs. The MSCT is generated from a sequence of silhouette contours. The contour of the silhouette CS_i is obtained by subtracting the eroded silhouette ES_i from the original silhouette S_i, as in Eq. (1); the eroded silhouette ES_i is computed by the erosion operation, as in Eq. (2):

CS_i = S_i - ES_i    (1)

ES_i = S_i ⊖ S    (2)

where S_i is the original silhouette to be eroded, ES_i is the eroded silhouette, CS_i is the silhouette contour, S is the structuring element and ⊖ is the erosion operator; (S_i)_s denotes the translation of the silhouette image S_i by s.

The structuring element S is a set of coordinate points, with foreground and background pixels represented by 1's and 0's respectively. Fig. 4 shows the structuring element adopted for the erosion operation. The erosion operator superimposes the structuring element on the input image: if every nonzero element of the structuring element is contained in the input image, the output pixel is 1; otherwise, the output pixel is 0 [21]. Fig. 5 shows an original silhouette, the eroded silhouette and the silhouette contour computed by the erosion operation.
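The contour computation of Eqs. (1) and (2) can be sketched as follows; a 3 x 3 all-ones structuring element is assumed here in place of the element of Fig. 4, which is not reproduced in this text.

```python
# Sketch of Eqs. (1)-(2): the silhouette contour CS_i is the original
# silhouette S_i minus its erosion ES_i (3 x 3 structuring element assumed).
import numpy as np
from scipy.ndimage import binary_erosion

def silhouette_contour(silhouette):
    s = silhouette > 0                                       # S_i as a boolean mask
    eroded = binary_erosion(s, structure=np.ones((3, 3), dtype=bool))  # ES_i
    contour = s & ~eroded                                    # CS_i = S_i - ES_i
    return contour.astype(np.uint8) * 255
```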




The sequence of motion silhouette contour templates is then created from the silhouette contour images by the accumulation rule of algorithm (3), where i is the cycle number in the gait sequence, α is the intensity decay parameter and MSCT_i is the i-th motion silhouette contour template. The intensity decay parameter α is computed from the estimated gait period P_gait; using a dynamic decay value rather than a fixed intensity decay parameter eliminates the effect of walking speed. Fig. 6 shows some example MSCTs. The number of MSCTs in a sequence depends on the gait period and the number of frames in the walking sequence, and the fact that different subjects may produce different numbers of MSCTs may increase the computational complexity. To reduce this complexity, an exemplar MSCT is obtained by averaging the MSCT_i in each walking sequence:

exemplar MSCT = (1/n) Σ_{i=1..n} MSCT_i

where n is the number of MSCTs in the sequence. Fig. 7 shows some examples of exemplar MSCTs.
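Since algorithm (3) and the decay formula are not reproduced in this text, the sketch below follows the usual motion-history-template convention: a pixel lying on the current contour is set to 255 and every other pixel decays by α per frame, with α = 255 / P_gait. This reading is consistent with the description above (a dynamic decay tied to the gait period removes the walking-speed effect) but should be treated as our assumption, not the authors' verbatim equations.

```python
# Hedged sketch of the MSCT construction and the exemplar MSCT average.
import numpy as np

def build_msct(contours, p_gait):
    """contours: list of binary contour images CS_t for one gait cycle."""
    alpha = 255.0 / p_gait                      # assumed dynamic decay value
    msct = np.zeros(contours[0].shape, dtype=float)
    for cs in contours:
        # Pixels on the current contour are refreshed to 255, others decay
        msct = np.where(cs > 0, 255.0, np.maximum(0.0, msct - alpha))
    return msct

def exemplar_msct(cycle_contours, p_gait):
    """cycle_contours: one list of contour images per gait cycle."""
    mscts = [build_msct(c, p_gait) for c in cycle_contours]
    return np.mean(mscts, axis=0)               # exemplar MSCT = (1/n) * sum of MSCT_i
```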

A great advantage of using MSCTs is that the contour images from which they are formed are an order of magnitude smaller than silhouette images and are thus more computationally efficient. However, if the silhouettes are extracted at low quality, an MSCT may embed irrelevant information which affects the recognition rate. In the following section we describe how this error can be reduced by using the SST.

Generating Exemplar SSTs

SSTs are used in our recognition algorithm in conjunction with MSCTs as a way of reducing the recognition error. An SST is generated in much the same way as an MSCT except that it uses the entire silhouette image rather than the contour; in the corresponding accumulation rule, i is the cycle number in the gait sequence and SST_i is the i-th static silhouette template. Fig. 8 shows examples of SSTs. As in the generation of the MSCTs, the number of SSTs SST_i in the sequence depends on the gait period and the number of frames in the sequence. We further obtain the exemplar SST by averaging the SST_i in each walking sequence:

exemplar SST = (1/n) Σ_{i=1..n} SST_i

where n is the number of cycles in the sequence. Fig. 9 shows some examples of exemplar SSTs.
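Mirroring the MSCT sketch above, an SST can be accumulated from the full silhouettes of a cycle and the exemplar SST obtained by averaging. Because the exact SST update rule is not reproduced in this text, reusing the same decay rule as the MSCT is an assumption based on the statement that the SST is generated "in much the same way".

```python
# Hedged sketch of the SST construction and the exemplar SST average.
import numpy as np

def build_sst(silhouettes, p_gait):
    """silhouettes: list of binary silhouette images S_t for one gait cycle."""
    alpha = 255.0 / p_gait                      # same assumed decay as the MSCT
    sst = np.zeros(silhouettes[0].shape, dtype=float)
    for s in silhouettes:
        sst = np.where(s > 0, 255.0, np.maximum(0.0, sst - alpha))
    return sst

def exemplar_sst(cycle_silhouettes, p_gait):
    ssts = [build_sst(c, p_gait) for c in cycle_silhouettes]
    return np.mean(ssts, axis=0)                # exemplar SST = (1/n) * sum of SST_i
```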

RECOGNITION

The similarity score represents the level of similarity between the testing data and the training data. In this section, we explain the details of the similarity measures in our proposed algorithm. For ease of understanding, gallery means training and probe means testing. Suppose there are N_gallery subjects in the gallery data set and N_probe subjects in the probe data set, and each subject contributes a walking sequence. A probe sequence is captured and its similarity score with each training sequence is then measured.


Let u be a probe sequence and v a gallery sequence. A probe sequence Seq_u = {Seq_u(1), Seq_u(2), ..., Seq_u(P)} from the probe (testing) data set and a gallery sequence Seq_v = {Seq_v(1), Seq_v(2), ..., Seq_v(Q)} from the gallery (training) data set are used to calculate the similarity score, where P and Q are, respectively, the numbers of frames in the probe and gallery sequences. We calculate the gait period of each subject in the probe and gallery sequences and follow the procedures described in Section 2 to create the exemplar MSCT and SST for the gallery and probe sequences. After that, each subject has two templates, MSCT_u and SST_u, for the probe sequence and another two templates, MSCT_v and SST_v, for the gallery sequence.

Our algorithm makes use of two similarity scores. To measure the similarity between the gallery and probe MSCTs, we calculate the similarity score SimScore_MSCT; to measure the similarity between the gallery and probe SSTs, we calculate the similarity score SimScore_SST. These similarity scores are calculated using the Euclidean distance: SimScore_MSCT is computed by Eq. (8) and SimScore_SST by Eq. (9):

SimScore_MSCT(MSCT_u, MSCT_v) = || MSCT_u - MSCT_v ||    (8)

SimScore_SST(SST_u, SST_v) = || SST_u - SST_v ||    (9)


where MSCT_u and SST_u are the exemplar MSCT and exemplar SST of the probe sequence u, and MSCT_v and SST_v are the exemplar MSCT and exemplar SST of the gallery sequence v. The mean similarity scores of the exemplar MSCTs and of the exemplar SSTs are computed by Eqs. (10) and (11), respectively, by averaging the corresponding similarity scores over the gallery and probe sets, where N_gallery is the number of subjects in the gallery set and N_probe is the number of subjects in the probe set. The final similarity score SimScore between two subjects is then calculated from SimScore_MSCT and SimScore_SST and their mean scores, as in Eq. (12).


In our proposed recognition algorithm, the nearest neighbor (NN) classifier is adopted for classification. For a testing sample u, we calculate the final similarity score SimScore with each subject in the gallery data set by Eq. (12), giving N_gallery final similarity scores. The sample u is classified as subject v when the final score SimScore is the minimum obtained over all training patterns. Thus, the testing sample u is classified as the subject v if



SimScore(u, v) = min { SimScore(u, i) }, where i = 1, 2, ..., N_gallery.
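A sketch of the matching stage is given below. The two Euclidean scores are fused after normalizing each by its mean over the gallery, and the probe is assigned to the gallery subject with the smallest fused score. Since Eqs. (10)-(12) are not reproduced in this text, this particular normalization and fusion rule is our reading of the description above and should be treated as an assumption.

```python
# Hedged sketch of the similarity computation and nearest-neighbor decision.
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a.ravel() - b.ravel())

def classify(probe, gallery):
    """probe: (msct_u, sst_u); gallery: list of (subject_id, msct_v, sst_v)."""
    msct_u, sst_u = probe
    d_msct = np.array([euclidean(msct_u, m) for _, m, _ in gallery])   # Eq. (8)
    d_sst = np.array([euclidean(sst_u, s) for _, _, s in gallery])     # Eq. (9)

    # Normalize each score by its mean so the two distances are comparable
    fused = d_msct / d_msct.mean() + d_sst / d_sst.mean()

    best = int(np.argmin(fused))                # nearest-neighbor decision
    return gallery[best][0]
```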


EXPERIMENTS

In this section, we show the performance of the proposed gait recognition algorithm on two data sets, the SOTON data set [18] and the USF data set [19]. The SOTON data set was captured in an indoor environment and the USF data set in an outdoor environment. Fig. 10 shows some silhouette images from these two data sets. For the evaluation, we adopted the FERET scheme [1] and measured the identification rate and the verification rate using cumulative match characteristics (CMCs). All experiments were implemented in Matlab and tested on a P4 2.26 GHz computer with 512 MB memory.

Recognition on the SOTON data set

For the SOTON data set, we had to make a number of adjustments. Since there were insufficient frames in each walking sequence of the SOTON data set to estimate the gait period, we fixed the gait period at 30 frames when generating the proposed feature templates. There were varying numbers of walking sequences for each subject, so we constructed three further data sets: data sets A, B, and C. In data set A, 50% of the image sequences of each subject were selected for training and the remainder were used for testing; in data set B, 75% were selected for training and the remainder for testing; and in data set C, 90% were selected for training and the remainder for testing.

The proposed algorithm was tested on its ability to recognize using an MSCT and an SST together, an MSCT alone, and an SST alone; the NN classifier was used in all tests. Table 1 shows that the algorithm achieved its best result with the combined templates, with a recognition rate above 86% for all three subsets. Fig. 11 shows the recognition rates for the three data sets plotted as CMC curves. The algorithm performs well with both the MSCT and the SST, but the MSCT is the better of the two.

Recognition on the USF data set

In this experiment, the proposed algorithm was evaluated on the outdoor data set, the USF data set. The USF version 2.1 data set contains one gallery set and twelve probe sets (A-L). This data set offers experimental challenges in that it contains a number of covariates such as shoe type, surface type and viewing angle. We compared our proposed algorithm with the baseline algorithm [19] and the UMD Hidden Markov Model (HMM) algorithm [22].




To ease our explanation, we placed the probe sets of the USF version 2.1 data set under three group headings: (I) intrinsic difference, (II) surface difference, and (III) extrinsic difference. Table 2 provides more detailed information about these groupings, and the recognition performance is shown in Table 3.

The experimental results show that the proposed algorithm is only slightly worse than the baseline algorithm in group (II). Compared with the baseline algorithm, the proposed algorithm has a better recognition rate for groups (I) and (III). The rank 1 performance of the proposed algorithm in group (I) is slightly worse than that of the UMD HMM algorithm, by 2%. However, there is a gap between the performance of the proposed algorithm and the UMD HMM approach in groups (II) and (III).


This gives rise to a number of interesting observations. It would seem that the proposed templates are insensitive to differences in viewing angle and shoe type. In group (I), the recognition rate of the baseline algorithm is around 66%; compared with the baseline algorithm, our proposed algorithm gives an average improvement of 14% in recognition rate in group (I). The performance of the UMD HMM algorithm and the proposed algorithm is nearly the same: their rank 1 recognition rates are 82% and 80%, respectively. In group (III) (probes H-L), the average identification rate of the proposed algorithm is higher than that of the baseline algorithm by 2%, with a significantly high recognition rate on probes K and L. Although there is a gap between our proposed algorithm and the UMD HMM algorithm, there is also a high recognition rate on probe K. This indicates that the proposed templates retain their discriminative power over time.

The fact that the proposed algorithm does not work very well in group (II) (probes D-G, surface differences) indicates that the proposed templates are sensitive to the surface type. Fig. 12 shows the recognition rate of the proposed gait recognition algorithm on the USF data set with respect to different ranks; in the illustration, rank n means the individual is matched with one of the top n samples in the ordered similarity scores.


We also applied the feature templates individually for gait recognition on the USF data set; these recognition rates are also recorded in Table 3. The recognition rate when using the two templates together is higher than when using either feature template individually. The MSCT had a higher recognition rate than the SST in group (III), which means that the MSCT retains more distinctive information than the SST under the carrying, clothing and time covariates. Figs. 13 and 14 show the recognition rates when the MSCT and the SST are applied to the USF data set individually.


Compared with the baseline algorithm, the proposed algorithm achieves a significant improvement in groups (I) and (III). The experiments showed that the algorithm does not work very well if the surface type differs from that of the gallery set. The extracted silhouette images may include noise, such as shadows, under different surface types, and the distorted silhouette images may affect the recognition rate. To further improve the recognition rate, methods are needed to reconstruct the distorted silhouette images into noise-free silhouette images. The UMD HMM approach uses a Hidden Markov Model for recognition, whereas our proposed algorithm uses the feature templates directly, without any model construction or transformation before recognition; this probably affects the recognition performance. In the future, we would like to investigate adopting an HMM or another statistical model together with our proposed templates for gait recognition.






We further compared our proposed algorithm with other research work, the CMU key frame algorithm [23], on the USF data set. In this experiment, we adopted the USF version 1.7 data set, in which the silhouette images are extracted by a parameterized algorithm. It contains one gallery set and seven probe sets (A-G), and its covariates are similar to those of the version 2.1 data set. The CMU work uses the key frames of the walking sequence for gait recognition [23]. Table 4 shows the match scores of the proposed algorithm and the key frame (CMU) approach on the USF version 1.7 data set. The experimental results show that the proposed algorithm is comparable with the key frame algorithm: its performance is better than the key frame approach on probes D and E, while its rank 1 recognition rate is slightly worse than the CMU algorithm on probes A and F. Unlike on the version 2.1 data set, the proposed representations, MSCT and SST, have a better performance in group (II). Since the silhouette images in USF version 1.7 are extracted by a parameterized algorithm, the quality of the extracted silhouettes depends on the parameter value. This reveals that the recognition performance depends on the quality of the silhouette images.

CONCLUSIONS

In this paper, we proposed a gait recognition algorithm for human identification by the fusion of motion and static spatio-temporal templates. The proposed algorithm has a promising performance in indoor and outdoor environments. Two feature templates are proposed in this paper: the motion silhouette contour template (MSCT) and the static silhouette template (SST). These templates embed the motion and static characteristics of gait. The performance of the proposed algorithm was evaluated experimentally using the SOTON data set [18] and the USF data set [19]. In the experiments, the recognition rate is around 85% on the SOTON data set. On the USF data set (version 2.1), under the same surface type, the recognition rate of the proposed algorithm is higher than that of the baseline algorithm: the average recognition rates are 80% and 34% in group (I) (probes A-C) and group (III) (probes H-L), respectively. The experimental results showed that the performance of the proposed algorithm is promising in indoor and outdoor environments.

In our proposed algorithm, two feature templates, MSCT and SST, are used together for gait recognition. These feature templates retain their discriminative power under various covariates such as shoe type, viewing angle and time. However, when the surface type of the probe set differs from that of the gallery set, the performance of our proposed algorithm is a little worse than that of the baseline algorithm and the UMD HMM algorithm. This shows that the discriminative power of these feature templates is affected by the surface type. The recognition rate is lowered by distorted silhouette images: since the shadow of a person differs under different surface types, this may affect the accuracy of the silhouette extraction. To further improve the recognition rate when the surface types differ, we will investigate algorithms for reconstructing silhouette images, so that a noise-free silhouette image can be created under different conditions such as shoe, clothing and surface type differences.

In our proposed algorithm, the two templates are used directly for recognition without any model creation, parameter setting or transformation. The proposed algorithm is simple and suitable for real-time recognition; experiments showed that the average processing time is around 7.7 s, and the performance is comparable with some existing work. However, the algorithm still has room for improvement. In the future, we shall seek to apply dimension reduction techniques such as kernel PCA to reduce the computational complexity, which could further reduce the execution time. We would also like to adopt other statistical models such as the Hidden Markov Model to further improve the recognition performance of our algorithm. Furthermore, we would like to find new gait feature templates for recognition.

REFERENCES

[1] M.P. Murray, A.B. Drought, R.C. Kory, Walking patterns of normal men, J. Bone Joint Surg. 46-A (2) (1964) 335-360.

[2] J. Cutting, L. Kozlowski, Recognizing friends by their walk: gait perception without familiarity cues, Bull. Psychon. Soc. 9 (5) (1977) 353-356.

[3] C. Barclay, J. Cutting, L. Kozlowski, Temporal and spatial factors in gait perception that influence gender recognition, Percept. Psychophys. 23 (2) (1978) 145-152.

[4] N.V. Boulgouris, D. Hatzinakos, K.N. Plataniotis, Gait recognition: a challenging signal processing technology for biometric identification, IEEE Signal Process. Mag. 22 (6) (2005) 78-90.

[5] G. Johansson, Visual motion perception, Sci. Am. (1975) 75-88.

[6] S.A. Niyogi, E.H. Adelson, Analyzing and recognizing walking figures in XYT, Proc. Computer Vision and Pattern Recognition (1994) 469-474.

[7] A. Johnson, A. Bobick, A multi-view method for gait recognition using static body parameters, in: Proceedings of the Third International Conference on Audio- and Video-based Biometric Person Authentication, 2001, pp. 301-311.

[8] L. Lee, W.E.L. Grimson, Gait analysis for recognition and classification, in: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 148-155.

[9] D.K. Wagg, M.S. Nixon, On automated model-based extraction and analysis of gait, in: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 2004, pp. 11-16.

[10] H. Murase, R. Sakai, Moving object recognition in eigenspace representation: gait analysis and lip reading, Pattern Recognition Lett. 17 (1996) 155-162.

[11] M. Turk, A. Pentland, Face recognition using eigenfaces, Proc. Computer Vision and Pattern Recognition (1991) 586-591.

[12] P.S. Huang, C.J. Harris, M.S. Nixon, Human gait recognition in canonical space using temporal templates, IEE Proc. Vision Image Signal Process. 146 (2) (1999) 93-100.

[13] J.P. Foster, M.S. Nixon, A. Prügel-Bennett, Automatic gait recognition using area-based metrics, Pattern Recognition Lett. 24 (2003) 2489-2497.

[14] J.B. Hayfron-Acquah, M.S. Nixon, J.N. Carter, Automatic gait recognition by symmetry analysis, Pattern Recognition Lett. 24 (2003) 2175-2183.

[15] C. BenAbdelkader, R. Cutler, H. Nanda, L.S. Davis, EigenGait: motion-based recognition of people using image self-similarity, in: Proceedings of the International Conference on Audio- and Video-based Person Authentication (AVBPA), 2001.

[16] L. Wang, T. Tan, Silhouette analysis-based gait recognition for human identification, IEEE Trans. PAMI 25 (12) (2003) 1505-1518.

[17] Z. Liu, S. Sarkar, Simplest representation yet for gait recognition: averaged silhouette, in: Proceedings of the International Conference on Pattern Recognition, vol. 4, 2004, pp. 211-214.

[18] J.D. Shutler, M.G. Grant, M.S. Nixon, J.N. Carter, On a large sequence-based human gait database, in: Proceedings of the Fourth International Conference on Recent Advances in Soft Computing, 2002, pp. 66-71.

[19] S. Sarkar, P.J. Phillips, Z. Liu, I.R. Vega, P. Grother, K.W. Bowyer, The humanID gait challenge problem: data sets, performance, and analysis, IEEE Trans. PAMI 27 (2) (2005) 162-177.

[20] P.S. Huang, C.J. Harris, M.S. Nixon, Human gait recognition in canonical space using temporal templates, IEE Proc. Vision Image Signal Process. 146 (2) (1999) 93-100.

[21] R. van den Boomgaard, R. van Balen, Methods for fast morphological image transforms using bitmapped binary images, Comput. Vision Graphics Image Process.: Graphical Models Image Process. 54 (3) (1992) 252-258.

[22] A. Kale, A. Sundaresan, A. Rajagopalan, N. Cuntoor, A. RoyChowdhury, V. Kruger, R. Chellappa, Identification of humans using gait, IEEE Trans. Image Process. (2004) 1163-1173.

[23] R.T. Collins, R. Gross, J. Shi, Silhouette-based human identification from body shape and gait, in: Proceedings of the International Conference on Automatic Face and Gesture Recognition, 2002, pp. 351-356.