Region-Based Landmark Selection for Manifold Learning Using Spatially Induced Sparse Kernel

Wonkook Kim, Student Member, IEEE, and Melba M. Crawford, Fellow, IEEE
Abstract—Dimension reduction methods are widely used in the analysis of hyperspectral data to extract information using a small number of features. Most dimension reduction techniques ignore the spatial context of image data, assuming that the samples are independent. We propose a new method in which the derived manifold coordinates integrate the spatial proximity of the samples into the kernel PCA (KPCA) framework. Unlike the full kernel matrix in KPCA, the new approach considers only the similarities between spatially local samples in a sparse spatio-spectral kernel. The resulting manifold coordinates produce a map of spatial coherence in each feature, associated with homogeneous regions. The capability of the proposed feature extraction approach is investigated using nearest-neighbor (NN) classification and compared to linear methods such as principal component analysis (PCA) and the maximum noise fraction (MNF) transform. Experiments are conducted using airborne hyperspectral data acquired over agricultural and urban areas.
Index Terms—Kernel PCA, manifold learning, dimension reduction, spatio-spectral kernel
I. INTRODUCTION

Enhanced spectral resolution of hyperspectral data provides better characterization of ground objects than multispectral data, enabling discrimination of subtle differences between similar materials. The large number of spectral bands also provides improved capability to accommodate the nonlinear response resulting from multipath scattering, localized differences in bidirectional reflectance, and non-uniform attenuation in water [1]. Nonlinear dimension reduction techniques, namely manifold learning, have recently been applied to hyperspectral data to exploit these nonlinear phenomena in studies involving retrieval of bathymetry [2], anomaly detection [3], and land cover classification [4], [5]. Results achieved by both global methods, such as isometric feature mapping (Isomap) [6] and kernel PCA (KPCA) [8], and local approaches, such as locally linear embedding (LLE) [7], have demonstrated classification results superior to those achieved by traditional linear methods.
Traditional manifold learning methods do not incorporate the spatial context of the data, assuming that the samples are drawn independently.

This work was supported by the National Science Foundation under Grant 0705836. The authors are with the School of Civil Engineering and the Laboratory for Applications of Remote Sensing (LARS), Purdue University, 203 Martin Jischke Drive, West Lafayette, IN 47907-1971. E-mail: {wkkim, mcrawford}@purdue.edu.
Local spatial relationships between samples in remote sensing data are, however, important, because spatial proximity is often indicative of spectral similarity of pixels. A few studies have investigated approaches to incorporate spatial information in the dimension reduction problem.
In [9], Mohan et al. modified the LLE method by computing distances between samples with stacked feature vectors constructed from spatially neighboring samples. They obtained spatially more coherent results than from pixel-based distances. Velasco-Forero et al. [10] considered the spatial context of an image as a composite of the kernel from the training data and a spatial kernel derived from the unclassified image. Spatial proximity has also been incorporated in a kernel matrix for classification using support vector machines. Although these approaches do not explicitly compute manifold coordinates, higher classification accuracies were achieved for example data sets relative to using only spectral values [11], [12].
In this paper, we propose a novel manifold learning method which produces a spatially coherent representation of hyperspectral data by incorporating spatial proximity information into the KPCA framework. In KPCA, the similarity between samples in the kernel matrix is computed for every pair of samples in the image. A disadvantage of such global methods is their inability to adapt to local patterns, focusing instead on the global structure of the manifold. Local methods such as LLE and local tangent space alignment (LTSA) [13] define the relationships between samples only in a local spectral space, considering the global structure only through the local connections. A key idea of the proposed method is to implement the local properties of the manifold in the spatial domain: a kernel matrix is made sparse based on the spatial proximity of samples, rather than on spectral similarity. The sparse kernel is then used to obtain manifold coordinates as in regular KPCA. We call the proposed method spatially local kernel PCA (SL-KPCA).
A key characteristic of the proposed SL-KPCA method is that it produces patterns of spatial homogeneity that capture uniform regions in the scene. SL-KPCA differs from traditional image segmentation algorithms, which have fixed boundaries and a unique correspondence of each pixel to a particular segment, in that SL-KPCA directly measures the degree of homogeneity for various spatial patterns. The proposed method has advantages over traditional segmentation algorithms in terms of explaining the spatial continuity of samples. First, it allows a continuous representation of homogeneity, whereas samples in segments are treated the same for a given
segmentation level. This is useful for measuring the marginality of samples with respect to a particular region. Second, SL-KPCA is not tied to a particular segmentation level, as segmentation algorithms are. One of the difficult problems in using segmentation algorithms for image analysis is that the segmentation level needs to be known a priori, yet the appropriate level may vary across images and across object classes in a scene, depending on the spectral composition of each class. The homogeneity patterns in the SL-KPCA coordinates appear at various scales and do not depend on a particular segmentation level.
We use the homogeneity measure to select landmarks for Isomap. While nonlinearity is evident in hyperspectral data, the use of manifold learning has been limited by the heavy computational load of solving the eigenvalue problem of the n × n kernel matrix. For global methods such as Isomap, the computation scales as O(n^3), where n is the number of samples. Landmark approaches mitigate this computational problem by approximating the manifold using a subset of samples, as in Nyström methods. Various landmark schemes are available, but random selection is commonly used when no prior information about spatial continuity is assumed. We use this homogeneity representation to develop an intelligent landmark selection scheme that collects fewer samples from homogeneous regions, which have high redundancy in the data.
The structure of the paper is outlined as follows. The methodology, including the KPCA formulation, the proposed sparse spatio-spectral kernel, and the region-based landmark selection scheme, is developed in Section II. Experimental results are presented in Section III, and conclusions and future work in Section IV.
II. METHODOLOGY

In this section, we present the complete procedure of the proposed region-based landmark selection scheme. First, the formulation of the proposed SL-KPCA method is developed by introducing the spatially derived sparsity. A rejection method then uses the resulting homogeneity representation in a region-based sampling scheme.
A. Spatially Local Kernel PCA

The KPCA formulation is briefly described in this section using the primal and dual representations of principal component analysis. Given n data samples x_1, x_2, ..., x_n, each a p-dimensional column vector, we can construct an n × p data matrix X by storing the data samples as row vectors. The principal components of the data set are obtained by solving the eigenvalue problem of the estimated covariance matrix,

X^T X = U Λ U^T,   (1)

where U is an eigenvector matrix and Λ is a diagonal eigenvalue matrix.
The projection of a novel sample x onto the k-dimensional subspace is obtained by y = U_(k)^T x, where the p × k matrix U_(k) contains the eigenvectors corresponding to the k largest eigenvalues for a target dimension k.
The same problem can be formulated using the dual representation of principal component analysis, which is based on the Gram matrix G = XX^T. Note that the n^2 elements of G are the inner products between the n data samples. In this dual representation, the projections onto the principal components can be computed as

Y = V_(k) Λ_(k)^(1/2),   (2)

where V and Λ are the eigenvector matrix and eigenvalue matrix of G, i.e., G = V Λ V^T.
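The equivalence between the primal and dual formulations can be checked numerically. The following sketch (plain NumPy; variable names are ours) verifies that projections computed from the Gram matrix G = XX^T match those computed from the covariance eigenvectors, up to an arbitrary sign per component.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 8, 3
X = rng.standard_normal((n, p))
X -= X.mean(axis=0)                      # center the data, as PCA assumes

# Primal: eigenvectors U of X^T X (Eq. (1)); projections Y = X U_(k)
evals_p, U_full = np.linalg.eigh(X.T @ X)
idx = np.argsort(evals_p)[::-1][:k]      # keep the k largest eigenvalues
U = U_full[:, idx]
Y_primal = X @ U

# Dual: eigenvectors V of G = X X^T; projections Y = V_(k) Lambda_(k)^(1/2)
G = X @ X.T
evals_d, V_full = np.linalg.eigh(G)
idx = np.argsort(evals_d)[::-1][:k]
V, lam = V_full[:, idx], evals_d[idx]
Y_dual = V * np.sqrt(lam)

# The two projections agree up to an arbitrary sign per component
for j in range(k):
    s = np.sign(Y_primal[:, j] @ Y_dual[:, j])
    assert np.allclose(Y_primal[:, j], s * Y_dual[:, j], atol=1e-8)
```

The dual route never forms the p × p covariance, which is exactly what makes the kernel substitution of the next paragraph possible.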
Kernel PCA can be formulated by introducing the kernel trick into the dual representation, which replaces the inner product in the matrix G by a kernel function K, i.e.,

G_ij = x_i^T x_j  →  G_ij = K(x_i, x_j).   (3)
The projection onto the k-th feature is then computed using the kernel function as

y_k(x) = Σ_{i=1}^{n} α_i^(k) K(x_i, x),   (4)

α^(k) = λ_k^(-1/2) v_k,   (5)

where v_k and λ_k are the k-th eigenvector-eigenvalue pair of G. An advantage of the dual representation is that it allows further operations between samples, which is not possible in the primal representation.
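Eqs. (3)-(5) translate directly into code. The sketch below (our variable names; a linear kernel and no kernel centering, for brevity) computes the coefficient vectors of Eq. (5) and checks that projecting a training sample through Eq. (4) reproduces the corresponding row of V_(k) Λ_(k)^(1/2) from Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 40, 6, 2
X = rng.standard_normal((n, p))

def kernel(a, b):
    # linear kernel for illustration; any kernel K(x_i, x_j) may be used
    return a @ b

# Gram matrix G_ij = K(x_i, x_j), Eq. (3)
G = np.array([[kernel(xi, xj) for xj in X] for xi in X])

lam, V = np.linalg.eigh(G)
order = np.argsort(lam)[::-1][:k]        # k largest eigenpairs of G
lam, V = lam[order], V[:, order]

# Coefficient vectors alpha^(k) = lam_k^(-1/2) v_k, Eq. (5)
alpha = V / np.sqrt(lam)

def project(x):
    # Projection onto feature k: sum_i alpha_i^(k) K(x_i, x), Eq. (4)
    kvec = np.array([kernel(xi, x) for xi in X])
    return alpha.T @ kvec

# For a training sample this reproduces row j of V_(k) Lambda_(k)^(1/2)
j = 5
assert np.allclose(project(X[j]), (V * np.sqrt(lam))[j])
```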
SL-KPCA constructs a sparse kernel based on the spatial proximity between samples. First, a sparse adjacency kernel A is developed by identifying spatially local neighbors. Denote the spatial location of the i-th sample by z_i and the corresponding spectral data sample by x_i, giving the pair (x_i, z_i). Then the n × n adjacency kernel is defined as

A_ij = 1 if ||z_i − z_j|| ≤ τ,   (6)
A_ij = 0 otherwise,

where τ is a user-defined threshold for the neighborhood size.
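One way to build the adjacency kernel of Eq. (6) as a sparse matrix is to query a k-d tree for all pixel pairs within the radius τ; the sketch below uses SciPy's cKDTree on a toy pixel grid (grid sizes and variable names are ours).

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix

# Pixel grid coordinates z_i for a small h x w image (illustrative sizes)
h, w = 20, 30
ys, xs = np.mgrid[0:h, 0:w]
Z = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
n = Z.shape[0]

tau = 2.0  # user-defined neighborhood radius, the threshold in Eq. (6)

# A_ij = 1 if ||z_i - z_j|| <= tau, else 0, stored as a sparse matrix
tree = cKDTree(Z)
pairs = tree.query_pairs(r=tau, output_type='ndarray')  # all i < j within tau
rows = np.concatenate([pairs[:, 0], pairs[:, 1], np.arange(n)])
cols = np.concatenate([pairs[:, 1], pairs[:, 0], np.arange(n)])
A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))

# Each pixel has only O(tau^2) neighbors, so A holds ~c*n nonzeros, not n^2
print(A.nnz, "nonzeros vs", n * n, "in a full matrix")
```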
To consider the spectral similarities between samples only within the spatial neighborhood, the adjacency kernel A is combined with the kernel matrix G through multiplication of corresponding elements,

G*_ij = G_ij A_ij,  i, j = 1, ..., n,   (7)

where G* denotes the derived sparse spatio-spectral kernel matrix.
For the spectral kernel, we consider only the spectral angle mapper (SAM), or normalized linear kernel, in this paper, since it reduces the effect of vector magnitudes and is thus more suitable for measuring the similarity between samples. The SAM kernel is given as

K(x_i, x_j) = x_i^T x_j / (||x_i|| ||x_j||).   (8)

The sparse spatio-spectral kernel is then incorporated into the KPCA framework to produce the manifold coordinates.
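Eqs. (7)-(8) can be combined so that the SAM kernel is evaluated only where A_ij = 1, which is what makes G* cheap to form and store. A minimal sketch on toy data (sizes and variable names are ours; the brute-force neighbor search is for clarity, not efficiency):

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(2)

# Toy data: n pixels on a grid with p-band spectra (illustrative sizes)
h, w, p = 10, 12, 30
n = h * w
X = rng.random((n, p)) + 0.1             # spectra, strictly positive
ys, xs = np.mgrid[0:h, 0:w]
Z = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)

tau = 1.5  # spatial neighborhood threshold of Eq. (6)

# Normalize rows once, so the SAM (normalized linear) kernel of Eq. (8)
# reduces to an inner product of unit vectors
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)

# Build G*_ij = G_ij * A_ij (Eq. (7)): evaluate the kernel only where the
# spatial adjacency A_ij = 1, leaving all other entries structurally zero
rows, cols, vals = [], [], []
for i in range(n):
    d = np.linalg.norm(Z - Z[i], axis=1)
    for j in np.nonzero(d <= tau)[0]:    # spatially local neighbors of i
        rows.append(i)
        cols.append(j)
        vals.append(Xn[i] @ Xn[j])       # SAM kernel value, here in (0, 1]
G_star = csr_matrix((vals, (rows, cols)), shape=(n, n))

assert G_star.nnz < n * n                # sparse, roughly c*n entries
assert abs(G_star[0, 0] - 1.0) < 1e-12   # self-similarity of unit vectors
```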
The kernel matrix of the proposed method is sparse, with a small number of non-zero entries (cn non-zero entries, where c is a constant with c ≪ n), whereas a full matrix has n^2 non-zero entries. The eigenvalue problem of a large sparse symmetric matrix can be solved effectively by various iterative methods such as the Lanczos algorithm [16].
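In SciPy, for example, scipy.sparse.linalg.eigsh wraps the implicitly restarted Lanczos/Arnoldi method of ARPACK [15] and accepts a sparse symmetric matrix directly. A sketch with a random sparse symmetric stand-in for G* (the real kernel would come from Eq. (7)):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

n, k = 500, 5

# Stand-in for the sparse spatio-spectral kernel: a random sparse
# symmetric matrix (illustrative only)
S = sparse_random(n, n, density=0.01, random_state=3, format='csr')
G_star = (S + S.T) * 0.5

# Largest k eigenpairs via the iterative Lanczos-type solver (ARPACK)
lam, V = eigsh(G_star, k=k, which='LA')

# Manifold coordinates as in Eq. (2): Y = V_(k) Lambda_(k)^(1/2)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]
Y = V * np.sqrt(np.maximum(lam, 0))      # guard against tiny negatives
assert Y.shape == (n, k)
```

Only matrix-vector products with G* are needed, so the cost per iteration is proportional to the number of non-zero entries rather than n^2.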
B. Region-Based Landmark Selection

A region-based landmark selection scheme is developed in this section using the coordinates derived by the SL-KPCA method. A key idea is to collect more landmark samples in inhomogeneous regions than in homogeneous regions, because "spectrally" inhomogeneous regions of high curvature in the manifold require more samples to characterize than homogeneous regions do. Whereas it is often difficult and time consuming to identify "spectrally" homogeneous regions, i.e., to cluster in high-dimensional space, it is relatively easy to obtain spatial homogeneity, since this considers only the spatially neighboring samples. Moreover, spectrally similar classes are hard to separate in high-dimensional space, whereas different classes are often located in spatially disjoint regions in real scenes.
In the proposed landmark selection scheme, we use a rejection method [18] to select samples based on the degree of homogeneity. The rejection method is a sampling scheme that generates samples following a given probability distribution. A good set of landmark samples is obtained after removing samples from homogeneous regions based on the homogeneity patterns in the SL-KPCA coordinates. For a given landmark ratio t, a total of (1 − t)n samples are removed from the n samples, leaving tn landmark samples.
Let x* be the coordinates from SL-KPCA and p be the number of features to use. Then, in each of the p SL-KPCA features, (1 − t)n/p samples are selected and removed from the landmark candidates based on the homogeneity pattern. Taking the homogeneity image of each feature as a probability distribution on the x-y spatial plane, we perform rejection sampling to select samples that follow the distribution. For a given homogeneity measure, we set a constant envelope distribution f(x) = c, where c is the maximum value of the feature, so that a candidate sample x is accepted with probability p(x)/f(x). This sampling is repeated until the number of samples required for each feature, (1 − t)n/p, has been collected. Repeating this for every feature identifies the (1 − t)n samples to be removed and the tn landmark samples.
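The removal step can be sketched as follows, with a random image standing in for one SL-KPCA homogeneity feature treated as an unnormalized density p(x), and the constant envelope f(x) = c from above (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in homogeneity image for one SL-KPCA feature (nonnegative values)
h, w = 50, 60
homogeneity = rng.random((h, w))
p_vals = homogeneity.ravel()
n = p_vals.size

m = 300                        # samples to remove for this feature, (1-t)n/p
c = p_vals.max()               # constant envelope f(x) = c

removed = set()
while len(removed) < m:
    i = int(rng.integers(n))               # propose a pixel uniformly
    if i in removed:
        continue
    if rng.random() < p_vals[i] / c:       # accept with probability p(x)/c
        removed.add(i)                     # high homogeneity -> likely removed

landmarks = np.setdiff1d(np.arange(n), list(removed))
assert len(removed) == m and len(landmarks) == n - m
```

Because acceptance is proportional to the homogeneity value, the removed set concentrates in homogeneous regions, leaving the landmarks biased toward inhomogeneous, high-curvature areas.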
III. DATA SETS AND EXPERIMENTAL RESULTS

The new method was evaluated using airborne hyperspectral data sets collected over different land cover scenes.
The Indian Pine NE image is a mosaic subset of data from multiple flightlines acquired by the ProSpecTIR system during May 24-25, 2010 over an agricultural area near Purdue University in Indiana, USA. The image has 424 × 449 pixels collected in 360 bands at 5-nm spectral resolution and 2-m spatial resolution. The land cover data contain 12 classes consisting of geometrically regular agricultural fields with different crop residue cover, vegetated areas, and man-made structures (Fig. 1(a)).
The KSC wetland image is a subset of an AVIRIS scene over the Kennedy Space Center in Florida, USA, acquired on March 23, 1996. The water and urban areas in the original scene were removed to focus on the wetland and upland classes, which are spectrally similar and have complex spatial patterns. The image has 296 × 324 pixels with a ground resolution of 18 m and 176 spectral bands. The land cover types in the scene include wetland and upland classes comprised of marshes with similar spectral signatures and upland vegetation (Fig. 1(b)).
The third data set, Pavia Univ. center, is a subset of an urban area collected by the ROSIS-03 system on July 8, 2002 over Pavia University, Italy [17]. The subset image has 341 × 300 pixels with a ground resolution of 1.6 m. Among the original 115 bands over 430-860 nm, 12 noisy bands were removed, resulting in 103 bands. The scene consists of materials from urban structures such as buildings and pavement, vegetation, and bare soil (Fig. 1(c)).
Figures 2 and 3 show the gray-scale images of the first five features obtained from SL-KPCA_linear(5) and SL-KPCA_SAM(5), respectively, for the Indian Pine NE image. The results show that various patterns of homogeneous regions are captured in the derived features. Regions identified in the SL-KPCA_linear(5) results (Fig. 2) are small compared to the SAM kernel results (Fig. 3). In Fig. 2, it is observed that the first few features correspond to very bright objects of high reflectance, such as the urban area (features 1-4) and a building in the woods (feature 5).
Regions in the SL-KPCA_SAM(5) results (Fig. 3) cover spatially contiguous areas such as a hay field (feature 1), a large soybean field (feature 2), a wooded area with texture (feature 3), woods and hay fields (feature 4), and two areas in a soybean field whose spectral values exhibit an offset created by the mosaicking process (feature 5). The difference in the overall size of the homogeneous regions between the two kernel types appears to arise because the SAM kernel smooths out the similarities through normalization and allows the characterization of subtle variation over large areas, whereas the similarity in the linear kernel is extremely high for high-reflectance regions, which dominate the initial features.
Although feature images for other dimensions are not shown due to limited space, the size of the homogeneous regions tends to decrease as the dimension increases. Figure 4 supports this observation by showing the change in the number of samples in the homogeneous regions for all the features in both kernels. For the SL-KPCA_linear(5) method, the number of samples in homogeneous areas tends to increase as the dimension becomes higher, and the number of homogeneous regions also increases for higher dimensions. The trend is clearer in the SAM kernel results. Figure 4(c)-(d) shows the change in the mean value of samples in the homogeneous regions. The mean values decrease as the dimension increases, indicating that the latter dimensions of the proposed method find small homogeneous regions with low values in the coordinates.
To evaluate the spatial coherence of the manifold coordinates, NN classification was performed with small training sets. The classification was performed for various dimensions of the manifold coordinates by selecting the first k features, to investigate how effectively the features represent the coherence in the images. For the experiments, 10% of the labeled samples were first randomly selected as a training set, and the classification results were tested on an exclusive test set accounting for 30% of the labeled samples. Classification was repeated 10 times for 10 different sets of training and testing data to obtain reliable estimates of the accuracies.
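The evaluation protocol can be sketched as follows: a 1-NN classifier on the first k features, with random 10%/30% train/test splits repeated over 10 trials (synthetic stand-in features and labels; variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for manifold coordinates and labels of n labeled pixels
n, dim, n_classes = 600, 10, 4
labels = rng.integers(n_classes, size=n)
features = rng.standard_normal((n, dim)) + 2.0 * labels[:, None]

def nn_accuracy(k, trial_seed):
    """1-NN accuracy on the first k features for one random 10/30 split."""
    r = np.random.default_rng(trial_seed)
    idx = r.permutation(n)
    tr = idx[: int(0.1 * n)]                 # 10% training samples
    te = idx[int(0.1 * n): int(0.4 * n)]     # exclusive 30% test samples
    F = features[:, :k]
    # Euclidean 1-NN: each test sample takes the closest training label
    d = np.linalg.norm(F[te, None, :] - F[None, tr, :], axis=2)
    pred = labels[tr][np.argmin(d, axis=1)]
    return np.mean(pred == labels[te])

# Repeat over 10 random splits and average, as in the experiments
acc = np.mean([nn_accuracy(k=5, trial_seed=s) for s in range(10)])
assert 0.0 <= acc <= 1.0
```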
Figure 5 shows the classification results for the three data sets, in which the results of SL-KPCA are compared to PCA and MNF. The results show that SL-KPCA with the SAM kernel produced consistently robust results compared to the other methods, while results obtained using the linear kernel required more features to achieve the best accuracies. The MNF transform has higher accuracies than the non-spatial PCA method. The results of SL-KPCA with the SAM kernel are similar to those of MNF for Indian Pine NE, and more effective than MNF for KSC wetland and Pavia Univ. center
. Table I shows the accuracies of the methods for all the data sets at target dimension k = 5. SL-KPCA_SAM(5) obtains higher Kappa statistics than the other linear methods (~5%, ~22%, and ~6%, respectively, for each data set) while maintaining the same level of variance. The large improvement for the KSC wetland data in the first few features seems to be due to the characteristics of the ground reference data, which covers only a small portion of the scene; a relatively large set of features is required to prevent confusion between spatially adjacent labels of different classes.
SL-KPCA with the linear kernel shows a slow increase in accuracy as the dimension grows. This is because the regions in the linear kernel case are so small that many dimensions are needed before all areas of the scene are covered by the features. The linear kernel produced performance equivalent to the SAM kernel for the KSC wetland data when used with k = 5.
The classification results on the whole scenes obtained using only the first five features (k = 5) are presented in Fig. 6 for the Indian Pine NE and Pavia Univ. center data. The results show that the salt-and-pepper patterns in the homogeneous regions of the PCA result are mitigated in the MNF results, and further removed in the SL-KPCA_SAM(5) result. In particular, elongated features such as roads and bricks are clearly identified without being confused with spatially adjacent classes.
IV. CONCLUSIONS AND FUTURE WORK

The proposed method produces a spatially coherent representation of the data, where homogeneous regions are indicated in the KPCA features, which serve as a diverse basis for representing the homogeneity of the fields. The classification results show that SL-KPCA with the SAM kernel produces the coherence pattern more effectively, and with fewer features, than with the linear kernel. While the major patterns are captured in the first features, the latter features provide more detailed patterns, resulting in a steady increase in classification accuracy. Future work includes the use of different kernels, including nonlinear kernels such as the radial basis function (RBF) and polynomial kernels.
ACKNOWLEDGMENT

The authors would like to thank P. Gamba of the University of Pavia, Italy, for providing the hyperspectral data over Pavia University.
REFERENCES

[1] C. M. Bachmann, T. L. Ainsworth, and R. A. Fusina, "Improved manifold coordinate representations of large-scale hyperspectral scenes," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 10, pp. 2786-2803, Oct. 2006.
[2] C. M. Bachmann, T. L. Ainsworth, R. A. Fusina, M. J. Montes, J. H. Bowles, D. R. Korwan, and D. B. Gillis, "Bathymetric retrieval from hyperspectral imagery using manifold coordinate representations," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 3, pp. 884-897, Mar. 2009.
[3] L. Ma, M. M. Crawford, and J. W. Tian, "Anomaly detection for hyperspectral images based on robust locally linear embedding," J. Infrared, Millimeter, and Terahertz Waves, vol. 31, no. 6, pp. 753-762, 2010.
[4] L. Ma, M. M. Crawford, and J. Tian, "Local manifold learning-based k-nearest-neighbor for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 11, pp. 4099-4109, 2010.
[5] M. Fauvel, J. Chanussot, and J. A. Benediktsson, "Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas," EURASIP J. Adv. Signal Process., vol. 2009, 2009.
[6] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319-2323, 2000.
[7] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[8] B. Schölkopf, A. J. Smola, and K.-R. Müller, "Kernel principal component analysis," Lecture Notes in Computer Science, vol. 1327, pp. 583-588, 1997.
[9] A. Mohan, G. Sapiro, and E. Bosch, "Spatially coherent nonlinear dimensionality reduction and segmentation of hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 4, no. 2, pp. 206-210, 2007.
[10] S. Velasco-Forero and V. Manian, "Improving hyperspectral image classification using spatial preprocessing," IEEE Geosci. Remote Sens. Lett., vol. 6, no. 2, pp. 297-301, 2009.
[11] G. Camps-Valls, N. Shervashidze, and K. M. Borgwardt, "Spatio-spectral remote sensing image classification with graph kernels," IEEE Geosci. Remote Sens. Lett., vol. 7, no. 4, pp. 741-745, 2010.
[12] L. Gomez-Chova, G. Camps-Valls, L. Bruzzone, and J. Calpe-Maravilla, "Mean map kernel methods for semisupervised cloud classification," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 1, pp. 207-220, 2010.
[13] H. Li, L. Teng, W. Chen, and I.-F. Shen, "Supervised learning on local tangent space," Advances in Neural Networks - ISNN 2005, vol. 3496, pp. 546-551, 2005.
[14] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[15] R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods. SIAM, Philadelphia, 1998.
[16] J. K. Cullum and R. A. Willoughby, Lanczos Algorithms for Large Symmetric Eigenvalue Computations: Theory. Society for Industrial Mathematics, 2002.
[17] P. Gamba, "A collection of data for urban area characterization," in Proc. IGARSS'04, Anchorage, AK, USA, Sept. 2004, vol. I, pp. 69-72.
[18] C. Robert, Monte Carlo Statistical Methods. Springer-Verlag, 2004.
Wonkook Kim received the B.S. and M.S. degrees in civil engineering from Seoul National University, Korea, and Purdue University, respectively, in 2004 and 2005. He served in the Korea Army as infantry from March 1999 to May 2001. He is currently with the Laboratory for Applications of Remote Sensing (LARS) and is working toward the Ph.D. degree in the Department of Civil Engineering, Purdue University. His research interests include spatial variation in remote sensing data, transfer learning under changing environments, and manifold learning and semi-supervised learning of hyperspectral image data. He is a Student Member of the IEEE.
Melba M. Crawford received the B.S. and M.S. degrees in civil engineering from the University of Illinois, Urbana, in 1970 and 1973, respectively, and the Ph.D. degree in systems engineering from Ohio State University, Columbus, in 1981. She was a faculty member at the University of Texas at Austin from 1990 to 2005, where she founded an interdisciplinary research and applications development program in space-based and airborne remote sensing. She is currently at Purdue University, where she holds the Purdue Chair of Excellence in Earth Observation, is Director of the Laboratory for Applications of Remote Sensing, and is Associate Dean of Engineering for Research. Dr. Crawford's research interests focus on the development of statistical techniques for the analysis of spatial-temporal processes and their application to remotely sensed data, including classification of high-dimensional data, data fusion techniques for multi-sensor problems, multi-resolution methods in image analysis, and knowledge transfer in data mining. She is a Fellow of the IEEE and an Associate Editor of IEEE TGRS.