
Playing with features for learning and prediction

Jongmin Kim
Seoul National University

Problem statement

- Predicting the outcome of surgery

Predicting outcome of surgery

- Ideal approach: [diagram: training data → predicted outcome of a new surgery]

Predicting outcome of surgery


- Initial approach: predicting partial features
- Which features should we predict?

Predicting outcome of surgery


- 4 surgeries: DHL + RFT + TAL + FDO
- Features (see the sketch below):
  - flexion of the knee (min / max)
  - dorsiflexion of the ankle (min)
  - rotation of the foot (min / max)
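A minimal sketch of how the min/max summary features above could be computed, assuming each joint-angle trajectory over a gait cycle is a 1-D numpy array in degrees; the function name and the toy trajectories are hypothetical.

    import numpy as np

    def summary_features(knee_flexion, ankle_dorsiflexion, foot_rotation):
        # reduce joint-angle trajectories to the min/max features listed above
        return np.array([
            knee_flexion.min(), knee_flexion.max(),    # flexion of the knee (min / max)
            ankle_dorsiflexion.min(),                  # dorsiflexion of the ankle (min)
            foot_rotation.min(), foot_rotation.max(),  # rotation of the foot (min / max)
        ])

    # toy trajectories, 100 frames per gait cycle
    t = np.linspace(0.0, 2.0 * np.pi, 100)
    x = summary_features(30 + 30 * np.sin(t), -5 + 10 * np.cos(t), 5 * np.sin(2 * t))
    print(x)  # one 5-D feature vector per limb/patient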

Predicting outcome of surgery


- Are these good features?
- Number of training samples:
  - DHL+RFT+TAL: 35 samples
  - FDO+DHL+TAL+RFT: 33 samples




Machine learning and features

- [diagram: Data → Feature representation → Learning algorithm]


Machine learning and features

- Features in motion (see the sketch after this list):
  - Joint position / angle
  - Velocity / acceleration
  - Distance between body parts
  - Contact status
- Features in computer vision: SIFT, spin image, HoG, RIFT, textons, GLOH
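A minimal sketch of the motion features above, assuming (frames, 3) position arrays with y up; the 30 fps frame time, the joint names, and the ground-contact threshold are all made-up assumptions.

    import numpy as np

    DT = 1.0 / 30.0  # assumed frame time (30 fps capture)

    def motion_features(joint_pos, foot_pos, pelvis_pos, ground_height=0.02):
        # joint_pos, foot_pos, pelvis_pos: (frames, 3) positions, y is up
        velocity = np.gradient(joint_pos, DT, axis=0)             # finite-difference velocity
        acceleration = np.gradient(velocity, DT, axis=0)          # ...and acceleration
        distance = np.linalg.norm(foot_pos - pelvis_pos, axis=1)  # distance between body parts
        contact = foot_pos[:, 1] < ground_height                  # contact: foot near the ground
        return velocity, acceleration, distance, contact

    # toy call with random (120, 3) trajectories
    v, a, d, c = motion_features(*np.random.default_rng(0).standard_normal((3, 120, 3)))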

Outline

- Feature selection
  - Feature ranking
  - Subset selection: wrapper, filter, embedded
  - Recursive Feature Elimination
  - Combination of weak learners (boosting)
    - AdaBoost (classification) / joint boosting (classification) / gradient boosting (regression)
- Prediction results with feature selection
- Feature learning?

Feature selection

- Alleviates the effect of the curse of dimensionality
- Improves prediction performance
- Faster and more cost-effective
- Provides a better understanding of the data

- Subset selection (see the sketch below):
  - Wrapper
  - Filter
  - Embedded
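A minimal scikit-learn sketch contrasting a filter method (univariate scoring) with a wrapper-style method (the Recursive Feature Elimination from the outline); the synthetic data, the choice of k, and the estimator are assumptions for illustration.

    import numpy as np
    from sklearn.feature_selection import RFE, SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 10))        # 60 patients, 10 candidate features
    y = (X[:, 0] + X[:, 3] > 0).astype(int)  # outcome depends only on features 0 and 3

    # filter: score each feature independently of any model, keep the best k
    filt = SelectKBest(f_classif, k=3).fit(X, y)
    print("filter keeps:", np.flatnonzero(filt.get_support()))

    # wrapper (RFE): repeatedly refit a model and drop its weakest feature
    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=3).fit(X, y)
    print("RFE keeps:", np.flatnonzero(rfe.support_))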

Feature learning?

- Can we automatically learn a good feature representation?
- Also known as: unsupervised feature learning, feature learning, deep learning, representation learning, etc.

- Hand-designed features (by humans):
  1. need expert knowledge
  2. require time-consuming hand-tuning

- When it is unclear how to hand-design features: automatically learned features (by machine)


Learning Feature Representations

- Key idea:
  - Learn the statistical structure or correlation of the data from unlabeled data
  - The learned representations can then be used as features in supervised and semi-supervised settings

Learning Feature Representations

- e.g. [diagram: Input (image / features) → Encoder → Output features; the encoder is the feed-forward / bottom-up path, the decoder the feed-back / generative / top-down path]

Learning Feature Representations

- e.g. Predictive Sparse Decomposition [Kavukcuoglu et al., '09]:
  - input patch x, sparse features z
  - encoder: z = σ(Wx), with encoder filters W and sigmoid function σ(·)
  - decoder: reconstruction Dz, with decoder filters D
  - L1 sparsity on z
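A minimal numpy sketch of the PSD objective, which (following Kavukcuoglu et al.) combines reconstruction error, an L1 penalty on the code z, and a term tying z to the encoder prediction σ(Wx); the weights lam and alpha and all sizes here are made-up assumptions.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def psd_loss(x, z, W, D, lam=0.1, alpha=1.0):
        recon = np.sum((x - D @ z) ** 2)                  # decoder Dz should reconstruct the patch x
        sparsity = lam * np.sum(np.abs(z))                # L1 sparsity on the code z
        pred = alpha * np.sum((z - sigmoid(W @ x)) ** 2)  # encoder sigma(Wx) should predict z
        return recon + sparsity + pred

    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)         # flattened 8x8 input patch
    W = rng.standard_normal((32, 64))   # encoder filters
    D = rng.standard_normal((64, 32))   # decoder filters
    z = sigmoid(W @ x)                  # initialize the code from the encoder
    print(psd_loss(x, z, W, D))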

Stacked Auto-Encoders [Hinton & Salakhutdinov, Science '06]

- [diagram: input image → (encoder / decoder) → features → (encoder / decoder) → features → (encoder / decoder) → class label; a layer-wise training sketch follows below]
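A minimal numpy sketch of greedy layer-wise pretraining in the spirit of the slide, assuming sigmoid encoders, linear decoders, and plain gradient descent on squared reconstruction error; the layer sizes and learning rate are arbitrary.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def train_autoencoder(X, n_hidden, lr=0.01, epochs=200, seed=0):
        # train one encoder/decoder pair to reconstruct X; return the encoder weights
        rng = np.random.default_rng(seed)
        W = 0.1 * rng.standard_normal((X.shape[1], n_hidden))  # encoder
        D = 0.1 * rng.standard_normal((n_hidden, X.shape[1]))  # decoder
        for _ in range(epochs):
            Z = sigmoid(X @ W)              # codes (features)
            err = Z @ D - X                 # reconstruction error
            gZ = err @ D.T * Z * (1.0 - Z)  # backprop through the sigmoid
            W -= lr * X.T @ gZ / len(X)
            D -= lr * Z.T @ err / len(X)
        return W

    # stack: each layer encodes the previous layer's features
    H = np.random.default_rng(1).standard_normal((100, 64))
    encoders = []
    for n_hidden in (32, 16):
        W = train_autoencoder(H, n_hidden)
        encoders.append(W)
        H = sigmoid(H @ W)  # features fed to the next layer (and finally to a classifier)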

At Test Time [Hinton & Salakhutdinov, Science '06]

- [diagram: input image → encoder → features → encoder → features → encoder → class label]
- Remove the decoders
- Use the feed-forward path (sketched below)
- Gives a standard (convolutional) neural network
- Can fine-tune with backprop
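Continuing the pretraining sketch above: at test time only the encoder stack is kept, so prediction is a single feed-forward pass whose parameters can then be fine-tuned with backprop; the encoders list is an assumption carried over from that sketch.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def feed_forward(x, encoders):
        # decoders removed: only the bottom-up path remains, as in a standard neural net
        h = x
        for W in encoders:
            h = sigmoid(h @ W)
        return h  # final features, passed to a classifier / fine-tuned with backprop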

Status & plan

- Understanding the data / survey of learning techniques…

- Plan:
  - November: experiments
  - December: paper writing
  - January: SIGGRAPH submission
  - August: presentation in the US

- But before all of that….

Deep neural net vs. boosting

- Deep nets:
  - a single, highly non-linear system
  - a "deep" stack of simpler modules
  - all parameters are subject to learning

- Boosting & forests:
  - a sequence of "weak" (simple) classifiers that are linearly combined to produce a powerful classifier (see the sketch below)
  - subsequent classifiers do not exploit the representations of earlier classifiers; it is a "shallow" linear mixture
  - typically, features are not learned
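A minimal scikit-learn sketch of the boosting side, assuming synthetic regression data and arbitrary hyperparameters; staged_predict exposes the running sum of weak trees, which is exactly the "shallow linear mixture" described above.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(200)  # toy regression target

    model = GradientBoostingRegressor(n_estimators=50, max_depth=2, learning_rate=0.1)
    model.fit(X, y)

    # each stage linearly adds one more weak tree; no earlier representation is reused
    for i, y_hat in enumerate(model.staged_predict(X)):
        if (i + 1) % 10 == 0:
            print(i + 1, "trees, train MSE:", round(float(np.mean((y - y_hat) ** 2)), 4))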