Analyzing EEG data from the Brain Computer Interface with Independent Component Analysis

Rachel Blake, mentored by Dr. Arye Nehorai
Preston M. Green Department of Electrical and Systems Engineering, Washington University in St. Louis
Introduction

Independent Component Analysis (ICA) is a form of feature extraction and blind source separation. Feature extraction is a step in pattern classification that creates a new feature subset based on transformations and combinations of the original feature set [1]. The goal of ICA for feature extraction is to find the most linearly independent signals possible based on the original signal. Blind source separation is a method for recovering original signals from a mixed signal with no knowledge of the mixing process or of the original signals.

The concept of ICA can be explained through an adaptation of the cocktail party problem. If two conversations are going on in opposite corners of a room and there are two recording devices, each recording will be a mixture of the two conversations. The goal of ICA is to separate the original conversations out of the mixed signals [2].


Classifiers

Three classifiers were evaluated: a decision tree, linear discriminant analysis (LDA), and Naïve Bayes, each described below.

Decision Tree

A decision tree is created recursively by selecting the split in the data that leads to the purest sub-datasets in the child nodes. Test samples are then classified by comparing the selected feature at each node against the split values until a leaf node is reached [3].


ICA Algorithm

The goal is to maximize non-Gaussianity, which serves as a measure of independence.

1. Choose an initial (e.g., random) weight vector $w$.
2. Let $w^{+} = E\{x\,g(w^{T}x)\} - E\{g'(w^{T}x)\}\,w$.
3. Let $w = w^{+} / \lVert w^{+} \rVert$.
4. If not converged, go back to step 2.

This is an algorithm for extracting only one independent component [2].
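The update above translates directly into code. Below is a minimal NumPy sketch of the one-unit iteration, assuming the data matrix has already been centered and whitened, and using tanh for the nonlinearity g, one common choice [2]; names and defaults are illustrative.

    import numpy as np

    def fastica_one_unit(X, max_iter=200, tol=1e-6):
        # X: (n_features, n_samples) data, assumed centered and whitened.
        g = np.tanh                                 # nonlinearity g
        g_prime = lambda u: 1.0 - np.tanh(u) ** 2   # its derivative g'
        w = np.random.randn(X.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            wx = w @ X                              # projections w^T x
            # Fixed-point update: w+ = E{x g(w^T x)} - E{g'(w^T x)} w
            w_new = (X * g(wx)).mean(axis=1) - g_prime(wx).mean() * w
            w_new /= np.linalg.norm(w_new)          # step 3: renormalize
            # Converged when w and w_new are (anti-)parallel.
            if abs(abs(w_new @ w) - 1.0) < tol:
                return w_new
            w = w_new
        return w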

BCI Background and Data

The Brain Computer Interface (BCI) is a way of using brain waves to send commands to a computer. It has important applications for individuals with physical handicaps or brain injuries who no longer have full physical capabilities. The goal of this project is to increase the accuracy of the signal processing stage of the BCI.

Data Set

The data set used for this project was EEG data collected from 14 nodes on a subject's head, giving the feature space a dimension of 14. In each trial, the subject was asked to imagine right hand movement at specific times, and this stimulus was recorded along with the signal recorded at each node. The features being considered are the signals from the different nodes.

10-Fold Cross Validation [6]

1. Split the data set into 10 groups.
2. Perform classification using each group in turn as the test set and the rest of the data as the training set.
3. Average the accuracy across all of the folds, where accuracy is calculated as

$$\text{Accuracy} = \frac{\text{number of correctly classified test samples}}{\text{total number of test samples}}$$
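A minimal sketch of this procedure, assuming NumPy arrays and a classifier factory whose objects expose fit and predict methods; all names are illustrative.

    import numpy as np

    def cross_validate(X, y, make_classifier, k=10):
        # Shuffle the sample indices, then split them into k folds.
        indices = np.random.permutation(len(y))
        folds = np.array_split(indices, k)
        accuracies = []
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate(
                [folds[j] for j in range(k) if j != i])
            clf = make_classifier()
            clf.fit(X[train_idx], y[train_idx])
            predictions = clf.predict(X[test_idx])
            # Accuracy = correctly classified samples / total samples.
            accuracies.append(np.mean(predictions == y[test_idx]))
        # Average the accuracy across all folds.
        return np.mean(accuracies)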


Naïve Bayes

A Naïve Bayes classifier calculates and maximizes the posterior probability of a test sample belonging to a specific class using the following formula, where the training data are used to calculate the prior probabilities [5]:

$$c_j = \arg\max_{c} P(C = c) \prod_{j} P(A_j = a_j \mid C = c)$$

This formula is derived from Bayes' theorem.

$A_j$ = attributes, $a_j$ = observed values of the attributes
$C$ = the actual class, $c_j$ = the predicted class
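The poster does not state how the per-attribute likelihoods P(a_j | c) were modeled; the sketch below assumes a Gaussian model per feature, a common choice for continuous EEG features, and is illustrative rather than the poster's implementation.

    import numpy as np

    def fit_gaussian_nb(X, y):
        # Estimate the prior P(c) and per-feature Gaussian parameters
        # for each class from the training data. Rows of X are samples.
        params = {}
        for c in np.unique(y):
            Xc = X[y == c]
            params[c] = (len(Xc) / len(y),       # prior P(c)
                         Xc.mean(axis=0),        # feature means
                         Xc.var(axis=0) + 1e-9)  # feature variances
        return params

    def predict_nb(x, params):
        # Pick the class maximizing log P(c) + sum_j log P(a_j | c).
        best_class, best_score = None, -np.inf
        for c, (prior, mean, var) in params.items():
            log_lik = -0.5 * np.sum(np.log(2 * np.pi * var)
                                    + (x - mean) ** 2 / var)
            score = np.log(prior) + log_lik
            if score > best_score:
                best_class, best_score = c, score
        return best_class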

Results

Decision Tree Classifier

Reducing the dimensionality negatively impacted the accuracy of classification.

Problem: decision trees are suboptimal classifiers.

Linear Discriminant Analysis Classifier

Reducing the dimensionality positively impacted the accuracy of classification.

Problem: results vary greatly depending on the threshold chosen for classification.

I selected my threshold value by looking at the LDA classifier for the ICA signal with 14 dimensions and plotting the probability of classifying each class incorrectly at different thresholds. I then selected a threshold that performed well for both classes but minimized the risk of misclassifying imagined right hand movement.
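A sketch of this threshold search, assuming the LDA projections and true labels are available as arrays; the error weighting is illustrative, not the value used for the poster.

    import numpy as np

    def error_rates(scores, y, thresholds):
        # For each candidate threshold on the LDA projection, estimate
        # the misclassification probability of each class (class 1 =
        # imagined right hand movement, class 0 = other stimulus).
        rates = []
        for t in thresholds:
            pred = (scores > t).astype(int)
            miss1 = np.mean(pred[y == 1] == 0)  # class 1 misclassified
            miss0 = np.mean(pred[y == 0] == 1)  # class 0 misclassified
            rates.append((t, miss0, miss1))
        return rates

    def pick_threshold(scores, y, thresholds, weight=2.0):
        # Favor thresholds that do well on both classes while weighting
        # errors on imagined right hand movement more heavily.
        return min(error_rates(scores, y, thresholds),
                   key=lambda r: r[1] + weight * r[2])[0]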

Naïve Bayes Classifier

Reducing the dimensionality positively impacted the accuracy of classification.

The Naïve Bayes classifier is generally reliable and efficient when the conditional independence assumption is not broken.

Reduction of Dimensionality

ICA is used here as a method to reduce the dimensionality of the feature space. By selecting the eigenvalues with the highest values, one can keep the most salient features.
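The poster does not spell out the selection procedure; the sketch below assumes the usual eigenvalue decomposition of the sample covariance (the PCA/whitening step commonly run before ICA), keeping the directions with the largest eigenvalues.

    import numpy as np

    def reduce_dimension(X, n_components):
        # X: (n_features, n_samples) data, assumed centered.
        cov = np.cov(X)                         # sample covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
        order = np.argsort(eigvals)[::-1]       # sort descending
        top = eigvecs[:, order[:n_components]]  # most salient directions
        return top.T @ X                        # reduced-dimension signals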

Linear Discriminant Analysis (LDA)

LDA classification finds a linear combination of features that separates two classes. Test samples are then classified by mapping them onto the class boundary and classifying based on a selected or calculated threshold [4].
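A minimal two-class sketch in this spirit, assuming Fisher's criterion with pooled within-class scatter; names are illustrative, and the threshold is left to the caller, as discussed under Results.

    import numpy as np

    def fit_lda_direction(X0, X1):
        # Fisher discriminant direction w ~ Sw^-1 (mean1 - mean0),
        # where Sw pools the within-class scatter. Rows are samples.
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        return np.linalg.solve(Sw, m1 - m0)

    def classify_lda(x, w, threshold):
        # Map the test sample onto the discriminant and compare the
        # projection against the selected or calculated threshold.
        return int(x @ w > threshold)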

Future Directions

Future areas of research for this topic could include:

- Other criteria for classifier error rate and accuracy
- ICA under differing conditions, such as using a different function to estimate the negentropy
- Other forms of blind source separation (e.g., Principal Component Analysis)
- Measuring the effectiveness of ICA using other, more robust classifiers
- Considering efficiency as well as accuracy when measuring the effectiveness of ICA
- Implementing ICA in an online scenario






Figures

Figure 1: The flow of the signal in a BCI.
Figure 2: Original 14 EEG signals.
Figure 3: Eigenvalues used for reducing dimensionality.
Figure 4: Part of the decision tree used in classifying ICA signals.
Figure 5: ROC curve comparing the probability of false classification of right hand movement with the probability of false classification of another stimulus.
Figure 6: Example of 10-fold cross validation.
Figure 7: Accuracy of the decision tree classifier for different levels of dimensionality reduction.
Figure 8: Accuracy of the LDA classifier for different levels of dimensionality reduction.
Figure 9: Accuracy of the Naïve Bayes classifier for different levels of dimensionality reduction.

Applications of Classifiers

Classifiers have applications in many different fields. They can be used for classifying images, sounds, language, handwriting, and human features. Classification has become an increasingly studied field recently due to higher levels of computing power and the widespread availability of large data sets, which can be more easily manipulated with classifiers [1].

Table 1: Possible applications of classifiers [1]

Conclusions

Based on the decision tree classifier, it would seem that ICA improves the accuracy of classification when the dimensionality is not reduced and worsens the accuracy when the dimensionality is reduced. However, decision tree classifiers are often suboptimal. Based on the LDA and Naïve Bayes classifiers, reducing the dimensionality increases the accuracy of classification. These are generally more reliable classifiers, so I would accept their results over those from the decision tree. Still, this research is not extensive enough to conclude without hesitation that ICA improves the predictability of the stimulus.

References

[1] A. K. Jain, R. P. W. Duin, and Jianchang Mao, "Statistical pattern recognition: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 4-37, Jan. 2000.

[2] A. Hyvärinen and E. Oja, "Independent component analysis: algorithms and applications," Neural Networks, vol. 13, no. 4-5, pp. 411-430, June 2000.

[3] S. K. Murthy, "Automatic construction of decision trees from data: a multi-disciplinary survey," Data Mining and Knowledge Discovery, vol. 2, no. 4, pp. 345-389, Dec. 1998.

[4] S. Sayad, "Linear Discriminant Analysis," An Introduction to Data Mining, 2010. Retrieved April 16, 2013, from http://www.saedsayad.com/data_mining_map.htm

[5] T. M. Mitchell, "Generative and discriminative classifiers: naive Bayes and logistic regression," in Machine Learning. New York, NY, USA: McGraw-Hill, 1997.

[6] T. Borovicka, M. Jirina, Jr., P. Kordik, and M. Jirina, "Selecting representative data sets," in A. Karahoca (Ed.), Advances in Data Mining Knowledge Discovery and Applications, 2012.

Acknowledgments

Thank you to:
Dr. Arye Nehorai
Ed Richter
Chuck Holmes
Xiaoxiao Xu
NSF Grant No. CCF-1014908

Contact Information

For more information about this project, please contact:
Rachel Blake
RachelcBlake@wustl.edu