Sentiment analysis of movie reviews using Support Vector Machines


A. Introduction

Sentiment analysis is a subfield of NLP concerned with the determination of opinion and subjectivity in a text, which has applications in the analysis of online product reviews, recommendations, blogs, and other types of opinionated documents.

In this assignment you will be developing classifiers for sentiment analysis of movie reviews using Support Vector Machines (SVMs), in the manner of the paper by Pang, Lee, and Vaithyanathan [1], which was the first research on this topic. The goal is to develop a classifier that performs sentiment analysis, assigning a movie review a label of "positive" or "negative" that predicts whether the author of the review liked the movie or disliked it.

You may use either Java or Python for this assignment, but for the machine learning you must use SVMlight (section D).

http://svmlight.joachims.org/


B. Data

The data (available on the course web page) consists of 1,000 positive and 1,000 negative reviews. These have been divided into training, validation, and test sets of 800, 100, and 100 reviews, respectively. In order to encourage you not to optimize against the testing set while developing your classifiers, the testing data will not be immediately available.

The reviews were obtained from Pang's website [2], and then part-of-speech tagged using a bidirectional Maximum Entropy Markov Model [3, 4].

Each document is formatted as one sentence per line. Each token is of the format word/POStag, where a "word" also includes punctuation. Each word is in lowercase. There is sometimes more than one slash in a token, such as in writer/director/NN.
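For example, a token in this format can be split on its last slash so that a form such as writer/director/NN keeps the whole word. The following Python sketch shows one way to read a tagged review; the function name and file handling are illustrative, not part of the assignment.

def parse_review(path):
    """Read a POS-tagged review: one sentence per line, tokens as word/POStag.
    Splitting on the last slash keeps words that contain slashes,
    e.g. writer/director/NN -> ("writer/director", "NN")."""
    sentences = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = []
            for tok in line.split():
                word, _, tag = tok.rpartition("/")
                tokens.append((word, tag))
            if tokens:
                sentences.append(tokens)
    return sentences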


C. Baseline system

For a baseline system, think of 20 words that you think would be indicative of a positive
movie review, and 20 words that you think would be indicative of a negative review.


To develop the baseline classifier, take this approach: given a movie review, count how many times it contains either a positive word or a negative word (token occurrences). Assign the label POSITIVE if the review contains more positive words than negative words. Assign the label NEGATIVE if it contains more negative words than positive words. If there are an equal number of positive and negative words, it is a TIE.
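As an illustration, the baseline might be implemented along these lines in Python; the word sets shown are placeholders for the 20 positive and 20 negative words you choose yourself.

# Placeholder lists: substitute your own 20 positive and 20 negative words.
POSITIVE_WORDS = {"great", "excellent", "wonderful"}
NEGATIVE_WORDS = {"boring", "awful", "terrible"}

def baseline_label(words):
    """Count positive and negative token occurrences and return
    POSITIVE, NEGATIVE, or TIE."""
    pos = sum(1 for w in words if w in POSITIVE_WORDS)
    neg = sum(1 for w in words if w in NEGATIVE_WORDS)
    if pos > neg:
        return "POSITIVE"
    if neg > pos:
        return "NEGATIVE"
    return "TIE"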


D. Machine learning

The machine learning software to be used is SVMlight [5], which learns Support Vector Machines for binary classification. It is available for Unix systems, Windows, and Mac OS X.

You will need to read the documentation on the SVMlight website in order to figure out how to use the software. To test whether you know how to use it, it might be helpful to first create a small, "toy" dataset by hand, and then train and test the SVM on it.

When training the classifier, select the option for classification:

-z {c,r,p}   - select between classification (c), regression (r), and preference ranking (p)
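For example, training and classification could be driven from a Python script roughly as follows; the file names and the location of the SVMlight binaries are assumptions.

import subprocess

# Hypothetical file names; adjust to your own layout.
subprocess.run(["./svm_learn", "-z", "c", "train.dat", "model"], check=True)
subprocess.run(["./svm_classify", "test.dat", "model", "predictions"], check=True)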

A training file is of the format:

<line> .=. <target> <feature>:<value> <feature>:<value> ... <feature>:<value> # <info>
<target> .=. +1 | -1 | 0 | <float>
<feature> .=. <integer> | "qid"
<value> .=. <float>
<info> .=. <string>

Since we are doing binary classification, the value of <target> should be +1 or -1.

Every feature (identified in the file by an integer, as specified above) is associated with a value, which is a floating-point number. If you want a feature to be binary-valued, you may use values of 0.0 and 1.0.

With binary features, it is not necessary to include an explicit representation of features that do not occur. For example, suppose a document contains 100 different words out of a vocabulary of 50,000 possible words. If you are using binary features, it suffices to include a feature with a value of 1.0 for each of the words that do occur. You do not have to include a feature with a value of 0.0 for each of the 49,900 words that do not appear in the document.

You do not need to perform smoothing.
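As a sketch of how a feature file could be produced for binary unigram features (the vocabulary mapping and file layout are assumptions), note that SVMlight expects feature numbers to start at 1 and to appear in increasing order on each line:

def write_svmlight_file(documents, vocab, out_path):
    """documents: list of (label, word_list) pairs with label +1 or -1.
    vocab: dict mapping word -> feature id, starting at 1.
    Writes one line per document with binary (1.0) features,
    sorted by feature id as SVMlight requires."""
    with open(out_path, "w", encoding="utf-8") as out:
        for label, words in documents:
            ids = sorted({vocab[w] for w in words if w in vocab})
            feats = " ".join(f"{i}:1.0" for i in ids)
            out.write(f"{label:+d} {feats}\n")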


E. Feature sets

Use these feature sets for training and testing your classifier:

1. unigrams

2. bigrams

3. unigrams + POS

4. adjectives

5. top unigrams

6. optimized

Detailed explanation:

1. unigrams: use the word unigrams that occurred >= 4 times in the training data. Let this quantity be N. (A small counting sketch for the cutoffs in (1) and (2) follows this list.)

2. bigrams: use the N most-frequent bigrams.

3. unigrams + POS: use all combinations of word/tag for each of the unigrams in (1). Since a word may occur with multiple tags, the quantity of this type of feature will be greater than N.

4. adjectives: use the adjectives that occurred >= 4 times. Let this quantity be M.

5. top unigrams: use the M most-frequent unigrams.

6. optimized: choose any combination of features you would like, to try to produce the best classifier possible. For example, you might choose different cutoff values for frequencies of different types of features. You could also create entirely new types of features. You could also try different settings for training the SVM. The optimized classifier should be produced through a process of repeatedly training the classifier and computing its performance on the validation set.
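As an example of the frequency cutoffs in (1) and (2), the feature vocabularies might be collected roughly as follows; the representation of the tokenized training data is an assumption.

from collections import Counter

def build_feature_vocab(training_docs, min_count=4):
    """training_docs: list of word lists from the training set.
    Returns (unigram_vocab, bigram_vocab), each mapping feature -> id from 1.
    Unigrams are kept if they occur >= min_count times; the N most-frequent
    bigrams are kept, where N is the number of unigrams."""
    uni_counts = Counter()
    bi_counts = Counter()
    for words in training_docs:
        uni_counts.update(words)
        bi_counts.update(zip(words, words[1:]))
    unigrams = [w for w, c in uni_counts.items() if c >= min_count]
    n = len(unigrams)
    bigrams = [b for b, _ in bi_counts.most_common(n)]
    uni_vocab = {w: i + 1 for i, w in enumerate(unigrams)}
    bi_vocab = {b: i + 1 for i, b in enumerate(bigrams)}
    return uni_vocab, bi_vocab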


F. Evaluation

Train the SVMs on the training data and perform preliminary tests on the validation data. To evaluate your classifiers, compute the accuracy rate on the testing data, which is the percentage of movie reviews correctly classified. For the baseline classifier, also compute the number of ties.
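For example, accuracy could be computed from an SVMlight predictions file (one score per line, whose sign gives the predicted class) with a sketch like the following; the file names and label ordering are assumptions.

def accuracy(pred_path, gold_labels):
    """gold_labels: list of +1/-1 ints in the same order as the test file.
    pred_path: SVMlight predictions file, one score per line; the sign
    of the score is taken as the predicted class."""
    with open(pred_path) as f:
        preds = [1 if float(line) > 0 else -1 for line in f if line.strip()]
    correct = sum(p == g for p, g in zip(preds, gold_labels))
    return 100.0 * correct / len(gold_labels)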

Evaluate your classifiers on the testing data when it is released. Do not further optimize
your system based on performance on the testing data.


G. Turn in

Produce a document that states:

- Short descriptions of attached files

- A list of the positive and negative words chosen for your baseline system

- Performance of the baseline system on the test set

- A table listing the number of distinct features for each feature set. Since the split of the data into training and testing is not exactly the same as Pang et al.'s, the quantity of different features will be similar, but not identical.

- A table of performance of the classifiers on the validation set and test set

- A written comparison of your results with Pang et al.'s (minimum 5 lines)

- A table listing the 50 most frequently misclassified reviews (across all 6 classifiers) in the validation set, and the number of classifiers by which they were misclassified. For example, the review cv808_12635.txt might have been misclassified by 4 classifiers. Describe 5 different characteristics of the frequently misclassified reviews, showing excerpts from 2 reviews for each characteristic. For each of these characteristics, describe a possible feature that could be added to improve performance.


H. Submission:

A compressed directory, containing:

- All source code

- One example of a feature file that you produced

- Your written document

- Any additional files that you would like to attach



I. References

[1] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan, Thumbs up? Sentiment Classification using Machine Learning Techniques, Proceedings of EMNLP 2002.

[2] http://www.cs.cornell.edu/people/pabo/movie-review-data/

[3] Yoshimasa Tsuruoka and Jun'ichi Tsujii, Bidirectional Inference with the Easiest-First Strategy for Tagging Sequence Data, Proceedings of HLT/EMNLP 2005, pp. 467-474.

[4] http://www-tsujii.is.s.u-tokyo.ac.jp/~tsuruoka/postagger/

[5] http://svmlight.joachims.org/