Sentiment analysis of movie reviews using Support Vector Machines
Sentiment analysis is a subfield of NLP concerned with determining the opinion and
subjectivity expressed in a text. It has applications in the analysis of online product
reviews, recommendations, blogs, and other types of opinionated documents.
In this assignment you will develop classifiers for sentiment analysis of movie
reviews using Support Vector Machines (SVMs), in the manner of the paper by Pang,
Lee, and Vaithyanathan, which was the first research on this topic. The goal is to
develop a classifier that performs sentiment analysis, assigning a movie review a label of
"positive" or "negative" that predicts whether the author of the review liked the movie or not.
You may use Java, Python, or other programming and scripting languages of your choice for this
assignment, but for the machine learning you must use SVMlight (section D).
The data (available on the course web page) consists of 1,000 positive and 1,000 negative
reviews. These have been divided into training, validation, and test sets of 800, 100, and
100 reviews, respectively. In order to encourage you not to optimize against the testing
set while developing your classifiers, the testing data will not be immediately available.
The reviews were obtained from Pang's website, and then part-of-speech tagged using
a bidirectional Maximum Entropy Markov Model [3, 4].
Each document is formatted as one sentence per line. Each token is of the format
word/POStag, where a "word" also includes punctuation. Each word is in lowercase.
There is sometimes more than one slash in a token, such as in writer/director/NN.
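Because of tokens like writer/director/NN, splitting on the last slash recovers the word and tag cleanly. A minimal sketch (the function name is illustrative):

```python
def split_token(token):
    """Split a word/POStag token on its final slash, so internal
    slashes (e.g. "writer/director/NN") stay inside the word."""
    word, tag = token.rsplit("/", 1)
    return word, tag

# split_token("writer/director/NN") -> ("writer/director", "NN")
```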
C. Baseline system
For a baseline system, think of 20 words that you think would be indicative of a positive
movie review, and 20 words that you think would be indicative of a negative review.
To develop the baseline classifier, take this approach: given a movie review, count how
many times it contains either a positive word or a negative word (token occurrences).
Assign the label POSITIVE if the review contains more positive words than negative
words. Assign the label NEGATIVE if it contains more negative words than positive
words. If there are an equal number of positive and negative words, it is a TIE.
D. Machine learning
The machine learning software to be used is SVMlight, which learns Support Vector
Machines for binary classification. It is available for Unix systems, Windows, and Mac.
You will need to read the documentation on the SVMlight website in order to figure out
how to use the software. To test whether you know how to use it, it might be helpful to
first create a small, "toy" dataset by hand, and then
train and test the SVM on it.
When training the classifier, select the option for classification: the software lets you
select between classification (c), regression (r), and preference ranking (p).
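Concretely, a training and classification run might look like the following; the file names are placeholders, and -z c is SVMlight's classification switch (which is also its default):

```shell
# Train an SVM classifier on the training file, writing the model file.
svm_learn -z c train.dat model
# Apply the model to held-out data, writing one prediction per line.
svm_classify valid.dat model predictions
```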
A training file is of the format:

<line>    .=. <target> <feature>:<value> <feature>:<value> ... <feature>:<value> # <info>
<target>  .=. +1 | -1 | 0 | <float>
<feature> .=. <integer> | "qid"
<value>   .=. <float>
<info>    .=. <string>
Since we are doing binary classification, the value of <target> should be +1 or -1. Each
feature (which may be expressed as an integer or a string) is associated with a
value, which is a floating-point number. If you want a feature to be binary, you
may use values of 0.0 and 1.0.
With binary features, it is not necessary to include an explicit representation of
features that do not occur. For example, suppose a document contains 100 different
words out of a vocabulary of 50,000 possible words. If you are using binary features, it
suffices to include a feature with a value of 1.0 for each of the words that do occur. You
do not have to include a feature with a value of 0.0 for each of the 49,900 words that do
not appear in the document.
You do not need to perform smoothing.
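A sketch of a helper that emits one sparse, binary-feature line in this format (the function name is illustrative; note that SVMlight requires feature ids to appear in increasing order):

```python
def svmlight_line(target, feature_ids, info=""):
    """Format one example: sparse binary features (value 1.0), with
    integer feature ids sorted increasingly as SVMlight expects."""
    feats = " ".join(f"{fid}:1.0" for fid in sorted(set(feature_ids)))
    comment = f" # {info}" if info else ""
    return f"{target} {feats}{comment}"

# A positive review containing features 3, 12, and 7:
# svmlight_line("+1", [3, 12, 7]) -> "+1 3:1.0 7:1.0 12:1.0"
```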
E. Feature sets
Use these feature sets for training and testing your classifier:
1. unigrams: use the word unigrams that occurred >= 4 times in the training data. Let this
quantity be N.
2. bigrams: use the N most frequent bigrams in the training data.
3. unigrams + POS: use all combinations of word/tag for each of the unigrams in (1).
Since a word may occur with multiple tags, the quantity of this type of feature will be
greater than N.
4. adjectives: use the adjectives that occurred
>= 4 times. Let this quantity be M.
5. top unigrams: use the M most frequent unigrams in the training data.
6. optimized: choose any combination of features you would like, to try to produce the
best classifier possible. For example, you might choose different cutoff values for the
frequencies of different types of features. You could also create entirely new types of
features. You could also try different settings for training the SVM. The optimized
classifier should be produced through a process of repeatedly training the classifier and
computing its performance on the validation set.
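Feature set (1) can be built along these lines; the function name and the convention of assigning ids starting from 1 are illustrative:

```python
from collections import Counter

def unigram_features(training_docs, min_count=4):
    """Collect the word unigrams occurring at least `min_count` times in
    the training data and map each to a distinct integer feature id."""
    counts = Counter(w for doc in training_docs for w in doc)
    vocab = sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i + 1 for i, w in enumerate(vocab)}
```

The same cutoff logic, with a different threshold or restricted to adjective-tagged tokens, covers feature sets (4) and (6).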
Train the SVMs on the training data and perform preliminary tests on the validation data.
To evaluate your classifiers, compute the accuracy rate on the testing data, which is the
percentage of movie reviews correctly classified. For the baseline classifier, also compute
the number of ties.
Evaluate your classifiers on the testing data when it is released. Do not further optimize
your system based on performance on the testing data.
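The accuracy rate is simply the fraction of matching labels, expressed as a percentage; a minimal sketch:

```python
def accuracy(gold_labels, predicted_labels):
    """Percentage of reviews whose predicted label matches the gold label."""
    correct = sum(g == p for g, p in zip(gold_labels, predicted_labels))
    return 100.0 * correct / len(gold_labels)
```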
Produce a document that states:
Short descriptions of attached files
A list of the positive and negative words chosen for your baseline system
Performance of the baseline system on the test set
A table listing the number of distinct features for each feature set. Since the split of the
data into training and testing is not exactly the same as Pang et al.'s, the quantity of
different features will be similar, but not identical.
A table of performance of the classifiers on the validation set and test set
A written comparison of your results with Pang et al.'s (minimum 5 lines)
A table listing the 50 most frequently misclassified reviews (across all 6
classifiers) in the validation set, and the number of classifiers by which they were
misclassified. For example, the review cv808_12635.txt might have been misclassified
by 4 classifiers. Describe 5 different characteristics of the frequently misclassified
reviews, showing excerpts from 2 reviews for each characteristic. For each of the
characteristics, describe a possible feature that could be added to improve performance.
A compressed directory, containing:
All source code
One example of a feature file that you produced
Your written document
Any additional files that you would like to attach
 Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan, Thumbs up? Sentiment Classification
using Machine Learning Techniques, Proceedings of EMNLP 2002.
 Yoshimasa Tsuruoka and Jun'ichi Tsujii, Bidirectional Inference with the Easiest-First
Strategy for Tagging Sequence Data, Proceedings of HLT/EMNLP 2005, pp. 467-474.