Tsur et al ICWSM 10

From Cohen Courses
Revision as of 21:49, 30 September 2012 by Epapalex

This is a Paper that appeared at the International AAAI Conference on Weblogs and Social Media (ICWSM) 2010.

Citation

@inproceedings{tsur2010icwsm,
 title={ICWSM--A great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews},
 author={Tsur, O. and Davidov, D. and Rappoport, A.},
 booktitle={Proceedings of the fourth international AAAI conference on weblogs and social media},
 pages={162--169},
 year={2010}
}

Online version

ICWSM–A great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews

Summary

In this work, the authors introduce a novel semi-supervised approach for identifying sarcasm in online product reviews. As they point out, this problem is particularly hard: sometimes even humans find it difficult to recognize sarcasm, let alone a machine learning algorithm.

To that end, the authors first define a small, hand-labeled training set containing some very obvious sarcastic comments and some clearly non-sarcastic ones. The sarcasm level of each of those reviews is rated on a scale from 1 to 5. Using this training set, they extract two different types of features:

  • Pattern Based: For pattern identification, the authors separate all terms into High Frequency Words (HFW) and Content Words (CW), simply by thresholding their corpus frequency (HFWs having a higher frequency than CWs). Each pattern is then allowed to contain 2-6 HFWs and 1-6 CWs. As a next step, they filter out patterns that are not particularly useful (to cut down their initially large number) by eliminating patterns that 1) appear in reviews of only a single product, or 2) appear in the training set in reviews that are either clearly sarcastic (rated 5) or clearly non-sarcastic (rated 1).
  • Syntactic: These features mainly pertain to the punctuation used in a review. For example, the number of quotation marks, exclamation marks, question marks, and capitalized words, as well as sentence length, are important such features. As a preview of the results, however, the authors conclude that punctuation marks are not particularly useful for detecting sarcasm in written text (in contrast to spoken communication, as previous work had concluded).
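The HFW/CW split described above can be sketched as follows. This is a minimal illustration: the frequency cutoff and the single-threshold scheme are assumptions for the sketch, not the paper's exact values.

```python
from collections import Counter

# Assumed illustrative cutoff; the paper's actual threshold(s) may differ.
FREQ_THRESHOLD = 0.01

def split_hfw_cw(corpus_tokens):
    """Split distinct terms into High Frequency Words (HFW) and
    Content Words (CW) by thresholding their corpus frequency."""
    counts = Counter(corpus_tokens)
    total = len(corpus_tokens)
    hfw = {t for t, c in counts.items() if c / total >= FREQ_THRESHOLD}
    cw = set(counts) - hfw
    return hfw, cw
```

Candidate patterns would then be sequences drawn from these two classes (2-6 HFWs and 1-6 CWs per pattern), before the filtering steps above are applied.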

After the feature extraction process, to decide how sarcastic a new comment drawn from a test dataset is, they use a k-NN-inspired classifier that works as follows: for any given (new) review, after extracting its features and converting it to a vector in the feature space, they find its k nearest neighboring vectors in the training set, in the Euclidean sense. The label of that review is then determined by the weighted average of the scores/labels of those k neighbors.
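The classification step above can be sketched as a weighted k-NN scorer. The inverse-distance weighting here is an assumption for illustration; the paper's exact weighting scheme may differ.

```python
import math

def knn_score(test_vec, train_vecs, train_labels, k=5):
    """Score a new review by the distance-weighted average of the
    sarcasm labels of its k nearest training vectors (Euclidean)."""
    # Pair each training vector's distance to the test vector with its label.
    pairs = sorted(
        (math.dist(test_vec, v), y) for v, y in zip(train_vecs, train_labels)
    )
    nearest = pairs[:k]
    # Inverse-distance weighting; epsilon avoids division by zero.
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)
```

A review whose nearest training neighbors are all rated 1 would thus score near 1 (non-sarcastic), while one surrounded by rated-5 neighbors would score near 5.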

Evaluation

Dataset:

The data used in the experiments come from Amazon. In particular, they consist of 66,271 reviews spanning 120 different products. The average number of stars (i.e., the average rating) for those products was 4.19/5, whereas the average review length was 953 characters.

Metrics:

The authors base their evaluation on:

  • Precision
  • Recall
  • Accuracy
  • F-measure
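The four metrics above follow directly from confusion-matrix counts; a minimal sketch, assuming a binary sarcastic/non-sarcastic setting:

```python
def evaluation_metrics(tp, fp, fn, tn):
    """Precision, recall, accuracy, and F-measure from confusion-matrix
    counts (tp/fp = true/false positives, fn/tn = false/true negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f_measure
```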

Baselines:

Results:


Related Papers

Study Plan