Dave et al., WWW 2003
This is a summary of a research paper, written as part of Social Media Analysis 10-802, Fall 2012.
Citation
Dave, K., Lawrence, S., and Pennock, D.M. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. WWW 2003.
Online Version
Abstract from the paper
The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.
Summary
Overview
This paper proposes techniques for opinion mining and for classifying opinions as positive or negative. It also discusses various contemporary methods for sentiment classification and how they cater to different tasks.
Proposed Techniques
The system uses various approaches to extract features from the given documents and to score those features. The authors also experiment with training various machine learning classifiers on self-tagged product reviews from websites such as amazon.com and C|net.com. On C|net.com, a user can give each review a "thumbs up" or a "thumbs down" to mark it as positive or negative, respectively. Similarly, on amazon.com a customer gives a scalar rating of one to five stars, with one star being the lowest and five stars the highest.
Feature Selection
For feature selection, the paper proposes substituting certain words, such as numbers, product names, product-type-specific words, and low-frequency words, with common placeholder tokens in order to generalize the features. It also discusses adding features based on the WordNet synset of a word for its part of speech in the sentence, but notes that using synsets leads to an explosion in the size of the feature set and adds more noise than signal. It further proposes using collocation features, especially to capture noun-adjective relationships, and tries stemming and negation handling to deal with language variation.
Once the substitutions are done, n-gram features are extracted. The authors experiment with bigram and trigram features, as well as backing off to lower-order n-grams for smoothing. More features are obtained from arbitrary substrings using Church's suffix array algorithm.
Thresholds such as frequency cutoffs, together with smoothing, are used to restrict the number of features, which eases computation and improves the relevance of the remaining features.
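As a rough illustration of this preprocessing, the sketch below substitutes numbers, product names, and low-frequency words with placeholder tokens and then counts n-gram features. The placeholder names, tokenization, and regular expression are illustrative assumptions rather than the paper's exact choices.

```python
import re
from collections import Counter

def substitute_tokens(tokens, product_names, vocab):
    """Replace numbers, product names, and low-frequency words with
    generic placeholder tokens so features generalize across products.
    The placeholder names (NUMBER, PRODUCTNAME, UNIQUE) are illustrative."""
    out = []
    for tok in tokens:
        low = tok.lower()
        if re.fullmatch(r"\d+(\.\d+)?", tok):
            out.append("NUMBER")
        elif low in product_names:
            out.append("PRODUCTNAME")
        elif low not in vocab:          # low-frequency / unseen word
            out.append("UNIQUE")
        else:
            out.append(low)
    return out

def ngram_features(tokens, orders=(1, 2, 3)):
    """Count unigram, bigram, and trigram features over the substituted tokens."""
    feats = Counter()
    for n in orders:
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

# Usage on a toy review
tokens = "The PowerShot takes 5 megapixel photos".split()
vocab = {"the", "takes", "photos"}      # words above an assumed frequency cutoff
subbed = substitute_tokens(tokens, {"powershot"}, vocab)
print(ngram_features(subbed).most_common(5))
```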
Feature Scoring
The baseline method for scoring a feature f_i is

    score(f_i) = (p(f_i | C) - p(f_i | C')) / (p(f_i | C) + p(f_i | C'))

where C and C' are the sets of positive and negative reviews respectively.
Dave et al. also tried other scoring methods based on information gain, odds ratios, and Jaccard's measure of similarity, but these did not show significant improvements over the baseline. Different weighting schemes, such as a log transform, a Gaussian weighting scheme, and residual inverse document frequency, were also tried to see how they affect classification.
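The following is a minimal sketch of the baseline scoring formula above; the add-alpha smoothing applied to the probability estimates is an assumption made here to handle unseen features, not necessarily the smoothing used in the paper.

```python
from collections import Counter

def score_features(pos_docs, neg_docs, alpha=1.0):
    """Baseline score: (p(f|C) - p(f|C')) / (p(f|C) + p(f|C')),
    where C / C' are the positive / negative review sets.
    alpha is an assumed add-alpha smoothing constant."""
    pos_counts = Counter(f for doc in pos_docs for f in doc)
    neg_counts = Counter(f for doc in neg_docs for f in doc)
    vocab = set(pos_counts) | set(neg_counts)
    pos_total = sum(pos_counts.values()) + alpha * len(vocab)
    neg_total = sum(neg_counts.values()) + alpha * len(vocab)
    scores = {}
    for f in vocab:
        p_pos = (pos_counts[f] + alpha) / pos_total
        p_neg = (neg_counts[f] + alpha) / neg_total
        scores[f] = (p_pos - p_neg) / (p_pos + p_neg)
    return scores

# Usage on toy feature lists (one list of features per review)
pos_docs = [["great", "camera"], ["love", "the", "pictures"]]
neg_docs = [["poor", "battery"], ["broke", "quickly"]]
print(sorted(score_features(pos_docs, neg_docs).items(), key=lambda kv: -kv[1])[:3])
```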
Sentiment Classification
A document is classified as a positive or negative review based on the sum of the scores of the features present in it: positive if the sum is positive, negative otherwise. The authors also experimented with a Naive Bayes classifier, an SVM classifier, a maximum entropy classifier, and an EM-based classifier from the Rainbow text classification package to compare against their approach. In addition, they crawl search engine results for a given product name to obtain more reviews to analyze.
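A small sketch of this score-summing classification rule follows; the toy feature scores are made up for illustration and would in practice come from the feature-scoring step on the training corpus.

```python
def classify(doc_features, scores):
    """Classify a review by the sign of the summed feature scores:
    positive if the sum is greater than zero, negative otherwise."""
    total = sum(scores.get(f, 0.0) for f in doc_features)
    return "positive" if total > 0 else "negative"

# Usage with made-up scores (in practice, produced by the feature-scoring step)
scores = {"great": 0.8, "love": 0.6, "poor": -0.7, "broke": -0.9}
print(classify(["great", "camera", "poor", "battery"], scores))  # prints "positive" (0.8 - 0.7 > 0)
```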
Evaluation
They discuss results from two tests, Test 1 and Test 2. Test 1 evaluates on each of the seven C|net product categories in turn, using the other six as the training set. Test 2 uses randomly selected sets of positive and negative reviews from the four largest C|net product categories, holding out one set for testing and training on the remaining sets. The product reviews are obtained from amazon.com and C|net.com.
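As a sketch of the Test 1 protocol, the following assumes the labeled reviews are grouped in a dictionary keyed by product category; the data layout and category names are illustrative.

```python
def leave_one_category_out(categories):
    """For each product category, hold it out as the test set and
    train on the reviews from all remaining categories (Test 1 style)."""
    for held_out, test_reviews in categories.items():
        train_reviews = [r for cat, revs in categories.items()
                         if cat != held_out for r in revs]
        yield held_out, train_reviews, test_reviews

# Usage with toy data (the paper uses the seven C|net product categories)
categories = {
    "cameras": [("takes great pictures", "pos"), ("blurry photos", "neg")],
    "printers": [("constant paper jams", "neg")],
}
for name, train, test in leave_one_category_out(categories):
    print(name, "train:", len(train), "test:", len(test))
```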
They compare classification accuracy across the proposed approaches:
- The use of WordNet, stemming, collocations, and negation did not improve results compared with the unigram baseline.
- The trigram model performed best, followed by the bigram model. Backing off to lower-order n-grams for smoothing did not improve results.
- The bigram baseline model also outperformed the SVM, EM, maximum entropy, and Naive Bayes classifiers; among these, Naive Bayes with Laplace smoothing performed best.
- The various scoring methods did not show significant improvements in accuracy, although the Gaussian weighting scheme gave slightly better results than the other weighting schemes.
- When evaluating new product reviews mined from the web, the best results came from substring-based features scored with dynamic programming by product class.
Discussion
Some of the difficulties faced in the task of product review classification are:
- Users sometimes give 1 star instead of 5 because they misunderstand the rating system.
- Customers sometimes compare the product to a different product that was better or worse, which causes misclassification because the classifier has no semantic understanding of the text.
- Most reviews are very short, so few features can be extracted from them.
- Most products have more positive reviews than negative ones.
Variable-length features showed promising results, along with metadata substitutions.
Related Papers
- Pang, B., L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, Volume 10, 79–86. [One of the earliest works on sentiment analysis, which inspired further work on review classification]
- Turney, P. D. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 417–424. [One of the earliest works on sentiment analysis, which inspired further work on review classification]
Study Plan
Resources useful for understanding this paper
- Article: Opinion Mining (http://en.wikipedia.org/wiki/Opinion_mining)
- Paper: Church's suffix array algorithm (http://acl.ldc.upenn.edu/J/J01/J01-1001.pdf)
- Article: Naive Bayes classifier (http://en.wikipedia.org/wiki/Naive_Bayes_classifier)
- Article: SVM classifier (http://en.wikipedia.org/wiki/Support_vector_machine)