== Citation ==

Peter D. Turney, Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002).

== Online version ==

[https://aclanthology.org/P02-1053.pdf]

== Summary ==

This is an early and influential [[Category::paper]] presenting an unsupervised approach to [[AddressesProblem::review classification]]. The basic ideas are:

* To use patterns of part-of-speech tags to pick out phrases that are likely to be meaningful and unambiguous with respect to semantic orientation (e.g. ADJ NOUN might pick out "good service" or "delicious desserts"); a small sketch of this extraction step is given after this list.
* To use [[UsesMethod::pointwise mutual information]] (PMI) to score the association of each phrase in a review with the word "excellent" and with the word "poor", and to give each phrase an overall polarity score based on the difference between its PMI with "excellent" and its PMI with "poor". A large corpus (the Web, queried through a search engine) was used to estimate these scores.
* To score the polarity of a review based on the total polarity of the phrases in it.
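
The phrase-extraction step in the first bullet can be illustrated with a short sketch. This is not the paper's exact implementation: it assumes NLTK and its default English POS tagger, and it uses only a subset of the two-word tag patterns described in the paper.

<pre>
# Sketch of POS-pattern phrase extraction (a subset of the paper's patterns).
# Assumes NLTK with its tokenizer and tagger data installed, e.g. via
# nltk.download('punkt') and nltk.download('averaged_perceptron_tagger').
import nltk

# Two-word tag patterns that tend to carry sentiment, e.g. "good service"
# (JJ NN) or "truly disappointing" (RB JJ).  The paper's full pattern list
# also constrains the tag of the word that follows the pair.
PATTERNS = {("JJ", "NN"), ("JJ", "NNS"), ("RB", "JJ"), ("RBR", "JJ")}

def extract_phrases(review_text):
    """Return candidate two-word phrases whose POS tags match a pattern."""
    tagged = nltk.pos_tag(nltk.word_tokenize(review_text))
    return [f"{w1} {w2}"
            for (w1, t1), (w2, t2) in zip(tagged, tagged[1:])
            if (t1, t2) in PATTERNS]

print(extract_phrases("The staff offered good service and delicious desserts."))
</pre>

On the example sentence this should pick out "good service" and "delicious desserts", which are then scored as described in the next section.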

== Brief description of the method ==

The algorithm takes a written review as input. It first assigns a POS tag to each word in the review and uses tag patterns to identify two-word phrases containing adjectives or adverbs. The PMI-IR algorithm is then used to estimate the semantic orientation of each extracted phrase. The pointwise mutual information (PMI) between two words <math> w_1 </math> and <math> w_2 </math> is defined as follows:

<math>
PMI(w_1,w_2)=\log_2 \frac{p(w_1,w_2)}{p(w_1)\,p(w_2)}
</math>

where <math> p(w_1,w_2) </math> is the probability that <math> w_1 </math> and <math> w_2 </math> co-occur. The semantic orientation (SO) of a phrase is then defined as follows:

<math>
SO(phrase)=PMI(phrase,'excellent')-PMI(phrase,'poor')
</math>
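
For example, a phrase whose PMI with "excellent" is 3 and whose PMI with "poor" is 1 gets SO = 2 and is treated as positively oriented; a phrase with a negative SO is treated as negatively oriented (the numbers here are purely illustrative).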

Estimating these probabilities from search-engine hit counts, with the NEAR operator used to approximate co-occurrence, the definition can be rewritten as:

<math>
SO(phrase)=\log_2\left(\frac{hits(phrase\ NEAR\ 'excellent')\,hits('poor')}{hits(phrase\ NEAR\ 'poor')\,hits('excellent')}\right)
</math>

where the NEAR operator requires the two terms to appear close to each other in the corpus. Using this formula, the average semantic orientation of the phrases in a review is computed. The paper shows that this average is usually positive for items the reviewer tagged as "recommended" and usually negative for items tagged as "not recommended".
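
A minimal sketch of this scoring step is shown below. The hits() function is a hypothetical stand-in for the hit counts returned by a search engine that supports a NEAR operator (as in PMI-IR); the toy counts and the small smoothing constant added to each count (to avoid division by zero) are illustrative rather than the paper's exact setup.

<pre>
# Sketch of the semantic-orientation scoring step.  `hits(query, near=None)`
# is a hypothetical stand-in for a search-engine hit count; with `near` set
# it counts documents where the two arguments occur close together.
import math

EPS = 0.01  # small constant added to every count to avoid division by zero

def semantic_orientation(phrase, hits):
    """SO(phrase) = log2( hits(p NEAR 'excellent') * hits('poor')
                          / (hits(p NEAR 'poor') * hits('excellent')) )."""
    num = (hits(phrase, near="excellent") + EPS) * (hits("poor") + EPS)
    den = (hits(phrase, near="poor") + EPS) * (hits("excellent") + EPS)
    return math.log2(num / den)

def average_orientation(phrases, hits):
    """Average SO over the phrases extracted from one review."""
    scores = [semantic_orientation(p, hits) for p in phrases]
    return sum(scores) / len(scores) if scores else 0.0

# Toy hit counts, purely for illustration.
TOY = {("good service", "excellent"): 80, ("good service", "poor"): 5,
       ("excellent", None): 10000, ("poor", None): 12000}

def toy_hits(query, near=None):
    return TOY.get((query, near), 0)

avg = average_orientation(["good service"], toy_hits)
print(avg, "-> recommended" if avg > 0 else "-> not recommended")
</pre>

A review is then labeled "recommended" when the average orientation of its phrases is positive, matching the classification rule described above.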

== Experimental Result ==

This approach was fairly successful on a range of review-classification tasks: it achieved accuracies between 65% and 85% in predicting the author-assigned "recommended" flag for Epinions reviews of eight diverse products, ranging from cars to movies. Many later writers used several key ideas from the paper, including treating polarity prediction as a document-classification problem, classifying documents based on likely-to-be-informative phrases, and using unsupervised or semi-supervised learning methods.

== Related papers ==

The widely cited [[RelatedPaper::Pang et al EMNLP 2002]] paper was influenced by this paper, but considers supervised learning techniques. Its choice of movie reviews as the domain was suggested by the (relatively) poor performance of Turney's method on movies.

An interesting follow-up paper is [[RelatedPaper::Turney and Littman, TOIS 2003]], which focuses on evaluating the technique of using PMI to predict the [[semantic orientation of words]].