This is a paper discussed in Social Media Analysis 10-802 in Spring 2011.

== Citation ==

Takamura, H., T. Inui, and M. Okumura. 2005. Extracting semantic orientations of words using spin model. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, 140.

== Online version ==

[http://portal.acm.org/citation.cfm?id=1219857 Extracting semantic orientations of words using spin model]

== Summary ==

This paper proposes a method for extracting [[AddressesProblem::semantic orientation of words]] using a [[UsesMethod::spin model]], a physical model of a set of electrons with spins. Each word has a positive or negative orientation, corresponding to an electron with up or down spin. Computing the probability function exactly is intractable, so mean field theory is used to approximate the average orientation of each word. In the spin model, two neighboring electrons (words) tend to have the same spin (orientation).

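The mean-field idea can be sketched in code: each word keeps an average orientation in [-1, 1], updated from its neighbours' averages through a tanh of the local field. The function name, dict-based network representation, and synchronous update schedule below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def mean_field_update(avg, weights, beta=1.0):
    """One synchronous mean-field sweep for an Ising-like spin model.

    avg:     dict word -> current average orientation in [-1, 1]
    weights: dict word -> dict neighbour -> signed link weight
             (positive = same orientation, negative = opposite)
    beta:    inverse temperature; larger beta pushes values toward +/-1
    """
    new = {}
    for word, neigh in weights.items():
        # Local field: weighted sum of the neighbours' average orientations.
        field = sum(wt * avg[v] for v, wt in neigh.items())
        new[word] = math.tanh(beta * field)
    return new
```

Iterating this update from fixed seed orientations drives connected words toward agreement with their neighbours, the word-level analogue of neighboring spins aligning.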
The approach first constructs a lexical network in which two words are linked if one appears in the gloss of the other. Each link indicates that the two words have either the same or the opposite orientation; the latter can arise from negation words such as 'not'. The links are then weighted according to the degrees of both endpoint words. They call this the gloss network. A second network, the gloss-thesaurus network, additionally uses synonyms, antonyms and hypernyms. Enhancing that network with cooccurrence information extracted from corpora yields the gloss-thesaurus-corpus network.

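As an illustration of the construction, here is a toy gloss network with signed links. The paper only says that link weights depend on the degrees of both words; the specific normalization w = ±1/√(d_i·d_j) used below, and the toy glosses themselves, are assumptions for the sketch:

```python
import math
from collections import defaultdict

# Toy gloss dictionary: word -> (gloss words, subset that is negated).
# A negated gloss word (e.g. "not good") flips the sign of the link.
glosses = {
    "good": (["nice"], set()),
    "nice": (["good"], set()),
    "bad":  (["good"], {"good"}),   # gloss like "not good"
}

links = defaultdict(dict)
for word, (gloss_words, negated) in glosses.items():
    for g in gloss_words:
        sign = -1.0 if g in negated else 1.0
        links[word][g] = sign
        links[g][word] = sign        # links are symmetric

# Normalize each link by the degrees of its two endpoints (assumed formula).
degree = {w: len(nb) for w, nb in links.items()}
weights = {
    w: {v: s / math.sqrt(degree[w] * degree[v]) for v, s in nb.items()}
    for w, nb in links.items()
}
```

The degree normalization keeps high-degree hub words from dominating the local field of every neighbour.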
Given the orientations of a small number of seed words, the orientations of all other words are propagated through the network. The propagation repeatedly applies an update formula to each word's average orientation and stops when the change in the value of the variational free energy falls below a threshold. Words with high final average values are classified as positive.

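The propagation step can be sketched as a fixed-point iteration with the seed words clamped. One simplification to note: the paper stops when the change in the variational free energy is small, whereas this sketch stops when the average orientations themselves stop moving; all names below are illustrative:

```python
import math

def propagate(weights, seeds, beta=1.0, tol=1e-6, max_iter=1000):
    """Propagate orientations from seed words through the lexical network.

    weights: dict word -> dict neighbour -> signed link weight
    seeds:   dict word -> +1.0 or -1.0 (orientations held fixed)
    Returns: dict word -> average orientation; positive value = positive word.
    """
    avg = {w: seeds.get(w, 0.0) for w in weights}
    for _ in range(max_iter):
        new = {}
        for w, neigh in weights.items():
            if w in seeds:
                new[w] = seeds[w]  # seed orientations stay clamped
            else:
                field = sum(wt * avg[v] for v, wt in neigh.items())
                new[w] = math.tanh(beta * field)
        # Simplified stopping rule: averages stopped moving
        # (the paper monitors the variational free energy instead).
        delta = max(abs(new[w] - avg[w]) for w in avg)
        avg = new
        if delta < tol:
            break
    return avg
```

Thresholding the returned averages at zero then splits the vocabulary into positive and negative words.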
They created a network of approximately 88,000 words collected from the [[UsesDataset::Wall Street Journal]] and [[UsesDataset::Brown corpus]]. For evaluation they used a labeled dataset of 3,596 words as a gold standard. Parameter tuning, as well as the effect of the number of seed words, was evaluated using 10-fold cross-validation.

Based on their experiments they conclude that the network incorporating synonyms and corpus cooccurrence information improves accuracy when there are more than 2 seed words. A possible explanation is that with only 2 seed words the model has a relatively large degree of freedom, so the optimization can end in a local optimum. Furthermore, they show that their method performs well compared with the shortest-path method of [[RelatedPaper::Kamps LREC 2004]] [1] and the bootstrapping method of [[RelatedPaper::Hu SIGKDD 2004]] [2]. Their method is not perfect, however: it suffers from word ambiguity, lack of structural information, and idiomatic expressions.

''' References '''

1. Jaap Kamps, Maarten Marx, Robert J. Mokken, and Maarten de Rijke. 2004. Using WordNet to measure semantic orientation of adjectives. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), volume IV, pages 1115–1118.

2. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 2004 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2004), pages 168–177.

== Remarks ==

We noticed that the reported accuracy is based on cross-validation over the same labeled dataset used for parameter tuning, which risks overfitting to that dataset; it would have been better to also evaluate on a separate held-out test set.