Takamura et al. 2005
Revision as of 00:56, 27 September 2012
Citation
Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientations of words using spin model. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL '05).
Online version
Summary
This paper follows many other sentiment analysis papers in analyzing graphs of words with synonym and antonym links to estimate the net sentiment of each word. Their estimation model, however, is a clear departure from most other work in NLP.
The fundamental idea of the paper is that words occurring near each other (according to search engine hit counts) are likely to have similar sentiment values. They observe that this phenomenon is similar to the problem of determining the most likely spin state of each electron in a field of electrons.
As they describe it, on a local scale electrons near each other tend to have the same spin. To have two electrons near each other with differing spins requires some amount of energy, and as such, the goal of the optimization problem is to find the state of the electron field with the lowest possible energy. Fortunately, computational physicists have studied this spin model thoroughly. While exhaustive computation requires exponential time, they have also found tractable approximations.
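To make the cost of exact computation concrete, the lowest-energy state of a small spin system can be found by brute force over all 2^N configurations. The sketch below is a toy illustration, not the authors' code; the weight matrix is invented for the example.

```python
import itertools

import numpy as np


def energy(x, W):
    """Ising-style energy: low when strongly linked spins agree."""
    return -0.5 * x @ W @ x


def exact_ground_state(W):
    """Enumerate all 2^N spin assignments -- exponential in N."""
    N = W.shape[0]
    best_x, best_e = None, float("inf")
    for spins in itertools.product([-1, 1], repeat=N):
        x = np.array(spins)
        e = energy(x, W)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e


# Toy weights: electrons 0-1 prefer the same spin, 1-2 prefer opposite spins.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, -1.0],
              [0.0, -1.0, 0.0]])
x, e = exact_ground_state(W)
# The minimum-energy states set x0 = x1 and x2 = -x1.
```

Already at a few dozen spins this enumeration becomes infeasible, which is why the tractable approximations mentioned above matter.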
Brief description of the method
The method asserts that the energy of a system of electrons is given by

<math>
E(x, W) = -\frac{1}{2}\sum_{i,j} w_{ij} x_i x_j
</math>

where <math>x_i</math> is the spin (+1 or -1) of the <math>i</math>th electron and <math>W</math> is an <math>N \times N</math> matrix representing the weights between each pair of electrons.

The probability of an electron configuration is given by

<math>
P(x|W) = \frac{1}{Z(W)} \exp(-\beta E(x, W))
</math>

where <math>Z(W)</math> is the normalization factor and <math>\beta</math> is a hyper-parameter called the <i>inverse-temperature</i>.
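Takamura et al. approximate this distribution with a mean-field method. The sketch below uses a simplified, assumed update rule (iterating the average spins via <math>m_i \leftarrow \tanh(\beta \sum_j w_{ij} m_j)</math>, with seed words clamped to known values); it illustrates the idea rather than transcribing the paper's exact algorithm.

```python
import numpy as np


def mean_field_spins(W, beta=1.0, seeds=None, iters=100):
    """Iteratively estimate average spins m_i = tanh(beta * sum_j w_ij * m_j).

    seeds: dict mapping electron index -> fixed spin (+1 or -1),
    playing the role of seed words with known sentiment.
    """
    N = W.shape[0]
    m = np.zeros(N)
    if seeds:
        for i, s in seeds.items():
            m[i] = s
    for _ in range(iters):
        m_new = np.tanh(beta * (W @ m))
        if seeds:
            for i, s in seeds.items():
                m_new[i] = s  # keep seed spins clamped
        m = m_new
    return m


# Toy lexical graph: word 1 is synonym-linked to positive seed word 0,
# word 2 is antonym-linked (negative weight) to word 1.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, -1.0],
              [0.0, -1.0, 0.0]])
m = mean_field_spins(W, beta=1.0, seeds={0: 1.0})
# m[1] converges to a positive value, m[2] to a negative one.
```

The sign of each converged <math>m_i</math> gives the predicted sentiment orientation of word <math>i</math>, which is how the spin model transfers to the lexical graph.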
Experimental Result
The approach described achieved accuracies ranging from 75.2% using two seed words to 91.5% using leave-one-out cross-validation. The authors compare their results to two previous methods for the same task on a separate lexical graph constructed using only synonym connections. The first is the graph-based shortest-distance algorithm of Hu and Liu, which achieved 70.8% accuracy, while Takamura et al.'s approach achieved 73.4%. The second was Riloff et al.'s bootstrapping method, which achieved 72.8%, compared to Takamura et al.'s 83.6% on that data set.