Kessler et al. 2009

From Cohen Courses
Revision as of 02:45, 2 October 2012 by Austinma

Citation

Jason S. Kessler and Nicolas Nicolov. 2009. Targeting Sentiment Expressions through Supervised Ranking of Linguistic Configurations. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI '09).

Online version

Indiana CS

Summary

This paper examines four different methods of attaching sentiment-containing phrases (sentiment expressions) to the nouns they describe (mentions). The authors assume a separate module can accurately identify both sentiment expressions and mentions, and seek only to determine which sentiment expressions modify which mentions.

Furthermore, their algorithms build a graph-like structure linking the speaker or holder of the opinion with the sentiment expression, the sentiment expression with the mention, and meronyms of mentions with their parents. This allows for efficient extraction of results beyond just the single sentiment value of one mention. Instead, queries such as "What do people not like about product X?" or "What other features do users who dislike the camera’s zoom lens feel strongly about?" may be answered by their data representation.
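As a minimal sketch of the graph-like structure described above (not the authors' implementation; every class and method name here is an illustrative assumption), holders link to sentiment expressions, expressions link to the mentions they target, and meronym mentions link to their parents, which is enough to answer queries like the ones quoted:

```python
from collections import defaultdict

class OpinionGraph:
    """Hypothetical container for holder -> expression -> mention links."""

    def __init__(self):
        self.targets = defaultdict(list)   # (expression, polarity) -> mentions
        self.holders = defaultdict(list)   # holder -> sentiment expressions
        self.parents = {}                  # meronym mention -> parent mention

    def add_opinion(self, holder, expression, polarity, mention):
        self.holders[holder].append((expression, polarity))
        self.targets[(expression, polarity)].append(mention)

    def add_meronym(self, part, whole):
        self.parents[part] = whole

    def negatives_about(self, product):
        """Answer queries like 'What do people not like about product X?'"""
        hits = []
        for (expr, polarity), mentions in self.targets.items():
            if polarity != "negative":
                continue
            for m in mentions:
                # Climb the meronym chain to see if m is a part of `product`.
                node = m
                while node in self.parents:
                    node = self.parents[node]
                if m == product or node == product:
                    hits.append((expr, m))
        return hits

g = OpinionGraph()
g.add_opinion("reviewer1", "too slow", "negative", "zoom lens")
g.add_meronym("zoom lens", "camera")
print(g.negatives_about("camera"))  # [('too slow', 'zoom lens')]
```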

Their data set was constructed by the authors themselves, and while their resulting precision was lower than that reported in some other papers, their blog-based data set poses its own difficulties. Their methods did, however, achieve higher precision than other papers' methods when run on Kessler et al.'s custom data set.

Brief description of the method

The method asserts that the 'energy' of a system of N electrons is given by

E(x) = -(1/2) Σ_{i,j} w_ij x_i x_j

where x_i is the spin (+1 or -1) of the i-th electron and W = (w_ij) is an N × N matrix representing the weights between each pair of electrons.

The probability of an electron configuration x is given by

P(x) = exp(-β E(x)) / Z

where Z = Σ_x exp(-β E(x)) is the normalization factor and β is a hyper-parameter called the inverse temperature.
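For intuition, the energy and the exact probability can be computed by brute-force enumeration on a toy system (a sketch only; this is feasible solely for small N, since Z sums over 2^N spin configurations):

```python
import itertools
import math
import numpy as np

def energy(x, W):
    # E(x) = -1/2 * sum_ij w_ij * x_i * x_j
    return -0.5 * (x @ W @ x)

def exact_prob(x, W, beta):
    # P(x) = exp(-beta * E(x)) / Z, with Z summed over all 2^N spin vectors.
    N = len(x)
    Z = sum(math.exp(-beta * energy(np.array(c), W))
            for c in itertools.product([-1, 1], repeat=N))
    return math.exp(-beta * energy(x, W)) / Z

# Toy 3-electron chain with unit weights on the two links.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
x = np.array([1, 1, 1])
print(exact_prob(x, W, beta=1.0))
```

Because E(x) = E(-x), the fully aligned configurations (all +1 and all -1) tie for the highest probability under ferromagnetic (positive) weights.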

Unfortunately, evaluating Z (and hence P) is intractable, due to the fact that there are 2^N possible configurations of N electrons. As such, Takamura et al. use a clever approximation. They seek a factorized function Q(x) that is as similar to P(x) as possible. As a distance metric between the two functions they use the variational free energy F(Q), which is defined as the difference between the mean energy with respect to Q and the entropy of Q:

F(Q) = β Σ_x Q(x) E(x) − H(Q),  where H(Q) = −Σ_x Q(x) log Q(x).

This function's derivative can be found analytically, and hence, given a starting value for the average spins, an analytic update rule can be derived; it is shown in the paper.
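Assuming the standard mean-field update for an Ising-style model, x̄_i ← tanh(β Σ_j w_ij x̄_j), with seed words simply clamped to their known polarity (a simplification; the paper handles seeds through its exact update rule), the iteration can be sketched as:

```python
import numpy as np

def mean_field(W, beta, seeds, iters=100):
    """Sketch of mean-field estimation of average spins.

    W     : N x N symmetric weight matrix
    seeds : dict mapping word index -> fixed spin (+1 or -1)
    """
    N = W.shape[0]
    xbar = np.zeros(N)               # average spins, initialized to 0
    for i, s in seeds.items():
        xbar[i] = s
    for _ in range(iters):
        xbar = np.tanh(beta * (W @ xbar))
        for i, s in seeds.items():
            xbar[i] = s              # keep seed spins clamped
    return xbar

# Toy 3-word chain; word 0 is a positive seed.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
xbar = mean_field(W, beta=1.0, seeds={0: 1})
print(xbar)
```

The sign of each x̄_i is then read off as the predicted semantic orientation of word i.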

They then require a way to compute the weighting matrix W. They do this by using their glossary of similar terms and defining

w_ij = 1 / sqrt(d(i) d(j))

where d(i) represents the degree of word i in the lexical network.
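Given a 0/1 adjacency matrix of the lexical network, this weighting can be sketched as (a minimal illustration, not the authors' code):

```python
import numpy as np

def lexical_weights(adj):
    """w_ij = 1 / sqrt(d(i) * d(j)) for each linked pair, else 0.

    adj : 0/1 adjacency matrix of the lexical network (word i linked to
          word j when, e.g., one appears in the gloss of the other).
    """
    deg = adj.sum(axis=1)                     # d(i): degree of word i
    W = np.zeros(adj.shape, dtype=float)
    rows, cols = np.nonzero(adj)              # only linked pairs get weights
    W[rows, cols] = 1.0 / np.sqrt(deg[rows] * deg[cols])
    return W

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
print(lexical_weights(adj))
```

This normalization damps the influence of highly connected (and thus less discriminative) words.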

Finally, they discuss two methodologies for determining the hyper-parameter β. The first is a simple leave-one-out error rate minimization method, as is standard in many machine learning problems. The second is physics-inspired and uses the magnetization of the system, defined by

m = (1/N) Σ_i x̄_i

They seek a value of β that makes m positive, but as close as possible to zero. To accomplish this, they simply calculate m with several different values of β and select the best one they find.
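This β search can be sketched as follows (`run_mean_field` is a hypothetical stand-in for whatever procedure returns the average spins x̄; the selection rule just keeps the positive magnetization closest to zero):

```python
import numpy as np

def magnetization(xbar):
    # m = (1/N) * sum_i xbar_i
    return xbar.mean()

def pick_beta(W, seeds, betas, run_mean_field):
    """Try each candidate beta; keep the one whose magnetization is
    positive but closest to zero (just above the critical point)."""
    best_beta, best_m = None, float("inf")
    for beta in betas:
        m = magnetization(run_mean_field(W, beta, seeds))
        if 0.0 < m < best_m:
            best_beta, best_m = beta, m
    return best_beta

# Mock estimator for demonstration: magnetization rises with beta.
fake = lambda W, beta, seeds: np.full(3, beta - 0.5)
print(pick_beta(None, None, [0.4, 0.6, 0.8], fake))  # 0.6
```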

Experimental Results

The approach described achieved accuracies ranging from 75.2% (using two seed words) to 91.5% (using leave-one-out cross-validation). They compare their results to two previous methods for the same task on a separate lexical graph constructed using only synonym connections. The first is the graph-based shortest-distance algorithm of Hu and Liu, which achieved 70.8% accuracy, while Takamura et al.'s approach achieved 73.4%. The second was Riloff et al.'s bootstrapping method, which achieved 72.8%, compared to Takamura et al.'s 83.6% on that data set.

Related papers