== Citation ==

Andrea Esuli and Fabrizio Sebastiani. 2006. SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining. In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC '06).

== Online version ==

[http://gandalf.aksis.uib.no/lrec2006/pdf/384_pdf.pdf Universitetet i Bergen]

== Summary ==

This paper describes SentiWordNet, an extension of the standard WordNet. For each SynSet (i.e., sense of a word) in WordNet, the authors compute a triplet of scores, obj(w), pos(w), and neg(w), representing the objectivity, positivity, and negativity, respectively, of a word w. These scores each lie in the range [0.0, 1.0] and sum to 1.0 for any given SynSet.

They build SentiWordNet from a committee of eight ternary classifiers. Each classifier is trained on a different data set using a different learning technique and outputs one of three categories: "objective", "positive", or "negative". Each SynSet is run through all eight classifiers, and the three scores are computed by normalizing the number of classifiers that voted for each category.

Each classifier is trained in a semi-supervised fashion, starting from a seed vocabulary of 14 positive and negative terms plus another small set of objective words. The seed labels are then expanded across senses that WordNet links as synonyms or antonyms (synonyms keep the label, antonyms receive the opposite one), producing larger positive, negative, and objective training corpora.

== Brief description of the method ==

The goal of this paper is to extend WordNet with information about the objectivity, positivity, and negativity of each SynSet contained therein. For each SynSet (i.e., sense of a word) in WordNet, the authors compute a triplet of scores, obj(w), pos(w), and neg(w), representing the objectivity, positivity, and negativity, respectively, of a word w. These scores each lie in the range [0.0, 1.0] and sum to 1.0 for any given SynSet.
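
As a concrete illustration of this representation (not taken from the paper; the class and field names are invented), the triplet can be modeled as a small constrained value type:

<pre>
# A minimal sketch of the per-SynSet score triplet; the class and field
# names are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class SynsetScores:
    obj: float  # objectivity, in [0.0, 1.0]
    pos: float  # positivity, in [0.0, 1.0]
    neg: float  # negativity, in [0.0, 1.0]

    def __post_init__(self):
        if any(not 0.0 <= s <= 1.0 for s in (self.obj, self.pos, self.neg)):
            raise ValueError("each score must lie in [0.0, 1.0]")
        if abs((self.obj + self.pos + self.neg) - 1.0) > 1e-9:
            raise ValueError("scores must sum to 1.0")
</pre>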

To accomplish this, they create eight semi-supervised classifiers. The goal of training is to produce eight independent classifiers, each with roughly the same precision but very different behaviors. This is accomplished by varying both the training data given to each classifier and the type of learner, as sketched below.
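
The eight classifiers are the cross product of two learner types and four training sets (the specific learners, Rocchio and SVMs, and the four expansion depths K are described in the following paragraphs). A rough sketch of that grid, assuming scikit-learn stand-ins of my own choosing rather than the paper's implementations:

<pre>
# A sketch of the 2-learners x 4-training-sets grid. The scikit-learn
# classes are stand-ins I chose, not the paper's implementations;
# NearestCentroid behaves like a Rocchio classifier.
from sklearn.neighbors import NearestCentroid  # Rocchio-style learner
from sklearn.svm import LinearSVC              # SVM learner

K_VALUES = (0, 2, 4, 6)  # seed-expansion depths used in the paper

def build_committee():
    """Return the eight (learner name, untrained model, K) combinations."""
    learners = {"rocchio": NearestCentroid, "svm": LinearSVC}
    return [(name, cls(), k)
            for name, cls in learners.items()
            for k in K_VALUES]
</pre>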

The authors observe that, given very small training sets, learners tend to have very high precision but limited recall. Adding more data increases recall but also adds noise, lowering precision. Furthermore, they distinguish between algorithms like SVMs and Naive Bayes, which require priors over the classes, and algorithms like Rocchio, which have no such requirement.

As such, they use two learning algorithms, Rocchio and SVMs, along with four different training sets, for a cross product of eight classifiers. To produce the four training sets, they begin with the small manually labeled seed sets, propagate labels across WordNet synonym links, and reverse them across antonym links. Iterating this procedure K times yields a training corpus whose size is roughly exponential in K. They use K values of 0, 2, 4, and 6 to produce four training sets of very different sizes.
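
A minimal sketch of that expansion step, under the assumption that the relevant WordNet links are available as a dict mapping each synset id to (related id, relation type) pairs; the data layout and function name are mine, not the paper's:

<pre>
# A minimal sketch of the seed-expansion step. It assumes WordNet's
# relevant links are available as a dict mapping each synset id to
# (related_id, relation) pairs, where relation is "syn" or "ant";
# this layout and the function name are assumptions, not the paper's.
def expand_seeds(seeds, relations, k):
    """Grow {synset_id: "positive" | "negative"} labels for k rounds.

    Synonym links propagate the label unchanged; antonym links flip it.
    With k = 0 the seed sets are returned as-is.
    """
    flip = {"positive": "negative", "negative": "positive"}
    labels = dict(seeds)
    frontier = set(seeds)
    for _ in range(k):
        next_frontier = set()
        for sid in frontier:
            for rel_id, rel in relations.get(sid, ()):
                if rel_id in labels:
                    continue  # keep the first label a synset received
                labels[rel_id] = labels[sid] if rel == "syn" else flip[labels[sid]]
                next_frontier.add(rel_id)
        frontier = next_frontier
    return labels
</pre>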

With their eight ternary classifiers in hand, they run all eight on every SynSet in WordNet. The score for each category is then computed by dividing the number of classifiers that voted for that category by eight. This yields three scores per SynSet, each in [0.0, 1.0], which together sum to 1.0.
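
A sketch of that final scoring step (the function name is mine, not the paper's):

<pre>
# Each of the eight classifiers casts one vote per SynSet, and the
# vote counts are normalized into the three scores.
from collections import Counter

def score_synset(votes):
    """votes: eight labels drawn from {"objective", "positive", "negative"}."""
    assert len(votes) == 8
    counts = Counter(votes)
    return {label: counts[label] / 8.0
            for label in ("objective", "positive", "negative")}

# For example, five "positive", two "objective", and one "negative" vote
# give scores of 0.625, 0.25, and 0.125, which sum to 1.0.
</pre>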

== Experimental Result ==

The approach described achieved accuracies ranging from 75.2%, using two seed words, to 91.5%, using leave-one-out cross-validation. They compare their results to two previous methods for accomplishing the same task on a separate lexical graph constructed using only synonym connections. The first is the graph-based shortest-distance algorithm of Hu and Liu, which achieved 70.8% accuracy, while Takamura et al.'s approach achieved 73.4%. The second was Riloff et al.'s bootstrapping method, which achieved 72.8%, compared to Takamura et al.'s 83.6% on that data set.

== Related papers ==