Hassan & Radev ACL 2010
Revision as of 22:01, 1 October 2012
Citation
Ahmed Hassan and Dragomir R. Radev. 2010. Identifying text polarity using random walks. In ACL 2010.
Online Version
Summary
This paper presents a method for polarity classification of words, i.e., identifying whether a word carries positive or negative semantic orientation.
The method is based on the observation that a random walk starting at a given word is more likely to hit another word with the same semantic orientation before it hits a word with a different semantic orientation. It applies a random walk model to a large word relatedness graph and produces a polarity estimate for any given word.
This method can be used in a semi-supervised setting where a training set of labeled words is used, and in an unsupervised setting where only a handful of seeds is used to define the two polarity classes.
Background and preparation
- Network construction
The dataset WordNet is used to construct the network of words: collect all words in WordNet, and add links between any two words that occur in the same synset. The result is a graph G = (V, E), where V is the set of word/part-of-speech pairs for all the words in WordNet and E is the set of edges connecting each pair of synonymous words.
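The construction above can be sketched in a few lines. The toy synset list below is hypothetical (the real method uses all WordNet synsets):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy synsets: each is a set of (word, part-of-speech)
# pairs that share a meaning.
synsets = [
    {("good", "a"), ("fine", "a"), ("nice", "a")},
    {("bad", "a"), ("awful", "a")},
    {("fine", "a"), ("ok", "a")},
]

# Build the undirected graph G = (V, E): link every pair of words
# that co-occur in a synset.
graph = defaultdict(set)
for synset in synsets:
    for u, v in combinations(sorted(synset), 2):
        graph[u].add(v)
        graph[v].add(u)

print(sorted(graph[("fine", "a")]))  # → [('good', 'a'), ('nice', 'a'), ('ok', 'a')]
```

Note that a word belonging to several synsets (here "fine") ends up linked to neighbors from all of them.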
- Random walk model
Starting from a word w with unknown polarity, the walker moves to a neighboring node k with transition probability p_wk after the first step. The walk continues until the surfer hits a word with a known polarity.
- First-passage time
This is very similar to the definition of hitting time. The mean first-passage (hitting) time h(i|k) is defined as the average number of steps a random walker, starting in state i, will take to enter state k for the first time. For a subset of vertices S of the graph, h(i|S) is the average number of steps a random walker, starting in state i, will take to enter a state in S for the first time.
It is then shown that the hitting time satisfies the recurrence: h(i|S) = 0 if i is in S, and h(i|S) = 1 + sum_k p_ik * h(k|S) otherwise.
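A minimal sketch of computing hitting times exactly by iterating the standard first-passage recurrence h(i|S) = 1 + sum_k p_ik * h(k|S), with h(i|S) = 0 fixed for i in S (the transition matrix and names are illustrative, not from the paper):

```python
def hitting_times(P, S, n_iter=2000):
    """Mean first-passage time h(i|S) for every node i, estimated by
    iterating  h(i|S) = 1 + sum_k P[i][k] * h(k|S),
    with h(i|S) = 0 fixed for i in S."""
    n = len(P)
    h = [0.0] * n
    for _ in range(n_iter):
        h = [0.0 if i in S else 1.0 + sum(P[i][k] * h[k] for k in range(n))
             for i in range(n)]
    return h

# Toy 3-node chain 0 <-> 1 <-> 2; the walker picks a uniform random neighbor.
P = [
    [0.0, 1.0, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 1.0, 0.0],
]
print(hitting_times(P, S={2}))  # converges to [4.0, 3.0, 0.0]
```

On this chain the fixed point is h(0|{2}) = 4 and h(1|{2}) = 3, which matches the recurrence solved by hand.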
Algorithm description
- Construct a word relatedness graph
- Define a random walk on the graph
- Compute the word's hitting time for both the positive and negative sets of vertices
- If the hitting time for the positive set is greater than for the negative set, then the word is classified as negative; otherwise, it is classified as positive. The ratio between the two hitting times can be used as an indication of how positive or negative the given word is.
Since computing the hitting time exactly is time-consuming, especially when the graph is large, a Monte Carlo based estimation algorithm is proposed.
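The Monte Carlo idea can be sketched as follows: run many truncated random walks from the query word and average the number of steps until a labeled word is first reached. This is an illustrative sketch with a toy graph and hypothetical parameter names, not the paper's actual pseudocode:

```python
import random

def mc_hitting_time(graph, start, targets, n_samples=2000, max_steps=50, seed=0):
    """Monte Carlo estimate of the hitting time from `start` to the set
    `targets`: average the steps until a target is first reached.
    Walks that never hit a target within max_steps contribute max_steps,
    as in truncated estimates."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_samples):
        node, steps = start, 0
        while node not in targets and steps < max_steps:
            node = rng.choice(graph[node])  # uniform step to a neighbor
            steps += 1
        total += steps
    return total / n_samples

# Toy chain 0 - 1 - 2; estimate the hitting time from 0 to {2}.
chain = {0: [1], 1: [0, 2], 2: [1]}
est = mc_hitting_time(chain, start=0, targets={2})
```

On this chain the exact hitting time is 4, so the estimate should land close to that; to classify a word one would compare such estimates against the positive and negative seed sets.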
Experiment result
Compared to other methods, this method is quite successful in both the semi-supervised and unsupervised settings.
- In the setting where WordNet synonyms and hypernyms are used to construct the network and the test set is restricted to adjectives, it outperforms the spin-model, bootstrapping, and shortest-path methods.
- It is also compared to the SO-PMI method in the setting of only 14 seeds. Though SO-PMI with a very large dataset performs slightly better than this method, this method is faster and does not need such a large corpus.
Background
Some common properties of many networks
- Small-world property - the average distance between vertices in a network is short
- Power-law degree distribution - many vertices in a network have low degree and a small number have high degree
- Network transitivity - two vertices that share a neighbor have a higher probability of being neighbors of each other.
A traditional method of constructing the communities
- Calculate a weight for each pair of vertices.
- Beginning from a vertex-only set (no edges), add edges between pairs one by one in descending order of weight.
- The resulting graph shows a nested set of increasingly large components, which are taken to be the communities.
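The steps above can be sketched with a union-find structure that tracks components as edges are inserted in descending weight order (illustrative code, not from the referenced papers):

```python
class DisjointSet:
    """Union-find to track connected components as edges are added."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def communities_by_weight(n, weighted_edges):
    """Add edges in descending weight order; record the component
    partition after each insertion (the nested communities)."""
    ds = DisjointSet(n)
    snapshots = []
    for u, v, _w in sorted(weighted_edges, key=lambda e: -e[2]):
        ds.union(u, v)
        groups = {}
        for node in range(n):
            groups.setdefault(ds.find(node), []).append(node)
        snapshots.append(sorted(map(sorted, groups.values())))
    return snapshots

# Toy weighted graph on 4 vertices.
edges = [(0, 1, 0.9), (2, 3, 0.8), (1, 2, 0.2)]
snaps = communities_by_weight(4, edges)
print(snaps[1])  # → [[0, 1], [2, 3]]
```

Each snapshot is one level of the nested hierarchy: strongly weighted pairs merge first, and the whole graph becomes one community only after the weakest edge is added.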
Related papers
- M. E. J. Newman, The structure of scientific collaboration networks. Proc. Natl. Acad. Sci. USA 98, 404-409 (2001)
- M. E. J. Newman, Scientific collaboration networks: II. Shortest paths, weighted networks, and centrality. Phys. Rev. E 64, 016132 (2001)
- Zhou, Phys. Rev. E 67, 041908 (2003)
Study plan
This paper is quite clear and self-explanatory, and thus requires very little background to understand. Some of the common network properties in the background section would be useful.
Just in case you are not familiar with graphs:
The fast algorithm for calculating betweenness: