Hassan & Radev ACL 2010

Citation

Ahmed Hassan and Dragomir R. Radev. 2010. Identifying text polarity using random walks. In ACL 2010.

Online Version

online version

Summary

This paper presents a method for identifying the polarity of words, which addresses the task of Polarity Classification of words.

The method is based on the observation that a random walk starting at a given word is more likely to hit another word with the same semantic orientation before hitting a word with a different semantic orientation. It applies a random walk model to a large word relatedness graph and produces a polarity estimate for any given word.

This method can be used in a semi-supervised setting, where a training set of labeled words is used, and in an unsupervised setting, where only a handful of seed words is used to define the two polarity classes.

Background and preparation

  • Network construction

WordNet is used to construct the network of words. All words in WordNet are collected, and a link is added between any two words that occur in the same synset. The result is a graph G(W, E), where W is the set of word / part-of-speech pairs for all the words in WordNet and E is the set of edges connecting each pair of synonymous words.
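
As a rough illustration of this construction (not code from the paper; the use of NLTK and networkx is an assumption made here for concreteness), the synonym graph could be built as follows:

import networkx as nx
from nltk.corpus import wordnet as wn  # may require nltk.download("wordnet") first

def build_word_graph():
    # Each node is a (lemma name, part of speech) pair, matching the W set above.
    G = nx.Graph()
    for synset in wn.all_synsets():
        nodes = [(lemma.name(), synset.pos()) for lemma in synset.lemmas()]
        G.add_nodes_from(nodes)
        # Link every pair of words that occur in the same synset.
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                G.add_edge(nodes[i], nodes[j])
    return G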

  • Random walk model

Starting from a word i with unknown polarity, the walk moves to a node j with probability p_ij after the first step. The walk continues until the surfer hits a word with a known polarity.

  • First-passage time

It is very similar to the definition of hitting time. The mean first-passage (hitting) time h(i|k) is defined as the average number of steps a random walker, starting in state i ≠ k, will take to enter state k for the first time. Considering a subset of vertices S of the graph, h(i|S) is the average number of steps a random walker, starting in state i ∉ S, will take to enter a state k ∈ S for the first time.

Then it is proven that: h(i|S) = Σ_{j ∈ V} p_ij · h(j|S) + 1
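
Since h(i|S) = 0 for any i already in S (the standard boundary condition), the recursion above is a linear system over the remaining nodes. The following sketch (my own illustration, not from the paper; it assumes row-normalized transition probabilities) solves that system with numpy on a toy path graph:

import numpy as np

def mean_hitting_times(adj, S):
    # Solves h(i|S) = 1 + sum_j p_ij * h(j|S) for i not in S, with h(i|S) = 0 for i in S.
    # adj: weighted adjacency matrix; each row is normalized into transition probabilities.
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)
    free = [i for i in range(n) if i not in S]
    # Restricted to the non-absorbing states, the recursion becomes (I - Q) h = 1.
    Q = P[np.ix_(free, free)]
    h_free = np.linalg.solve(np.eye(len(free)) - Q, np.ones(len(free)))
    h = np.zeros(n)
    h[free] = h_free
    return h

# Toy example: path graph 0 - 1 - 2 - 3 with S = {3}.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(mean_hitting_times(adj, {3}))  # -> [9. 8. 5. 0.]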

Algorithm description

  1. Construct a word relatedness graph
  2. Define a random walk on the graph
  3. Compute the word's hitting time for both the positive and negative sets of vertices
  4. If the hitting time for the positive set is greater than for the negative set, then the word is classified as negative. Otherwise, it is classified as positive. The ratio between the two hitting times can be used as an indication of how positive or negative the given word is.

Since computing the hitting time exactly is time consuming, especially when the graph is large, a Monte Carlo based estimation algorithm is proposed (see the algorithm figure: Word polarity using random walks.png).
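
As an illustration only (a reconstruction under assumptions, not the paper's exact pseudocode), such an estimate can be obtained by running a fixed number of bounded-length random walks from the word and averaging the step at which each walk first enters the seed set:

import random

def estimate_hitting_time(graph, start, seed_set, num_walks=1000, max_steps=50):
    # graph: dict mapping each node to a non-empty list of neighbors (unweighted here).
    # Walks that do not reach the seed set within max_steps contribute max_steps.
    total = 0
    for _ in range(num_walks):
        node, steps = start, 0
        while node not in seed_set and steps < max_steps:
            node = random.choice(graph[node])
            steps += 1
        total += steps
    return total / num_walks

def classify_polarity(graph, word, positive_seeds, negative_seeds):
    # Steps 3-4 above: compare the estimated hitting times to the two seed sets.
    h_pos = estimate_hitting_time(graph, word, positive_seeds)
    h_neg = estimate_hitting_time(graph, word, negative_seeds)
    return "negative" if h_pos > h_neg else "positive"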

Experiment result

The test results are quite promising. The method was verified on three graphs with known community structure and explored on two graphs with unknown community structure; both kinds of tests return high-accuracy results.

  • Computer-generated graphs: if, out of the 16 edges of each vertex, six or fewer are inter-community edges, then the vertices are classified into the correct community with 100% accuracy.

(figure: G 2002 result1.png)

  • Zachary's karate club network: out of 34 nodes, only one node is classified incorrectly.
  • Football network: almost all teams are correctly grouped with the other teams in their conference. The few cases in which the algorithm seems to fail actually correspond to nuances in the scheduling of games.
  • Scientific collaboration network: the algorithm seems to find two types of communities, with scientists grouped together by similarity either of research topic or of methodology.
  • Food web: the algorithm finds two well-defined communities of roughly equal size, plus a small number of vertices that belong to neither community. The split between the two large communities corresponds quite closely to the division between pelagic and benthic organisms.

Background

Some common properties of many networks

  • Small-world property - the average distance between vertices in a network is short
  • Power-law degree distributions - many vertices in a network have low degree and a small number have high degree
  • Network transitivity - two vertices that share a common neighbor have a higher probability of being neighbors of each other
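
These properties can be measured directly on a graph; the snippet below (a small illustration using networkx and its built-in karate club graph, a choice made here rather than in the original page) computes one statistic for each property:

import networkx as nx

G = nx.karate_club_graph()  # small built-in social network

# Small-world property: short average distance between vertices.
print("average shortest path length:", nx.average_shortest_path_length(G))

# Degree distribution: many low-degree vertices, few high-degree ones.
print("degree histogram:", nx.degree_histogram(G))

# Transitivity: how often two vertices with a common neighbor are themselves connected.
print("transitivity:", nx.transitivity(G))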


A traditional method of constructing the communities

  1. Calculate a weight for each pair of vertices.
  2. Beginning from a vertex-only set (no edges), add edges between pairs one by one in descending order of the weights.
  3. The resulting graph shows a nested set of increasingly large components, which are taken to be the communities.
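
A minimal sketch of this procedure (an illustration under my own assumptions, not code from the paper): sort all vertex pairs by weight, add them as edges in descending order, and track the growing components with union-find.

def agglomerative_communities(num_vertices, weighted_pairs):
    # weighted_pairs: iterable of (weight, u, v) tuples, one per pair of vertices.
    # Returns the merges in order; reading them in sequence gives the nested set
    # of increasingly large components described above.
    parent = list(range(num_vertices))

    def find(x):
        # Union-find with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    merges = []
    # Add edges one by one in descending order of weight.
    for weight, u, v in sorted(weighted_pairs, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:  # this edge joins two previously separate components
            parent[rv] = ru
            merges.append((weight, u, v))
    return merges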

Related papers

Study plan

This paper is quite clear and self-explanatory, and thus requires very little background in order to understand it. Some of the common properties described in the background section would be useful.

Just in case you are not familiar with graphs:

The fast algorithm for calculating betweenness: