Turney, ACL 2002

== Citation ==

Jure Leskovec, Lars Backstrom, Jon M. Kleinberg: Meme-tracking and the dynamics of the news cycle. KDD 2009: 497-506.

== Online version ==

[1]

== Summary ==

This is a highly cited KDD [[Category::paper]] presenting an unsupervised approach to [[AddressesProblem::Temporal information extraction]]. Specifically, it introduces a framework for tracking topic shifts, e.g. news and events, over short time scales.

The key idea is to track news by short, distinctive phrases, which act as the analogue of "genetic signatures" for different topics. The paper also provides a quantitative analysis of the news cycle on the authors' data set.

== The method ==

Word-level alterations that a phrase undergoes as it is quoted (textual mutation) can inhibit accurate tracking. To address this, the authors propose a robust method for clustering the textual variants of quotes, consisting of two stages: '''phrase graph construction''' and '''clustering'''.

== Pre-processing ==

First, pre-processing is conducted to eliminate noisy phrases from the data set (see the sketch after this list):

1. remove phrases whose word length is less than 4.

2. remove phrases whose term frequency is less than 10.

3. eliminate phrases whose domain frequency is at least 25% (to filter out spammers).
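
The three filters above are simple enough to state as code. Below is a minimal sketch in Python (not from the paper; all function and variable names are hypothetical), assuming each phrase comes with its total occurrence count and a per-domain breakdown of those occurrences:

<pre>
def filter_phrases(phrase_counts, phrase_domains,
                   min_words=4, min_freq=10, max_domain_share=0.25):
    """Hypothetical sketch of the three noise filters described above.

    phrase_counts:  dict mapping phrase -> total number of occurrences
    phrase_domains: dict mapping phrase -> per-domain occurrence counts
    """
    kept = []
    for phrase, freq in phrase_counts.items():
        # 1. drop phrases shorter than 4 words
        if len(phrase.split()) < min_words:
            continue
        # 2. drop rare phrases (term frequency below 10)
        if freq < min_freq:
            continue
        # 3. drop phrases where a single domain accounts for at least 25%
        #    of all occurrences (likely spam)
        if max(phrase_domains[phrase].values()) / freq >= max_domain_share:
            continue
        kept.append(phrase)
    return kept
</pre>
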
== Graph construction ==
 
Each node <math>p</math> in the phrase graph <math>G</math> represents a phrase extracted from the corpus. An edge <math>(p,q)</math> is included for a qualifying pair of phrases p and q and always points from the shorter phrase to the longer one. Two phrases are connected if either the edit distance between them (treating each word as a token) is smaller than 1 or there is at least a 10-word consecutive overlap between them. In other words, an edge implies an inclusion relation between the phrases, and since edges strictly point to longer phrases, the graph is a directed acyclic graph (DAG).
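
A minimal Python sketch of this construction, assuming phrases are given as lists of word tokens. The exact definition of the ''directed'' edit distance is not spelled out here; the sketch reads it, as one plausible interpretation, as the edit distance between the shorter phrase and its best-matching contiguous window of the longer one:

<pre>
from itertools import product

def word_edit_distance(a, b):
    # standard Levenshtein distance over word tokens (single-row DP)
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                      # delete a[i-1]
                        dp[j - 1] + 1,                  # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))  # substitute
            prev = cur
    return dp[-1]

def directed_edit_distance(p, q):
    # distance from the shorter phrase p to its closest contiguous
    # window of q -- a hedged reading of "directed" edit distance
    return min(word_edit_distance(p, q[i:i + len(p)])
               for i in range(len(q) - len(p) + 1))

def has_overlap(p, q, k=10):
    # True if some k consecutive words of p also occur consecutively in q
    grams = {tuple(q[i:i + k]) for i in range(len(q) - k + 1)}
    return any(tuple(p[i:i + k]) in grams for i in range(len(p) - k + 1))

def build_phrase_dag(phrases, edit_threshold=1, k=10):
    """Return edges (p, q) pointing from strictly shorter to longer
    phrases, so the resulting graph is acyclic by construction."""
    edges = []
    for p, q in product(range(len(phrases)), repeat=2):
        if len(phrases[p]) >= len(phrases[q]):
            continue
        if (directed_edit_distance(phrases[p], phrases[q]) < edit_threshold
                or has_overlap(phrases[p], phrases[q], k)):
            edges.append((p, q))
    return edges
</pre>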

The authors fail to elaborate on how the weight <math>w_{pq}</math> on each edge is calculated; they state only that the weight increases as the directed edit distance and the frequency of q grow.
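
Since the weight is left unspecified, the following is only a hypothetical instantiation consistent with that description, not the authors' formula; the symbols <math>d(p,q)</math> (directed edit distance) and <math>f(q)</math> (frequency of q) are introduced here for illustration:

<math>w_{pq} = \left(1 + d(p,q)\right) \cdot f(q)</math>

Any weight that is monotone increasing in both quantities would match the description equally well.
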
== Clustering ==

The goal of clustering is to retrieve all ''single-rooted'' components, so that all phrases in a component are closely related, by deleting a set of edges of minimum total weight. A component is single-rooted if it is a directed acyclic sub-graph that contains exactly one root node (out-degree 0). As with other clustering problems, this proves to be NP-hard, so the authors propose three heuristics toward a feasible clustering solution. The authors claim that with these heuristics (although, in my opinion, the contribution of the heuristics is obscure) keeping only the edge to the shortest phrase yields a 9% improvement over the baseline, keeping only the edge to the most frequent phrase yields a 12% improvement, and greedily assigning each node to the cluster to which it has the most edges yields a 13% improvement. The experimental results also demonstrate that the volume distributions of both the phrases and the phrase clusters produced by this method follow a power law.
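
Below is a minimal Python sketch of the first two heuristics, not the authors' exact procedure (the third, greedy cluster-assignment heuristic is omitted). Keeping at most one outgoing edge per node guarantees single-rootedness: every kept edge points to a strictly longer phrase, so following kept edges from any node terminates at a unique root.

<pre>
def cluster_phrases(phrases, edges, freq, strategy="most_frequent"):
    """Hypothetical sketch: keep one outgoing edge per node, cut the rest.

    phrases: list of token lists; edges: (p, q) pairs as built above;
    freq: list of phrase frequencies. Returns a dict node -> cluster root.
    """
    out_edges = {p: [] for p in range(len(phrases))}
    for p, q in edges:
        out_edges[p].append(q)

    parent = {}
    for p, qs in out_edges.items():
        if not qs:
            continue  # out-degree 0: p is the root of its own cluster
        if strategy == "shortest":
            # heuristic 1: keep only the edge to the shortest phrase
            parent[p] = min(qs, key=lambda q: len(phrases[q]))
        else:
            # heuristic 2: keep only the edge to the most frequent phrase
            parent[p] = max(qs, key=lambda q: freq[q])

    def root(p):
        # kept edges point to strictly longer phrases, so this terminates
        while p in parent:
            p = parent[p]
        return p

    return {p: root(p) for p in range(len(phrases))}
</pre>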
  
== Data set ==
 
90 million news and blog articles collected over the final three months of the 2008 U.S. Presidential Election (from August 1 to October 31, 2008).

== Experimental Result ==

