Chambers and Jurafsky, Jointly combining implicit constraints improves temporal ordering, EMNLP 2008

Revision as of 00:50, 29 September 2011

Reviews of this paper

Citation

Jointly combining implicit constraints improves temporal ordering, by N. Chambers, D. Jurafsky. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2008.

Online version

This paper is available online [1].

Summary

Unlike earlier works on temporal ordering of events, which focus on improving local, pairwise ordering of events while ignoring possible temporal contradictions in the global space of events, this paper is one of the earliest works to present the idea of using global constraints to better inform local decisions on temporal ordering of events in text. Two types of global constraints are used: transitivity (A before B and B before C implies A before C) and time expression normalization (e.g. last Tuesday is before today).

The constraints are first used to create a more densely connected temporal network of events. They are then enforced over this network using Integer Linear Programming to ensure the global consistency of the local orderings.
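The transitivity constraint above also densifies the network: whenever A before B and B before C are both known, A before C can be added as a new relation. A minimal sketch of that closure step (not the paper's code; event names are made up):

```python
# Sketch: expanding a set of "before" relations by transitive closure,
# as used to densify the temporal network before training.

def transitive_closure(before):
    """Close a set of (a, b) pairs meaning 'a is before b' under transitivity."""
    closed = set(before)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))  # a before b and b before d => a before d
                    changed = True
    return closed

# Example: A before B, B before C  =>  A before C is inferred.
relations = {("A", "B"), ("B", "C")}
print(sorted(transitive_closure(relations)))  # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```

The same idea applies to relations inferred from normalized time expressions, which is where most of the new links in the paper's denser corpus come from.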

The experiments are conducted on the task of classifying temporal relations between events as before, after, or vague (unknown) on the TimeBank Corpus. These are the core relations in the TempEval-07 temporal ordering challenge. The paper shows a 3.6% absolute increase in the accuracy of before/after classification over the local, pairwise classification model.

By using time expression normalization to create new relations between time expressions, together with transitive closure over the original set of temporal relations, the method increases the number of relations in the corpus available for training by 81%.

Both the increased connectivity of the corpus and the global inference contribute to the improved performance. Global inference alone, on the original set of temporal relations in the corpus, shows no improvement over the pairwise classification model. This highlights the importance of time expression normalization and transitive closure in making the corpus better connected before conducting global inference over it.

Brief description of the method

The model has two components: (1) a pairwise classifier between events, and (2) a global constraint satisfaction layer that maximizes the confidence scores from the classifier.

In the first component, a Support Vector Machine (SVM) classifier is used. Using features ranging from POS tags and lexical features surrounding the event to the tense and grammatical aspect of the events, it computes probabilities of temporal relations between pairs of events. These scores are then used as confidence scores to choose an optimal global ordering.
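The second component can be illustrated on a toy instance. The paper solves this with Integer Linear Programming; the sketch below uses exhaustive search over a three-event triangle instead, with made-up classifier scores, to show how the global objective can overrule a locally preferred label:

```python
# Hypothetical sketch of the global layer. Scores are invented; exhaustive
# search stands in for the paper's ILP solver on this tiny instance.
from itertools import product

LABELS = ("before", "after", "vague")

# Classifier confidence scores P(label | pair) for each event pair (made up).
scores = {
    ("A", "B"): {"before": 0.6, "after": 0.2, "vague": 0.2},
    ("B", "C"): {"before": 0.6, "after": 0.2, "vague": 0.2},
    ("A", "C"): {"before": 0.3, "after": 0.5, "vague": 0.2},  # locally prefers "after"
}

def consistent(assign):
    # Transitivity: A rel B and B rel C implies A rel C, for "before" and "after".
    for rel in ("before", "after"):
        if (assign[("A", "B")] == rel and assign[("B", "C")] == rel
                and assign[("A", "C")] != rel):
            return False
    return True

pairs = list(scores)
best = max(
    (dict(zip(pairs, labels)) for labels in product(LABELS, repeat=len(pairs))
     if consistent(dict(zip(pairs, labels)))),
    key=lambda a: sum(scores[p][a[p]] for p in pairs),
)
print(best[("A", "C")])  # prints "before"
```

Even though the classifier locally prefers "after" for the pair (A, C), the globally consistent assignment with maximum total confidence labels it "before", which is the effect the global layer is designed to produce.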

The algorithm takes a written review as input. First it assigns a POS tag to each word in the review to identify adjective or adverb phrases. They use the PMI-IR algorithm to estimate the semantic orientation of a phrase. The Pointwise Mutual Information (PMI) between two words word1 and word2 is defined as follows:

    PMI(word1, word2) = log2( p(word1 & word2) / ( p(word1) * p(word2) ) )

where p(word1 & word2) is the probability that word1 and word2 co-occur. They define the semantic orientation of a phrase as follows:

    SO(phrase) = PMI(phrase, "excellent") - PMI(phrase, "poor")

Rewriting the probabilities in terms of search-engine hit counts, the above definition can be modified to obtain the following formula:

    SO(phrase) = log2( hits(phrase NEAR "excellent") * hits("poor") / ( hits(phrase NEAR "poor") * hits("excellent") ) )

where the NEAR operator requires the two phrases to appear close to each other in the corpus. Using the above formula they calculate the average semantic orientation over the phrases in a review. They show that the average semantic orientation of phrases in reviews tagged as "recommended" by users is usually positive, and in those tagged as "not recommended" it is usually negative.
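The hit-count form of the formula is easy to sanity-check numerically. A minimal sketch, with invented hit counts standing in for search-engine results:

```python
# Toy sketch of the semantic-orientation formula; the hit counts below
# are made up, standing in for search-engine hits().
from math import log2

def semantic_orientation(hits_near_excellent, hits_near_poor,
                         hits_excellent, hits_poor):
    """SO(phrase) = log2( hits(phrase NEAR "excellent") * hits("poor")
                          / (hits(phrase NEAR "poor") * hits("excellent")) )."""
    return log2((hits_near_excellent * hits_poor) /
                (hits_near_poor * hits_excellent))

# A phrase co-occurring 8x more often with "excellent" than with "poor"
# (with equal base hit counts) gets orientation log2(8) = 3.
print(semantic_orientation(800, 100, 5000, 5000))  # prints 3.0
```

Averaging this score over all extracted phrases in a review yields the document-level orientation used for the recommended / not-recommended decision.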

Experimental Results

This approach was fairly successful on a range of review-classification tasks: it achieved accuracies between 65% and 85% in predicting an author-assigned "recommended" flag for Epinions ratings of eight diverse products, ranging from cars to movies. Many later papers used several key ideas from this paper, including treating polarity prediction as a document-classification problem, classifying documents based on likely-to-be-informative phrases, and using unsupervised or semi-supervised learning methods.

Related papers

The widely cited Pang et al., EMNLP 2002 paper was influenced by this paper, but considers supervised learning techniques. The choice of movie reviews as the domain was suggested by the (relatively) poor performance of Turney's method on movies.

An interesting follow-up paper is Turney and Littman, TOIS 2003, which focuses on evaluating the technique of using PMI to predict the semantic orientation of words.