Difference between revisions of "Chambers and Jurafsky, Jointly combining implicit constraints improves temporal ordering, EMNLP 2008"

From Cohen Courses
{{MyCiteconference | booktitle = Proceedings of the Conference on Empirical Methods in Natural Language Processing| coauthors = D. Jurafsky| conference = ACL| date = 2008| first = N.| last = Chambers| pages = 698-706| title = Jointly combining implicit constraints improves temporal ordering| http://dl.acm.org/citation.cfm?id=1613803 }}
 
== Online version ==
  
 
This [[category::paper]] is available online [http://dl.acm.org/citation.cfm?id=1613803].
 
== Summary ==
Unlike earlier works on [[AddressesProblem::temporal ordering]] of events, which focus on improving local, pairwise ordering of events while ignoring possible temporal contradictions in the global space of events, this [[Category::paper]] is one of the earliest works to present the idea of using global constraints to better inform local decisions on temporal ordering of events in text. Two types of global constraints are used: transitivity (A before B and B before C implies A before C) and time expression normalization (e.g. last Tuesday is before today).
The constraints are first used to create a more densely connected temporal network of events. They are then enforced over this network using [[UsesMethod::Integer Linear Programming]] to ensure global consistency of the local orderings.
  
== Brief description of the method ==
  
The model has two components: (1) a pairwise classifier between events, and (2) a global constraint-satisfaction layer that maximizes the confidence scores from the classifier.
  
The first component is a [[Support_Vector_Machines|Support Vector Machine]] (SVM) classifier. Using features ranging from POS tags and lexical features surrounding the event to tense and grammatical aspect features of the events, it computes probabilities of temporal relations between pairs of events. These scores are then used as confidence scores to choose an optimal global ordering.
  
In the second component, the ILP uses the following objective function:
  
::<math> max \sum_{i} \sum_{j} p_{ij} x_{ij} \,\! </math>
  
with the constraints:
  
::<math> \forall_{i} \forall_{j}\ {x_{ij}} \in \{0, 1\} \,\! </math>
 
  
::<math> \forall_{i} {x_{i1}} + {x_{i2}} + ... + {x_{im}} = 1\,\!</math>
 
  
::<math> {x_{ia}} + {x_{jb}} - {x_{kc}} \leq 1 \,\!</math>
  
where <math>{x_{ij}}\,\!</math> represents the ''i''th pair of events being classified into the ''j''th of ''m'' relations.
 
  
The first constraint says that each variable must be 0 or 1. The second says that a pair of events cannot be assigned two relations at the same time. The third is added for each connected triple of event pairs <math>i, j, k \,\!</math> and each transitivity condition that infers relation <math>c\,\!</math> given relations <math>a\,\!</math> and <math>b\,\!</math>.
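The interaction of the objective and the constraints can be sketched with a toy brute-force search: pick one relation per event pair so that total classifier confidence is maximized subject to transitivity. This is only illustrative; the paper solves the equivalent ILP with a solver, and the events, relation inventory, and confidence scores below are invented:

```python
from itertools import product

# Hypothetical pairwise confidence scores p_ij from the classifier.
RELATIONS = ("before", "after", "vague")
pairs = [("A", "B"), ("B", "C"), ("A", "C")]
scores = {
    ("A", "B"): {"before": 0.8, "after": 0.1, "vague": 0.1},
    ("B", "C"): {"before": 0.7, "after": 0.2, "vague": 0.1},
    ("A", "C"): {"before": 0.3, "after": 0.6, "vague": 0.1},
}

def consistent(assign):
    """Hard transitivity check standing in for the x_ia + x_jb - x_kc <= 1
    constraints: X before Y and Y before Z forces X before Z (same for after)."""
    for rel in ("before", "after"):
        for (x, y), (y2, z) in product(assign, repeat=2):
            if y == y2 and assign[(x, y)] == rel and assign[(y2, z)] == rel:
                if (x, z) in assign and assign[(x, z)] != rel:
                    return False
    return True

def best_global(scores):
    """Maximize sum of p_ij * x_ij over all consistent assignments."""
    best, best_score = None, float("-inf")
    for combo in product(RELATIONS, repeat=len(pairs)):
        assign = dict(zip(pairs, combo))  # one relation per pair (second constraint)
        if consistent(assign):
            total = sum(scores[p][r] for p, r in assign.items())
            if total > best_score:
                best, best_score = assign, total
    return best

# Greedy pairwise decisions would pick A-B before, B-C before, A-C after,
# which violates transitivity; joint inference flips the weakest decision.
print(best_global(scores))
```

Here the locally best label for (A, C) is overruled because the two more confident decisions transitively imply ''A before C''.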
  
Prior to running the two components, the set of training relations is expanded to create a more well-connected network of events. One way to expand it is to perform temporal reasoning over the document's time expressions (e.g. ''yesterday'' is before ''today'') to add new relations between times. Once new time-time relations are added, transitive closure is computed using transitive rules that create new connections in the network, such as:
 
  
''A simultaneous B'' <math>\and</math> ''A before C'' <math>\to</math> ''B before C''
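This closure step can be sketched as a fixed-point computation over relation triples. The sketch below uses only two inference rules and a simplified relation inventory; the paper's full rule table is larger:

```python
def close(relations):
    """Saturate a set of (x, relation, y) triples under two toy temporal
    inference rules, adding edges until nothing new can be derived."""
    rels = set(relations)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b) in rels:
            for (c, r2, d) in rels:
                # A before B  and  B before C  ->  A before C
                if r1 == r2 == "before" and b == c:
                    new.add((a, "before", d))
                # A simultaneous B  and  A before C  ->  B before C
                if r1 == "simultaneous" and r2 == "before" and a == c:
                    new.add((b, "before", d))
        if not new <= rels:
            rels |= new
            changed = True
    return rels

# The rule from the text: A simultaneous B and A before C yields B before C.
print(close({("A", "simultaneous", "B"), ("A", "before", "C")}))
```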
  
 
== Experimental Result ==
 
  
The experiment is done on the task of classifying temporal relations between events into ''before'', ''after'', or ''vague'' (unknown) relations on the [[UsesDataset::TimeBank Corpus]]. These are the core relations in the [http://www.timeml.org/tempeval/ TempEval-07] temporal ordering challenge.
Training and testing are done using 10-fold cross validation with micro-averaged accuracies. The method shows a 3.6% absolute increase in the accuracy of ''before/after'' classification over the local, pairwise classification model. This improvement is statistically significant (p < 0.000001, McNemar's test, 2-tailed).
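The significance figure comes from McNemar's test on the paired decisions of the two classifiers. With only the standard library, the exact two-tailed version can be sketched as follows; the discordant counts here are hypothetical, not taken from the paper:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-tailed McNemar test. b = items system 1 got right and
    system 2 got wrong, c = the reverse (concordant items cancel out).
    Under the null, the smaller count follows Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    # Double the one-tailed exact binomial tail probability.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical counts: 40 pairs fixed by joint inference vs. 10 broken by it.
print(mcnemar_exact(40, 10))
```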
Using time expression normalization to create new relations between time expressions, transitive closure over the original set of temporal relations in the corpus, and additional event-event temporal information from [[RelatedPaper::Bethard et. al., Timelines from text: Identification of syntactic temporal relations, 2007|Bethard et al. (2007)]], the method achieves an 81% increase in the number of relations in the corpus to train on.
== Discussion ==
Both the increased connectivity of the corpus and the global inference contributed to the improved performance. Global inference alone, on the original set of temporal relations in the corpus, shows no improvement over the pairwise classification model. This is due to the sparseness of the corpus: since tagging is done manually, the vast majority of possible relations are untagged, and global constraints cannot assist local decisions if the graph is not connected. This highlights the importance of time expression normalization and transitive closure in making the corpus more well connected prior to conducting global inference.
Unfortunately, the paper does not separate the improvement contributed by the increased connectivity from the improvement contributed by the global inference. Could the increased performance be attributed solely to the increased connectivity of the graph? A separate evaluation is needed on the accuracy of relations inferred using only pairwise classification and transitive closure (without global inference).
  
 
== Related papers ==
 
  
An interesting follow-up paper is [[RelatedPaper::Yoshikawa 2009 jointly identifying temporal relations with markov logic]], which uses Markov Logic instead of Integer Linear Programming to perform a ''softer'' (non-deterministic) joint inference.
  
Another related paper is [[RelatedPaper::Denis and Muller, Predicting Globally-Coherent Temporal Structures from Texts via Endpoint Inference and Graph Decomposition, IJCAI 2011]], which attempts to classify all types of temporal relations (not just ''before''/''after'') in the TimeBank Corpus. It first translates the 13 temporal interval relations to their end points, which makes the set of constraints much smaller for the Integer Linear Programming solver while preserving the same temporal information.

Latest revision as of 13:03, 29 September 2011
