Chambers and Jurafsky, Jointly combining implicit constraints improves temporal ordering, EMNLP 2008
Reviews of this paper
Citation
Jointly combining implicit constraints improves temporal ordering, by N. Chambers, D. Jurafsky. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2008.
Online version
This paper is available online [1].
Summary
Unlike earlier work on temporal ordering of events, which focuses on improving local, pairwise ordering decisions while ignoring possible temporal contradictions in the global space of events, this paper is one of the earliest works to use global constraints to better inform local decisions on the temporal ordering of events in text. Two types of global constraints are used: transitivity (A before B and B before C implies A before C) and time expression normalization (e.g. last Tuesday is before today).
The constraints are first used to create a more densely connected temporal network of events. They are then enforced over this network using Integer Linear Programming (ILP) to ensure that the local ordering decisions are globally consistent.
The experiments are done on the task of classifying temporal relations between events into before, after, or vague (unknown) on the TimeBank Corpus; these are the core relations in the TempEval-07 temporal ordering challenge. The paper shows a 3.6% absolute increase in the accuracy of before/after classification over the local, pairwise classification model.
Using time expression normalization to create new relations between time expressions, together with transitive closure over the original set of temporal relations in the corpus, the method achieves an 81% increase in the number of relations available for training.
Both the increased connectivity of the corpus and the global inference contribute to the improved performance. Global inference alone, applied to the original set of temporal relations in the corpus, shows no improvement over the pairwise classification model. This is due to the sparseness of the corpus: since tagging is done manually, the vast majority of possible relations are untagged, and global constraints cannot assist local decisions if the graph is not connected. This highlights the importance of time expression normalization and transitive closure for making the corpus better connected before conducting global inference.
Brief description of the method
The model has two components: (1) a pairwise classifier between events, and (2) a global constraint satisfaction layer that maximizes the confidence scores produced by the classifier.
The first component is a Support Vector Machine (SVM) classifier. Using features ranging from POS tags and lexical features surrounding the events to their tense and grammatical aspect, it computes probabilities of the temporal relations between pairs of events. These probabilities are then used as confidence scores when choosing an optimal global ordering.
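As a rough, hedged sketch of this first component (not the authors' code; the feature names and toy training pairs below are invented, and scikit-learn is assumed as the SVM implementation), a pairwise classifier that outputs per-relation probabilities might look like this:

<pre>
# Hedged sketch of the pairwise relation classifier (not the paper's code).
# Feature names and the toy training pairs are invented; scikit-learn is assumed.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

# Each instance describes one ordered pair of events with simplified features
# (lexical items, tense, whether the events share a sentence).
train_features = [
    {"e1_word": "said",      "e2_word": "left",     "e1_tense": "past",    "e2_tense": "past", "same_sentence": True},
    {"e1_word": "announced", "e2_word": "rose",     "e1_tense": "past",    "e2_tense": "past", "same_sentence": False},
    {"e1_word": "met",       "e2_word": "signed",   "e1_tense": "past",    "e2_tense": "past", "same_sentence": True},
    {"e1_word": "expects",   "e2_word": "reported", "e1_tense": "present", "e2_tense": "past", "same_sentence": False},
    {"e1_word": "says",      "e2_word": "fell",     "e1_tense": "present", "e2_tense": "past", "same_sentence": True},
    {"e1_word": "plans",     "e2_word": "approved", "e1_tense": "present", "e2_tense": "past", "same_sentence": False},
]
train_labels = ["before", "before", "before", "after", "after", "after"]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(train_features)

# probability=True makes the SVM output per-relation probabilities, which play
# the role of the confidence scores handed to the global ILP layer.
classifier = SVC(kernel="linear", probability=True, random_state=0)
classifier.fit(X, train_labels)

test = vectorizer.transform([{"e1_word": "plans", "e2_word": "signed",
                              "e1_tense": "present", "e2_tense": "past",
                              "same_sentence": False}])
print(dict(zip(classifier.classes_, classifier.predict_proba(test)[0])))
</pre>

The predicted probabilities here stand in for the pairwise confidence scores that the global layer below combines into a consistent ordering.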
In the second component, the ILP uses the following objective function:

<math>\max \sum_{i} \sum_{j} p_{ij}\, x_{ij} \,\!</math>

with the constraints:

<math>\forall i, j \;\; x_{ij} \in \{0, 1\} \,\!</math>

<math>\forall i \;\; x_{i1} + x_{i2} + \ldots + x_{im} = 1 \,\!</math>

<math>x_{ia} + x_{jb} - x_{kc} \le 1 \,\!</math>

where <math>x_{ij} \,\!</math> represents the <math>i \,\!</math>th pair of events classified into the <math>j \,\!</math>th relation of <math>m \,\!</math> relations, and <math>p_{ij} \,\!</math> is the classifier's confidence score for that assignment. The first constraint simply says that each variable must be 0 or 1. The second constraint says that a pair of events cannot have two relations at the same time. The third constraint is added for connected pairs of events <math>i, j, k \,\!</math>, for each transitivity condition that infers relation <math>c \,\!</math> given <math>a \,\!</math> and <math>b \,\!</math>.
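A minimal sketch of this global layer, assuming the PuLP library as the ILP solver (not the authors' implementation; the confidence scores and the single transitivity rule below are invented for illustration):

<pre>
# Hedged sketch of the global ILP layer using PuLP (not the authors' implementation).
# The confidence scores and the single transitivity rule are invented for illustration.
import pulp

relations = ["before", "after", "vague"]
# Hypothetical classifier confidences for three connected pairs:
# pair 0 = (A, B), pair 1 = (B, C), pair 2 = (A, C).
p = [
    {"before": 0.55, "after": 0.35, "vague": 0.10},   # A-B
    {"before": 0.50, "after": 0.40, "vague": 0.10},   # B-C
    {"before": 0.30, "after": 0.60, "vague": 0.10},   # A-C
]

prob = pulp.LpProblem("temporal_ordering", pulp.LpMaximize)
x = {(i, r): pulp.LpVariable(f"x_{i}_{r}", cat="Binary")
     for i in range(len(p)) for r in relations}

# Objective: maximize the total confidence of the chosen relations.
prob += pulp.lpSum(p[i][r] * x[(i, r)] for i in range(len(p)) for r in relations)

# Each pair is assigned exactly one relation.
for i in range(len(p)):
    prob += pulp.lpSum(x[(i, r)] for r in relations) == 1

# Transitivity: (A before B) and (B before C) implies (A before C),
# encoded as x_{0,before} + x_{1,before} - x_{2,before} <= 1.
prob += x[(0, "before")] + x[(1, "before")] - x[(2, "before")] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in range(len(p)):
    print(i, [r for r in relations if x[(i, r)].value() == 1])
</pre>

With these scores, taking the best relation independently for each pair would yield before/before/after, which violates the encoded transitivity constraint; the ILP instead flips the lowest-confidence decision (pair 1) so that the chosen relations remain globally consistent.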
Prior to running the two components, the set of training relations is expanded to create a better-connected network of events. One way to expand it is to perform temporal reasoning over the document's time expressions (e.g. yesterday is before today) to add new relations between times. Once new time-time relations are added, transitive closure is conducted through transitivity rules that create new connections in the network, such as:
''A simultaneous B'' <math>\and</math> ''A before C'' <math>\to</math> ''B before C''
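An illustrative sketch of this expansion step (the time values, event/time IDs, and the transitivity rules encoded below are hypothetical and cover only a small subset of the relations used in the paper): new time-time relations are read off the normalized time values, and then relations are propagated until no new ones can be inferred.

<pre>
# Hedged sketch of expanding the temporal network before classification and ILP.
# The time values, event/time IDs, and the transitivity rules below are hypothetical
# and cover only a small subset of the relations and rules used in the paper.
from datetime import date

# Normalized time expressions from one document, e.g. "yesterday" and "today".
times = {"t1": date(2008, 10, 24), "t2": date(2008, 10, 25)}

# Hand-tagged relations: event e1 happens during t1, event e2 during t2.
relations = {("e1", "simultaneous", "t1"), ("e2", "simultaneous", "t2")}

# Step 1: add time-time relations by comparing normalized values.
for a, da in times.items():
    for b, db in times.items():
        if da < db:
            relations.add((a, "before", b))

# Step 2: transitive closure, applied until no new relation can be inferred.
def close(rels):
    rels = set(rels)
    changed = True
    while changed:
        changed = False
        # "simultaneous" is treated as symmetric.
        for (a, r, b) in list(rels):
            if r == "simultaneous" and (b, r, a) not in rels:
                rels.add((b, r, a))
                changed = True
        # Chain two relations through a shared middle element: any mix of
        # "before"/"simultaneous" containing at least one "before" implies "before",
        # e.g. A simultaneous B and A before C  ->  B before C.
        for (a, r1, b) in list(rels):
            for (b2, r2, c) in list(rels):
                if b != b2 or a == c:
                    continue
                if "before" in (r1, r2) and {r1, r2} <= {"before", "simultaneous"}:
                    if (a, "before", c) not in rels:
                        rels.add((a, "before", c))
                        changed = True
    return rels

expanded = close(relations)
print(sorted(expanded - relations))  # newly inferred relations, e.g. ('e1', 'before', 'e2')
</pre>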
Experimental Result
The approach is evaluated on the TimeBank Corpus, classifying event pairs into before, after, or vague relations (the core relations of the TempEval-07 challenge). Combining corpus expansion with global ILP inference gives a 3.6% absolute improvement in before/after classification accuracy over the local, pairwise classifier, and the expansion step alone increases the number of training relations in the corpus by 81%. Global inference applied to the original, sparsely annotated corpus shows no gain over the pairwise model, confirming that the added connectivity is what makes the global constraints effective.