Denis and Muller, Predicting Globally-Coherent Temporal Structures from Texts via Endpoint Inference and Graph Decomposition, IJCAI 2011


== Citation ==

Predicting Globally-Coherent Temporal Structures from Texts via Endpoint Inference and Graph Decomposition, by P. Denis, P. Muller. In Proceedings of IJCAI, 2011.

== Online version ==

This paper is available online [1].

== Summary ==

Like Yoshikawa et al. (2009), this paper is a follow-up to Chambers and Jurafsky (2008) that focuses on using global inference to improve local, pairwise temporal ordering of events in text.

Similar to Chambers and Jurafsky (2008), this paper formulates the global inference of temporal ordering as a constraint optimization problem, which can then be solved exactly using Integer Linear Programming (ILP). Chambers and Jurafsky (2008), however, restrict themselves to predicting only precedence relations (before/after), since ILP becomes impractical when considering all possible interval relations, due to the combinatorial number of variables and constraints needed to represent them. The main contribution of this paper is to reduce the number of variables and constraints needed to represent all interval relations by translating the temporal intervals into their endpoints, which preserves the same temporal information while keeping the problem tractable. An additional gain in efficiency is achieved by decomposing the temporal graph and enforcing temporal coherence on subsets of the graph.
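
As a rough illustration of the decomposition idea only (the paper's actual decomposition strategy is not described on this page), the sketch below splits the set of event pairs to be labelled into connected components, so that temporal coherence would only need to be enforced within each component. All names and the example pairs are hypothetical.

<pre>
# Illustrative sketch: group events into connected components of the
# temporal graph induced by the candidate pairs, so each component can be
# handled as a separate, smaller inference problem.
from collections import defaultdict, deque

def connected_components(pairs):
    """Return the groups of events connected by candidate pairs."""
    graph = defaultdict(set)
    for a, b in pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(graph[n] - comp)
        seen |= comp
        components.append(comp)
    return components

pairs = [("e1", "e2"), ("e2", "e3"), ("e4", "e5")]  # hypothetical event pairs
print(connected_components(pairs))  # two components: {e1, e2, e3} and {e4, e5}
</pre>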

The proposed method is evaluated on the TimeBank Corpus and achieves accuracy similar to that of Tatu and Srikanth (2008), who also infer temporal relations while preserving consistency. Tatu and Srikanth (2008), however, only classify 6 of the 13 possible temporal relations, whereas this paper classifies all 13.

== Brief description of the method ==

The paper proposes a two-step method for the temporal ordering of events: (1) learn a classifier which outputs a score for each event pair and their temporal relation, and (2) combine these local scores with coherence constraints on the temporal graph within a global optimization problem solved using [[UsesMethod::Integer Linear Programming]] (ILP).
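
As a minimal sketch of step (2), the snippet below combines hypothetical local classifier scores with a single transitivity-style coherence constraint in a small ILP, using the PuLP library. The scores, relation set, and library choice are illustrative assumptions; the paper's actual ILP is formulated over endpoint relations with a full set of coherence constraints and is considerably larger.

<pre>
# Sketch of step (2): pick one relation per event pair, maximizing the
# local scores subject to an example coherence (transitivity) constraint.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

RELS = ["before", "overlap", "after"]
PAIRS = [("A", "B"), ("B", "C"), ("A", "C")]

# hypothetical scores from the local pairwise classifier
score = {
    ("A", "B"): {"before": 0.8, "overlap": 0.1, "after": 0.1},
    ("B", "C"): {"before": 0.6, "overlap": 0.3, "after": 0.1},
    ("A", "C"): {"before": 0.4, "overlap": 0.2, "after": 0.4},
}

prob = LpProblem("temporal_ordering", LpMaximize)
x = {(p, r): LpVariable(f"x_{p[0]}_{p[1]}_{r}", cat=LpBinary)
     for p in PAIRS for r in RELS}

# objective: total score of the selected relations
prob += lpSum(score[p][r] * x[(p, r)] for p in PAIRS for r in RELS)

# each pair gets exactly one relation
for p in PAIRS:
    prob += lpSum(x[(p, r)] for r in RELS) == 1

# example coherence constraint:
# before(A,B) and before(B,C) together imply before(A,C)
prob += (x[(("A", "B"), "before")] + x[(("B", "C"), "before")]
         - x[(("A", "C"), "before")]) <= 1

prob.solve()
chosen = {p: r for p in PAIRS for r in RELS if x[(p, r)].value() > 0.5}
print(chosen)
</pre>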

ILP inference is NP-hard, and its cost is sensitive to the number of variables and constraints; in this case, these numbers are exponential in the number of temporal relations used. For ILP to handle all 13 temporal relations specified in the [[UsesDataset::TimeBank Corpus]], the paper proposes to first translate these interval relations into relations between their endpoints, thus reducing the number of relations from 13 to 5. This reduction results in 50 times fewer constraints being needed to represent global temporal coherence, while preserving the same temporal information. The translation from each pair of events' interval relations to endpoint relations is shown in the table below (inverses and the simultaneous relation are not shown):

[[File:allen.png]]

Here the interval relations are given in [[RelatedPaper::Allen_1983_Maintaining_knowledge_about_temporal_intervals|Allen (1983)]]'s notation: ''b'' as in ''BEFORE'', ''m'' as in ''MEET'' (''IBEFORE'', or immediately before), ''o'' as in ''OVERLAP'', ''s'' as in ''START'', ''d'' as in ''DURING'', and ''f'' as in ''FINISH''. For each event pair <math>({e_1}, {e_2})\,\!</math>, four relations between their endpoints are considered: <math>({e_1^-}, {e_2^-})</math>, <math>({e_1^+}, {e_2^-})</math>, <math>({e_1^-}, {e_2^+})</math>, and <math>({e_1^+}, {e_2^+})</math>, as shown in the table.
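
The sketch below spells out this interval-to-endpoint translation for the relations in the table, using the standard endpoint semantics of Allen's relations; it is an illustration of the encoding, not code from the paper. Only the defining constraints are listed: the remaining endpoint relations follow from the fact that every event interval satisfies <math>e^- < e^+</math> and from transitivity.

<pre>
# Standard endpoint semantics of the Allen relations used above: each
# interval relation between events e1 = [e1-, e1+] and e2 = [e2-, e2+]
# becomes a small set of point constraints (<, =) over their endpoints.
ENDPOINT_CONSTRAINTS = {
    # relation: list of (endpoint of e1/e2, point relation, endpoint of e1/e2)
    "b": [("e1+", "<", "e2-")],                                            # BEFORE
    "m": [("e1+", "=", "e2-")],                                            # MEET / IBEFORE
    "o": [("e1-", "<", "e2-"), ("e2-", "<", "e1+"), ("e1+", "<", "e2+")],  # OVERLAP
    "s": [("e1-", "=", "e2-"), ("e1+", "<", "e2+")],                       # START
    "d": [("e2-", "<", "e1-"), ("e1+", "<", "e2+")],                       # DURING
    "f": [("e2-", "<", "e1-"), ("e1+", "=", "e2+")],                       # FINISH
}

def endpoint_constraints(relation):
    """Return the point constraints implied by an Allen interval relation."""
    return ENDPOINT_CONSTRAINTS[relation]

print(endpoint_constraints("b"))  # [('e1+', '<', 'e2-')]
</pre>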

The paper uses a [[Markov_Logic_Networks|Markov Logic Network]] to represent the constraints of temporal consistency. The three hidden predicates, corresponding to the temporal relations to be predicted, are:

* relE2T(''e'',''t'',''r''), representing the temporal relation ''r'' between an event ''e'' and a time expression ''t''
* relDCT(''e'',''r''), representing the temporal relation ''r'' between an event ''e'' and the document creation time (DCT)
* relE2E(''e1'',''e2'',''r''), representing the temporal relation ''r'' between two events ''e1'' and ''e2'' in adjacent sentences

The observed predicates, corresponding to information that is given, are:

* word, syntactic, and lexical feature predicates; for example, the predicate tense(''e'',''t'') denotes the tense ''t'' of an event ''e''
* relT2T(''t1'',''t2'',''r''), denoting the temporal relation ''r'' between two time expressions ''t1'' and ''t2''
* dctOrder(''t'',''r''), representing the temporal relation ''r'' between a time expression ''t'' and the DCT

All temporal predicates are illustrated in the figure below, where dashed lines indicate observed predicates:

[[File:TemporalPredicates.png|400px]]

From these predicates, several formulae representing constraints of temporal consistency are constructed and given as input to the Markov Logic Network. The formulae fall into two classes:

* local formulae - formulae that only consider the predicates of a single event-event, event-time, or event-DCT pair, for example:
: <math>tense(e1,past) \and tense(e2,future) \Rightarrow relE2E(e1,e2,before)\,\!</math>
* global formulae - formulae that involve two or more predicates at the same time and consider the three prediction tasks (event-event, event-time, and event-DCT temporal relations) simultaneously, for example:
: <math>dctOrder(t1,before) \and relDCT(e1,after) \Rightarrow relE2T(e1,t1,after)\,\!</math>
== Experimental Result ==

The experiment is done using data from the TempEval temporal ordering challenge, with the tasks of classifying temporal relations between events and time expressions (Task A), between events and the DCT (Task B), and between events in two consecutive sentences (Task C). Temporal relations are classified into one of six classes: BEFORE, OVERLAP, AFTER, BEFORE-OR-OVERLAP, OVERLAP-OR-AFTER, and VAGUE. Training and inference algorithms are provided by Markov thebeast, a Markov Logic interpreter tailored for NLP applications. Accuracy for measuring performance is defined as:

: <math>Accuracy = \frac{c_A + c_B + c_C}{g_A + g_B + g_C}\,\!</math>

where <math>c_A</math>, <math>c_B</math>, and <math>c_C</math> are the numbers of correctly identified labels in each task, and <math>g_A</math>, <math>g_B</math>, and <math>g_C</math> are the numbers of gold labels of each task.
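
For illustration, with hypothetical per-task counts (the numbers below are made up, not results from the paper), the overall accuracy is the micro-average across the three tasks:

<pre>
# Hypothetical counts of correct and gold labels per task (A, B, C),
# illustrating the micro-averaged accuracy defined above.
correct = {"A": 40, "B": 55, "C": 48}
gold = {"A": 60, "B": 70, "C": 65}
accuracy = sum(correct.values()) / sum(gold.values())
print(f"overall accuracy = {accuracy:.3f}")  # 143 / 195 = 0.733
</pre>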

The paper shows that by incorporating global constraints that hold between the temporal relations predicted in Tasks A, B, and C, the accuracy of all three tasks can be improved significantly. For two of the three tasks, the approach achieves the best accuracy, outperforming the other approaches by at least 2%. For Task B, its accuracy is lower than that of a rule-based approach, but higher than that of all other machine learning approaches.

== Related papers ==

The approach in this paper is similar to that of earlier work by Chambers and Jurafsky (2008), which proposes a global framework based on Integer Linear Programming (ILP) to jointly infer temporal relations between events. Chambers and Jurafsky (2008) show that adding global inference improves the accuracy of the inferred temporal relations. However, they focus only on event-event temporal relations, while this paper also jointly predicts the temporal order between events and time expressions, and between events and the document creation time.

Secondly, Chambers and Jurafsky (2008) combine the outputs of local classifiers within an ILP framework, while this paper uses Markov Logic Networks, which represent global constraints through the addition of weighted first-order logic formulae. The advantage is that this allows the representation of non-deterministic rules that tend to hold between temporal relations but do not always have to. For example, if A happens before B and B overlaps with C, then there is a good chance that A also happens before C, but this is not always the case. The learned weights of the rules allow for soft enforcement of the constraints.
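
As a rough, purely illustrative sketch of this soft-constraint behaviour (not the authors' implementation, and not the actual Markov Logic inference machinery), the snippet below scores candidate labellings using hypothetical local scores and a single weighted rule: violating the rule only lowers the score of a labelling, rather than ruling it out as a hard ILP constraint would.

<pre>
# Soft constraint illustration: a violated weighted rule subtracts its
# weight from a labelling's score instead of making it infeasible.
from itertools import product

RELS = ["before", "overlap", "after"]
PAIRS = [("A", "B"), ("B", "C"), ("A", "C")]

# hypothetical local classifier scores
local = {
    ("A", "B"): {"before": 0.8, "overlap": 0.1, "after": 0.1},
    ("B", "C"): {"before": 0.2, "overlap": 0.7, "after": 0.1},
    ("A", "C"): {"before": 0.5, "overlap": 0.2, "after": 0.3},
}

# soft rule: before(A,B) and overlap(B,C) => before(A,C), with weight 1.5
RULE_WEIGHT = 1.5

def score(labelling):
    s = sum(local[p][labelling[p]] for p in PAIRS)
    if (labelling[("A", "B")] == "before"
            and labelling[("B", "C")] == "overlap"
            and labelling[("A", "C")] != "before"):
        s -= RULE_WEIGHT  # pay a penalty, but the labelling stays feasible
    return s

best = max((dict(zip(PAIRS, combo)) for combo in product(RELS, repeat=3)),
           key=score)
print(best)
</pre>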