Yoshikawa 2009 jointly identifying temporal relations with markov logic

== Citation ==

Jointly Identifying Temporal Relations with Markov Logic, by K. Yoshikawa, S. Riedel, M. Asahara, and Y. Matsumoto. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, 2009.

== Online version ==

This paper is available online [1].
 
== Summary ==
 
This [[Category::paper]] is a follow-up to [[RelatedPaper::Chambers and Jurafsky, Jointly combining implicit constraints improves temporal ordering, EMNLP 2008|Chambers and Jurafsky (2008)]] and, like that work, focuses on using global inference to improve local, pairwise [[AddressesProblem::temporal ordering]] of events in text. The three types of temporal relations considered are those between events in adjacent sentences, between events and time expressions in the same sentence, and between events in a document and the document creation time (DCT). Rather than predicting each type in isolation, the paper proposes to use [[UsesMethod::Markov Logic Networks]] to jointly identify relations of all three types simultaneously, while respecting the logical constraints that hold between these temporal relations.
  
 
The experiment is done on the [http://www.timeml.org/tempeval/ TempEval-07] data, for the task of classifying temporal relations into one of 6 classes: ''BEFORE'' (e.g. event A is ''before'' event B), ''OVERLAP'', ''AFTER'', ''BEFORE-OR-OVERLAP'', ''OVERLAP-OR-AFTER'', and ''VAGUE'' (unknown). The paper shows an accuracy increase of 2% for all three types of relations (event-event, event-time, and event-DCT) compared to other machine-learning-based approaches.
 
 
== Brief description of the method ==
 
The paper uses a [[Markov_Logic_Networks|Markov Logic Network]] to represent constraints of temporal consistency. The three hidden predicates, corresponding to the temporal relations to be predicted, are:
* relE2T(''e'',''t'',''r''), representing the temporal relation ''r'' between an event ''e'' and a time expression ''t''
* relDCT(''e'',''r''), representing the temporal relation ''r'' between an event ''e'' and the DCT
* relE2E(''e1'',''e2'',''r''), representing the temporal relation ''r'' between two events ''e1'' and ''e2'' in adjacent sentences
  
The observed predicates, corresponding to information that is given, are:
  
* word, syntactic, and lexical feature predicates; for example, the predicate tense(''e'',''t'') denotes the tense ''t'' of an event ''e''
* relT2T(''t1'',''t2'',''r''), denoting the temporal relation ''r'' between two time expressions ''t1'' and ''t2''
* dctOrder(''t'',''r''), representing the temporal relation ''r'' between a time expression ''t'' and the DCT
  
An illustration of all the temporal predicates is given in the figure below, where dashed lines indicate observed predicates:
  
[[File:TemporalPredicates.png|400px]]
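
To make the predicate setup concrete, here is a minimal toy sketch of how the observed and hidden ground atoms for a single document fragment might be represented in Python; the atoms, identifiers, and data structures below are purely illustrative and are not the paper's implementation (which is built on a Markov Logic engine).

<pre>
# Toy sketch only: ground atoms for the predicates described above,
# represented as plain Python tuples.

RELATIONS = ["BEFORE", "OVERLAP", "AFTER",
             "BEFORE-OR-OVERLAP", "OVERLAP-OR-AFTER", "VAGUE"]

# Observed atoms (given): linguistic features and time-time orderings.
observed = {
    ("tense", "e1", "past"),           # tense(e, t)
    ("tense", "e2", "future"),
    ("dctOrder", "t1", "BEFORE"),      # time expression t1 is before the DCT
    ("relT2T", "t1", "t2", "BEFORE"),  # t1 is before t2
}

# Hidden atoms (to be predicted jointly), one per task:
hidden = [
    ("relE2T", "e1", "t1"),  # event-time relation   (Task A)
    ("relDCT", "e1"),        # event-DCT relation    (Task B)
    ("relE2E", "e1", "e2"),  # event-event relation  (Task C)
]

# A joint assignment maps every hidden atom to one of the six relations.
assignment = {atom: "VAGUE" for atom in hidden}
assert all(label in RELATIONS for label in assignment.values())
print(assignment)
</pre>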
  
From these predicates, formulae that represent constraints of temporal consistency are constructed and given as input to the Markov Logic Network. The formulae are divided into two classes:
  
* local formulae - formulae that only consider the predicates of a single event-event, event-time, or event-DCT pair, for example:
: <math>tense(e1,past) \and tense(e2,future) \Rightarrow relE2E(e1,e2,before)\,\!</math>
* global formulae - formulae that involve two or more predicates at the same time and thereby couple the three prediction tasks (event-event, event-time, and event-DCT); for example (see also the sketch after this list):
: <math>dctOrder(t1,before) \and relDCT(e1,after) \Rightarrow relE2T(e1,t1,after)\,\!</math>
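
As a rough, self-contained illustration of how such weighted formulae score a joint assignment, here is a Python sketch; the atoms and weights are invented for illustration, and in the paper the weights are learned by the Markov Logic engine rather than set by hand.

<pre>
# Toy sketch of weighted local and global formulae (weights invented;
# in Markov Logic they are learned from data).

observed = {("tense", "e1", "past"), ("tense", "e2", "future"),
            ("dctOrder", "t1", "BEFORE")}

def local_formula(obs, y):
    # tense(e1,past) AND tense(e2,future) => relE2E(e1,e2,BEFORE)
    if ("tense", "e1", "past") in obs and ("tense", "e2", "future") in obs:
        return y[("relE2E", "e1", "e2")] == "BEFORE"
    return True  # antecedent false: formula vacuously satisfied

def global_formula(obs, y):
    # dctOrder(t1,BEFORE) AND relDCT(e1,AFTER) => relE2T(e1,t1,AFTER)
    if ("dctOrder", "t1", "BEFORE") in obs and y[("relDCT", "e1")] == "AFTER":
        return y[("relE2T", "e1", "t1")] == "AFTER"
    return True

WEIGHTED_FORMULAE = [(1.5, local_formula), (2.0, global_formula)]

def score(obs, y):
    # Markov-Logic-style score: sum of weights of satisfied formulae.
    # Joint inference searches for the assignment y that maximises this.
    return sum(w for w, f in WEIGHTED_FORMULAE if f(obs, y))

consistent = {("relE2E", "e1", "e2"): "BEFORE",
              ("relDCT", "e1"): "AFTER",
              ("relE2T", "e1", "t1"): "AFTER"}
inconsistent = dict(consistent)
inconsistent[("relE2T", "e1", "t1")] = "BEFORE"

print(score(observed, consistent))    # 3.5: both formulae satisfied
print(score(observed, inconsistent))  # 1.5: global formula violated
</pre>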
  
== Experimental Result ==
 
 
The experiment is done using data from the [http://www.timeml.org/tempeval/ TempEval] temporal ordering challenge, with the tasks of classifying temporal relations between events and time expressions (Task A), between events and the DCT (Task B), and between events in two consecutive sentences (Task C). Temporal relations are classified into one of 6 classes: ''BEFORE'', ''OVERLAP'', ''AFTER'', ''BEFORE-OR-OVERLAP'', ''OVERLAP-OR-AFTER'', and ''VAGUE''. Training and inference algorithms are provided by [http://code.google.com/p/thebeast/ Markov thebeast], a Markov Logic interpreter tailored for NLP applications. The accuracy used to measure performance is defined as:
  
: <math>\frac{C_a + C_b + C_c}{G_a + G_b + G_c}\,\!</math>
  
where <math>C_a</math>, <math>C_b</math>, and <math>C_c</math> are the numbers of correctly identified labels in each task, and <math>G_a</math>, <math>G_b</math>, and <math>G_c</math> are the numbers of gold labels in each task.
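
As a quick illustration of this metric, the snippet below computes the combined accuracy from per-task counts; the counts are invented and are not results from the paper.

<pre>
# Toy illustration of the combined accuracy; the counts are invented
# and are not the paper's actual results.
correct = {"A": 120, "B": 250, "C": 140}   # C_a, C_b, C_c
gold    = {"A": 169, "B": 331, "C": 258}   # G_a, G_b, G_c

accuracy = sum(correct.values()) / sum(gold.values())
print("combined accuracy = %.3f" % accuracy)
</pre>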
  
The paper shows that by incorporating global constraints that hold between the temporal relations predicted in Tasks A, B, and C, the accuracy of all three tasks can be improved significantly. For two of the three tasks, the approach in this paper achieves the best accuracy, at least 2% higher than that of other approaches. For Task B, its accuracy is lower than that of a rule-based approach, but higher than that of all other machine learning approaches.
  
 
== Related papers ==
 
The approach in this paper is similar to that of earlier work by [[Chambers_and_Jurafsky,_Jointly_combining_implicit_constraints_improves_temporal_ordering,_EMNLP_2008|Chambers and Jurafsky (2008)]], which proposes a global framework based on [[Integer_Linear_Programming|Integer Linear Programming (ILP)]] to jointly infer temporal relations between events. Chambers and Jurafsky (2008) show that adding global inference improves the accuracy of the inferred temporal relations; however, they focus only on event-event temporal relations, whereas this paper also jointly predicts temporal order between events and time expressions, and between events and the document creation time.
  
Secondly, [[Chambers_and_Jurafsky,_Jointly_combining_implicit_constraints_improves_temporal_ordering,_EMNLP_2008|Chambers and Jurafsky (2008)]] combine the outputs of local classifiers within an [[Integer_Linear_Programming|ILP]] framework, while this paper uses [[Markov_Logic_Networks|Markov Logic Networks]], which represent global constraints as weighted first-order logic formulae. The advantage is that this allows non-deterministic rules to be represented: rules that tend to hold between temporal relations but do not always have to. For example, if A happens before B and B overlaps with C, then there is a good chance that A also happens before C, but this is not always the case. The learned weights of the rules allow for ''soft'' enforcement of the constraints.
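
To illustrate the difference, here is a small toy sketch contrasting a hard, ILP-style constraint with a soft, weighted one for the transitivity example above; the rule encoding and the weight are invented purely for illustration.

<pre>
# Hard vs. soft enforcement of: before(A,B) AND overlap(B,C) => before(A,C).
# The weight below is invented for illustration.

def hard_ok(r_ab, r_bc, r_ac):
    # ILP-style hard constraint: violating assignments are simply forbidden.
    if r_ab == "BEFORE" and r_bc == "OVERLAP":
        return r_ac == "BEFORE"
    return True

def soft_score(r_ab, r_bc, r_ac, weight=0.8):
    # Markov-Logic-style soft constraint: violations are still allowed,
    # they just lower the score of the joint assignment.
    if r_ab == "BEFORE" and r_bc == "OVERLAP" and r_ac != "BEFORE":
        return 0.0
    return weight

print(hard_ok("BEFORE", "OVERLAP", "OVERLAP"))     # False: ruled out
print(soft_score("BEFORE", "OVERLAP", "OVERLAP"))  # 0.0: allowed, penalised
print(soft_score("BEFORE", "OVERLAP", "BEFORE"))   # 0.8: preferred
</pre>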

