Yoshikawa 2009 jointly identifying temporal relations with markov logic

From Cohen Courses

Reviews of this paper

Citation

Jointly Identifying Temporal Relations with Markov Logic, by K. Yoshikawa, S. Riedel, M. Asahara, and Y. Matsumoto. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, 2009.

Online version

This paper is available online [1].

Summary

This paper is a follow-up to Chambers and Jurafsky (2008) and focuses on using global inference to improve local, pairwise temporal ordering of events in text. Instead of inferring the three types of relations between events (event-event, event-time, and event-document creation time) in isolation, the paper proposes to use Markov Logic Networks to jointly identify relations of all three types simultaneously while respecting logical constraints between these temporal relations.

Unlike earlier work on the temporal ordering of events, which focuses on improving local, pairwise ordering while ignoring possible temporal contradictions in the global space of events, this paper is one of the earliest works to use global constraints to better inform local decisions on the temporal ordering of events in text. Two types of global constraints are used: transitivity (A before B and B before C implies A before C) and time expression normalization (e.g. last Tuesday is before today).

The constraints are first used to create a more densely connected temporal network of events. They are then enforced over this network using Integer Linear Programming to ensure global consistency of the local orderings.

The experiment is done on the task of classifying temporal relations between events into before, after, or vague (unknown) on the TimeBank corpus. These are the core relations in the TempEval-07 temporal ordering challenge. The paper shows a 3.6% absolute increase in the accuracy of before/after classification over the local, pairwise classification model.

By using time expression normalization to create new relations between time expressions, together with transitive closure over the original set of temporal relations in the corpus, the method obtains an 81% increase in the number of relations available for training.

Both the increased connectivity of the corpus and the global inference contribute to the improved performance. Global inference alone, applied to the original set of temporal relations in the corpus, yields no improvement over the pairwise classification model. This is due to the sparseness of the corpus: since tagging is done manually, the vast majority of possible relations are untagged, and global constraints cannot assist local decisions if the graph is not connected. This highlights the importance of time expression normalization and transitive closure in making the corpus better connected prior to conducting global inference.

Brief description of the method

The model has two components: (1) a pairwise classifier between events, and (2) a global constraint satisfaction layer that maximizes the confidence scores produced by the classifier.

In the first component, a Support Vector Machine (SVM) classifier is used. Using features ranging from POS tags and lexical features of the words surrounding the event to tense and grammatical aspect features of the events, probabilities of temporal relations between pairs of events are computed. These scores are then used as confidence scores to choose an optimal global ordering.
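
As a rough illustration of this first component, the following sketch trains a pairwise classifier whose calibrated probabilities serve as the confidence scores. It assumes scikit-learn, and the feature extractor and event representation are hypothetical, not the paper's actual feature set.

 # Pairwise relation classifier sketch (hypothetical features and event representation).
 from sklearn.feature_extraction import DictVectorizer
 from sklearn.svm import SVC

 def extract_features(event_pair):
     """Toy features: POS, surface word, tense, and aspect of the two events."""
     e1, e2 = event_pair
     return {
         "pos_1": e1["pos"], "pos_2": e2["pos"],
         "word_1": e1["word"], "word_2": e2["word"],
         "tense_1": e1["tense"], "tense_2": e2["tense"],
         "aspect_1": e1["aspect"], "aspect_2": e2["aspect"],
     }

 def train_pairwise_classifier(event_pairs, labels):
     """labels: one relation label (e.g. BEFORE / AFTER / VAGUE) per event pair."""
     vec = DictVectorizer()
     X = vec.fit_transform(extract_features(p) for p in event_pairs)
     clf = SVC(probability=True)  # probability estimates become confidence scores
     clf.fit(X, labels)
     return vec, clf

 def relation_confidences(vec, clf, event_pair):
     """Return {relation: probability} for one pair; these scores feed the global layer."""
     X = vec.transform([extract_features(event_pair)])
     return dict(zip(clf.classes_, clf.predict_proba(X)[0]))

These per-pair distributions are what the second component consumes as its objective coefficients.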

In the second component, the ILP uses the following objective function:

 \max \sum_i \sum_j p_{ij} \, x_{ij}

with the constraints:

 x_{ij} \in \{0, 1\} \quad \forall i, j
 \sum_j x_{ij} = 1 \quad \forall i
 x_{i j_1} + x_{k j_2} - x_{l j_3} \le 1

where x_{ij} is an indicator variable that is 1 when the i-th pair of events is assigned the j-th of the m relations, and p_{ij} is the classifier's confidence score for that assignment.

The first constraint simply says that each variable must be 0 or 1. The second constraint says that a pair of events cannot have two relations at the same time. The third constraint is added for connected pairs of events i, k, and l, for each transitivity condition that infers relation j_3 for pair l given relation j_1 for pair i and relation j_2 for pair k.
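
A minimal sketch of this global step, written with the PuLP modeling library (an assumption; the paper does not specify an implementation), with the confidence scores and transitivity triples supplied by the caller:

 # Global ILP over the pairwise confidence scores (illustrative, not the authors' code).
 import pulp

 def global_ordering(scores, transitivity_triples):
     """scores[i][j]: confidence that pair i holds relation j.
     transitivity_triples: (i, j1, k, j2, l, j3) meaning relation j1 on pair i
     and relation j2 on pair k imply relation j3 on pair l."""
     prob = pulp.LpProblem("temporal_ordering", pulp.LpMaximize)
     # Binary variables encode the 0/1 constraint directly.
     x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
          for i in scores for j in scores[i]}

     # Objective: total confidence of the chosen relations.
     prob += pulp.lpSum(scores[i][j] * x[i, j] for (i, j) in x)

     # Each pair of events receives exactly one relation.
     for i in scores:
         prob += pulp.lpSum(x[i, j] for j in scores[i]) == 1

     # Transitivity: relation j1 on pair i and j2 on pair k force j3 on pair l.
     for (i, j1, k, j2, l, j3) in transitivity_triples:
         prob += x[i, j1] + x[k, j2] - x[l, j3] <= 1

     prob.solve(pulp.PULP_CBC_CMD(msg=False))
     return {i: max(scores[i], key=lambda j: pulp.value(x[i, j])) for i in scores}

The returned dictionary maps each pair of events to the single relation chosen by the globally consistent solution.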

Prior to running the two components, the set of training relations is expanded to create a better-connected network of events. One way to expand it is to perform temporal reasoning over the document's time expressions (e.g. yesterday is before today) to add new relations between times. Once new time-time relations are added, transitive closure is conducted through transitive rules that create new connections in the network, such as:

A simultaneous B and A before C implies B before C
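
A minimal sketch of this expansion step, assuming relations are stored as (argument, relation, argument) triples and using a small illustrative rule set (the paper's actual rule inventory is larger):

 # Densify the temporal graph: order normalized time expressions, then close transitively.
 def time_time_relations(timexes):
     """timexes: {timex_id: normalized ISO date string}; link every earlier time to every later one."""
     rels = set()
     for t1, d1 in timexes.items():
         for t2, d2 in timexes.items():
             if d1 < d2:
                 rels.add((t1, "before", t2))
     return rels

 def transitive_closure(relations):
     """relations: set of (a, rel, b) triples; 'simultaneous' is stored in both directions,
     so the rule above (A simultaneous B and A before C implies B before C) matches the
     chain (B, simultaneous, A), (A, before, C)."""
     rules = {
         ("before", "before"): "before",
         ("simultaneous", "before"): "before",
         ("simultaneous", "simultaneous"): "simultaneous",
     }
     closed = set(relations)
     added = True
     while added:
         added = False
         for (a, r1, b) in list(closed):
             for (b2, r2, c) in list(closed):
                 if b == b2 and a != c and (r1, r2) in rules:
                     triple = (a, rules[(r1, r2)], c)
                     if triple not in closed:
                         closed.add(triple)
                         added = True
     return closed

Both the new time-time links and the closure output are added to the training set before the classifier and the global layer are run.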

Experimental Result

The experiment is done using data from the TempEval temporal ordering challenge, with the tasks of classifying temporal relations between events and time expressions in the same sentence (Task A), between events in a document and the document creation time (Task B), and between events in two consecutive sentences (Task C). Temporal relations are classified into one of six classes: BEFORE, OVERLAP, AFTER, BEFORE-OR-OVERLAP, OVERLAP-OR-AFTER, and VAGUE.

The paper shows that by incorporating global constraints that hold between the temporal relations predicted in Tasks A, B, and C, the accuracy on all three tasks can be improved significantly. On two of the three tasks, the approach in this paper achieves the best accuracy, outperforming other approaches by at least 2%. On Task B, its accuracy is lower than that of a rule-based approach; however, it is better than that of all other machine learning approaches.

Related papers

The approach in this paper is similar to that of earlier work by Chambers and Jurafsky (2008), which proposes a global framework based on Integer Linear Programming (ILP) to jointly infer temporal relations between events. Chambers and Jurafsky (2008) show that adding global inference improves the accuracy of the inferred temporal relations. However, they focus only on event-event temporal relations, while this paper also jointly predicts the temporal order between events and time expressions, and between events and the document creation time.

Secondly, Chambers and Jurafsky (2008) combine the output of local classifiers using an ILP framework, while this paper uses Markov Logic Networks, which represent global constraints through the addition of weighted first-order logic formulae. The advantage is that this allows non-deterministic rules to be represented: rules that tend to hold between temporal relations but do not always have to. For example, if A happens before B and B overlaps with C, then there is a good chance that A also happens before C, but this is not always the case. The learned weights of the rules allow for soft enforcement of the constraints.
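
For concreteness, the soft version of such a rule can be written as a weighted first-order formula (the weight w is learned from data; the predicate and variable names here are only illustrative):

 w : before(e1, e2) ∧ overlap(e2, e3) ⇒ before(e1, e3)

In Markov Logic, the probability of a joint assignment y of all relations given the observed text x is proportional to exp(Σ_i w_i n_i(x, y)), where n_i(x, y) counts the satisfied groundings of formula i. An assignment that violates a soft rule is therefore only penalized, not ruled out, while deterministic rules can be given effectively infinite weight so that violating assignments receive zero probability.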