Roth and Yih, ICML 2005. Integer Linear Programming Inference for Conditional Random Fields


Citation

Dan Roth and Wen-tau Yih. 2005. Integer Linear Programming Inference for Conditional Random Fields. In Proceedings of the International Conference on Machine Learning (ICML '05). ACM, New York, NY, USA.

Online Version

Online version

Summary

This paper presents an alternative approach to inference in conditional random fields (CRFs) using integer linear programming (ILP). The standard Viterbi algorithm, being based on dynamic programming, cannot in general efficiently incorporate non-local and non-sequential constraints over the output sequence. The authors propose an ILP-based inference procedure and extend CRF models to naturally and efficiently support general constraint structures. When only sequential constraints are present, the procedure reduces to a simple linear program as the inference process.

Method

The efficiency of the CRF approach depends heavily on its Markov property: given the observation, the label of a token is assumed to depend only on the labels of its adjacent tokens. Under this Markovian assumption, it is difficult to explicitly model and exploit more general constraints such as long-distance dependencies. This is a problem not only for training CRFs but also for incorporating additional constraints during inference. The Viterbi algorithm can handle some of these constraints, but not more general ones. The proposed method can take such constraints into account efficiently.

Conditional Random Field

A standard CRF model is represented by:

P_w(y \mid x) = \frac{\exp(w \cdot F(y, x))}{Z_x}

where F(y, x) = \sum_i f(y_{i-1}, y_i, x, i) is the global feature vector, w is the weight vector, and Z_x = \sum_{y'} \exp(w \cdot F(y', x)) is a normalization factor.
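To make the definitions concrete, the following is a minimal brute-force sketch in Python; the feature function f, the label set, and all names are hypothetical toy choices rather than anything from the paper.

    import itertools
    import math

    LABELS = ["A", "B"]

    def f(y_prev, y_cur, x, i):
        # Hypothetical local features: one transition and one emission indicator.
        return [1.0 if (y_prev, y_cur) == ("A", "B") else 0.0,
                1.0 if (y_cur == "B" and x[i] == "b") else 0.0]

    def score(w, y, x):
        # w . F(y, x), where F sums the local features over all positions.
        total = 0.0
        for i in range(len(x)):
            y_prev = y[i - 1] if i > 0 else "START"
            total += sum(wj * fj for wj, fj in zip(w, f(y_prev, y[i], x, i)))
        return total

    def crf_prob(w, y, x):
        # Brute-force normalization Z_x: enumerate all label sequences.
        Z = sum(math.exp(score(w, yp, x))
                for yp in itertools.product(LABELS, repeat=len(x)))
        return math.exp(score(w, y, x)) / Z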


A CRF is trained by maximizing the conditional log-likelihood of a given training set T = \{(x^k, y^k)\}_{k=1}^{N}:

L(w) = \sum_{k=1}^{N} \log P_w(y^k \mid x^k) = \sum_{k=1}^{N} \left[ w \cdot F(y^k, x^k) - \log Z_{x^k} \right]

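Continuing the sketch above, the conditional log-likelihood of a toy training set can be computed directly; real trainers use dynamic programming (forward-backward) rather than enumeration.

    def log_likelihood(w, data):
        # Sum of log P_w(y | x) over (x, y) pairs, reusing crf_prob above.
        return sum(math.log(crf_prob(w, y, x)) for x, y in data)

    # Usage with toy data: two-token sequences over the characters "a"/"b".
    data = [("ab", ("A", "B")), ("ba", ("B", "A"))]
    print(log_likelihood([0.5, 1.0], data))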
Inference Using ILP

Inference in CRFs is usually performed by the Viterbi algorithm, which finds the shortest path from the start to the end of the label trellis. The weights on the edges determine the global cost of the path. Each edge weight, or log score, is obtained from a linear combination of the feature functions and the weight vector.
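For contrast with the ILP formulation below, here is a minimal Viterbi sketch over the trellis, reusing the hypothetical f and LABELS from the earlier sketch.

    def viterbi(w, x):
        # best[i][y] is the highest score of any label prefix ending in y at i.
        best, back = [{}], [{}]
        for y in LABELS:
            best[0][y] = sum(wj * fj for wj, fj in zip(w, f("START", y, x, 0)))
        for i in range(1, len(x)):
            best.append({})
            back.append({})
            for y in LABELS:
                cands = {yp: best[i - 1][yp]
                         + sum(wj * fj for wj, fj in zip(w, f(yp, y, x, i)))
                         for yp in LABELS}
                yp_best = max(cands, key=cands.get)
                best[i][y], back[i][y] = cands[yp_best], yp_best
        # Backtrace from the best final label.
        path = [max(best[-1], key=best[-1].get)]
        for i in range(len(x) - 1, 0, -1):
            path.append(back[i][path[-1]])
        return list(reversed(path))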

This approach is represented in ILP as follows:

Given a directed graph G = (V, E), two distinct nodes s, t \in V, and a non-negative cost c_{uv} for each edge (u, v) \in E, a minimum-cost path from s to t is desired. For each edge (u, v), an indicator variable x_{uv} is introduced. If (u, v) is in the minimum-cost (shortest) path, then x_{uv} is set to 1; otherwise, it is 0. The cost function is \sum_{(u, v) \in E} c_{uv} x_{uv}.
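This generic minimum-cost path ILP can be written down directly; the sketch below uses PuLP (my choice of modeling library, not the paper's), with toy nodes, edges, and costs.

    import pulp

    edges = {("s", "u"): 1.0, ("s", "v"): 4.0, ("u", "v"): 1.0,
             ("u", "t"): 5.0, ("v", "t"): 1.0}
    nodes = {n for e in edges for n in e}

    prob = pulp.LpProblem("shortest_path", pulp.LpMinimize)
    x = {e: pulp.LpVariable("x_%s_%s" % e, cat="Binary") for e in edges}
    prob += pulp.lpSum(c * x[e] for e, c in edges.items())  # total path cost

    # One unit of flow leaves s, one unit enters t, and flow is conserved
    # at every intermediate node.
    for node in nodes:
        out_flow = pulp.lpSum(x[e] for e in edges if e[0] == node)
        in_flow = pulp.lpSum(x[e] for e in edges if e[1] == node)
        if node == "s":
            prob += out_flow - in_flow == 1
        elif node == "t":
            prob += in_flow - out_flow == 1
        else:
            prob += in_flow == out_flow

    prob.solve()
    print([e for e in edges if x[e].varValue == 1])  # edges on the shortest path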

The following figure depicts the ILP-based representation of CRFs subject to a set of constraints:

[Figure: CRF-ILP.jpg — the ILP formulation of CRF inference as a minimum-cost path problem over the label trellis]

where c is the cost function; hence the Viterbi maximization of w \cdot F(y, x) changes to minimizing \sum_i -w \cdot f(y_{i-1}, y_i, x, i). The start and end nodes are denoted by s and t respectively, and n and m represent the sequence length and the number of possible labels. Variable x_{\langle i-1, y \rangle, \langle i, y' \rangle} represents the edge from node \langle i-1, y \rangle to node \langle i, y' \rangle.
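A minimal sketch of this trellis ILP, again in PuLP, with random toy costs standing in for the negated feature scores -w \cdot f; all names are hypothetical.

    import random
    import pulp

    n, LABELS = 4, ["A", "B", "O"]
    # Toy edge costs standing in for -w . f(y_prev, y, x, i).
    cost = {(i, yp, y): random.random()
            for i in range(n)
            for yp in (["START"] if i == 0 else LABELS)
            for y in LABELS}

    prob = pulp.LpProblem("crf_trellis", pulp.LpMinimize)
    x = {k: pulp.LpVariable("x_%d_%s_%s" % k, cat="Binary") for k in cost}
    prob += pulp.lpSum(cost[k] * x[k] for k in cost)

    # One unit of flow leaves the start node s.
    prob += pulp.lpSum(x[(0, "START", y)] for y in LABELS) == 1
    # Flow conservation at every internal trellis node <i, y>.
    for i in range(n - 1):
        for y in LABELS:
            inflow = pulp.lpSum(x[k] for k in cost if k[0] == i and k[2] == y)
            outflow = pulp.lpSum(x[(i + 1, y, y2)] for y2 in LABELS)
            prob += inflow == outflow

    prob.solve()
    print([y for (i, yp, y) in sorted(x) if x[(i, yp, y)].varValue == 1])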

Constraint Representation Using ILP

The set of constraints in the output space can be expressed using a set of Boolean functions. The authors give several example constraints in their model:

Example 1

To force the output of label y for token i to be 0, i.e., to forbid assigning label y to token i: z_{i, y} = 0, where z_{i, y} is a binary variable that indicates whether token i is assigned label y.
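As a sketch, assuming standalone binary variables z[i, y] that are linked to the trellis edge variables of the previous sketch (the linking constraints are omitted for brevity; all names are hypothetical):

    import pulp

    n, LABELS = 4, ["A", "B", "O"]
    prob = pulp.LpProblem("constrained_crf", pulp.LpMinimize)
    z = {(i, y): pulp.LpVariable("z_%d_%s" % (i, y), cat="Binary")
         for i in range(n) for y in LABELS}

    # Example 1: forbid label "B" at token 2 by forcing its indicator to 0.
    prob += z[(2, "B")] == 0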

Example 2

There is no "duplication" of segments in a sequence:

[Figure: CRF-ILP-Example-2.jpg — the no-duplication constraint]


where the auxiliary indicator variable represents the antecedent of the logical implication.
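One plausible linearization of this constraint, continuing the Example 1 sketch (same prob and z); this is an illustrative encoding, not necessarily the paper's exact one.

    # a[i] = 1 if a new segment labeled "A" starts at token i (the antecedent).
    a = {i: pulp.LpVariable("a_%d" % i, cat="Binary") for i in range(n)}
    prob += a[0] >= z[(0, "A")]
    for i in range(1, n):
        prob += a[i] >= z[(i, "A")] - z[(i - 1, "A")]
    # No duplication: at most one segment labeled "A" in the sequence.
    prob += pulp.lpSum(a.values()) <= 1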

Example 3

If label y_1 appears, then label y_2 must also appear:

[Figure: CRF-ILP-Example-3.jpg — the implication constraint]

for all i such that 1 \le i \le n.
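Continuing the Example 1 sketch, this implication can be encoded as:

    # If label "A" appears at any token, label "B" must appear somewhere:
    # z[i, "A"] <= sum_j z[j, "B"] for every token i.
    for i in range(n):
        prob += z[(i, "A")] <= pulp.lpSum(z[(j, "B")] for j in range(n))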

Example 4

When a segment \mathcal{A} of tokens must share the same label, the following must hold for every such segment spanning tokens a to b:

z_{i, y} = z_{i+1, y},

for all i and y such that a \le i < b. z_{i, y} is a binary variable that indicates whether token i is assigned label y.
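Continuing the Example 1 sketch, equating adjacent indicators enforces a shared label over a hypothetical segment from token 1 to token 3:

    # Tokens a..b must share the same label: equal indicators for every label.
    a_idx, b_idx = 1, 3
    for y in LABELS:
        for i in range(a_idx, b_idx):
            prob += z[(i, y)] == z[(i + 1, y)]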

Example 5

Each sequence must have at least one segment of interest, i.e., some label other than "O" in the BIO representation must be present:

[Figure: CRF-ILP-Example-5.jpg — the at-least-one-segment constraint]
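Continuing the Example 1 sketch, the at-least-one-segment requirement becomes a single linear constraint:

    # At least one token carries a label other than "O".
    prob += pulp.lpSum(z[(i, y)] for i in range(n)
                       for y in LABELS if y != "O") >= 1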

Experiments and Results

Related Papers