Conditional Random Fields

From Cohen Courses
Revision as of 23:05, 30 November 2010

This is a method discussed in Information Extraction 10-707 in Fall 2010.

A conditional random field (CRF) is a discriminative probabilistic model most often used for labeling or parsing sequential data, such as natural language text or biological sequences. Building on the maximum entropy model and the hidden Markov model, it outperforms both in particular on tasks such as shallow parsing, named entity recognition, and visual object recognition.

Introduction

Linear-chain Conditional Random Fields

One commonly used variant is the linear-chain conditional random field. Such CRFs define a conditional probability distribution p(Y|X) over label sequences given input sequences; the label and input sequences are assumed to have the same length.

A CRF on (X, Y) is specified by a local feature vector f and a weight vector \lambda. Each local feature is a function of the current label, the previous label, the input sequence, and the position:

f(y_{i-1}, y_i, \mathbf{x}, i)
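Concretely, local features typically pair the current label with properties of the input (state features) or pair adjacent labels (transition features). The following sketch illustrates this with a toy part-of-speech example; the feature names and labels are made up for illustration, not from the text.

```python
# A toy illustration of local features in a linear-chain CRF.
# Feature names, labels, and the tagging task are hypothetical examples.

def local_features(y_prev, y, x, i):
    """Return the local feature vector f(y', y, x, i) as a dict
    mapping feature name -> value, for position i of input sequence x."""
    feats = {}
    # State feature: current label paired with the current word.
    feats[f"word={x[i]}|label={y}"] = 1.0
    # Transition feature: previous label paired with the current label.
    feats[f"trans={y_prev}->{y}"] = 1.0
    return feats

# Example: features at position 1 of a short sentence.
x = ["John", "lives", "in", "Pittsburgh"]
print(local_features("NNP", "VBZ", x, 1))
# {'word=lives|label=VBZ': 1.0, 'trans=NNP->VBZ': 1.0}
```

In practice these indicator features are generated from templates over the input, and the weight vector assigns one weight per feature name.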

The global feature vector is then the aggregate of the local features over all positions:

F(\mathbf{y}, \mathbf{x}) = \sum_{i} f(y_{i-1}, y_i, \mathbf{x}, i)

The conditional probability distribution defined by the CRF is then

p_\lambda(\mathbf{y} \mid \mathbf{x}) = \frac{\exp(\lambda \cdot F(\mathbf{y}, \mathbf{x}))}{Z_\lambda(\mathbf{x})}, \qquad Z_\lambda(\mathbf{x}) = \sum_{\mathbf{y}'} \exp(\lambda \cdot F(\mathbf{y}', \mathbf{x}))
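This distribution can be made concrete on a tiny problem by enumerating every label sequence, which makes the normalizer Z(x) explicit. The weights and features below are made up for illustration; real CRFs compute Z(x) with the forward algorithm rather than by enumeration.

```python
from itertools import product
from math import exp

# Brute-force computation of the CRF distribution p(y|x) on a toy problem.
# Weights and feature keys are hypothetical, chosen only for illustration.

LABELS = ["A", "B"]

def global_score(y_seq, x, weights):
    """lambda . F(y, x): sum of weighted local features over all positions."""
    score = 0.0
    y_prev = "START"
    for i, y in enumerate(y_seq):
        score += weights.get((y_prev, y), 0.0)  # transition feature weight
        score += weights.get((y, x[i]), 0.0)    # state feature weight
        y_prev = y
    return score

def crf_probability(y_seq, x, weights):
    """p(y|x) = exp(lambda.F(y,x)) / Z(x), with Z(x) summed over all sequences."""
    numerator = exp(global_score(y_seq, x, weights))
    Z = sum(exp(global_score(yp, x, weights))
            for yp in product(LABELS, repeat=len(x)))
    return numerator / Z

x = ["a", "b"]
weights = {("START", "A"): 1.0, ("A", "B"): 2.0, ("A", "a"): 1.0, ("B", "b"): 1.0}
print(crf_probability(("A", "B"), x, weights))
```

Because every label sequence is scored and normalized by the same Z(x), the probabilities over all 2^n sequences sum to one, which is exactly what the enumeration makes visible.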

In a sequential labeling task we want the most probable label sequence for the input sequence, so we use

\hat{\mathbf{y}} = \arg\max_{\mathbf{y}} p_\lambda(\mathbf{y} \mid \mathbf{x}) = \arg\max_{\mathbf{y}} \lambda \cdot F(\mathbf{y}, \mathbf{x})

The decoding process can be done with the Viterbi algorithm.
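A minimal sketch of Viterbi decoding for a linear-chain CRF follows, assuming scores come from hypothetical transition and state weight tables rather than a trained model. Because the first-order Markov structure makes the global score decompose over adjacent label pairs, dynamic programming finds the argmax in O(n·|labels|^2) time instead of enumerating all sequences.

```python
# A minimal Viterbi decoder for a linear-chain CRF. The weight tables
# are hypothetical stand-ins for lambda applied to local features.

def viterbi(x, labels, weights):
    """Return the highest-scoring label sequence argmax_y lambda.F(y, x)."""
    n = len(x)
    # delta[i][y] = best score of any label prefix ending with label y at i
    delta = [{} for _ in range(n)]
    back = [{} for _ in range(n)]
    for y in labels:
        delta[0][y] = (weights.get(("START", y), 0.0)
                       + weights.get((y, x[0]), 0.0))
    for i in range(1, n):
        for y in labels:
            best_prev = max(
                labels,
                key=lambda yp: delta[i - 1][yp] + weights.get((yp, y), 0.0))
            delta[i][y] = (delta[i - 1][best_prev]
                           + weights.get((best_prev, y), 0.0)
                           + weights.get((y, x[i]), 0.0))
            back[i][y] = best_prev
    # Trace back from the best final label.
    y_last = max(labels, key=lambda y: delta[n - 1][y])
    path = [y_last]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))

x = ["a", "b"]
labels = ["A", "B"]
weights = {("START", "A"): 1.0, ("A", "B"): 2.0, ("A", "a"): 1.0, ("B", "b"): 1.0}
print(viterbi(x, labels, weights))  # ['A', 'B']
```

The backpointer table records, for each position and label, the best previous label; reversing the traced path yields the decoded sequence.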

Relevant Papers