Mayfield et al, CoNLL 2003
Being edited by Rui Correia

Citation

James Mayfield, Paul McNamee, and Christine Piatko. 2003. Named Entity Recognition using Hundreds of Thousands of Features. In Proceedings of CoNLL-2003.

Online version

[1]

Summary

In this paper the authors address the problem of Named Entity Recognition using Support Vector Machines to capture transition probabilities in a lattice, a method they called SVM-lattice. Their main goal is to provide a language-independent Named Entity Recognition system that considers hundreds of thousands of features, leaving it to the SVM to decide which of them are relevant.

In most Named Entity Recognition systems, handling large numbers of features is expensive and might result in overtraining, which calls for careful and informed feature selection. The solution proposed by the authors is to build a lattice for each sentence (where the vertices are the tags and the edges the possible transitions) and to compute the transition probabilities on the edges. Once these transitions are computed, the authors apply the Viterbi algorithm to find the best path and decide on the final set of tags.

Brief Description of the Method

Each sentence is processed individually. A lattice is built for each sentence, where each column contains one vertex for each possible tag and is connected by an edge to every vertex in the next column that represents a valid transition. To compute these transitions, the authors exploit some important properties of SVMs: their ability to handle very high-dimensional spaces and their resistance to overfitting. Once these transition probabilities are estimated and applied to the lattice, the authors run Viterbi to find the most likely path, which identifies the final tag for each word of the sentence.
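As a concrete illustration, the following is a minimal sketch (not the authors' code) of Viterbi decoding over such a tag lattice. The names <code>tags</code>, <code>trans_prob</code>, and the artificial <code>START</code> tag are assumptions for the example; <code>trans_prob[i]</code> is assumed to hold, for word i, the estimated probability of each (previous tag, tag) transition, obtained as described below.

<syntaxhighlight lang="python">
import math

def viterbi(tags, trans_prob, n_words):
    """Return the most likely tag sequence over an n_words-column lattice."""
    # best[i][t]: log-probability of the best path ending with tag t at word i
    best = [{t: float("-inf") for t in tags} for _ in range(n_words)]
    back = [{t: None for t in tags} for _ in range(n_words)]
    for t in tags:
        best[0][t] = math.log(trans_prob[0].get(("START", t), 1e-12))
    for i in range(1, n_words):
        for t in tags:
            for prev in tags:
                p = trans_prob[i].get((prev, t), 0.0)  # 0.0 for invalid transitions
                if p <= 0.0:
                    continue
                score = best[i - 1][prev] + math.log(p)
                if score > best[i][t]:
                    best[i][t], back[i][t] = score, prev
    # Follow the back-pointers from the best final tag to recover the path.
    last = max(tags, key=lambda t: best[-1][t])
    path = [last]
    for i in range(n_words - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))
</syntaxhighlight>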

However, standard SVMs do not provide such probabilities. The authors instead use a parametric model to fit the posterior <math>P(y=1|f)</math> directly, where <math>f</math> is the margin produced by the SVM. Since the class-conditional densities between the margins of an SVM are exponential, and Bayes' rule applied to two exponentials suggests a parametric form of a sigmoid, it follows that:

<math>P(y=1|f)=\frac{1}{1+\exp(Af+B)}</math>

The authors fix A = -2 and B = 0.
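For intuition, this mapping is easy to state in code; the short sketch below simply evaluates the sigmoid above with the fixed parameters (the function name is illustrative, not from the paper).

<syntaxhighlight lang="python">
import math

def margin_to_prob(f, A=-2.0, B=0.0):
    """Map an SVM margin f to P(y=1|f) using the fixed-parameter sigmoid."""
    return 1.0 / (1.0 + math.exp(A * f + B))
</syntaxhighlight>

With these values a margin of 0 maps to a probability of 0.5, and large positive margins map to probabilities close to 1.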

A different SVM model is trained for each transition type.
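A rough sketch of this one-classifier-per-transition setup is given below. The paper does not use scikit-learn; <code>LinearSVC</code> is only a stand-in for a linear SVM, and the names <code>X</code>, <code>gold_transitions</code>, and <code>transition_types</code> are assumptions made for the example.

<syntaxhighlight lang="python">
from sklearn.svm import LinearSVC

def train_transition_svms(X, gold_transitions, transition_types):
    """Train one binary SVM per transition type (previous tag, tag)."""
    models = {}
    for tt in transition_types:
        # Words whose gold transition is tt are positives; all others are negatives.
        y = [1 if tr == tt else 0 for tr in gold_transitions]
        if any(y):  # skip transition types with no positive examples
            models[tt] = LinearSVC().fit(X, y)
    return models
</syntaxhighlight>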

To evaluate a test set, each word of the input is represented by a vector of features (such as the word itself, character n-grams, word length and position in the sentence, capitalization pattern, etc.). Each classifier is then applied to this vector to produce a margin, which is then mapped to a probability estimate. Once all the probabilities have been computed, the Viterbi computation takes place.
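The sketch below illustrates what such a per-word feature vector might contain; the helper name and the exact features are illustrative only, since the paper's feature set is far larger.

<syntaxhighlight lang="python">
def word_features(sentence, i, ngram_sizes=(3, 4)):
    """Build a sparse binary feature dictionary for the i-th word of a sentence."""
    w = sentence[i]
    feats = {f"word={w.lower()}": 1, f"len={len(w)}": 1, f"pos={i}": 1}
    # Capitalization pattern, e.g. "Xxxx" for "John".
    feats[f"cap={''.join('X' if c.isupper() else 'x' for c in w)}"] = 1
    # Character n-grams of the word.
    for n in ngram_sizes:
        for j in range(max(1, len(w) - n + 1)):
            feats[f"{n}gram={w[j:j+n].lower()}"] = 1
    return feats
</syntaxhighlight>

Each such dictionary would be vectorized into the sparse, high-dimensional representation that SVMs handle well; each transition classifier's margin on it would be converted to a probability with the sigmoid above, and the resulting lattice would be decoded with Viterbi as sketched earlier.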

Experimental Results

Related Papers