Paper:Collins, ACL 2002
Citation
Ranking Algorithms for Named-Entity Extraction: Boosting and the Voted Perceptron, Collins, ACL 2002
Online Version
Here is the online version of the paper.
Summary
In this paper the author describes two algorithms which rerank the top N hypotheses from a maximum entropy tagger, and applies them to the task of recovering named-entity boundaries. The task can be framed as a tagging task: each word is tagged as the start of an entity (S), a continuation of an entity (C), or not part of an entity at all (N). The author therefore uses a state-of-the-art maximum entropy tagger, similar to the one described in Ratnaparkhi EMNLP 1996, as the baseline model. The same Max-ent tagger is used to generate the N=20 most probable segmentations for each input sentence, along with their probabilities. The author's aim is to come up with reranking strategies for the test-data candidates in order to improve precision and recall.
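To make the tagging framing concrete, here is a minimal sketch of the S/C/N encoding; the sentence, spans, and helper name are illustrative assumptions, not taken from the paper.

<pre>
# Hedged sketch: encode entity spans as S/C/N word-level tags.
# The tokens, spans, and helper name encode_scn are hypothetical.

def encode_scn(tokens, entity_spans):
    """Map (start, end) entity spans (end exclusive) to S/C/N tags."""
    tags = ["N"] * len(tokens)            # default: not part of an entity
    for start, end in entity_spans:
        tags[start] = "S"                 # start of an entity
        for i in range(start + 1, end):
            tags[i] = "C"                 # continuation of the entity
    return tags

tokens = ["Gerald", "Ford", "visited", "New", "York", "."]
spans = [(0, 2), (3, 5)]                  # "Gerald Ford", "New York"
print(list(zip(tokens, encode_scn(tokens, spans))))
# [('Gerald', 'S'), ('Ford', 'C'), ('visited', 'N'),
#  ('New', 'S'), ('York', 'C'), ('.', 'N')]
</pre>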
The author considers various global features for each candidate tagged sequence. Most of these features are anchored on entity boundaries in the candidate segmentation. Each candidate tagged sequence <math>x_{i,j}</math> proposed by the Max-ent tagger is represented by the log probability <math>L(x_{i,j})</math> from the tagger, as well as the values of the global features <math>h_s(x_{i,j})</math> for <math>s = 1 \ldots m</math>, where <math>m</math> is the number of global features. The two algorithms described in the next section blend these two sources of information (global features and log probability) to improve upon a strategy which simply takes the candidate with the highest score <math>L(x_{i,j})</math> from the tagger.
The author shows that existing reranking methods, boosting and the voted perceptron, are useful for a new domain, named-entity tagging. Another contribution is the suggestion of various global features of the candidate segmentations, which give improvements on this task.
Brief description of the methods
Pairs of the form <math>(s_i, t_i)</math>, where each <math>s_i</math> is a sentence and each <math>t_i</math> is the correct tag sequence for that sentence, serve as the training data. <math>\{x_{i,1} \ldots x_{i,N}\}</math> denotes the set of candidates (top <math>N</math> outputs from the Max-ent tagger) for each <math>s_i</math>, where <math>x_{i,j}</math> is the <math>j</math>th candidate of the <math>i</math>th sentence. <math>q(x_{i,j})</math> is the probability that the base model assigns to <math>x_{i,j}</math>. Hence <math>L(x_{i,j}) = \log q(x_{i,j})</math>. The set of global features is represented as <math>h_s(x_{i,j})</math> for <math>s = 1 \ldots m</math>. The parameters of the model are a vector <math>w = \{w_0, w_1, \ldots, w_m\}</math>. The ranking function is given as

<math>F(x,w)=w_0L(x)+\sum\limits_{s=1}^{m} w_sh_s(x)</math>
The ranking function can also be written as <math>F(x,w)=w \cdot h(x)</math> if <math>h(x)</math> is defined as <math>\{L(x),h_1(x) \ldots h_m(x)\}</math>.
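A minimal sketch of how this ranking function can be applied to rerank the tagger's top-N output, assuming each candidate carries its base-model probability and precomputed global feature values; the data layout and names (h, F, rerank) are assumptions for illustration.

<pre>
# Hedged sketch of F(x, w) = w . h(x) with h(x) = (L(x), h_1(x), ..., h_m(x)).
# Candidate layout and function names are illustrative assumptions.
import math

def h(candidate):
    # Feature vector: log probability L(x) first, then the m global features.
    return [math.log(candidate["base_prob"])] + candidate["global_feats"]

def F(candidate, w):
    # Ranking score: inner product of the parameter vector w with h(x).
    return sum(wi * hi for wi, hi in zip(w, h(candidate)))

def rerank(candidates, w):
    # Pick the top-N candidate with the highest ranking score.
    return max(candidates, key=lambda c: F(c, w))
</pre>

Note that with <math>w = \{1, 0, \ldots, 0\}</math> the reranker reduces to the baseline strategy of returning the candidate with the highest <math>L(x)</math>.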
Boosting
Methods: Boosting, Voted Perceptron
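As a rough illustration of the perceptron side of this method pair, here is a simplified perceptron-style training loop for reranking. It keeps a single weight vector rather than voting over all intermediate vectors as the voted perceptron does, and the example layout (lists of feature vectors plus the index of the best candidate) is an assumption, not the paper's exact formulation.

<pre>
# Hedged sketch of perceptron training for reranking, a simplification of
# the voted perceptron named above (no voting over intermediate vectors).
# Each example is (feat_vecs, best_j): the h(x) vector of every top-N
# candidate for one sentence, plus the index of the candidate whose tags
# best match the gold sequence. This layout is an assumption.

def score(feats, w):
    # Inner product w . h(x), i.e., the ranking function F from above.
    return sum(wi * fi for wi, fi in zip(w, feats))

def perceptron_train(examples, num_feats, epochs=5):
    w = [0.0] * (num_feats + 1)        # w_0 for L(x), then w_1 ... w_m
    for _ in range(epochs):
        for feat_vecs, best_j in examples:
            pred = max(range(len(feat_vecs)),
                       key=lambda j: score(feat_vecs[j], w))
            if pred != best_j:
                # Promote the gold-best candidate's features and demote
                # the currently top-ranked candidate's features.
                w = [wi + g - p for wi, g, p in
                     zip(w, feat_vecs[best_j], feat_vecs[pred])]
    return w
</pre>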