The voted perceptron [[category::method]] is based on the perceptron algorithm of [[paper:Rosenblatt, Frank (1957) | Rosenblatt (1957)]]. The algorithm takes advantage of data that are linearly separable with large margins. This [[category::method]] is simpler to implement and much more efficient in computation time than [[paper:Vapnik, 1995 | Vapnik's]] [[Support Vector Machines | SVM]], and it can also be used in very high dimensional spaces via kernel functions.
 
[[paper::Manabu Sassano, IJCNLP]] performed an experimental comparison of the voted perceptron with [[Support Vector Machines | SVM]] on classification tasks in NLP and observed that the voted perceptron is comparable to [[Support Vector Machines | SVM]] in terms of accuracy, while its learning time and prediction speed are considerably better than those of [[Support Vector Machines | SVM]].
 
== Summary ==
 
We assume that all instances are points <math>\mathbf{x} \in \mathbb{R}^n</math>, that <math>\lVert \mathbf{x} \rVert</math> denotes the Euclidean length of <math>\mathbf{x}</math>, and that the labels <math>y</math> are in <math>\{-1, +1\}</math>. The online perceptron algorithm starts with an initial zero prediction vector <math>\mathbf{v} = \mathbf{0}</math>. It predicts the label of a new instance <math>\mathbf{x}</math> to be <math>\hat{y} = \mathrm{sign}(\mathbf{v} \cdot \mathbf{x})</math>. If this prediction differs from the label <math>y</math>, it updates the prediction vector to <math>\mathbf{v} = \mathbf{v} + y\mathbf{x}</math>; if the prediction is correct, <math>\mathbf{v}</math> is not changed. The process then repeats with the next example. It has been shown that if the data are linearly separable, the perceptron algorithm makes a finite number of mistakes and therefore, if repeatedly cycled through the training set, converges to a vector that correctly classifies all of the examples. Moreover, the number of mistakes is upper bounded by a function of the gap (margin) between the positive and negative examples.

In the voted-perceptron algorithm, we store more information during training and then use this richer information to generate better predictions on the test data. The information we maintain during training is the list of all prediction vectors that were generated after each and every mistake. For each such vector, we count the number of iterations it “survives” until the next mistake is made; we refer to this count as the “weight” of the prediction vector. To make a prediction, we compute the binary prediction of each of the prediction vectors and combine all these predictions by a weighted majority vote, where the weights are the survival times described above. This makes intuitive sense, as “good” prediction vectors tend to survive for a long time and thus carry larger weight in the majority vote.
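For reference, the classical convergence result (due to Block and Novikoff) makes the mistake bound explicit: if <math>\lVert \mathbf{x}_i \rVert \le R</math> for all training examples and some unit vector <math>\mathbf{u}</math> separates the data with margin <math>\gamma</math>, i.e. <math>y_i (\mathbf{u} \cdot \mathbf{x}_i) \ge \gamma</math> for all <math>i</math>, then the online perceptron makes at most <math>(R/\gamma)^2</math> mistakes.

The basic online update described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the code used in the original experiments; the function and variable names are ours.

<pre>
import numpy as np

def perceptron_train(X, y, epochs=10):
    """Online perceptron. X: (m, n) array of instances; y: vector of +/-1 labels."""
    v = np.zeros(X.shape[1])                          # prediction vector, initially zero
    for _ in range(epochs):                           # repeatedly cycle through the training set
        for x_i, y_i in zip(X, y):
            y_hat = 1 if np.dot(v, x_i) > 0 else -1   # sign(v . x), ties broken as -1
            if y_hat != y_i:                          # mistake: update the prediction vector
                v = v + y_i * x_i
    return v
</pre>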
 

Here is the algorithm:

[[File:VotedPerceptron.png]]
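If the figure is unavailable, the training loop and the weighted-majority prediction it depicts can be sketched as follows. This is a minimal Python sketch under the assumptions of the summary above; names such as <code>voted_perceptron_train</code> are ours, not from the original paper.

<pre>
import numpy as np

def voted_perceptron_train(X, y, epochs=10):
    """Return the list of (prediction vector, survival count) pairs built during training."""
    v = np.zeros(X.shape[1])       # current prediction vector
    c = 1                          # number of examples the current vector has survived
    vectors = []                   # all (vector, weight) pairs generated so far
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            y_hat = 1 if np.dot(v, x_i) > 0 else -1
            if y_hat != y_i:                   # mistake: store the old vector and its weight
                vectors.append((v.copy(), c))
                v = v + y_i * x_i              # perceptron update
                c = 1
            else:
                c += 1                         # the current vector survives one more example
    vectors.append((v.copy(), c))              # keep the final vector as well
    return vectors

def voted_perceptron_predict(vectors, x):
    """Weighted majority vote of all stored prediction vectors."""
    s = sum(c * (1 if np.dot(v, x) > 0 else -1) for v, c in vectors)
    return 1 if s > 0 else -1
</pre>

A closely related deterministic variant replaces the vote with the survival-weighted average of the stored vectors (the averaged perceptron), which gives a single prediction vector and is cheaper at test time.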

== Relevant Papers ==