== Citation ==
  
A detailed description and introduction to SEARN is available [[Daume et al, ML 2009|here]].
  
== Motivation ==
 
SEARN is a [[Category::Method|meta-algorithm]] for structured prediction. The basic premise is to combine learning and searching in order to transform a complex structured prediction problem into simple classification problems. The algorithm begins with a classifier that uses the training data directly, and uses that classifier to produce a fully learned classifier. In this sense, it moves ''away'' from the classifier defined by the training data.
 
== Input ==
 
Running the SEARN meta-algorithm requires a few different inputs (a sketch of what these might look like in code follows the list):
* <math>L(y,f(\hat{y}))</math> - A loss function, which must be computable for any sequence of predictions.
* <math>A</math> - A cost-sensitive learning algorithm. This algorithm produces learned classifiers, which SEARN refers to as ''policies''.
* <math>\pi</math> - The ''optimal policy''. This should produce low loss when applied to the training data.
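Below is a minimal Python sketch of what these three inputs might look like for a sequence labeling task. All names here (<code>hamming_loss</code>, <code>CostSensitiveLearner</code>, <code>OptimalPolicy</code>) are illustrative assumptions, not part of any particular SEARN implementation.

<pre>
# Hypothetical interfaces for SEARN's three inputs (sequence labeling).

def hamming_loss(y_true, y_pred):
    """L: a loss computable for any full sequence of predictions."""
    return sum(a != b for a, b in zip(y_true, y_pred))

class CostSensitiveLearner:
    """A: turns cost-sensitive examples into a classifier (a 'policy')."""
    def fit(self, examples):
        # examples: list of (feature vector, per-label cost vector) pairs
        raise NotImplementedError  # placeholder for any base learner

class OptimalPolicy:
    """pi: predicts the next label by consulting the true labels, so it
    attains low loss on the training data (and is defined only there)."""
    def predict(self, x, t, partial, y_true):
        return y_true[t]
</pre>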
 
== Algorithm ==
 
The algorithm is defined in [[Daume et al, ML 2009]]:
 
[[File:searn-algorithm.png]]
 
The algorithm begins with the current policy set to the optimal policy. The goal is then to move ''away'' from the optimal policy, in order to produce a classifier that generalizes better. This is because the optimal policy is generally produced by searching the training data (or is sometimes provided by an expert system), which means it is not available for unseen inputs at test time.
 
The outermost loop runs for an unspecified number of iterations. The design is that once the model has run through enough iterations, the interpolation weight on the initial policy has decayed far enough that the learned classifiers dominate the mixture.
 
The inner loop uses the current policy to classify examples, and then feeds those examples to the learning algorithm to produce a new policy. We then choose an interpolation constant, interpolate the newly learned classifier with the current policy, and take the result as the updated current policy (see the sketch below).
 
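The loop described above might look roughly like the following Python sketch, reusing the hypothetical interfaces from the Input section. This is a simplified illustration under assumed interfaces, not a reference implementation; in particular, the per-action costs are estimated here by full rollouts of the current policy, which is one of several options.

<pre>
import random

def searn(data, pi, learner, labels, beta, iterations):
    """data: list of (x, y_true) training sequences; pi: optimal policy."""
    current = pi                                 # begin at the optimal policy
    for _ in range(iterations):                  # outer loop
        examples = []
        for x, y_true in data:                   # inner loop over sequences
            partial = []
            for t in range(len(y_true)):
                # Cost of each action = loss after completing the sequence
                # with the current policy, having taken that action at t.
                costs = []
                for a in labels:
                    rollout = partial + [a]
                    for s in range(t + 1, len(y_true)):
                        rollout.append(current.predict(x, s, rollout, y_true))
                    costs.append(hamming_loss(y_true, rollout))
                m = min(costs)
                examples.append((features(x, t, partial),
                                 [c - m for c in costs]))
                partial.append(current.predict(x, t, partial, y_true))
        h_new = learner.fit(examples)            # learn a new policy
        current = interpolate(current, h_new, beta)
    return current

def interpolate(h_cur, h_new, beta):
    """Stochastic mixture: act with h_new w.p. beta, else with h_cur."""
    class Mixture:
        def predict(self, x, t, partial, y_true=None):
            h = h_new if random.random() < beta else h_cur
            return h.predict(x, t, partial, y_true)
    return Mixture()

def features(x, t, partial):
    """Toy feature map: the current token plus the previous prediction."""
    return (x[t], partial[-1] if partial else None)
</pre>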
The interpolation constant <math>\beta</math> generally differs from iteration to iteration of the algorithm. A practical value for the initial iteration is <math>1/T^3</math>. Note that <math>\beta</math> is often much higher after the first iteration, since the first iteration moves the classifier from one that completely fits the training data to a statistically learned one.
 
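For intuition on this schedule, the following snippet (illustrative numbers, not values from the paper) shows how the weight remaining on the initial optimal policy shrinks with the number of interpolation steps:

<pre>
# With a constant beta, the chance that a prediction still falls through
# to the initial optimal policy after I interpolations is (1 - beta)**I.
beta = 0.1                        # illustrative value
for I in (10, 50, 100):
    print(I, (1 - beta) ** I)     # 10 -> ~0.35, 50 -> ~0.005, 100 -> ~2.7e-05
</pre>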
Approximation can also be used to keep the algorithm from being prohibitively expensive.
 
== Output ==
 
The policy <math>h_{last}</math> given by the algorithm can be used to classify new sequences (see the sketch below).
 
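A hypothetical usage sketch for greedy left-to-right decoding with the learned policy (this assumes the residual weight on the optimal policy is negligible or has been removed, so no true labels are required):

<pre>
# Label a new sequence with the final learned policy h_last.
def decode(h_last, x):
    partial = []
    for t in range(len(x)):
        partial.append(h_last.predict(x, t, partial))  # no y_true at test time
    return partial
</pre>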
== Theoretical Analysis ==
 
It can be proven that the degradation incurred by each update of the policy in the algorithm is bounded, according to the following equation:
 
<math>L(D,h_{new}) \leq L(D,h) + T\beta\ell_h^{CS}(h') + \tfrac{1}{2}\beta^2T^2c_{max}</math>
 
In addition, we can say that after <math>C/\beta</math> iterations, the loss is bounded by the following equation:
 
<math>L(D,h_{last}) \leq L(D,\pi) + CT\ell_{avg} + c_{max}\left(\tfrac{1}{2}CT^2\beta + T\operatorname{exp}[-C]\right)</math>
 
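To see roughly where these terms come from (a sketch, assuming the per-iteration bound above is applied repeatedly): summing the degradation over <math>C/\beta</math> iterations contributes <math>(C/\beta) \cdot T\beta\ell_{avg} = CT\ell_{avg}</math> and <math>(C/\beta) \cdot \tfrac{1}{2}\beta^2T^2c_{max} = \tfrac{1}{2}CT^2\beta c_{max}</math>, while the probability that a given prediction still falls through to the initial policy <math>\pi</math> after <math>C/\beta</math> interpolations is at most <math>(1-\beta)^{C/\beta} \leq \operatorname{exp}[-C]</math>, contributing at most <math>T\operatorname{exp}[-C]c_{max}</math> over a length-<math>T</math> sequence.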
The proofs for the above theorems can be found [[Daume et al, ML 2009|here]].
 
== Comparison To Other Methods ==
 
One of the major benefits of SEARN is the flexibility it allows: an arbitrary classifier, an arbitrary learning algorithm, and an arbitrary loss function. [[Perceptron]] models are generally more limited, as they cannot use an arbitrary loss function, and [[CRF|CRFs]] are generally restricted to linear-chain structures, a limitation SEARN does not share.
 
== Relevant Papers ==
 
{{#ask: [[UsesMethod::SEARN]]
| ?AddressesProblem
| ?UsesDataset
}}
