This method was proposed by Weiss et al., AISTATS 2010.
This page is reserved for a write-up by Dan Howarth.
Citation
Structured Prediction Cascades. David Weiss and Ben Taskar. International Conference on Artificial Intelligence and Statistics (AISTATS), May 2010.
Online version
Summary
In many structured prediction models, an increase in model complexity comes at a high computational cost. For example, the state space of an HMM grows exponentially with the order of the model. This work introduces a method for learning increasingly complex models while continually pruning the possible output space, "weeding out" unlikely output states early in the cascade.
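A hedged sketch of the pruning rule, reconstructed from the paper's formulation: let <math>\mathbb{V}</math> be the set of candidate states and <math>m_\theta(x,v)</math> the max-marginal score of state <math>v</math> (the score of the best complete output that passes through <math>v</math>). Each level keeps only the states whose max-marginal clears a threshold that interpolates between the best overall score and the mean max-marginal:

<math>t_\alpha(x) = \alpha \max_{y} \theta^\top \mathbf{f}(x,y) + (1 - \alpha)\, \frac{1}{|\mathbb{V}|} \sum_{v \in \mathbb{V}} m_\theta(x,v)</math>

States with <math>m_\theta(x,v) < t_\alpha(x)</math> are pruned before the next, more complex model in the cascade is run; <math>\alpha \in [0,1]</math> controls how aggressively the cascade prunes.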
Previous approaches to this problem relied on approximate search or heuristic pruning techniques. Structured prediction cascades differ in that they explicitly learn the error/computation trade-off at each increase in model complexity.
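To make that trade-off concrete, below is a minimal Python sketch of a single pruning pass over precomputed max-marginals for a linear chain; the function name prune_states and the array layout are illustrative assumptions, not the authors' implementation.

<pre>
import numpy as np

# Illustrative sketch of one cascade pruning pass (not the authors' code).
# max_marginals[i, s] = score of the best full output sequence that passes
# through state s at position i. alpha in [0, 1] trades accuracy for speed:
# alpha near 1 prunes aggressively, alpha near 0 prunes conservatively.
def prune_states(max_marginals: np.ndarray, alpha: float) -> np.ndarray:
    """Return a boolean mask of the (position, state) pairs kept."""
    best_score = max_marginals.max()    # score of the single best sequence
    mean_score = max_marginals.mean()   # mean max-marginal over all states
    threshold = alpha * best_score + (1.0 - alpha) * mean_score
    return max_marginals >= threshold   # survivors move to the next level

# Example: a chain of 4 positions with 3 candidate states each.
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 3))
print(prune_states(scores, alpha=0.5))
</pre>

Raising alpha toward 1 keeps only states scoring near the best sequence (faster, riskier), while lowering it toward 0 keeps most states (safer, slower); this is the balance the cascade learns at each level.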
In this work, structured prediction cascades are applied to handwriting recognition and POS tagging.
Brief description of the method