Practical very large CRFs

[[Category::Paper]] writeup in progress by [[User:Fkeith|Francis Keith]]
 
 
 
== Citation ==

Lavergne, T., O. Cappé, and F. Yvon. Practical very large scale CRFs. ACL-2010.
 
== Online Version ==

An online version of the [[Category::Paper|paper]] is available here [http://www.aclweb.org/anthology-new/P/P10/P10-1052.pdf]
  
 
== Summary ==

The goal of this paper is to show the challenges and the results of training [[UsesMethod::CRF|Conditional Random Fields]] with a very large number of features. They detail and compare three methods for training with an <math>\ell_1</math> penalty, since the sparsity introduced by the penalty makes the resulting models significantly more compact (a sketch of the penalized objective is given after the lists below).

The methods they compare are:

* [[UsesMethod::Quasi Newton]]
* [[UsesMethod::Stochastic Gradient Descent]]
* [[UsesMethod::Block Coordinate Descent]]

The conclusions they reach are:

* [[UsesMethod::Quasi Newton]] is probably the best choice for small and moderately sized applications
* [[UsesMethod::Block Coordinate Descent]] is efficient with very large feature sets, as long as the observation alphabet is limited
* [[UsesMethod::Stochastic Gradient Descent]] seems to be the best choice for large-scale applications, as long as it is carefully tuned
* Large-scale sparse models can be trained efficiently, and they perform better than small models.
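
As a reference, here is a minimal sketch of the kind of penalized objective all three methods optimize, assuming an elastic-net-style penalty with weights <math>\rho_1</math> and <math>\rho_2</math> over training pairs <math>(\mathbf{x}^{(i)}, \mathbf{y}^{(i)})</math>; the exact form of the penalty should be checked against the paper itself:

<math>
\ell(\theta) \;=\; -\sum_{i=1}^{N} \log p_\theta\big(\mathbf{y}^{(i)} \mid \mathbf{x}^{(i)}\big) \;+\; \rho_1 \lVert \theta \rVert_1 \;+\; \frac{\rho_2}{2} \lVert \theta \rVert_2^2
</math>

The <math>\lVert \theta \rVert_1</math> term is what drives many parameters to exactly zero, which is where the compactness of the large sparse models comes from.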
  
 
== Implementation Issues with Large Scale CRFs ==

The paper goes into detail about a few issues that arise when implementing CRFs on a large scale, many of which are useful to know about when writing your own implementation:

* The generally accepted practice when using the Forward-Backward algorithm (or any long chain of probability multiplications) is to work in the log domain, which guarantees avoidance of over/underflow. This causes a massive slowdown due to repeated calls to <math>\exp</math> and <math>\log</math>. They work around this by re-normalizing the <math>\alpha</math> and <math>\beta</math> recursions at each position instead of moving to the log domain, and by vectorizing the remaining <math>\exp</math> calls (see the first sketch after this list).
* Computing the gradient often requires the most computation time. They address this in a few ways:
** Using a sparse matrix M, the computation time becomes proportional to the average number of active features, as opposed to being proportional to the total number of possible features (the second sketch after this list illustrates the idea).
** Parallelizing Forward-Backward.
** In addition, Block Coordinate Descent can truncate the forward and backward probabilities after a certain point, which speeds it up immensely.
* For large-scale feature vectors, which can contain billions of features, memory begins to become an issue.
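
The first bullet is the same per-position re-normalization trick mentioned in the Comments section below. Here is a minimal NumPy sketch of scaled forward-backward for a single sequence; the potential array <code>M</code>, its layout, and the dummy start label are assumptions made for illustration, not the paper's actual implementation.

<pre>
# First sketch: scaling as an alternative to log-domain forward-backward.
# M[t, y_prev, y] holds the exponentiated clique potential at position t
# (assumption: label index 0 acts as a dummy start label for position 0).
import numpy as np

def scaled_forward_backward(M):
    """M: array of shape (T, Y, Y). Returns scaled alphas/betas and log Z."""
    T, Y, _ = M.shape
    alpha = np.empty((T, Y))
    beta = np.empty((T, Y))
    scale = np.empty(T)

    # Forward pass: re-normalize alpha to sum to 1 at every position and
    # remember the scaling factors instead of working with log values.
    alpha[0] = M[0, 0]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = alpha[t - 1] @ M[t]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    # Backward pass, re-using the same scaling factors.
    beta[T - 1] = 1.0 / scale[T - 1]
    for t in range(T - 2, -1, -1):
        beta[t] = (M[t + 1] @ beta[t + 1]) / scale[t]

    # The log partition function falls out of the scaling factors, so the
    # alpha/beta recursions stay well scaled even for long sequences.
    log_Z = np.log(scale).sum()
    return alpha, beta, log_Z
</pre>

The second sketch illustrates the sparsity point: if only a handful of features fire at each position, the potentials (and hence the gradient) can be computed by summing over the active features alone, and the <math>\exp</math> calls can be applied to a whole block at once. The data layout and names are again hypothetical.

<pre>
# Second sketch: cost per position is proportional to the number of *active*
# features, not to the total number of features (which may be in the billions).
import numpy as np

def position_potentials(theta, active_feats, n_labels):
    """active_feats maps (y_prev, y) -> list of feature indices that fire
    at this position. Returns the Y x Y matrix of exponentiated scores."""
    scores = np.zeros((n_labels, n_labels))
    for (y_prev, y), feat_ids in active_feats.items():
        # sum only the weights of the features that actually fire here
        scores[y_prev, y] = sum(theta[k] for k in feat_ids)
    return np.exp(scores)  # one vectorized exp over the whole Y x Y block
</pre>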
 
== Experiments ==

The paper runs experiments on two domains: Part of Speech Tagging and Phonetization. Since runtime performance is also being compared, all of the experiments are run on the same machine and within the same framework, to allow for a fair comparison.

=== Phonetization ===
  
They use [[UsesDataset::Nettalk]], which contains ~20k English words, their pronunciations, and some prosodic information. They attempt to produce phonetizations for the test data.
 
 
 
  
 
==== Results ====

Reg. identifies the training method used (OWL-QN, SGD, or BCD). Method is which ''n''-gram feature set was used, occasionally augmented with windows of observations (the ''n''-gram+ and ''n''-gram++ cases). # Feat. is the number of features selected, not the size of the total possible feature space.

This table shows the results ''without'' prosodic features.
 
{| class="wikitable" border="1"
|-
! Reg.
! Method
! Iter.
! # Feat.
! Error
! Time
|-
| [[UsesMethod::Quasi Newton|OWL-QN]]
| 1-gram
| 63.4
| 4684
| 17.79%
| 11min
|-
| [[UsesMethod::Quasi Newton|OWL-QN]]
| 7-gram
| 140.2
| 38214
| 8.12%
| 1h02min
|-
| [[UsesMethod::Quasi Newton|OWL-QN]]
| 5-gram+
| 141.0
| 43429
| 7.89%
| 1h37min
|-
| [[UsesMethod::Stochastic Gradient Descent|SGD]]
| 1-gram
| 21.4
| 3540
| 18.21%
| 9min
|-
| [[UsesMethod::Stochastic Gradient Descent|SGD]]
| 5-gram+
| 28.5
| 34319
| 8.01%
| 45min
|-
| [[UsesMethod::Block Coordinate Descent|BCD]]
| 1-gram
| 28.2
| 5017
| 18.27%
| 27min
|-
| [[UsesMethod::Block Coordinate Descent|BCD]]
| 7-gram
| 9.2
| 3692
| 8.21%
| 1h22min
|-
| [[UsesMethod::Block Coordinate Descent|BCD]]
| 5-gram+
| 8.7
| 47675
| 7.91%
| 2h18min
|}

Comparative results on Nettalk using [[UsesMethod::CRF]]s are difficult to find, but the two examples they cite (88.4% accuracy and 91.7% accuracy) suggest that these results are pretty decent.

When running with prosodic features, they could not produce a model for [[UsesMethod::Quasi Newton]], and they do not really have anything to compare the results to.

{| class="wikitable" border="1"
|-
! Reg.
! Method
! Error
! Time
|-
| [[UsesMethod::Stochastic Gradient Descent|SGD]]
| 5-gram
| 14.71%/8.11%
| 55min
|-
| [[UsesMethod::Stochastic Gradient Descent|SGD]]
| 5-gram+
| 13.91%/7.52%
| 2h45min
|-
| [[UsesMethod::Block Coordinate Descent|BCD]]
| 5-gram
| 14.57%/8.06%
| 2h46min
|-
| [[UsesMethod::Block Coordinate Descent|BCD]]
| 7-gram
| 14.12%/7.86%
| 3h02min
|-
| [[UsesMethod::Block Coordinate Descent|BCD]]
| 5-gram+
| 13.85%/7.47%
| 7h14min
|-
| [[UsesMethod::Block Coordinate Descent|BCD]]
| 5-gram++
| 13.69%/7.36%
| 16h03min
|}

=== Part of Speech Tagging ===

They use the [[UsesDataset::Penn TreeBank]] for training and the [[UsesDataset::WSJ|Wall Street Journal]] for testing. They do not give an in-depth analysis of the results, only noting that [[UsesMethod::Block Coordinate Descent|BCD]] is impractical to train here, leading them to the conclusion that the method should only be used when the number of "blocks" is small. They also describe an incremental training scheme that begins with features on the current word only, then after a couple of iterations adds bigram features, then after a couple more iterations adds trigram features, and so on (sketched below). It produces an error rate of 2.63%/2.75% for POS tagging, though they do not report the run time.
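
A rough, runnable sketch of what such an incremental feature schedule looks like; the template notation and the driver loop below are made up for illustration and are not the authors' code.

<pre>
# Hypothetical sketch of the incremental schedule described above: start with
# features on the current word only, then widen the n-gram templates after a
# few training iterations, keeping the weights learned so far.
def ngram_template_schedule(max_n):
    """Yield successively richer template sets: unigrams, then bigrams, ..."""
    templates = []
    for n in range(1, max_n + 1):
        # an n-gram template looks at the n words ending at the current position
        templates.append(tuple(range(-(n - 1), 1)))  # e.g. (-1, 0) for bigrams
        yield list(templates)

for stage, templates in enumerate(ngram_template_schedule(3), start=1):
    # in a real run, each stage would train the CRF for a few iterations
    # with the features generated by these templates before moving on
    print("stage", stage, ":", templates)
</pre>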
== Related Work ==

This paper is very important to the work being done on [[UsesMethod::CRF]]s, which were originally defined in:
* [[RelatedPaper::Lafferty 2001 Conditional Random Fields|John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML.]]

Another "efficiency" paper, this time modeling the problem of semantic role labeling as a tree labeling problem to allow for an efficient algorithm:
* [[RelatedPaper::Cohn 2005 semantic role labelling with tree conditional random fields|Semantic role labelling with tree conditional random fields, by T. Cohn and P. Blunsom. In Ninth Conference on Computational Natural Language Learning, 2005.]]

== Comments ==

This is a pretty handy paper to read when implementing a CRF.

Random note on the re-normalizing alternative to log-domain arithmetic for forward-backward: it supposedly appears way back in the Rabiner 1989 tutorial on HMMs (http://www.cs.ubc.ca/~murphyk/Bayes/rabiner.pdf), though I haven't confirmed this. Anyway, I used the re-normalizing trick once for implementing CRF forward-backward, and it works very nicely. Doubles don't over/underflow until you get past about 1e±300 or so, so with per-step re-normalization you only underflow if a probability changes by a factor of 1e300 between t and (t+1), and it is very difficult to get that big a magnitude change so quickly. (Without the re-normalization, built up over a 20-long sequence, it is quite easy to underflow: that only requires a 1e15 change at each timestep.) --[[User:Brendan|Brendan]] 21:03, 13 October 2011 (UTC)
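
A quick runnable check of the magnitudes mentioned in the note above (just a sanity check of the arithmetic, not from the paper or the comment):

<pre>
# Doubles bottom out around 1e-308 (plus subnormals below that), so a single
# re-normalized step would need a ~1e300 swing to underflow, while an
# unnormalized product over 20 steps underflows with ~1e-17 per step.
import sys

print(sys.float_info.min)   # ~2.2e-308, smallest normal double
print((1e-15) ** 20)        # 1e-300: still representable
print((1e-17) ** 20)        # 0.0: underflows
</pre>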
