Sparse Additive Generative Models of Text
From Cohen Courses
Revision as of 08:50, 4 October 2012

This [[Category::Paper]] is available online [http://www.cs.cmu.edu/~epxing/papers/2011/Eisenstein_Ahmed_Xing_ICML11.pdf].

== Summary ==

Sparse Additive Generative Models of Text, or SAGE, is an alternative to traditional generative models of text. The key insight of the paper is that latent classes or topics can be modeled as sparse deviations, in log-frequency, from a constant background distribution. This enforced sparsity, the authors argue, helps prevent over-fitting. Additionally, multiple generative facets can be combined by simple addition in log space, avoiding the need for latent switching variables.
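The core parameterization can be sketched in a few lines. This is an illustrative toy (the vocabulary, background frequencies, and deviation values below are made up, not from the paper): each distribution is the softmax of a background log-frequency vector plus one or more sparse deviation vectors, combined by addition.

```python
import numpy as np

def sage_distribution(background_log_freq, *deviations):
    """Combine a background log-frequency vector with sparse deviation
    vectors by addition in log space, then normalize via softmax."""
    logits = background_log_freq + sum(deviations)
    # Subtract the max before exponentiating for numerical stability.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy vocabulary of 5 words with a background distribution.
background = np.log(np.array([0.4, 0.3, 0.15, 0.1, 0.05]))

# A sparse topic deviation: only word 3 deviates from the background.
eta_topic = np.array([0.0, 0.0, 0.0, 2.0, 0.0])

# A second facet (e.g., an ideological or geographic effect) is folded
# in by simple addition -- no switching variable is needed.
eta_facet = np.array([0.0, -1.0, 0.0, 0.0, 0.0])

p = sage_distribution(background, eta_topic, eta_facet)
print(p)  # word 3's probability is boosted, word 1's is suppressed
```

Because zero entries in a deviation vector leave the background untouched, a sparsity-inducing prior on the deviations lets most words default to the background distribution, with each facet only storing the handful of words where it actually differs.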

== Datasets ==

== Methodology ==