Eisenstein et al 2011: Sparse Additive Generative Models of Text

Citation

Sparse Additive Generative Models of Text. Eisenstein, Ahmed and Xing. Proceedings of ICML 2011.

Online version

Eisenstein et al 2011

Summary

This paper presents a sparse, additive generative modeling approach to topic modeling. It is an important alternative to Latent Dirichlet Allocation (LDA), which incorporates neither sparsity nor log-space additive modeling.

Brief Description of the method

This paper first describes three major disadvantages of Latent Dirichlet Allocation: high inference cost, overparameterization, and the lack of sparsity in the learned topic distributions. It then introduces SAGE, an additive generative model that does not re-learn the background word distribution for every topic; instead, each topic is a sparse deviation that is added to a shared background distribution in log-space.

The Generative Story

In contrast to the per-topic multinomial word distributions of LDA, SAGE works with log frequencies: the generative distribution of word n in a document d of class (or topic) <math>y_d = k</math> is

<math>p(w^{(d)}_n = i \mid y_d = k) = \frac{\exp(m_i + \eta_{k,i})}{\sum_j \exp(m_j + \eta_{k,j})},</math>

where <math>m</math> is the background log-frequency vector and <math>\eta_k</math> is the log-frequency deviation that represents topic (or class) k. By modeling only deviations from the background, the authors argue, SAGE can take advantage of sparsity-inducing priors on <math>\eta_k</math> to obtain additional robustness.
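To make the parameterization concrete, the following is a minimal numerical sketch (the five-word vocabulary and all values are hypothetical, not taken from the paper): a sparse deviation vector shifts probability toward a few topical words while the rest of the distribution stays close to the background.

<pre>
import numpy as np

# Hypothetical five-word vocabulary; m holds background log-frequencies,
# eta_k is a sparse log-frequency deviation for one topic/class k.
vocab = ["the", "model", "data", "sparse", "prior"]
m = np.log(np.array([0.50, 0.20, 0.15, 0.10, 0.05]))   # background distribution
eta_k = np.array([0.0, 0.0, 0.0, 1.2, 0.8])            # nonzero only for topical words

# SAGE word distribution: softmax of background plus deviation (addition in log-space).
scores = m + eta_k
beta_k = np.exp(scores) / np.exp(scores).sum()

for word, p_bg, p_topic in zip(vocab, np.exp(m), beta_k):
    print(f"{word:8s} background={p_bg:.3f}  topic={p_topic:.3f}")
</pre>

Because <math>\eta_k</math> is nonzero for only two terms, the topic is characterized by a handful of deviations rather than a full, separately estimated multinomial over the vocabulary.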

The generative story of SAGE can be described as follows:

  • Draw the background distribution <math>m</math> from an uninformative prior
  • For each class k:
     -- For each term i:
          1. Draw the variance <math>\tau_{k,i} \sim \mathrm{Exponential}(\lambda)</math>
          2. Draw the deviation <math>\eta_{k,i} \sim \mathcal{N}(0, \tau_{k,i})</math>
     -- Set <math>\beta_{k,i} = \exp(m_i + \eta_{k,i}) / \sum_j \exp(m_j + \eta_{k,j})</math>
  • For each document d:
     -- Draw a class <math>y_d</math> from a uniform distribution
     -- For each word n, draw <math>w^{(d)}_n \sim \mathrm{Multinomial}(\beta_{y_d})</math>

Here, <math>\mathrm{Exponential}(\lambda)</math> denotes the exponential distribution; drawing a Gaussian whose variance is exponentially distributed gives each <math>\eta_{k,i}</math> a Laplace marginal, which is what induces sparsity.
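The short simulation below sketches this generative story end to end (a sketch only, with a toy vocabulary, an arbitrary number of classes, and an arbitrary rate <math>\lambda</math>; it is not the authors' code):

<pre>
import numpy as np

rng = np.random.default_rng(1)
V, K, lam = 5, 3, 1.0          # toy vocabulary size, number of classes, Exponential rate

# Background log-frequencies m (here simply the log of an arbitrary distribution).
m = np.log(np.array([0.50, 0.20, 0.15, 0.10, 0.05]))

# For each class k and term i: tau ~ Exponential(lambda), eta ~ Normal(0, tau).
tau = rng.exponential(scale=1.0 / lam, size=(K, V))
eta = rng.normal(loc=0.0, scale=np.sqrt(tau))

# beta_k is proportional to exp(m + eta_k).
beta = np.exp(m + eta)
beta /= beta.sum(axis=1, keepdims=True)

# For each document: draw a class uniformly, then draw words from Multinomial(beta_k).
n_docs, doc_len = 4, 20
for d in range(n_docs):
    y_d = rng.integers(K)
    counts = rng.multinomial(doc_len, beta[y_d])
    print(f"doc {d}: class {y_d}, word counts {counts}")
</pre>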

If we fit a fully factored variational distribution over the latent variables and optimize the resulting bound, we obtain the following objective for each deviation vector <math>\eta_k</math>, writing <math>c_k</math> for the vector of counts of each term in the documents of class k and <math>C_k = \sum_i c_{k,i}</math>:

<math>\mathcal{L}(\eta_k) = c_k^{\top}(m + \eta_k) - C_k \log \sum_i \exp(m_i + \eta_{k,i}) - \tfrac{1}{2}\, \eta_k^{\top}\, \mathrm{diag}\!\left(\langle \tau_k^{-1} \rangle\right) \eta_k + \mathrm{const}.</math>

Newton's method can then be used to solve this optimization problem.

Parameter Estimation

For each component vector <math>\eta_k</math>, the gradient of the objective above is computed directly, and the Hessian matrix H is the sum of a diagonal matrix and a rank-one matrix, so the inverse required for the Newton step can be obtained cheaply with the Sherman-Morrison formula. For the variances, the authors construct a fully-factored variational distribution <math>Q(\tau_k) = \prod_i q(\tau_{k,i})</math>.
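The sketch below illustrates one such Newton update under the objective written above, assuming fixed expectations <math>\langle \tau_k^{-1} \rangle</math> and toy counts (the function name and all values are hypothetical; this is not the authors' implementation). It exploits the diagonal-plus-rank-one structure of the Hessian through the Sherman-Morrison formula instead of forming or inverting H explicitly.

<pre>
import numpy as np

def newton_step(eta, m, c, tau_inv):
    """One Newton update for a class deviation vector eta under
    L(eta) = c^T (m + eta) - C * log-sum-exp(m + eta) - 0.5 * eta^T diag(tau_inv) eta."""
    C = c.sum()
    s = m + eta
    beta = np.exp(s - s.max())
    beta /= beta.sum()                          # softmax of m + eta

    grad = c - C * beta - tau_inv * eta         # gradient of L

    # Hessian is -(D - C * beta beta^T) with D = diag(C * beta + tau_inv).
    # Apply (D - C * beta beta^T)^{-1} to grad via Sherman-Morrison.
    d = C * beta + tau_inv
    Dinv_beta = beta / d
    denom = 1.0 - C * (beta @ Dinv_beta)
    step = grad / d + Dinv_beta * (C * (Dinv_beta @ grad)) / denom

    # Newton update: eta - H^{-1} grad = eta + (D - C * beta beta^T)^{-1} grad
    return eta + step

# Hypothetical example: five-term vocabulary, toy counts for one class k.
m = np.log(np.array([0.50, 0.20, 0.15, 0.10, 0.05]))   # background log-frequencies
c = np.array([40.0, 25.0, 60.0, 30.0, 5.0])            # term counts in class-k documents
tau_inv = np.full(5, 2.0)                               # fixed <1/tau> expectations
eta = np.zeros(5)
for _ in range(10):
    eta = newton_step(eta, m, c, tau_inv)
print(np.round(eta, 3))
</pre>

Because D is diagonal and the remaining term is rank one, each update costs O(V) rather than the O(V^3) of a dense Newton solve, which is what makes per-class (or per-topic) deviation vectors practical for large vocabularies.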

Dataset and Experiment Settings

The authors perform four experiments on different datasets.

Experimental Results

The authors performed four major experiments. The first evaluates SAGE on document classification, where no latent variables are involved. The second explores sparse topic models with latent variables. The last two concern multifaceted generative models: jointly modeling topic and ideology, and predicting geolocation from text.

Exp 1: Document classification


Exp 2: Sparse topic models


Exp 3: Topic and ideology


Exp 4: Geolocation from text


Related Papers

This paper is related to prior work along three dimensions.

(1) .

(2) .

(3) .