Eisenstein et al 2011: Sparse Additive Generative Models of Text
Citation
Sparse Additive Generative Models of Text. Eisenstein, Ahmed and Xing. Proceedings of ICML 2011.
Online version
Summary
This paper presents SAGE, a sparse additive generative model for topic modeling. It is an alternative to Latent Dirichlet Allocation (LDA), which neither induces sparsity in its topic-word distributions nor combines effects additively in log space.
Brief Description of the method
This paper first describes three major disadvantages of Latent Dirichlet Allocation: high inference cost, overparameterization, and lack of sparsity. It then introduces SAGE, an additive generative model in which each topic is not required to re-learn the full background word distribution; instead, each topic is represented as a sparse deviation from a shared background, and the two are combined by addition in log space.
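As a toy illustration of the additive idea (my own numbers, not from the paper): because the deviation vector is sparse, most words simply keep their relative background probabilities, so a topic only has to store the few words on which it differs from the background.

<syntaxhighlight lang="python">
import numpy as np

# Toy vocabulary of 5 words; m holds the background log frequencies.
m = np.log(np.array([0.4, 0.3, 0.15, 0.1, 0.05]))
# A sparse deviation: this topic only boosts word 3 and suppresses word 0.
eta = np.array([-1.0, 0.0, 0.0, 2.0, 0.0])

beta = np.exp(m + eta)
beta /= beta.sum()   # topic's word distribution = softmax(m + eta)
print(beta)          # words 1, 2, 4 keep their relative background proportions
</syntaxhighlight>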
The Generative Story
In contrast to the traditional multinomial modeling of words in LDA, SAGE works with log frequencies. The generative distribution of a word w in a document d with class y_d = k is

p(w | y_d = k) = exp(m_w + η_{k,w}) / Σ_i exp(m_i + η_{k,i}),

where m is the background log-frequency vector and η_k is the log-frequency deviation that represents topic (or class) k. By modeling only the deviation from the background, the authors argue that SAGE can take advantage of sparsity-inducing priors on η_k to obtain additional robustness. The generative story of SAGE can be described as follows (a short code sketch follows the list):
- Draw background distribution m from an uninformative prior
- For each class k:
-- For each term i: 1. Draw τ_{k,i} ~ Exponential(λ); 2. Draw η_{k,i} ~ Normal(0, τ_{k,i})
-- Set β_{k,i} = exp(m_i + η_{k,i}) / Σ_j exp(m_j + η_{k,j})
- For each document d:
-- Draw a class y_d from a uniform distribution over classes
-- For each word n, draw w_n^{(d)} ~ Multinomial(β_{y_d})
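A minimal NumPy sketch of this generative story, assuming a fixed background log-frequency vector m, K classes, and hypothetical hyperparameters lam (the rate of the exponential prior) and doc_length; this illustrates the model rather than reproducing the authors' implementation:

<syntaxhighlight lang="python">
import numpy as np

def sample_sage_corpus(m, K, num_docs, doc_length, lam=1.0, rng=None):
    """Sample documents from the SAGE generative story (illustrative sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    V = m.shape[0]
    # For each class k and term i: tau ~ Exponential(lam), eta ~ Normal(0, tau)
    tau = rng.exponential(scale=1.0 / lam, size=(K, V))
    eta = rng.normal(loc=0.0, scale=np.sqrt(tau))
    # beta_k = exp(m + eta_k) / sum_i exp(m_i + eta_{k,i})  (softmax of background + deviation)
    logits = m + eta
    beta = np.exp(logits - logits.max(axis=1, keepdims=True))
    beta /= beta.sum(axis=1, keepdims=True)
    docs, labels = [], []
    for _ in range(num_docs):
        y = int(rng.integers(K))                             # class drawn uniformly
        words = rng.choice(V, size=doc_length, p=beta[y])    # words ~ Multinomial(beta_y)
        docs.append(words)
        labels.append(y)
    return docs, labels, beta
</syntaxhighlight>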
In the generative story above, Exponential(λ) indicates the exponential distribution over the variances τ_{k,i}. If we fit a variational distribution Q over these latent variances and optimize the resulting bound, we obtain an objective for each deviation vector of the form

ℓ(η_k) = c_k · (m + η_k) - C_k log Σ_i exp(m_i + η_{k,i}) - (1/2) Σ_i E_Q[1/τ_{k,i}] η_{k,i}² + const,

where c_k is the vector of word counts aggregated over documents of class k and C_k = Σ_i c_{k,i}. Newton's method can then be used to solve this optimization problem.
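A rough sketch of that objective and its gradient for one deviation vector (my notation, not the authors' code: counts is the aggregated count vector c_k, and inv_tau holds the variational expectations E_Q[1/τ_{k,i}]):

<syntaxhighlight lang="python">
import numpy as np

def sage_objective_and_grad(eta_k, m, counts, inv_tau):
    """Variational bound contribution for one deviation vector eta_k, and its gradient."""
    C_k = counts.sum()
    logits = m + eta_k
    log_norm = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()  # log-sum-exp
    beta_k = np.exp(logits - log_norm)            # current word distribution for class k
    obj = counts @ logits - C_k * log_norm - 0.5 * eta_k @ (inv_tau * eta_k)
    grad = counts - C_k * beta_k - inv_tau * eta_k
    return obj, grad
</syntaxhighlight>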
Parameter Estimation
For the deviation vector η_k, the negative Hessian of this objective is a diagonal matrix plus a rank-one term, so the authors use the Sherman-Morrison formula to apply its inverse efficiently when taking Newton steps computed from the gradient above. For the variances, the authors construct a fully-factored variational distribution Q(τ_k) = Π_i Q(τ_{k,i}).
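A sketch of one such Newton update under the same assumptions as above: the negative Hessian is C_k·diag(β_k) + diag(E_Q[1/τ_k]) minus the rank-one term C_k·β_k β_kᵀ, so the Sherman-Morrison formula turns its inverse into a diagonal solve plus a rank-one correction.

<syntaxhighlight lang="python">
import numpy as np

def sage_newton_step(eta_k, m, counts, inv_tau):
    """One Newton update for eta_k using the Sherman-Morrison formula (illustrative)."""
    C_k = counts.sum()
    logits = m + eta_k
    beta_k = np.exp(logits - logits.max())
    beta_k /= beta_k.sum()
    grad = counts - C_k * beta_k - inv_tau * eta_k
    # Negative Hessian = D - u u^T, with D diagonal and u a single vector (rank-one structure)
    D = C_k * beta_k + inv_tau
    u = np.sqrt(C_k) * beta_k
    # Sherman-Morrison: (D - u u^T)^{-1} v = D^{-1} v + D^{-1} u (u^T D^{-1} v) / (1 - u^T D^{-1} u)
    Dinv_grad = grad / D
    Dinv_u = u / D
    step = Dinv_grad + Dinv_u * (u @ Dinv_grad) / (1.0 - u @ Dinv_u)
    return eta_k + step
</syntaxhighlight>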
Dataset and Experiment Settings
The authors perform four experiments on different datasets.
- Document classification on 20 Newsgroups data
- Sparse topic models on NIPS dataset
- Topic and ideology prediction on 2008 US presidential election political blogs
- Geolocation prediction from text using Twitter data
Experimental Results
The authors performed four major experiments. The first (document classification) involves no latent variables; the second explores sparse topic models with latent variables; the last two combine a latent topic with a second facet (ideology, geographic location) in a single multifaceted generative model.
Exp 1: Document classification
[[File:Sage1.png]]
Exp 2: Sparse topic models
[[File:Sage2.png]]
Exp 3: Topic and ideology
[[File:Sage3.png]]
Exp 4: Geolocation from text
[[File:Sage4.png]]
Related Papers
This paper is related to prior work along three dimensions.
(1) .
(2) .
(3) .