Blei et al Latent Dirichlet Allocation
Citation
@article{blei2003lda,
  author    = {Blei, David M. and Ng, Andrew Y. and Jordan, Michael I.},
  title     = {Latent Dirichlet Allocation},
  journal   = {Journal of Machine Learning Research},
  volume    = {3},
  pages     = {993--1022},
  month     = mar,
  year      = {2003},
  issn      = {1532-4435},
  url       = {http://dl.acm.org/citation.cfm?id=944919.944937},
  publisher = {JMLR.org}
}
Online Version

http://dl.acm.org/citation.cfm?id=944919.944937
Summary
This paper addresses the problem of document modeling. It introduces latent Dirichlet allocation (LDA) and evaluates it on document modeling, document classification, and collaborative filtering tasks.
LDA
LDA is a generative probabilistic model for discrete data such as text corpora. It is a three-level hierarchical Bayesian model. Each item of the collection is modeled as a finite mixture over an underlying (latent) set of topics, where each topic is in turn characterized by a distribution over words. Each document in the corpus is assumed to be generated using the following process:
1. The author chooses the number of words N in the document by drawing from a Poisson(ξ) distribution.
2. He then draws θ from a Dirichlet(α) hypergenerator; θ parametrizes a Multinomial(θ) topic generator.
3. For each of the N words:
 a. A topic z_n is chosen from the Multinomial(θ) distribution.
 b. A topic-specific word generator, parametrized by β and z_n, is then sampled to get the word w_n.
The parameters α and β are corpus-level parameters and are sampled only once in the process of generating a corpus. The variables θ_d are document-level variables and are sampled once per document. Finally, the variables z_dn and w_dn are word-level variables and are sampled once for each word in each document. What makes LDA unique is that it consists of three levels, and notably the topic node is sampled repeatedly within a document. This allows documents to be associated with multiple topics rather than just one.
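A minimal sketch of this generative process in Python. The numeric values of k, V, ξ, and α below are toy assumptions, not values from the paper, and words are represented as vocabulary indices:

<syntaxhighlight lang="python">
# Illustrative sketch of the LDA generative process (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

k, V = 3, 10                       # number of topics, vocabulary size (toy values)
xi = 8.0                           # Poisson mean for document length (assumed)
alpha = np.full(k, 0.5)            # Dirichlet prior over topic proportions (assumed)
beta = rng.dirichlet(np.ones(V), size=k)   # each row: one topic's word distribution

def generate_document():
    N = rng.poisson(xi)            # 1. draw the document length
    theta = rng.dirichlet(alpha)   # 2. draw topic proportions for this document
    words = []
    for _ in range(N):             # 3. for each word position:
        z = rng.choice(k, p=theta)         # a. draw a topic
        w = rng.choice(V, p=beta[z])       # b. draw a word from that topic
        words.append(w)
    return words

print(generate_document())
</syntaxhighlight>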
Inference
The posterior distribution of the hidden variables given a document is, in general, intractable to compute exactly. However, many efficient approximate inference techniques can be used to estimate the posterior. The paper describes a convexity-based variational method, used within an EM algorithm for empirical Bayes parameter estimation.
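Concretely, the quantity that makes exact inference intractable is the marginal likelihood of a document, in which θ and β are coupled inside the sum over topics:

<math>p(\mathbf{w} \mid \alpha, \beta) = \int p(\theta \mid \alpha) \left( \prod_{n=1}^{N} \sum_{z_n} p(z_n \mid \theta)\, p(w_n \mid z_n, \beta) \right) d\theta</math>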
The basic idea is to obtain a lower bound on the log likelihood, parametrized by the variational parameters, using Jensen's inequality. The variational parameters are chosen by an optimization procedure that attempts to find the tightest possible lower bound. The authors show that this amounts to choosing the parameters to minimize the KL divergence between the distribution under the variational parameters and the true posterior. This leads to a pair of interdependent update equations which can be solved via an iterative fixed-point method.
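For LDA these fixed-point updates take the following form (Equations 6 and 7 in the paper), where Ψ denotes the digamma function; the Ψ(Σ_j γ_j) term is constant in i and disappears when φ_n is normalized:

<math>\phi_{ni} \propto \beta_{i w_n} \exp\left(\Psi(\gamma_i) - \Psi\left(\textstyle\sum_{j=1}^{k} \gamma_j\right)\right)</math>

<math>\gamma_i = \alpha_i + \sum_{n=1}^{N} \phi_{ni}</math>

A minimal sketch of the per-document iteration, assuming numpy/scipy and a document given as a list of vocabulary indices (the variable names and iteration count are ours, not the paper's):

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import digamma

def variational_inference(doc, alpha, beta, n_iter=50):
    """Fixed-point updates for one document; doc is a list of vocab indices."""
    alpha = np.asarray(alpha, dtype=float)
    k, N = len(alpha), len(doc)
    phi = np.full((N, k), 1.0 / k)     # q(z_n = i): per-word topic responsibilities
    gamma = alpha + N / k              # q(theta): variational Dirichlet parameter
    for _ in range(n_iter):
        # phi_ni proportional to beta[i, w_n] * exp(digamma(gamma_i)); the
        # digamma(sum gamma) term is constant in i and cancels on normalization.
        log_phi = np.log(beta[:, doc]).T + digamma(gamma)
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)
        # gamma_i = alpha_i + sum_n phi_ni
        gamma = alpha + phi.sum(axis=0)
    return gamma, phi
</syntaxhighlight>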
Parameter Estimation
We now need to estimate the parameters α and β of the LDA model. An empirical Bayes method for parameter estimation is provided. Given a corpus of documents D, we wish to find parameters α and β that maximize the (marginal) log likelihood of the data:

<math>\ell(\alpha, \beta) = \sum_{d \in D} \log p(\text{document } d \mid \alpha, \beta)</math>

Since this likelihood is intractable to compute exactly, it is replaced by the variational lower bound from the previous section, which leads to the following iterative variational EM algorithm:
1. E step: For each document, find the optimizing values of the variational parameters γ_d* and φ_d*.
2. M step: Maximize the resulting lower bound on the log likelihood with respect to the model parameters α and β.
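A schematic of this alternation, assuming the variational_inference sketch above and a corpus given as lists of vocabulary indices. As simplifying assumptions of this sketch, α is held fixed and β is left unsmoothed; the paper additionally updates α by Newton-Raphson and smooths β to avoid zero counts:

<syntaxhighlight lang="python">
import numpy as np

def variational_em(corpus, alpha, V, n_em_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    k = len(alpha)
    beta = rng.dirichlet(np.ones(V), size=k)   # random initial topics
    for _ in range(n_em_iters):
        counts = np.zeros((k, V))
        for doc in corpus:
            # E step: per-document variational parameters (sketch above)
            gamma, phi = variational_inference(doc, alpha, beta)
            for n, w in enumerate(doc):
                counts[:, w] += phi[n]         # expected topic-word counts
        # M step: beta_ij proportional to the expected count of word j
        # under topic i (the paper's closed-form update for beta).
        beta = counts / counts.sum(axis=1, keepdims=True)
    return beta
</syntaxhighlight>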
Experiments
LDA is empirically evaluated in several problem domains: document modeling, document classification, and collaborative filtering.
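For the document-modeling experiments, models are compared by the perplexity of a held-out test set of M documents, where a lower perplexity indicates better generalization:

<math>\text{perplexity}(D_{\text{test}}) = \exp\left\{ -\frac{\sum_{d=1}^{M} \log p(\mathbf{w}_d)}{\sum_{d=1}^{M} N_d} \right\}</math>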