Teh et, JASA2006

Citation

Y. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 2006.

Online version

Matthew Beal's papers

Summary

This paper proposes a nonparametric Bayesian approach for deciding the number of mixture components in grouped data. The basic ideas are:

  • Develop analogs for the Hierarchical Dirichlet process with representations of both a stick-breaking construction and a "Chinese restaurant franchise".

  • Use an MCMC algorithm for posterior inference under hierarchical Dirichlet process mixtures.

Methodology

A hierarchical Dirichlet process is a distribution over a set of random probability measures over a space $\Theta$. The process defines a set of random probability measures $G_j$, one for each group, and a global random probability measure $G_0$. The global measure $G_0$ is distributed as a Dirichlet process with concentration parameter $\gamma$ and base probability measure $H$:

$G_0 \mid \gamma, H \sim \mathrm{DP}(\gamma, H)$

and the random measures $G_j$ are conditionally independent given $G_0$, with distributions given by a Dirichlet process with base probability measure $G_0$:

$G_j \mid \alpha_0, G_0 \sim \mathrm{DP}(\alpha_0, G_0)$.

A hierarchical Dirichlet process can be used as the prior distribution over the factors for grouped data. For each $j$ let $\theta_{j1}, \theta_{j2}, \ldots$ be i.i.d. random variables distributed as $G_j$; each $\theta_{ji}$ is a factor corresponding to a single observation $x_{ji}$. The likelihood is given by:

$\theta_{ji} \mid G_j \sim G_j, \qquad x_{ji} \mid \theta_{ji} \sim F(\theta_{ji})$.

The hierarchical Dirichlet process can readily be extended to more than two levels. That is, the base measure H can itself be a draw from a DP, and the hierarchy can be extended for as many levels as are deemed useful.


  • The stick-breaking construction

Given that the global measure $G_0$ is distributed as a Dirichlet process, it can be expressed using a stick-breaking representation:

$G_0 = \sum_{k=1}^{\infty} \beta_k \delta_{\phi_k}$

where $\phi_k \sim H$ independently and $\beta = (\beta_k)_{k=1}^{\infty} \sim \mathrm{GEM}(\gamma)$ are mutually independent. Since $G_0$ has support at the points $\phi_k$, each $G_j$ necessarily has support at these points as well, and can thus be written as:

$G_j = \sum_{k=1}^{\infty} \pi_{jk} \delta_{\phi_k}$

Let $\pi_j = (\pi_{jk})_{k=1}^{\infty}$. Note that the weights $\pi_j$ are independent given $\beta$ (since the $G_j$ are independent given $G_0$). These weights $\pi_j$ are related to the global weights $\beta$.

An equivalent representation of the hierarchical Dirichlet process mixture is:

$\beta \mid \gamma \sim \mathrm{GEM}(\gamma), \quad \pi_j \mid \alpha_0, \beta \sim \mathrm{DP}(\alpha_0, \beta), \quad z_{ji} \mid \pi_j \sim \pi_j, \quad \phi_k \mid H \sim H, \quad x_{ji} \mid z_{ji}, (\phi_k)_{k=1}^{\infty} \sim F(\phi_{z_{ji}})$.

After some derivations, the relation between the weights $\pi_j$ and $\beta$ is:

$\pi_{jk} = \pi'_{jk} \prod_{l=1}^{k-1} (1 - \pi'_{jl}), \qquad \pi'_{jk} \sim \mathrm{Beta}\!\left(\alpha_0 \beta_k,\; \alpha_0 \Big(1 - \sum_{l=1}^{k} \beta_l\Big)\right)$.
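
To make this construction concrete, here is a minimal Python sketch of a truncated stick-breaking representation. The truncation level K, the Gaussian base measure H, and the Gaussian likelihood F are illustrative assumptions, not choices made in the paper:

 import numpy as np
 
 def gem_weights(gamma, K, rng):
     # Truncated GEM(gamma): v_k ~ Beta(1, gamma), beta_k = v_k * prod_{l<k} (1 - v_l)
     v = rng.beta(1.0, gamma, size=K)
     return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
 
 def group_weights(beta, alpha0, rng):
     # pi'_jk ~ Beta(alpha0 * beta_k, alpha0 * (1 - sum_{l<=k} beta_l)), then stick-breaking
     tail = 1.0 - np.cumsum(beta)
     v = rng.beta(alpha0 * beta, np.maximum(alpha0 * tail, 1e-12))
     return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
 
 rng = np.random.default_rng(0)
 K = 50
 beta = gem_weights(gamma=1.0, K=K, rng=rng)        # global weights, beta ~ GEM(gamma)
 pi_j = group_weights(beta, alpha0=5.0, rng=rng)    # weights for one group j
 phi = rng.normal(size=K)                           # atoms phi_k ~ H, here H = N(0, 1)
 z = rng.choice(K, size=100, p=pi_j / pi_j.sum())   # z_ji ~ pi_j (renormalized after truncation)
 x = rng.normal(phi[z], 0.1)                        # x_ji ~ F(phi_{z_ji}), here a narrow Gaussian

Renormalizing pi_j simply compensates for the small amount of probability mass lost to truncation.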


  • Chinese restaurant franchise

[Figure Crf.jpg: the Chinese restaurant franchise representation]

The restaurants correspond to groups and the customers correspond to the factors $\theta_{ji}$. Let $\phi_1, \ldots, \phi_K$ denote $K$ i.i.d. random variables distributed according to $H$; this is the global menu of dishes. Variables $\psi_{jt}$ represent the table-specific choice of dishes; in particular, $\psi_{jt}$ is the dish served at table $t$ in restaurant $j$. We use $n_{jtk}$ to denote the number of customers in restaurant $j$ at table $t$ eating dish $k$, and $m_{jk}$ to denote the number of tables in restaurant $j$ serving dish $k$, with marginal counts written with dots. Then,

$\theta_{ji} \mid \theta_{j1}, \ldots, \theta_{j,i-1}, \alpha_0, G_0 \sim \sum_{t=1}^{m_{j\cdot}} \frac{n_{jt\cdot}}{i-1+\alpha_0} \delta_{\psi_{jt}} + \frac{\alpha_0}{i-1+\alpha_0} G_0,$

$\psi_{jt} \mid \psi_{11}, \psi_{12}, \ldots, \psi_{21}, \ldots, \psi_{j,t-1}, \gamma, H \sim \sum_{k=1}^{K} \frac{m_{\cdot k}}{m_{\cdot\cdot}+\gamma} \delta_{\phi_k} + \frac{\gamma}{m_{\cdot\cdot}+\gamma} H.$
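
The seating process is straightforward to simulate. Below is a minimal Python sketch of the franchise's generative process; the function name and the choice of H = N(0, 1) for the dish draws are illustrative assumptions:

 import numpy as np
 
 def simulate_crf(customers_per_restaurant, alpha0, gamma, rng):
     dishes = []       # global menu phi_1, ..., phi_K (grows as new dishes appear)
     m_k = []          # m_.k: number of tables across all restaurants serving dish k
     seating = []      # (restaurant j, table t, dish k) for each customer
     for j, n_j in enumerate(customers_per_restaurant):
         n_jt = []     # n_jt.: number of customers at each table of restaurant j
         k_jt = []     # index of the dish served at each table of restaurant j
         for i in range(n_j):
             # sit at existing table t w.p. prop. to n_jt., or at a new table w.p. prop. to alpha0
             p = np.array(n_jt + [alpha0], dtype=float)
             t = rng.choice(len(p), p=p / p.sum())
             if t == len(n_jt):                     # a new table must choose its dish
                 # existing dish k w.p. prop. to m_.k, brand-new dish w.p. prop. to gamma
                 q = np.array(m_k + [gamma], dtype=float)
                 k = rng.choice(len(q), p=q / q.sum())
                 if k == len(dishes):               # brand-new dish phi_k ~ H, here N(0, 1)
                     dishes.append(rng.normal())
                     m_k.append(0)
                 m_k[k] += 1
                 n_jt.append(0)
                 k_jt.append(k)
             n_jt[t] += 1
             seating.append((j, t, k_jt[t]))
     return dishes, seating
 
 rng = np.random.default_rng(1)
 dishes, seating = simulate_crf([30, 30], alpha0=1.0, gamma=1.0, rng=rng)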

MCMC algorithm for posterior sampling in the Chinese restaurant franchise

The Chinese restaurant franchise yields a Gibbs sampling scheme for posterior sampling given observations $x$. Rather than dealing with the $\theta_{ji}$'s and $\psi_{jt}$'s directly, we sample their index variables $t_{ji}$ and $k_{jt}$ instead.


  • Sampling t. The prior probability that $t_{ji}$ takes on a particular previously used value $t$ is proportional to $n_{jt\cdot}^{-ji}$, whereas the probability that it takes on a new value (say $t^{new} = m_{j\cdot}+1$) is proportional to $\alpha_0$. The likelihood due to $x_{ji}$ given $t_{ji} = t$ for some previously used $t$ is $f_k^{-x_{ji}}(x_{ji})$ with $k = k_{jt}$, so that

$p(t_{ji} = t \mid \mathbf{t}^{-ji}, \mathbf{k}) \propto \begin{cases} n_{jt\cdot}^{-ji} \, f_{k_{jt}}^{-x_{ji}}(x_{ji}) & \text{if $t$ previously used,} \\ \alpha_0 \, p(x_{ji} \mid \mathbf{t}^{-ji}, t_{ji} = t^{new}, \mathbf{k}) & \text{if $t = t^{new}$.} \end{cases}$

If the sampled value of $t_{ji}$ is $t^{new}$, we obtain a sample of $k_{jt^{new}}$ by sampling from

$p(k_{jt^{new}} = k \mid \mathbf{t}, \mathbf{k}^{-jt^{new}}) \propto \begin{cases} m_{\cdot k} \, f_k^{-x_{ji}}(x_{ji}) & \text{if $k$ previously used,} \\ \gamma \, f_{k^{new}}^{-x_{ji}}(x_{ji}) & \text{if $k = k^{new}$.} \end{cases}$

If as a result of updating $t_{ji}$ some table $t$ becomes unoccupied, i.e., $n_{jt\cdot} = 0$, then the probability that this table will be reoccupied in the future is zero, since this is always proportional to $n_{jt\cdot}$; as a result, we may delete the corresponding $k_{jt}$ from the data structure. If as a result of deleting $k_{jt}$ some mixture component $k$ becomes unallocated, we delete this mixture component as well. Both sampling steps are sketched in code after the next item.

  • Sampling k. Since changing $k_{jt}$ actually changes the component membership of all data items at table $t$, the likelihood obtained by setting $k_{jt} = k$ is given by $f_k^{-x_{jt}}(x_{jt})$ (where $x_{jt}$ denotes all customers seated at table $t$ of restaurant $j$), so that the conditional probability of $k_{jt}$ is

$p(k_{jt} = k \mid \mathbf{t}, \mathbf{k}^{-jt}) \propto \begin{cases} m_{\cdot k}^{-jt} \, f_k^{-x_{jt}}(x_{jt}) & \text{if $k$ previously used,} \\ \gamma \, f_{k^{new}}^{-x_{jt}}(x_{jt}) & \text{if $k = k^{new}$.} \end{cases}$
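
As a sketch of how the two conditionals translate into code, the following assumes the conditional likelihoods $f_k^{-x_{ji}}(x_{ji})$ and $f_k^{-x_{jt}}(x_{jt})$ have already been computed for the current state; all names here are illustrative assumptions, not the authors' implementation:

 import numpy as np
 
 def sample_t(n_jt, k_jt, m_k, alpha0, gamma, f_obs, f_obs_new, rng):
     # n_jt[t]: n_jt.^{-ji}; k_jt[t]: dish at table t; m_k[k]: m_.k
     # f_obs[k]: f_k^{-x_ji}(x_ji); f_obs_new: f_{k^new}^{-x_ji}(x_ji)
     p = [n * f_obs[k] for n, k in zip(n_jt, k_jt)]     # previously used tables
     marginal = (sum(m * f_obs[k] for k, m in enumerate(m_k))
                 + gamma * f_obs_new) / (sum(m_k) + gamma)
     p.append(alpha0 * marginal)                        # the new table t^new
     p = np.asarray(p, dtype=float)
     t = rng.choice(len(p), p=p / p.sum())
     if t == len(n_jt):                                 # t^new: sample its dish k_{j t^new}
         q = np.asarray([m * f_obs[k] for k, m in enumerate(m_k)] + [gamma * f_obs_new])
         return t, rng.choice(len(q), p=q / q.sum())    # last index means a new dish k^new
     return t, k_jt[t]
 
 def sample_k(m_k_minus, gamma, f_tab, f_tab_new, rng):
     # m_k_minus[k]: m_.k^{-jt}; f_tab[k]: f_k^{-x_jt}(x_jt); f_tab_new: f_{k^new}^{-x_jt}(x_jt)
     q = np.asarray([m * f_tab[k] for k, m in enumerate(m_k_minus)] + [gamma * f_tab_new])
     return rng.choice(len(q), p=q / q.sum())           # last index means a new dish k^new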

Data

Nematode biology abstracts

NIPS 1988-1999

Alice’s Adventures in Wonderland by Lewis Carroll

Related Papers

Aldous, D. (1985), "Exchangeability and Related Topics," in École d'Été de Probabilités de Saint-Flour XIII–1983, Springer, Berlin, pp. 1–198.

Antoniak, C. (1974), “Mixtures of Dirichlet Processes with Applications to Bayesian Nonparametric Problems,” Annals of Statistics, 2(6), pp. 1152–1174.