Gibbs sampling

Gibbs sampling is used to sample from the joint (stationary) distribution of two or more random variables when exact computation of the joint density or of a marginal is intractable. Usually some of the variables are observed, so their values need not be resampled during the Gibbs sampling iterations. This form of approximate inference is commonly used for posterior inference in probabilistic graphical models, where exact computation of marginals is intractable.
 
== Motivation ==

Gibbs sampling was introduced in the context of image processing by Geman and Geman [1]. The Gibbs sampler is a technique for generating random variables from a (marginal) distribution indirectly, without having to calculate the density [2]. Thus, given the conditional densities <math>f(x_i | x_{(-i)}) = f(x_i | x_1, \cdots, x_{i-1}, x_{i+1}, \cdots, x_K)</math>, we can use Gibbs sampling to estimate the marginal distributions <math>f(x_i)</math>, or the expectation of any other function of <math>x_i</math>.
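Concretely, computing the marginal directly would require integrating the joint density over all the other variables. Using the notation above:

:<math>f(x_i) = \int f(x_1, \cdots, x_K)\, dx_{(-i)}</math>

This integral is typically intractable when <math>K</math> is large, so the Gibbs sampler instead draws samples <math>X_i^{(1)}, X_i^{(2)}, \cdots</math> whose empirical distribution approximates <math>f(x_i)</math>.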
  
 
== Algorithm ==

1. Take some initial values <math>X_k^{(0)}, k = 1, 2, \cdots, K</math>.
* Initialize the state of the sampler, for example by assigning a uniformly random value to each of the random variables in the joint probability distribution.

2. Repeat for <math>t = 1, 2, \cdots</math>:
* Sample each random variable in turn, conditioned on the current values of all the other random variables. Each such conditional is proportional to the joint distribution, as shown in the formula below, so it can be computed up to a normalizing constant:

:<math>p(x_k|x_1,\dots,x_{k-1},x_{k+1},\dots,x_K) = \frac{p(x_1,\dots,x_K)}{p(x_1,\dots,x_{k-1},x_{k+1},\dots,x_K)} \propto p(x_1,\dots,x_K)</math>

: For <math>k = 1, 2, \cdots, K</math>, generate <math>X_k^{(t)}</math> from <math>f(X_k^{(t)} | X_1^{(t)}, \cdots, X_{k-1}^{(t)}, X_{k+1}^{(t-1)}, \cdots, X_K^{(t-1)})</math>.

3. Continue step 2 until the joint distribution of <math>(X_1^{(t)}, \cdots, X_K^{(t)})</math> no longer changes.
* In practice, stop once a convergence criterion is met, for example a stable likelihood of the data. A sketch of the procedure in code follows this list.
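To make the loop above concrete, here is a minimal Python sketch for a case where both full conditionals are known in closed form: a bivariate standard normal with correlation <math>\rho</math>, for which <math>X \mid Y = y \sim N(\rho y, 1 - \rho^2)</math>, and symmetrically for <math>Y \mid X</math>. The function name and parameters (gibbs_bivariate_normal, rho, n_iters, burn_in) are illustrative choices, not part of the original article.

<pre>
import numpy as np

# Minimal Gibbs sampler sketch (assumed example, not from the article):
# bivariate standard normal with correlation rho, where both full
# conditionals are known in closed form:
#   X | Y = y  ~  N(rho * y, 1 - rho^2)
#   Y | X = x  ~  N(rho * x, 1 - rho^2)

def gibbs_bivariate_normal(rho=0.8, n_iters=10000, burn_in=1000, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0                   # step 1: initial values X^(0), Y^(0)
    sd = np.sqrt(1.0 - rho ** 2)      # conditional standard deviation
    samples = []
    for t in range(n_iters):          # step 2: repeat for t = 1, 2, ...
        x = rng.normal(rho * y, sd)   # draw X^(t) given the current y
        y = rng.normal(rho * x, sd)   # draw Y^(t) given the *new* x
        if t >= burn_in:              # keep draws only after burn-in
            samples.append((x, y))
    return np.array(samples)

samples = gibbs_bivariate_normal()
print("empirical correlation:", np.corrcoef(samples.T)[0, 1])  # close to rho
</pre>

Note that the draw for <math>y</math> uses the freshly sampled <math>x</math>, matching the indexing <math>X_1^{(t)}, \cdots, X_{k-1}^{(t)}</math> in step 2.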
 
  
 
== A Simple Proof of Convergence ==

== Burn-in ==
  
 
== Relation to EM ==

== Used In ==

== References ==
1. S. Geman and D. Geman. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741, 1984.

2. G. Casella and E. I. George. Explaining the Gibbs Sampler. The American Statistician, 46(3):167-174, 1992. http://biostat.jhsph.edu/~mmccall/articles/casella_1992.pdf
