Gibbs sampling

From Cohen Courses
Revision as of 17:07, 13 October 2011

Gibbs sampling is used to sample from the stationary joint distribution of two or more random variables when exact computation of the relevant integral or marginal is intractable. Usually some of the variables in this set are the actual observables, so their values need not be resampled in the Gibbs sampling iterations. This form of approximate inference is generally used for posterior probability inference in probabilistic graphical models where exact computation of the marginals is intractable.

Motivation

Gibbs sampling was introduced in the context of image processing by Geman and Geman [1]. The Gibbs sampler is a technique for generating random variables from a (marginal) distribution indirectly, without having to calculate the density [2]. Thus, given the conditional densities f(x|y) and f(y|x), we can use Gibbs sampling to compute the marginal distribution f(x), or the expectation of any function of x, without ever working with f(x) directly.

Algorithm

1. Take some initial values x_1^(0), ..., x_k^(0).

2. Repeat for t = 1, 2, ...:

For i = 1, ..., k, sample x_i^(t) from the conditional distribution p(x_i | x_1^(t), ..., x_{i-1}^(t), x_{i+1}^(t-1), ..., x_k^(t-1)), i.e., condition on the most recent value of every other variable.

3. Continue step 2 until the joint distribution of (x_1^(t), ..., x_k^(t)) doesn't change.
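The steps above can be sketched in code. The following is a minimal illustration, not the original authors' implementation: it runs a Gibbs sampler for a standard bivariate normal with correlation rho, whose full conditionals are the well-known X | Y=y ~ N(rho*y, 1-rho^2) and Y | X=x ~ N(rho*x, 1-rho^2). The function name and parameter choices are assumptions made for this example.

```python
import random

def gibbs_bivariate_normal(rho, num_iters, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each iteration resamples one coordinate from its exact conditional
    given the current value of the other coordinate (steps 1-2 above).
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0                      # step 1: arbitrary initial values
    cond_sd = (1.0 - rho * rho) ** 0.5   # conditional standard deviation
    samples = []
    for _ in range(num_iters):           # step 2: sweep over the variables
        x = rng.gauss(rho * y, cond_sd)  # sample x | current y
        y = rng.gauss(rho * x, cond_sd)  # sample y | new x
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, num_iters=20000)
```

After discarding an initial stretch of samples, the empirical means and correlation of the retained draws should be close to the true values (0, 0, and rho).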


Under regularity conditions, it can be shown that this procedure eventually stabilizes, and the resulting random variables (x_1^(t), ..., x_k^(t)) are indeed a sample from the joint distribution f(x_1, ..., x_k).

A simple proof of convergence for the bivariate case

Consider a bivariate system of Bernoulli random variables X and Y. Define two transition matrices A_{y|x} and A_{x|y} such that A_{y|x}[i][j] = P(Y = j | X = i) and A_{x|y}[i][j] = P(X = j | Y = i). One Gibbs scan samples Y from P(Y | X) and then a new X from P(X | Y), so the transition probability from X^(t) to X^(t+1) is given by the matrix A = A_{y|x} A_{x|y}. If the initial distribution of X was the row vector f_0, then at the t-th iteration f_t = f_0 A^t. It is well known that as t approaches infinity, f_t approaches a stationary point. The stationary point represents the marginal distribution f(x).
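This convergence argument can be checked numerically. The sketch below uses hypothetical conditional probabilities (the specific numbers are made up for illustration), builds the one-scan transition matrix as a product of the two conditional matrices, and iterates f_t = f_{t-1} A until it settles.

```python
def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical conditionals for a pair of Bernoulli variables:
# A_yx[i][j] = P(Y = j | X = i), A_xy[i][j] = P(X = j | Y = i).
A_yx = [[0.7, 0.3], [0.2, 0.8]]
A_xy = [[0.6, 0.4], [0.1, 0.9]]

# One Gibbs scan on the x-chain: sample y | x, then x' | y,
# so P(x' | x) = sum_y P(y | x) P(x' | y), i.e. A = A_yx * A_xy.
A = matmul2(A_yx, A_xy)

# Iterate f_t = f_{t-1} A; f_t = f_0 A^t approaches the stationary point.
f = [0.5, 0.5]
for _ in range(50):
    f = [f[0] * A[0][0] + f[1] * A[1][0],
         f[0] * A[0][1] + f[1] * A[1][1]]
# f is now (approximately) the stationary marginal distribution of X.
```

After 50 iterations the distribution f is unchanged by a further application of A, which is exactly the stationarity property the proof appeals to.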

Burn-in

The Gibbs sampler requires a certain number of iterations before it approaches the stationary state and generates samples from the marginal distribution. To account for this, the first few samples (typically on the order of 500-1000) are discarded. This is known as burn-in.
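In code, burn-in is just a slice that drops the initial draws. The list below is a stand-in for the output of an actual Gibbs sampler run; the variable names are illustrative.

```python
# `samples` stands in for the full list of draws from a Gibbs sampler run.
samples = list(range(5000))

burn_in = 1000            # typical choice: a few hundred to a thousand draws
kept = samples[burn_in:]  # only post-burn-in draws estimate the marginal
```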

References

1. S. Geman and D. Geman. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984.

2. G. Casella and E. I. George. Explaining the Gibbs Sampler. http://biostat.jhsph.edu/~mmccall/articles/casella_1992.pdf

3. Trevor Hastie, Robert Tibshirani, Jerome Friedman. The Elements of Statistical Learning.