Belief Propagation
This method was proposed by Judea Pearl in 1982: "Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach," AAAI 1982.
Belief Propagation is a message-passing inference method for statistical graphical models (e.g., Bayesian networks and Markov random fields). The basic idea is to compute the marginal distributions of the unobserved nodes, conditioned on the observed nodes. There are two major cases:
- When the graphical model is a tree-structured factor graph (no loops), exact marginals can be obtained. This is equivalent to dynamic programming, with the Viterbi algorithm as the max-product special case on chains; a sketch of the tree case appears after this list.
- Otherwise, loopy Belief Propagation serves as an approximate inference algorithm.
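For concreteness, here is a minimal sketch of the sum-product variant on a chain-structured MRF (the simplest tree), not a general implementation: the unary potentials <code>phi</code>, the pairwise potential <code>psi</code>, and the function name <code>chain_bp_marginals</code> are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def chain_bp_marginals(phi, psi):
    """Sum-product message passing on a chain MRF.

    phi: (N, M) array of unary potentials phi_i(x_i) -- assumed input format.
    psi: (M, M) array of pairwise potentials psi(x_i, x_{i+1}), shared by all edges.
    Returns the (N, M) array of exact node marginals.
    """
    N, M = phi.shape
    # Forward messages: fwd[i] is the message sent from node i-1 to node i.
    fwd = np.ones((N, M))
    for i in range(1, N):
        fwd[i] = (fwd[i - 1] * phi[i - 1]) @ psi
        fwd[i] /= fwd[i].sum()          # normalize for numerical stability
    # Backward messages: bwd[i] is the message sent from node i+1 to node i.
    bwd = np.ones((N, M))
    for i in range(N - 2, -1, -1):
        bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])
        bwd[i] /= bwd[i].sum()
    # Belief at each node = unary potential times both incoming messages, normalized.
    beliefs = phi * fwd * bwd
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Example: 4 binary variables with an attractive pairwise coupling.
phi = np.array([[0.9, 0.1], [0.5, 0.5], [0.5, 0.5], [0.2, 0.8]])
psi = np.array([[2.0, 1.0], [1.0, 2.0]])
print(chain_bp_marginals(phi, psi))
</syntaxhighlight>

On a tree, two sweeps (leaves to root and back) suffice for exact marginals; on a loopy graph, the same updates are iterated until the messages (hopefully) converge, which is why the result is only approximate there.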
Motivation: Marginals vs. Joint Maximizer
To compute marginals, we need to find:
: <math>P(x_1), P(x_2), P(x_3), \ldots, P(x_N),</math>
whereas to compute the joint maximizer, we need:
: <math>\underset{x_1,x_2,x_3,\ldots,x_N}{\operatorname{argmax}}\ P(x_1,x_2,x_3,\ldots,x_N).</math>
Unfortunately, each random variable <math>X_i</math> may have M possible states, so an exhaustive search over all joint configurations has complexity <math>O(M^N)</math>, which is intractable. As a result, we need better inference algorithms to solve the above problems.
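To see where the <math>O(M^N)</math> cost comes from, the following toy sketch enumerates every joint configuration to recover both the marginals and the joint maximizer; the unnormalized <code>score</code> function and the helper name <code>brute_force</code> are assumptions for illustration only.

<syntaxhighlight lang="python">
import itertools
import numpy as np

def brute_force(score, N, M):
    """Brute-force marginals and joint maximizer by full enumeration."""
    marginals = np.zeros((N, M))
    best, best_x = -np.inf, None
    for x in itertools.product(range(M), repeat=N):   # M**N configurations
        p = score(x)                                   # unnormalized joint
        for i, xi in enumerate(x):
            marginals[i, xi] += p                      # accumulate mass for P(x_i)
        if p > best:
            best, best_x = p, x                        # track the argmax configuration
    return marginals / marginals.sum(axis=1, keepdims=True), best_x

# Example: a chain-like score over N=10 binary variables (2**10 = 1024 terms);
# with N=100 the loop would need 2**100 evaluations, which is hopeless.
score = lambda x: np.exp(sum(1.0 for i in range(len(x) - 1) if x[i] == x[i + 1]))
print(brute_force(score, N=10, M=2))
</syntaxhighlight>

Belief Propagation avoids this blow-up by exploiting the factorization of the joint distribution, reducing the work on a tree to a number of local message updates that is linear in the number of edges.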
Problem Formulation
In a generalized Markov random field, the