Heckerman, JMLR 2000
Citation
Dependency Networks for Inference, Collaborative Filtering, and Data Visualization. David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, Carl Kadie; in JMLR, 1(Oct):49-75, 2000
Online version
Summary
In this paper, the authors describe a graphical model for probabilistic relationships called a dependency network, an alternative to the Bayesian network. Unlike a Bayesian network, a dependency network is potentially cyclic. Dependency networks are well suited to tasks such as predicting preferences in collaborative filtering, but they are not well suited to encoding causal relationships.
Brief description of the method
- Consistent Dependency Network
Given a domain of interest having a set of finite variables <math>X = (X_1, \ldots, X_n)</math> with a positive joint distribution <math>p(x)</math>, a consistent dependency network for <math>X</math> is a pair <math>(G, P)</math> where <math>G</math> is a cyclic directed graph and <math>P</math> is a set of conditional probability distributions. The parents of node <math>X_i</math>, denoted <math>Pa_i</math>, correspond to those variables <math>Pa_i \subseteq (X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n)</math> that satisfy:

<math>p(x_i|pa_i) = p(x_i | x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) = p(x_i | x \setminus x_i)</math>
The dependency network is consistent in the sense that each local distribution can be obtained via inference from the joint distribution <math>p(x)</math>.
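As an illustration of this definition (not from the paper), the sketch below builds a small, made-up positive joint distribution over three binary variables, computes each full conditional <math>p(x_i | x \setminus x_i)</math> by brute-force marginalization, and searches for the smallest parent set whose conditional matches it. The variable names and probabilities are hypothetical.

<pre>
import itertools
from collections import defaultdict

# Hypothetical positive joint over three binary variables, built from a made-up
# chain X1 -> X2 -> X3 (so X1 and X3 are conditionally independent given X2).
p_x1 = {0: 0.4, 1: 0.6}
p_x2_given_x1 = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_x3_given_x2 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}
joint = {(x1, x2, x3): p_x1[x1] * p_x2_given_x1[x1][x2] * p_x3_given_x2[x2][x3]
         for x1, x2, x3 in itertools.product([0, 1], repeat=3)}

def cond_prob(joint, target, given):
    """Distribution of X_target given an assignment to some other variables,
    obtained by brute-force marginalization of the joint table."""
    num, den = defaultdict(float), 0.0
    for x, p in joint.items():
        if all(x[j] == v for j, v in given.items()):
            num[x[target]] += p
            den += p
    return {v: q / den for v, q in num.items()}

def matches_full_conditional(joint, i, pa, others, full, tol=1e-9):
    """Check that conditioning X_i on the candidate parents pa gives the same
    distribution as conditioning on all of the other variables."""
    for cfg, target_dist in full.items():
        given = dict(zip(others, cfg))
        reduced = cond_prob(joint, i, {j: given[j] for j in pa})
        if any(abs(reduced[v] - target_dist[v]) > tol for v in (0, 1)):
            return False
    return True

def minimal_parents(joint, i, n=3):
    """Smallest parent set Pa_i satisfying the defining equality above."""
    others = [j for j in range(n) if j != i]
    full = {cfg: cond_prob(joint, i, dict(zip(others, cfg)))
            for cfg in itertools.product([0, 1], repeat=len(others))}
    for size in range(len(others) + 1):
        for pa in itertools.combinations(others, size):
            if matches_full_conditional(joint, i, pa, others, full):
                return set(pa)

for i in range(3):
    print(f"Pa_{i + 1} =", {f"X{j + 1}" for j in minimal_parents(joint, i)})
# With these numbers: Pa_1 = {X2}, Pa_2 = {X1, X3}, Pa_3 = {X2}
</pre>

In this example the parent relation comes out symmetric (X1 and X2 point at each other, as do X2 and X3), which illustrates why the directed graph of a dependency network is typically cyclic.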
This inference can be performed by converting the dependency network to a Markov network, triangulating that network, and then applying one of the standard algorithms for probabilistic inference.
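For a domain as small as the example above, such inference can also be done by direct enumeration; the sketch below (again with made-up numbers) answers conditional queries by summing the joint table, as a brute-force stand-in for the Markov-network conversion, triangulation, and junction-tree steps that would be used on realistic problems.

<pre>
import itertools
from collections import defaultdict

# Same made-up joint over three binary variables as in the previous sketch.
p_x1 = {0: 0.4, 1: 0.6}
p_x2_given_x1 = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_x3_given_x2 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}
joint = {(x1, x2, x3): p_x1[x1] * p_x2_given_x1[x1][x2] * p_x3_given_x2[x2][x3]
         for x1, x2, x3 in itertools.product([0, 1], repeat=3)}

def query(joint, target, evidence):
    """Distribution of X_target given the evidence, by summing the joint over all
    configurations consistent with the evidence. This enumeration is exponential
    in the number of variables; a junction tree over the triangulated Markov
    network avoids that cost by exploiting the factorization."""
    num, den = defaultdict(float), 0.0
    for x, p in joint.items():
        if all(x[j] == v for j, v in evidence.items()):
            num[x[target]] += p
            den += p
    return {v: q / den for v, q in num.items()}

# A local distribution of the dependency network: p(x_2 | x_1 = 1, x_3 = 0).
print(query(joint, target=1, evidence={0: 1, 2: 0}))
# An arbitrary query: p(x_1 | x_3 = 1).
print(query(joint, target=0, evidence={2: 1}))
</pre>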
- General Dependency Network
Given a domain of interest having a set of finite variables <math>X = (X_1, \ldots, X_n)</math>, let <math>P = (p_1(x_1|x \setminus x_1), \ldots, p_n(x_n|x \setminus x_n))</math> be a set of conditional distributions, one for each variable in <math>X</math>. A dependency network for <math>X</math> and <math>P</math> is a pair <math>(G, P')</math> where <math>G</math> is a (usually cyclic) directed graph and <math>P'</math> is a set of conditional probability distributions satisfying

<math>p_i(x_i|pa_i) = p_i(x_i|x \setminus x_i)</math>
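A minimal sketch of how such a pair <math>(G, P')</math> might be represented (the structure and numbers below are hypothetical, not from the paper): each local distribution reads only its parents' values, so the defining equality holds by construction, but because the local distributions are specified independently (e.g., each learned separately from data) nothing forces them to be consistent with a single joint distribution <math>p(x)</math>.

<pre>
# Hypothetical general dependency network over three binary variables.
# Each entry: variable -> (parents, table mapping parent configuration to p(x_i = 1 | pa_i)).
dependency_network = {
    "X1": (("X2",),      {(0,): 0.30, (1,): 0.70}),
    "X2": (("X1", "X3"), {(0, 0): 0.20, (0, 1): 0.60, (1, 0): 0.40, (1, 1): 0.90}),
    "X3": (("X2",),      {(0,): 0.10, (1,): 0.50}),
}

def local_prob(dn, var, value, full_config):
    """p_i(x_i = value | all other variables): only the parent values in
    full_config are read, so conditioning on the parents and conditioning on
    everything else give the same answer by construction."""
    parents, table = dn[var]
    pa = tuple(full_config[p] for p in parents)
    p_one = table[pa]
    return p_one if value == 1 else 1.0 - p_one

config = {"X1": 0, "X2": 0, "X3": 1}
print(local_prob(dependency_network, "X2", 1, config))  # 0.6 -- reads only X1 and X3
</pre>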