Markov Logic Networks

This is a method that combines first-order logic and probabilistic graphical models. In first-order logic, a set of formulas represents hard constraints over a set of instances: if an instance violates one of them, it has zero probability. The basic idea of a Markov Logic Network (MLN) is to generalize first-order logic by softening those hard constraints, assigning a real number (the weight) to each formula to indicate how strong a constraint it is, so that an instance that violates one or more formulas is no longer impossible, just less probable.

Definition

A Markov Logic Network $L$ is a set of pairs $(F_i, w_i)$ where $F_i$ is a formula in first-order logic and $w_i$ is a real number. Given a finite set of constants $C = \{c_1, c_2, \ldots, c_{|C|}\}$, $L$ defines together with $C$ a Markov Network $M_{L,C}$ in the following way:

  • $M_{L,C}$ contains one binary node for each possible grounding of each predicate in $L$. If the ground atom is true, the value of the node is 1, and 0 otherwise.
  • $M_{L,C}$ contains one feature for each possible grounding of each formula $F_i$ in $L$. If the ground formula is true, the value of the feature is 1, and 0 otherwise; and the weight of the feature is $w_i$.

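To make this construction concrete, here is a small sketch of the grounding step for a hypothetical toy MLN. The constants (Anna, Bob), the predicates (Smokes, Friends), the single formula Friends(x,y) ∧ Smokes(x) ⇒ Smokes(y), and its weight of 1.5 are assumptions made purely for illustration; they do not come from the definition above.

```python
from itertools import product

# Illustrative toy MLN (assumed, not from the page):
# constants Anna and Bob, predicates Smokes/1 and Friends/2,
# one formula Friends(x,y) ^ Smokes(x) => Smokes(y) with weight 1.5.
constants = ["Anna", "Bob"]
predicates = {"Smokes": 1, "Friends": 2}   # predicate name -> arity
w = 1.5                                    # weight of the single formula

# One binary node per possible grounding of each predicate.
ground_atoms = [
    (pred, args)
    for pred, arity in predicates.items()
    for args in product(constants, repeat=arity)
]
print(len(ground_atoms))  # 2 Smokes atoms + 4 Friends atoms = 6 nodes

# One feature per possible grounding of the formula, i.e. one per (x, y) pair.
def feature(world, x, y):
    """1 if Friends(x,y) ^ Smokes(x) => Smokes(y) holds in the world, else 0."""
    premise = world[("Friends", (x, y))] and world[("Smokes", (x,))]
    return 1 if (not premise) or world[("Smokes", (y,))] else 0

ground_formulas = list(product(constants, repeat=2))
print(len(ground_formulas))  # 4 ground formulas, each carrying weight w = 1.5

# Example world: everyone smokes and everyone is friends, so all features are 1.
world = {atom: True for atom in ground_atoms}
print(sum(feature(world, x, y) for x, y in ground_formulas))  # 4
```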
In other words, the Markov network has an edge between two nodes if and only if the corresponding ground atoms appear together in at least one grounding of one formula in $L$. Then, the probability distribution over possible instances $x$ specified by $M_{L,C}$ is:

$$P(X = x) = \frac{1}{Z} \prod_k \phi_k(x_{\{k\}})$$

where $Z$ is the normalization factor, $\phi_k$ is the potential function defined on the $k$-th clique, which is related to a formula $F_i$, and $x_{\{k\}}$ is the discrete variable vector in that clique. Given that $\phi_k(x_{\{k\}})$ equals $e^{w_i}$ when $F_i$ is true and 1 otherwise, we can represent the probability as:

$$P(X = x) = \frac{1}{Z} \exp\left(\sum_i w_i n_i(x)\right)$$

where $n_i(x)$ is the number of true groundings of $F_i$ in $x$.
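As a sanity check on the log-linear form above, the sketch below computes $Z$ and $P(X = x)$ by brute force for the same hypothetical toy MLN used in the grounding sketch (constants Anna and Bob, one formula Friends(x,y) ∧ Smokes(x) ⇒ Smokes(y) with an assumed weight of 1.5). The exhaustive enumeration is only feasible because the domain is tiny; practical MLN systems avoid it.

```python
import math
from itertools import product

# Toy MLN (assumed for illustration): same domain and formula as above.
constants = ["Anna", "Bob"]
atoms = [("Smokes", (c,)) for c in constants] + \
        [("Friends", (a, b)) for a in constants for b in constants]
w = 1.5  # weight of Friends(x,y) ^ Smokes(x) => Smokes(y)

def n_true(world):
    """n_i(x): number of true groundings of the formula in world x."""
    count = 0
    for x, y in product(constants, repeat=2):
        premise = world[("Friends", (x, y))] and world[("Smokes", (x,))]
        count += int((not premise) or world[("Smokes", (y,))])
    return count

# Partition function Z: sum of exp(w * n_i(x)) over all 2^6 = 64 possible worlds.
Z = sum(
    math.exp(w * n_true(dict(zip(atoms, values))))
    for values in product([False, True], repeat=len(atoms))
)

# Probability of the world where everyone smokes and everyone is friends
# (all 4 groundings of the formula are satisfied there).
x = {atom: True for atom in atoms}
print(math.exp(w * n_true(x)) / Z)
```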

Relevant Papers