# Markov Logic Networks

A Markov logic network (MLN) combines first-order logic with probabilistic graphical models. In first-order logic, a set of formulas represents hard constraints over a set of instances: an instance that violates even one formula has zero probability. The basic idea of an MLN is to soften these hard constraints by attaching a real number (the weight) to each formula, indicating how strong a constraint it is, so that an instance violating one or more formulas is no longer impossible, just less probable.

## Definition

A Markov logic network $L$ is a set of pairs $(F_i, w_i)$, where $F_i$ is a formula in first-order logic and $w_i$ is a real number. Together with a finite set of constants $C = \{c_1, c_2, \ldots\}$, $L$ defines a Markov network $M_{L,C}$ in the following way:

• $M_{L,C}$ contains one binary node for each possible grounding of each predicate appearing in $L$. The value of the node is 1 if the ground atom is true, and 0 otherwise.
• $M_{L,C}$ contains one feature for each possible grounding of each formula $F_i$ in $L$. The value of the feature is 1 if the ground formula is true, and 0 otherwise; the weight of the feature is $w_i$.
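The grounding step above can be sketched in code. This is a minimal illustration, assuming a toy MLN with the single hypothetical formula Smokes(x) → Cancer(x), weight 1.5, and constants {Anna, Bob}; the names are invented for this example and do not come from any MLN library.

```python
constants = ["Anna", "Bob"]
predicates = ["Smokes", "Cancer"]  # both unary in this toy example

# One binary node per ground atom: Smokes(Anna), Smokes(Bob), Cancer(Anna), Cancer(Bob)
ground_atoms = [f"{p}({c})" for p in predicates for c in constants]

# One feature per grounding of the formula Smokes(c) -> Cancer(c), each with weight w
w = 1.5
features = [((f"Smokes({c})", f"Cancer({c})"), w) for c in constants]

def feature_value(world, smokes_atom, cancer_atom):
    """1 if the ground implication holds in `world` (a dict atom -> 0/1), else 0."""
    return 0 if world[smokes_atom] and not world[cancer_atom] else 1
```

With two constants and two unary predicates this yields four ground atoms and two ground features, one per grounding of the formula.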

In other words, the Markov network has an edge between two nodes if and only if the corresponding ground atoms appear together in at least one grounding of some formula in $L$. The probability distribution over possible instances $x$ specified by $M_{L,C}$ is then:

$$P(X=x) = \frac{1}{Z} \prod_i \phi_i(x_{\{i\}})$$

where $Z$ is the normalization factor (the partition function), $\phi_i(x_{\{i\}})$ is the potential function defined on the $i$-th clique, which corresponds to $F_i$, and $x_{\{i\}}$ is the vector of discrete variables in that clique. Since $\phi_i(x_{\{i\}})$ equals $e^{w_i}$ when $F_i(x_{\{i\}})$ is true and 1 otherwise, the probability can be written as:

$$P(X=x) = \frac{1}{Z} \exp\left\{ \sum_i w_i n_i(x) \right\}$$

where $n_i(x)$ is the number of true groundings of $F_i$ in $x$.
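For a small domain, this log-linear distribution can be computed exactly by enumerating all possible worlds. The sketch below assumes the same toy formula Smokes(x) → Cancer(x) with weight 1.5 over constants {A, B} (an illustrative assumption, not a standard benchmark), and computes $Z$ by brute force over all $2^4 = 16$ worlds.

```python
import math
from itertools import product

constants = ["A", "B"]
w = 1.5  # weight of the toy formula Smokes(x) -> Cancer(x)
atoms = [f"{p}({c})" for p in ("Smokes", "Cancer") for c in constants]

def n_true_groundings(world):
    """n_i(x): number of true groundings of the implication in `world`."""
    return sum(1 for c in constants
               if not (world[f"Smokes({c})"] and not world[f"Cancer({c})"]))

def unnormalized(world):
    """exp(w * n_i(x)) for the single-formula MLN."""
    return math.exp(w * n_true_groundings(world))

# Partition function Z: sum over all 2^4 = 16 possible worlds
worlds = [dict(zip(atoms, bits)) for bits in product([0, 1], repeat=len(atoms))]
Z = sum(unnormalized(x) for x in worlds)

def probability(world):
    return unnormalized(world) / Z
```

A world that violates a grounding of the formula (e.g. Smokes(A) true but Cancer(A) false) still has nonzero probability, just a lower one than an otherwise identical world that satisfies it, which is exactly the "softened constraint" behavior described above.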

## Relationship to Markov Networks

MLNs are special cases of Markov networks in which the vertices of the network graph are atomic formulas and the edges come from the logical connectives used to construct each formula. Each formula is treated as a clique, and the Markov blanket of a given atom is the set of formulas in which that atom appears. A potential function is associated with each formula and is combined with its weight to form the Gibbs measure and partition function of the Markov network.

Inference in MLNs can therefore be performed using standard Markov network techniques, such as Gibbs sampling, belief propagation, and approximation via pseudolikelihood.