Forward-Backward
== Summary ==
 
This is a dynamic programming [[Category::method | algorithm]], used in [[AddressesProblem::Hidden Markov Model | Hidden Markov Models]] to efficiently compute the state posteriors over all the hidden state variables.

These values are then used in [[Posterior Decoding]], which simply chooses the state with the highest state posterior for each position in the sequence.

The forward-backward algorithm runs in time linear in the length of the sequence, whereas a brute-force algorithm that checks all possible state sequences would take time exponential in the sequence length.

== Problem formulation ==

[[Posterior Decoding]] is one of several ways to find the best hidden state sequence <math>\hat{y}</math>. It consists of picking, for each position in the sequence, the state with the highest state posterior:

<math>
\hat{y}_i = \arg\max_{y_i} \gamma_i(y_i), \quad i = 1,\dots,N
</math>

where <math>\gamma_i</math> denotes the state posterior for position <math>i</math>. The state posterior is given by:

<math>
\gamma_i(s_l) = P_{\theta}(y_i=s_l|\bar{x})
</math>

where <math>P_{\theta}(y_i=s_l|\bar{x})</math> is the sum of the sequence posteriors of all state sequences in which position <math>i</math> is in state <math>s_l</math>.

The sequence posterior <math>P_{\theta}(\bar{y} | \bar{x})</math> for a given sequence <math>\bar{y}</math> is defined as:

<math>
P_{\theta}(\bar{y} | \bar{x}) = \frac{P_{\theta}(\bar{x}, \bar{y})}{\sum_{\bar{y}} P_{\theta}(\bar{x}, \bar{y})}
</math>

<math>P_{\theta}(\bar{x}, \bar{y})</math> is calculated as the product of the node potentials <math>\phi_i(s_l)</math> of the nodes in the sequence, corresponding to the observation parameters for state <math>s_l</math>, and the transition potentials <math>\phi_i(s_l,s_m)</math> of the transitions in the sequence, corresponding to the transition parameters. These potentials are estimated during the training process of [[AddressesProblem::Hidden Markov Model | Hidden Markov Models]].
As an example, the sequence in red in the following figure would be calculated as <math>\phi_1(r)\phi_1(r,s)\phi_2(s)\phi_2(s,s)\phi_3(s)\phi_3(s,s)\phi_4(s)</math>.
 
[[File:Sample_potentials.png]]
 
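For concreteness, the joint score <math>P_{\theta}(\bar{x}, \bar{y})</math> of a single sequence is just the product of the node and transition potentials along its path. The following is a minimal Python sketch of that product for a hypothetical two-state chain; the potential values, array names and 0-based indexing are made-up assumptions for illustration, not values taken from the figure.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical node potentials phi_i(s_l): row i holds position i+1, columns are the states (r, s).
phi_node = np.array([[0.6, 0.4],
                     [0.3, 0.7],
                     [0.5, 0.5],
                     [0.2, 0.8]])
# Hypothetical transition potentials phi_i(s_l, s_m) between consecutive positions.
phi_trans = np.full((3, 2, 2), 0.5)

R, S = 0, 1                  # assumed state indices for r and s
path = [R, S, S, S]          # a path like the red sequence r s s s

# P(x, y) = phi_1(r) * phi_1(r,s) * phi_2(s) * phi_2(s,s) * phi_3(s) * phi_3(s,s) * phi_4(s)
score = phi_node[0, path[0]]
for i in range(1, len(path)):
    score *= phi_trans[i - 1, path[i - 1], path[i]] * phi_node[i, path[i]]
print(score)
</syntaxhighlight>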
Thus, to calculate <math>\gamma_2(r)</math>, for instance, we would need to sum the sequence posteriors of the sequences r r r r, s r r r, r r s r, s r s r, r r r s, s r r s, r r s s and s r s s. A brute-force algorithm would compute each of these sequence posteriors separately, so the number of computations needed to calculate a single state posterior would be in the order of <math>S^{N}</math>, where <math>S</math> denotes the number of possible states and <math>N</math> is the length of the sequence. Under this approach, inference would be intractable for real-life problems due to the exponential growth of the number of possible sequences as <math>N</math> and <math>S</math> increase.
 
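The brute-force computation can be written down directly: enumerate every state sequence, score it by the product of its potentials, and sum the scores of the sequences that have the desired state at the desired position. Below is a minimal sketch, again with made-up toy potentials (all names and values are illustrative assumptions):

<syntaxhighlight lang="python">
import itertools
import numpy as np

# Hypothetical potentials for a 2-state, length-4 chain (illustrative values only).
phi_node = np.array([[0.6, 0.4], [0.3, 0.7], [0.5, 0.5], [0.2, 0.8]])
phi_trans = np.full((3, 2, 2), 0.5)
N, S = phi_node.shape

def joint_score(path):
    """Product of the node and transition potentials along one state sequence."""
    score = phi_node[0, path[0]]
    for i in range(1, N):
        score *= phi_trans[i - 1, path[i - 1], path[i]] * phi_node[i, path[i]]
    return score

# gamma_2(r): sum of the sequence posteriors of all sequences with state r at position 2.
# This enumerates all S^N sequences, which is exactly what makes brute force intractable.
Z = sum(joint_score(p) for p in itertools.product(range(S), repeat=N))
gamma_2_r = sum(joint_score(p)
                for p in itertools.product(range(S), repeat=N)
                if p[1] == 0) / Z        # position 2 -> index 1, state r -> index 0
print(gamma_2_r)
</syntaxhighlight>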
== Forward-backward ==
The Forward-backward algorithm is a dynamic programming algorithm that can compute <math>\gamma</math> for all positions and states in linear time, by exploiting the fact that each state depends only on the previous state.

It iterates through each node and transition once to build the forward probabilities <math>\alpha</math> and another time to build the backward probabilities <math>\beta</math>.

The forward probability represents the probability that, in position <math>i</math>, we are in state <math>y_i = s_l</math> and that we have observed <math>\bar{x}</math> up to that position. The forward probabilities are defined by the following recurrence:

<math>
\alpha_1(s_l) = \phi_1(s_l)
</math>

<math>
\alpha_i(s_l) = \phi_i(s_l)\sum_{s_m\in{S}}\phi_{i-1}(s_m,s_l)\alpha_{i-1}(s_m)
</math>
 
[[File:forward_backward.png]]
 
We can see in the example that the forward probability <math>\alpha_2(s)</math> corresponds to the sum of the sequence posteriors of the partial sequences that end in state s at position 2 (s s and r s).

The forward probabilities can be used to calculate the sum over all possible sequences <math>\sum_{\bar{y}} P_{\theta}(\bar{x}, \bar{y})</math>, which normalizes the posteriors:

<math>
\sum_{\bar{y}} P_{\theta}(\bar{x}, \bar{y}) = \sum_{s_k\in S} \alpha_N(s_k)
</math>
 
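Translating the forward recurrence and this normalization into code is direct. Below is a minimal sketch, reusing the same made-up toy potentials as in the earlier snippets (array names, values and 0-based indexing are assumptions for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical toy potentials (illustrative values only).
phi_node = np.array([[0.6, 0.4], [0.3, 0.7], [0.5, 0.5], [0.2, 0.8]])
phi_trans = np.full((3, 2, 2), 0.5)
N, S = phi_node.shape

alpha = np.zeros((N, S))
alpha[0] = phi_node[0]                   # alpha_1(s_l) = phi_1(s_l)
for i in range(1, N):
    # alpha_i(s_l) = phi_i(s_l) * sum_m phi_{i-1}(s_m, s_l) * alpha_{i-1}(s_m)
    alpha[i] = phi_node[i] * (phi_trans[i - 1].T @ alpha[i - 1])

Z = alpha[-1].sum()                      # sum of P(x, y) over all state sequences
print(alpha)
print(Z)
</syntaxhighlight>

For longer sequences one would typically rescale each <math>\alpha_i</math> or work in log-space to avoid numerical underflow.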
To calculate the state posteriors and transition posteriors, we also need to calculate the backward probabilities, which are built in the reverse direction:

<math>
\beta_N(s_l) = 1
</math>

<math>
\beta_i(s_l) = \sum_{s_m\in{S}}\phi_i(s_l,s_m)\phi_{i+1}(s_m)\beta_{i+1}(s_m)
</math>

Note that the backward probability is defined in a slightly different way: the backward probability for a given position does not include the node potential for that position, as can be seen in the example above.
 
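The backward recurrence can be transcribed in the same way, iterating from the last position to the first. A minimal sketch with the same assumed toy potentials:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical toy potentials (illustrative values only).
phi_node = np.array([[0.6, 0.4], [0.3, 0.7], [0.5, 0.5], [0.2, 0.8]])
phi_trans = np.full((3, 2, 2), 0.5)
N, S = phi_node.shape

beta = np.zeros((N, S))
beta[-1] = 1.0                           # beta_N(s_l) = 1
for i in range(N - 2, -1, -1):
    # beta_i(s_l) = sum_m phi_i(s_l, s_m) * phi_{i+1}(s_m) * beta_{i+1}(s_m)
    beta[i] = phi_trans[i] @ (phi_node[i + 1] * beta[i + 1])
print(beta)
</syntaxhighlight>

With these definitions, <math>\sum_{s_l} \alpha_i(s_l)\beta_i(s_l)</math> gives the same value for every position <math>i</math> (the normalization constant above), which makes for a convenient sanity check.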
With the forward and backward probabilities we can find the state posterior for any state by simply calculating:

<math>
\gamma_i(s_l) = P_{\theta}(y_i=s_l|\bar{x}) = \frac{\alpha_i(s_l)\beta_i(s_l)}{\sum_{s_k\in S} \alpha_N(s_k)}
</math>

We can also calculate the transition posteriors by computing:

<math>
\zeta_i(s_l,s_m) = P_{\theta}(y_i=s_l,y_{i+1}=s_m|\bar{x}) = \frac{\alpha_i(s_l)\phi_i(s_l,s_m)\phi_{i+1}(s_m)\beta_{i+1}(s_m)}{\sum_{s_k\in S} \alpha_N(s_k)}
</math>
 
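Putting the two passes together, the state and transition posteriors are just the normalized products above. A minimal sketch, with the same assumed toy potentials and 0-based indexing:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical toy potentials (illustrative values only).
phi_node = np.array([[0.6, 0.4], [0.3, 0.7], [0.5, 0.5], [0.2, 0.8]])
phi_trans = np.full((3, 2, 2), 0.5)
N, S = phi_node.shape

# Forward and backward probabilities, as defined above.
alpha = np.zeros((N, S))
alpha[0] = phi_node[0]
for i in range(1, N):
    alpha[i] = phi_node[i] * (phi_trans[i - 1].T @ alpha[i - 1])
beta = np.zeros((N, S))
beta[-1] = 1.0
for i in range(N - 2, -1, -1):
    beta[i] = phi_trans[i] @ (phi_node[i + 1] * beta[i + 1])
Z = alpha[-1].sum()

# State posteriors: gamma_i(s_l) = alpha_i(s_l) * beta_i(s_l) / Z.
gamma = alpha * beta / Z

# Transition posteriors: zeta_i(s_l, s_m).
zeta = np.zeros((N - 1, S, S))
for i in range(N - 1):
    zeta[i] = (alpha[i][:, None] * phi_trans[i]
               * (phi_node[i + 1] * beta[i + 1])[None, :]) / Z

print(gamma)
print(zeta)
</syntaxhighlight>

Each row of <math>\gamma</math> sums to one, and summing <math>\zeta_i</math> over its second argument recovers <math>\gamma_i</math>, which are easy checks on an implementation.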
To summarize, inference using the forward-backward algorithm is done in three steps (a combined sketch is given after the list):

* 1 - Calculate the forward and backward probabilities for all positions and hidden states
* 2 - Calculate the state posteriors for all positions and hidden states
* 3 - Choose, for each position, the state with the highest state posterior
 
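These three steps can be wrapped into a single function. The sketch below is one possible arrangement under the same assumptions as the earlier snippets (NumPy arrays of potentials with made-up toy values); it is illustrative rather than a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

def posterior_decode(phi_node, phi_trans):
    """Forward-backward followed by a per-position argmax (posterior decoding)."""
    N, S = phi_node.shape
    # Step 1: forward and backward probabilities.
    alpha = np.zeros((N, S))
    alpha[0] = phi_node[0]
    for i in range(1, N):
        alpha[i] = phi_node[i] * (phi_trans[i - 1].T @ alpha[i - 1])
    beta = np.zeros((N, S))
    beta[-1] = 1.0
    for i in range(N - 2, -1, -1):
        beta[i] = phi_trans[i] @ (phi_node[i + 1] * beta[i + 1])
    # Step 2: state posteriors.
    gamma = alpha * beta / alpha[-1].sum()
    # Step 3: for each position, pick the state with the highest posterior.
    return gamma.argmax(axis=1), gamma

# Hypothetical toy potentials (illustrative values only).
phi_node = np.array([[0.6, 0.4], [0.3, 0.7], [0.5, 0.5], [0.2, 0.8]])
phi_trans = np.full((3, 2, 2), 0.5)
best_states, gamma = posterior_decode(phi_node, phi_trans)
print(best_states)
print(gamma)
</syntaxhighlight>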
The most expensive task in this algorithm is calculating the forward and backward probabilities, since it has to iterate through all edges between nodes. However, the number of edges in a Hidden Markov Model is in the order of <math>S^2N</math>, which is tractable and grows only linearly with the length of the sequence.
 
== Related Concepts ==
The [[Inside-outside]] algorithm is a generalization of the Forward-backward algorithm.

The [[Forward-backward]] algorithm is used for [[Posterior Decoding]] in a [[HMM | Hidden Markov Model]], where for each position the state with the highest state posterior is chosen. An alternative to this is [[Viterbi Decoding]], where the sequence with the highest sequence posterior is chosen instead. In this case, the [[Viterbi]] algorithm is used.

The [[Forward-backward]] algorithm is also used in the [[Baum-Welch]] algorithm, which is used to estimate the unknown parameters of a [[Hidden Markov Model]].
