# Alternating Minimization

## Citation

Information Geometry and Alternating Minimization Procedures, by I. Csiszár and G. Tusnády. In Statistics and Decisions, 1984.

## Summary

Alternating minimization (AM) is used when we want to minimize a function of two variables jointly. Let us call these two variables ${\displaystyle p}$ and ${\displaystyle q}$. The basic premise behind AM is that we hold one variable constant, say ${\displaystyle q}$, and optimize over the other, ${\displaystyle p}$; we then hold ${\displaystyle p}$ constant and optimize over ${\displaystyle q}$. This constitutes one cycle of the optimization procedure. AM is very similar to the EM algorithm, and in fact the original authors showed that EM is a special case of AM.

In recent work, this method was used in Soft-Supervised Text Classification (Subramanya and Bilmes) as the primary means of optimizing the objective function.

## Details

The alternating minimization algorithm attempts to solve a minimization problem of the following form:

Given sets ${\displaystyle {\mathcal {P}},{\mathcal {Q}}}$ and a function ${\displaystyle D:{\mathcal {P}}\times {\mathcal {Q}}\to \mathbb {R} }$, we would like to find ${\displaystyle \min _{(p,q)\in {\mathcal {P}}\times {\mathcal {Q}}}D(p,q)}$.

AM does this by generating a sequence of iterates ${\displaystyle (p^{(n)},q^{(n)})}$ as follows: ${\displaystyle p^{(n)}=\arg \min _{p\in {\mathcal {P}}}D(p,q^{(n-1)})}$, and ${\displaystyle q^{(n)}=\arg \min _{q\in {\mathcal {Q}}}D(p^{(n)},q)}$.
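As a concrete illustration of this scheme (a toy example, not from the paper; all names and values below are invented), take low-rank matrix factorization with ${\displaystyle D(U,V)=\lVert A-UV\rVert _{F}^{2}}$: each half-step is then an ordinary least-squares problem with a closed-form minimizer, so the objective is nonincreasing across half-steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: factor A (m x n) as U V with U (m x k), V (k x n).
m, n, k = 20, 15, 3
A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))  # exactly rank-k target

U = rng.normal(size=(m, k))
V = rng.normal(size=(k, n))

for it in range(50):
    # p-step: U = argmin_U D(U, V), a least-squares solve with V fixed
    U = np.linalg.lstsq(V.T, A.T, rcond=None)[0].T
    # q-step: V = argmin_V D(U, V), a least-squares solve with U fixed
    V = np.linalg.lstsq(U, A, rcond=None)[0]

print(np.linalg.norm(A - U @ V) ** 2)  # objective cannot increase at any half-step
```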

In order for AM to converge to the correct solution, certain conditions must be satisfied. These conditions can be encapsulated in a property known as the five-point property. A theorem in the original paper states the conditions for convergence:

Theorem (Convergence of AM)

If ${\displaystyle p^{(n)}=\arg \min _{p\in \Delta ^{m}}D(p,q^{(n-1)}),\ q^{(n)}=\arg \min _{q\in \Delta ^{m}}D(p^{(n)},q)}$ and ${\displaystyle q_{i}^{(0)}(y)>0\quad \forall y\in Y,\forall i}$, then

• ${\displaystyle D(p,q)+D(p,p^{(0)})\geq D(p,q^{(1)})+D(p^{(1)},q^{(1)})\quad \forall p,q\in \Delta ^{m}}$, and
• ${\displaystyle \lim _{n\to \infty }D(p^{(n)},q^{(n)})=\inf _{p,q\in \Delta ^{m}}D(p,q)}$

where ${\displaystyle \Delta ^{m}}$ denotes the set of length-${\displaystyle m}$ vectors whose components each lie in the ${\displaystyle |Y|}$-dimensional probability simplex, denoted ${\displaystyle \Delta }$. The first point is the five-point property, and the second is the convergence statement.
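Note that monotone decrease of the objective already follows from the definitions of the iterates alone: ${\displaystyle D(p^{(n)},q^{(n)})\leq D(p^{(n)},q^{(n-1)})\leq D(p^{(n-1)},q^{(n-1)})}$, since each half-step minimizes ${\displaystyle D}$ with the other argument held fixed. The role of the five-point property is to guarantee that this nonincreasing sequence converges to the global infimum rather than to some larger limit.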

## Example

As an example, let us look at what Subramanya and Bilmes did in their Soft-Supervised Text Classification paper. For them, ${\displaystyle (p^{(n)},q^{(n)})}$ is a sequence of sets of distributions, i.e., each iterate ${\displaystyle p^{(n)}}$ is a set of distributions, one for each vertex in the graph. Their original formulation of the objective was:

${\displaystyle \min _{p}C_{1}(p)}$, where ${\displaystyle C_{1}(p)=\left[\sum _{i=1}^{l}D_{KL}(r_{i}\parallel p_{i})+\mu \sum _{i=1}^{n}\sum _{j\in {\mathcal {N}}(i)}w_{ij}D_{KL}(p_{i}\parallel p_{j})-\nu \sum _{i=1}^{n}H(p_{i})\right]}$.
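To make the objective concrete, here is a sketch of how ${\displaystyle C_{1}}$ could be evaluated (the array shapes, names, and ${\displaystyle \epsilon }$ smoothing are assumptions for illustration; taking ${\displaystyle w_{ij}=0}$ for ${\displaystyle j\notin {\mathcal {N}}(i)}$ lets a dense matrix encode the neighborhood sums):

```python
import numpy as np

def kl(a, b, eps=1e-12):
    """Row-wise D_KL(a || b) for row-stochastic arrays (broadcasts)."""
    return np.sum(a * (np.log(a + eps) - np.log(b + eps)), axis=-1)

def entropy(p, eps=1e-12):
    """Row-wise Shannon entropy H(p)."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def C1(p, r, W, mu, nu):
    """p: (n, |Y|), one distribution per vertex; r: (l, |Y|) label priors
    for the labeled vertices; W: (n, n) weights, zero off-neighborhood."""
    l = r.shape[0]
    labeled = kl(r, p[:l]).sum()                                # fit to labels
    smooth = mu * np.sum(W * kl(p[:, None, :], p[None, :, :]))  # graph term
    return labeled + smooth - nu * entropy(p).sum()             # entropy term
```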

In order to use alternating minimization, we introduce a second collection of distributions over the vertices (apart from ${\displaystyle p}$), called ${\displaystyle q}$. The new objective reads:

${\displaystyle \min _{p,q}C_{2}(p,q)}$, where ${\displaystyle C_{2}(p,q)=\left[\sum _{i=1}^{l}D_{KL}(r_{i}\parallel q_{i})+\mu \sum _{i=1}^{n}\sum _{j\in {\mathcal {N}}(i)}w'_{ij}D_{KL}(p_{i}\parallel q_{j})-\nu \sum _{i=1}^{n}H(p_{i})\right]}$, and where we define a new similarity matrix ${\displaystyle W'=W+\alpha \mathbf {I} _{n}}$, with ${\displaystyle \alpha \geq 0}$.

We thus have two distributions defined for every training point, and the optimization procedure encourages the two to approach each other. Note that the entropy regularizer (the third term) is defined on a different set of distributions than the first term, which aims to respect the labeled training data. It is this choice that makes the update equations for ${\displaystyle p^{(n)}}$ and ${\displaystyle q^{(n)}}$ relatively simple.

Assuming we start with a properly initialized distribution (in particular, ${\displaystyle q_{i}^{(0)}(y)>0}$ for all ${\displaystyle i}$ and ${\displaystyle y}$, as the theorem requires), we can express the update equations as:

${\displaystyle p_{i}^{(n)}(y)={\frac {1}{{\mathcal {Z}}_{i}}}\exp \left\{{\frac {\beta _{i}^{(n-1)}(y)}{\gamma _{i}}}\right\}}$, and ${\displaystyle q_{i}^{(n)}(y)={\frac {r_{i}(y)\delta (i\leq l)+\mu \sum _{j}w'_{ji}p_{j}^{(n)}(y)}{\delta (i\leq l)+\mu \sum _{j}w'_{ji}}}}$,

where ${\displaystyle \gamma _{i}=\nu +\mu \sum _{j}w'_{ij}}$ and ${\displaystyle \beta _{i}^{(n-1)}(y)=-\nu +\mu \sum _{j}w'_{ij}(\log q_{j}^{(n-1)}(y)-1)}$.
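A minimal sketch of one AM cycle implementing these two updates (the array layout, function name, and ${\displaystyle \epsilon }$ smoothing inside the logarithm are assumptions for illustration; the theorem's condition ${\displaystyle q_{i}^{(0)}(y)>0}$ must hold for the logarithm to be defined):

```python
import numpy as np

def am_step(q, r, Wp, mu, nu, l, eps=1e-12):
    """One alternating-minimization cycle for C_2.

    q:  (n, |Y|) previous iterate q^(n-1), strictly positive rows
    r:  (l, |Y|) label distributions for the first l (labeled) vertices
    Wp: (n, n) similarity matrix W' = W + alpha * I
    Returns (p^(n), q^(n)).
    """
    n = q.shape[0]
    gamma = nu + mu * Wp.sum(axis=1)                # gamma_i = nu + mu sum_j w'_ij
    beta = -nu + mu * Wp @ (np.log(q + eps) - 1.0)  # beta_i(y)
    logits = beta / gamma[:, None]
    logits -= logits.max(axis=1, keepdims=True)     # stabilize the exponential
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)               # normalize by Z_i

    delta = (np.arange(n) < l).astype(float)        # delta(i <= l)
    num = mu * (Wp.T @ p)                           # mu sum_j w'_ji p_j(y)
    num[:l] += r                                    # + r_i(y) on labeled vertices
    q_new = num / (delta + mu * Wp.sum(axis=0))[:, None]
    return p, q_new
```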

The derivations of these updates follow from the definitions of ${\displaystyle p^{(n)}}$ and ${\displaystyle q^{(n)}}$ given above: each ${\displaystyle \arg \min }$ is solved in closed form by introducing Lagrange multipliers for the simplex constraints.
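For instance, the step above could be iterated on a small synthetic graph (every value here is invented purely for illustration):

```python
rng = np.random.default_rng(0)
n, l, Y, mu, nu, alpha = 6, 2, 3, 1.0, 0.1, 1.0

W = rng.random((n, n))
W = (W + W.T) / 2               # symmetric similarity weights
Wp = W + alpha * np.eye(n)      # W' = W + alpha * I
r = np.eye(Y)[[0, 1]]           # one-hot labels for the l labeled vertices
q = np.full((n, Y), 1.0 / Y)    # uniform q^(0): strictly positive

for _ in range(100):
    p, q = am_step(q, r, Wp, mu, nu, l)
```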