# L-BFGS



### Citation

On the Broyden-Fletcher-Goldfarb-Shanno Method, by D.F. Shanno. Journal of Optimization Theory and Applications, 1985.

### Summary

The L-BFGS algorithm is an optimization method in the family of techniques known as "quasi-Newton" methods. The "L" stands for "Limited" (in the limited-memory sense, not that the method is limited in capability), and BFGS abbreviates the authors of the original (non-limited-memory) algorithm: Broyden, Fletcher, Goldfarb, and Shanno.

In general, whereas methods like the cutting-plane algorithm, which belong to the category of "bundle methods", lower-bound the objective function using linear approximations built from gradients, quasi-Newton algorithms use the gradients to estimate the Hessian (the square matrix of second-order partial derivatives; think of it as the matrix generalization of the second derivative). With the gradient and an approximate Hessian, we can build a quadratic approximation of the objective function.

### Details

The standard BFGS approximates the objective function with a local quadratic model: ${\displaystyle m_{t}(w)=J(w_{t})+\langle \nabla J(w_{t}),w-w_{t}\rangle +{\frac {1}{2}}(w-w_{t})^{T}H_{t}(w-w_{t})}$ where ${\displaystyle J(w_{t})}$ is the objective function, ${\displaystyle \nabla J(w_{t})}$ is the gradient, and ${\displaystyle H_{t}}$ is the Hessian.
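As a quick numeric check (a hypothetical example, not from the original text): the model ${\displaystyle m_{t}}$ agrees with ${\displaystyle J}$ exactly at ${\displaystyle w_{t}}$, and when ${\displaystyle J}$ is itself quadratic and ${\displaystyle H_{t}}$ is the exact Hessian, the model agrees with ${\displaystyle J}$ everywhere:

```python
import numpy as np

# Hypothetical quadratic objective J(w) = 0.5 w^T A w + b^T w, so the
# Hessian is exactly A and the quadratic model is exact.
A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([1.0, -1.0])
J = lambda w: 0.5 * w @ A @ w + b @ w
grad = lambda w: A @ w + b

def model(w, w_t, H):
    """Local quadratic model m_t(w) around w_t with (approximate) Hessian H."""
    d = w - w_t
    return J(w_t) + grad(w_t) @ d + 0.5 * d @ H @ d

w_t = np.array([1.0, 2.0])
w = np.array([-0.5, 3.0])
```

In practice ${\displaystyle H_{t}}$ is only an approximation, so the model is accurate near ${\displaystyle w_{t}}$ rather than globally.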

We update parameters with the weight that minimizes the quadratic approximation: ${\displaystyle w_{t+1}=\arg \min _{w}J(w_{t})+\langle \nabla J(w_{t}),w-w_{t}\rangle +{\frac {1}{2}}(w-w_{t})^{T}H_{t}(w-w_{t})}$

In quasi-Newton methods, we use a Newton update to update the weights; in other words, we use the Hessian and the gradient to compute an update direction: ${\displaystyle w_{t+1}=w_{t}-\eta _{t}H_{t}^{-1}\nabla J(w_{t})}$, where ${\displaystyle \eta _{t}}$ is the step size, usually found by a line search. The line search should satisfy the Wolfe conditions, which intuitively ensure a sufficient decrease in the objective function while respecting a curvature condition (related to convexity).
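To make the step concrete, here is a minimal NumPy sketch of one quasi-Newton update with a backtracking line search that checks both Wolfe conditions. The toy quadratic objective, the constants `c1`/`c2`, and the use of the exact inverse Hessian as `B` are all assumptions for illustration, not part of the original text:

```python
import numpy as np

def wolfe_line_search(J, grad, w, d, c1=1e-4, c2=0.9, eta=1.0,
                      shrink=0.5, max_iter=50):
    """Backtracking search for a step size satisfying the (weak) Wolfe conditions."""
    g0_d = grad(w) @ d                       # directional derivative at w
    for _ in range(max_iter):
        w_new = w + eta * d
        # Sufficient-decrease (Armijo) condition.
        armijo = J(w_new) <= J(w) + c1 * eta * g0_d
        # Curvature condition.
        curvature = grad(w_new) @ d >= c2 * g0_d
        if armijo and curvature:
            return eta
        eta *= shrink
    return eta

# Hypothetical example: minimize the quadratic J(w) = 0.5 w^T A w.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
J = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

w = np.array([1.0, 1.0])
B = np.linalg.inv(A)            # exact inverse Hessian, just for illustration
d = -B @ grad(w)                # quasi-Newton descent direction
eta = wolfe_line_search(J, grad, w, d)
w_next = w + eta * d            # with the exact inverse Hessian, this is the minimizer
```

With the exact inverse Hessian the full step `eta = 1` is accepted immediately; with an approximate `B`, backtracking shrinks the step until both conditions hold.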

The exact Hessian is very expensive to calculate, and the full exact Hessian is very expensive to invert. We therefore seek an iterative approximation to the inverse Hessian, call it ${\displaystyle B}$. Then, ${\displaystyle B_{t+1}=\arg \min _{B}||B-B_{t}||_{w}\quad {\textrm {s.t.}}\quad s_{t}=By_{t}}$, where ${\displaystyle y_{t}=\nabla J(w_{t+1})-\nabla J(w_{t})}$ is the gradient difference between iterations, and ${\displaystyle s_{t}=w_{t+1}-w_{t}}$ is the parameter difference between iterations. After a bit of math, this results in the following update (${\displaystyle I}$ is the identity matrix): ${\displaystyle B_{t+1}=\left(I-{\frac {s_{t}y_{t}^{T}}{\langle s_{t},y_{t}\rangle }}\right)B_{t}\left(I-{\frac {y_{t}s_{t}^{T}}{\langle s_{t},y_{t}\rangle }}\right)+{\frac {s_{t}s_{t}^{T}}{\langle s_{t},y_{t}\rangle }}}$
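This update can be written directly in NumPy; as a sanity check, the secant condition ${\displaystyle s_{t}=B_{t+1}y_{t}}$ then holds by construction. The function name and the quadratic test problem below are illustrative assumptions:

```python
import numpy as np

def bfgs_inverse_update(B, s, y):
    """One BFGS update of the inverse-Hessian approximation B, given the
    parameter difference s = w_{t+1} - w_t and the gradient difference
    y = grad J(w_{t+1}) - grad J(w_t)."""
    rho = 1.0 / (s @ y)                   # 1 / <s_t, y_t>
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)          # I - s y^T / <s, y>
    # Note the transpose on the right-hand factor: V B V^T.
    return V @ B @ V.T + rho * np.outer(s, s)

# Sanity check on a quadratic: there y = A s, so the updated matrix
# must map y back to s (the secant condition).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
s = np.array([1.0, -1.0])
y = A @ s
B_new = bfgs_inverse_update(np.eye(2), s, y)
```

The update is rank-two and keeps `B` symmetric positive definite as long as ${\displaystyle \langle s_{t},y_{t}\rangle >0}$, which the Wolfe curvature condition guarantees.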

The above explains the BFGS method. L-BFGS is essentially the same, except that instead of storing ${\displaystyle B_{t}}$ explicitly, we keep only the ${\displaystyle m}$ most recent pairs ${\displaystyle (s_{t},y_{t})}$ and reconstruct the product ${\displaystyle B_{t}\nabla J(w_{t})}$ from them on the fly, which amounts to a low-rank approximation to ${\displaystyle B}$.
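The standard way to do this is the two-loop recursion, which computes ${\displaystyle B_{t}\nabla J(w_{t})}$ directly from the stored pairs without ever forming the matrix. This is a sketch assuming NumPy and the common ${\displaystyle \gamma I}$ initial scaling; names are illustrative:

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """L-BFGS two-loop recursion: computes B_t @ g implicitly from the
    m most recent (s, y) pairs; the update step is then -eta * result."""
    q = g.copy()
    alphas = []
    # First loop: newest pair to oldest.
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / (s @ y)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Initial scaling B_0 = gamma * I (a common heuristic choice).
    s, y = s_hist[-1], y_hist[-1]
    gamma = (s @ y) / (y @ y)
    r = gamma * q
    # Second loop: oldest pair to newest.
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        rho = 1.0 / (s @ y)
        b = rho * (y @ r)
        r += (a - b) * s
    return r

# With a single stored pair, the recursion satisfies the secant
# condition exactly: applying it to y returns s.
s = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])       # any y with <s, y> > 0
d = lbfgs_direction(y, [s], [y])
```

The cost per iteration is ${\displaystyle O(mn)}$ for ${\displaystyle n}$ parameters and history size ${\displaystyle m}$, versus ${\displaystyle O(n^{2})}$ for full BFGS.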

For non-smooth objective functions, there are a few modifications we need to make. First, our local quadratic approximation is no longer well defined. Second, since not every subgradient is a descent direction, the descent direction (which we formulated as ${\displaystyle -B_{t}\nabla J(w_{t})}$) is also not well defined. We adjust our quadratic approximation as: ${\displaystyle m_{t}(w)=\sup _{s\in \partial J(w_{t})}\{J(w_{t})+\langle s,w-w_{t}\rangle +{\frac {1}{2}}(w-w_{t})^{T}H_{t}(w-w_{t})\}}$. We deal with this supremum by shifting a term from the objective function to a constraint, creating a constrained optimization problem for the weight update:

${\displaystyle w_{t+1}=\arg \min _{w}{\frac {1}{2}}(w-w_{t})^{T}H_{t}(w-w_{t})+\xi }$ such that ${\displaystyle J(w_{t})+\langle s_{i},w-w_{t}\rangle \leq \xi \quad {\textrm {for}}\quad s_{1},\dots ,s_{k}\in \partial J(w_{t})}$. Notice that we sample ${\displaystyle k}$ subgradients from the subdifferential ${\displaystyle \partial J(w_{t})}$, which is a convex set.
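To make the constrained subproblem concrete, here is a small sketch that solves it with `scipy.optimize.minimize` for a hypothetical 1-D non-smooth objective ${\displaystyle J(w)=|w|}$ at ${\displaystyle w_{t}=0}$; the sampled subgradients and ${\displaystyle H_{t}=1}$ are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: J(w) = |w| at w_t = 0, where the subdifferential
# is the interval [-1, 1]; we sample k = 3 subgradients from it.
w_t, J_wt, H_t = 0.0, 0.0, 1.0
subgrads = [-1.0, 0.0, 1.0]

def objective(x):                        # decision variables x = (w, xi)
    w, xi = x
    return 0.5 * H_t * (w - w_t) ** 2 + xi

# One linear inequality J(w_t) + s_i (w - w_t) <= xi per sampled subgradient,
# written in SciPy's "fun(x) >= 0" convention.
constraints = [
    {"type": "ineq", "fun": lambda x, s=s: x[1] - (J_wt + s * (x[0] - w_t))}
    for s in subgrads
]

res = minimize(objective, x0=np.array([0.5, 1.0]), constraints=constraints)
w_next, xi = res.x   # the minimizer of |w| is w = 0, and the QP recovers it
```

Here the constraints force ${\displaystyle \xi \geq |w|}$, so the subproblem minimizes ${\displaystyle {\frac {1}{2}}w^{2}+|w|}$ and correctly returns the non-smooth minimizer ${\displaystyle w=0}$, where a naive gradient step would be undefined.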

### Related Work

There are a lot of papers that use this method. Here are some examples: