Liu and Nocedal, 1989

From Cohen Courses

[http://dl.acm.org/citation.cfm?id=83726 Weblink]
== Abstract ==
We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir, which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint. The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems.

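== Sketch: L-BFGS search direction ==

The abstract refers to the low iteration cost of L-BFGS and to a simple scaling that accelerates it. The sketch below shows the standard two-loop recursion for computing the L-BFGS search direction from the most recent curvature pairs, with the commonly used scaling gamma = s^T y / y^T y of the initial Hessian approximation (one of the scalings studied in the paper). This is a minimal illustrative sketch, not the authors' code; the function and variable names are made up for this example.

<pre>
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Compute the L-BFGS search direction -H_k * grad via the two-loop
    recursion, using the stored pairs s_i = x_{i+1} - x_i and
    y_i = grad_{i+1} - grad_i (oldest first in the lists).
    Illustrative sketch only; names are not from the paper."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []

    # First loop: most recent pair back to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)

    # Simple scaling of the initial matrix: H_0 = gamma * I with
    # gamma = s^T y / y^T y taken from the most recent pair.
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q

    # Second loop: oldest pair forward to the most recent.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s

    return -r  # approximate Newton direction -H_k * grad
</pre>

In an optimizer built around this routine, only a small, fixed number of recent pairs is kept, which is what gives the low per-iteration cost mentioned in the abstract, in contrast to full BFGS, which maintains a dense n-by-n approximation.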