Lbfgs.m
Revision as of 16:10, 13 August 2018
This function calculates the quasi-Newton update from the gradient history, giving the search direction. It is based on the BFGS algorithm, but this limited-memory variant (L-BFGS) stores only a few vectors that implicitly represent the BFGS approximation to the inverse Hessian, whereas full BFGS stores a dense matrix whose dimension equals the number of optimisation variables.
This function implements the algorithm from Section 4 of http://dx.doi.org/10.1090/S0025-5718-1980-0572855-7
Parameters:

    x_hist  - array of waveform history vectors, num_vars x size_store
    df_hist - array of gradient history vectors, num_vars x size_store
    grad    - current gradient vector, num_vars x 1
    N       - number of waveform/gradient vectors to store (default: 20)
Returns:

    direction - the vector giving the L-BFGS approximation to the search direction
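The update described above is the standard L-BFGS two-loop recursion from Section 4 of the cited paper. A minimal sketch is given below; it is written in Python/NumPy rather than MATLAB, and the function name `lbfgs_direction` and the oldest-first ordering of the history columns are assumptions for illustration, not the actual contents of Lbfgs.m.

```python
import numpy as np

def lbfgs_direction(x_hist, df_hist, grad, N=20):
    # Two-loop recursion sketch (hypothetical Python analogue of Lbfgs.m).
    # x_hist, df_hist: (num_vars, k) arrays of stored iterates and
    # gradients, oldest column first; grad: current gradient vector.
    s = np.diff(x_hist, axis=1)       # s_i = x_{i+1} - x_i
    y = np.diff(df_hist, axis=1)      # y_i = g_{i+1} - g_i
    m = min(s.shape[1], N)            # keep at most N stored pairs
    s, y = s[:, -m:], y[:, -m:]

    q = grad.astype(float).copy()
    alpha = np.zeros(m)
    rho = 1.0 / np.einsum('ij,ij->j', y, s)   # rho_i = 1 / (y_i . s_i)

    # First loop: newest pair to oldest
    for i in range(m - 1, -1, -1):
        alpha[i] = rho[i] * (s[:, i] @ q)
        q -= alpha[i] * y[:, i]

    # Initial inverse-Hessian scaling H0 = gamma * I
    gamma = (s[:, -1] @ y[:, -1]) / (y[:, -1] @ y[:, -1])
    r = gamma * q

    # Second loop: oldest pair to newest
    for i in range(m):
        beta = rho[i] * (y[:, i] @ r)
        r += (alpha[i] - beta) * s[:, i]

    return -r    # quasi-Newton descent direction
```

Because each stored pair satisfies the curvature condition y_i . s_i > 0 on a convex problem, the implicit inverse-Hessian approximation is positive definite and the returned vector is a descent direction.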
The L-BFGS algorithm is the default of fminnewton.m, offering a good balance of computational efficiency and convergence speed.