{{DISPLAYTITLE:lbfgs.m}}
 
Calculates an approximation to the Newton-Raphson search direction, using past gradients to build a serviceable substitute for the Hessian. The Hessian matrix is never explicitly formed or inverted: unlike the full [[quasinewton.m|BFGS]] algorithm, which stores a matrix with dimensions equal to the number of optimisation variables, this limited-memory BFGS (L-BFGS) method keeps only a small number of recent update vectors that implicitly represent the Hessian approximation. This function is the implementation from Section 4 of http://dx.doi.org/10.1090/S0025-5718-1980-0572855-7
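
In outline, the direction is produced by the two-loop recursion described in the paper above, applied to the stored increment histories. The following is a minimal sketch of that recursion, not the Spinach source code: it assumes the histories are plain matrices with one column per stored step, newest step last, whereas the actual function takes bookshelf arrays, and it returns the descent direction directly.

 % Minimal sketch of the L-BFGS two-loop recursion; illustrative only,
 % not the Spinach source. Histories are assumed to be plain matrices
 % with one column per stored step, newest step in the last column.
 function direction=lbfgs_sketch(dx_hist,dg_hist,g,n_grads)
 
 % With no stored history, fall back to steepest descent
 if isempty(dx_hist), direction=-g; return; end
 
 % Keep at most the n_grads most recent increments
 if size(dx_hist,2)>n_grads
     dx_hist=dx_hist(:,(end-n_grads+1):end);
     dg_hist=dg_hist(:,(end-n_grads+1):end);
 end
 n_store=size(dx_hist,2);
 
 % First loop: newest to oldest, project out stored directions
 q=-g; alp=zeros(n_store,1);
 for k=n_store:-1:1
     rho=1/(dg_hist(:,k)'*dx_hist(:,k));
     alp(k)=rho*(dx_hist(:,k)'*q);
     q=q-alp(k)*dg_hist(:,k);
 end
 
 % Scale by the initial inverse Hessian estimate, gamma*eye()
 gam=(dx_hist(:,end)'*dg_hist(:,end))/(dg_hist(:,end)'*dg_hist(:,end));
 direction=gam*q;
 
 % Second loop: oldest to newest, add the stored corrections back
 for k=1:n_store
     rho=1/(dg_hist(:,k)'*dx_hist(:,k));
     bet=rho*(dg_hist(:,k)'*direction);
     direction=direction+(alp(k)-bet)*dx_hist(:,k);
 end
 
 end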
 
  
 
==Syntax==
 
     direction=lbfgs(dx_hist,dg_hist,g,n_grads)
 
 
==Arguments==
 
    dx_hist         - history of x increments,
                      bookshelf array
 
   
 
   
    dg_hist         - history of gradient increments,
                      bookshelf array
 
   
 
   
    g               - current gradient, a column vector
 
   
 
   
    n_grads         - max number of past gradients to
                      use for the Hessian estimate
  
 
==Returns==
 
    direction       - L-BFGS approximation to the
                      search direction
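
As an illustration of how the inputs relate to one another, here is a hypothetical call on synthetic data using the lbfgs_sketch function from the sketch above: each column of the increment histories is the difference between successive iterates (for dx_hist) or successive gradients (for dg_hist). Within Spinach this bookkeeping is normally handled by the calling optimiser, [[fminnewton.m]].

 % Hypothetical example on a small quadratic model, f(x)=x'*A*x/2-b'*x,
 % using the lbfgs_sketch function above; names are illustrative only.
 A=diag([1 2 5]); b=[1; -1; 0.5];     % model Hessian and linear term
 dx_hist=[[0.1; 0; 0] [0; 0.2; 0]];   % two stored x increments
 dg_hist=A*dx_hist;                   % matching gradient increments
 g=A*[0.3; 0.3; 0.3]-b;               % gradient at the current point
 direction=lbfgs_sketch(dx_hist,dg_hist,g,20);
 % For reference, the exact Newton direction would be -A\g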
  
 
==Notes==
 
The L-BFGS algorithm is the default of [[fminnewton.m]] and offers a good balance of computational efficiency and convergence speed.

==See also==
 
[[fminnewton.m]], [[hess_reg.m]]
  
  
''Version 2.2, authors: [[Ilya Kuprov]], [[David Goodwin]]''
