Fminnewton.m


Finds a local minimum of a function of several variables, based on the fminlbfgs.m code by D. Kroon, University of Twente (November 2010). This version has been modified and tailored to the GRAPE (Gradient Ascent Pulse Engineering) module. Syntax:

    [x,fval,grad,hess,diag_data] = fminnewton(spin_system,cost_function,x_init,optim)

where spin_system is the data structure created by Spinach, cost_function is a function handle (or function name) that returns the cost and, optionally, the gradient and/or Hessian at a given point, x_init is the initial guess, and optim is an optional structure of optimisation options.
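A minimal calling sketch, with hypothetical names: spin_system is assumed to come from the usual Spinach create() and basis() calls, cost_fun follows the template given in the Inputs section, and guess is a vector of initial control amplitudes.

    % Minimal calling sketch (hypothetical names, not a definitive example)
    optim.method='lbfgs';            % optional settings, see below
    optim.max_iterations=50;
    [x,fval,grad,hess,diag_data]=fminnewton(spin_system,@cost_fun,guess,optim);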

Inputs

'spin_system' A spin system object created with Spinach; it should always be passed to the GRAPE optimal control module.
'cost_function' Function handle or string naming the function to be minimised (a worked example follows the option list below):
                          function [f,g,h]=cost_function(x)
                              f = ...;       % function value at x
                              if ( nargout > 1 )
                                  g = ...;   % gradient at x
                              end
                              if ( nargout > 2 )
                                  h = ...;   % Hessian at x
                              end
                          end
'x_init' Initial guess. Scalar, vector or matrix.
'optim' Structure with optimiser options, as set out in the optim_tols.m function. Five optimisation methods are available: steepest descent, BFGS, SR1, limited-memory BFGS (l-BFGS) and the Newton-Raphson method. The default is BFGS, which switches to l-BFGS for large problems and is fast and memory-efficient.
Extended description of optimiser options:
  optim.method          -  Optimisation method:
          'bfgs'        Broyden-Fletcher-Goldfarb-Shanno algorithm (default);
                        if the number of unknowns exceeds 3000, 'lbfgs' is used.
          'sr1'         symmetric rank 1 Hessian update
          'lbfgs'       limited-memory BFGS algorithm
          'steepdesc'   steepest descent optimisation
          'newton'      Newton-Raphson method (needs the Hessian)
  optim.OutputFcn       -  User-defined function called at each iteration,
                           useful for storing convergence properties.
  optim.lbfgs_store     -  Number of iterations used to approximate the
                           Hessian in l-BFGS, default 20. Use a small value
                           for non-smooth functions and a large value for a
                           good quadratic approximation.
  optim.tolX            -  Termination tolerance on x, default 1e-6.
  optim.tolF            -  Termination tolerance on the function value,
                           default 1e-6.
  optim.max_iterations  -  Maximum number of iterations allowed, default 100.
  optim.tol_decrease_F  -  Wolfe sufficient decrease condition on the
                           gradient, default 0.01.
  optim.tol_curvature   -  Wolfe curvature condition on the gradient,
                           default 0.9.
  optim.tau1            -  Bracket expansion factor if the step size
                           increases, default 3.
  optim.tau2            -  Left bracket reduction in the sectioning phase,
                           default 0.1.
  optim.tau3            -  Right bracket reduction in the sectioning phase,
                           default 0.5.
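As a concrete illustration of the cost function template and the option structure, here is a hedged sketch: the Rosenbrock function stands in for a real GRAPE fidelity functional, and rosen_cost is a hypothetical name.

    % Hypothetical stand-in cost function following the template above;
    % the Rosenbrock function replaces a real GRAPE fidelity functional.
    % Save as rosen_cost.m and pass as @rosen_cost.
    function [f,g,h]=rosen_cost(x)
        f=100*(x(2)-x(1)^2)^2+(1-x(1))^2;             % function value at x
        if ( nargout > 1 )                            % gradient at x
            g=[-400*x(1)*(x(2)-x(1)^2)-2*(1-x(1));
                200*(x(2)-x(1)^2)];
        end
        if ( nargout > 2 )                            % Hessian at x
            h=[1200*x(1)^2-400*x(2)+2, -400*x(1);
               -400*x(1),              200];
        end
    end

    % Option structure using the fields documented above
    optim.method='newton';               % Newton-Raphson uses the Hessian
    optim.tolX=1e-8; optim.tolF=1e-8;    % tight termination tolerances
    optim.max_iterations=200;            % iteration cap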

Outputs

'x' The minimiser of the function.
'fval' The value of the function at the minimiser.
'grad' The gradient at this location.
'hess' The Hessian at this location.
'diag_data' Diagnostics data structure containing complete information about the calculation. It has the following self-explanatory fields:

    diag_data.exit_message
    diag_data.iteration
    diag_data.funccount
    diag_data.gradcount
    diag_data.hesscount
    diag_data.trajectory
    diag_data.total_grad
    diag_data.total_hess
    diag_data.minimiser
    diag_data.fval
    diag_data.gradient
    diag_data.hessian
    diag_data.timeTotal
    diag_data.timeExtern
    diag_data.timeIntern
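A short sketch of reading the diagnostics after a run; the exact content of each field is assumed from the names listed above.

    % Hedged sketch: inspect the diagnostics structure after a run
    fprintf('%s\n',diag_data.exit_message);          % why the optimiser stopped
    fprintf('iterations: %d, cost evaluations: %d\n',...
            diag_data.iteration,diag_data.funccount);
    fprintf('wall clock time: %g seconds\n',diag_data.timeTotal);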