
Contains the names and default values of all tolerances and options for the numerical optimisation parameters in Spinach.




Sets the various accuracy cut-offs, options and tolerances used throughout the optimisation. Modifications to this function are discouraged; the accuracy settings should instead be changed by setting the sub-fields of the optim structure, which is passed as one of the arguments of fminnewton.m and inherited from it by this function.
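For example, a user-side sketch of the recommended route is given below. The cost-function handle, the initial guess and the exact fminnewton.m calling syntax are placeholders and should be checked against the fminnewton.m documentation; only the sub-field names and values come from the table on this page.

    % Set options through sub-fields of the optim structure rather than
    % by editing this function
    optim.method='lbfgs';               % limited-memory quasi-Newton method
    optim.max_iterations=100;           % iteration cap
    optim.tol_gfx=1e-6;                 % first-order optimality tolerance
    optim.linesearch='bracket-section'; % bracketing-sectioning line search
    optim.linesearch_rules='Wolfe-strong';

    % Hypothetical call - optim is passed as one of the fminnewton.m arguments
    % [x,fx]=fminnewton(@my_cost_fun,guess,optim);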


    optim          - structure containing numerical optimisation options and 
                     tolerances (see below for extensive description).
    n_vars         - dimension of the objective function variable.


    optim_param    - parsed optim variables, with all tolerances structured in
                     an orderly manner.
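Conceptually, the parsing amounts to filling any sub-field the user has not set with its documented default, along the lines of the sketch below (not the actual implementation; the defaults shown are those documented in the table that follows).

    % Conceptual sketch: unset sub-fields receive their documented defaults
    optim=struct('tol_gfx',1e-8);        % example user-supplied structure
    if ~isfield(optim,'method'),         optim.method='lbfgs';      end
    if ~isfield(optim,'max_iterations'), optim.max_iterations=100;  end
    if ~isfield(optim,'lbfgs_store'),    optim.lbfgs_store=20;      end
    if ~isfield(optim,'tol_x'),          optim.tol_x=1e-6;          end
    if ~isfield(optim,'tol_gfx'),        optim.tol_gfx=1e-6;        end
    optim_param=optim;                   % parsed structure returned to the caller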

Default structure

optim.method
    Default:  'lbfgs'
    Values:   'lbfgs', 'sr1'
    Comments: Methods of numerical optimisation supported by fminnewton.m: gradient descent and ascent, quasi-Newton methods, and the second-order Newton-Raphson method.

optim.npen
    Default:  0
    Values:   Positive integer
    Comments: Number of penalty terms to separate from the cost (the optimisation uses the sum of the penalties and the cost, but the two can be separated in the console display). When this is used, the cost function should return the cost as an array with npen+1 elements, elements 2 to npen+1 being the penalty terms (a sketch follows the table).

optim.obj_fun_disp
    Default:  0
    Values:   Function handle
    Comments: A function handle applied to the cost before it is displayed, e.g. optim.obj_fun_disp=@(fx)(1-fx) displays the infidelity. Note: this affects only the display, not the cost used in the computation.

optim.max_iterations
    Default:  100
    Values:   Positive integer
    Comments: Maximum number of iterations for which the numerical optimisation is run.

optim.inverse_method
    Default:  true
    Values:   Boolean
    Comments: For the Hessian update methods (BFGS, SR1). If true, the optimisation updates and uses the inverse Hessian; if false, the Hessian itself is updated and must be inverted to find the search direction.

optim.lbfgs_store
    Default:  20
    Values:   Positive integer
    Comments: Size of the store used in the 'lbfgs' (limited-memory BFGS) algorithm.

optim.tol_x
    Default:  1e-6
    Values:   [math]\in [0, 1][/math]
    Comments: Termination tolerance on the waveform.

optim.tol_fx
    Default:  maximise: Inf; minimise: -Inf
    Values:   Real number
    Comments: Termination tolerance on the objective function value (fidelity).

optim.tol_gfx
    Default:  1e-6
    Values:   [math]\in [0, 1][/math]
    Comments: Termination tolerance on the gradient: the first-order optimality condition.

optim.linesearch
    Default:  'bracket-section'
    Values:   'bracket-section', 'newton-step'
    Comments: Type of line search to use: a simple backtracking line search, a bracketing line search with a sectioning phase using cubic interpolation ('bracket-section'), or 'newton-step', which is effectively no line search.

optim.linesearch_rules
    Default:  backtracking: 'Armijo'; bracket-section: 'Wolfe-strong'
    Values:   'Armijo', 'Goldstein', 'Wolfe-weak', 'Wolfe-strong'
    Comments: These line-search rules decide whether a step length is acceptable. 'Armijo' and 'Goldstein' are gradient-free and the simplest, probably sufficient for the 'newton' method with backtracking. 'Wolfe-weak' and 'Wolfe-strong' are required for acceptable performance of quasi-Newton methods.

optim.tol_linesearch_fx
    Values:   Armijo, Wolfe-weak, Wolfe-strong: [math]\in [0, 1][/math]; Goldstein: [math]\in [0, 0.5][/math]
    Comments: Armijo-Goldstein inequality: condition of sufficient decrease in the objective function (the standard form of the condition is given after the table).

optim.tol_linesearch_gfx
    Default:  0.9
    Values:   [math]\in [0, 1][/math]
    Comments: Wolfe-Powell curvature condition. Only required for 'Wolfe-weak' and 'Wolfe-strong'.

optim.tol_linesearch_reduction
    Default:  [math]\frac{2}{1+\sqrt{5}}[/math]
    Values:   [math]\in [0, 1][/math]
    Comments: Step-length contraction factor (rho) of the backtracking line search.

optim.tau1
    Default:  3
    Values:   Positive integer
    Comments: Bracket expansion factor used if the step size grows in the gradient-based 'bracket-section' line search.

optim.tau2
    Default:  0.1
    Values:   [math]\in [0, 1][/math]
    Comments: Left bracket reduction used in the sectioning phase of the gradient-based 'bracket-section' line search.

optim.tau3
    Default:  0.5
    Values:   [math]\in [0, 1][/math]
    Comments: Right bracket reduction used in the sectioning phase of the gradient-based 'bracket-section' line search.

optim.regularisation
    Default:  newton: 'RFO'
              krotov-sr1, krotov-bfgs:
              sr1, bfgs: 'none'
    Comments: Hessian regularisation method, applied when a Hessian is calculated. 'RFO' is the default for the Newton-Raphson method; for 'sr1' or 'bfgs' the default is 'none'. Regularisation forces the Hessian matrix to be positive definite; 'RFO' also provides scaling.

optim.conditioning
    Default:  krotov-sr1, krotov-bfgs:
              newton, sr1, bfgs:
    Comments: Method to ensure the Hessian matrix is sufficiently positive definite by monitoring its condition number. Not required if regularisation is specified.

optim.n_reg
    Default:  2500
    Values:   Positive integer
    Comments: Maximum number of regularisation iterates for 'RFO' and 'TRM' conditioning, and maximum number of trials for 'CHOL' regularisation.

optim.alpha
    Default:  1
    Values:   Positive number
    Comments: Uniform scaling factor of the 'RFO' method.

optim.delta
    Default:  1
    Values:   Positive number
    Comments: Uniform eigenvalue shifting factor (delta) of the 'TRM' method.

optim.phi
    Default:  RFO:
    Values:   Positive number
    Comments: At each iterate of the 'iterative' conditioning above, the value of optim.alpha ('RFO') or optim.delta ('TRM') is multiplied by this factor, reducing the condition number until it falls below optim.max_cond (a sketch follows the table).

optim.max_cond
    Default:  iterative: 1e4; scaled: 1e14
    Values:   Positive number
    Comments: Maximum condition number of the Hessian matrix. The default is 1e4 when 'iterative' conditioning is used and 1e14 when 'scaled' conditioning is used.
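As a sketch of the penalty separation controlled by optim.npen and of the display transform optim.obj_fun_disp: the cost function below is an arbitrary placeholder, and only its output layout (cost first, then npen penalty terms) follows the description above.

    optim.npen=2;                    % two penalty terms shown separately in the console
    optim.obj_fun_disp=@(fx)(1-fx);  % display the infidelity instead of the fidelity

    % Placeholder cost function: element 1 is the cost, elements 2..npen+1
    % are the penalty terms; the optimiser works with their sum
    my_cost_fun=@(x)[-sum(cos(x)), 1e-3*norm(x)^2, 1e-3*norm(diff(x))^2];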
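For reference, the standard sufficient-decrease and curvature conditions that these line-search rules implement are, for a minimisation along search direction [math]p[/math] with step length [math]\alpha[/math] (identifying the tolerances with the constants [math]c_1[/math] and [math]c_2[/math] is inferred from the descriptions above):

    [math]f(x+\alpha p) \leq f(x) + c_1 \alpha \nabla f(x)^{T} p[/math]   (Armijo-Goldstein sufficient decrease, with c1 = optim.tol_linesearch_fx)

    [math]\left|\nabla f(x+\alpha p)^{T} p\right| \leq c_2 \left|\nabla f(x)^{T} p\right|[/math]   (strong Wolfe curvature condition, with c2 = optim.tol_linesearch_gfx)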
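The 'iterative' conditioning described for optim.phi and optim.max_cond can be sketched as the loop below, shown here with a 'TRM'-style uniform eigenvalue shift. This is not the actual Spinach implementation; the test Hessian and the value of phi are assumed for illustration, while delta, max_cond and n_reg use the documented defaults.

    % Conceptual sketch of 'iterative' conditioning: the uniform eigenvalue
    % shift delta is multiplied by phi until the condition number of the
    % shifted Hessian falls below max_cond, with n_reg capping the attempts
    hess=diag([-1e-3, 1e-2, 1e+4]);           % stand-in ill-conditioned Hessian
    delta=1; phi=2; max_cond=1e4; n_reg=2500; % documented defaults, phi assumed
    hess_reg=hess+delta*eye(size(hess,1)); n_iter=0;
    while (cond(hess_reg)>max_cond)&&(n_iter<n_reg)
        delta=delta*phi;                          % inflate the shift at each iterate
        hess_reg=hess+delta*eye(size(hess,1));    % re-apply the uniform eigenvalue shift
        n_iter=n_iter+1;
    end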


This function should not be called directly by the user, and modifications are discouraged.

See also

fminnewton.m


Version 1.9, authors: David Goodwin