optim_tols.m

Contains the names and default values of all tolerances and options used by the numerical optimisation routines in Spinach.

Syntax

    optim_param=optim_tols(optim,n_vars)

Description

Sets the various accuracy cut-offs, options and tolerances used throughout the optimisation. Modifications to this function are discouraged; the accuracy settings should instead be changed by setting the sub-fields of the optim structure. The optim structure is passed as an argument to fminnewton.m, fminsimplex.m, or fminkrotov.m, from which this function inherits it.
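
For orientation, the sketch below shows how a user would typically populate optim and hand it to an optimiser. The cost function, the initial guess, and the exact calling signature of fminnewton.m are illustrative assumptions; consult fminnewton.m for the actual interface.

    % Optimisation options; any field not set here
    % falls back to the defaults tabulated below
    optim.method='lbfgs';          % limited-memory BFGS
    optim.extremum='maximum';      % maximise the fidelity
    optim.max_iterations=50;       % iteration cap
    optim.tol_gfx=1e-5;            % gradient termination tolerance

    % Hypothetical initial guess and cost function handle;
    % the call pattern below is assumed, not guaranteed
    guess=zeros(1,64);
    [x,fx]=fminnewton(@cost_function,guess,optim);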

Arguments

    optim          - structure containing numerical optimisation options and 
                     tolerances (see below for extensive description).
    
    n_vars         - dimension of the objective function variable.


Returns

    optim_param    - parsed optim variables, with all tolerances structured in
                     an orderly manner.

Default structure

Each parameter is listed below with its default value, its admissible values, a comment, and the optimisers that use it.

optim.method
    Default:    fminnewton: 'lbfgs'; fminsimplex: 'nm_simplex'; fminkrotov: 'krotov'
    Values:     'sr1', 'bfgs', 'lbfgs', 'newton', 'nm_simplex', 'md_simplex',
                'krotov', 'krotov-sr1', 'krotov-bfgs'
    Comments:   Methods of numerical optimisation supported by Spinach's
                fminnewton.m, fminsimplex.m, and fminkrotov.m. Gradient descent,
                gradient ascent, quasi-Newton methods, and the second-order
                Newton-Raphson method are fully supported through fminnewton.m.
                Nelder-Mead and multi-directional search simplex methods are
                supported through fminsimplex.m. Krotov methods, including
                quasi-Newton updated variants, are supported through fminkrotov.m.
    Optimisers: fminnewton, fminsimplex, fminkrotov

optim.extremum
    Default:    fminnewton, fminsimplex: 'minimum'; fminkrotov: 'maximum'
    Values:     'minimum', 'maximum'
    Comments:   Whether to maximise or minimise the objective function.
                fminsimplex.m currently only supports minimisation.
    Optimisers: fminnewton, fminkrotov

optim.npen
    Default:    0
    Values:     Non-negative integer
    Comments:   Number of penalties to separate from the cost (the optimisation
                uses the sum of the penalties and the cost, but the two can be
                separated in the console display). When this is used, the cost
                function should return an array with npen+1 elements, elements
                2 to npen+1 being the penalty terms (see the penalty sketch
                after this table).
    Optimisers: fminnewton

optim.obj_fun_disp
    Default:    0 (no function applied)
    Values:     Function handle
    Comments:   Function handle applied to the displayed cost, e.g.
                optim.obj_fun_disp=@(fx)(1-fx) gives the infidelity. Note:
                this only applies to the display, not to the computational
                use of the cost.
    Optimisers: fminnewton

optim.max_iterations
    Default:    100
    Values:     Positive integer
    Comments:   Maximum number of iterations for which the numerical
                optimisation should run.
    Optimisers: fminnewton, fminsimplex, fminkrotov

optim.inverse_method
    Default:    true
    Values:     Boolean
    Comments:   For the Hessian update methods BFGS and SR1. If true, the
                optimisation updates and uses the inverse Hessian; if false,
                the Hessian itself is updated, and its inverse is required to
                find the search direction.
    Optimisers: fminnewton

optim.lbfgs_store
    Default:    20
    Values:     Positive integer
    Comments:   Size of the store used in the LBFGS (limited-memory BFGS)
                algorithm.
    Optimisers: fminnewton

optim.tol_x
    Default:    1e-6
    Values:     [math]\in [0, 1][/math]
    Comments:   Termination tolerance on the waveform.
    Optimisers: fminnewton, fminsimplex, fminkrotov

optim.tol_fx
    Default:    maximise: Inf; minimise: -Inf
    Values:     Real number
    Comments:   Termination tolerance on the objective function value
                (fidelity).
    Optimisers: fminnewton, fminsimplex, fminkrotov

optim.tol_gfx
    Default:    1e-6
    Values:     [math]\in [0, 1][/math]
    Comments:   Termination tolerance on the gradient: the first-order
                optimality condition.
    Optimisers: fminnewton, fminkrotov

optim.linesearch
    Default:    'bracket-section'
    Values:     'newton-step', 'backtracking', 'bracket-section'
    Comments:   Type of line search to use. Two are available: a simple
                backtracking line search, and a bracketing line search with a
                sectioning phase using cubic interpolation. The third option,
                'newton-step', is effectively no line search.
    Optimisers: fminnewton

optim.linesearch_rules
    Default:    backtracking: 'Armijo'; bracket-section: 'Wolfe-strong'
    Values:     'Armijo', 'Goldstein', 'Wolfe-weak', 'Wolfe-strong'
    Comments:   These line-search rules decide whether a step length is of
                acceptable size. 'Armijo' and 'Goldstein' are gradient-free
                and the simplest, and are probably sufficient for the 'newton'
                method with 'backtracking'. 'Wolfe-weak' and 'Wolfe-strong'
                are required for acceptable performance of quasi-Newton
                methods.
    Optimisers: fminnewton

optim.tol_linesearch_fx
    Default:    Armijo, Wolfe-weak, Wolfe-strong: 0.01; Goldstein: 0.25
    Values:     Armijo, Wolfe-weak, Wolfe-strong: [math]\in [0, 1][/math];
                Goldstein: [math]\in [0, 0.5][/math]
    Comments:   Armijo-Goldstein inequality: condition of sufficient decrease
                in the objective function.
    Optimisers: fminnewton

optim.tol_linesearch_gfx
    Default:    0.9
    Values:     [math]\in [0, 1][/math]
    Comments:   Wolfe-Powell curvature condition. Only required for
                'Wolfe-weak' and 'Wolfe-strong'.
    Optimisers: fminnewton

optim.tol_linesearch_reduction
    Default:    [math]\frac{2}{1+\sqrt{5}}[/math]
    Values:     [math]\in [0, 1][/math]
    Comments:   'backtracking' step-length contraction factor (rho).
    Optimisers: fminnewton

optim.tau1
    Default:    3
    Values:     Positive integer
    Comments:   Bracket expansion factor used when the step size grows in the
                gradient-based 'bracket-section' line search.
    Optimisers: fminnewton

optim.tau2
    Default:    0.1
    Values:     [math]\in [0, 1][/math]
    Comments:   Left bracket reduction used in the sectioning phase of the
                gradient-based 'bracket-section' line search.
    Optimisers: fminnewton

optim.tau3
    Default:    0.5
    Values:     [math]\in [0, 1][/math]
    Comments:   Right bracket reduction used in the sectioning phase of the
                gradient-based 'bracket-section' line search.
    Optimisers: fminnewton

optim.regularisation
    Default:    newton: 'RFO'; krotov-sr1, krotov-bfgs: 'TRM'; sr1, bfgs: 'none'
    Values:     'none', 'CHOL', 'TRM', 'RFO'
    Comments:   Hessian regularisation method. If a Hessian is calculated
                (Newton-Raphson method), 'RFO' is the default; for 'sr1' or
                'bfgs' the default is 'none'. Forces the Hessian matrix to be
                positive definite; 'RFO' also provides scaling.
    Optimisers: fminnewton, fminkrotov

optim.conditioning
    Default:    krotov-sr1, krotov-bfgs: 'scaled'; newton, sr1, bfgs: 'iterative'
    Values:     'none', 'iterative', 'scaled'
    Comments:   Method of ensuring that the Hessian matrix is sufficiently
                positive definite by monitoring its condition number (see the
                conditioning sketch after this table). Not required if
                regularisation is specified.
    Optimisers: fminnewton, fminkrotov

optim.n_reg
    Default:    2500
    Values:     Positive integer
    Comments:   Maximum number of regularisation iterates for 'RFO' and 'TRM'
                conditioning, and maximum number of trials for 'CHOL'
                regularisation.
    Optimisers: fminnewton, fminkrotov

optim.alpha
    Default:    1
    Values:     Positive number
    Comments:   Uniform scaling factor of the 'RFO' method.
    Optimisers: fminnewton, fminkrotov

optim.delta
    Default:    1
    Values:     Positive number
    Comments:   Uniform eigenvalue shifting factor (delta) of the 'TRM'
                method.
    Optimisers: fminnewton, fminkrotov

optim.phi
    Default:    RFO: 0.9; TRM: [math]0.81^{-1}[/math]
    Values:     Positive number
    Comments:   At each iterate of the 'iterative' conditioning above, the
                value of optim.alpha (RFO) or optim.delta (TRM) is multiplied
                by this factor, reducing the condition number until it falls
                below optim.max_cond.
    Optimisers: fminnewton, fminkrotov

optim.max_cond
    Default:    iterative: 1e4; scaled: 1e14
    Values:     Positive number
    Comments:   Maximum condition number of the Hessian matrix: the default
                is 1e4 for 'iterative' conditioning and 1e14 for 'scaled'
                conditioning.
    Optimisers: fminnewton, fminkrotov

optim.simplex_min
    Default:    1e-3
    Values:     Positive number
    Comments:   Tolerance for the convergence test based on the relative size
                of the simplex.
    Optimisers: fminsimplex

optim.max_n_fx
    Default:    Inf
    Values:     Positive integer
    Comments:   Maximum number of objective function evaluations.
    Optimisers: fminsimplex

optim.termination
    Default:    -Inf
    Values:     Real number
    Comments:   Termination tolerance on the objective function value.
    Optimisers: fminsimplex

optim.init_simplex
    Default:    'equilateral'
    Values:     'equilateral', 'right-angled'
    Comments:   Initial simplex shape.
    Optimisers: fminsimplex

optim.expansion
    Default:    2
    Values:     Real number > 1
    Comments:   Simplex expansion factor at an expand step.
    Optimisers: fminsimplex

optim.contraction
    Default:    0.5
    Values:     [math]\in [0, 1][/math]
    Comments:   Simplex contraction factor at a contract step.
    Optimisers: fminsimplex

optim.reflection
    Default:    1.0
    Values:     Positive number
    Comments:   Simplex reflection factor at a reflect step (Nelder-Mead
                method only).
    Optimisers: fminsimplex
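
The penalty sketch referenced at optim.npen above: a hypothetical cost function that returns one penalty term alongside the cost. The function name and the shapes involved are illustrative assumptions, not part of the Spinach API.

    % With optim.npen=1 the cost function returns npen+1=2 elements:
    % element 1 is the cost, element 2 the penalty term; the optimiser
    % works on their sum but displays the two separately
    function fx=cost_and_penalty(x)
        cost=1-abs(mean(exp(1i*x)));   % illustrative fidelity-type cost
        penalty=1e-3*norm(x)^2;        % illustrative power penalty
        fx=[cost; penalty];            % npen+1 element array
    end

    % Matching option settings
    optim.npen=1;                      % one penalty term to separate
    optim.obj_fun_disp=@(fx)(1-fx);    % display infidelity, not the cost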

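The conditioning sketch referenced at optim.conditioning above: a schematic of the 'iterative' conditioning logic in the TRM case. The uniform eigenvalue shift shown here is an assumption made for illustration; this is not Spinach's actual implementation.

    % Iterative conditioning, TRM flavour: given a Hessian H from the
    % current iterate, scale the shift delta by phi until the condition
    % number of the regularised Hessian falls below max_cond, or the
    % n_reg iterate budget is exhausted
    delta=optim.delta; n_iter=0;
    H_reg=H+delta*eye(size(H));
    while (cond(H_reg)>optim.max_cond)&&(n_iter<optim.n_reg)
        delta=delta*optim.phi;         % phi>1 for TRM: grow the shift
        H_reg=H+delta*eye(size(H));
        n_iter=n_iter+1;
    end
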
Notes

This function should not be called directly by the user, and modifications are discouraged.

See also

fminnewton.m, fminsimplex.m, fminkrotov.m


Version 1.9, author: David Goodwin