Optimal control module

From Spinach Documentation Wiki
''Version 1.9, authors: [[Ilya Kuprov]], [[David Goodwin]]''

Revision as of 13:45, 3 January 2017

This section contains the description of the functions in the Optimal Control module of Spinach.

Creating an Optimal Control problem

Optimal control can be thought of as numerical optimisation applied to a physics problem. To keep this distinction obvious to the user, Spinach divides an optimal control problem into two structures:

  1. The control problem
    - containing all physics defining the control system.
  2. The optimisation problem
    - defining options, tolerances and an "objective function" to be maximised or minimised by a gradient-based numerical method.

Throughout this page, the control system structure is called ctrl_sys and the optimisation options structure is called optim.

Control operators

A bilinear control system is split into a drift Hamiltonian (the part of the dynamics that is beyond experimental control) and a set of control Hamiltonians:

[math]\mathbf{H}=\mathbf{H}_0+\sum\limits_{k=1}^K c_k(t)\mathbf{H}_k[/math]

where [math]\mathbf{H}_0[/math] is the drift Hamiltonian and [math]\mathbf{H}_k[/math] are the control operators with time-dependent control amplitudes [math]c_k(t)[/math]. Optimal control finds the set of control amplitudes that maximises the overlap between the state evolved from the initial state and the desired target state.

An example of six control operators for an HCF (1H, 13C, 19F) spin system defined with Spinach. The drift and the raising operators shown here are a sketch obtained with the hamiltonian.m and operator.m functions, assuming standard NMR interactions:

    % Drift Hamiltonian
    H0=hamiltonian(assume(spin_system,'nmr'));
    % Raising operators on each channel
    LpH=operator(spin_system,'L+','1H');
    LpC=operator(spin_system,'L+','13C');
    LpF=operator(spin_system,'L+','19F');
    % Control operators
    LxH=(LpH+LpH')/2; LyH=(LpH-LpH')/2i;
    LxC=(LpC+LpC')/2; LyC=(LpC-LpC')/2i;
    LxF=(LpF+LpF')/2; LyF=(LpF-LpF')/2i;

The control operators should be collected into a cell array and stored as a field of the ctrl_sys structure.
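As a sketch (the exact field name expected by the control setup routines should be checked against the control_sys.m documentation; controls is an assumed name):

    % Collect the six control operators into a cell array
    % (field name 'controls' is an assumption, see control_sys.m)
    ctrl_sys.controls={LxH,LyH,LxC,LyC,LxF,LyF};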


Initial and target states

In the context of state-to-state transfer, an initial and a target state must be defined. Example syntax for a three-spin system starting with all magnetisation on the first spin, with the target state of all magnetisation on the third spin:

    % Initial normalised state (state.m call; normalisation is a sketch)
    rho_init=state(spin_system,{'Lz'},{1});
    rho_init=rho_init/norm(full(rho_init),2);
    % Desired normalised target state
    rho_targ=state(spin_system,{'Lz'},{3});
    rho_targ=rho_targ/norm(full(rho_targ),2);
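For state-to-state transfer, the quantity being maximised is commonly taken to be the real part of the overlap between the state at the final time [math]T[/math] and the target state (conventions vary between implementations):

[math]f = \mathrm{Re}\langle\sigma|\rho(T)\rangle[/math]

where [math]\rho(T)[/math] is the state evolved from the initial state under the controlled Hamiltonian and [math]\sigma[/math] is the target state.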

Initial guess

The GRAPE optimal control method relies on the piecewise-constant approximation, in which the total time allowed for the transfer is split into nsteps intervals during which the control amplitudes are held constant. Although the Krotov method does not rely on the piecewise-constant approximation, time should still be split into (relatively small) intervals, which numerically approximate continuous time.
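As a generic illustration of the piecewise-constant approximation (plain MATLAB, not Spinach code), the total propagator is a time-ordered product of interval propagators:

    % Two-level system with one control channel (illustrative only)
    Hd=[1 0; 0 -1]; Hc=[0 1; 1 0];     % drift and control operators
    c=randn(1,100); dt=1e-3;           % control amplitudes and time step
    U=eye(2);
    for n=1:numel(c)
        U=expm(-1i*(Hd+c(n)*Hc)*dt)*U; % propagator for interval n
    end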

An initial guess must be provided to any optimisation. In Spinach, the initial guess should be a k-by-n matrix, where k is the number of control operators and n is the number of time points.

Syntax example for the above HCF spin system (the specific numbers are illustrative):

    % Power level
    power_level=2*pi*10e3;
    % Pulse duration (seconds)
    pulse_duration=0.1;
    % Number of time steps
    nsteps=100;
    % Calculate the time step
    time_step=pulse_duration/nsteps;
    % Take a random guess for the pulse
    guess=randn(6,nsteps)/10;

Note that it is useful to scale the initial guess, in this case by dividing by 10. The final stage in defining the control system is to collect the parameters into a single, well-defined structure and check them for consistency; this is the job of the control_sys.m function.


Objective function

The foundation of the optimal control module is an objective function (also called a cost function), which returns the fidelity cost and its gradient with respect to the controls:

    function [cost,cost_gradient] = cost_function(waveform)

The cost function handle should not be anonymous: use nested, local, or simple functions. A sketch of this objective function using GRAPE (the exact grape.m call signature should be checked against its documentation):

    function [cost,cost_grad] = cost_function(waveform)
        % Fidelity and its gradient from GRAPE (call signature is an
        % assumption - consult grape.m for your Spinach version)
        [cost,cost_grad]=grape(ctrl_sys,waveform);
    end

Numerical optimisation

fminnewton.m contains implementations of steepest ascent/descent, quasi-Newton (SR1, BFGS and L-BFGS), and Newton-Raphson minimisation/maximisation algorithms. An example that maximises the objective function above:

    % Define the optimisation algorithm and maximum iterations
    optim = struct('method','grad_ascent','max_iterations',100);
    % Run the optimisation, passing the cost function as a handle
    % (output argument order is a sketch - see fminnewton.m)
    [oc_waveform,fidelity]=fminnewton(@cost_function,guess,optim);

The console output for this example shows iteratively better solutions for steering the system from its initial state to its target state. In this case oc_waveform is the optimal waveform found after 100 iterations of the gradient ascent algorithm, together with its resulting fidelity.

The gradient ascent algorithm does not have good convergence characteristics and takes a large number of iterations to reach good solutions. Further examples, with better convergence, are outlined in Optimal control with GRAPE.


See also

Control functions

control_sys.m, grape.m, krotov.m, penalty.m

Optimisation functions

fminkrotov.m, fminnewton.m, fminsimplex.m, hessprep.m, hessreg.m, lbfgs.m, linesearch.m, objeval.m, optim_report.m, optim_tols.m, quasinewton.m

Other relevant functions

hess_reorder.m, dirdiff.m, step.m
