# Optimal control module

This section contains the description of the functions in the Optimal Control module of Spinach.

## Creating an Optimal Control problem

Optimal control can be thought of as a numerical mathematical optimisation algorithm used to solve a physics problem. To keep this distinction obvious to the user, Spinach divides an optimal control problem into two structures:

- The control problem, containing all of the physics defining the control system.
- The optimisation problem, defining options, tolerances, and an "objective function" to be solved with a root-finding-type numerical method.

The two structures will be defined by the control system in `ctrl_sys` and the optimisation options in `optim`.

### Control operators

A bilinear control system should be split into a drift Hamiltonian (the part that is beyond the control of the experiment) and control Hamiltonians:

[math]\mathbf{H}=\mathbf{H}_0+\sum\limits_{k=1}^K c_k(t)\mathbf{H}_k[/math]

where [math]\mathbf{H}_0[/math] is the `drift` Hamiltonian, and [math]\mathbf{H}_k[/math] are the control operators with time-dependent control amplitudes [math]c_k(t)[/math]. Optimal control finds the optimal set of control amplitudes with respect to the overlap between an initial state and a target state.
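As an illustration of the bilinear structure above, here is a minimal pure-MATLAB sketch (no Spinach calls; the single-spin operators and the 100 Hz offset are made-up illustrative values) assembling the total Hamiltonian at one time point:

```matlab
% Spin-1/2 operators (hbar=1 units)
Sx=[0 1; 1 0]/2; Sy=[0 -1i; 1i 0]/2; Sz=[1 0; 0 -1]/2;

% Drift: an illustrative 100 Hz offset term
H0=2*pi*100*Sz;

% Control operators and control amplitudes c_k at one time point
H_ctrl={Sx,Sy}; c=[2*pi*50, 0];

% Total Hamiltonian at this time point: H = H0 + sum_k c_k*H_k
H=H0;
for k=1:numel(H_ctrl)
    H=H+c(k)*H_ctrl{k};
end
```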

An example of six control operators for an HCF `spin_system` defined with Spinach:

```matlab
% Drift Hamiltonian
drift=hamiltonian(assume(spin_system,'nmr'));

% Control operators
LpH=operator(spin_system,'L+','1H');
LpC=operator(spin_system,'L+','13C');
LpF=operator(spin_system,'L+','19F');
LxH=(LpH+LpH')/2; LyH=(LpH-LpH')/2i;
LxC=(LpC+LpC')/2; LyC=(LpC-LpC')/2i;
LxF=(LpF+LpF')/2; LyF=(LpF-LpF')/2i;
```

The control operators should be defined as a cell array in a field of the `ctrl_sys` structure:

```matlab
ctrl_sys.control_ops={LxH,LyH,LxC,LyC,LxF,LyF};
```

### Initial and target states

In the context of state-to-state transfer, an initial and a target state must be defined. Example syntax for a three-spin `spin_system`, starting with all magnetisation on the first spin and with the target state of all magnetisation on the third spin:

```matlab
% Initial normalised state
rho=state(spin_system,{'Lz'},{1});
ctrl_sys.rho=rho/norm(rho);

% Desired normalised target state
target=state(spin_system,{'Lz'},{3});
ctrl_sys.target=target/norm(target);
```

### Initial guess

The GRAPE optimal control method relies on the piecewise constant approximation, where the total time allowed for the optimisation is split into `nsteps` intervals of constant control amplitude. Although the Krotov optimal control method does not rely on the piecewise constant approximation, time should also be split into (relatively small) steps, which numerically approximate continuous time.
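Under the piecewise constant approximation, the total propagator is a time-ordered product of slice propagators. A schematic pure-MATLAB sketch of this idea (Hilbert-space picture, illustrative operator and amplitude values, not the Spinach internals):

```matlab
% Spin-1/2 operators and an illustrative drift Hamiltonian
Sx=[0 1; 1 0]/2; Sy=[0 -1i; 1i 0]/2; Sz=[1 0; 0 -1]/2;
H0=2*pi*100*Sz; H_ctrl={Sx,Sy};

% Random control amplitudes: K controls by N time points
nsteps=50; dt=0.100/nsteps;
c=2*pi*1e3*randn(numel(H_ctrl),nsteps);

% Time-ordered product of slice propagators
U=eye(2);
for n=1:nsteps
    H_n=H0;
    for k=1:numel(H_ctrl)
        H_n=H_n+c(k,n)*H_ctrl{k};
    end
    U=expm(-1i*H_n*dt)*U;   % left-multiply: later slices act last
end
```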

An initial guess needs to be provided to any optimisation. In Spinach, the initial guess should be a matrix of size K×N, where K is the number of control operators and N is the number of time points.

Syntax example for the above HCF spin system:

```matlab
% Power level
ctrl_sys.power_level=2*pi*10e3;

% Pulse duration (seconds)
ctrl_sys.pulse_duration=0.100;

% Number of time steps
ctrl_sys.nsteps=50;

% Calculate the time step
ctrl_sys.time_step=ctrl_sys.pulse_duration/ctrl_sys.nsteps;

% Take a random guess for the pulse
guess=randn(numel(ctrl_sys.control_ops),ctrl_sys.nsteps)/10;
```

Note that it is useful to scale the initial guess, in this case by dividing by 10. The final stage in defining the control system is to collect the parameters into a single, well-defined structure, and to check them:

```matlab
ctrl_sys=control_sys(spin_system,ctrl_sys);
```

### Objective function

The fundamental basis of the optimal control module is an objective function (also called a cost function), which returns the fidelity cost and its gradient with respect to the controls:

```matlab
function [cost,cost_gradient] = cost_function(waveform)
```

The cost function handle should not be anonymous: use nested, local, or simple functions. An example of this objective function using GRAPE:

```matlab
function [cost,cost_grad] = cost_function(waveform)
    [~,fidelity,gradient]=grape(spin_system,ctrl_sys,drift,waveform);
    [pen_cost,pen_grad]=penalty(waveform,'SNS',-ones(size(waveform)),ones(size(waveform)));
    cost=fidelity+pen_cost;
    cost_grad=gradient+pen_grad;
end
```

### Numerical optimisation

fminnewton.m contains an implementation of steepest ascent/descent, quasi-Newton (SR1, BFGS and LBFGS), and Newton-Raphson minimisation/maximisation algorithms. An example that maximises the above cost function is:

```matlab
% Define the optimisation algorithm and maximum iterations
optim=struct('method','grad_ascent','max_iterations',100);

% Run the optimisation, calling cost_function through a handle
[oc_waveform,fidelity]=fminnewton(@cost_function,guess,optim);
```

The console output for this example shows iteratively better solutions for steering the system from its initial state to its target state. Here, `oc_waveform` is the optimal waveform found after 100 iterations of the gradient ascent algorithm, and `fidelity` is the resulting fidelity.

The gradient ascent algorithm does not have good convergence characteristics and takes a large number of iterations to reach good solutions. Further examples, with better convergence, are outlined in Optimal control with GRAPE.
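As a sketch of how a faster-converging quasi-Newton method might be selected, only the `method` field of the `optim` structure needs to change. The `'lbfgs'` method name is an assumption based on the algorithm list above and should be checked against fminnewton.m:

```matlab
% LBFGS quasi-Newton optimisation (method name assumed; see fminnewton.m)
optim=struct('method','lbfgs','max_iterations',100);
[oc_waveform,fidelity]=fminnewton(@cost_function,guess,optim);
```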

## See also

**Control functions**

control_sys.m, grape.m, krotov.m, penalty.m

**Optimisation functions**

fminkrotov.m, fminnewton.m, fminsimplex.m, hessprep.m, hessreg.m, lbfgs.m, linesearch.m, objeval.m, optim_report.m, optim_tols.m, quasinewton.m

**Other relevant functions**

hess_reorder.m, dirdiff.m, step.m

*Revision 3399, authors: Ilya Kuprov, David Goodwin*