# Appendix F: expert-level options

This section catalogs very advanced functionality that occasionally needs to be manually controlled.

## Summary of disable switches

The following strings may be added to the sys.disable cell array.

| Switch      | Functionality being disabled                                  |
|-------------|---------------------------------------------------------------|
| 'hygiene'   | health checks at start-up                                     |
| 'pt'        | automatic detection of non-interacting subspaces              |
| 'zte'       | automatic elimination of unpopulated states                   |
| 'symmetry'  | permutation symmetry factorization                            |
| 'krylov'    | Krylov propagation inside evolution.m for large Liouvillians  |
| 'clean-up'  | sparse array clean-up                                         |
| 'dss'       | destination state screening inside evolution.m                |
| 'expv'      | Krylov propagation inside step.m                              |
| 'trajlevel' | trajectory analysis inside evolution.m                        |
| 'merge'     | small subspace merging inside evolution.m                     |
| 'norm_coil' | coil normalization inside evolution.m                         |
| 'colorbar'  | colorbar drawing by plotting utilities                        |
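Several switches may be combined in a single cell array; the particular combination below is illustrative:

```matlab
% Illustrative example: skip start-up health checks and
% colorbar drawing, e.g. for a batch of quick test runs
sys.disable={'hygiene','colorbar'};
```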

## Summary of enable switches

The following strings may be added to the sys.enable cell array.

| Switch     | Functionality being enabled                                |
|------------|------------------------------------------------------------|
| 'gpu'      | GPU arithmetic on NVidia GPUs                              |
| 'caching'  | propagator caching                                         |
| 'greedy'   | greedy parallelisation                                     |
| 'xmemlist' | state-cluster cross-membership list generation in basis.m  |
| 'paranoia' | paranoid numerical accuracy settings                       |
| 'cowboy'   | loose numerical accuracy settings                          |
| 'dafuq'    | detailed parallel profiling                                |
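Enable switches may likewise be combined; the pairing below is an illustrative example:

```matlab
% Illustrative example: propagator caching together
% with paranoid numerical accuracy settings
sys.enable={'caching','paranoia'};
```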

## NVidia GPU support

Several functions in Spinach can make use of CUDA GPUs (we strongly recommend Titan V). If your computer has a recent NVidia graphics card, enabling that functionality may be beneficial. This is done by adding 'gpu' to the enable array:

    sys.enable={'gpu'};


Spinach kernel modules that can make use of GPU arithmetic are:

1. Time evolution in the evolution.m function.
2. Matrix exponential calculation in the propagator.m function.
3. Matrix inverse-times-vector operation during slow-passage detection in the slowpass.m function.
4. Krylov propagation in the krylov.m and step.m functions.

Very significant acceleration is observed (factor of 10 or more relative to the CPU) for systems that have state space dimensions in excess of 50,000.

Numerical pseudocontact shift solvers also support GPUs for the Fourier solver option. GPU support is enabled in ipcs.m and kpcs.m by specifying

    options.gpu=1;


For the typical 128x128x128 point grids used in paramagnetic centre probability density reconstructions from PCS, using a Tesla K40 card results in up to an order of magnitude acceleration relative to 32 CPU cores. Note that commodity NVidia graphics cards (e.g. GeForce) have artificially capped 64-bit floating-point performance - the Tesla range is more expensive, but strongly recommended.

## Propagator caching

Figure: wall clock time consumed by the simulation of the CN2D solid state NMR experiment for a 14N-13C spin pair in glycine with different matrix exponential caching settings in Spinach. Light grey columns: matrix caching switched off; medium grey columns: caching switched on with an empty cache at the start of the simulation; dark grey columns: all required matrix exponentials already present in the cache.

Spinach may be instructed to keep a disk record of the matrices that the propagator.m function has previously seen, so that propagators are not recomputed, but instead fetched from the disk the next time the matrix is encountered. This can save large amounts of time in simulations of very repetitive pulse sequences. To turn this functionality on, add 'caching' to the enable array:

    sys.enable={'caching'};


The cached propagators are placed into the /scratch directory in the Spinach root folder.

Spinach uses Matlab's built-in Java interface for the hashing operation that is used to generate matrix identifiers. For very large matrices it may be necessary to increase the Java heap size (Preferences/MATLAB/General/Java Heap Memory). It is not a good idea to set the default Matlab file save format to v7.3 because the files in that format are not compressed. If at all possible, leave the default value of v7.0 unchanged.

## Parallelisation control

Assuming your cluster has been set up to work with Matlab, Spinach offers a degree of control over parallel execution:

- sys.parallel - a cell array with two elements: the first is a string specifying the cluster profile, the second is the number of workers that should be started under that profile.
- sys.parprops - a cell array of cell arrays with option name-value pairs that will be assigned to the AdditionalProperties field of the parallel cluster object. The content here will be specific to your cluster.
- sys.scratch - a character string specifying the scratch directory. It is important that this is rapid storage that is accessible to all worker processes - MDCS writes far too much to the disk, and Spinach writes more.
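A hypothetical configuration might look as follows; the profile name, property name-value pairs, and scratch path below are illustrative and must match your local cluster setup:

```matlab
% Illustrative settings - adapt to your own cluster
sys.parallel={'HPC',32};                    % cluster profile, worker count
sys.parprops={{'WallTime','01:00:00'}};     % goes into AdditionalProperties
sys.scratch='/fast/scratch/spinach';        % fast storage visible to all workers
```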


If a parallel pool is already running when Spinach starts, this pool is re-used and the above settings are ignored.

The default setting in Matlab is that every worker process only runs in a single thread. Adding the 'greedy' flag to the enable array:

    sys.enable={'greedy'};


overrides the default setting and allows the worker processes to use as much CPU as they see fit. This is useful in situations when state spaces are dominated by a single large subspace. Note that, once a job with this setting is run, the setting persists in the parallel pool until the pool is restarted or a job with different settings is run.

This setting is usually counterproductive when an efficient parallelisation avenue (such as powder averaging) is present - use it carefully and make sure it does not actually slow your calculation down. We typically see a lot of advantage for multi-dimensional liquid state NMR, but not elsewhere.

## User input overrides

User input to the create.m function may be overridden by adding code to autoexec.m, which is executed by create.m before user input is parsed.
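As a hypothetical example, the following lines in autoexec.m would force propagator caching on for every simulation; the particular override is illustrative:

```matlab
% Illustrative autoexec.m content: force propagator
% caching on for every simulation
if ~isfield(sys,'enable'), sys.enable={}; end
if ~ismember('caching',sys.enable)
    sys.enable=[sys.enable,{'caching'}];
end
```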

Version 2.4, author: Ilya Kuprov