# Tensor train module


Tensor product expressions appear naturally in spin dynamics because the state space of a multi-spin system is a direct product of the state spaces of individual spins. A simple example is provided by the nuclear Zeeman interaction Hamiltonian

${{\hat{H}}_{\text{Z}}}=\sum\limits_{n=1}^{N}{{{{\vec{B}}}_{0}}\cdot {{\mathbf{A}}^{\left( n \right)}}\cdot {{{\hat{\vec{S}}}}^{\left( n \right)}}}$

where $N$ is the number of spins, ${{\vec{B}}_{0}}$ is the applied magnetic field, ${{\mathbf{A}}^{\left( n \right)}}$ are nuclear chemical shielding tensors, the sum runs over all nuclei, and the nuclear spin operators

${{\hat{\vec{S}}}^{\left( n \right)}}=\left[ \begin{matrix} \hat{S}_{\text{X}}^{\left( n \right)} & \hat{S}_{\text{Y}}^{\left( n \right)} & \hat{S}_{\text{Z}}^{\left( n \right)} \\ \end{matrix} \right]$

have the following direct product form

$\hat{S}_{\left\{ \text{X,Y,Z} \right\}}^{\left( n \right)}=\mathbf{1}\otimes \mathbf{1}\otimes ...\otimes {{\hat{\sigma }}_{\left\{ \text{X,Y,Z} \right\}}}\otimes ...\otimes \mathbf{1}\otimes \mathbf{1}$

where $\mathbf{1}$ denotes a unit matrix of appropriate dimension and the Pauli matrices $\left\{ {{{\hat{\sigma }}}_{\text{X}}},{{{\hat{\sigma }}}_{\text{Y}}},{{{\hat{\sigma }}}_{\text{Z}}} \right\}$ occur in the n-th position of the direct product sequence. This representation is known in numerical linear algebra as canonical polyadic (CP) format. Although CP representations have been known in magnetic resonance spectroscopy for a long time, they suffer in practice from rapid inflation – spin Hamiltonians encountered in NMR and ESR (electron spin resonance) systems can be complicated and, even for simple initial conditions, the number of terms in the canonical decomposition increases rapidly during system evolution. More ominously, the number of CP terms can change dramatically after small perturbations in the Hamiltonian or the system state, for example
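
To make the direct product structure concrete, here is a minimal numpy sketch (illustrative only, not part of the ttclass module) that builds such spin operators by chaining Kronecker products; the conventional factors of $\hbar/2$ are omitted:

```python
import numpy as np

# Pauli matrix and 2x2 identity (factors of hbar/2 omitted for clarity)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
eye2 = np.eye(2)

def spin_op(n, N, op):
    """1 x 1 x ... x op x ... x 1, with op in position n (0-based)."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == n else eye2)
    return out

# Total S_Z for N=3 spins: a CP sum with N rank-one terms
N = 3
Sz_total = sum(spin_op(n, N, sigma_z) for n in range(N))
```

Note that the full matrix is $2^N \times 2^N$, which is why such operators are expanded here only for illustration.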

$\sum\limits_{n=1}^{N}{\hat{S}_{\text{Z}}^{\left( n \right)}}=\underset{\varepsilon \to 0}{\mathop{\lim }}\,\frac{{{(\mathbf{1}+\varepsilon {{{\hat{\sigma }}}_{Z}})}^{\otimes N}}-{{\mathbf{1}}^{\otimes N}}}{\varepsilon }$

where ${{\hat{a}}^{\otimes N}}=\hat{a}\otimes \cdots \otimes \hat{a}$. The left hand side of this equation contains $N$ terms, but the expression approximating it on the right hand side contains only two. The upside is that $N$ terms can be approximated by two terms to any desired accuracy (the error is of order $\varepsilon$), reducing storage and CPU time. The drawback is that both terms of the approximation diverge as $\varepsilon \to 0$. Such instabilities make the CP format difficult to use in practice – in the presence of numerical rounding noise the number of terms in the decomposition quickly becomes equal to the dimension of the full state space, and any efficiency savings disappear.
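
This behaviour is easy to observe numerically. The following numpy sketch (illustrative, not part of the module) compares the exact $N$-term CP sum with the two-term finite-difference approximation at small $\varepsilon$:

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
eye2 = np.eye(2)

def kron_power(a, N):
    """N-fold Kronecker power a x a x ... x a."""
    out = np.array([[1.0]])
    for _ in range(N):
        out = np.kron(out, a)
    return out

N, eps = 4, 1e-6

# Exact total S_Z: a CP sum with N rank-one terms
exact = np.zeros((2 ** N, 2 ** N))
for n in range(N):
    term = np.array([[1.0]])
    for k in range(N):
        term = np.kron(term, sigma_z if k == n else eye2)
    exact += term

# Two-term approximation; accurate to O(eps), but each of the two
# Kronecker-power terms diverges as eps -> 0
approx = (kron_power(eye2 + eps * sigma_z, N) - kron_power(eye2, N)) / eps
err = np.max(np.abs(approx - exact))
```

For $N=4$ the error is of order $6\varepsilon$ (the next term of the binomial expansion), while each of the two terms has entries of order $1/\varepsilon$.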

Unlike the CP format, which is an open tensor network, closed tensor network formats are stable with respect to small perturbations. The most popular closed tensor network format was repeatedly rediscovered and is currently known under three different names: DMRG (density matrix renormalisation group) in condensed-matter physics, MPS (matrix product states) in computational physics, and TT (tensor train) in numerical linear algebra. Having duly tossed our three-sided die, we shall proceed to use the latter name. The general form of the tensor train format is:

$x=\tau ({{x}^{(1)}},\ldots ,{{x}^{(N)}})=\sum\limits_{{{\alpha }_{1}}=1}^{{{r}_{1}}}{\cdots }\sum\limits_{{{\alpha }_{N-1}}=1}^{{{r}_{N-1}}}{x_{{{\alpha }_{1}}}^{(1)}\otimes x_{{{\alpha }_{1}}{{\alpha }_{2}}}^{(2)}\otimes \cdots \otimes x_{{{\alpha }_{N-2}}{{\alpha }_{N-1}}}^{(N-1)}\otimes x_{{{\alpha }_{N-1}}}^{(N)}}$

For the total ${{\hat{S}}_{\text{Z}}}$ operator the tensor train format is

$\begin{matrix} \hat{H}=\sum\limits_{{{\alpha }_{1}}=1}^{2}{\cdots }\sum\limits_{{{\alpha }_{N-1}}=1}^{2}{\hat{H}_{{{\alpha }_{1}}}^{(1)}\otimes \hat{H}_{{{\alpha }_{1}}{{\alpha }_{2}}}^{(2)}\otimes \cdots \otimes \hat{H}_{{{\alpha }_{N-2}}{{\alpha }_{N-1}}}^{(N-1)}\otimes \hat{H}_{{{\alpha }_{N-1}}}^{(N)}} \\ \left[ \begin{matrix} \hat{H}_{1}^{(1)} \\ \hat{H}_{2}^{(1)} \\ \end{matrix} \right]=\left[ \begin{matrix} {{{\hat{\sigma }}}_{\text{Z}}} \\ \mathbf{1} \\ \end{matrix} \right],\text{ }\left[ \begin{matrix} \hat{H}_{11}^{(2)} & \hat{H}_{12}^{(2)} \\ \hat{H}_{21}^{(2)} & \hat{H}_{22}^{(2)} \\ \end{matrix} \right]=\cdots =\left[ \begin{matrix} \hat{H}_{11}^{(N-1)} & \hat{H}_{12}^{(N-1)} \\ \hat{H}_{21}^{(N-1)} & \hat{H}_{22}^{(N-1)} \\ \end{matrix} \right]=\left[ \begin{matrix} \mathbf{1} & 0 \\ {{{\hat{\sigma }}}_{\text{Z}}} & \mathbf{1} \\ \end{matrix} \right],\text{ }\left[ \begin{matrix} \hat{H}_{1}^{(N)} \\ \hat{H}_{2}^{(N)} \\ \end{matrix} \right]=\left[ \begin{matrix} \mathbf{1} \\ {{{\hat{\sigma }}}_{\text{Z}}} \\ \end{matrix} \right] \\ \end{matrix}$

Note that the number of terms in each summation (known as the bond dimension) is two, and that all factors of the decomposition are now bounded. The tensor train representation above has $4N-4$ operators $\hat{H}_{{{\alpha }_{n-1}}{{\alpha }_{n}}}^{(n)}$, each of which is either zero, the identity $\mathbf{1}$, or the Pauli matrix ${{\hat{\sigma }}_{\text{Z}}}$. The original CP representation of the same Hamiltonian has ${{N}^{2}}$ such operators – the tensor train representation is clearly more memory-efficient.
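
The decomposition above may be verified numerically by contracting the cores. A numpy sketch (illustrative only), which carries one partial Kronecker product per value of the bond index:

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
eye2 = np.eye(2)
zero2 = np.zeros((2, 2))

N = 4

# Cores of the tensor train above; entries are 2x2 operator blocks
first = [sigma_z, eye2]                     # H^(1), indexed by alpha_1
middle = [[eye2, zero2],
          [sigma_z, eye2]]                  # H^(2) = ... = H^(N-1)
last = [eye2, sigma_z]                      # H^(N), indexed by alpha_(N-1)

# Contract the train: partial[b] accumulates the Kronecker product
# of sites processed so far for each value b of the current bond index
partial = list(first)
for _ in range(N - 2):
    partial = [sum(np.kron(partial[a], middle[a][b]) for a in range(2))
               for b in range(2)]
H = sum(np.kron(partial[a], last[a]) for a in range(2))
```

The resulting $2^N \times 2^N$ matrix coincides with the CP sum $\sum_n \hat{S}_{\text{Z}}^{(n)}$ built by direct Kronecker products.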

Another notable example is the ZZ coupling Hamiltonian that often makes an appearance in simple linear spin chain models:

$\hat{J}=\sum\limits_{m>n}{\hat{S}_{\text{Z}}^{(m)}\hat{S}_{\text{Z}}^{(n)}}$

The CP format above, with $N(N-1)/2$ rank-one terms, contains ${{N}^{2}}(N-1)/2$ operators. The corresponding tensor train representation is:

$\begin{matrix} \hat{J}=\sum\limits_{{{\alpha }_{1}}=1}^{3}{\cdots }\sum\limits_{{{\alpha }_{N-1}}=1}^{3}{\hat{J}_{{{\alpha }_{1}}}^{(1)}\otimes \hat{J}_{{{\alpha }_{1}}{{\alpha }_{2}}}^{(2)}\otimes \cdots \otimes \hat{J}_{{{\alpha }_{N-2}}{{\alpha }_{N-1}}}^{(N-1)}\otimes \hat{J}_{{{\alpha }_{N-1}}}^{(N)}} \\ {{{\hat{J}}}^{(1)}}=\left[ \begin{matrix} 0 \\ {{{\hat{\sigma }}}_{Z}} \\ \mathbf{1} \\ \end{matrix} \right],\text{ }{{{\hat{J}}}^{(2)}}=\cdots ={{{\hat{J}}}^{(N-1)}}=\left[ \begin{matrix} \mathbf{1} & 0 & 0 \\ {{{\hat{\sigma }}}_{Z}} & \mathbf{1} & 0 \\ 0 & {{{\hat{\sigma }}}_{Z}} & \mathbf{1} \\ \end{matrix} \right], {{{\hat{J}}}^{(N)}}=\left[ \begin{matrix} \mathbf{1} \\ {{{\hat{\sigma }}}_{Z}} \\ 0 \\ \end{matrix} \right] \\ \end{matrix}$

Here each summation runs over three terms only, and the total number of $\hat{J}_{{{\alpha }_{n-1}}{{\alpha }_{n}}}^{(n)}$ matrices is $9N-12$, which is much smaller than the storage requirements of the CP format.
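
The same contraction scheme verifies this representation. A numpy sketch (illustrative only) with the $3\times 3$ cores above:

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
eye2 = np.eye(2)
zero2 = np.zeros((2, 2))

N = 4

# Cores of the ZZ coupling tensor train; entries are 2x2 operator blocks
first = [zero2, sigma_z, eye2]              # J^(1)
middle = [[eye2, zero2, zero2],
          [sigma_z, eye2, zero2],
          [zero2, sigma_z, eye2]]           # J^(2) = ... = J^(N-1)
last = [eye2, sigma_z, zero2]               # J^(N)

# Contract the train, one partial Kronecker product per bond index value
partial = list(first)
for _ in range(N - 2):
    partial = [sum(np.kron(partial[a], middle[a][b]) for a in range(3))
               for b in range(3)]
J = sum(np.kron(partial[a], last[a]) for a in range(3))
```

The result coincides with the CP double sum over all spin pairs built by direct Kronecker products.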

The above stands in sharp contrast with the classical approach to magnetic resonance simulations, where the Hamiltonian is represented as a ${{2}^{N}}\times {{2}^{N}}$ sparse matrix with all non-zero entries stored in memory. Even in sparse storage, CPU and memory requirements grow exponentially with the number of spins $N$, making simulations of large systems prohibitively expensive. To avoid this problem (known colloquially as the curse of dimensionality), all data should be kept in compressed tensor formats of the form given in the equations above, without ever expanding the Kronecker products.

A very considerable body of literature exists on manipulating expressions directly in tensor product formats. In particular, a given tensor may be converted into the TT format using sequential singular value decompositions. Given two tensors in the TT format, any bilinear operation (addition, element-wise multiplication, matrix-vector multiplication, etc.) may be carried out directly in the TT format. Efficient algorithms for Fourier transform and convolution have also recently emerged.
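
As an illustration of the sequential-SVD conversion mentioned above, here is a numpy sketch of the standard TT-SVD scheme (not the ttclass implementation itself):

```python
import numpy as np

def tt_svd(x, mode_sizes, tol=1e-10):
    """Convert a full tensor (flattened, row-major) into TT cores of
    shape (r_left, mode_size, r_right) by sequential truncated SVDs."""
    cores, r = [], 1
    c = np.reshape(x, (r * mode_sizes[0], -1))
    for n in range(len(mode_sizes) - 1):
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))   # truncate small singular values
        cores.append(u[:, :rank].reshape(r, mode_sizes[n], rank))
        r = rank
        c = (s[:rank, None] * vt[:rank]).reshape(r * mode_sizes[n + 1], -1)
    cores.append(c.reshape(r, mode_sizes[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full (flattened) tensor."""
    m = cores[0].reshape(cores[0].shape[1], -1)
    for core in cores[1:]:
        r, n, rp = core.shape
        m = (m @ core.reshape(r, n * rp)).reshape(-1, rp)
    return m.ravel()
```

For a generic tensor the ranks grow towards the middle of the train; for structured tensors (such as the Hamiltonians above) they stay small, which is the source of the memory savings.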

## Building tensor train objects

The primary function that creates tensor trains is ttclass.m.

## User-space functions

clearcoeff.m - absorbs the physical coefficient of the tensor train into its cores.

conj.m - element-wise conjugation of a matrix in a TT representation.

ctranspose.m - conjugate-transpose of a matrix in a TT representation.

diag.m - mimics diag for matrices. If the input is a ttclass matrix, returns a ttclass vector (a matrix with one column) containing the diagonal. If the input is a ttclass vector, returns a diagonal matrix.

dot.m - mimics the dot product operation for matrices.

full.m - mimics the corresponding operation for sparse matrices. Computes all Kronecker products and returns the full matrix. May result in out-of-memory errors for large matrices.

hdot.m - the Hadamard dot product operation.

ismatrix.m - returns true for non-empty TT objects.

isnumeric.m - returns true for non-empty TT objects.

isreal.m - returns true for real-valued ttclass objects.

kron.m - Kronecker product of matrices in the tensor train representation.

mean.m - mimics the corresponding operation for matrices. If the input is a ttclass matrix, returns a vector of its mean values along the specified dimension. If the input is a ttclass vector, returns the mean value as a scalar.

minus.m - tensor train subtraction.

mldivide.m - matrix left divide for TT objects.

mrdivide.m - divides a tensor train by a scalar.

mtimes.m - matrix product in the TT representation.

nnz.m - number of non-zeros used by the TT object.

norm.m - computes the norm of a ttclass object. Because of the internal structure of the tensor train format, only the Frobenius norm can be computed reasonably fast.

plus.m - tensor train addition.

rand.m - random tensor trains.

ranks.m - returns the tensor train ranks of the tensor trains buffered in the ttclass object.

rdivide.m - divides a tensor train by a scalar.

revert.m - computes a bit-reversal permutation of a tensor train.

shrink.m - compresses a given ttclass object into a single tensor train with optimal ranks.

sizes.m - returns the mode sizes of a ttclass object (dimensions of the elementary matrices).

size.m - returns the size of the matrix represented by the tensor train.

subsref.m - provides indexing functionality for tensor train objects.

sum.m - mimics the corresponding operation for matrices. If the input is a ttclass matrix, returns a vector of its sums along the specified dimension. If the input is a ttclass vector, returns the sum of its values as a scalar.

trace.m - returns the trace of a given matrix.

transpose.m - transpose of a matrix represented by a tensor train.

unit_like.m - unit tensor train with the same topology as the one supplied.

vec.m - reshapes a ttclass matrix into a ttclass vector with the same elements.