Tensor train module

Tensor product expressions appear naturally in spin dynamics because the state space of a multi-spin system is a direct product of the state spaces of individual spins. A simple example is provided by the nuclear Zeeman interaction Hamiltonian

[math]{{\hat{H}}_{\text{Z}}}=\sum\limits_{n=1}^{N}{{{{\vec{B}}}_{0}}\cdot {{\mathbf{A}}^{\left( n \right)}}\cdot {{{\hat{\vec{S}}}}^{\left( n \right)}}}[/math]

where [math]N[/math] is the number of spins, [math]{{\vec{B}}_{0}}[/math] is the applied magnetic field, [math]{{\mathbf{A}}^{\left( n \right)}}[/math] are nuclear chemical shielding tensors, the sum runs over all nuclei, and the nuclear spin operators

[math]{{\hat{\vec{S}}}^{\left( n \right)}}=\left[ \begin{matrix} \hat{S}_{\text{X}}^{\left( n \right)} & \hat{S}_{\text{Y}}^{\left( n \right)} & \hat{S}_{\text{Z}}^{\left( n \right)} \\ \end{matrix} \right][/math]

have the following direct product form

[math]\hat{S}_{\left\{ \text{X,Y,Z} \right\}}^{\left( n \right)}=\mathbf{1}\otimes \mathbf{1}\otimes ...\otimes {{\hat{\sigma }}_{\left\{ \text{X,Y,Z} \right\}}}\otimes ...\otimes \mathbf{1}\otimes \mathbf{1}[/math]

where [math]\mathbf{1}[/math] denotes a unit matrix of appropriate dimension and one of the Pauli matrices [math]\left\{ {{{\hat{\sigma }}}_{\text{X}}},{{{\hat{\sigma }}}_{\text{Y}}},{{{\hat{\sigma }}}_{\text{Z}}} \right\}[/math] occupies the n-th position of the direct product sequence. This representation is known in numerical linear algebra as the canonical polyadic (CP) format. Although CP representations have been known in magnetic resonance spectroscopy for a long time, they suffer in practice from rapid inflation: spin Hamiltonians encountered in NMR and ESR (electron spin resonance) systems can be complicated, and even for simple initial conditions the number of terms in the canonical decomposition increases rapidly during system evolution. More ominously, the number of CP terms can change dramatically after small perturbations of the Hamiltonian or the system state, for example

[math]\sum\limits_{n=1}^{N}{\hat{S}_{\text{Z}}^{\left( n \right)}}=\underset{\varepsilon \to 0}{\mathop{\lim }}\,\frac{{{(\mathbf{1}+\varepsilon {{{\hat{\sigma }}}_{Z}})}^{\otimes N}}-{{\mathbf{1}}^{\otimes N}}}{\varepsilon }[/math]

where [math]{{\hat{a}}^{\otimes N}}=\hat{a}\otimes \cdots \otimes \hat{a}[/math]. The left hand side of this equation contains [math]N[/math] terms, whereas the expression approximating it on the right hand side contains only two. The upside is that those [math]N[/math] terms can be approximated by two terms to any desired accuracy by choosing [math]\varepsilon [/math] small enough, reducing storage and CPU time. The drawback is that both terms of the approximation diverge as [math]\varepsilon \to 0[/math]. Such instabilities make the CP format difficult to use in practice: in the presence of numerical rounding noise the number of terms in the decomposition quickly becomes equal to the dimension of the full state space and all efficiency savings disappear.
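
This behaviour is easy to reproduce numerically. Below is a minimal MATLAB sketch (built-in functions only, not Spinach code) that assembles the total Z operator as a sum of [math]N[/math] direct product terms and compares it with the two-term approximation above for a small spin system:

<pre>
% Minimal MATLAB sketch (built-in functions only, not part of Spinach):
% the total Z operator versus its two-term CP approximation.
N       = 6;                 % number of spin-1/2 particles
sigma_z = [1 0; 0 -1];       % Pauli Z matrix
unit    = eye(2);            % unit matrix

% Exact operator: sum of N direct product terms
Sz_total = zeros(2^N);
for n = 1:N
    term = 1;
    for k = 1:N
        if k == n, term = kron(term, sigma_z);
        else,      term = kron(term, unit);   end
    end
    Sz_total = Sz_total + term;
end

% Two-term approximation: ((1 + eps*sigma_z)^(xN) - 1^(xN))/eps
epsilon = 1e-4; A = 1; B = 1;
for k = 1:N
    A = kron(A, unit + epsilon*sigma_z);
    B = kron(B, unit);
end
approx = (A - B)/epsilon;

% Approximation error, vanishes as epsilon -> 0
disp(norm(approx - Sz_total, 'fro'));
</pre>

Decreasing epsilon improves the agreement, but the norms of the two Kronecker product terms grow as [math]1/\varepsilon [/math], which is precisely the instability described above.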

Unlike the CP format, which is an open tensor network, closed tensor network formats are stable with respect to small perturbations. The most popular closed tensor network format has been rediscovered repeatedly and is currently known under three different names: DMRG (density matrix renormalization group) in condensed-matter physics, MPS (matrix product states) in computational physics, and TT (tensor train) in numerical linear algebra. Having duly tossed our three-sided die, we shall proceed to use the last of these names. The general form of the tensor train format is:

[math]x=\tau ({{x}^{(1)}},\ldots ,{{x}^{(N)}})=\sum\limits_{{{\alpha }_{1}}=1}^{{{r}_{1}}}{\cdots }\sum\limits_{{{\alpha }_{N-1}}=1}^{{{r}_{N-1}}}{x_{{{\alpha }_{1}}}^{(1)}\otimes x_{{{\alpha }_{1}}{{\alpha }_{2}}}^{(2)}\otimes \cdots \otimes x_{{{\alpha }_{N-2}}{{\alpha }_{N-1}}}^{(N-1)}\otimes x_{{{\alpha }_{N-1}}}^{(N)}}[/math]

For the total [math]{{\hat{S}}_{\text{Z}}}[/math] operator the tensor train format is

[math]\begin{matrix} \hat{H}=\sum\limits_{{{\alpha }_{1}}=1}^{2}{\cdots }\sum\limits_{{{\alpha }_{N-1}}=1}^{2}{\hat{H}_{{{\alpha }_{1}}}^{(1)}\otimes \hat{H}_{{{\alpha }_{1}}{{\alpha }_{2}}}^{(2)}\otimes \cdots \otimes \hat{H}_{{{\alpha }_{N-2}}{{\alpha }_{N-1}}}^{(N-1)}\otimes \hat{H}_{{{\alpha }_{N-1}}}^{(N)}} \\ \left[ \begin{matrix} \hat{H}_{1}^{(1)} \\ \hat{H}_{2}^{(1)} \\ \end{matrix} \right]=\left[ \begin{matrix} {{{\hat{\sigma }}}_{\text{Z}}} \\ \mathbf{1} \\ \end{matrix} \right],\text{ }\left[ \begin{matrix} \hat{H}_{11}^{(2)} & \hat{H}_{12}^{(2)} \\ \hat{H}_{21}^{(2)} & \hat{H}_{22}^{(2)} \\ \end{matrix} \right]=\cdots =\left[ \begin{matrix} \hat{H}_{11}^{(N-1)} & \hat{H}_{12}^{(N-1)} \\ \hat{H}_{21}^{(N-1)} & \hat{H}_{22}^{(N-1)} \\ \end{matrix} \right]=\left[ \begin{matrix} \mathbf{1} & 0 \\ {{{\hat{\sigma }}}_{\text{Z}}} & \mathbf{1} \\ \end{matrix} \right],\text{ }\left[ \begin{matrix} \hat{H}_{1}^{(N)} \\ \hat{H}_{2}^{(N)} \\ \end{matrix} \right]=\left[ \begin{matrix} \mathbf{1} \\ {{{\hat{\sigma }}}_{\text{Z}}} \\ \end{matrix} \right] \\ \end{matrix}[/math]

Note that the number of terms in each summation (known as the bond dimension) is two, and that all factors of the decomposition are now bounded. This tensor train representation contains [math]4N-4[/math] operators [math]\hat{H}_{{{\alpha }_{n-1}}{{\alpha }_{n}}}^{(n)}[/math], each of which is either zero, the identity [math]\mathbf{1}[/math], or the Pauli matrix [math]{{\hat{\sigma }}_{Z}}[/math]. The original sum-over-spins representation of the same Hamiltonian contains [math]{{N}^{2}}[/math] such operators, so the tensor train representation is clearly more memory-efficient.
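
The cores above are easy to validate by brute force for small spin systems. The following MATLAB sketch (built-in functions only, not Spinach code) contracts the tensor train and compares the result with the explicit sum of single-spin Z operators:

<pre>
% Minimal MATLAB sketch (built-in functions only, not part of Spinach):
% contracts the tensor train cores of the total Z operator and checks
% the result against the explicit sum of single-spin Z operators.
N       = 6;                 % number of spin-1/2 particles
sigma_z = [1 0; 0 -1];       % Pauli Z matrix
unit    = eye(2);            % unit matrix
zero    = zeros(2);          % zero matrix

% Tensor train cores, as in the equation above
first_core  = {sigma_z; unit};              % H^(1), bond dimension 2
middle_core = {unit, zero; sigma_z, unit};  % H^(2) ... H^(N-1)
last_core   = {unit; sigma_z};              % H^(N)

% Left-to-right contraction: carry one partial Kronecker
% product for each value of the bond index
P = first_core;
for n = 2:(N-1)
    Q = {zeros(2^n), zeros(2^n)};
    for b = 1:2
        for a = 1:2
            Q{b} = Q{b} + kron(P{a}, middle_core{a,b});
        end
    end
    P = Q;
end
H_tt = kron(P{1}, last_core{1}) + kron(P{2}, last_core{2});

% Reference: explicit sum of single-spin Z operators
H_ref = zeros(2^N);
for n = 1:N
    H_ref = H_ref + kron(eye(2^(n-1)), kron(sigma_z, eye(2^(N-n))));
end
disp(norm(H_tt - H_ref, 'fro'));            % should be zero
</pre>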

Another notable example is the ZZ coupling Hamiltonian that often makes an appearance in simple linear spin chain models:

[math]\hat{J}=\sum\limits_{m \gt n}{\hat{S}_{\text{Z}}^{(m)}\hat{S}_{\text{Z}}^{(n)}}[/math]

This CP format has [math]N(N-1)/2[/math] rank-one terms and contains [math]{{N}^{2}}(N-1)/2[/math] operators. The corresponding tensor train representation is:

[math]\begin{matrix} \hat{J}=\sum\limits_{{{\alpha }_{1}}=1}^{3}{\cdots }\sum\limits_{{{\alpha }_{N-1}}=1}^{3}{\hat{J}_{{{\alpha }_{1}}}^{(1)}\otimes \hat{J}_{{{\alpha }_{1}}{{\alpha }_{2}}}^{(2)}\otimes \cdots \otimes \hat{J}_{{{\alpha }_{N-2}}{{\alpha }_{N-1}}}^{(N-1)}\otimes \hat{J}_{{{\alpha }_{N-1}}}^{(N)}} \\ {{{\hat{J}}}^{(1)}}=\left[ \begin{matrix} 0 \\ {{{\hat{\sigma }}}_{Z}} \\ \mathbf{1} \\ \end{matrix} \right],\text{ }{{{\hat{J}}}^{(2)}}=\cdots ={{{\hat{J}}}^{(N-1)}}=\left[ \begin{matrix} \mathbf{1} & 0 & 0 \\ {{{\hat{\sigma }}}_{Z}} & \mathbf{1} & 0 \\ 0 & {{{\hat{\sigma }}}_{Z}} & \mathbf{1} \\ \end{matrix} \right], {{{\hat{J}}}^{(N)}}=\left[ \begin{matrix} \mathbf{1} \\ {{{\hat{\sigma }}}_{Z}} \\ 0 \\ \end{matrix} \right] \\ \end{matrix}[/math]

Here each summation runs over three terms only, and the total number of [math]\hat{J}_{{{\alpha }_{n-1}}{{\alpha }_{n}}}^{(n)}[/math] matrices is [math]9N-12[/math], far below the storage requirements of the CP format.
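
The same brute-force check works for the coupling Hamiltonian. The sketch below (again plain MATLAB, not Spinach code) contracts the three-by-three cores and compares the result with the explicit double sum over spin pairs:

<pre>
% Minimal MATLAB sketch (built-in functions only, not part of Spinach):
% contracts the tensor train cores of the ZZ coupling Hamiltonian and
% checks the result against the explicit double sum over spin pairs.
N       = 6;
sigma_z = [1 0; 0 -1]; unit = eye(2); zero = zeros(2);

% Tensor train cores, as in the equation above
first_core  = {zero; sigma_z; unit};
middle_core = {unit, zero, zero; sigma_z, unit, zero; zero, sigma_z, unit};
last_core   = {unit; sigma_z; zero};

% Left-to-right contraction, one partial product per bond index value
P = first_core;
for n = 2:(N-1)
    Q = {zeros(2^n), zeros(2^n), zeros(2^n)};
    for b = 1:3
        for a = 1:3
            Q{b} = Q{b} + kron(P{a}, middle_core{a,b});
        end
    end
    P = Q;
end
J_tt = zeros(2^N);
for a = 1:3
    J_tt = J_tt + kron(P{a}, last_core{a});
end

% Reference: explicit sum over all spin pairs m > n
Sz = cell(N,1);
for n = 1:N
    Sz{n} = kron(eye(2^(n-1)), kron(sigma_z, eye(2^(N-n))));
end
J_ref = zeros(2^N);
for m = 2:N
    for n = 1:(m-1)
        J_ref = J_ref + Sz{m}*Sz{n};
    end
end
disp(norm(J_tt - J_ref, 'fro'));   % should be zero
</pre>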

The above stands in sharp contrast with the classical approach to magnetic resonance simulations, in which the Hamiltonian is represented as a [math]{{2}^{N}}\times {{2}^{N}}[/math] sparse matrix with all non-zero entries stored in memory. Once the sparse matrix is assembled, CPU and memory requirements grow exponentially with the number of spins [math]N[/math], making simulations of large systems prohibitively expensive. To avoid this problem (known colloquially as the curse of dimensionality), all data should be kept in compressed tensor formats of the form given in the equations above, without ever expanding the Kronecker products.

A very considerable body of literature exists on manipulating expressions directly in tensor product formats. In particular, a given tensor may be converted into the TT format using sequential singular value decompositions. Given two tensors in the TT format, any bilinear operation (addition, element-wise multiplication, matrix-vector multiplication, etc.) may be carried out directly in the TT format. Efficient algorithms for Fourier transform and convolution have also recently emerged.
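
As an illustration of how bilinear operations avoid any expansion, two tensor trains can be added core by core: the first cores are concatenated along their bond index, the last cores are stacked, and each middle core of the result is a block-diagonal combination of the corresponding middle cores of the two operands. A minimal MATLAB sketch of this idea (using plain cell arrays of operator blocks rather than the Spinach ttclass object) is:

<pre>
% Minimal MATLAB sketch (plain cell arrays, not the Spinach ttclass):
% addition of two operator tensor trains without expanding anything.
% Each train is a cell array of cores; each core is itself a cell array
% indexed by its two bond indices and holding 2x2 operator blocks.
function sum_cores = tt_add(x_cores, y_cores)
    N = numel(x_cores); sum_cores = cell(N,1);
    % First core: concatenate along the single bond index
    sum_cores{1} = [x_cores{1}; y_cores{1}];
    % Middle cores: block-diagonal combination in both bond indices
    for n = 2:(N-1)
        [rx1, rx2] = size(x_cores{n});
        [ry1, ry2] = size(y_cores{n});
        core = repmat({zeros(2)}, rx1+ry1, rx2+ry2);
        core(1:rx1, 1:rx2)         = x_cores{n};
        core(rx1+1:end, rx2+1:end) = y_cores{n};
        sum_cores{n} = core;
    end
    % Last core: concatenate along the single bond index
    sum_cores{N} = [x_cores{N}; y_cores{N}];
end
</pre>

Note that the bond dimensions of the sum are the sums of the bond dimensions of the operands; this is why rank compression (performed in Spinach by shrink.m, listed below) has to be applied periodically to keep tensor train ranks under control.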

Building tensor train objects

The primary function that creates tensor trains is ttclass.m.

User-space functions

clearcoeff.m - absorbs the physical coefficient of the tensor train into its cores.

conj.m - element-wise conjugation of a matrix in a TT representation.

ctranspose.m - conjugate-transpose of a matrix in a TT representation.

diag.m - mimics diag for matrices. If the input is a ttclass matrix, returns a ttclass vector (a matrix with one column) containing its diagonal. If the input is a vector, returns a diagonal matrix.

dot.m - mimics the dot product operation for matrices.

full.m - mimics the corresponding operation for sparse matrices: computes all Kronecker products and returns the full matrix. May result in out-of-memory errors for large matrices.

hdot.m - the Hadamard dot product operation.

ismatrix.m - returns true for non-empty TT objects.

isnumeric.m - returns true for non-empty TT objects.

isreal.m - returns true for real-valued ttclass objects.

kron.m - Kronecker product of matrices in tensor train representation.

mean.m - mimics the corresponding operation for matrices. If the input is a ttclass matrix, returns a vector of its mean values along the specified dimension. If the input is a ttclass vector, returns the mean value as a scalar.

minus.m - tensor train subtraction.

mldivide.m - matrix left divide for TT objects.

mrdivide.m - divides a tensor train by a scalar.

mtimes.m - matrix product in the TT representation.

nnz.m - number of non-zeros used by the TT object.

norm.m - computes the norm of a ttclass object. Because of the internal structure of the tensor train format, only the Frobenius norm can be computed reasonably fast.

plus.m - tensor train addition.

rand.m - random tensor trains.

ranks.m - returns tensor train ranks of the tensor trains buffered in the ttclass.

rdivide.m - divides a tensor train by a scalar.

revert.m - computes a bit-reversal permutation of a tensor train.

shrink.m - compresses a given ttclass object into a single tensor train with optimal ranks.

sizes.m - returns mode sizes of a ttclass object (dimensions of the elementary matrices).

size.m - returns the size of the matrix represented by the tensor train.

subsref.m - provides indexing functionality for tensor train objects.

sum.m - mimics the corresponding operation for matrices. If the input is a ttclass matrix, returns a vector of its sums along the specified dimension. If the input is a ttclass vector, returns the sum of its values as a scalar.

trace.m - returns the trace of a given matrix.

transpose.m - transpose of a matrix represented by a tensor train.

unit_like.m - unit tensor train with the same topology as the one supplied.

vec.m - reshapes a ttclass matrix into a ttclass vector with the same elements.