Tensor trains


This document describes the experimental tensor train class implementation in Spinach; the format is also known as the matrix product operator (MPO) representation and is closely associated with DMRG methods. Note that in practical applications the tensor train formalism is a veritable minefield. DO NOT USE IT unless you are an expert in numerical linear algebra and know exactly what you are doing.

Tensor product formats

Operators and states are generated in Spinach as sums of rank-one (elementary) tensors, for example:

[math]\hat{E} \otimes \hat{L}_{\mathrm{Z}} \otimes \hat{E} \otimes \hat{L}_{-} \otimes \hat{E}[/math]

Normally, Spinach represents this operator as a sparse matrix. The ttclass object instead keeps and manipulates the operator without ever computing the Kronecker products. This leads to significant memory savings, but can make calculations rather slow. The ttclass object in Spinach is designed to behave like a matrix: all of the complex housekeeping and tensor product mathematics, including automatic recompression and optimization algorithms, works under the bonnet.
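For comparison, here is the explicit dense construction that the tensor train format avoids (a minimal sketch; A_dense is an illustrative variable name, not part of the ttclass interface). For n spin-1/2 particles the dense matrix has 4^n elements, whereas the tensor train stores only the n small factors:

    % Explicit Kronecker product of the five factors: a full 32x32 matrix
    p=pauli(2);
    A_dense=kron(kron(kron(kron(p.u,p.z),p.u),p.m),p.u);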

Creation and basic operations

Let us generate the example operator above in the tensor train representation. First, we get the constituent Pauli matrices:

    p=pauli(2);    % Pauli matrices; p.u (unit), p.z (Lz) and p.m (lowering) are used below

Instead of sending them through kron, we send them to the tensor train class constructor:

    A=ttclass(1,{p.u; p.z; p.u; p.m; p.u},0);    % factors of E ⊗ Lz ⊗ E ⊗ L- ⊗ E

where the first argument is a scalar multiplier to be placed in front of the operator, the cell array contains the Kronecker factors in the order in which they appear in the product, and the last argument is the internal round-off tolerance. We can examine the result by typing:

    >> A
    A = 32x32 ttclass array with properties:
        coeff: 1
        cores: {5x1 cell}
        tolerance: 0
        debuglevel: 0

We see that A represents a 32x32 matrix and contains five cores. We can index into A as if it were an ordinary matrix:

    >> A(1,1)
    ans = 0
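Indexing pulls individual matrix elements straight out of the cores, without ever forming the 32x32 matrix. As a sanity check (a sketch; the expected value follows from the Kronecker product structure, in which L- has its only nonzero entry at position (2,1)), element (3,1) should equal 0.5:

    el=A(3,1);    % E(1,1)*Lz(1,1)*E(1,1)*Lm(2,1)*E(1,1) = 1*0.5*1*1*1 = 0.5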

We can also use standard Matlab operations. For example, we can sum A with itself:

    >> B=A+A
    B = 32x32 ttclass array with properties:
        coeff: [1 1]
        cores: {5x2 cell}
        tolerance: [0 0]
        debuglevel: 0

This is now a buffered sum of two tensor trains: the coeff array has two elements, and cores has two columns. Summation is an expensive operation in the tensor train representation, and so the object buffers tensor trains when you sum them rather than merging them at once. To force the summation, use shrink as follows:

    >> B=shrink(B)
    B = 32x32 ttclass array with properties:
        coeff: 2.8284
        cores: {5x1 cell}
        tolerance: 0
        debuglevel: 0

We can verify that B is indeed equal to 2*A by computing the Frobenius norm of the difference:

    >> norm(B-2*A,'fro')
    ans = 7.1951e-16

Note that, like every floating-point arithmetic system, the tensor train representation has round-off errors. Due to the nature of the format, these are more severe and less predictable than the round-off errors of standard double-precision arithmetic. You have been warned.
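As an illustration of the tolerance parameter (a minimal sketch: the 1e-12 value and the variable names are illustrative choices, and the assumption here is that a nonzero tolerance allows the automatic recompression to truncate on that scale rather than keep the train exact):

    % Same operator, but with a finite internal round-off tolerance
    C=ttclass(1,{p.u; p.z; p.u; p.m; p.u},1e-12);
    D=shrink(C+C);            % buffered sum, then forced recompression
    err=norm(D-2*C,'fro');    % error of the result; expected to stay within the tolerance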