By a tensor problem, in general, we mean one in which all input and output data are
given, exactly or approximately, in tensor formats defined by a number of parameters
that is small compared to the total amount of data. For such problems we propose
to seek algorithms that work with data exclusively in tensor formats;
the price we pay is a contamination of the data by some approximation
(recompression) at every operation.
We show that a certain intentional corruption of data, under rather mild assumptions,
still preserves the convergence rate of superlinear iterations. Then, we discuss
which tensor formats are best suited and advocate using the Tucker format
for all operands.
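The recompression step mentioned above can be illustrated with a minimal sketch of truncated HOSVD for a third-order array, using numpy. The function names `mode_product` and `tucker_truncate` and the choice of a dense 3D array are illustrative assumptions, not part of the original text; real tensor-format algorithms would keep the core and factors compressed rather than expanding back.

```python
import numpy as np

def mode_product(T, U, mode):
    # Multiply tensor T by matrix U along the given mode.
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def tucker_truncate(T, ranks):
    """Truncated HOSVD sketch: compress T to Tucker form with the given
    multilinear ranks and expand back -- a stand-in for the
    'recompression' applied after each tensor-format operation."""
    # Leading left singular vectors of each mode unfolding give the factors.
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U = np.linalg.svd(unfolding, full_matrices=False)[0][:, :r]
        factors.append(U)
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)   # project onto dominant subspaces
    approx = core
    for mode, U in enumerate(factors):
        approx = mode_product(approx, U, mode)  # expand back to full size
    return approx
```

If the multilinear ranks of `T` are at most `ranks`, the truncation is exact; otherwise it introduces the controlled "contamination" the abstract refers to.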
As an application, we present new approximate matrix inversion algorithms with linear
and even sublinear complexity in the matrix size, together with recent suggestions for
the construction of eigenvalue solvers. We also discuss the gains from combining tensor
and typical Toeplitz-like structures. In particular, a wide class of doubly Toeplitz
(block Toeplitz with Toeplitz blocks) matrices can be approximately inverted with
complexity on the order of the square root of the matrix size.
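The interplay of a superlinear iteration with per-step truncation can be sketched with the classical Newton (Hotelling-Schulz) iteration for matrix inversion, using a plain SVD truncation as a stand-in for tensor-format recompression. The function name and the rank parameter are illustrative assumptions; the abstract's algorithms work with compressed tensor representations of the matrix, not a dense SVD.

```python
import numpy as np

def truncated_newton_inverse(A, rank, iters=30):
    """Newton (Hotelling-Schulz) iteration X <- X (2I - A X) for A^{-1},
    with an SVD-based rank truncation after each step standing in for
    the recompression performed in tensor formats."""
    n = A.shape[0]
    I = np.eye(n)
    # Classical starting guess guaranteeing convergence (Pan-Schreiber).
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = X @ (2 * I - A @ X)          # superlinear (quadratic) step
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # "recompression" step
    return X
```

With `rank = n` the truncation is vacuous and the iteration converges quadratically; with smaller ranks the truncation perturbs each iterate, which is exactly the kind of intentional corruption the abstract argues still preserves the superlinear rate under mild assumptions.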