Compressing existing data#

using Plots

import TensorCrossInterpolation as TCI
using QuanticsTCI

TCI#

Let us demonstrate how to compress existing data using TCI. First, we create a test dataset on a 3D grid.

# Replace this line with the dataset to be tested for compressibility.
grid = range(-pi, pi; length=200)
dataset = [cos(x) + cos(y) + cos(z) for x in grid, y in grid, z in grid]
size(dataset)
(200, 200, 200)
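
For reference, we can check how much memory the raw array occupies (a quick sketch; 200^3 Float64 entries amount to 64 MB):

# Memory occupied by the raw dataset: 200^3 entries of 8 bytes each.
sizeof(dataset) / 1e6  # in MB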

We now construct a TCI.

# Construct TCI
tolerance = 1e-5
tt, ranks, errors = TCI.crossinterpolate2(
    Float64, i -> dataset[i...], collect(size(dataset)), tolerance=tolerance)

# Check error
ttdataset = [tt([i, j, k]) for i in axes(grid, 1), j in axes(grid, 1), k in axes(grid, 1)]
errors = abs.(ttdataset .- dataset)
println(
    "TCI of the dataset with tolerance $tolerance has link dimensions $(TCI.linkdims(tt)), "
    *
    "for a max error of $(maximum(errors))."
)
TCI of the dataset with tolerance 1.0e-5 has link dimensions [2, 2], for a max error of 1.7763568394002505e-15.
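
To see the compression, we can compare the number of tensor-train parameters, estimated from the link dimensions reported above, with the number of dataset entries (a rough sketch):

# Estimate the number of TT parameters from the link dimensions (a sketch).
chi = [1, TCI.linkdims(tt)..., 1]  # bond dimensions including the trivial outer bonds
nparams = sum(chi[n] * size(dataset, n) * chi[n+1] for n in 1:ndims(dataset))
println("TT parameters: $nparams vs. dataset entries: $(length(dataset))")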

Let us plot the original data and the TCI error on a 2D cut.

# Original data
c1 = heatmap(dataset[:, :, 1], aspect_ratio=1)
title!("Original data")

# TCI error
c2 = heatmap(log10.(abs.(errors[:, :, 1])), aspect_ratio=1)
title!("log10 of abs error of TCI")

plot(c1, c2, size=(800, 500))

QTCI#

We now demonstrate how to compress existing data using QTCI.

# Number of bits
R = 8

# Replace with your dataset
grid = range(-pi, pi; length=2^R + 1)[1:end-1] # exclude the end point
dataset = [cos(x) + cos(y) + cos(z) for x in grid, y in grid, z in grid]
size(dataset)
(256, 256, 256)
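
The endpoint is excluded so that each dimension contains exactly 2^R grid points, as required for an R-bit quantics representation:

# Each dimension must contain exactly 2^R points for an R-bit quantics representation.
@assert length(grid) == 2^R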

QuanticsTCI.jl#

Let us first use the quanticscrossinterpolate function from QuanticsTCI.jl.

# Perform QTCI
tolerance = 1e-5
qtt, ranks, errors = quanticscrossinterpolate(
    dataset, tolerance=tolerance, unfoldingscheme=:fused)
(QuanticsTCI.QuanticsTensorCI2{Float64}(TensorCrossInterpolation.TensorCI2{Float64} with rank 6, QuanticsGrids.InherentDiscreteGrid{3}(8, (1, 1, 1), 2, :fused, (1, 1, 1)), TensorCrossInterpolation.CachedFunction{Float64, UInt128} with 28941 entries), [6, 6, 6], [5.593760408576879e-16, 6.844206914176828e-16, 6.844206914176828e-16])
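
The returned qtt object can be evaluated directly at grid indices, so a quick spot check against the dataset is straightforward (a minimal sketch):

# Spot check: evaluate the QTT at a single grid index and compare with the dataset.
qtt([1, 2, 3]) ≈ dataset[1, 2, 3]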

Below, we compute the error for the whole tensor, which may be too expensive for a large \(\mathcal{R}\).

# Check error
qttdataset = [qtt([i, j, k]) for i in axes(grid, 1), j in axes(grid, 1), k in axes(grid, 1)]
qtterrors = abs.(qttdataset .- dataset)
println(
    "Quantics TCI compression of the dataset with tolerance $tolerance has " *
    "link dimensions $(TCI.linkdims(qtt.tci)), for a max error of $(maximum(qtterrors))."
)
Quantics TCI compression of the dataset with tolerance 1.0e-5 has link dimensions [3, 6, 6, 6, 6, 6, 4], for a max error of 4.884981308350689e-15.
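
For a larger \(\mathcal{R}\), a cheaper alternative is to estimate the error from randomly sampled indices instead of the full tensor. A minimal sketch reusing qtt, dataset, and grid from above:

# Estimate the maximum error from a random sample of grid indices (a sketch).
nsamples = 100
sampleindices = [[rand(axes(grid, 1)) for _ in 1:3] for _ in 1:nsamples]
maxsampleerror = maximum(abs(qtt(idx) - dataset[idx...]) for idx in sampleindices)
println("Max error over $nsamples random samples: $maxsampleerror")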

Again, let us plot the original data and the TCI error on a 2D cut.

# Original data
c1 = heatmap(dataset[:, :, 1], aspect_ratio=1)
title!("Original data")

c2 = heatmap(log10.(abs.(qtterrors[:, :, 1])), aspect_ratio=1)
title!("log10 of abs error of QTCI")

plot(c1, c2, size=(800, 500))

QuanticsGrids.jl + TensorCrossInterpolation.jl#

QuanticsTCI.jl is user-friendly, but using QuanticsGrids.jl together with TensorCrossInterpolation.jl directly provides greater flexibility.

import QuanticsGrids as QG


function create_qgrid(R, qttdataset)
    # 3D quantics grid with R bits and the fused representation (default)
    qgrid = QG.InherentDiscreteGrid{3}(R)

    # Function that returns the value of the dataset at the given quantics index
    qf(qindex) = qttdataset[QG.quantics_to_grididx(qgrid, qindex)...]

    return qgrid, qf
end

qgrid, qf = create_qgrid(R, qttdataset)

# Value at the quantics index [1, 1, ..., 1], which corresponds to the grid index (1, 1, 1).
qf(fill(1, R)) == qttdataset[1, 1, 1]
true

The create_qgrid function takes a grid resolution (R) and a dataset (qttdataset) and returns a 3D quantics grid together with a closure (qf) that accesses the dataset through quantics indices. This design reduces reliance on global variables, leading to faster function evaluations.
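
QuanticsGrids.jl also provides the inverse conversion, grididx_to_quantics (not used above; the counterpart of quantics_to_grididx), which maps a grid index to its fused quantics index. A quick sketch: for the grid index (1, 1, 1) it should return the all-ones quantics index used above.

# Convert a grid index back to its fused quantics index
# (grididx_to_quantics is the inverse of quantics_to_grididx).
QG.grididx_to_quantics(qgrid, (1, 1, 1))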

The effectiveness of TCI depends significantly on the choice of initial pivots. Good initial pivots are points where the function to be interpolated takes large absolute values.

# Local dimensions: with the fused representation, each of the R quantics legs
# combines one bit from each of the 3 dimensions, giving a local dimension of 2^3 = 8.
localdims = fill(8, R)

# Generate initial pivots by maximizing the absolute value of the function, starting from random points.
# This is a heuristic for finding good initial pivots.
# The optimization is performed by single-site updates.
ninitialpivots = 10
initialpivots = [TCI.optfirstpivot(qf, localdims, [rand(1:d) for d in localdims]) for _ in 1:ninitialpivots]

for p in initialpivots
    println("Initial pivot: $p $(qf(p))")
end
Initial pivot: [1, 8, 8, 8, 8, 8, 8, 8] 2.999096456088612
Initial pivot: [5, 5, 5, 5, 5, 5, 5, 5] -2.9996988186962033
Initial pivot: [7, 7, 7, 7, 7, 7, 7, 7] -2.9993976373924083
Initial pivot: [3, 6, 6, 6, 6, 6, 6, 6] 2.9993976373924074
Initial pivot: [3, 6, 6, 6, 6, 6, 6, 6] 2.9993976373924074
Initial pivot: [4, 4, 4, 4, 4, 4, 4, 4] -2.9993976373924083
Initial pivot: [3, 6, 6, 6, 6, 6, 6, 6] 2.9993976373924074
Initial pivot: [3, 3, 3, 3, 3, 3, 3, 3] -2.999698818696204
Initial pivot: [8, 1, 1, 1, 1, 1, 1, 1] 3.0
Initial pivot: [3, 6, 6, 6, 6, 6, 6, 6] 2.9993976373924074
# Perform (Q)TCI
tolerance = 1e-5
qtt, ranks, errors = TCI.crossinterpolate2(Float64, qf, localdims, initialpivots; tolerance=tolerance)
(TensorCrossInterpolation.TensorCI2{Float64} with rank 6, [6, 6, 6], [2.535009239560774e-15, 2.535009239560774e-15, 2.535009239560774e-15])
# Test error
qtt(initialpivots[1]) ≈ qf(initialpivots[1])
true
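
Finally, the resulting quantics tensor train can be evaluated at an arbitrary grid point by converting the grid index to a quantics index via QG.grididx_to_quantics (introduced in the sketch above). A minimal sketch reusing qgrid and dataset:

# Evaluate the QTT at the grid index (10, 20, 30) and compare with the original data.
qindex = collect(QG.grididx_to_quantics(qgrid, (10, 20, 30)))  # collect ensures a Vector of indices
qtt(qindex) ≈ dataset[10, 20, 30]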