Documentation
Documentation of all types and methods in module TensorCrossInterpolation.
Matrix approximation
Matrix cross interpolation (MCI)
TensorCrossInterpolation.MatrixCI — Type
mutable struct MatrixCrossInterpolation{T}
Represents a cross interpolation of a matrix - or, to be more precise, the data necessary to evaluate a cross interpolation. This data can be fed into various methods such as evaluate(c, i, j) to get interpolated values, and be improved by dynamically adding pivots.
TensorCrossInterpolation.AinvtimesB — Method
AinvtimesB(A::Matrix, B::Matrix)
Calculates the matrix product $A^{-1} B$, given a square matrix $A$ and a rectangular matrix $B$, in a numerically stable way using QR decomposition. This is useful in case $A^{-1}$ is ill-conditioned.
TensorCrossInterpolation.AtimesBinv — Method
AtimesBinv(A::Matrix, B::Matrix)
Calculates the matrix product $A B^{-1}$, given a rectangular matrix $A$ and a square matrix $B$, in a numerically stable way using QR decomposition. This is useful in case $B^{-1}$ is ill-conditioned.
TensorCrossInterpolation.addpivot! — Method
function addpivot!(
a::AbstractMatrix{T},
ci::MatrixCrossInterpolation{T},
pivotindices::AbstractArray{T, 2})
Add a new pivot given by pivotindices to the cross interpolation ci of the matrix a.
TensorCrossInterpolation.addpivot! — Method
function addpivot!(
a::AbstractMatrix{T},
ci::MatrixCrossInterpolation{T})
Add a new pivot that maximizes the error to the cross interpolation ci of the matrix a.
TensorCrossInterpolation.crossinterpolate — Method
function crossinterpolate(
a::AbstractMatrix{T};
tolerance=1e-6,
maxiter=200,
firstpivot=argmax(abs.(a))
) where {T}
Cross interpolate an $m\times n$ matrix $a$, i.e. find a set of row indices $I$ and column indices $J$ such that $a(1\ldots m, 1\ldots n) \approx a(1\ldots m, J)\, (a(I, J))^{-1}\, a(I, 1\ldots n)$.
TensorCrossInterpolation.findnewpivot — Method
findnewpivot(
a::AbstractMatrix{T},
ci::MatrixCrossInterpolation{T},
row_indices::Union{Vector{Int}},
col_indices::Union{Vector{Int}})
Finds the pivot that maximizes the local error across all components of a and ci within the subset given by row_indices and col_indices. By default, all available rows of ci will be considered.
TensorCrossInterpolation.localerror — Method
localerror(
a::AbstractMatrix{T},
ci::MatrixCrossInterpolation{T},
row_indices::Union{AbstractVector{Int},Colon,Int}=Colon(),
col_indices::Union{AbstractVector{Int},Colon,Int}=Colon())
Returns all local errors of the cross interpolation ci with respect to matrix a. To find local errors on a submatrix, specify row_indices or col_indices.
Adaptive cross approximation (ACA)
TensorCrossInterpolation.addpivot! — Method
function addpivot!(a::AbstractMatrix{T}, aca::MatrixACA{T})
Find and add a new pivot according to the ACA algorithm in Kumar (2016).
TensorCrossInterpolation.uk — Method
Compute u_k(x) for all x.
TensorCrossInterpolation.vk — Method
Compute v_k(y) for all y.
Rank-revealing LU decomposition (rrLU)
Base.:\ — Method
Override solving Ax = b using LU decomposition.
TensorCrossInterpolation.lastpivoterror — Method
Correct estimate of the last pivot error. Special care is taken for a full-rank matrix: in that case, the last pivot error is set to zero.
TensorCrossInterpolation.rrlu! — Method
function rrlu!(
A::AbstractMatrix{T};
maxrank::Int=typemax(Int),
reltol::Number=1e-14,
abstol::Number=0.0,
leftorthogonal::Bool=true
)::rrLU{T} where {T}
Rank-revealing LU decomposition.
TensorCrossInterpolation.rrlu — Method
function rrlu(
A::AbstractMatrix{T};
maxrank::Int=typemax(Int),
reltol::Number=1e-14,
abstol::Number=0.0,
leftorthogonal::Bool=true
)::rrLU{T} where {T}
Rank-revealing LU decomposition.
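Example (a sketch based on the signature above; not taken from the package documentation):

import TensorCrossInterpolation as TCI
# A 200x300 matrix with numerical rank 5.
A = randn(200, 5) * randn(5, 300)
# Non-destructive rank-revealing LU with the documented keyword arguments;
# the result is an rrLU{Float64} factorization object.
lu = TCI.rrlu(A; maxrank=20, reltol=1e-12)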
TensorCrossInterpolation.solve — Method
Solve (LU) x = b, where
- L: lower triangular matrix
- U: upper triangular matrix
- b: right-hand side vector
Returns x.
Note: not optimized for performance.
Tensor trains and tensor cross interpolation
Tensor train (TT)
TensorCrossInterpolation.AbstractTensorTrain — Type
abstract type AbstractTensorTrain{V} <: Function end
Abstract type that is a supertype of all tensor train types found in this module. The main purpose of this type is the definition of functions such as rank and linkdims that are shared between different tensor train classes.
When iterated over, the tensor train will return each of the tensors in order.
Implementations: TensorTrain, TensorCI2, TensorCI1
Base.:+ — Method
function (+)(lhs::AbstractTensorTrain{V}, rhs::AbstractTensorTrain{V}) where {V}
Addition of two tensor trains. If c = a + b, then c(v) ≈ a(v) + b(v) at each index set v. Note that this function increases the bond dimension, i.e. $\chi_{\text{result}} = \chi_1 + \chi_2$ if the original tensor trains had bond dimensions $\chi_1$ and $\chi_2$. Can be combined with automatic recompression by calling add.
Base.:- — Method
function (-)(lhs::AbstractTensorTrain{V}, rhs::AbstractTensorTrain{V}) where {V}
Subtraction of two tensor trains. If c = a - b, then c(v) ≈ a(v) - b(v) at each index set v. Note that this function increases the bond dimension, i.e. $\chi_{\text{result}} = \chi_1 + \chi_2$ if the original tensor trains had bond dimensions $\chi_1$ and $\chi_2$. Can be combined with automatic recompression by calling subtract (see documentation for add).
Base.length — Method
function length(tt::AbstractTensorTrain{V}) where {V}
Length of the tensor train, i.e. the number of tensors in the tensor train.
Base.sum — Method
function sum(tt::TensorTrain{V}) where {V}
Evaluates the sum of the tensor train approximation over all lattice sites in an efficient factorized manner.
LinearAlgebra.norm — Method
Frobenius norm of a tensor train.
LinearAlgebra.norm2 — Method
Squared Frobenius norm of a tensor train.
LinearAlgebra.rank — Method
function rank(tt::AbstractTensorTrain{V}) where {V}
Rank of the tensor train, i.e. the maximum link dimension.
TensorCrossInterpolation.add — Method
function add(
lhs::AbstractTensorTrain{V}, rhs::AbstractTensorTrain{V};
factorlhs=one(V), factorrhs=one(V),
tolerance::Float64=0.0, maxbonddim::Int=typemax(Int)
) where {V}
Addition of two tensor trains. If C = add(A, B), then C(v) ≈ A(v) + B(v) at each index set v. Note that this function increases the bond dimension, i.e. $\chi_{\text{result}} = \chi_1 + \chi_2$ if the original tensor trains had bond dimensions $\chi_1$ and $\chi_2$.
Arguments:
- lhs, rhs: Tensor trains to be added.
- factorlhs, factorrhs: Factors to multiply each tensor train by before addition.
- tolerance, maxbonddim: Parameters to be used for the recompression step.
Returns: A new TensorTrain representing the function factorlhs * lhs(v) + factorrhs * rhs(v).
See also: +
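Example (a usage sketch relying only on the signatures documented in this section, together with the TensorTrain constructor described further below):

import TensorCrossInterpolation as TCI
# Two random tensor trains with 4 sites and local dimension 2. Each core has shape
# (left link, site index, right link); the boundary links have dimension 1.
maketrain(chi) = TCI.TensorTrain(
    [randn(1, 2, chi), randn(chi, 2, chi), randn(chi, 2, chi), randn(chi, 2, 1)])
a = maketrain(3)
b = maketrain(4)
# c(v) ≈ 2a(v) - b(v), recompressed to a tolerance of 1e-10.
c = TCI.add(a, b; factorlhs=2.0, factorrhs=-1.0, tolerance=1e-10)
c([1, 2, 1, 2])   # evaluate the result at one index set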
TensorCrossInterpolation.evaluate — Method
function evaluate(
tt::TensorTrain{V},
indexset::Union{AbstractVector{LocalIndex}, NTuple{N, LocalIndex}}
)::V where {N, V}
Evaluates the tensor train tt at indices given by indexset.
TensorCrossInterpolation.evaluate — Method
function evaluate(tt::TensorTrain{V}, indexset::CartesianIndex) where {V}
Evaluates the tensor train tt at indices given by indexset.
TensorCrossInterpolation.linkdim — Method
function linkdim(tt::AbstractTensorTrain{V}, i::Int)::Int where {V}
Bond dimension at the link between tensors $T_i$ and $T_{i+1}$ of the tensor train.
TensorCrossInterpolation.linkdims — Method
function linkdims(tt::AbstractTensorTrain{V})::Vector{Int} where {V}
Bond dimensions along the links between the $T$ tensors of the tensor train.
See also: rank
TensorCrossInterpolation.sitedim — Method
function sitedim(tt::AbstractTensorTrain{V}, i::Int)::Int where {V}
Dimension of the ith site index along the tensor train.
TensorCrossInterpolation.sitedims — Method
function sitedims(tt::AbstractTensorTrain{V})::Vector{Vector{Int}} where {V}
Dimensions of the site indices (local indices, physical indices) of the tensor train.
TensorCrossInterpolation.sitetensor — Method
sitetensor(tt::AbstractTensorTrain{V}, i) where {V}
The tensor at site i of the tensor train.
TensorCrossInterpolation.sitetensors — Method
sitetensors(tt::AbstractTensorTrain{V}) where {V}
The tensors that make up the tensor train.
TensorCrossInterpolation.subtract — Method
function subtract(
lhs::AbstractTensorTrain{V}, rhs::AbstractTensorTrain{V};
tolerance::Float64=0.0, maxbonddim::Int=typemax(Int)
)
Subtract two tensor trains lhs and rhs. See add.
TensorCrossInterpolation.TTCache — Type
struct TTCache{ValueType}
Cached evaluation of a tensor train. This is useful when the same TT is evaluated multiple times with the same indices. The number of site indices per tensor core can be arbitrary, irrespective of the number of site indices of the original tensor train.
TensorCrossInterpolation.TensorTrain — Type
struct TensorTrain{ValueType}
Represents a tensor train, also known as a matrix product state (MPS).
The tensor train can be evaluated using standard function call notation:
tt = TensorTrain(...)
value = tt([1, 2, 3, 4])
The corresponding function is:
function (tt::TensorTrain{V})(indexset) where {V}
Evaluates the tensor train tt at indices given by indexset.
TensorCrossInterpolation.TensorTrain — Method
function TensorTrain(sitetensors::Vector{Array{V, 3}}) where {V}
Create a tensor train out of a vector of tensors. Each tensor should have links to the previous and next tensor as dimension 1 and 3, respectively; the local index ("physical index" for MPS in physics) is dimension 2.
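Example (a minimal sketch of constructing and evaluating a tensor train with the functions documented in this section):

import TensorCrossInterpolation as TCI
# A 3-site tensor train with local dimension 2 and bond dimension 2.
cores = [randn(1, 2, 2), randn(2, 2, 2), randn(2, 2, 1)]
tt = TCI.TensorTrain(cores)
tt([1, 2, 1])      # evaluate via the function-call syntax shown above
TCI.linkdims(tt)   # bond dimensions along the links
sum(tt)            # factorized sum over all index combinations (see Base.sum above)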
TensorCrossInterpolation.TensorTrain — Method
function TensorTrain{V2,N}(tci::AbstractTensorTrain{V}) where {V,V2,N}
Convert a tensor-train-like object into a tensor train.
Arguments:
- tt::AbstractTensorTrain{V}: a tensor-train-like object.
- localdims: a vector of local dimensions for each tensor in the tensor train. Each element of localdims should be an array-like object of N-2 integers.
TensorCrossInterpolation.TensorTrain — Method
function TensorTrain(tci::AbstractTensorTrain{V}) where {V}
Convert a tensor-train-like object into a tensor train. This includes TCI1 and TCI2 objects.
TensorCrossInterpolation.TensorTrainFit — Type
Fitting data with a TensorTrain object. This may be useful when the interpolated function is noisy.
TensorCrossInterpolation.batchevaluate — Method
projector: 0 means no projection; otherwise, the index of the projector.
TensorCrossInterpolation.compress! — Method
function compress!(
tt::TensorTrain{V, N},
method::Symbol=:LU;
tolerance::Float64=1e-12,
maxbonddim=typemax(Int)
) where {V, N}
Compress the tensor train tt using LU, CI, or SVD decompositions.
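Example (a sketch using the documented keyword arguments):

import TensorCrossInterpolation as TCI
# A tensor train with an artificially inflated bond dimension.
tt = TCI.TensorTrain([randn(1, 2, 10), randn(10, 2, 10), randn(10, 2, 1)])
# In-place recompression; :LU (default), :CI and :SVD are the documented methods.
TCI.compress!(tt, :SVD; tolerance=1e-8, maxbonddim=5)
TCI.linkdims(tt)   # bond dimensions are now at most 5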
TensorCrossInterpolation.Contraction — Type
Contraction of two TTOs (tensor train operators). Optionally, the contraction can be done with a function applied to the result.
TensorCrossInterpolation.contract — Method
function contract(
A::TensorTrain{V1,4},
B::TensorTrain{V2,4};
algorithm::Symbol=:TCI,
tolerance::Float64=1e-12,
maxbonddim::Int=typemax(Int),
f::Union{Nothing,Function}=nothing,
kwargs...
) where {V1,V2}
Contract two tensor trains A and B.
Currently, the following implementations are available:
- algorithm=:TCI constructs a new TCI that fits the contraction of A and B.
- algorithm=:naive uses a naive tensor contraction and subsequent SVD recompression of the tensor train.
- algorithm=:zipup uses a naive tensor contraction with on-the-fly LU decomposition.
Arguments:
- A and B are the tensor trains to be contracted.
- algorithm chooses the algorithm used to evaluate the contraction.
- tolerance is the tolerance of the TCI or SVD recompression.
- maxbonddim sets the maximum bond dimension of the resulting tensor train.
- f is a function to be applied elementwise to the result. This option is only available with algorithm=:TCI.
- method chooses the method used for the factorization in the algorithm=:zipup case (:SVD or :LU).
- kwargs... are forwarded to crossinterpolate2 if algorithm=:TCI.
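Example (a sketch; constructing a TensorTrain directly from 4-leg cores is assumed here to work analogously to the 3-leg constructor documented above):

import TensorCrossInterpolation as TCI
# Two tensor train operators with 4-leg cores (left link, row index, column index, right link).
A = TCI.TensorTrain([randn(1, 2, 2, 3), randn(3, 2, 2, 1)])
B = TCI.TensorTrain([randn(1, 2, 2, 3), randn(3, 2, 2, 1)])
# Contract the shared index of A and B; :TCI and :zipup are the alternative algorithms.
C = TCI.contract(A, B; algorithm=:naive, tolerance=1e-10, maxbonddim=20)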
TensorCrossInterpolation.contract_zipup — Method
See the SVD version: https://tensornetwork.org/mps/algorithms/zipupmpo/
Tensor cross interpolation (TCI)
Note: In most cases, it is advantageous to use TensorCrossInterpolation.TensorCI2 instead.
TensorCrossInterpolation.TensorCI1 — Type
mutable struct TensorCI1{ValueType} <: AbstractTensorTrain{ValueType}
Type that represents tensor cross interpolations created using the TCI1 algorithm. Users may want to create these using crossinterpolate1 rather than calling a constructor directly.
TensorCrossInterpolation.addpivot! — Method
function addpivot!(tci::TensorCI1{V}, p::Int, tolerance::Float64=1e-12) where {V,F}
Add a pivot to the TCI at site p. Do not add a pivot if the error is below tolerance.
TensorCrossInterpolation.addpivotcol! — Method
function addpivotcol!(tci::TensorCI1{V}, cross::MatrixCI{V}, p::Int, newj::Int, f) where {V}
Add the column with index newj as a pivot column to bond p.
TensorCrossInterpolation.addpivotrow! — Method
function addpivotrow!(tci::TensorCI1{V}, cross::MatrixCI{V}, p::Int, newi::Int, f) where {V}
Add the row with index newi as a pivot row to bond p.
TensorCrossInterpolation.crossinterpolate — Method
function crossinterpolate(
::Type{ValueType},
f,
localdims::Union{Vector{Int},NTuple{N,Int}},
firstpivot::MultiIndex=ones(Int, length(localdims));
tolerance::Float64=1e-8,
maxiter::Int=200,
sweepstrategy::Symbol=:backandforth,
pivottolerance::Float64=1e-12,
verbosity::Int=0,
additionalpivots::Vector{MultiIndex}=MultiIndex[],
normalizeerror::Bool=true
) where {ValueType, N}
Deprecated, and only included for backward compatibility. Please use crossinterpolate1 instead.
TensorCrossInterpolation.crossinterpolate1 — Method
function crossinterpolate1(
::Type{ValueType},
f,
localdims::Union{Vector{Int},NTuple{N,Int}},
firstpivot::MultiIndex=ones(Int, length(localdims));
tolerance::Float64=1e-8,
maxiter::Int=200,
sweepstrategy::Symbol=:backandforth,
pivottolerance::Float64=1e-12,
verbosity::Int=0,
additionalpivots::Vector{MultiIndex}=MultiIndex[],
normalizeerror::Bool=true
) where {ValueType, N}
Cross interpolate a function $f(\mathbf{u})$, where $\mathbf{u} \in [1, \ldots, d_1] \times [1, \ldots, d_2] \times \ldots \times [1, \ldots, d_{\mathscr{L}}]$ and $d_1 \ldots d_{\mathscr{L}}$ are the local dimensions.
Arguments:
- ValueType is the return type of f. Automatic inference is too error-prone.
- f is the function to be interpolated. f should have a single parameter, which is a vector of the same length as localdims. The return type should be ValueType.
- localdims::Union{Vector{Int},NTuple{N,Int}} is a Vector (or Tuple) that contains the local dimension of each index of f.
- firstpivot::MultiIndex is the first pivot, used by the TCI algorithm for initialization. Default: [1, 1, ...].
- tolerance::Float64 is a float specifying the tolerance for the interpolation. Default: 1e-8.
- maxiter::Int is the maximum number of iterations (i.e. optimization sweeps) before aborting the TCI construction. Default: 200.
- sweepstrategy::Symbol specifies whether to sweep forward (:forward), backward (:backward), or back and forth (:backandforth) during optimization. Default: :backandforth.
- pivottolerance::Float64 specifies the tolerance below which no new pivot will be added to each tensor. Default: 1e-12.
- verbosity::Int can be set to >= 1 to get convergence information on standard output during optimization. Default: 0.
- additionalpivots::Vector{MultiIndex} is a vector of additional pivots that the algorithm should add to the initial pivot before starting optimization. This is not necessary in most cases.
- normalizeerror::Bool determines whether to scale the error by the maximum absolute value of f found during sampling. If set to false, the algorithm continues until the absolute error is below tolerance. If set to true, the algorithm uses the absolute error divided by the maximum sample instead. This is helpful if the magnitude of the function is not known in advance. Default: true.
Notes:
- Convergence may depend on the choice of first pivot. A good rule of thumb is to choose firstpivot close to the largest structure in f, or on a maximum of f. If the structure of f is not known in advance, optfirstpivot may be helpful.
- By default, no caching takes place. Use the CachedFunction wrapper if your function is expensive to evaluate.
See also: optfirstpivot, CachedFunction, crossinterpolate2
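Example (a usage sketch; the exact shape of the return value is an assumption, following the common pattern of returning the TCI object together with convergence information):

import TensorCrossInterpolation as TCI
# Interpolate a separable test function of 8 variables with local dimension 2.
f(v) = prod(cos, 0.5 .* v)
localdims = fill(2, 8)
tci, ranks, errors = TCI.crossinterpolate1(Float64, f, localdims; tolerance=1e-10)
# For repeated evaluation, convert to a tensor train as recommended below.
tt = TCI.tensortrain(tci)
tt([1, 2, 1, 1, 2, 1, 2, 1])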
TensorCrossInterpolation.evaluate — Method
function evaluate(tci::TensorCI1{V}, indexset::AbstractVector{LocalIndex}) where {V}
Evaluate the TCI at a specific set of indices. This method is inefficient; to evaluate many points, first convert the TCI object into a tensor train using tensortrain(tci).
TensorCrossInterpolation.getPi — Method
buildPiAt(tci::TensorCrossInterpolation{V}, p::Int)
Build a 4-legged $\Pi$ tensor at site p. Indices are in the order $i, u_p, u_{p + 1}, j$, as in the TCI paper.
TensorCrossInterpolation.isnested — Method
Return whether a is nested in b ($a < b$). If row_or_col is :row/:col, the last/first element of each index in b is ignored, respectively.
Tensor cross interpolation 2 (TCI2)
TensorCrossInterpolation.GlobalPivotSearchInput — Method
GlobalPivotSearchInput(tci::TensorCI2{ValueType}) where {ValueType}
Construct a GlobalPivotSearchInput from a TensorCI2 object.
TensorCrossInterpolation.TensorCI2 — Type
mutable struct TensorCI2{ValueType} <: AbstractTensorTrain{ValueType}
Type that represents tensor cross interpolations created using the TCI2 algorithm. Users may want to create these using crossinterpolate2 rather than calling a constructor directly.
TensorCrossInterpolation.TensorCI2 — Method
Initialize a TCI2 object with local pivot lists.
TensorCrossInterpolation.addglobalpivots! — Method
Add global pivots to index sets.
TensorCrossInterpolation.addglobalpivots1sitesweep! — Method
Add global pivots to index sets and perform a 1-site sweep.
TensorCrossInterpolation.addglobalpivots2sitesweep! — Method
Add global pivots to index sets and perform a 2-site sweep. Retry until all pivots are added or until ntry iterations are reached.
TensorCrossInterpolation.crossinterpolate2 — Method
function crossinterpolate2(
::Type{ValueType},
f,
localdims::Union{Vector{Int},NTuple{N,Int}},
initialpivots::Vector{MultiIndex}=[ones(Int, length(localdims))];
kwargs...
) where {ValueType,N}
Cross interpolate a function $f(\mathbf{u})$ using the TCI2 algorithm. Here, the domain of $f$ is $\mathbf{u} \in [1, \ldots, d_1] \times [1, \ldots, d_2] \times \ldots \times [1, \ldots, d_{\mathscr{L}}]$ and $d_1 \ldots d_{\mathscr{L}}$ are the local dimensions.
Arguments:
- ValueType is the return type of f. Automatic inference is too error-prone.
- f is the function to be interpolated. f should have a single parameter, which is a vector of the same length as localdims. The return type should be ValueType.
- localdims::Union{Vector{Int},NTuple{N,Int}} is a Vector (or Tuple) that contains the local dimension of each index of f.
- initialpivots::Vector{MultiIndex} is a vector of pivots to be used for initialization. Default: [1, 1, ...].
Refer to optimize! for other keyword arguments such as tolerance, maxbonddim, maxiter.
Notes:
- Set tolerance to be > 0 or maxbonddim to some reasonable value. Otherwise, convergence is not reachable.
- By default, no caching takes place. Use the CachedFunction wrapper if your function is expensive to evaluate.
See also: optimize!, optfirstpivot, CachedFunction, crossinterpolate1
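Example (a usage sketch; the destructured return value is an assumption, as noted above):

import TensorCrossInterpolation as TCI
# A smooth function on a 5-dimensional grid with 10 points per dimension.
localdims = fill(10, 5)
f(v) = 1.0 / (1.0 + sum(abs2, v))
tci, ranks, errors = TCI.crossinterpolate2(Float64, f, localdims; tolerance=1e-8)
tt = TCI.TensorTrain(tci)   # conversion documented above
tt([3, 1, 4, 1, 5])         # ≈ f([3, 1, 4, 1, 5]) up to the requested tolerance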
TensorCrossInterpolation.invalidatesitetensors! — Method
Invalidate the site tensor at bond b.
TensorCrossInterpolation.issitetensorsavailable — Method
Return whether site tensors are available.
TensorCrossInterpolation.optimize! — Method
function optimize!(
tci::TensorCI2{ValueType},
f;
tolerance::Float64=1e-8,
maxbonddim::Int=typemax(Int),
maxiter::Int=200,
sweepstrategy::Symbol=:backandforth,
pivotsearch::Symbol=:full,
verbosity::Int=0,
loginterval::Int=10,
normalizeerror::Bool=true,
ncheckhistory=3,
maxnglobalpivot::Int=5,
nsearchglobalpivot::Int=5,
tolmarginglobalsearch::Float64=10.0,
strictlynested::Bool=false
) where {ValueType}
Perform optimization sweeps using the TCI2 algorithm. This will successively improve the TCI approximation of a function until it fits f with an error smaller than tolerance, or until the maximum bond dimension (maxbonddim) is reached.
Arguments:
- tci::TensorCI2{ValueType}: The TCI to optimize.
- f: The function to be interpolated. f should have a single parameter, which is a vector of the same length as localdims. The return type should be ValueType.
- localdims::Union{Vector{Int},NTuple{N,Int}} is a Vector (or Tuple) that contains the local dimension of each index of f.
- tolerance::Float64 is a float specifying the target tolerance for the interpolation. Default: 1e-8.
- maxbonddim::Int specifies the maximum bond dimension for the TCI. Default: typemax(Int), i.e. effectively unlimited.
- maxiter::Int is the maximum number of iterations (i.e. optimization sweeps) before aborting the TCI construction. Default: 200.
- sweepstrategy::Symbol specifies whether to sweep forward (:forward), backward (:backward), or back and forth (:backandforth) during optimization. Default: :backandforth.
- pivotsearch::Symbol determines how pivots are searched (:full or :rook). Default: :full.
- verbosity::Int can be set to >= 1 to get convergence information on standard output during optimization. Default: 0.
- loginterval::Int can be set to >= 1 to specify how frequently to print convergence information. Default: 10.
- normalizeerror::Bool determines whether to scale the error by the maximum absolute value of f found during sampling. If set to false, the algorithm continues until the absolute error is below tolerance. If set to true, the algorithm uses the absolute error divided by the maximum sample instead. This is helpful if the magnitude of the function is not known in advance. Default: true.
- ncheckhistory::Int is the number of history points to use for convergence checks. Default: 3.
- globalpivotfinder::Union{AbstractGlobalPivotFinder, Nothing} is a global pivot finder to use for searching global pivots. Default: nothing. If nothing, a default global pivot finder is used.
- maxnglobalpivot::Int can be set to >= 0. Default: 5. The maximum number of global pivots to add in each iteration.
- strictlynested::Bool determines whether to preserve partial nesting in the TCI algorithm. Default: false.
- checkbatchevaluatable::Bool: Check if the function f is batch evaluatable. Default: false.
- checkconvglobalpivot::Bool: Check if the global pivot finder is converged. Default: true. In the future, this will be set to false by default.
Arguments (deprecated):
- pivottolerance::Float64 is the tolerance for the pivot search. Deprecated.
- nsearchglobalpivot::Int is the number of search points for the global pivot finder. Deprecated.
- tolmarginglobalsearch::Float64 is the tolerance for the global pivot finder. Deprecated.
Notes:
- Set tolerance to be > 0 or maxbonddim to some reasonable value. Otherwise, convergence is not reachable.
- By default, no caching takes place. Use the CachedFunction wrapper if your function is expensive to evaluate.
See also: crossinterpolate2, optfirstpivot, CachedFunction, crossinterpolate1
TensorCrossInterpolation.printnestinginfo — Method
function printnestinginfo(tci::TensorCI2{T}) where {T}
Print information about fulfillment of the nesting criterion ($I_\ell < I_{\ell+1}$ and $J_\ell < J_{\ell+1}$) on each pair of bonds $\ell, \ell+1$ of tci to stdout.
TensorCrossInterpolation.printnestinginfo — Method
function printnestinginfo(io::IO, tci::TensorCI2{T}) where {T}
Print information about fulfillment of the nesting criterion ($I_\ell < I_{\ell+1}$ and $J_\ell < J_{\ell+1}$) on each pair of bonds $\ell, \ell+1$ of tci to io.
TensorCrossInterpolation.searchglobalpivots — Method
Search global pivots where the interpolation error exceeds abstol.
TensorCrossInterpolation.sweep2site! — Method
Perform 2-site sweeps on a TCI2.
TensorCrossInterpolation.updatepivots! — Method
Update pivots at bond b of tci using the TCI2 algorithm. Site tensors will be invalidated.
TensorCrossInterpolation.AbstractGlobalPivotFinder — Type
AbstractGlobalPivotFinder
Abstract type for global pivot finders that search for indices with high interpolation error.
TensorCrossInterpolation.AbstractGlobalPivotFinder — Method
(finder::AbstractGlobalPivotFinder)(
input::GlobalPivotSearchInput{ValueType},
f,
abstol::Float64;
verbosity::Int=0,
rng::AbstractRNG=Random.default_rng()
)::Vector{MultiIndex} where {ValueType}
Find global pivots using the given finder algorithm.
Arguments
- input: Input data for the search algorithm
- f: Function to be interpolated
- abstol: Absolute tolerance for the interpolation error
- verbosity: Verbosity level (default: 0)
- rng: Random number generator (default: Random.default_rng())
Returns
Vector{MultiIndex}: Set of indices with high interpolation error
TensorCrossInterpolation.DefaultGlobalPivotFinder — Type
DefaultGlobalPivotFinder
Default implementation of a global pivot finder that uses random search.
Fields
- nsearch::Int: Number of initial points to search from
- maxnglobalpivot::Int: Maximum number of pivots to add in each iteration
- tolmarginglobalsearch::Float64: Search for pivots where the interpolation error is larger than the tolerance multiplied by this factor
TensorCrossInterpolation.DefaultGlobalPivotFinder — Method
DefaultGlobalPivotFinder(;
nsearch::Int=5,
maxnglobalpivot::Int=5,
tolmarginglobalsearch::Float64=10.0
)
Construct a DefaultGlobalPivotFinder with the given parameters.
TensorCrossInterpolation.DefaultGlobalPivotFinder — Method
(finder::DefaultGlobalPivotFinder)(
input::GlobalPivotSearchInput{ValueType},
f,
abstol::Float64;
verbosity::Int=0,
rng::AbstractRNG=Random.default_rng()
)::Vector{MultiIndex} where {ValueType}
Find global pivots using random search.
Arguments
- input: Input data for the search algorithm
- f: Function to be interpolated
- abstol: Absolute tolerance for the interpolation error
- verbosity: Verbosity level (default: 0)
- rng: Random number generator (default: Random.default_rng())
Returns
Vector{MultiIndex}: Set of indices with high interpolation error
TensorCrossInterpolation.GlobalPivotSearchInput — Type
GlobalPivotSearchInput{ValueType}
Input data structure for global pivot search algorithms.
Fields
- localdims::Vector{Int}: Dimensions of each tensor index
- current_tt::TensorTrain{ValueType,3}: Current tensor train approximation
- maxsamplevalue::ValueType: Maximum absolute value of the function
- Iset::Vector{Vector{MultiIndex}}: Set of left indices
- Jset::Vector{Vector{MultiIndex}}: Set of right indices
Integration
TensorCrossInterpolation.integrate — Method
function integrate(
::Type{ValueType},
f,
a::Vector{ValueType},
b::Vector{ValueType};
tolerance=1e-8,
GKorder::Int=15
) where {ValueType}
Integrate the function f using TCI and Gauss–Kronrod quadrature rules.
Arguments:
- ValueType: return type of f.
- a: Vector of lower bounds in each dimension. Effectively, the lower corner of the hypercube that is being integrated over.
- b: Vector of upper bounds in each dimension.
- tolerance: tolerance of the TCI approximation for the values of f.
- GKorder: Order of the Gauss–Kronrod rule, e.g. 15.
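Example (a sketch based on the documented signature):

import TensorCrossInterpolation as TCI
# Integrate exp(-(x1^2 + x2^2 + x3^2 + x4^2)) over the hypercube [0, 1]^4.
f(x) = exp(-sum(abs2, x))
a = zeros(4)
b = ones(4)
value = TCI.integrate(Float64, f, a, b; tolerance=1e-8, GKorder=15)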
Helpers and utility methods
TensorCrossInterpolation.CachedFunction — Type
Represents a function that maps ArgType -> ValueType and automatically caches function values. A CachedFunction object can be called in the same way as the original function using the usual function call syntax. The type K denotes the type of the keys used to cache function values, which could be an integer type. This defaults to UInt128. A safer but slower alternative is BigInt, which is better suited for functions with a large number of arguments. CachedFunction does not support batch evaluation of function values.
TensorCrossInterpolation.CachedFunction — Method
function CachedFunction{ValueType}(
f::Function,
localdims::Vector{Int}
) where {ValueType}
Constructor for a cached function that avoids repeated evaluation of the same values.
Arguments:
- ValueType is the return type of f.
- K is the type for the keys used to cache function values. This defaults to UInt128.
- f is the function to be wrapped. It should take a vector of integers as its sole argument, and return a value of type ValueType.
- localdims is a Vector that describes the local dimensions of each argument of f.
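Example (a sketch; the hypothetical slowf stands in for an expensive function, and the destructured return value of crossinterpolate2 is an assumption):

import TensorCrossInterpolation as TCI
localdims = fill(4, 6)
slowf(v) = (sleep(0.01); sum(sin, v))   # stand-in for a costly evaluation
cf = TCI.CachedFunction{Float64}(slowf, localdims)
cf([1, 2, 3, 4, 1, 2])   # evaluated and cached
cf([1, 2, 3, 4, 1, 2])   # served from the cache
tci, ranks, errors = TCI.crossinterpolate2(Float64, cf, localdims; tolerance=1e-6)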
TensorCrossInterpolation.cachedata — Method
Get all cached data of a CachedFunction object.
TensorCrossInterpolation.BatchEvaluatorAdapter — Type
Wrap any function to support batch evaluation.
TensorCrossInterpolation.ThreadedBatchEvaluator — Type
ThreadedBatchEvaluator{T} is a wrapper for a function that supports batch evaluation parallelized over the index sets using threads. The function to be wrapped must be thread-safe.
TensorCrossInterpolation._batchevaluate_dispatch — Method
Functions for evaluating a function on a batch of indices, mainly for TensorCI2. If the function supports batch evaluation, it should implement the BatchEvaluator interface. Otherwise, the function is evaluated on each index individually using the usual function call syntax and loops.
TensorCrossInterpolation._batchevaluate_dispatch — Method
Calling batchevaluate on a function that supports batch evaluation will dispatch to this function.
TensorCrossInterpolation.optfirstpivot — Method
function optfirstpivot(
f,
localdims::Union{Vector{Int},NTuple{N,Int}},
firstpivot::MultiIndex=ones(Int, length(localdims));
maxsweep=1000
) where {N}
Optimize the first pivot for a tensor cross interpolation.
Arguments:
- f is the function to be interpolated.
- localdims::Union{Vector{Int},NTuple{N,Int}} determines the local dimensions of the function parameters (see crossinterpolate1).
- firstpivot::MultiIndex=ones(Int, length(localdims)) is the starting point for the optimization. It is advantageous to choose it close to a global maximum of the function.
- maxsweep is the maximum number of optimization sweeps. Default: 1000.
See also: crossinterpolate1
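Example (a sketch; the test function is chosen so that the default first pivot [1, 1, 1, 1] is a poor starting point, and the destructured return value of crossinterpolate1 is an assumption):

import TensorCrossInterpolation as TCI
localdims = fill(20, 4)
f(v) = exp(-0.5 * sum(abs2, v .- 15))   # sharply peaked around [15, 15, 15, 15]
pivot = TCI.optfirstpivot(f, localdims)   # greedy sweep starting from [1, 1, 1, 1]
tci, ranks, errors = TCI.crossinterpolate1(Float64, f, localdims, pivot; tolerance=1e-8)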
TensorCrossInterpolation.projector_to_slice — Method
Construct a slice for the site indices of one tensor core. Returns the slice and the corresponding shape for reshaping.
TensorCrossInterpolation.estimatetrueerror — Method
function estimatetrueerror(
tt::TensorTrain{ValueType,3}, f;
nsearch::Int = 100,
initialpoints::Union{Nothing,AbstractVector{MultiIndex}} = nothing,
)::Vector{Tuple{MultiIndex,Float64}} where {ValueType}
Estimate the global error by comparing the exact function values with the TT approximation using a greedy algorithm. Returns a unique list of pairs of pivots and errors, where the error is the absolute difference between the exact function value and the TT approximation. On return, the list is sorted in descending order of the error.
Arguments:
- tt::TensorTrain{ValueType,3}: The tensor train to be compared with f.
- f: The function to be compared with tt.
- nsearch::Int: The number of initial points to be used in the search (defaults to 100).
- initialpoints::Union{Nothing,AbstractVector{MultiIndex}}: The initial points to be used in the search (defaults to nothing). If initialpoints is not nothing, nsearch is ignored.
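Example (a sketch combining crossinterpolate2, the TensorTrain conversion documented above, and estimatetrueerror; the destructured return value of crossinterpolate2 is an assumption):

import TensorCrossInterpolation as TCI
localdims = fill(8, 5)
f(v) = 1.0 / (1.0 + sum(abs, v))
tci, ranks, errors = TCI.crossinterpolate2(Float64, f, localdims; tolerance=1e-4)
tt = TCI.TensorTrain(tci)
# Greedy search for index sets where |f(v) - tt(v)| is largest; the list is sorted
# in descending order of the error.
pivoterrors = TCI.estimatetrueerror(tt, f; nsearch=100)
worstpivot, worsterror = first(pivoterrors)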