pub trait TensorLike: TensorIndex {
// Required methods
fn factorize(
&self,
left_inds: &[<Self as TensorIndex>::Index],
options: &FactorizeOptions,
) -> Result<FactorizeResult<Self>, FactorizeError>;
fn factorize_full_rank(
&self,
left_inds: &[<Self as TensorIndex>::Index],
alg: FactorizeAlg,
canonical: Canonical,
) -> Result<FactorizeResult<Self>, FactorizeError>;
fn conj(&self) -> Self;
fn direct_sum(
&self,
other: &Self,
pairs: &[(<Self as TensorIndex>::Index, <Self as TensorIndex>::Index)],
) -> Result<DirectSumResult<Self>>;
fn outer_product(&self, other: &Self) -> Result<Self>;
fn norm_squared(&self) -> f64;
fn permuteinds(
&self,
new_order: &[<Self as TensorIndex>::Index],
) -> Result<Self>;
fn fuse_indices(
&self,
old_indices: &[<Self as TensorIndex>::Index],
new_index: <Self as TensorIndex>::Index,
order: LinearizationOrder,
) -> Result<Self>;
fn contract(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>;
fn contract_connected(
tensors: &[&Self],
allowed: AllowedPairs<'_>,
) -> Result<Self>;
fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>;
fn scale(&self, scalar: AnyScalar) -> Result<Self>;
fn inner_product(&self, other: &Self) -> Result<AnyScalar>;
fn maxabs(&self) -> f64;
fn diagonal(
input_index: &<Self as TensorIndex>::Index,
output_index: &<Self as TensorIndex>::Index,
) -> Result<Self>;
fn scalar_one() -> Result<Self>;
fn ones(indices: &[<Self as TensorIndex>::Index]) -> Result<Self>;
fn onehot(
index_vals: &[(<Self as TensorIndex>::Index, usize)],
) -> Result<Self>;
// Provided methods
fn norm(&self) -> f64 { ... }
fn sub(&self, other: &Self) -> Result<Self> { ... }
fn neg(&self) -> Result<Self> { ... }
fn isapprox(&self, other: &Self, atol: f64, rtol: f64) -> bool { ... }
fn validate(&self) -> Result<()> { ... }
fn delta(
input_indices: &[<Self as TensorIndex>::Index],
output_indices: &[<Self as TensorIndex>::Index],
) -> Result<Self> { ... }
}
Trait for tensor-like objects that expose external indices and support contraction.
This trait is fully generic (monomorphic), meaning it does not support
trait objects (dyn TensorLike). For heterogeneous tensor collections,
use an enum wrapper instead.
§Design Principles
- Minimal interface: Only external indices and automatic contraction
- Fully generic: Uses associated type for Index, returns Self
- Stable ordering: external_indices() returns indices in deterministic order
- No trait objects: Requires Sized, cannot use dyn TensorLike
§Example
use tensor4all_core::{AllowedPairs, DynIndex, TensorDynLen, TensorLike};
fn contract_pair(a: &TensorDynLen, b: &TensorDynLen) -> anyhow::Result<TensorDynLen> {
Ok(<TensorDynLen as TensorLike>::contract(&[a, b], AllowedPairs::All)?)
}
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let a = TensorDynLen::from_dense(
vec![i.clone(), j.clone()],
vec![1.0, 0.0, 0.0, 1.0],
)?;
let b = TensorDynLen::from_dense(vec![j.clone()], vec![2.0, 3.0])?;
let result = contract_pair(&a, &b)?;
assert_eq!(result.to_vec::<f64>()?, vec![2.0, 3.0]);
§Heterogeneous Collections
For mixing different tensor types, define an enum:
use tensor4all_core::{block_tensor::BlockTensor, DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(2);
let dense = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
let block = BlockTensor::new(vec![dense.clone()], (1, 1));
enum TensorNetwork {
Dense(TensorDynLen),
Block(BlockTensor<TensorDynLen>),
}
let network = TensorNetwork::Block(block);
assert!(matches!(network, TensorNetwork::Block(_)));
§Supertrait
TensorLike extends TensorIndex, which provides:
- external_indices() - Get all external indices
- num_external_indices() - Count external indices
- replaceind() / replaceinds() - Replace indices
This separation allows tensor networks (like TreeTN) to implement
index operations without implementing contraction/factorization.
Required Methods§
fn factorize(
&self,
left_inds: &[<Self as TensorIndex>::Index],
options: &FactorizeOptions,
) -> Result<FactorizeResult<Self>, FactorizeError>
Factorize this tensor into left and right factors.
This function dispatches to the appropriate algorithm based on options.alg:
- SVD: Singular Value Decomposition
- QR: QR decomposition
- LU: Rank-revealing LU decomposition
- CI: Cross Interpolation
The canonical option controls which factor is “canonical”:
- Canonical::Left: Left factor is orthogonal (SVD/QR) or unit-diagonal (LU/CI)
- Canonical::Right: Right factor is orthogonal (SVD) or unit-diagonal (LU/CI)
§Arguments
- left_inds - Indices to place on the left side
- options - Factorization options
§Returns
A FactorizeResult containing the left and right factors, bond index,
singular values (for SVD), and rank.
§Errors
Returns FactorizeError if:
- The storage type is not supported (only DenseF64 and DenseC64)
- QR is used with Canonical::Right
- The underlying algorithm fails
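As an aside, the left-canonical property can be checked with a minimal stand-alone sketch. The following Gram-Schmidt QR on a 2x2 column-major matrix is a hypothetical illustration (not this crate's implementation) of what Canonical::Left guarantees: the left factor is orthogonal, and the product of the factors reconstructs the input.

```rust
/// Hypothetical sketch (not the crate's implementation): Gram-Schmidt QR of
/// a 2x2 column-major matrix, illustrating the Canonical::Left guarantee
/// that the left factor is orthogonal.
fn qr_2x2(m: [f64; 4]) -> ([f64; 4], [f64; 4]) {
    // Columns of m: c0 = (m[0], m[1]), c1 = (m[2], m[3]).
    let r00 = (m[0] * m[0] + m[1] * m[1]).sqrt();
    let q0 = [m[0] / r00, m[1] / r00];
    let r01 = q0[0] * m[2] + q0[1] * m[3]; // projection of c1 onto q0
    let v = [m[2] - r01 * q0[0], m[3] - r01 * q0[1]];
    let r11 = (v[0] * v[0] + v[1] * v[1]).sqrt();
    let q1 = [v[0] / r11, v[1] / r11];
    // Both factors returned column-major; R is upper triangular.
    ([q0[0], q0[1], q1[0], q1[1]], [r00, 0.0, r01, r11])
}
```

Contracting the factors back together is the same consistency check the trait's factorizations are expected to pass, up to roundoff.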
fn factorize_full_rank(
&self,
left_inds: &[<Self as TensorIndex>::Index],
alg: FactorizeAlg,
canonical: Canonical,
) -> Result<FactorizeResult<Self>, FactorizeError>
Factorize this tensor without applying truncation controls.
Use this for exact tensor rewrites such as canonicalization, where the
contracted factors must preserve the represented tensor up to numerical
roundoff. Unlike Self::factorize, this method must not consult global
SVD/QR/LU truncation defaults or apply maximum-rank limits.
§Arguments
- left_inds - Indices to place on the left side.
- alg - Decomposition algorithm to use.
- canonical - Which factor should carry the canonical form.
§Returns
A FactorizeResult containing the left and right factors, bond index,
singular values for SVD, and retained exact numerical rank.
§Errors
Returns FactorizeError if:
- the storage type is not supported,
- the requested canonical direction is unsupported for the algorithm, or
- the underlying decomposition fails.
§Examples
use tensor4all_core::{
AllowedPairs, Canonical, DynIndex, FactorizeAlg, TensorDynLen, TensorLike,
};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let tensor = TensorDynLen::from_dense(
vec![i.clone(), j.clone()],
vec![1.0_f64, 0.0, 0.0, 1.0e-16],
)?;
let result = tensor.factorize_full_rank(
std::slice::from_ref(&i),
FactorizeAlg::QR,
Canonical::Left,
)?;
let reconstructed =
<TensorDynLen as TensorLike>::contract(&[&result.left, &result.right], AllowedPairs::All)?;
assert!(tensor.sub(&reconstructed)?.maxabs() < 1e-12);
fn conj(&self) -> Self
Tensor conjugate operation.
This is a generalized conjugate operation that depends on the tensor type:
- For dense tensors (TensorDynLen): element-wise complex conjugate
- For symmetric tensors: tensor conjugate considering symmetry sectors
This operation is essential for computing inner products and overlaps in tensor network algorithms like fitting.
§Returns
A new tensor representing the tensor conjugate.
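For intuition, the dense case reduces to negating the imaginary part of each entry. A stand-alone sketch (hypothetical helper, assuming complex entries stored as (re, im) pairs; not the crate's storage type):

```rust
// Hypothetical sketch of the dense element-wise conjugate, with a complex
// value represented as a (re, im) pair.
fn conj_dense(data: &[(f64, f64)]) -> Vec<(f64, f64)> {
    data.iter().map(|&(re, im)| (re, -im)).collect()
}
```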
fn direct_sum(
&self,
other: &Self,
pairs: &[(<Self as TensorIndex>::Index, <Self as TensorIndex>::Index)],
) -> Result<DirectSumResult<Self>>
Direct sum of two tensors along specified index pairs.
For tensors A and B with indices to be summed specified as pairs, creates a new tensor C where each paired index has dimension = dim_A + dim_B. Non-paired indices must match exactly between A and B (same ID).
§Arguments
- other - Second tensor
- pairs - Pairs of (self_index, other_index) to be summed. Each pair creates a new index in the result with dimension = dim(self_index) + dim(other_index).
§Returns
A DirectSumResult containing the result tensor and new indices created
for the summed dimensions (one per pair).
§Example
use tensor4all_core::{DynIndex, TensorDynLen, TensorLike};
let j = DynIndex::new_dyn(2);
let k = DynIndex::new_dyn(3);
let a = TensorDynLen::from_dense(vec![j.clone()], vec![1.0, 2.0])?;
let b = TensorDynLen::from_dense(vec![k.clone()], vec![3.0, 4.0, 5.0])?;
let result = a.direct_sum(&b, &[(j.clone(), k.clone())])?;
assert_eq!(result.new_indices.len(), 1);
assert_eq!(result.tensor.dims(), vec![5]);
assert_eq!(result.tensor.to_vec::<f64>()?, vec![1.0, 2.0, 3.0, 4.0, 5.0]);
fn outer_product(&self, other: &Self) -> Result<Self>
Outer product (tensor product) of two tensors.
Computes the tensor product of self and other, resulting in a tensor
with all indices from both tensors. No indices are contracted.
§Arguments
- other - The other tensor to compute the outer product with
§Returns
A new tensor representing the outer product.
§Errors
Returns an error if the tensors have common indices (by ID).
Use tensordot for contraction when indices overlap.
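The shape rule can be sketched with plain slices (hypothetical helper, assuming the column-major layout used elsewhere in these examples): the result's flat data is every product a[i] * b[j], with self's index varying fastest.

```rust
// Hypothetical sketch of a dense outer product in column-major layout:
// out[i + a.len() * j] = a[i] * b[j]; no index is contracted.
fn outer(a: &[f64], b: &[f64]) -> Vec<f64> {
    let mut out = Vec::with_capacity(a.len() * b.len());
    for &bj in b {
        for &ai in a {
            out.push(ai * bj);
        }
    }
    out
}
```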
fn norm_squared(&self) -> f64
Compute the squared Frobenius norm of the tensor.
The squared Frobenius norm is defined as the sum of squared absolute values
of all tensor elements: ||T||_F^2 = sum_i |T_i|^2.
This is used for computing norms in tensor network algorithms, convergence checks, and normalization.
§Returns
The squared Frobenius norm as a non-negative f64.
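A stand-alone sketch of the definition (hypothetical helpers, not the crate's storage types); a complex entry contributes re^2 + im^2 = |z|^2:

```rust
// Sketch of the squared Frobenius norm for real data.
fn norm_squared_real(data: &[f64]) -> f64 {
    data.iter().map(|x| x * x).sum()
}

// Sketch for complex data stored as (re, im) pairs.
fn norm_squared_complex(data: &[(f64, f64)]) -> f64 {
    data.iter().map(|&(re, im)| re * re + im * im).sum()
}
```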
fn permuteinds(
&self,
new_order: &[<Self as TensorIndex>::Index],
) -> Result<Self>
Permute tensor indices to match the specified order.
This reorders the tensor’s axes to match the order specified by new_order.
The indices in new_order are matched by ID with the tensor’s current indices.
§Arguments
- new_order - The desired order of indices (matched by ID)
§Returns
A new tensor with permuted indices.
§Errors
Returns an error if:
- The number of indices doesn’t match
- An index ID in new_order is not found in the tensor
fn fuse_indices(
&self,
old_indices: &[<Self as TensorIndex>::Index],
new_index: <Self as TensorIndex>::Index,
order: LinearizationOrder,
) -> Result<Self>
Fuse local tensor indices into one replacement index.
This is a local axis fusion operation: it reshapes the tensor so
old_indices are replaced by new_index. Indices are matched by ID,
and old_indices must be non-empty. The order of old_indices defines
the fused coordinate linearization, while order defines how those
coordinates map to the replacement axis. new_index.dim() must equal
the product of the matched axis dimensions. The replacement index is
inserted at the earliest fused axis position, and the remaining indices
retain their relative order.
Implementations should return Err if this operation is unsupported or
if exact local fusion cannot be represented by the tensor type.
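The two linearization orders can be sketched for the two-axis case (hypothetical helpers illustrating the assumed conventions): column-major lets the first coordinate vary fastest along the fused axis, row-major the last.

```rust
// Hypothetical sketch of the two linearization orders for fusing axes of
// dimensions (d0, d1) into one axis of dimension d0 * d1.
fn column_major(i0: usize, i1: usize, d0: usize) -> usize {
    i0 + d0 * i1 // first coordinate varies fastest
}

fn row_major(i0: usize, i1: usize, d1: usize) -> usize {
    d1 * i0 + i1 // last coordinate varies fastest
}
```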
§Arguments
- old_indices - Existing local indices to fuse, matched by ID. Must be non-empty.
- new_index - Replacement index whose dimension is the fused product.
- order - Linearization order for mapping old coordinates to the fused axis.
§Returns
A new tensor with old_indices replaced by new_index.
§Errors
Returns an error if:
- old_indices is empty
- Any requested index ID is missing from the tensor
- The replacement dimension does not match the product of fused axis dimensions
- The tensor type does not support local axis fusion
- Exact local fusion cannot be represented by the tensor type
§Examples
use tensor4all_core::{
DynIndex, IndexLike, LinearizationOrder, TensorDynLen, TensorLike,
};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let k = DynIndex::new_dyn(2);
let fused = DynIndex::new_link(6)?;
let data: Vec<f64> = (0..12).map(|value| value as f64).collect();
let tensor = TensorDynLen::from_dense(vec![i.clone(), j.clone(), k.clone()], data)?;
let fused_tensor = <TensorDynLen as TensorLike>::fuse_indices(
&tensor,
&[j.clone(), i.clone()],
fused.clone(),
LinearizationOrder::ColumnMajor,
)?;
assert_eq!(fused_tensor.indices(), &[fused.clone(), k.clone()]);
assert_eq!(fused_tensor.dims(), vec![6, 2]);
let roundtrip = fused_tensor
.unfuse_index(&fused, &[j, i], LinearizationOrder::ColumnMajor)?
.permuteinds(tensor.indices())?;
assert!(roundtrip.isapprox(&tensor, 1e-12, 0.0));
fn contract(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>
Contract multiple tensors over their contractable indices.
This method contracts 2 or more tensors. Pairs of indices that satisfy
is_contractable() (same ID, same dimension, compatible ConjState)
are contracted based on the allowed parameter.
Handles disconnected tensor graphs automatically by:
- Finding connected components based on contractable indices
- Contracting each connected component separately
- Combining results using outer product
§Arguments
- tensors - Slice of tensor references to contract (must have length >= 1)
- allowed - Specifies which tensor pairs can have their indices contracted:
  - AllowedPairs::All: Contract all contractable index pairs (default behavior)
  - AllowedPairs::Specified(&[(i, j)]): Only contract indices between specified tensor pairs
§Returns
A new tensor representing the contracted result. If tensors form disconnected components, they are combined via outer product.
§Behavior by N
- N=0: Error
- N=1: Clone of input
- N>=2: Contract connected components, combine with outer product
§Errors
Returns an error if:
- No tensors are provided
- AllowedPairs::Specified contains a pair with no contractable indices
§Example
use tensor4all_core::{AllowedPairs, DynIndex, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let k = DynIndex::new_dyn(2);
let a = TensorDynLen::from_dense(
vec![i.clone(), j.clone()],
vec![1.0, 0.0, 0.0, 1.0],
)?;
let b = TensorDynLen::from_dense(
vec![j.clone(), k.clone()],
vec![1.0, 2.0, 3.0, 4.0],
)?;
let c = TensorDynLen::from_dense(vec![k.clone()], vec![1.0, 10.0])?;
let all = <TensorDynLen as TensorLike>::contract(&[&a, &b, &c], AllowedPairs::All)?;
let specified = <TensorDynLen as TensorLike>::contract(
&[&a, &b, &c],
AllowedPairs::Specified(&[(0, 1), (1, 2)]),
)?;
assert_eq!(all.dims(), vec![2]);
assert!(all.sub(&specified)?.maxabs() < 1e-12);
fn contract_connected(
tensors: &[&Self],
allowed: AllowedPairs<'_>,
) -> Result<Self>
Contract multiple tensors that must form a connected graph.
This is the core contraction method that requires all tensors to be
connected through contractable indices. Use Self::contract if you want
automatic handling of disconnected components via outer product.
§Arguments
- tensors - Slice of tensor references to contract (must form a connected graph)
- allowed - Specifies which tensor pairs can have their indices contracted
§Returns
A new tensor representing the contracted result.
§Errors
Returns an error if:
- No tensors are provided
- The tensors form a disconnected graph
§Example
use tensor4all_core::{AllowedPairs, DynIndex, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let k = DynIndex::new_dyn(2);
let a = TensorDynLen::from_dense(
vec![i.clone(), j.clone()],
vec![1.0, 0.0, 0.0, 1.0],
)?;
let b = TensorDynLen::from_dense(
vec![j.clone(), k.clone()],
vec![1.0, 2.0, 3.0, 4.0],
)?;
let c = TensorDynLen::from_dense(vec![k.clone()], vec![1.0, 10.0])?;
let result = TensorDynLen::contract_connected(&[&a, &b, &c], AllowedPairs::All)?;
assert_eq!(result.dims(), vec![2]);
fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>
Compute a linear combination: a * self + b * other.
This is the fundamental vector space operation.
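A dense, real-valued sketch of the element-wise operation (hypothetical helper; the trait method additionally handles index permutation and mixed scalar types):

```rust
// Sketch of the element-wise linear combination a * x + b * y over dense
// same-shape data (real case).
fn axpby_dense(a: f64, x: &[f64], b: f64, y: &[f64]) -> Vec<f64> {
    assert_eq!(x.len(), y.len());
    x.iter().zip(y).map(|(xi, yi)| a * xi + b * yi).collect()
}
```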
fn inner_product(&self, other: &Self) -> Result<AnyScalar>
Inner product (dot product) of two tensors.
Computes ⟨self, other⟩ = Σ conj(self)_i * other_i.
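A dense sketch of the definition, with complex entries as (re, im) pairs (hypothetical helper; note that the first argument is the one conjugated):

```rust
// Sketch of ⟨x, y⟩ = Σ conj(x_i) * y_i over dense data.
fn inner_dense(x: &[(f64, f64)], y: &[(f64, f64)]) -> (f64, f64) {
    let mut acc = (0.0, 0.0);
    for (&(xr, xi), &(yr, yi)) in x.iter().zip(y) {
        acc.0 += xr * yr + xi * yi; // Re(conj(x) * y)
        acc.1 += xr * yi - xi * yr; // Im(conj(x) * y)
    }
    acc
}
```

With this convention, inner_dense(v, v) is real and equals the squared norm of v.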
fn diagonal(
input_index: &<Self as TensorIndex>::Index,
output_index: &<Self as TensorIndex>::Index,
) -> Result<Self>
Create a diagonal (Kronecker delta) tensor for a single index pair.
Creates a 2D tensor T[i, o] where T[i, o] = δ_{i,o} (1 if i==o, 0 otherwise).
§Arguments
- input_index - Input index
- output_index - Output index (must have same dimension as input)
§Returns
A 2D tensor with shape [dim, dim] representing the identity matrix.
§Errors
Returns an error if dimensions don’t match.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(3);
let o = DynIndex::new_dyn(3);
let delta = TensorDynLen::diagonal(&i, &o).unwrap();
assert_eq!(delta.dims(), vec![3, 3]);
let data = delta.to_vec::<f64>().unwrap();
// Identity matrix in column-major: [1,0,0, 0,1,0, 0,0,1]
assert!((data[0] - 1.0).abs() < 1e-12);
assert!((data[4] - 1.0).abs() < 1e-12);
assert!((data[8] - 1.0).abs() < 1e-12);
assert!((data[1]).abs() < 1e-12);
fn scalar_one() -> Result<Self>
Create a scalar tensor with value 1.0.
This is used as the identity element for outer products.
§Examples
use tensor4all_core::{TensorDynLen, TensorLike};
let one = TensorDynLen::scalar_one().unwrap();
assert_eq!(one.dims(), Vec::<usize>::new());
assert!((one.only().real() - 1.0).abs() < 1e-12);
fn ones(indices: &[<Self as TensorIndex>::Index]) -> Result<Self>
Create a tensor filled with 1.0 for the given indices.
This is useful for adding indices to tensors via outer product without changing tensor values (since multiplying by 1 is identity).
§Example
To add a dummy index l to tensor T:
use tensor4all_core::{DynIndex, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(2);
let l = DynIndex::new_dyn(3);
let t = TensorDynLen::from_dense(vec![i.clone()], vec![2.0, 4.0])?;
let ones = TensorDynLen::ones(&[l.clone()])?;
let t_with_l = t.outer_product(&ones)?;
assert_eq!(t_with_l.dims(), vec![2, 3]);
assert_eq!(t_with_l.to_vec::<f64>()?, vec![2.0, 4.0, 2.0, 4.0, 2.0, 4.0]);
fn onehot(index_vals: &[(<Self as TensorIndex>::Index, usize)]) -> Result<Self>
Create a one-hot tensor with value 1.0 at the specified index positions.
Similar to ITensors.jl’s onehot(i => 1, j => 2).
§Arguments
- index_vals - Pairs of (Index, 0-indexed position)
§Errors
Returns an error if any position value is >= the corresponding index dimension.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(3);
let j = DynIndex::new_dyn(2);
// One-hot at i=1, j=0
let t = TensorDynLen::onehot(&[(i, 1), (j, 0)]).unwrap();
assert_eq!(t.dims(), vec![3, 2]);
let data = t.to_vec::<f64>().unwrap();
// column-major 3x2: element at (1,0) = index 1
assert!((data[1] - 1.0).abs() < 1e-12);
assert!((t.sum().real() - 1.0).abs() < 1e-12); // exactly one non-zero
Provided Methods§
fn sub(&self, other: &Self) -> Result<Self>
Element-wise subtraction: self - other.
Indices are automatically permuted to match self’s order via axpby.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(2);
let a = TensorDynLen::from_dense(vec![i.clone()], vec![5.0, 3.0]).unwrap();
let b = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 1.0]).unwrap();
let diff = a.sub(&b).unwrap();
assert_eq!(diff.to_vec::<f64>().unwrap(), vec![4.0, 2.0]);
fn isapprox(&self, other: &Self, atol: f64, rtol: f64) -> bool
Approximate equality check (Julia isapprox semantics).
Returns true if ||self - other|| <= max(atol, rtol * max(||self||, ||other||)).
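The criterion itself, as a stand-alone sketch over precomputed norms (hypothetical helper):

```rust
// Sketch of the Julia-style isapprox criterion:
// ||x - y|| <= max(atol, rtol * max(||x||, ||y||)).
fn isapprox_norms(diff: f64, nx: f64, ny: f64, atol: f64, rtol: f64) -> bool {
    diff <= atol.max(rtol * nx.max(ny))
}
```

Passing rtol = 0.0 makes the check purely absolute, as the example below does.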
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(3);
let a = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0, 3.0]).unwrap();
let b = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0, 3.0]).unwrap();
assert!(a.isapprox(&b, 1e-12, 0.0));
let c = TensorDynLen::from_dense(vec![i.clone()], vec![1.1, 2.0, 3.0]).unwrap();
assert!(!a.isapprox(&c, 1e-3, 0.0));
fn validate(&self) -> Result<()>
Validate structural consistency of this tensor.
The default implementation does nothing (always succeeds). Types with internal structure (for example, block-sparse tensors) can override this to check invariants such as index sharing between blocks.
fn delta(
input_indices: &[<Self as TensorIndex>::Index],
output_indices: &[<Self as TensorIndex>::Index],
) -> Result<Self>
Create a delta (identity) tensor as outer product of diagonals.
For paired indices (i1, o1), (i2, o2), ..., creates a tensor where:
T[i1, o1, i2, o2, ...] = δ_{i1,o1} × δ_{i2,o2} × ...
This is computed as the outer product of individual diagonal tensors.
§Arguments
- input_indices - Input indices
- output_indices - Output indices (must have same length and matching dimensions)
§Returns
A tensor representing the identity operator on the given index space.
§Errors
Returns an error if:
- Number of input and output indices don’t match
- Dimensions of paired indices don’t match
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, TensorLike};
let i1 = DynIndex::new_dyn(2);
let o1 = DynIndex::new_dyn(2);
let i2 = DynIndex::new_dyn(3);
let o2 = DynIndex::new_dyn(3);
let d = TensorDynLen::delta(&[i1, i2], &[o1, o2]).unwrap();
assert_eq!(d.dims(), vec![2, 2, 3, 3]);
Dyn Compatibility§
This trait is not dyn compatible.
In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.