pub struct TensorDynLen {
pub indices: Vec<DynIndex>,
/* private fields */
}
Tensor with dynamic rank (number of indices) and dynamic scalar type.
This is a concrete type using DynIndex (= Index<DynId, TagSet>).
The canonical numeric payload is always [tenferro::Tensor].
§Examples
use tensor4all_core::{TensorDynLen, DynIndex};
// Create a 2×3 real tensor
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let data = vec![1.0_f64, 2.0, 3.0, 4.0, 5.0, 6.0];
let t = TensorDynLen::from_dense(vec![i.clone(), j.clone()], data).unwrap();
assert_eq!(t.dims(), vec![2, 3]);
// Sum all elements: 1+2+3+4+5+6 = 21
let s = t.sum();
assert!((s.real() - 21.0).abs() < 1e-12);
§Fields
indices: Vec<DynIndex>
Full index information (includes tags and other metadata).
§Implementations
impl TensorDynLen
pub fn dims(&self) -> Vec<usize>
Get dims in the current indices order.
This is computed on-demand from indices (single source of truth).
pub fn new(indices: Vec<DynIndex>, storage: Arc<Storage>) -> Self
Create a new tensor with dynamic rank.
§Panics
Panics if the storage is Diag and not all indices have the same dimension. Panics if there are duplicate indices.
pub fn from_indices(indices: Vec<DynIndex>, storage: Arc<Storage>) -> Self
Create a new tensor with dynamic rank, automatically computing dimensions from indices.
This is a convenience constructor that extracts dimensions from indices using IndexLike::dim().
§Panics
Panics if the storage is Diag and not all indices have the same dimension. Panics if there are duplicate indices.
pub fn from_storage(indices: Vec<DynIndex>, storage: Arc<Storage>) -> Result<Self>
Create a tensor from explicit storage by seeding a canonical native payload.
pub fn requires_grad(&self) -> bool
Returns whether the tensor participates in reverse-mode AD.
pub fn set_requires_grad(&mut self, enabled: bool) -> Result<()>
Enables or disables reverse-mode gradient tracking.
pub fn grad(&self) -> Result<Option<Self>>
Returns the accumulated reverse gradient when available.
pub fn backward(&self, grad_output: Option<&Self>) -> Result<()>
Runs reverse-mode AD backward pass from this tensor.
Accumulates gradients on all input tensors that have requires_grad == true.
§Arguments
grad_output - Optional gradient seed. Pass None for the default (ones).
pub fn to_storage(&self) -> Result<Arc<Storage>>
Materialize the primal snapshot as storage.
pub fn only(&self) -> AnyScalar
Extract the scalar value from a 0-dimensional tensor (or 1-element tensor).
This is similar to Julia’s only() function.
§Panics
Panics if the tensor has more than one element.
§Example
use tensor4all_core::{TensorDynLen, AnyScalar};
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create a scalar tensor (0 dimensions, 1 element)
let indices: Vec<Index<DynId>> = vec![];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![42.0]).unwrap();
assert_eq!(tensor.only().real(), 42.0);
pub fn permute_indices(&self, new_indices: &[DynIndex]) -> Self
Permute the tensor dimensions using the given new indices order.
This is the main permutation method that takes the desired new indices and automatically computes the corresponding permutation of dimensions and data. The new indices must be a permutation of the original indices (matched by ID).
§Arguments
new_indices - The desired new index order. Must be a permutation of self.indices (matched by ID).
§Panics
Panics if new_indices.len() != self.indices.len(), if any index ID
doesn’t match, or if there are duplicate indices.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create a 2×3 tensor
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Permute to 3×2: swap the two dimensions by providing new indices order
let permuted = tensor.permute_indices(&[j, i]);
assert_eq!(permuted.dims(), vec![3, 2]);
pub fn permute(&self, perm: &[usize]) -> Self
Permute the tensor dimensions, returning a new tensor.
This method reorders the indices, dimensions, and data according to the
given permutation. The permutation specifies which old axis each new
axis corresponds to: new_axis[i] = old_axis[perm[i]].
§Arguments
perm - The permutation: perm[i] is the old axis index for new axis i.
§Panics
Panics if perm.len() != self.indices.len() or if the permutation is invalid.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create a 2×3 tensor
let indices = vec![
Index::new_dyn(2),
Index::new_dyn(3),
];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Permute to 3×2: swap the two dimensions
let permuted = tensor.permute(&[1, 0]);
assert_eq!(permuted.dims(), vec![3, 2]);
pub fn contract(&self, other: &Self) -> Self
Contract this tensor with another tensor along common indices.
This method finds common indices between self and other, then contracts
along those indices. The result tensor contains all non-contracted indices
from both tensors, with indices from self appearing first, followed by
indices from other that are not common.
§Arguments
other - The tensor to contract with.
§Returns
A new tensor resulting from the contraction.
§Panics
Panics if there are no common indices, if common indices have mismatched dimensions, or if storage types don’t match.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create two tensors: A[i, j] and B[j, k]
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(4);
let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();
let indices_b = vec![j.clone(), k.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();
// Contract along j: result is C[i, k]
let result = tensor_a.contract(&tensor_b);
assert_eq!(result.dims(), vec![2, 4]);
pub fn tensordot(&self, other: &Self, pairs: &[(DynIndex, DynIndex)]) -> Result<Self>
Contract this tensor with another tensor along explicitly specified index pairs.
Similar to NumPy’s tensordot, this method contracts only along the explicitly
specified pairs of indices. Unlike contract() which automatically contracts
all common indices, tensordot gives you explicit control over which indices
to contract.
§Arguments
other - The tensor to contract with
pairs - Pairs of indices to contract: (index_from_self, index_from_other)
§Returns
A new tensor resulting from the contraction, or an error if:
- Any specified index is not found in the respective tensor
- Dimensions don’t match for any pair
- The same axis is specified multiple times in self or other
- There are common indices (same ID) that are not in the contraction pairs (batch contraction is not yet implemented)
§Future: Batch Contraction
In a future version, common indices not specified in pairs will be treated
as batch dimensions (like batched GEMM). Currently, this case returns an error.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create two tensors: A[i, j] and B[k, l] where j and k have same dimension but different IDs
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(3); // Same dimension as j, but different ID
let l = Index::new_dyn(4);
let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();
let indices_b = vec![k.clone(), l.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();
// Contract j (from A) with k (from B): result is C[i, l]
let result = tensor_a.tensordot(&tensor_b, &[(j.clone(), k.clone())]).unwrap();
assert_eq!(result.dims(), vec![2, 4]);
pub fn outer_product(&self, other: &Self) -> Result<Self>
Compute the outer product (tensor product) of two tensors.
Creates a new tensor whose indices are the concatenation of the indices
from both input tensors. The result has shape [...self.dims, ...other.dims].
This is equivalent to numpy’s np.outer or np.tensordot(a, b, axes=0),
or ITensor’s * operator when there are no common indices.
§Arguments
other - The other tensor to compute the outer product with.
§Returns
A new tensor with indices from both tensors.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor_a: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
let tensor_b: TensorDynLen =
TensorDynLen::from_dense(vec![j.clone()], vec![1.0, 2.0, 3.0]).unwrap();
// Outer product: C[i, j] = A[i] * B[j]
let result = tensor_a.outer_product(&tensor_b).unwrap();
assert_eq!(result.dims(), vec![2, 3]);
impl TensorDynLen
pub fn random<T: RandomScalar, R: Rng>(rng: &mut R, indices: Vec<DynIndex>) -> Self
Create a random tensor with values from standard normal distribution (generic over scalar type).
For f64, each element is drawn from the standard normal distribution.
For Complex64, both real and imaginary parts are drawn independently.
§Type Parameters
T - The scalar element type (must implement RandomScalar)
R - The random number generator type
§Arguments
rng - Random number generator
indices - The indices for the tensor
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
use rand::SeedableRng;
use rand_chacha::ChaCha8Rng;
let mut rng = ChaCha8Rng::seed_from_u64(42);
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor: TensorDynLen = TensorDynLen::random::<f64, _>(&mut rng, vec![i, j]);
assert_eq!(tensor.dims(), vec![2, 3]);
impl TensorDynLen
pub fn add(&self, other: &Self) -> Result<Self>
Add two tensors element-wise.
The tensors must have the same index set (matched by ID). If the indices
are in a different order, the other tensor will be permuted to match self.
§Arguments
other - The tensor to add.
§Returns
A new tensor representing self + other, or an error if:
- The tensors have different index sets
- The dimensions don’t match
- Storage types are incompatible
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let indices_a = vec![i.clone(), j.clone()];
let data_a = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, data_a).unwrap();
let indices_b = vec![i.clone(), j.clone()];
let data_b = vec![1.0, 1.0, 1.0, 1.0, 1.0, 1.0];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, data_b).unwrap();
let sum = tensor_a.add(&tensor_b).unwrap();
// sum = [[2, 3, 4], [5, 6, 7]]
pub fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>
Compute a linear combination: a * self + b * other.
pub fn inner_product(&self, other: &Self) -> Result<AnyScalar>
Inner product (dot product) of two tensors.
Computes ⟨self, other⟩ = Σ conj(self)_i * other_i.
impl TensorDynLen
pub fn replaceind(&self, old_index: &DynIndex, new_index: &DynIndex) -> Self
Replace an index in the tensor with a new index.
This replaces the index matching old_index by ID with new_index.
The storage data is not modified, only the index metadata is changed.
§Arguments
old_index - The index to replace (matched by ID)
new_index - The new index to use
§Returns
A new tensor with the index replaced. If no index matches old_index,
returns a clone of the original tensor.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let new_i = Index::new_dyn(2); // Same dimension, different ID
let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Replace index i with new_i
let replaced = tensor.replaceind(&i, &new_i);
assert_eq!(replaced.indices[0].id, new_i.id);
assert_eq!(replaced.indices[1].id, j.id);
pub fn replaceinds(&self, old_indices: &[DynIndex], new_indices: &[DynIndex]) -> Self
Replace multiple indices in the tensor.
This replaces each index in old_indices (matched by ID) with the corresponding
index in new_indices. The storage data is not modified.
§Arguments
old_indices - The indices to replace (matched by ID)
new_indices - The new indices to use
§Panics
Panics if old_indices and new_indices have different lengths.
§Returns
A new tensor with the indices replaced. Indices not found in old_indices
are kept unchanged.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let new_i = Index::new_dyn(2);
let new_j = Index::new_dyn(3);
let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Replace both indices
let replaced = tensor.replaceinds(&[i.clone(), j.clone()], &[new_i.clone(), new_j.clone()]);
assert_eq!(replaced.indices[0].id, new_i.id);
assert_eq!(replaced.indices[1].id, new_j.id);
impl TensorDynLen
pub fn conj(&self) -> Self
Complex conjugate of all tensor elements.
For real (f64) tensors, returns a copy (conjugate of real is identity). For complex (Complex64) tensors, conjugates each element.
The indices and dimensions remain unchanged.
This is inspired by the conj operation in ITensorMPS.jl.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
use num_complex::Complex64;
let i = Index::new_dyn(2);
let data = vec![Complex64::new(1.0, 2.0), Complex64::new(3.0, -4.0)];
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i], data).unwrap();
let conj_tensor = tensor.conj();
// Elements are now conjugated: 1-2i, 3+4i
impl TensorDynLen
pub fn norm_squared(&self) -> f64
Compute the squared Frobenius norm of the tensor: ||T||² = Σ|T_ijk…|²
For real tensors: sum of squares of all elements. For complex tensors: sum of |z|² = z * conj(z) for all elements.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; // 1² + 2² + ... + 6² = 91
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i, j], data).unwrap();
assert!((tensor.norm_squared() - 91.0).abs() < 1e-10);
pub fn norm(&self) -> f64
Compute the Frobenius norm of the tensor: ||T|| = sqrt(Σ|T_ijk…|²)
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let data = vec![3.0, 4.0]; // sqrt(9 + 16) = 5
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i], data).unwrap();
assert!((tensor.norm() - 5.0).abs() < 1e-10);
pub fn distance(&self, other: &Self) -> f64
Compute the relative distance between two tensors.
Returns ||A - B|| / ||A|| (Frobenius norm).
If ||A|| = 0, returns ||B|| instead to avoid division by zero.
This is the ITensor-style distance function useful for comparing tensors.
§Arguments
other - The other tensor to compare with.
§Returns
The relative distance as a f64 value.
§Note
The indices of both tensors must be permutable to each other. The result tensor (A - B) uses the index ordering from self.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let data_a = vec![1.0, 0.0];
let data_b = vec![1.0, 0.0]; // Same tensor
let tensor_a: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], data_a).unwrap();
let tensor_b: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], data_b).unwrap();
assert!(tensor_a.distance(&tensor_b) < 1e-10); // Zero distance
impl TensorDynLen
pub fn from_dense<T: TensorElement>(indices: Vec<DynIndex>, data: Vec<T>) -> Result<Self>
Create a tensor from dense data with explicit indices.
This is the recommended high-level API for creating tensors from raw data.
It avoids direct access to Storage internals.
§Type Parameters
T - Scalar type (f64 or Complex64)
§Arguments
indices - Vector of indices for the tensor
data - Tensor data in column-major order
§Panics
Panics if data length doesn’t match the product of index dimensions.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i, j], data).unwrap();
assert_eq!(tensor.dims(), vec![2, 3]);
pub fn from_diag<T: TensorElement>(indices: Vec<DynIndex>, data: Vec<T>) -> Result<Self>
Create a diagonal tensor from diagonal payload data with explicit indices.
The public native bridge currently materializes diagonal payloads densely, so
the returned tensor is mathematically diagonal but may not report
TensorDynLen::is_diag at the native-storage level.
pub fn scalar<T: TensorElement>(value: T) -> Result<Self>
Create a scalar (0-dimensional) tensor from a supported element value.
§Example
use tensor4all_core::TensorDynLen;
let scalar = TensorDynLen::scalar(42.0).unwrap();
assert_eq!(scalar.dims(), Vec::<usize>::new());
assert_eq!(scalar.only().real(), 42.0);
pub fn zeros<T: TensorElement + Zero + Clone>(indices: Vec<DynIndex>) -> Result<Self>
Create a tensor filled with zeros of a supported element type.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor = TensorDynLen::zeros::<f64>(vec![i, j]).unwrap();
assert_eq!(tensor.dims(), vec![2, 3]);
impl TensorDynLen
pub fn to_vec<T: TensorElement>(&self) -> Result<Vec<T>>
Extract tensor data as a column-major Vec<T>.
§Type Parameters
T - The scalar element type (f64 or Complex64).
§Returns
A vector of the tensor data in column-major order.
§Errors
Returns an error if the tensor’s scalar type does not match T.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0]).unwrap();
let data = tensor.to_vec::<f64>().unwrap();
assert_eq!(data, &[1.0, 2.0]);
pub fn as_slice_f64(&self) -> Result<Vec<f64>>
Extract tensor data as a column-major Vec<f64>.
Prefer the generic to_vec::<f64>() method.
This wrapper is kept for C API compatibility.
pub fn as_slice_c64(&self) -> Result<Vec<Complex64>>
Extract tensor data as a column-major Vec<Complex64>.
Prefer the generic to_vec::<Complex64>() method.
This wrapper is kept for C API compatibility.
pub fn is_f64(&self) -> bool
Check if the tensor has f64 storage.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0]).unwrap();
assert!(tensor.is_f64());
assert!(!tensor.is_complex());
pub fn is_diag(&self) -> bool
Check whether the tensor currently uses native diagonal structured storage.
This is a storage-level predicate, not a semantic diagonality check.
pub fn is_complex(&self) -> bool
Check if the tensor has complex storage (C64).
§Trait Implementations
impl Clone for TensorDynLen
fn clone(&self) -> TensorDynLen
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for TensorDynLen
impl Mul<&TensorDynLen> for &TensorDynLen
Implement multiplication operator for tensor contraction.
The * operator performs tensor contraction along common indices.
This is equivalent to calling the contract method.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create two tensors: A[i, j] and B[j, k]
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(4);
let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();
let indices_b = vec![j.clone(), k.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();
// Contract along j using * operator: result is C[i, k]
let result = &tensor_a * &tensor_b;
assert_eq!(result.dims(), vec![2, 4]);
type Output = TensorDynLen
The resulting type after applying the * operator.
impl Mul<&TensorDynLen> for TensorDynLen
Implement multiplication operator for tensor contraction (mixed owned/reference).
type Output = TensorDynLen
The resulting type after applying the * operator.
impl Mul<TensorDynLen> for &TensorDynLen
Implement multiplication operator for tensor contraction (mixed reference/owned).
type Output = TensorDynLen
The resulting type after applying the * operator.
impl Mul for TensorDynLen
Implement multiplication operator for tensor contraction (owned version).
This allows using tensor_a * tensor_b when both tensors are owned.
type Output = TensorDynLen
The resulting type after applying the * operator.
impl Neg for &TensorDynLen
impl Neg for TensorDynLen
impl Sub<&TensorDynLen> for &TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
impl Sub<&TensorDynLen> for TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
impl Sub<TensorDynLen> for &TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
impl Sub for TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
impl TensorAccess for TensorDynLen
impl TensorIndex for TensorDynLen
fn external_indices(&self) -> Vec<DynIndex>
fn num_external_indices(&self) -> usize
fn replaceind(&self, old_index: &DynIndex, new_index: &DynIndex) -> Result<Self>
impl TensorLike for TensorDynLen
fn factorize(&self, left_inds: &[DynIndex], options: &FactorizeOptions) -> Result<FactorizeResult<Self>, FactorizeError>
fn direct_sum(&self, other: &Self, pairs: &[(DynIndex, DynIndex)]) -> Result<DirectSumResult<Self>>
fn outer_product(&self, other: &Self) -> Result<Self>
fn norm_squared(&self) -> f64
fn permuteinds(&self, new_order: &[DynIndex]) -> Result<Self>
fn contract(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>
fn contract_connected(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>
fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>
Computes a * self + b * other.
fn inner_product(&self, other: &Self) -> Result<AnyScalar>
fn diagonal(input_index: &DynIndex, output_index: &DynIndex) -> Result<Self>
fn scalar_one() -> Result<Self>
fn ones(indices: &[DynIndex]) -> Result<Self>
fn onehot(index_vals: &[(DynIndex, usize)]) -> Result<Self>
fn isapprox(&self, other: &Self, atol: f64, rtol: f64) -> bool
Approximate equality (Julia-style isapprox semantics).
fn delta(input_indices: &[<Self as TensorIndex>::Index], output_indices: &[<Self as TensorIndex>::Index]) -> Result<Self>
§Auto Trait Implementations
impl Freeze for TensorDynLen
impl !RefUnwindSafe for TensorDynLen
impl Send for TensorDynLen
impl Sync for TensorDynLen
impl Unpin for TensorDynLen
impl !UnwindSafe for TensorDynLen
§Blanket Implementations
impl<U> As for U
fn as_<T>(self) -> T where T: CastFrom<U>
Casts self to type T. The semantics of numeric casting with the as operator are followed, so <T as As>::as_::<U> can be used in the same way as T as U for numeric conversions.
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> CloneToUninit for T where T: Clone
impl<T> DistributionExt for T where T: ?Sized
fn rand<T>(&self, rng: &mut (impl Rng + ?Sized)) -> T where Self: Distribution<T>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; converts self into a Right variant otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; converts self into a Right variant otherwise.