Struct TensorDynLen 

pub struct TensorDynLen {
    pub indices: Vec<DynIndex>,
    /* private fields */
}

Dynamic-rank tensor with structured payload storage – the central data type of tensor4all.

TensorDynLen stores a logical multi-dimensional tensor of f64 or Complex64 values together with a list of DynIndex labels. The authoritative payload is compact Storage, which may be dense, diagonal, or explicitly structured. The indices carry unique identities (UUIDs) so that contraction, addition, and other binary operations can automatically match legs by identity rather than position.

§Key Operations

Operation        | Method
-----------------|-------
Create from data | from_dense, from_diag, zeros
Extract data     | to_vec, sum, only
Contraction      | contract, * operator
Arithmetic       | add, scale, axpby, - operator
Factorization    | via TensorLike::factorize
Norms            | norm, norm_squared, maxabs
Index ops        | replaceind, permute_indices

§Data Layout

Logical dense extraction uses column-major order (first index varies fastest), matching Fortran, Julia, and ITensors.jl conventions. Compact structured payloads additionally carry explicit payload dimensions, strides, and logical-axis classes.
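Independent of the crate, the column-major mapping can be sketched in plain Rust (the helper name `col_major_index` is illustrative, not part of the tensor4all API):

```rust
/// Column-major linear index: the first coordinate varies fastest.
/// For dims [d0, d1, d2], (i0, i1, i2) maps to i0 + d0*(i1 + d1*i2).
fn col_major_index(coords: &[usize], dims: &[usize]) -> usize {
    coords
        .iter()
        .zip(dims)
        .rev()
        .fold(0, |acc, (&c, &d)| acc * d + c)
}

fn main() {
    // A 2x3 tensor is stored as [t(0,0), t(1,0), t(0,1), t(1,1), t(0,2), t(1,2)].
    assert_eq!(col_major_index(&[0, 0], &[2, 3]), 0);
    assert_eq!(col_major_index(&[1, 0], &[2, 3]), 1);
    assert_eq!(col_major_index(&[0, 1], &[2, 3]), 2);
    assert_eq!(col_major_index(&[1, 2], &[2, 3]), 5);
}
```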

§Examples

use tensor4all_core::{TensorDynLen, DynIndex};

// Create a 2x3 real tensor
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let data = vec![1.0_f64, 2.0, 3.0, 4.0, 5.0, 6.0];
let t = TensorDynLen::from_dense(vec![i.clone(), j.clone()], data).unwrap();

assert_eq!(t.dims(), vec![2, 3]);
assert!(t.is_f64());

// Sum all elements: 1+2+3+4+5+6 = 21
let s = t.sum();
assert!((s.real() - 21.0).abs() < 1e-12);

// Extract data back out
let data_out = t.to_vec::<f64>().unwrap();
assert_eq!(data_out, vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);

Fields§

§indices: Vec<DynIndex>

Full index information (includes tags and other metadata).

Implementations§

impl TensorDynLen

pub fn dims(&self) -> Vec<usize>

Get the dimensions in the current index order.

This is computed on-demand from indices (single source of truth).

§Examples
use tensor4all_core::{DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let k = DynIndex::new_dyn(4);
let t = TensorDynLen::from_dense(
    vec![i, j, k],
    vec![0.0; 24],
).unwrap();
assert_eq!(t.dims(), vec![2, 3, 4]);

pub fn select_indices( &self, selected_indices: &[DynIndex], positions: &[usize], ) -> Result<Self>

Select fixed coordinates for tensor indices and drop those axes.

The selected_indices slice identifies tensor axes by index identity, and positions gives the zero-based coordinate to take on each selected axis. Unselected indices are preserved in their original order.

§Arguments
  • selected_indices - Indices to fix and remove from the result. Each index must appear exactly once in this tensor.
  • positions - Coordinates for selected_indices. Each coordinate must be less than the corresponding index dimension.
§Returns

A tensor over the unselected indices. Selecting no indices returns a clone of the original tensor. Selecting all indices returns a rank-0 scalar tensor. Diagonal and structured tensors are sliced from their compact payload without materializing the original full tensor; the result keeps structured storage when the remaining logical axes can still be represented by axis classes.

§Errors

Returns an error if the argument lengths differ, a selected index is not present, a selected index is duplicated, or a coordinate is out of range.

§Examples
use tensor4all_core::{DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = TensorDynLen::from_dense(vec![i.clone(), j.clone()], data).unwrap();

let selected = tensor.select_indices(&[j], &[1]).unwrap();
assert_eq!(selected.dims(), vec![2]);
assert_eq!(selected.to_vec::<f64>().unwrap(), vec![3.0, 4.0]);
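The result above follows directly from the column-major layout; a std-only sketch of that slicing arithmetic (the helper `fix_second_axis` is illustrative, not the crate implementation, which also handles structured payloads):

```rust
// Fix the second axis of a 2x3 column-major tensor at coordinate `pos`
// and keep the remaining axis, mirroring `select_indices(&[j], &[1])`.
fn fix_second_axis(data: &[f64], d0: usize, pos: usize) -> Vec<f64> {
    // Column `pos` occupies the contiguous range [pos*d0, (pos+1)*d0).
    data[pos * d0..(pos + 1) * d0].to_vec()
}

fn main() {
    let data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    assert_eq!(fix_second_axis(&data, 2, 1), vec![3.0, 4.0]);
}
```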

pub fn new(indices: Vec<DynIndex>, storage: Arc<Storage>) -> Self

Create a new tensor with dynamic rank.

§Panics

Panics if the storage is Diag and not all indices have the same dimension. Panics if there are duplicate indices.

§Examples
use tensor4all_core::{DynIndex, TensorDynLen, Storage};
use std::sync::Arc;

let i = DynIndex::new_dyn(3);
let storage = Arc::new(Storage::new_dense::<f64>(3));
let t = TensorDynLen::new(vec![i], storage);
assert_eq!(t.dims(), vec![3]);

pub fn from_indices(indices: Vec<DynIndex>, storage: Arc<Storage>) -> Self

Create a new tensor with dynamic rank, automatically computing dimensions from indices.

This is a convenience constructor that extracts dimensions from indices using IndexLike::dim().

§Panics

Panics if the storage is Diag and not all indices have the same dimension. Panics if there are duplicate indices.

§Examples
use tensor4all_core::{DynIndex, TensorDynLen, Storage};
use std::sync::Arc;

let i = DynIndex::new_dyn(4);
let storage = Arc::new(Storage::new_dense::<f64>(4));
let t = TensorDynLen::from_indices(vec![i], storage);
assert_eq!(t.dims(), vec![4]);

pub fn from_storage( indices: Vec<DynIndex>, storage: Arc<Storage>, ) -> Result<Self>

Create a tensor from explicit compact storage.

§Examples
use tensor4all_core::{DynIndex, TensorDynLen, Storage};
use std::sync::Arc;

let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let storage = Arc::new(Storage::new_diag(vec![1.0_f64, 2.0]));
let t = TensorDynLen::from_storage(vec![i, j], storage).unwrap();
assert_eq!(t.dims(), vec![2, 2]);

pub fn from_structured_storage( indices: Vec<DynIndex>, storage: Arc<Storage>, ) -> Result<Self>

Create a tensor from explicit structured storage.

This is an alias for TensorDynLen::from_storage with a name that emphasizes that compact structured metadata is preserved.

§Errors

Returns an error if the storage logical dimensions do not match the supplied indices, or if duplicate indices are provided.

§Examples
use std::sync::Arc;
use tensor4all_core::{DynIndex, Storage, StorageKind, TensorDynLen};

let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let storage = Arc::new(Storage::from_diag_col_major(vec![1.0_f64, 2.0], 2).unwrap());
let tensor = TensorDynLen::from_structured_storage(vec![i, j], storage).unwrap();
assert_eq!(tensor.storage().storage_kind(), StorageKind::Diagonal);

pub fn indices(&self) -> &[DynIndex]

Borrow the indices.

pub fn enable_grad(self) -> Self

Enable reverse-mode AD tracking on this tensor by creating a tracked leaf.

pub fn tracks_grad(&self) -> bool

Report whether this tensor participates in gradient tracking.

pub fn grad(&self) -> Result<Option<Self>>

Return the accumulated gradient, if one has been stored.

pub fn clear_grad(&self) -> Result<()>

Clear the accumulated gradient stored for this tensor.

pub fn backward(&self) -> Result<()>

Run reverse-mode autodiff from this scalar tensor.

pub fn detach(&self) -> Self

Detach this tensor from the reverse graph.

pub fn is_simple(&self) -> bool

Check if this tensor is already in canonical form.

pub fn to_storage(&self) -> Result<Arc<Storage>>

Materialize the primal snapshot as storage.

pub fn storage(&self) -> Arc<Storage>

Returns the authoritative compact storage.

pub fn sum(&self) -> AnyScalar

Sum all elements, returning AnyScalar.

§Examples
use tensor4all_core::{DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(3);
let t = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0, 3.0]).unwrap();
let s = t.sum();
assert!((s.real() - 6.0).abs() < 1e-12);

pub fn only(&self) -> AnyScalar

Extract the scalar value from a 0-dimensional tensor (or 1-element tensor).

This is similar to Julia’s only() function.

§Panics

Panics if the tensor has more than one element.

§Example
use tensor4all_core::{TensorDynLen, AnyScalar};
use tensor4all_core::index::{DefaultIndex as Index, DynId};

// Create a scalar tensor (0 dimensions, 1 element)
let indices: Vec<Index<DynId>> = vec![];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![42.0]).unwrap();

assert_eq!(tensor.only().real(), 42.0);

pub fn permute_indices(&self, new_indices: &[DynIndex]) -> Self

Permute the tensor dimensions using the given new indices order.

This is the main permutation method that takes the desired new indices and automatically computes the corresponding permutation of dimensions and data. The new indices must be a permutation of the original indices (matched by ID).

§Arguments
  • new_indices - The desired new indices order. Must be a permutation of self.indices (matched by ID).
§Panics

Panics if new_indices.len() != self.indices.len(), if any index ID doesn’t match, or if there are duplicate indices.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

// Create a 2×3 tensor
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();

// Permute to 3×2: swap the two dimensions by providing new indices order
let permuted = tensor.permute_indices(&[j, i]);
assert_eq!(permuted.dims(), vec![3, 2]);

pub fn permute(&self, perm: &[usize]) -> Self

Permute the tensor dimensions, returning a new tensor.

This method reorders the indices, dimensions, and data according to the given permutation. The permutation specifies which old axis each new axis corresponds to: new_axis[i] = old_axis[perm[i]].

§Arguments
  • perm - The permutation: perm[i] is the old axis index for new axis i
§Panics

Panics if perm.len() != self.indices.len() or if the permutation is invalid.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

// Create a 2×3 tensor
let indices = vec![
    Index::new_dyn(2),
    Index::new_dyn(3),
];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();

// Permute to 3×2: swap the two dimensions
let permuted = tensor.permute(&[1, 0]);
assert_eq!(permuted.dims(), vec![3, 2]);

pub fn contract(&self, other: &Self) -> Self

Contract this tensor with another tensor along common indices.

This method finds common indices between self and other, then contracts along those indices. The result tensor contains all non-contracted indices from both tensors, with indices from self appearing first, followed by indices from other that are not common.

§Arguments
  • other - The tensor to contract with
§Returns

A new tensor resulting from the contraction.

§Panics

Panics if there are no common indices, if common indices have mismatched dimensions, or if storage types don’t match.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

// Create two tensors: A[i, j] and B[j, k]
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(4);

let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();

let indices_b = vec![j.clone(), k.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();

// Contract along j with the default pairwise semantics: result is C[i, k]
let result = tensor_a.contract(&tensor_b);
assert_eq!(result.dims(), vec![2, 4]);
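For reference, the arithmetic a pairwise contraction performs is C[i, k] = Σ_j A[i, j] · B[j, k]; a std-only sketch over column-major buffers (illustrative only, not the crate's contraction engine):

```rust
// C[i,k] = sum over j of A[i,j] * B[j,k], all stored column-major.
fn contract_ij_jk(a: &[f64], b: &[f64], ni: usize, nj: usize, nk: usize) -> Vec<f64> {
    let mut c = vec![0.0; ni * nk];
    for k in 0..nk {
        for j in 0..nj {
            for i in 0..ni {
                // column-major: A[i,j] = a[i + ni*j], B[j,k] = b[j + nj*k]
                c[i + ni * k] += a[i + ni * j] * b[j + nj * k];
            }
        }
    }
    c
}

fn main() {
    // A = 2x3 of ones, B = 3x4 of ones: every C[i,k] sums three ones.
    let c = contract_ij_jk(&vec![1.0; 6], &vec![1.0; 12], 2, 3, 4);
    assert_eq!(c, vec![3.0; 8]);
}
```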

pub fn contract_with_options( &self, other: &Self, options: ContractionOptions<'_>, ) -> Result<Self>

Contract this tensor with another tensor using explicit contraction options.

§Arguments
  • other - The tensor to contract with.
  • options - Pair-selection policy and retained indices.
§Returns

The contracted tensor, or an error if the contraction cannot be built.

§Errors

Returns an error if the tensors are disconnected, retained indices are invalid, or the contraction plan cannot be executed.

§Examples
use tensor4all_core::{AllowedPairs, ContractionOptions, DynIndex, TensorDynLen};

let batch = DynIndex::new_dyn(2);
let i = DynIndex::new_dyn(2);
let k = DynIndex::new_dyn(3);
let j = DynIndex::new_dyn(2);

let a = TensorDynLen::from_dense(
    vec![batch.clone(), i.clone(), k.clone()],
    vec![1.0_f64; 12],
).unwrap();
let b = TensorDynLen::from_dense(
    vec![batch.clone(), k.clone(), j.clone()],
    vec![1.0_f64; 12],
).unwrap();
let retain = [batch.clone()];
let options = ContractionOptions::new(AllowedPairs::All).with_retain_indices(&retain);
let result = a.contract_with_options(&b, options).unwrap();

assert_eq!(result.indices(), &[batch, i, j]);
assert_eq!(result.dims(), vec![2, 2, 2]);
assert_eq!(result.to_vec::<f64>().unwrap(), vec![3.0; 8]);

pub fn tensordot( &self, other: &Self, pairs: &[(DynIndex, DynIndex)], ) -> Result<Self>

Contract this tensor with another tensor along explicitly specified index pairs.

Similar to NumPy’s tensordot, this method contracts only along the explicitly specified pairs of indices. Unlike contract(), which automatically contracts all common indices, tensordot gives you explicit control over which indices to contract.

§Arguments
  • other - The tensor to contract with
  • pairs - Pairs of indices to contract: (index_from_self, index_from_other)
§Returns

A new tensor resulting from the contraction, or an error if:

  • Any specified index is not found in the respective tensor
  • Dimensions don’t match for any pair
  • The same axis is specified multiple times in self or other
  • There are common indices (same ID) that are not in the contraction pairs (batch contraction is not yet implemented)
§Future: Batch Contraction

In a future version, common indices not specified in pairs will be treated as batch dimensions (like batched GEMM). Currently, this case returns an error.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

// Create two tensors: A[i, j] and B[k, l] where j and k have same dimension but different IDs
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(3);  // Same dimension as j, but different ID
let l = Index::new_dyn(4);

let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();

let indices_b = vec![k.clone(), l.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();

// Contract j (from A) with k (from B): result is C[i, l]
let result = tensor_a.tensordot(&tensor_b, &[(j.clone(), k.clone())]).unwrap();
assert_eq!(result.dims(), vec![2, 4]);

pub fn outer_product(&self, other: &Self) -> Result<Self>

Compute the outer product (tensor product) of two tensors.

Creates a new tensor whose indices are the concatenation of the indices from both input tensors. The result has shape [...self.dims, ...other.dims].

This is equivalent to NumPy’s np.tensordot(a, b, axes=0) (or np.outer in the vector case), or ITensor’s * operator when there are no common indices.

§Arguments
  • other - The other tensor to compute outer product with
§Returns

A new tensor with indices from both tensors.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor_a: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
let tensor_b: TensorDynLen =
    TensorDynLen::from_dense(vec![j.clone()], vec![1.0, 2.0, 3.0]).unwrap();

// Outer product: C[i, j] = A[i] * B[j]
let result = tensor_a.outer_product(&tensor_b).unwrap();
assert_eq!(result.dims(), vec![2, 3]);

impl TensorDynLen

pub fn random<T: RandomScalar, R: Rng>( rng: &mut R, indices: Vec<DynIndex>, ) -> Self

Create a random tensor with values drawn from the standard normal distribution (generic over scalar type).

For f64, each element is drawn from the standard normal distribution. For Complex64, both real and imaginary parts are drawn independently.

§Type Parameters
  • T - The scalar element type (must implement RandomScalar)
  • R - The random number generator type
§Arguments
  • rng - Random number generator
  • indices - The indices for the tensor
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
use rand::SeedableRng;
use rand_chacha::ChaCha8Rng;

let mut rng = ChaCha8Rng::seed_from_u64(42);
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor: TensorDynLen = TensorDynLen::random::<f64, _>(&mut rng, vec![i, j]);
assert_eq!(tensor.dims(), vec![2, 3]);

impl TensorDynLen

pub fn add(&self, other: &Self) -> Result<Self>

Add two tensors element-wise.

The tensors must have the same index set (matched by ID). If the indices are in a different order, the other tensor will be permuted to match self.

§Arguments
  • other - The tensor to add
§Returns

A new tensor representing self + other, or an error if:

  • The tensors have different index sets
  • The dimensions don’t match
  • Storage types are incompatible
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(3);

let indices_a = vec![i.clone(), j.clone()];
let data_a = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, data_a).unwrap();

let indices_b = vec![i.clone(), j.clone()];
let data_b = vec![1.0, 1.0, 1.0, 1.0, 1.0, 1.0];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, data_b).unwrap();

let sum = tensor_a.add(&tensor_b).unwrap();
// Element-wise: column-major data becomes [2, 3, 4, 5, 6, 7]
assert_eq!(sum.to_vec::<f64>().unwrap(), vec![2.0, 3.0, 4.0, 5.0, 6.0, 7.0]);

pub fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>

Compute a linear combination: a * self + b * other.

Both tensors must have the same set of indices (matched by ID). If indices are in a different order, other is automatically permuted to match self.

§Examples
use tensor4all_core::{AnyScalar, DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(2);
let a = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
let b = TensorDynLen::from_dense(vec![i.clone()], vec![3.0, 4.0]).unwrap();

// 2*a + 3*b = [2+9, 4+12] = [11, 16]
let result = a.axpby(AnyScalar::new_real(2.0), &b, AnyScalar::new_real(3.0)).unwrap();
let data = result.to_vec::<f64>().unwrap();
assert!((data[0] - 11.0).abs() < 1e-12);
assert!((data[1] - 16.0).abs() < 1e-12);

pub fn scale(&self, scalar: AnyScalar) -> Result<Self>

Scalar multiplication.

Multiplies every element by scalar.

§Examples
use tensor4all_core::{AnyScalar, DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(3);
let t = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0, 3.0]).unwrap();
let scaled = t.scale(AnyScalar::new_real(2.0)).unwrap();
assert_eq!(scaled.to_vec::<f64>().unwrap(), vec![2.0, 4.0, 6.0]);

pub fn inner_product(&self, other: &Self) -> Result<AnyScalar>

Inner product (dot product) of two tensors.

Computes ⟨self, other⟩ = Σ conj(self)_i * other_i.

§Examples
use tensor4all_core::{DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(3);
let a = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0, 3.0]).unwrap();
let b = TensorDynLen::from_dense(vec![i.clone()], vec![4.0, 5.0, 6.0]).unwrap();

// <a, b> = 1*4 + 2*5 + 3*6 = 32
let ip = a.inner_product(&b).unwrap();
assert!((ip.real() - 32.0).abs() < 1e-12);

impl TensorDynLen

pub fn replaceind(&self, old_index: &DynIndex, new_index: &DynIndex) -> Self

Replace an index in the tensor with a new index.

This replaces the index matching old_index by ID with new_index. The storage data is not modified, only the index metadata is changed.

§Arguments
  • old_index - The index to replace (matched by ID)
  • new_index - The new index to use
§Returns

A new tensor with the index replaced. If no index matches old_index, returns a clone of the original tensor.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let new_i = Index::new_dyn(2);  // Same dimension, different ID

let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();

// Replace index i with new_i
let replaced = tensor.replaceind(&i, &new_i);
assert_eq!(replaced.indices[0].id, new_i.id);
assert_eq!(replaced.indices[1].id, j.id);

pub fn replaceinds( &self, old_indices: &[DynIndex], new_indices: &[DynIndex], ) -> Self

Replace multiple indices in the tensor.

This replaces each index in old_indices (matched by ID) with the corresponding index in new_indices. The storage data is not modified.

§Arguments
  • old_indices - The indices to replace (matched by ID)
  • new_indices - The new indices to use
§Panics

Panics if old_indices and new_indices have different lengths.

§Returns

A new tensor with the indices replaced. Indices not found in old_indices are kept unchanged.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let new_i = Index::new_dyn(2);
let new_j = Index::new_dyn(3);

let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();

// Replace both indices
let replaced = tensor.replaceinds(&[i.clone(), j.clone()], &[new_i.clone(), new_j.clone()]);
assert_eq!(replaced.indices[0].id, new_i.id);
assert_eq!(replaced.indices[1].id, new_j.id);

impl TensorDynLen

pub fn conj(&self) -> Self

Complex conjugate of all tensor elements.

For real (f64) tensors, returns a copy (conjugate of real is identity). For complex (Complex64) tensors, conjugates each element.

The indices and dimensions remain unchanged.

This is inspired by the conj operation in ITensorMPS.jl.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
use num_complex::Complex64;

let i = Index::new_dyn(2);
let data = vec![Complex64::new(1.0, 2.0), Complex64::new(3.0, -4.0)];
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i], data).unwrap();

let conj_tensor = tensor.conj();
// Elements are now conjugated: 1-2i, 3+4i

impl TensorDynLen

pub fn norm_squared(&self) -> f64

Compute the squared Frobenius norm of the tensor: ||T||² = Σ|T_ijk…|²

For real tensors: sum of squares of all elements. For complex tensors: sum of |z|² = z * conj(z) for all elements.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];  // 1² + 2² + ... + 6² = 91
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i, j], data).unwrap();

assert!((tensor.norm_squared() - 91.0).abs() < 1e-10);

pub fn norm(&self) -> f64

Compute the Frobenius norm of the tensor: ||T|| = sqrt(Σ|T_ijk…|²)

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let data = vec![3.0, 4.0];  // sqrt(9 + 16) = 5
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i], data).unwrap();

assert!((tensor.norm() - 5.0).abs() < 1e-10);

pub fn maxabs(&self) -> f64

Maximum absolute value of all elements (L-infinity norm).

§Examples
use tensor4all_core::{DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(4);
let t = TensorDynLen::from_dense(vec![i], vec![-5.0, 1.0, 3.0, -2.0]).unwrap();
assert!((t.maxabs() - 5.0).abs() < 1e-12);

pub fn distance(&self, other: &Self) -> f64

Compute the relative distance between two tensors.

Returns ||A - B|| / ||A|| (Frobenius norm). If ||A|| = 0, returns ||B|| instead to avoid division by zero.

This is the ITensor-style distance function useful for comparing tensors.

§Arguments
  • other - The other tensor to compare with
§Returns

The relative distance as a f64 value.

§Note

The indices of both tensors must be permutable to each other. The result tensor (A - B) uses the index ordering from self.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let data_a = vec![1.0, 0.0];
let data_b = vec![1.0, 0.0];  // Same tensor
let tensor_a: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], data_a).unwrap();
let tensor_b: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], data_b).unwrap();

assert!(tensor_a.distance(&tensor_b) < 1e-10);  // Zero distance
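The formula ||A - B|| / ||A|| with the ||A|| = 0 fallback can be sketched over plain vectors (the helpers `frobenius_norm` and `distance` here are illustrative stand-ins, not the crate functions):

```rust
fn frobenius_norm(v: &[f64]) -> f64 {
    v.iter().map(|x| x * x).sum::<f64>().sqrt()
}

// Relative distance ||a - b|| / ||a||, falling back to ||b|| when ||a|| == 0.
fn distance(a: &[f64], b: &[f64]) -> f64 {
    let diff: Vec<f64> = a.iter().zip(b).map(|(x, y)| x - y).collect();
    let na = frobenius_norm(a);
    if na == 0.0 {
        frobenius_norm(b)
    } else {
        frobenius_norm(&diff) / na
    }
}

fn main() {
    // Identical tensors: zero distance.
    assert!(distance(&[1.0, 0.0], &[1.0, 0.0]) < 1e-12);
    // ||[3,4] - 0|| / ||[3,4]|| = 5 / 5 = 1.
    assert!((distance(&[3.0, 4.0], &[0.0, 0.0]) - 1.0).abs() < 1e-12);
}
```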

impl TensorDynLen

pub fn from_dense<T: TensorElement>( indices: Vec<DynIndex>, data: Vec<T>, ) -> Result<Self>

Create a tensor from dense data with explicit indices.

This is the recommended high-level API for creating tensors from raw data. It avoids direct access to Storage internals.

§Type Parameters
  • T - Scalar type (f64 or Complex64)
§Arguments
  • indices - Vector of indices for the tensor
  • data - Tensor data in column-major order
§Errors

Returns an error if the data length doesn’t match the product of the index dimensions.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i, j], data).unwrap();
assert_eq!(tensor.dims(), vec![2, 3]);

pub fn from_dense_any( indices: Vec<DynIndex>, data: Vec<AnyScalar>, ) -> Result<Self>

Create a tensor from dense payload data provided as AnyScalar values.

This is the preferred public API when the caller only knows the scalar type at runtime.

§Examples
use tensor4all_core::{AnyScalar, TensorDynLen};
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense_any(
    vec![i, j],
    vec![
        AnyScalar::new_real(1.0),
        AnyScalar::new_complex(0.0, 1.0),
        AnyScalar::new_real(2.0),
        AnyScalar::new_real(3.0),
    ],
).unwrap();

assert!(tensor.is_complex());
assert_eq!(tensor.dims(), vec![2, 2]);

pub fn from_diag<T: TensorElement>( indices: Vec<DynIndex>, data: Vec<T>, ) -> Result<Self>

Create a diagonal tensor from diagonal payload data with explicit indices.

All indices must have the same dimension, and data.len() must equal that dimension. The resulting tensor has nonzero entries only on the multi-index diagonal (T[i,i,...,i] = data[i]).

The returned tensor preserves compact diagonal payload metadata; use TensorDynLen::is_diag or TensorDynLen::storage to inspect that representation.

§Examples
use tensor4all_core::{DynIndex, TensorDynLen};

let i = DynIndex::new_dyn(3);
let j = DynIndex::new_dyn(3);
let diag = TensorDynLen::from_diag(vec![i, j], vec![1.0, 2.0, 3.0]).unwrap();
assert!(diag.is_diag());

let data = diag.to_vec::<f64>().unwrap();
// 3x3 diagonal matrix: [1,0,0, 0,2,0, 0,0,3] in column-major
assert!((data[0] - 1.0).abs() < 1e-12);
assert!((data[4] - 2.0).abs() < 1e-12);
assert!((data[8] - 3.0).abs() < 1e-12);
assert!((data[1]).abs() < 1e-12);  // off-diagonal is zero

pub fn from_diag_any( indices: Vec<DynIndex>, data: Vec<AnyScalar>, ) -> Result<Self>

Create a diagonal tensor from diagonal payload data provided as AnyScalar values.

This is the preferred public API when the caller only knows the scalar type at runtime.

§Examples
use tensor4all_core::{AnyScalar, TensorDynLen};
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(2);
let tensor = TensorDynLen::from_diag_any(
    vec![i, j],
    vec![AnyScalar::new_real(1.0), AnyScalar::new_complex(2.0, -1.0)],
).unwrap();

assert!(tensor.is_complex());
assert_eq!(tensor.dims(), vec![2, 2]);

pub fn copy_tensor(indices: Vec<DynIndex>, value: AnyScalar) -> Result<Self>

Create a copy tensor whose nonzero entries are value on the diagonal.

For indices [i, j, k], the returned tensor satisfies T[i, j, k] = value when i = j = k, and zero otherwise.

§Examples
use tensor4all_core::{AnyScalar, TensorDynLen};
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(2);
let k = Index::new_dyn(2);
let tensor = TensorDynLen::copy_tensor(
    vec![i, j, k],
    AnyScalar::new_real(1.0),
).unwrap();

assert_eq!(tensor.dims(), vec![2, 2, 2]);

pub fn fuse_indices( &self, old_indices: &[DynIndex], new_index: DynIndex, order: LinearizationOrder, ) -> Result<Self>

Replace multiple tensor indices with one fused index using an exact local reshape.

The indices in old_indices identify the axes to fuse by ID and also define the coordinate order used inside new_index. The new fused index is inserted at the earliest axis position among the fused axes; all other axes keep their original relative order. Use LinearizationOrder::ColumnMajor to match tensor4all’s dense vector layout, or LinearizationOrder::RowMajor when interoperating with row-major fused coordinates.

§Arguments
  • old_indices - Non-empty list of existing tensor indices to replace. Each index is matched by ID, must appear exactly once in the tensor, must have the same dimension as the matched tensor axis, and must not be duplicated in this list.
  • new_index - Replacement index whose dimension must equal the product of the dimensions in old_indices.
  • order - Linearization convention used to encode the old coordinates into the single coordinate of new_index.
§Returns

A tensor with the same element type and values, but with old_indices replaced by new_index.

§Errors

Returns an error if old_indices is empty, contains duplicate IDs, references an index not present in the tensor, if the fused dimension does not match the product of the old dimensions, if the replacement would duplicate a kept index, or if the dense reshape cannot be represented without overflow.

§Examples
use tensor4all_core::{DynIndex, LinearizationOrder, TensorDynLen, TensorLike};

let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let fused = DynIndex::new_link(4).unwrap();
let tensor = TensorDynLen::from_dense(
    vec![i.clone(), j.clone()],
    vec![1.0, 2.0, 3.0, 4.0],
).unwrap();

let fused_tensor = tensor
    .fuse_indices(&[i.clone(), j.clone()], fused.clone(), LinearizationOrder::ColumnMajor)
    .unwrap();
assert_eq!(fused_tensor.dims(), vec![4]);

let roundtrip = fused_tensor
    .unfuse_index(&fused, &[i, j], LinearizationOrder::ColumnMajor)
    .unwrap();
assert!(roundtrip.isapprox(&tensor, 1e-12, 0.0));
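For reference, the two linearization conventions differ only in which coordinate varies fastest; a std-only sketch of the encoding for two axes (function names here are illustrative, not crate API):

```rust
// Encode (i, j) with dims (di, dj) into one fused coordinate.
fn fuse_col_major(i: usize, j: usize, di: usize) -> usize {
    i + di * j // first coordinate varies fastest
}

fn fuse_row_major(i: usize, j: usize, dj: usize) -> usize {
    i * dj + j // last coordinate varies fastest
}

fn main() {
    // With di = dj = 2, the same pair maps differently under each order:
    assert_eq!(fuse_col_major(1, 0, 2), 1); // column-major: (1,0) -> 1
    assert_eq!(fuse_col_major(0, 1, 2), 2); // column-major: (0,1) -> 2
    assert_eq!(fuse_row_major(1, 0, 2), 2); // row-major:    (1,0) -> 2
    assert_eq!(fuse_row_major(0, 1, 2), 1); // row-major:    (0,1) -> 1
}
```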

pub fn unfuse_index( &self, old_index: &DynIndex, new_indices: &[DynIndex], order: LinearizationOrder, ) -> Result<Self>

Replace one fused index with multiple indices using an exact reshape.

The caller must specify how the old fused index should be decoded into the new indices via order.

§Examples
use tensor4all_core::{DynIndex, LinearizationOrder, TensorDynLen, TensorLike};

let fused = DynIndex::new_dyn(4);
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![fused.clone()], vec![1.0, 2.0, 3.0, 4.0]).unwrap();

let unfused = tensor
    .unfuse_index(&fused, &[i.clone(), j.clone()], LinearizationOrder::ColumnMajor)
    .unwrap();

let expected = TensorDynLen::from_dense(vec![i, j], vec![1.0, 2.0, 3.0, 4.0]).unwrap();
assert!(unfused.isapprox(&expected, 1e-12, 0.0));

pub fn scalar<T: TensorElement>(value: T) -> Result<Self>

Create a scalar (0-dimensional) tensor from a supported element value.

§Example
use tensor4all_core::TensorDynLen;

let scalar = TensorDynLen::scalar(42.0).unwrap();
assert_eq!(scalar.dims(), Vec::<usize>::new());
assert_eq!(scalar.only().real(), 42.0);

pub fn zeros<T: TensorElement + Zero + Clone>( indices: Vec<DynIndex>, ) -> Result<Self>

Create a tensor filled with zeros of a supported element type.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor = TensorDynLen::zeros::<f64>(vec![i, j]).unwrap();
assert_eq!(tensor.dims(), vec![2, 3]);
Source§

impl TensorDynLen

Source

pub fn to_vec<T: TensorElement>(&self) -> Result<Vec<T>>

Extract tensor data as a column-major Vec<T>.

§Type Parameters
  • T - The scalar element type (f64 or Complex64).
§Returns

A vector of the tensor data in column-major order.

§Errors

Returns an error if the tensor’s scalar type does not match T.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0]).unwrap();
let data = tensor.to_vec::<f64>().unwrap();
assert_eq!(data, &[1.0, 2.0]);
Source

pub fn as_slice_f64(&self) -> Result<Vec<f64>>

Extract tensor data as a column-major Vec<f64>.

Prefer the generic to_vec::<f64>() method. This wrapper is kept for C API compatibility.

Source

pub fn as_slice_c64(&self) -> Result<Vec<Complex64>>

Extract tensor data as a column-major Vec<Complex64>.

Prefer the generic to_vec::<Complex64>() method. This wrapper is kept for C API compatibility.

Source

pub fn is_f64(&self) -> bool

Check if the tensor has f64 storage.

§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

let i = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0]).unwrap();
assert!(tensor.is_f64());
assert!(!tensor.is_complex());
Source

pub fn is_diag(&self) -> bool

Check whether the tensor carries diagonal logical axis metadata.

§Examples
use tensor4all_core::{DynIndex, Storage, TensorDynLen};

// Tensors from `from_dense` use dense storage
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let dense = TensorDynLen::from_dense(vec![i, j], vec![1.0, 0.0, 0.0, 1.0]).unwrap();
assert!(!dense.is_diag());

// Diagonal metadata is preserved when constructing from diagonal storage.
let k = DynIndex::new_dyn(2);
let l = DynIndex::new_dyn(2);
let diag = TensorDynLen::from_storage(
    vec![k, l],
    Storage::from_diag_col_major(vec![1.0, 2.0], 2)
        .map(std::sync::Arc::new)
        .unwrap(),
)
.unwrap();
assert!(diag.is_diag());
Source

pub fn is_complex(&self) -> bool

Check if the tensor has complex storage (C64).

§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
use num_complex::Complex64;

let i = DynIndex::new_dyn(2);
let real_t = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
assert!(!real_t.is_complex());

let complex_t = TensorDynLen::from_dense(
    vec![i],
    vec![Complex64::new(1.0, 0.0), Complex64::new(0.0, 1.0)],
).unwrap();
assert!(complex_t.is_complex());

Trait Implementations§

Source§

impl Clone for TensorDynLen

Source§

fn clone(&self) -> TensorDynLen

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for TensorDynLen

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Mul<&TensorDynLen> for &TensorDynLen

Implements the * operator as tensor contraction.

The * operator contracts the two tensors along all common indices and is equivalent to calling the contract method.

§Example

use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};

// Create two tensors: A[i, j] and B[j, k]
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(4);

let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();

let indices_b = vec![j.clone(), k.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();

// Contract along j using * operator: result is C[i, k]
let result = &tensor_a * &tensor_b;
assert_eq!(result.dims(), vec![2, 4]);
Source§

type Output = TensorDynLen

The resulting type after applying the * operator.
Source§

fn mul(self, other: &TensorDynLen) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<&TensorDynLen> for TensorDynLen

Implements the multiplication operator for tensor contraction (owned lhs, borrowed rhs).

Source§

type Output = TensorDynLen

The resulting type after applying the * operator.
Source§

fn mul(self, other: &TensorDynLen) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<TensorDynLen> for &TensorDynLen

Implements the multiplication operator for tensor contraction (borrowed lhs, owned rhs).

Source§

type Output = TensorDynLen

The resulting type after applying the * operator.
Source§

fn mul(self, other: TensorDynLen) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul for TensorDynLen

Implements the multiplication operator for tensor contraction (owned version).

This allows writing tensor_a * tensor_b when both tensors are owned.

Source§

type Output = TensorDynLen

The resulting type after applying the * operator.
Source§

fn mul(self, other: TensorDynLen) -> Self::Output

Performs the * operation. Read more
Source§

impl Neg for &TensorDynLen

Source§

type Output = TensorDynLen

The resulting type after applying the - operator.
Source§

fn neg(self) -> Self::Output

Performs the unary - operation. Read more
Source§

impl Neg for TensorDynLen

Source§

type Output = TensorDynLen

The resulting type after applying the - operator.
Source§

fn neg(self) -> Self::Output

Performs the unary - operation. Read more
Source§

impl Sub<&TensorDynLen> for &TensorDynLen

Source§

type Output = TensorDynLen

The resulting type after applying the - operator.
Source§

fn sub(self, other: &TensorDynLen) -> Self::Output

Performs the - operation. Read more
Source§

impl Sub<&TensorDynLen> for TensorDynLen

Source§

type Output = TensorDynLen

The resulting type after applying the - operator.
Source§

fn sub(self, other: &TensorDynLen) -> Self::Output

Performs the - operation. Read more
Source§

impl Sub<TensorDynLen> for &TensorDynLen

Source§

type Output = TensorDynLen

The resulting type after applying the - operator.
Source§

fn sub(self, other: TensorDynLen) -> Self::Output

Performs the - operation. Read more
Source§

impl Sub for TensorDynLen

Source§

type Output = TensorDynLen

The resulting type after applying the - operator.
Source§

fn sub(self, other: TensorDynLen) -> Self::Output

Performs the - operation. Read more
Source§

impl TensorAccess for TensorDynLen

Source§

fn indices(&self) -> &[DynIndex]

Get a reference to the indices.
Source§

impl TensorIndex for TensorDynLen

Source§

type Index = Index<DynId>

The index type used by this object.
Source§

fn external_indices(&self) -> Vec<DynIndex>

Return flattened external indices for this object. Read more
Source§

fn num_external_indices(&self) -> usize

Number of external indices. Read more
Source§

fn replaceind(&self, old_index: &DynIndex, new_index: &DynIndex) -> Result<Self>

Replace an index in this object. Read more
Source§

fn replaceinds(&self, old_indices: &[DynIndex], new_indices: &[DynIndex]) -> Result<Self>

Replace multiple indices in this object. Read more
Source§

fn replaceinds_pairs(&self, pairs: &[(Self::Index, Self::Index)]) -> Result<Self>

Replace indices using pairs of (old, new). Read more
Source§

impl TensorLike for TensorDynLen

Source§

fn factorize(&self, left_inds: &[DynIndex], options: &FactorizeOptions) -> Result<FactorizeResult<Self>, FactorizeError>

Factorize this tensor into left and right factors. Read more
Source§

fn factorize_full_rank(&self, left_inds: &[DynIndex], alg: FactorizeAlg, canonical: Canonical) -> Result<FactorizeResult<Self>, FactorizeError>

Factorize this tensor without applying truncation controls. Read more
Source§

fn conj(&self) -> Self

Tensor conjugate operation. Read more
Source§

fn direct_sum(&self, other: &Self, pairs: &[(DynIndex, DynIndex)]) -> Result<DirectSumResult<Self>>

Direct sum of two tensors along specified index pairs. Read more
Source§

fn outer_product(&self, other: &Self) -> Result<Self>

Outer product (tensor product) of two tensors. Read more
Source§

fn norm_squared(&self) -> f64

Compute the squared Frobenius norm of the tensor. Read more
Source§

fn maxabs(&self) -> f64

Maximum absolute value of all elements (L-infinity norm).
Source§

fn permuteinds(&self, new_order: &[DynIndex]) -> Result<Self>

Permute tensor indices to match the specified order. Read more
Source§

fn fuse_indices(&self, old_indices: &[DynIndex], new_index: DynIndex, order: LinearizationOrder) -> Result<Self>

Fuse local tensor indices into one replacement index. Read more
Source§

fn contract(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>

Contract multiple tensors over their contractable indices. Read more
Source§

fn contract_connected(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>

Contract multiple tensors that must form a connected graph. Read more
Source§

fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>

Compute a linear combination: a * self + b * other. Read more
Source§

fn scale(&self, scalar: AnyScalar) -> Result<Self>

Scalar multiplication.
Source§

fn inner_product(&self, other: &Self) -> Result<AnyScalar>

Inner product (dot product) of two tensors. Read more
Source§

fn diagonal(input_index: &DynIndex, output_index: &DynIndex) -> Result<Self>

Create a diagonal (Kronecker delta) tensor for a single index pair. Read more
Source§

fn scalar_one() -> Result<Self>

Create a scalar tensor with value 1.0. Read more
Source§

fn ones(indices: &[DynIndex]) -> Result<Self>

Create a tensor filled with 1.0 for the given indices. Read more
Source§

fn onehot(index_vals: &[(DynIndex, usize)]) -> Result<Self>

Create a one-hot tensor with value 1.0 at the specified index positions. Read more
Source§

fn norm(&self) -> f64

Compute the Frobenius norm of the tensor.
Source§

fn sub(&self, other: &Self) -> Result<Self>

Element-wise subtraction: self - other. Read more
Source§

fn neg(&self) -> Result<Self>

Negate all elements: -self.
Source§

fn isapprox(&self, other: &Self, atol: f64, rtol: f64) -> bool

Approximate equality check (Julia isapprox semantics). Read more
Source§

fn validate(&self) -> Result<()>

Validate structural consistency of this tensor. Read more
Source§

fn delta(input_indices: &[<Self as TensorIndex>::Index], output_indices: &[<Self as TensorIndex>::Index]) -> Result<Self>

Create a delta (identity) tensor as outer product of diagonals. Read more

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
§

impl<U> As for U

§

fn as_<T>(self) -> T
where T: CastFrom<U>,

Casts self to type T. The semantics of numeric casting with the as operator are followed, so <T as As>::as_::<U> can be used in the same way as T as U for numeric conversions. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
§

impl<T> ByRef<T> for T

§

fn by_ref(&self) -> &T

Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
§

impl<T> DistributionExt for T
where T: ?Sized,

§

fn rand<T>(&self, rng: &mut (impl Rng + ?Sized)) -> T
where Self: Distribution<T>,

Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
§

impl<Rhs, Lhs, Output> MulByRef<Rhs> for Lhs
where for<'a> &'a Lhs: Mul<&'a Rhs, Output = Output>,

§

type Output = Output

§

fn mul_by_ref(&self, rhs: &Rhs) -> <Lhs as MulByRef<Rhs>>::Output

§

impl<T, Output> NegByRef for T
where for<'a> &'a T: Neg<Output = Output>,

§

type Output = Output

§

fn neg_by_ref(&self) -> <T as NegByRef>::Output

§

impl<T> Pointable for T

§

const ALIGN: usize

The alignment of the pointer.
§

type Init = T

The type for initializers.
§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer. Read more
§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
§

impl<Rhs, Lhs, Output> SubByRef<Rhs> for Lhs
where for<'a> &'a Lhs: Sub<&'a Rhs, Output = Output>,

§

type Output = Output

§

fn sub_by_ref(&self, rhs: &Rhs) -> <Lhs as SubByRef<Rhs>>::Output

Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

§

fn vzip(self) -> V

§

impl<T, U> Imply<T> for U
where T: ?Sized, U: ?Sized,

§

impl<T> MaybeSend for T

§

impl<T> MaybeSendSync for T

§

impl<T> MaybeSync for T