pub struct TensorDynLen {
pub indices: Vec<DynIndex>,
/* private fields */
}
Dynamic-rank tensor with structured payload storage – the central data type of tensor4all.
TensorDynLen stores a logical multi-dimensional tensor of f64 or
Complex64 values together with a list of DynIndex labels. The
authoritative payload is a compact Storage, which may be dense, diagonal,
or explicitly structured. The indices carry unique identities (UUIDs) so
that contraction, addition, and other binary operations can automatically
match legs by identity rather than position.
§Key Operations
| Operation | Method |
|---|---|
| Create from data | from_dense, from_diag, zeros |
| Extract data | to_vec, sum, only |
| Contraction | contract, * operator |
| Arithmetic | add, scale, axpby, - operator |
| Factorization | via TensorLike::factorize |
| Norms | norm, norm_squared, maxabs |
| Index ops | replaceind, permute_indices |
§Data Layout
Logical dense extraction uses column-major order (first index varies fastest), matching Fortran, Julia, and ITensors.jl conventions. Compact structured payloads additionally carry explicit payload dimensions, strides, and logical-axis classes.
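The column-major rule can be spelled out in a few lines of plain Rust. This is a standalone sketch using only the standard library; col_major_offset is an illustrative helper, not part of the tensor4all API:

```rust
/// Linearize a multi-index into a column-major offset:
/// the first index varies fastest, so offset = i0 + d0*(i1 + d1*(i2 + ...)).
fn col_major_offset(coords: &[usize], dims: &[usize]) -> usize {
    // Fold from the last axis down so the first axis ends up innermost.
    coords.iter().zip(dims).rev().fold(0, |acc, (&c, &d)| acc * d + c)
}

fn main() {
    // For a 2x3 tensor stored as [1,2,3,4,5,6], element T[i,j] sits at i + 2*j.
    let data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    assert_eq!(col_major_offset(&[0, 0], &[2, 3]), 0); // T[0,0] = 1.0
    assert_eq!(col_major_offset(&[1, 0], &[2, 3]), 1); // T[1,0] = 2.0
    assert_eq!(col_major_offset(&[0, 1], &[2, 3]), 2); // T[0,1] = 3.0
    assert_eq!(col_major_offset(&[1, 2], &[2, 3]), 5); // T[1,2] = 6.0
    assert_eq!(data[col_major_offset(&[1, 2], &[2, 3])], 6.0);
}
```

The same arithmetic explains why, in the examples below, the 2x3 payload [1,2,3,4,5,6] reads as columns (1,2), (3,4), (5,6).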
§Examples
use tensor4all_core::{TensorDynLen, DynIndex};
// Create a 2x3 real tensor
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let data = vec![1.0_f64, 2.0, 3.0, 4.0, 5.0, 6.0];
let t = TensorDynLen::from_dense(vec![i.clone(), j.clone()], data).unwrap();
assert_eq!(t.dims(), vec![2, 3]);
assert!(t.is_f64());
// Sum all elements: 1+2+3+4+5+6 = 21
let s = t.sum();
assert!((s.real() - 21.0).abs() < 1e-12);
// Extract data back out
let data_out = t.to_vec::<f64>().unwrap();
assert_eq!(data_out, vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);
§Fields
indices: Vec<DynIndex>
Full index information (includes tags and other metadata).
§Implementations
impl TensorDynLen
pub fn dims(&self) -> Vec<usize>
Get the dimensions in the current index order.
This is computed on-demand from indices (single source of truth).
§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let k = DynIndex::new_dyn(4);
let t = TensorDynLen::from_dense(
vec![i, j, k],
vec![0.0; 24],
).unwrap();
assert_eq!(t.dims(), vec![2, 3, 4]);
pub fn select_indices(
    &self,
    selected_indices: &[DynIndex],
    positions: &[usize],
) -> Result<Self>
Select fixed coordinates for tensor indices and drop those axes.
The selected_indices slice identifies tensor axes by index identity,
and positions gives the zero-based coordinate to take on each
selected axis. Unselected indices are preserved in their original order.
§Arguments
selected_indices - Indices to fix and remove from the result. Each index must appear exactly once in this tensor.
positions - Coordinates for selected_indices. Each coordinate must be less than the corresponding index dimension.
§Returns
A tensor over the unselected indices. Selecting no indices returns a clone of the original tensor. Selecting all indices returns a rank-0 scalar tensor. Diagonal and structured tensors are sliced from their compact payload without materializing the original full tensor; the result keeps structured storage when the remaining logical axes can still be represented by axis classes.
§Errors
Returns an error if the argument lengths differ, a selected index is not present, a selected index is duplicated, or a coordinate is out of range.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = TensorDynLen::from_dense(vec![i.clone(), j.clone()], data).unwrap();
let selected = tensor.select_indices(&[j], &[1]).unwrap();
assert_eq!(selected.dims(), vec![2]);
assert_eq!(selected.to_vec::<f64>().unwrap(), vec![3.0, 4.0]);
pub fn new(indices: Vec<DynIndex>, storage: Arc<Storage>) -> Self
Create a new tensor with dynamic rank.
§Panics
Panics if the storage is Diag and not all indices have the same dimension. Panics if there are duplicate indices.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, Storage};
use std::sync::Arc;
let i = DynIndex::new_dyn(3);
let storage = Arc::new(Storage::new_dense::<f64>(3));
let t = TensorDynLen::new(vec![i], storage);
assert_eq!(t.dims(), vec![3]);
pub fn from_indices(indices: Vec<DynIndex>, storage: Arc<Storage>) -> Self
Create a new tensor with dynamic rank, automatically computing dimensions from indices.
This is a convenience constructor that extracts dimensions from indices using IndexLike::dim().
§Panics
Panics if the storage is Diag and not all indices have the same dimension. Panics if there are duplicate indices.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, Storage};
use std::sync::Arc;
let i = DynIndex::new_dyn(4);
let storage = Arc::new(Storage::new_dense::<f64>(4));
let t = TensorDynLen::from_indices(vec![i], storage);
assert_eq!(t.dims(), vec![4]);
pub fn from_storage(
    indices: Vec<DynIndex>,
    storage: Arc<Storage>,
) -> Result<Self>
Create a tensor from explicit compact storage.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen, Storage};
use std::sync::Arc;
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let storage = Arc::new(Storage::new_diag(vec![1.0_f64, 2.0]));
let t = TensorDynLen::from_storage(vec![i, j], storage).unwrap();
assert_eq!(t.dims(), vec![2, 2]);
pub fn from_structured_storage(
    indices: Vec<DynIndex>,
    storage: Arc<Storage>,
) -> Result<Self>
Create a tensor from explicit structured storage.
This is an alias for TensorDynLen::from_storage with a name that
emphasizes that compact structured metadata is preserved.
§Errors
Returns an error if the storage logical dimensions do not match the supplied indices, or if duplicate indices are provided.
§Examples
use std::sync::Arc;
use tensor4all_core::{DynIndex, Storage, StorageKind, TensorDynLen};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let storage = Arc::new(Storage::from_diag_col_major(vec![1.0_f64, 2.0], 2).unwrap());
let tensor = TensorDynLen::from_structured_storage(vec![i, j], storage).unwrap();
assert_eq!(tensor.storage().storage_kind(), StorageKind::Diagonal);
pub fn enable_grad(self) -> Self
Enable reverse-mode AD tracking on this tensor by creating a tracked leaf.
pub fn tracks_grad(&self) -> bool
Report whether this tensor participates in gradient tracking.
pub fn grad(&self) -> Result<Option<Self>>
Return the accumulated gradient, if one has been stored.
pub fn clear_grad(&self) -> Result<()>
Clear the accumulated gradient stored for this tensor.
pub fn to_storage(&self) -> Result<Arc<Storage>>
Materialize the primal snapshot as storage.
pub fn sum(&self) -> AnyScalar
Sum all elements, returning AnyScalar.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(3);
let t = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0, 3.0]).unwrap();
let s = t.sum();
assert!((s.real() - 6.0).abs() < 1e-12);
pub fn only(&self) -> AnyScalar
Extract the scalar value from a 0-dimensional tensor (or 1-element tensor).
This is similar to Julia’s only() function.
§Panics
Panics if the tensor has more than one element.
§Example
use tensor4all_core::{TensorDynLen, AnyScalar};
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create a scalar tensor (0 dimensions, 1 element)
let indices: Vec<Index<DynId>> = vec![];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![42.0]).unwrap();
assert_eq!(tensor.only().real(), 42.0);
pub fn permute_indices(&self, new_indices: &[DynIndex]) -> Self
Permute the tensor dimensions using the given new indices order.
This is the main permutation method that takes the desired new indices and automatically computes the corresponding permutation of dimensions and data. The new indices must be a permutation of the original indices (matched by ID).
§Arguments
new_indices - The desired new index order. Must be a permutation of self.indices (matched by ID).
§Panics
Panics if new_indices.len() != self.indices.len(), if any index ID
doesn’t match, or if there are duplicate indices.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create a 2×3 tensor
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Permute to 3×2: swap the two dimensions by providing new indices order
let permuted = tensor.permute_indices(&[j, i]);
assert_eq!(permuted.dims(), vec![3, 2]);
pub fn permute(&self, perm: &[usize]) -> Self
Permute the tensor dimensions, returning a new tensor.
This method reorders the indices, dimensions, and data according to the
given permutation. The permutation specifies which old axis each new
axis corresponds to: new_axis[i] = old_axis[perm[i]].
§Arguments
perm - The permutation: perm[i] is the old axis index for new axis i.
§Panics
Panics if perm.len() != self.indices.len() or if the permutation is invalid.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create a 2×3 tensor
let indices = vec![
Index::new_dyn(2),
Index::new_dyn(3),
];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Permute to 3×2: swap the two dimensions
let permuted = tensor.permute(&[1, 0]);
assert_eq!(permuted.dims(), vec![3, 2]);
pub fn contract(&self, other: &Self) -> Self
Contract this tensor with another tensor along common indices.
This method finds common indices between self and other, then contracts
along those indices. The result tensor contains all non-contracted indices
from both tensors, with indices from self appearing first, followed by
indices from other that are not common.
§Arguments
other - The tensor to contract with.
§Returns
A new tensor resulting from the contraction.
§Panics
Panics if there are no common indices, if common indices have mismatched dimensions, or if storage types don’t match.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create two tensors: A[i, j] and B[j, k]
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(4);
let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();
let indices_b = vec![j.clone(), k.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();
// Contract along j with the default pairwise semantics: result is C[i, k]
let result = tensor_a.contract(&tensor_b);
assert_eq!(result.dims(), vec![2, 4]);
pub fn contract_with_options(
    &self,
    other: &Self,
    options: ContractionOptions<'_>,
) -> Result<Self>
Contract this tensor with another tensor using explicit contraction options.
§Arguments
other - The tensor to contract with.
options - Pair-selection policy and retained indices.
§Returns
The contracted tensor, or an error if the contraction cannot be built.
§Errors
Returns an error if the tensors are disconnected, retained indices are invalid, or the contraction plan cannot be executed.
§Examples
use tensor4all_core::{AllowedPairs, ContractionOptions, DynIndex, TensorDynLen};
let batch = DynIndex::new_dyn(2);
let i = DynIndex::new_dyn(2);
let k = DynIndex::new_dyn(3);
let j = DynIndex::new_dyn(2);
let a = TensorDynLen::from_dense(
vec![batch.clone(), i.clone(), k.clone()],
vec![1.0_f64; 12],
).unwrap();
let b = TensorDynLen::from_dense(
vec![batch.clone(), k.clone(), j.clone()],
vec![1.0_f64; 12],
).unwrap();
let retain = [batch.clone()];
let options = ContractionOptions::new(AllowedPairs::All).with_retain_indices(&retain);
let result = a.contract_with_options(&b, options).unwrap();
assert_eq!(result.indices(), &[batch, i, j]);
assert_eq!(result.dims(), vec![2, 2, 2]);
assert_eq!(result.to_vec::<f64>().unwrap(), vec![3.0; 8]);
pub fn tensordot(
    &self,
    other: &Self,
    pairs: &[(DynIndex, DynIndex)],
) -> Result<Self>
Contract this tensor with another tensor along explicitly specified index pairs.
Similar to NumPy’s tensordot, this method contracts only along the explicitly
specified pairs of indices. Unlike contract(), which automatically contracts
all common indices, tensordot gives you explicit control over which indices
to contract.
§Arguments
other - The tensor to contract with.
pairs - Pairs of indices to contract: (index_from_self, index_from_other).
§Returns
A new tensor resulting from the contraction, or an error if:
- Any specified index is not found in the respective tensor
- Dimensions don’t match for any pair
- The same axis is specified multiple times in self or other
- There are common indices (same ID) that are not in the contraction pairs (batch contraction is not yet implemented)
§Future: Batch Contraction
In a future version, common indices not specified in pairs will be treated
as batch dimensions (like batched GEMM). Currently, this case returns an error.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create two tensors: A[i, j] and B[k, l] where j and k have same dimension but different IDs
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(3); // Same dimension as j, but different ID
let l = Index::new_dyn(4);
let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();
let indices_b = vec![k.clone(), l.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();
// Contract j (from A) with k (from B): result is C[i, l]
let result = tensor_a.tensordot(&tensor_b, &[(j.clone(), k.clone())]).unwrap();
assert_eq!(result.dims(), vec![2, 4]);
pub fn outer_product(&self, other: &Self) -> Result<Self>
Compute the outer product (tensor product) of two tensors.
Creates a new tensor whose indices are the concatenation of the indices
from both input tensors. The result has shape [...self.dims, ...other.dims].
This is equivalent to numpy’s np.outer or np.tensordot(a, b, axes=0),
or ITensor’s * operator when there are no common indices.
§Arguments
other - The other tensor to compute the outer product with.
§Returns
A new tensor with indices from both tensors.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor_a: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
let tensor_b: TensorDynLen =
TensorDynLen::from_dense(vec![j.clone()], vec![1.0, 2.0, 3.0]).unwrap();
// Outer product: C[i, j] = A[i] * B[j]
let result = tensor_a.outer_product(&tensor_b).unwrap();
assert_eq!(result.dims(), vec![2, 3]);
impl TensorDynLen
pub fn random<T: RandomScalar, R: Rng>(
    rng: &mut R,
    indices: Vec<DynIndex>,
) -> Self
Create a random tensor with values from standard normal distribution (generic over scalar type).
For f64, each element is drawn from the standard normal distribution.
For Complex64, both real and imaginary parts are drawn independently.
§Type Parameters
T - The scalar element type (must implement RandomScalar)
R - The random number generator type
§Arguments
rng - Random number generator
indices - The indices for the tensor
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
use rand::SeedableRng;
use rand_chacha::ChaCha8Rng;
let mut rng = ChaCha8Rng::seed_from_u64(42);
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor: TensorDynLen = TensorDynLen::random::<f64, _>(&mut rng, vec![i, j]);
assert_eq!(tensor.dims(), vec![2, 3]);
impl TensorDynLen
pub fn add(&self, other: &Self) -> Result<Self>
Add two tensors element-wise.
The tensors must have the same index set (matched by ID). If the indices
are in a different order, the other tensor will be permuted to match self.
§Arguments
other - The tensor to add.
§Returns
A new tensor representing self + other, or an error if:
- The tensors have different index sets
- The dimensions don’t match
- Storage types are incompatible
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let indices_a = vec![i.clone(), j.clone()];
let data_a = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, data_a).unwrap();
let indices_b = vec![i.clone(), j.clone()];
let data_b = vec![1.0, 1.0, 1.0, 1.0, 1.0, 1.0];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, data_b).unwrap();
let sum = tensor_a.add(&tensor_b).unwrap();
assert_eq!(sum.to_vec::<f64>().unwrap(), vec![2.0, 3.0, 4.0, 5.0, 6.0, 7.0]);
pub fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>
Compute a linear combination: a * self + b * other.
Both tensors must have the same set of indices (matched by ID).
If indices are in a different order, other is automatically permuted
to match self.
§Examples
use tensor4all_core::{AnyScalar, DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(2);
let a = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
let b = TensorDynLen::from_dense(vec![i.clone()], vec![3.0, 4.0]).unwrap();
// 2*a + 3*b = [2+9, 4+12] = [11, 16]
let result = a.axpby(AnyScalar::new_real(2.0), &b, AnyScalar::new_real(3.0)).unwrap();
let data = result.to_vec::<f64>().unwrap();
assert!((data[0] - 11.0).abs() < 1e-12);
assert!((data[1] - 16.0).abs() < 1e-12);
pub fn scale(&self, scalar: AnyScalar) -> Result<Self>
Scalar multiplication.
Multiplies every element by scalar.
§Examples
use tensor4all_core::{AnyScalar, DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(3);
let t = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0, 3.0]).unwrap();
let scaled = t.scale(AnyScalar::new_real(2.0)).unwrap();
assert_eq!(scaled.to_vec::<f64>().unwrap(), vec![2.0, 4.0, 6.0]);
pub fn inner_product(&self, other: &Self) -> Result<AnyScalar>
Inner product (dot product) of two tensors.
Computes ⟨self, other⟩ = Σ conj(self)_i * other_i.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(3);
let a = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0, 3.0]).unwrap();
let b = TensorDynLen::from_dense(vec![i.clone()], vec![4.0, 5.0, 6.0]).unwrap();
// <a, b> = 1*4 + 2*5 + 3*6 = 32
let ip = a.inner_product(&b).unwrap();
assert!((ip.real() - 32.0).abs() < 1e-12);
impl TensorDynLen
pub fn replaceind(&self, old_index: &DynIndex, new_index: &DynIndex) -> Self
Replace an index in the tensor with a new index.
This replaces the index matching old_index by ID with new_index.
The storage data is not modified, only the index metadata is changed.
§Arguments
old_index - The index to replace (matched by ID)
new_index - The new index to use
§Returns
A new tensor with the index replaced. If no index matches old_index,
returns a clone of the original tensor.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let new_i = Index::new_dyn(2); // Same dimension, different ID
let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Replace index i with new_i
let replaced = tensor.replaceind(&i, &new_i);
assert_eq!(replaced.indices[0].id, new_i.id);
assert_eq!(replaced.indices[1].id, j.id);
pub fn replaceinds(
    &self,
    old_indices: &[DynIndex],
    new_indices: &[DynIndex],
) -> Self
Replace multiple indices in the tensor.
This replaces each index in old_indices (matched by ID) with the corresponding
index in new_indices. The storage data is not modified.
§Arguments
old_indices - The indices to replace (matched by ID)
new_indices - The new indices to use
§Panics
Panics if old_indices and new_indices have different lengths.
§Returns
A new tensor with the indices replaced. Indices not found in old_indices
are kept unchanged.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let new_i = Index::new_dyn(2);
let new_j = Index::new_dyn(3);
let indices = vec![i.clone(), j.clone()];
let tensor: TensorDynLen = TensorDynLen::from_dense(indices, vec![0.0; 6]).unwrap();
// Replace both indices
let replaced = tensor.replaceinds(&[i.clone(), j.clone()], &[new_i.clone(), new_j.clone()]);
assert_eq!(replaced.indices[0].id, new_i.id);
assert_eq!(replaced.indices[1].id, new_j.id);
impl TensorDynLen
pub fn conj(&self) -> Self
Complex conjugate of all tensor elements.
For real (f64) tensors, returns a copy (conjugate of real is identity). For complex (Complex64) tensors, conjugates each element.
The indices and dimensions remain unchanged.
This is inspired by the conj operation in ITensorMPS.jl.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
use num_complex::Complex64;
let i = Index::new_dyn(2);
let data = vec![Complex64::new(1.0, 2.0), Complex64::new(3.0, -4.0)];
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i], data).unwrap();
let conj_tensor = tensor.conj();
// Elements are now conjugated: 1-2i, 3+4i
impl TensorDynLen
pub fn norm_squared(&self) -> f64
Compute the squared Frobenius norm of the tensor: ||T||² = Σ|T_ijk…|²
For real tensors: sum of squares of all elements. For complex tensors: sum of |z|² = z * conj(z) for all elements.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; // 1² + 2² + ... + 6² = 91
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i, j], data).unwrap();
assert!((tensor.norm_squared() - 91.0).abs() < 1e-10);
pub fn norm(&self) -> f64
Compute the Frobenius norm of the tensor: ||T|| = sqrt(Σ|T_ijk…|²)
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let data = vec![3.0, 4.0]; // sqrt(9 + 16) = 5
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i], data).unwrap();
assert!((tensor.norm() - 5.0).abs() < 1e-10);
pub fn maxabs(&self) -> f64
Maximum absolute value of all elements (L-infinity norm).
§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(4);
let t = TensorDynLen::from_dense(vec![i], vec![-5.0, 1.0, 3.0, -2.0]).unwrap();
assert!((t.maxabs() - 5.0).abs() < 1e-12);
pub fn distance(&self, other: &Self) -> f64
Compute the relative distance between two tensors.
Returns ||A - B|| / ||A|| (Frobenius norm).
If ||A|| = 0, returns ||B|| instead to avoid division by zero.
This is the ITensor-style distance function useful for comparing tensors.
§Arguments
other- The other tensor to compare with
§Returns
The relative distance as a f64 value.
§Note
The indices of both tensors must be permutable to each other. The result tensor (A - B) uses the index ordering from self.
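As a plain-Rust sanity check of the formula (standalone sketch using only the standard library; rel_distance is an illustrative helper, not a crate API):

```rust
// Relative Frobenius distance ||a - b|| / ||a||, falling back to ||b|| when ||a|| = 0,
// mirroring the convention described above.
fn rel_distance(a: &[f64], b: &[f64]) -> f64 {
    let norm = |v: &[f64]| v.iter().map(|x| x * x).sum::<f64>().sqrt();
    let diff: Vec<f64> = a.iter().zip(b).map(|(x, y)| x - y).collect();
    let na = norm(a);
    if na == 0.0 { norm(b) } else { norm(&diff) / na }
}

fn main() {
    // ||A - B|| = ||[0, 4]|| = 4 and ||A|| = 5, so the relative distance is 0.8.
    assert!((rel_distance(&[3.0, 4.0], &[3.0, 0.0]) - 0.8).abs() < 1e-12);
    // Identical tensors have zero distance.
    assert!(rel_distance(&[1.0, 2.0], &[1.0, 2.0]).abs() < 1e-12);
}
```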
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let data_a = vec![1.0, 0.0];
let data_b = vec![1.0, 0.0]; // Same tensor
let tensor_a: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], data_a).unwrap();
let tensor_b: TensorDynLen = TensorDynLen::from_dense(vec![i.clone()], data_b).unwrap();
assert!(tensor_a.distance(&tensor_b) < 1e-10); // Zero distance
impl TensorDynLen
pub fn from_dense<T: TensorElement>(
    indices: Vec<DynIndex>,
    data: Vec<T>,
) -> Result<Self>
Create a tensor from dense data with explicit indices.
This is the recommended high-level API for creating tensors from raw data.
It avoids direct access to Storage internals.
§Type Parameters
T - Scalar type (f64 or Complex64)
§Arguments
indices - Vector of indices for the tensor
data - Tensor data in column-major order
§Panics
Panics if data length doesn’t match the product of index dimensions.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor: TensorDynLen = TensorDynLen::from_dense(vec![i, j], data).unwrap();
assert_eq!(tensor.dims(), vec![2, 3]);
pub fn from_dense_any(
    indices: Vec<DynIndex>,
    data: Vec<AnyScalar>,
) -> Result<Self>
Create a tensor from dense payload data provided as AnyScalar values.
This is the preferred public API when the caller only knows the scalar type at runtime.
§Examples
use tensor4all_core::{AnyScalar, TensorDynLen};
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense_any(
vec![i, j],
vec![
AnyScalar::new_real(1.0),
AnyScalar::new_complex(0.0, 1.0),
AnyScalar::new_real(2.0),
AnyScalar::new_real(3.0),
],
).unwrap();
assert!(tensor.is_complex());
assert_eq!(tensor.dims(), vec![2, 2]);
pub fn from_diag<T: TensorElement>(
    indices: Vec<DynIndex>,
    data: Vec<T>,
) -> Result<Self>
Create a diagonal tensor from diagonal payload data with explicit indices.
All indices must have the same dimension, and data.len() must equal
that dimension. The resulting tensor has nonzero entries only on
the multi-index diagonal (T[i,i,...,i] = data[i]).
The returned tensor preserves compact diagonal payload metadata; use
TensorDynLen::is_diag or TensorDynLen::storage to inspect that
representation.
§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
let i = DynIndex::new_dyn(3);
let j = DynIndex::new_dyn(3);
let diag = TensorDynLen::from_diag(vec![i, j], vec![1.0, 2.0, 3.0]).unwrap();
assert!(diag.is_diag());
let data = diag.to_vec::<f64>().unwrap();
// 3x3 identity-like: [1,0,0, 0,2,0, 0,0,3] in column-major
assert!((data[0] - 1.0).abs() < 1e-12);
assert!((data[4] - 2.0).abs() < 1e-12);
assert!((data[8] - 3.0).abs() < 1e-12);
assert!((data[1]).abs() < 1e-12); // off-diagonal is zero
pub fn from_diag_any(
    indices: Vec<DynIndex>,
    data: Vec<AnyScalar>,
) -> Result<Self>
Create a diagonal tensor from diagonal payload data provided as
AnyScalar values.
This is the preferred public API when the caller only knows the scalar type at runtime.
§Examples
use tensor4all_core::{AnyScalar, TensorDynLen};
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(2);
let tensor = TensorDynLen::from_diag_any(
vec![i, j],
vec![AnyScalar::new_real(1.0), AnyScalar::new_complex(2.0, -1.0)],
).unwrap();
assert!(tensor.is_complex());
assert_eq!(tensor.dims(), vec![2, 2]);
pub fn copy_tensor(indices: Vec<DynIndex>, value: AnyScalar) -> Result<Self>
Create a copy tensor whose nonzero entries are value on the diagonal.
For indices [i, j, k], the returned tensor satisfies
T[i, j, k] = value when i = j = k, and zero otherwise.
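Where those diagonal entries land in a dense column-major layout follows from simple arithmetic: for equal axis dimension d and rank r, the entry (i, i, ..., i) sits at offset i*(1 + d + d^2 + ... + d^(r-1)). A standalone plain-Rust check (diag_offset is an illustrative helper, not a crate API):

```rust
// Column-major offset of the diagonal entry (i, i, ..., i) for a tensor of
// rank `rank` whose axes all have dimension `dim`.
fn diag_offset(i: usize, dim: usize, rank: usize) -> usize {
    (0..rank).fold(0, |acc, axis| acc + i * dim.pow(axis as u32))
}

fn main() {
    // For a 2x2x2 copy tensor the nonzero entries sit at offsets 0 and 7.
    assert_eq!(diag_offset(0, 2, 3), 0);
    assert_eq!(diag_offset(1, 2, 3), 7);
    // For a 3x3 diagonal matrix they sit at 0, 4, 8, matching the from_diag
    // example above, which checks data[0], data[4], and data[8].
    assert_eq!(diag_offset(1, 3, 2), 4);
    assert_eq!(diag_offset(2, 3, 2), 8);
}
```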
§Examples
use tensor4all_core::{AnyScalar, TensorDynLen};
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(2);
let k = Index::new_dyn(2);
let tensor = TensorDynLen::copy_tensor(
vec![i, j, k],
AnyScalar::new_real(1.0),
).unwrap();
assert_eq!(tensor.dims(), vec![2, 2, 2]);
pub fn fuse_indices(
    &self,
    old_indices: &[DynIndex],
    new_index: DynIndex,
    order: LinearizationOrder,
) -> Result<Self>
Replace multiple tensor indices with one fused index using an exact local reshape.
The indices in old_indices identify the axes to fuse by ID and also
define the coordinate order used inside new_index. The new fused index
is inserted at the earliest axis position among the fused axes; all
other axes keep their original relative order. Use
LinearizationOrder::ColumnMajor to match tensor4all’s dense vector
layout, or LinearizationOrder::RowMajor when interoperating with
row-major fused coordinates.
§Arguments
old_indices - Non-empty list of existing tensor indices to replace. Each index is matched by ID, must appear exactly once in the tensor, must have the same dimension as the matched tensor axis, and must not be duplicated in this list.
new_index - Replacement index whose dimension must equal the product of the dimensions in old_indices.
order - Linearization convention used to encode the old coordinates into the single coordinate of new_index.
§Returns
A tensor with the same element type and values, but with old_indices
replaced by new_index.
§Errors
Returns an error if old_indices is empty, contains duplicate IDs,
references an index not present in the tensor, if the fused dimension
does not match the product of the old dimensions, if the replacement
would duplicate a kept index, or if the dense reshape cannot be
represented without overflow.
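For a pair of axes, the two linearization conventions reduce to simple index arithmetic. The following standalone plain-Rust sketch (illustrative helpers, not crate APIs) shows how the same coordinate pair fuses differently under each order:

```rust
// Fuse coordinates (i, j) with dims (di, dj) into one linear coordinate.
// Column-major: the first axis varies fastest. Row-major: the last axis does.
fn fuse_col_major(i: usize, j: usize, di: usize) -> usize {
    i + di * j
}
fn fuse_row_major(i: usize, j: usize, dj: usize) -> usize {
    j + dj * i
}

fn main() {
    // With di = 2, dj = 3, an interior coordinate shows the difference:
    assert_eq!(fuse_col_major(0, 1, 2), 2); // 0 + 2*1
    assert_eq!(fuse_row_major(0, 1, 3), 1); // 1 + 3*0
    // The last coordinate maps to the last slot under both orders.
    assert_eq!(fuse_col_major(1, 2, 2), 5); // 1 + 2*2
    assert_eq!(fuse_row_major(1, 2, 3), 5); // 2 + 3*1
}
```

Choosing ColumnMajor keeps the fused coordinate consistent with the dense vector layout described under §Data Layout.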
§Examples
use tensor4all_core::{DynIndex, LinearizationOrder, TensorDynLen, TensorLike};
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let fused = DynIndex::new_link(4).unwrap();
let tensor = TensorDynLen::from_dense(
vec![i.clone(), j.clone()],
vec![1.0, 2.0, 3.0, 4.0],
).unwrap();
let fused_tensor = tensor
.fuse_indices(&[i.clone(), j.clone()], fused.clone(), LinearizationOrder::ColumnMajor)
.unwrap();
assert_eq!(fused_tensor.dims(), vec![4]);
let roundtrip = fused_tensor
.unfuse_index(&fused, &[i, j], LinearizationOrder::ColumnMajor)
.unwrap();
assert!(roundtrip.isapprox(&tensor, 1e-12, 0.0));
pub fn unfuse_index(
    &self,
    old_index: &DynIndex,
    new_indices: &[DynIndex],
    order: LinearizationOrder,
) -> Result<Self>
Replace one fused index with multiple indices using an exact reshape.
The caller must specify how the old fused index should be decoded into
the new indices via order.
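As a plain-Rust sketch (illustrative only) of the column-major decoding that LinearizationOrder::ColumnMajor implies for two target indices:

```rust
// Illustrative only: decoding a fused column-major coordinate k back into
// the pair (i, j) with first dimension d_i; the first index varies fastest.
fn unfuse_col_major(k: usize, d_i: usize) -> (usize, usize) {
    (k % d_i, k / d_i)
}

fn main() {
    let d_i = 2;
    // Round-trip: fusing (i, j) as i + j * d_i and decoding recovers the pair.
    for i in 0..2 {
        for j in 0..3 {
            let k = i + j * d_i;
            assert_eq!(unfuse_col_major(k, d_i), (i, j));
        }
    }
}
```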
§Examples
use tensor4all_core::{DynIndex, LinearizationOrder, TensorDynLen, TensorLike};
let fused = DynIndex::new_dyn(4);
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![fused.clone()], vec![1.0, 2.0, 3.0, 4.0]).unwrap();
let unfused = tensor
.unfuse_index(&fused, &[i.clone(), j.clone()], LinearizationOrder::ColumnMajor)
.unwrap();
let expected = TensorDynLen::from_dense(vec![i, j], vec![1.0, 2.0, 3.0, 4.0]).unwrap();
assert!(unfused.isapprox(&expected, 1e-12, 0.0));
Sourcepub fn scalar<T: TensorElement>(value: T) -> Result<Self>
pub fn scalar<T: TensorElement>(value: T) -> Result<Self>
Create a scalar (0-dimensional) tensor from a supported element value.
§Example
use tensor4all_core::TensorDynLen;
let scalar = TensorDynLen::scalar(42.0).unwrap();
assert_eq!(scalar.dims(), Vec::<usize>::new());
assert_eq!(scalar.only().real(), 42.0);
Sourcepub fn zeros<T: TensorElement + Zero + Clone>(
indices: Vec<DynIndex>,
) -> Result<Self>
pub fn zeros<T: TensorElement + Zero + Clone>( indices: Vec<DynIndex>, ) -> Result<Self>
Create a tensor filled with zeros of a supported element type.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let tensor = TensorDynLen::zeros::<f64>(vec![i, j]).unwrap();
assert_eq!(tensor.dims(), vec![2, 3]);
Source§impl TensorDynLen
impl TensorDynLen
Sourcepub fn to_vec<T: TensorElement>(&self) -> Result<Vec<T>>
pub fn to_vec<T: TensorElement>(&self) -> Result<Vec<T>>
Extract tensor data as a column-major Vec<T>.
§Type Parameters
T - The scalar element type (f64 or Complex64).
§Returns
A vector of the tensor data in column-major order.
§Errors
Returns an error if the tensor’s scalar type does not match T.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0]).unwrap();
let data = tensor.to_vec::<f64>().unwrap();
assert_eq!(data, &[1.0, 2.0]);
Sourcepub fn as_slice_f64(&self) -> Result<Vec<f64>>
pub fn as_slice_f64(&self) -> Result<Vec<f64>>
Extract tensor data as a column-major Vec<f64>.
Prefer the generic to_vec::<f64>() method.
This wrapper is kept for C API compatibility.
Sourcepub fn as_slice_c64(&self) -> Result<Vec<Complex64>>
pub fn as_slice_c64(&self) -> Result<Vec<Complex64>>
Extract tensor data as a column-major Vec<Complex64>.
Prefer the generic to_vec::<Complex64>() method.
This wrapper is kept for C API compatibility.
Sourcepub fn is_f64(&self) -> bool
pub fn is_f64(&self) -> bool
Check if the tensor has f64 storage.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
let i = Index::new_dyn(2);
let tensor = TensorDynLen::from_dense(vec![i], vec![1.0, 2.0]).unwrap();
assert!(tensor.is_f64());
assert!(!tensor.is_complex());
Sourcepub fn is_diag(&self) -> bool
pub fn is_diag(&self) -> bool
Check whether the tensor carries diagonal logical axis metadata.
§Examples
use tensor4all_core::{DynIndex, Storage, TensorDynLen};
// Tensors from `from_dense` use dense storage
let i = DynIndex::new_dyn(2);
let j = DynIndex::new_dyn(2);
let dense = TensorDynLen::from_dense(vec![i, j], vec![1.0, 0.0, 0.0, 1.0]).unwrap();
assert!(!dense.is_diag());
// Diagonal metadata is preserved when constructing from diagonal storage.
let k = DynIndex::new_dyn(2);
let l = DynIndex::new_dyn(2);
let diag = TensorDynLen::from_storage(
vec![k, l],
Storage::from_diag_col_major(vec![1.0, 2.0], 2)
.map(std::sync::Arc::new)
.unwrap(),
)
.unwrap();
assert!(diag.is_diag());
Sourcepub fn is_complex(&self) -> bool
pub fn is_complex(&self) -> bool
Check if the tensor has complex storage (C64).
§Examples
use tensor4all_core::{DynIndex, TensorDynLen};
use num_complex::Complex64;
let i = DynIndex::new_dyn(2);
let real_t = TensorDynLen::from_dense(vec![i.clone()], vec![1.0, 2.0]).unwrap();
assert!(!real_t.is_complex());
let complex_t = TensorDynLen::from_dense(
vec![i],
vec![Complex64::new(1.0, 0.0), Complex64::new(0.0, 1.0)],
).unwrap();
assert!(complex_t.is_complex());
Trait Implementations§
Source§impl Clone for TensorDynLen
impl Clone for TensorDynLen
Source§fn clone(&self) -> TensorDynLen
fn clone(&self) -> TensorDynLen
1.0.0 · Source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more
Source§impl Debug for TensorDynLen
impl Debug for TensorDynLen
Source§impl Mul<&TensorDynLen> for &TensorDynLen
Implement multiplication operator for tensor contraction.
impl Mul<&TensorDynLen> for &TensorDynLen
Implement multiplication operator for tensor contraction.
The * operator performs tensor contraction along common indices.
This is equivalent to calling the contract method.
§Example
use tensor4all_core::TensorDynLen;
use tensor4all_core::index::{DefaultIndex as Index, DynId};
// Create two tensors: A[i, j] and B[j, k]
let i = Index::new_dyn(2);
let j = Index::new_dyn(3);
let k = Index::new_dyn(4);
let indices_a = vec![i.clone(), j.clone()];
let tensor_a: TensorDynLen = TensorDynLen::from_dense(indices_a, vec![0.0; 6]).unwrap();
let indices_b = vec![j.clone(), k.clone()];
let tensor_b: TensorDynLen = TensorDynLen::from_dense(indices_b, vec![0.0; 12]).unwrap();
// Contract along j using * operator: result is C[i, k]
let result = &tensor_a * &tensor_b;
assert_eq!(result.dims(), vec![2, 4]);Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the * operator.
Source§impl Mul<&TensorDynLen> for TensorDynLen
Implement multiplication operator for tensor contraction (mixed owned/reference).
impl Mul<&TensorDynLen> for TensorDynLen
Implement multiplication operator for tensor contraction (mixed owned/reference).
Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the * operator.
Source§impl Mul<TensorDynLen> for &TensorDynLen
Implement multiplication operator for tensor contraction (mixed reference/owned).
impl Mul<TensorDynLen> for &TensorDynLen
Implement multiplication operator for tensor contraction (mixed reference/owned).
Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the * operator.
Source§impl Mul for TensorDynLen
Implement multiplication operator for tensor contraction (owned version).
impl Mul for TensorDynLen
Implement multiplication operator for tensor contraction (owned version).
This allows using tensor_a * tensor_b when both tensors are owned.
Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the * operator.
Source§impl Neg for &TensorDynLen
impl Neg for &TensorDynLen
Source§impl Neg for TensorDynLen
impl Neg for TensorDynLen
Source§impl Sub<&TensorDynLen> for &TensorDynLen
impl Sub<&TensorDynLen> for &TensorDynLen
Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
Source§impl Sub<&TensorDynLen> for TensorDynLen
impl Sub<&TensorDynLen> for TensorDynLen
Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
Source§impl Sub<TensorDynLen> for &TensorDynLen
impl Sub<TensorDynLen> for &TensorDynLen
Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
Source§impl Sub for TensorDynLen
impl Sub for TensorDynLen
Source§type Output = TensorDynLen
type Output = TensorDynLen
The resulting type after applying the - operator.
Source§impl TensorAccess for TensorDynLen
impl TensorAccess for TensorDynLen
Source§impl TensorIndex for TensorDynLen
impl TensorIndex for TensorDynLen
Source§fn external_indices(&self) -> Vec<DynIndex> ⓘ
fn external_indices(&self) -> Vec<DynIndex> ⓘ
Source§fn num_external_indices(&self) -> usize
fn num_external_indices(&self) -> usize
Source§fn replaceind(&self, old_index: &DynIndex, new_index: &DynIndex) -> Result<Self>
fn replaceind(&self, old_index: &DynIndex, new_index: &DynIndex) -> Result<Self>
Source§impl TensorLike for TensorDynLen
impl TensorLike for TensorDynLen
Source§fn factorize(
&self,
left_inds: &[DynIndex],
options: &FactorizeOptions,
) -> Result<FactorizeResult<Self>, FactorizeError>
fn factorize( &self, left_inds: &[DynIndex], options: &FactorizeOptions, ) -> Result<FactorizeResult<Self>, FactorizeError>
Source§fn factorize_full_rank(
&self,
left_inds: &[DynIndex],
alg: FactorizeAlg,
canonical: Canonical,
) -> Result<FactorizeResult<Self>, FactorizeError>
fn factorize_full_rank( &self, left_inds: &[DynIndex], alg: FactorizeAlg, canonical: Canonical, ) -> Result<FactorizeResult<Self>, FactorizeError>
Source§fn direct_sum(
&self,
other: &Self,
pairs: &[(DynIndex, DynIndex)],
) -> Result<DirectSumResult<Self>>
fn direct_sum( &self, other: &Self, pairs: &[(DynIndex, DynIndex)], ) -> Result<DirectSumResult<Self>>
Source§fn outer_product(&self, other: &Self) -> Result<Self>
fn outer_product(&self, other: &Self) -> Result<Self>
Source§fn norm_squared(&self) -> f64
fn norm_squared(&self) -> f64
Source§fn permuteinds(&self, new_order: &[DynIndex]) -> Result<Self>
fn permuteinds(&self, new_order: &[DynIndex]) -> Result<Self>
Source§fn fuse_indices(
&self,
old_indices: &[DynIndex],
new_index: DynIndex,
order: LinearizationOrder,
) -> Result<Self>
fn fuse_indices( &self, old_indices: &[DynIndex], new_index: DynIndex, order: LinearizationOrder, ) -> Result<Self>
Source§fn contract(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>
fn contract(tensors: &[&Self], allowed: AllowedPairs<'_>) -> Result<Self>
Source§fn contract_connected(
tensors: &[&Self],
allowed: AllowedPairs<'_>,
) -> Result<Self>
fn contract_connected( tensors: &[&Self], allowed: AllowedPairs<'_>, ) -> Result<Self>
Source§fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>
fn axpby(&self, a: AnyScalar, other: &Self, b: AnyScalar) -> Result<Self>
a * self + b * other. Read more
Source§fn inner_product(&self, other: &Self) -> Result<AnyScalar>
fn inner_product(&self, other: &Self) -> Result<AnyScalar>
Source§fn diagonal(input_index: &DynIndex, output_index: &DynIndex) -> Result<Self>
fn diagonal(input_index: &DynIndex, output_index: &DynIndex) -> Result<Self>
Source§fn scalar_one() -> Result<Self>
fn scalar_one() -> Result<Self>
Source§fn ones(indices: &[DynIndex]) -> Result<Self>
fn ones(indices: &[DynIndex]) -> Result<Self>
Source§fn onehot(index_vals: &[(DynIndex, usize)]) -> Result<Self>
fn onehot(index_vals: &[(DynIndex, usize)]) -> Result<Self>
Source§fn isapprox(&self, other: &Self, atol: f64, rtol: f64) -> bool
fn isapprox(&self, other: &Self, atol: f64, rtol: f64) -> bool
isapprox semantics). Read more
Source§fn delta(
input_indices: &[<Self as TensorIndex>::Index],
output_indices: &[<Self as TensorIndex>::Index],
) -> Result<Self>
fn delta( input_indices: &[<Self as TensorIndex>::Index], output_indices: &[<Self as TensorIndex>::Index], ) -> Result<Self>
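The axpby entry above computes a * self + b * other. A plain-vector sketch of that linear combination (illustrative only; the actual method operates on tensors whose legs are matched by index identity, not position):

```rust
// Illustrative sketch of the axpby semantics: result[n] = a * x[n] + b * y[n],
// computed elementwise over matching entries.
fn axpby(a: f64, x: &[f64], b: f64, y: &[f64]) -> Vec<f64> {
    assert_eq!(x.len(), y.len(), "operands must have the same length");
    x.iter().zip(y).map(|(xi, yi)| a * xi + b * yi).collect()
}

fn main() {
    let x = [1.0, 2.0];
    let y = [10.0, 20.0];
    // 2*x + 0.5*y = [7.0, 14.0]
    assert_eq!(axpby(2.0, &x, 0.5, &y), vec![7.0, 14.0]);
}
```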
Auto Trait Implementations§
impl Freeze for TensorDynLen
impl !RefUnwindSafe for TensorDynLen
impl Send for TensorDynLen
impl Sync for TensorDynLen
impl Unpin for TensorDynLen
impl UnsafeUnpin for TensorDynLen
impl !UnwindSafe for TensorDynLen
Blanket Implementations§
§impl<U> As for U
impl<U> As for U
§fn as_<T>(self) -> T where T: CastFrom<U>
fn as_<T>(self) -> T where T: CastFrom<U>
Casts self to type T. The semantics of numeric casting with the as operator are followed, so <T as As>::as_::<U> can be used in the same way as T as U for numeric conversions. Read more
Source§impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> BorrowMut<T> for T where T: ?Sized
Source§fn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
Source§impl<T> CloneToUninit for T where T: Clone
impl<T> CloneToUninit for T where T: Clone
§impl<T> DistributionExt for T where T: ?Sized
impl<T> DistributionExt for T where T: ?Sized
fn rand<T>(&self, rng: &mut (impl Rng + ?Sized)) -> T where Self: Distribution<T>
Source§impl<T> IntoEither for T
impl<T> IntoEither for T
Source§fn into_either(self, into_left: bool) -> Either<Self, Self>
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more