Tensor

Struct Tensor 

Source
pub struct Tensor<T> { /* private fields */ }

Multi-dimensional dense tensor.

Tensor<T> owns or shares a DataBuffer together with shape, strides, and memory-space metadata.

§Zero-copy views

Operations like permute, broadcast, and diagonal return new tensors that share the underlying buffer and only adjust metadata.

§Accessing raw data

Use DataBuffer::as_slice via Tensor::buffer together with Tensor::dims, Tensor::strides, and Tensor::offset to build backend-specific views.

§GPU async support

The optional CompletionEvent tracks pending GPU computation so future backends can chain asynchronous work without forcing CPU synchronization.

§Examples

use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(
    &[2, 3],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(t.dims(), &[2, 3]);
assert_eq!(t.len(), 6);

Implementations§

Source§

impl<T> Tensor<T>
where T: Scalar,

Source

pub fn stack(tensors: &[&Tensor<T>], dim: isize) -> Result<Tensor<T>, Error>

Stack tensors along a new dimension.

Creates a new dimension and concatenates the input tensors along it. All input tensors must have the same shape. Negative dimensions are supported and count from the end.

This is a dense materialization operation that allocates a new buffer. It is implemented by inserting a size-1 axis with Tensor::unsqueeze and then delegating to Tensor::cat, so it materializes logical values, resolves conjugation, and supports the same CPU and same-device CUDA paths as concatenation.

§Arguments
  • tensors - Slice of input tensors to stack. Must not be empty.
  • dim - Position to insert the new dimension. Must be in range [-ndim-1, ndim].
§Errors

Returns an error if:

  • The input list is empty
  • Tensors have different shapes
  • Tensors have different memory spaces or devices
  • The dimension is out of range
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let a = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let b = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();

let stacked = Tensor::stack(&[&a, &b], 0).unwrap();
assert_eq!(stacked.dims(), &[2, 2, 3]);
Source

pub fn cat(tensors: &[&Tensor<T>], dim: isize) -> Result<Tensor<T>, Error>

Concatenate tensors along an existing dimension.

Joins tensors along the specified dimension. All tensors must have the same rank and matching sizes on non-concatenated dimensions. Negative dimensions are supported and count from the end.

This is a dense materialization operation that allocates a new buffer. Logical conjugation is materialized per input, the output is resolved (conjugated = false), and any preferred compute-device hint is cleared. Main-memory tensors are always supported; with cuda enabled, same-device GPU tensors are also supported.

§Arguments
  • tensors - Slice of input tensors to concatenate. Must not be empty.
  • dim - Dimension along which to concatenate. Must be in range [-ndim, ndim-1].
§Errors

Returns an error if:

  • The input list is empty
  • Any tensor is rank-0 (scalars cannot be concatenated)
  • Tensors have different ranks
  • Tensors have mismatched sizes on non-concatenated dimensions
  • Tensors have different memory spaces
  • The dimension is out of range
  • Non-main-memory tensors are provided without cuda support
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let a = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let b = Tensor::<f64>::zeros(&[2, 4], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();

let concatenated = Tensor::cat(&[&a, &b], 1).unwrap();
assert_eq!(concatenated.dims(), &[2, 7]);
Source§

impl<T> Tensor<T>
where T: Scalar,

Source

pub fn zeros( dims: &[usize], memory_space: LogicalMemorySpace, order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create a tensor filled with zeros.

Allocates directly on the target device without an intermediate CPU buffer. For GPU targets (with the cuda feature) this avoids the CPU-allocate-then-transfer overhead.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let a = Tensor::<f64>::zeros(
    &[3, 4],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
Source

pub fn empty( dims: &[usize], memory_space: LogicalMemorySpace, order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create a tensor with allocated storage.

The current safe CPU implementation initializes the backing storage to zero, which keeps the constructor deterministic while preserving the requested layout and device placement. For GPU targets (with the cuda feature) this allocates uninitialized device memory directly.

The *_like family preserves a row-major layout only when the source tensor is row-major contiguous and not column-major contiguous. Ambiguous or non-contiguous inputs fall back to column-major.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let a = Tensor::<f64>::empty(
    &[3, 4],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
Source

pub fn ones( dims: &[usize], memory_space: LogicalMemorySpace, order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create a tensor filled with ones.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let a = Tensor::<f64>::ones(
    &[2, 3],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
Source

pub fn empty_strided( dims: &[usize], strides: &[isize], offset: isize, memory_space: LogicalMemorySpace, ) -> Result<Tensor<T>, Error>

Create a tensor with explicit strides.

§Errors

Returns an error if the layout would access storage outside the allocated buffer.

§Examples
use tenferro_tensor::Tensor;
use tenferro_device::LogicalMemorySpace;

let t = Tensor::<f64>::empty_strided(&[2, 2], &[1, 2], 0, LogicalMemorySpace::MainMemory).unwrap();
assert_eq!(t.strides(), &[1, 2]);
Source

pub fn full( dims: &[usize], value: T, memory_space: LogicalMemorySpace, order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create a tensor filled with value.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let a = Tensor::<f64>::full(
    &[2, 3],
    7.5,
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
Source

pub fn from_slice( data: &[T], dims: &[usize], order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create a tensor from a data slice.

order describes how to interpret data at the import boundary. View operations continue to use tenferro’s internal column-major semantics.

§Errors

Returns an error if data.len() does not match the product of dims.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let data = [1.0, 2.0, 3.0, 4.0];
let t = Tensor::<f64>::from_slice(&data, &[2, 2], MemoryOrder::ColumnMajor).unwrap();
Source

pub fn from_row_major_slice( data: &[T], dims: &[usize], ) -> Result<Tensor<T>, Error>

Create a tensor from a row-major data slice.

This is a convenience wrapper around from_slice with MemoryOrder::RowMajor. It lets NumPy / C users pass data in their natural order while tenferro internally stores it in column-major layout.

§Errors

Returns an error if data.len() does not match the product of dims.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

// Row-major: data is laid out row by row.
// [[1, 2],
//  [3, 4]]
let t = Tensor::<f64>::from_row_major_slice(
    &[1.0, 2.0, 3.0, 4.0],
    &[2, 2],
).unwrap();
assert_eq!(t.dims(), &[2, 2]);
Source

pub fn from_vec( data: Vec<T>, dims: &[usize], strides: &[isize], offset: isize, ) -> Result<Tensor<T>, Error>

Create a tensor from an owned Vec<T> with explicit layout.

§Errors

Returns an error if the layout is inconsistent with the data length.

§Examples
use tenferro_tensor::Tensor;

let t = Tensor::<f64>::from_vec(vec![1.0, 2.0, 3.0, 4.0], &[2, 2], &[1, 2], 0).unwrap();
Source

pub unsafe fn from_external_parts( ptr: *const T, len: usize, dims: &[usize], strides: &[isize], offset: isize, release: impl FnOnce() + Send + 'static, ) -> Result<Tensor<T>, Error>

Create a tensor from externally-owned CPU-accessible memory.

§Safety
  • ptr must remain valid for at least len elements until release is called.
  • The layout described by dims, strides, and offset must stay in bounds.
§Examples
use tenferro_tensor::Tensor;

let data = vec![1.0, 2.0, 3.0, 4.0];
let ptr = data.as_ptr();
let tensor = unsafe {
    Tensor::from_external_parts(ptr, data.len(), &[2, 2], &[1, 2], 0, move || drop(data))
}.unwrap();
assert_eq!(tensor.dims(), &[2, 2]);
Source

pub fn try_into_data_vec(self) -> Option<Vec<T>>

Try to extract the underlying data as Vec<T>.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::from_slice(&[1.0, 2.0], &[2], MemoryOrder::ColumnMajor).unwrap();
let _data = t.try_into_data_vec();
Source

pub fn empty_like(&self) -> Result<Tensor<T>, Error>

Create a tensor with the same shape and layout convention as another tensor.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let base = Tensor::<f64>::zeros(
    &[2, 3],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
let like = base.empty_like().unwrap();
assert_eq!(like.dims(), base.dims());
Source

pub fn zeros_like(&self) -> Result<Tensor<T>, Error>

Create a zero-filled tensor with the same shape and layout convention as another tensor.

The *_like family preserves a row-major layout only when the source tensor is row-major contiguous and not column-major contiguous. Ambiguous or non-contiguous inputs fall back to column-major.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let base = Tensor::<f64>::zeros(
    &[2, 3],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
let like = base.zeros_like().unwrap();
assert_eq!(like.dims(), base.dims());
Source

pub fn ones_like(&self) -> Result<Tensor<T>, Error>

Create a one-filled tensor with the same shape and layout convention as another tensor.

The *_like family preserves a row-major layout only when the source tensor is row-major contiguous and not column-major contiguous. Ambiguous or non-contiguous inputs fall back to column-major.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let base = Tensor::<f64>::zeros(
    &[2, 3],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
let like = base.ones_like().unwrap();
assert_eq!(like.dims(), base.dims());
Source

pub fn full_like(&self, value: T) -> Result<Tensor<T>, Error>

Create a tensor filled with value and matching the shape/layout convention of another tensor.

The *_like family preserves a row-major layout only when the source tensor is row-major contiguous and not column-major contiguous. Ambiguous or non-contiguous inputs fall back to column-major.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let base = Tensor::<f64>::zeros(
    &[2, 3],
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
let like = base.full_like(3.25).unwrap();
assert_eq!(like.dims(), base.dims());
Source§

impl Tensor<f64>

Source

pub fn rand( dims: &[usize], memory_space: LogicalMemorySpace, order: MemoryOrder, generator: Option<&mut Generator>, ) -> Result<Tensor<f64>, Error>

Create a tensor filled with uniform samples on [0, 1).

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::rand(
    &[2, 2],
    tenferro_device::LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
    None,
).unwrap();
assert_eq!(t.dims(), &[2, 2]);
Source

pub fn randn( dims: &[usize], memory_space: LogicalMemorySpace, order: MemoryOrder, generator: Option<&mut Generator>, ) -> Result<Tensor<f64>, Error>

Create a tensor filled with standard-normal samples.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::randn(
    &[4],
    tenferro_device::LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
    None,
).unwrap();
assert_eq!(t.dims(), &[4]);
Source

pub fn rand_like( reference: &Tensor<f64>, generator: Option<&mut Generator>, ) -> Result<Tensor<f64>, Error>

Create a tensor with the same shape/layout convention as another tensor and fill it with uniform samples on [0, 1).

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let base = Tensor::<f64>::zeros(&[2, 3], tenferro_device::LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let t = Tensor::<f64>::rand_like(&base, None).unwrap();
assert_eq!(t.dims(), base.dims());
Source

pub fn randn_like( reference: &Tensor<f64>, generator: Option<&mut Generator>, ) -> Result<Tensor<f64>, Error>

Create a tensor with the same shape/layout convention as another tensor and fill it with standard-normal samples.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let base = Tensor::<f64>::zeros(&[2, 3], tenferro_device::LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let t = Tensor::<f64>::randn_like(&base, None).unwrap();
assert_eq!(t.dims(), base.dims());
Source§

impl Tensor<i32>

Source

pub fn randint( low: i32, high: i32, dims: &[usize], memory_space: LogicalMemorySpace, order: MemoryOrder, generator: Option<&mut Generator>, ) -> Result<Tensor<i32>, Error>

Create a tensor filled with integer samples in [low, high).

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<i32>::randint(
    -2,
    5,
    &[2, 2],
    tenferro_device::LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
    None,
).unwrap();
assert_eq!(t.dims(), &[2, 2]);
Source

pub fn randint_like( reference: &Tensor<i32>, low: i32, high: i32, generator: Option<&mut Generator>, ) -> Result<Tensor<i32>, Error>

Create a tensor with the same shape/layout convention as another tensor and fill it with integer samples in [low, high).

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let base = Tensor::<i32>::zeros(&[2, 3], tenferro_device::LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let t = Tensor::<i32>::randint_like(&base, -2, 5, None).unwrap();
assert_eq!(t.dims(), base.dims());
Source§

impl<T> Tensor<T>
where T: Scalar + Float + NumCast,

Source

pub fn eye( n: usize, memory_space: LogicalMemorySpace, order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create an identity matrix.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let id = Tensor::<f64>::eye(
    3,
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(id.dims(), &[3, 3]);
Source

pub fn arange( start: T, end: T, step: T, memory_space: LogicalMemorySpace, order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create a regularly spaced 1-D tensor from start toward end.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let xs = Tensor::<f64>::arange(
    0.0,
    5.0,
    1.0,
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(xs.dims(), &[5]);
Source

pub fn linspace( start: T, end: T, n_samples: isize, memory_space: LogicalMemorySpace, order: MemoryOrder, ) -> Result<Tensor<T>, Error>

Create a 1-D tensor containing n_samples evenly spaced values.

Returns an error if n_samples is negative.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;

let xs = Tensor::<f64>::linspace(
    0.0,
    1.0,
    5,
    LogicalMemorySpace::MainMemory,
    MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(xs.dims(), &[5]);
Source§

impl<T> Tensor<T>

Source

pub fn conj(&self) -> Tensor<T>
where T: Conjugate,

Return a lazily-conjugated tensor (shared buffer, flag flip).

§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};

let data = vec![Complex64::new(1.0, 2.0), Complex64::new(3.0, -4.0)];
let a = Tensor::from_slice(&data, &[2], MemoryOrder::ColumnMajor).unwrap();
let a_conj = a.conj();
assert!(a_conj.is_conjugated());
Source

pub fn into_conj(self) -> Tensor<T>
where T: Conjugate,

Consume this tensor and return a lazily-conjugated version.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let tc = t.into_conj();
assert!(tc.is_conjugated());
Source§

impl<T> Tensor<T>
where T: Scalar,

Source

pub fn deep_clone(&self) -> Tensor<T>

Create a deep copy with an exclusively-owned contiguous buffer.

Unlike clone (which is a shallow Arc refcount bump), this always allocates a fresh buffer and copies element data. The returned tensor is contiguous in column-major order and has buffer.is_unique() == true, so set and get_mut are guaranteed to succeed.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let a = Tensor::<f64>::from_slice(
    &[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
let b = a.clone(); // shallow — shares buffer

let mut c = a.deep_clone(); // deep — independent buffer
c.set(&[0, 0], 99.0).unwrap();
assert_eq!(c.get(&[0, 0]), Some(&99.0));
assert_eq!(a.get(&[0, 0]), Some(&1.0)); // original unchanged
Source

pub fn contiguous(&self, order: MemoryOrder) -> Tensor<T>

Return a contiguous copy of this tensor in the given memory order.

order controls the materialized output buffer only. It does not change the internal column-major semantics used by view operations such as reshape.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let c = t.contiguous(MemoryOrder::RowMajor);
assert!(c.is_contiguous());
Source

pub fn into_contiguous(self, order: MemoryOrder) -> Tensor<T>

Consume this tensor and return a contiguous version.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let c = t.into_contiguous(MemoryOrder::ColumnMajor);
assert!(c.is_contiguous());
Source

pub fn into_column_major(self) -> Tensor<T>

Consume this tensor and return a contiguous column-major version.

This is a convenience wrapper around into_contiguous(MemoryOrder::ColumnMajor) since column-major is tenferro’s canonical internal layout.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let col_major = t.into_column_major();
assert!(col_major.is_col_major_contiguous());
Source

pub fn to_vec(&self) -> Vec<T>

Copy tensor data into a flat Vec<T> in column-major order.

The returned vector has length self.len() with elements laid out in column-major (Fortran) order. For a 2-D tensor with shape [m, n], the first m elements are column 0, the next m are column 1, and so on.

This method internally materializes a contiguous copy when the tensor is not already column-major contiguous, so it always returns owned data regardless of the original layout.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::from_row_major_slice(
    &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    &[2, 3],
).unwrap();
// Matrix (row-major input):
//   [[1, 2, 3],
//    [4, 5, 6]]
// Column-major output: col0=[1,4], col1=[2,5], col2=[3,6]
assert_eq!(t.to_vec(), vec![1.0, 4.0, 2.0, 5.0, 3.0, 6.0]);
Source

pub fn tril(&self, diagonal: isize) -> Tensor<T>

Extract the lower triangular part of a matrix.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let a = Tensor::<f64>::ones(&[3, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let lower = a.tril(0);
assert_eq!(lower.dims(), &[3, 3]);
Source

pub fn triu(&self, diagonal: isize) -> Tensor<T>

Extract the upper triangular part of a matrix.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let a = Tensor::<f64>::ones(&[3, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let upper = a.triu(0);
assert_eq!(upper.dims(), &[3, 3]);
Source§

impl<T> Tensor<T>

Source

pub fn get(&self, index: &[usize]) -> Option<&T>

Access a single element by multi-dimensional index.

Returns None if the index is out of bounds or the underlying buffer is not CPU-accessible.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

// Column-major: data is laid out column by column.
// from_slice with ColumnMajor and data [1,2,3,4] gives:
//   column 0 = [1, 2], column 1 = [3, 4]
//   matrix = [[1, 3],
//             [2, 4]]
let t = Tensor::<f64>::from_slice(
    &[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(t.get(&[0, 0]), Some(&1.0));
assert_eq!(t.get(&[1, 0]), Some(&2.0));
assert_eq!(t.get(&[0, 1]), Some(&3.0));
assert_eq!(t.get(&[1, 1]), Some(&4.0));
assert_eq!(t.get(&[2, 0]), None); // out of bounds
Source

pub fn get_mut(&mut self, index: &[usize]) -> Option<&mut T>

Access a single element mutably by multi-dimensional index.

Returns None if the index is out of bounds, the buffer is not CPU-accessible, or the buffer is shared (Arc refcount > 1).

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let mut t = Tensor::<f64>::from_slice(
    &[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
*t.get_mut(&[0, 1]).unwrap() = 99.0;
assert_eq!(t.get(&[0, 1]), Some(&99.0));
// Out of bounds returns None:
assert!(t.get_mut(&[2, 0]).is_none());
Source

pub fn set(&mut self, index: &[usize], value: T) -> Result<(), Error>

Write a value at the given multi-dimensional index.

Returns Ok(()) on success, or an error if the index is out of bounds, the buffer is not CPU-accessible, or the buffer is shared (Arc refcount > 1). Call deep_clone first to obtain an exclusively-owned copy.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let mut t = Tensor::<f64>::from_slice(
    &[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
t.set(&[1, 0], 10.0).unwrap();
assert_eq!(t.get(&[1, 0]), Some(&10.0));

// Shared buffers cannot be written:
let shared = t.clone(); // refcount == 2
// t.set(&[0, 0], 5.0) would fail here because buffer is shared
Source§

impl<T> Tensor<T>

Source

pub fn dims(&self) -> &[usize]

Returns the shape (size of each dimension).

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert_eq!(t.dims(), &[2, 3]);
Source

pub fn strides(&self) -> &[isize]

Returns the strides (in units of T).

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let _strides = t.strides();
Source

pub fn offset(&self) -> isize

Returns the element offset into the data buffer.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert_eq!(t.offset(), 0);
Source

pub fn buffer(&self) -> &DataBuffer<T>

Returns a reference to the underlying data buffer.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let _buf = t.buffer();
Source

pub fn buffer_mut(&mut self) -> &mut DataBuffer<T>

Returns a mutable reference to the underlying data buffer.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let mut t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let _buf = t.buffer_mut();
Source

pub fn ndim(&self) -> usize

Returns the number of dimensions (rank).

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert_eq!(t.ndim(), 2);
Source

pub fn len(&self) -> usize

Returns the total number of elements.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert_eq!(t.len(), 6);
Source

pub fn is_empty(&self) -> bool

Returns true if the tensor has zero elements.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[0, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(t.is_empty());
Source

pub fn logical_memory_space(&self) -> LogicalMemorySpace

Returns the logical memory space where this tensor’s data resides.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert_eq!(t.logical_memory_space(), LogicalMemorySpace::MainMemory);
Source

pub fn preferred_compute_device(&self) -> Option<ComputeDevice>

Returns the preferred compute device override, if set.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(t.preferred_compute_device().is_none());
Source

pub fn set_preferred_compute_device(&mut self, device: Option<ComputeDevice>)

Set the preferred compute device override.

§Examples
let mut t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
t.set_preferred_compute_device(Some(ComputeDevice::Cpu { device_id: 0 }));
Source

pub fn is_conjugated(&self) -> bool

Returns true if this tensor is logically conjugated.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(!t.is_conjugated());
Source

pub fn fw_grad(&self) -> Option<&Tensor<T>>

Returns a reference to the forward-mode tangent, if set.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(t.fw_grad().is_none());
Source

pub fn has_fw_grad(&self) -> bool

Returns true if this tensor carries a forward-mode tangent.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(!t.has_fw_grad());
Source

pub fn set_fw_grad(&mut self, grad: Tensor<T>)

Attach a forward-mode tangent to this tensor.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let mut t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let grad = Tensor::<f64>::ones(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
t.set_fw_grad(grad);
Source

pub fn detach_fw_grad(&mut self) -> Option<Tensor<T>>

Detach and return the forward-mode tangent, leaving None.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let mut t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
t.set_fw_grad(Tensor::<f64>::ones(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap());
let _grad = t.detach_fw_grad().unwrap();
Source

pub fn effective_compute_devices( &self, op_kind: OpKind, ) -> Result<Vec<ComputeDevice>, Error>

Return the effective compute devices for a given operation kind.

§Examples
let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let _devices = t.effective_compute_devices(OpKind::BatchedGemm).unwrap();
Source§

impl<T> Tensor<T>
where T: Scalar,

Source

pub fn zero_trailing_by_counts<R>( &self, keep_counts: &Tensor<R>, axis: usize, structural_rank: usize, ) -> Result<Tensor<T>, Error>
where R: KeepCountScalar,

Return a contiguous tensor with trailing elements zeroed according to batch-local keep counts.

structural_rank splits the payload dims from the trailing batch dims. axis is interpreted within the structural prefix [0, structural_rank).

Phase 1 supports main-memory tensors only.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let payload = Tensor::from_slice(&[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor)?;
let keep_counts = Tensor::from_slice(&[1.0], &[], MemoryOrder::ColumnMajor)?;
let trimmed = payload.zero_trailing_by_counts(&keep_counts, 1, 2)?;
assert_eq!(trimmed.buffer().as_slice().unwrap(), &[1.0, 2.0, 0.0, 0.0]);
Source

pub fn merge_strict_lower_and_upper( lower: &Tensor<T>, upper: &Tensor<T>, ) -> Result<Tensor<T>, Error>

Merge a strict-lower source and an upper-with-diagonal source into one packed matrix.

lower must have shape [m, k, *batch] and upper must have shape [k, n, *batch] where k = min(m, n). The output has shape [m, n, *batch] with entries selected from lower when row > col and from upper otherwise.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let lower = Tensor::from_slice(&[1.0, 2.0, 1.0, 3.0], &[2, 2], MemoryOrder::ColumnMajor)?;
let upper = Tensor::from_slice(&[4.0, 0.0, 5.0, 6.0], &[2, 2], MemoryOrder::ColumnMajor)?;
let packed = Tensor::merge_strict_lower_and_upper(&lower, &upper)?;
assert_eq!(packed.buffer().as_slice().unwrap(), &[4.0, 2.0, 5.0, 6.0]);
Source§

impl<T> Tensor<T>
where T: Scalar,

Source

pub fn to_memory_space_async( &self, target: LogicalMemorySpace, ) -> Result<Tensor<T>, Error>

Asynchronously transfer this tensor to a different memory space.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let t2 = t.to_memory_space_async(LogicalMemorySpace::MainMemory).unwrap();
assert_eq!(t2.dims(), &[2, 3]);
Source§

impl<T> Tensor<T>

Source

pub fn wait(&self)

Wait for any pending GPU computation to complete.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
t.wait();
Source

pub fn is_ready(&self) -> bool

Check if tensor data is ready without blocking.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(t.is_ready());
Source§

impl<T> Tensor<T>

Source

pub fn permute(&self, perm: &[usize]) -> Result<Tensor<T>, Error>

Permute (reorder) the dimensions of the tensor.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let transposed = t.permute(&[1, 0]).unwrap();
assert_eq!(transposed.dims(), &[3, 2]);
Source

pub fn broadcast(&self, target_dims: &[usize]) -> Result<Tensor<T>, Error>

Broadcast the tensor to a larger shape.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[1, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let b = t.broadcast(&[4, 3]).unwrap();
assert_eq!(b.dims(), &[4, 3]);
Source

pub fn diagonal(&self, axes: &[(usize, usize)]) -> Result<Tensor<T>, Error>

Extract a diagonal view by merging pairs of axes.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[3, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let d = t.diagonal(&[(0, 1)]).unwrap();
assert_eq!(d.dims(), &[3]);
Source

pub fn view(&self, new_dims: &[usize]) -> Result<Tensor<T>, Error>

Return a zero-copy view with a different shape.

This is the strict metadata-only variant of reshape. The returned tensor shares storage with self and therefore requires the input layout to be contiguous (column-major). For PyTorch-style view-or-copy semantics that handle non-contiguous inputs, use reshape instead.

§Errors

Returns StrideError if the tensor is not contiguous.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let r = t.view(&[6]).unwrap();
assert_eq!(r.dims(), &[6]);
Source

pub fn reshape(&self, new_dims: &[usize]) -> Result<Tensor<T>, Error>
where T: Scalar,

Reshape the tensor to a new shape.

Reshape follows tenferro’s internal column-major semantics and PyTorch-style view-or-copy behavior: it returns a zero-copy view when the current layout is compatible with column-major ordering, and otherwise materializes a contiguous column-major copy first before returning the view.

For strict zero-copy semantics that reject non-contiguous inputs, use view instead.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let r = t.reshape(&[6]).unwrap();
assert_eq!(r.dims(), &[6]);
Source

pub fn view_as_strided( &self, new_dims: Vec<usize>, new_strides: Vec<isize>, ) -> Result<Tensor<T>, Error>

Create a zero-copy view with explicit dims and strides.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let view = t.view_as_strided(vec![3, 2], vec![2, 1]).unwrap();
assert_eq!(view.dims(), &[3, 2]);
Source

pub fn select(&self, dim: usize, index: usize) -> Result<Tensor<T>, Error>

Select a single index along a dimension, removing that dimension.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3, 4], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let slice = t.select(2, 1).unwrap();
assert_eq!(slice.dims(), &[2, 3]);
Source

pub fn narrow(&self, dim: usize, start: usize, length: usize) -> Result<Tensor<T>, Error>

Narrow (slice) a dimension to a sub-range.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 10], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let sub = t.narrow(1, 2, 3).unwrap();
assert_eq!(sub.dims(), &[2, 3]);
Source

pub fn unsqueeze(&self, dim: isize) -> Result<Tensor<T>, Error>

Insert a size-1 dimension at the specified position.

This is a zero-copy view operation. Negative dimensions are supported and count from the end.

§Arguments
  • dim - Position to insert the new dimension. Must be in range [-ndim-1, ndim].
§Errors

Returns an error if the dimension is out of range.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let u = t.unsqueeze(0).unwrap();
assert_eq!(u.dims(), &[1, 2, 3]);

let u2 = t.unsqueeze(-1).unwrap();
assert_eq!(u2.dims(), &[2, 3, 1]);
Source

pub fn squeeze(&self) -> Result<Tensor<T>, Error>

Remove all size-1 dimensions from the tensor.

This is a zero-copy view operation.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[1, 2, 1, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let s = t.squeeze().unwrap();
assert_eq!(s.dims(), &[2, 3]);
Source

pub fn squeeze_dim(&self, dim: isize) -> Result<Tensor<T>, Error>

Remove a specific size-1 dimension from the tensor.

This is a zero-copy view operation. Negative dimensions are supported and count from the end.

§Arguments
  • dim - Dimension to remove. Must be in range [-ndim, ndim-1] and have size 1.
§Errors

Returns an error if:

  • The dimension is out of range
  • The dimension does not have size 1
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 1, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let s = t.squeeze_dim(1).unwrap();
assert_eq!(s.dims(), &[2, 3]);

let s2 = t.squeeze_dim(-2).unwrap();
assert_eq!(s2.dims(), &[2, 3]);
Source

pub fn mT(&self) -> Result<Tensor<T>, Error>

Return a zero-copy view with the last two axes transposed.

This is a metadata-only operation. For batched matrices, leading batch axes are preserved and only the final two matrix axes are swapped.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::from_slice(
    &[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    &[2, 3],
    MemoryOrder::ColumnMajor,
)
.unwrap();
let mt = t.mT().unwrap();
assert_eq!(mt.dims(), &[3, 2]);
Source§

impl<T> Tensor<T>
where T: Conjugate,

Source

pub fn mH(&self) -> Result<Tensor<T>, Error>

Return a zero-copy conjugate-transpose view over the last two axes.

This is equivalent to self.mT()?.conj(): swap the trailing matrix axes and toggle the lazy conjugation flag.

§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};

let z = Tensor::<Complex64>::from_slice(
    &[Complex64::new(1.0, 2.0), Complex64::new(3.0, 4.0)],
    &[2, 1],
    MemoryOrder::ColumnMajor,
)
.unwrap();
let mh = z.mH().unwrap();
assert_eq!(mh.dims(), &[1, 2]);
assert!(mh.is_conjugated());
Source§

impl Tensor<Complex<f32>>

Source

pub fn view_as_real(&self) -> Result<Tensor<f32>, Error>

Return a zero-copy real view of a complex tensor.

A trailing axis of length 2 is appended, exposing the real and imaginary parts as adjacent real-valued elements.

§Examples
use num_complex::Complex32;
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let z = Tensor::<Complex32>::from_slice(
    &[Complex32::new(1.0, 2.0)],
    &[1],
    MemoryOrder::ColumnMajor,
).unwrap();
let r = z.view_as_real().unwrap();
assert_eq!(r.dims(), &[1, 2]);
assert_eq!(r.logical_memory_space(), LogicalMemorySpace::MainMemory);
Source

pub fn real(&self) -> Result<Tensor<f32>, Error>

Return a zero-copy view of the real part of a resolved complex tensor.

This is implemented as view_as_real() followed by selecting the real lane of the trailing size-2 axis.

§Examples
use num_complex::Complex32;
use tenferro_tensor::{MemoryOrder, Tensor};

let z = Tensor::<Complex32>::from_slice(
    &[Complex32::new(1.0, 2.0)],
    &[1],
    MemoryOrder::ColumnMajor,
)
.unwrap();
let real = z.real().unwrap();
assert_eq!(real.dims(), &[1]);
Source

pub fn imag(&self) -> Result<Tensor<f32>, Error>

Return a zero-copy view of the imaginary part of a resolved complex tensor.

This is implemented as view_as_real() followed by selecting the imaginary lane of the trailing size-2 axis.

§Examples
use num_complex::Complex32;
use tenferro_tensor::{MemoryOrder, Tensor};

let z = Tensor::<Complex32>::from_slice(
    &[Complex32::new(1.0, 2.0)],
    &[1],
    MemoryOrder::ColumnMajor,
)
.unwrap();
let imag = z.imag().unwrap();
assert_eq!(imag.dims(), &[1]);
Source§

impl Tensor<Complex<f64>>

Source

pub fn view_as_real(&self) -> Result<Tensor<f64>, Error>

Return a zero-copy real view of a complex tensor.

A trailing axis of length 2 is appended, exposing the real and imaginary parts as adjacent real-valued elements.

§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};

let z = Tensor::<Complex64>::from_slice(
    &[Complex64::new(1.0, 2.0)],
    &[1],
    MemoryOrder::ColumnMajor,
).unwrap();
let r = z.view_as_real().unwrap();
assert_eq!(r.dims(), &[1, 2]);
Source

pub fn real(&self) -> Result<Tensor<f64>, Error>

Return a zero-copy view of the real part of a resolved complex tensor.

This is implemented as view_as_real() followed by selecting the real lane of the trailing size-2 axis.

§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};

let z = Tensor::<Complex64>::from_slice(
    &[Complex64::new(1.0, 2.0)],
    &[1],
    MemoryOrder::ColumnMajor,
)
.unwrap();
let real = z.real().unwrap();
assert_eq!(real.dims(), &[1]);
Source

pub fn imag(&self) -> Result<Tensor<f64>, Error>

Return a zero-copy view of the imaginary part of a resolved complex tensor.

This is implemented as view_as_real() followed by selecting the imaginary lane of the trailing size-2 axis.

§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};

let z = Tensor::<Complex64>::from_slice(
    &[Complex64::new(1.0, 2.0)],
    &[1],
    MemoryOrder::ColumnMajor,
)
.unwrap();
let imag = z.imag().unwrap();
assert_eq!(imag.dims(), &[1]);
Source§

impl Tensor<f32>

Source

pub fn view_as_complex(&self) -> Result<Tensor<Complex<f32>>, Error>

Return a zero-copy complex view of a real tensor whose last dimension stores paired real and imaginary components. The trailing axis must have length 2; it is consumed, so the complex view has one fewer dimension.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let r = Tensor::<f32>::from_slice(&[1.0, 2.0], &[2], MemoryOrder::ColumnMajor).unwrap();
let z = r.view_as_complex().unwrap();
assert_eq!(z.dims(), &[] as &[usize]);
Source§

impl Tensor<f64>

Source

pub fn view_as_complex(&self) -> Result<Tensor<Complex<f64>>, Error>

Return a zero-copy complex view of a real tensor whose last dimension stores paired real and imaginary components. The trailing axis must have length 2; it is consumed, so the complex view has one fewer dimension.

§Examples
use tenferro_tensor::{MemoryOrder, Tensor};

let r = Tensor::<f64>::from_slice(&[1.0, 2.0], &[2], MemoryOrder::ColumnMajor).unwrap();
let z = r.view_as_complex().unwrap();
assert_eq!(z.dims(), &[] as &[usize]);
Source§

impl<T> Tensor<T>

Methods that require no element-type bounds at all.

These operate only on tensor metadata (dims, strides, offset, buffer reference) and never read or write element values.

Source

pub fn is_contiguous(&self) -> bool

Returns true if the tensor data is contiguous in memory.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(t.is_contiguous());
Source

pub fn is_col_major_contiguous(&self) -> bool

Check if the tensor has column-major contiguous layout.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
assert!(t.is_col_major_contiguous());
Source

pub fn is_row_major_contiguous(&self) -> bool

Check if the tensor has row-major contiguous layout.

§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};

let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
assert!(t.is_row_major_contiguous());

Trait Implementations§

Source§

impl<T> AsRef<Tensor<T>> for StructuredTensor<T>
where T: Scalar,

Source§

fn as_ref(&self) -> &Tensor<T>

Converts this type into a shared reference of the (usually inferred) input type.
Source§

impl<T> Clone for Tensor<T>

Source§

fn clone(&self) -> Tensor<T>

Shallow clone: shares the underlying data buffer.

For a deep copy, materialize into a new allocation with Tensor::contiguous or another explicit data-producing operation.

1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl<T> Debug for Tensor<T>

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter. Read more
Source§

impl<T> Differentiable for Tensor<T>
where T: Scalar,

Source§

type Tangent = Tensor<T>

The tangent type for this value. Read more
Source§

fn zero_tangent(&self) -> Tensor<T>

Returns the zero tangent for this value (additive identity).
Source§

fn num_elements(&self) -> usize

Returns the number of scalar elements in this value. Read more
Source§

fn seed_cotangent(&self) -> Tensor<T>

Returns the seed cotangent for reverse-mode pullback. Read more
Source§

fn accumulate_tangent(a: Tensor<T>, b: &Tensor<T>) -> Tensor<T>

Accumulates (adds) two tangents: a + b.
Source§

impl<T> From<Tensor<T>> for StructuredTensor<T>
where T: Scalar,

Source§

fn from(value: Tensor<T>) -> StructuredTensor<T>

Converts to this type from the input type.

Auto Trait Implementations§

§

impl<T> Freeze for Tensor<T>

§

impl<T> !RefUnwindSafe for Tensor<T>

§

impl<T> Send for Tensor<T>
where T: Send,

§

impl<T> Sync for Tensor<T>
where T: Sync,

§

impl<T> Unpin for Tensor<T>

§

impl<T> !UnwindSafe for Tensor<T>

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
§

impl<T> ByRef<T> for T

§

fn by_ref(&self) -> &T

Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
§

impl<T> DistributionExt for T
where T: ?Sized,

§

fn rand<T>(&self, rng: &mut (impl Rng + ?Sized)) -> T
where Self: Distribution<T>,

Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
§

impl<T> Pointable for T

§

const ALIGN: usize

The alignment of pointer.
§

type Init = T

The type for initializers.
§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer. Read more
§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

§

fn vzip(self) -> V

§

impl<T, U> Imply<T> for U
where T: ?Sized, U: ?Sized,

§

impl<T> MaybeSend for T

§

impl<T> MaybeSendSync for T

§

impl<T> MaybeSync for T