pub struct Tensor<T> { /* private fields */ }
Multi-dimensional dense tensor.
Tensor<T> owns or shares a DataBuffer together with shape, strides,
and memory-space metadata.
§Zero-copy views
Operations like permute, broadcast,
and diagonal return new tensors that share the
underlying buffer and only adjust metadata.
§Accessing raw data
Use DataBuffer::as_slice via Tensor::buffer together with
Tensor::dims, Tensor::strides, and Tensor::offset to build
backend-specific views.
§GPU async support
The optional CompletionEvent tracks pending GPU computation so future
backends can chain asynchronous work without forcing CPU synchronization.
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::zeros(
&[2, 3],
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(t.dims(), &[2, 3]);
assert_eq!(t.len(), 6);
Implementations§
impl<T> Tensor<T>
where
    T: Scalar,
pub fn stack(tensors: &[&Tensor<T>], dim: isize) -> Result<Tensor<T>, Error>
Stack tensors along a new dimension.
Creates a new dimension and concatenates the input tensors along it. All input tensors must have the same shape. Negative dimensions are supported and count from the end.
This is a dense materialization operation that allocates a new buffer.
It is implemented by inserting a size-1 axis with
Tensor::unsqueeze and then delegating to Tensor::cat, so it
materializes logical values, resolves conjugation, and supports the same
CPU and same-device CUDA paths as concatenation.
§Arguments
tensors - Slice of input tensors to stack. Must not be empty.
dim - Position to insert the new dimension. Must be in range [-ndim-1, ndim].
§Errors
Returns an error if:
- The input list is empty
- Tensors have different shapes
- Tensors have different memory spaces or devices
- The dimension is out of range
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let a = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let b = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let stacked = Tensor::stack(&[&a, &b], 0).unwrap();
assert_eq!(stacked.dims(), &[2, 2, 3]);
pub fn cat(tensors: &[&Tensor<T>], dim: isize) -> Result<Tensor<T>, Error>
Concatenate tensors along an existing dimension.
Joins tensors along the specified dimension. All tensors must have the same rank and matching sizes on non-concatenated dimensions. Negative dimensions are supported and count from the end.
This is a dense materialization operation that allocates a new buffer.
Logical conjugation is materialized per input, the output is resolved
(conjugated = false), and any preferred compute-device hint is cleared.
Main-memory tensors are always supported; with cuda enabled, same-device
GPU tensors are also supported.
§Arguments
tensors - Slice of input tensors to concatenate. Must not be empty.
dim - Dimension along which to concatenate. Must be in range [-ndim, ndim-1].
§Errors
Returns an error if:
- The input list is empty
- Any tensor is rank-0 (scalars cannot be concatenated)
- Tensors have different ranks
- Tensors have mismatched sizes on non-concatenated dimensions
- Tensors have different memory spaces
- The dimension is out of range
- Non-main-memory tensors are provided without cuda support
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let a = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let b = Tensor::<f64>::zeros(&[2, 4], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let concatenated = Tensor::cat(&[&a, &b], 1).unwrap();
assert_eq!(concatenated.dims(), &[2, 7]);
impl<T> Tensor<T>
where
    T: Scalar,
pub fn zeros(
    dims: &[usize],
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
Create a tensor filled with zeros.
Allocates directly on the target device without an intermediate CPU
buffer. For GPU targets (with the cuda feature) this avoids the
CPU-allocate-then-transfer overhead.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let a = Tensor::<f64>::zeros(
&[3, 4],
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
pub fn empty(
    dims: &[usize],
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
Create a tensor with allocated storage.
On main memory, the current safe implementation initializes the backing storage to zero, which keeps the constructor deterministic while preserving the requested layout and device placement. For GPU targets this allocates uninitialised device memory directly.
The *_like family preserves a row-major layout only when the source
tensor is row-major contiguous and not column-major contiguous.
Ambiguous or non-contiguous inputs fall back to column-major.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let a = Tensor::<f64>::empty(
&[3, 4],
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
pub fn ones(
    dims: &[usize],
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
pub fn empty_strided(
    dims: &[usize],
    strides: &[isize],
    offset: isize,
    memory_space: LogicalMemorySpace,
) -> Result<Tensor<T>, Error>
Create a tensor with explicit strides.
§Errors
Returns an error if the layout would access storage outside the allocated buffer.
§Examples
use tenferro_tensor::Tensor;
use tenferro_device::LogicalMemorySpace;
let t = Tensor::<f64>::empty_strided(&[2, 2], &[1, 2], 0, LogicalMemorySpace::MainMemory).unwrap();
assert_eq!(t.strides(), &[1, 2]);
pub fn full(
    dims: &[usize],
    value: T,
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
pub fn from_slice(
    data: &[T],
    dims: &[usize],
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
Create a tensor from a data slice.
order describes how to interpret data at the import boundary. View
operations continue to use tenferro’s internal column-major semantics.
§Errors
Returns an error if data.len() does not match the product of dims.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let data = [1.0, 2.0, 3.0, 4.0];
let t = Tensor::<f64>::from_slice(&data, &[2, 2], MemoryOrder::ColumnMajor).unwrap();
pub fn from_row_major_slice(
    data: &[T],
    dims: &[usize],
) -> Result<Tensor<T>, Error>
Create a tensor from a row-major data slice.
This is a convenience wrapper around
from_slice with MemoryOrder::RowMajor.
It lets NumPy / C users pass data in their natural order while
tenferro internally stores it in column-major layout.
§Errors
Returns an error if data.len() does not match the product of dims.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
// Row-major: data is laid out row by row.
// [[1, 2],
// [3, 4]]
let t = Tensor::<f64>::from_row_major_slice(
&[1.0, 2.0, 3.0, 4.0],
&[2, 2],
).unwrap();
assert_eq!(t.dims(), &[2, 2]);
pub fn from_vec(
    data: Vec<T>,
    dims: &[usize],
    strides: &[isize],
    offset: isize,
) -> Result<Tensor<T>, Error>
pub unsafe fn from_external_parts(
    ptr: *const T,
    len: usize,
    dims: &[usize],
    strides: &[isize],
    offset: isize,
    release: impl FnOnce() + Send + 'static,
) -> Result<Tensor<T>, Error>
Create a tensor from externally-owned CPU-accessible memory.
§Safety
- ptr must remain valid for at least len elements until release is called.
- The layout described by dims, strides, and offset must stay in bounds.
§Examples
use tenferro_tensor::Tensor;
let data = vec![1.0, 2.0, 3.0, 4.0];
let ptr = data.as_ptr();
let tensor = unsafe {
Tensor::from_external_parts(ptr, data.len(), &[2, 2], &[1, 2], 0, move || drop(data))
}.unwrap();
assert_eq!(tensor.dims(), &[2, 2]);
pub fn try_into_data_vec(self) -> Option<Vec<T>>
pub fn empty_like(&self) -> Result<Tensor<T>, Error>
Create a tensor with the same shape and layout convention as another tensor.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let base = Tensor::<f64>::zeros(
&[2, 3],
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
let like = base.empty_like().unwrap();
assert_eq!(like.dims(), base.dims());
pub fn zeros_like(&self) -> Result<Tensor<T>, Error>
Create a zero-filled tensor with the same shape and layout convention as another tensor.
The *_like family preserves a row-major layout only when the source
tensor is row-major contiguous and not column-major contiguous.
Ambiguous or non-contiguous inputs fall back to column-major.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let base = Tensor::<f64>::zeros(
&[2, 3],
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
let like = base.zeros_like().unwrap();
assert_eq!(like.dims(), base.dims());
pub fn ones_like(&self) -> Result<Tensor<T>, Error>
Create a one-filled tensor with the same shape and layout convention as another tensor.
The *_like family preserves a row-major layout only when the source
tensor is row-major contiguous and not column-major contiguous.
Ambiguous or non-contiguous inputs fall back to column-major.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let base = Tensor::<f64>::zeros(
&[2, 3],
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
let like = base.ones_like().unwrap();
assert_eq!(like.dims(), base.dims());
pub fn full_like(&self, value: T) -> Result<Tensor<T>, Error>
Create a tensor filled with value and matching the shape/layout convention of another tensor.
The *_like family preserves a row-major layout only when the source
tensor is row-major contiguous and not column-major contiguous.
Ambiguous or non-contiguous inputs fall back to column-major.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let base = Tensor::<f64>::zeros(
&[2, 3],
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
let like = base.full_like(3.25).unwrap();
assert_eq!(like.dims(), base.dims());
impl Tensor<f64>
pub fn rand(
    dims: &[usize],
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
    generator: Option<&mut Generator>,
) -> Result<Tensor<f64>, Error>
pub fn randn(
    dims: &[usize],
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
    generator: Option<&mut Generator>,
) -> Result<Tensor<f64>, Error>
pub fn rand_like(
    reference: &Tensor<f64>,
    generator: Option<&mut Generator>,
) -> Result<Tensor<f64>, Error>
Create a tensor with the same shape/layout convention as another tensor and fill it with
uniform samples on [0, 1).
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let base = Tensor::<f64>::zeros(&[2, 3], tenferro_device::LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let t = Tensor::<f64>::rand_like(&base, None).unwrap();
assert_eq!(t.dims(), base.dims());
pub fn randn_like(
    reference: &Tensor<f64>,
    generator: Option<&mut Generator>,
) -> Result<Tensor<f64>, Error>
Create a tensor with the same shape/layout convention as another tensor and fill it with standard-normal samples.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let base = Tensor::<f64>::zeros(&[2, 3], tenferro_device::LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let t = Tensor::<f64>::randn_like(&base, None).unwrap();
assert_eq!(t.dims(), base.dims());
impl Tensor<i32>
pub fn randint(
    low: i32,
    high: i32,
    dims: &[usize],
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
    generator: Option<&mut Generator>,
) -> Result<Tensor<i32>, Error>
pub fn randint_like(
    reference: &Tensor<i32>,
    low: i32,
    high: i32,
    generator: Option<&mut Generator>,
) -> Result<Tensor<i32>, Error>
Create a tensor with the same shape/layout convention as another tensor and fill it with
integer samples in [low, high).
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let base = Tensor::<i32>::zeros(&[2, 3], tenferro_device::LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let t = Tensor::<i32>::randint_like(&base, -2, 5, None).unwrap();
assert_eq!(t.dims(), base.dims());
impl<T> Tensor<T>
pub fn eye(
    n: usize,
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
pub fn arange(
    start: T,
    end: T,
    step: T,
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
Create a regularly spaced 1-D tensor from start toward end.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let xs = Tensor::<f64>::arange(
0.0,
5.0,
1.0,
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(xs.dims(), &[5]);
pub fn linspace(
    start: T,
    end: T,
    n_samples: isize,
    memory_space: LogicalMemorySpace,
    order: MemoryOrder,
) -> Result<Tensor<T>, Error>
Create a 1-D tensor containing n_samples evenly spaced values.
Returns an error if n_samples is negative.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
use tenferro_device::LogicalMemorySpace;
let xs = Tensor::<f64>::linspace(
0.0,
1.0,
5,
LogicalMemorySpace::MainMemory,
MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(xs.dims(), &[5]);
impl<T> Tensor<T>
pub fn conj(&self) -> Tensor<T>
where
    T: Conjugate,
Return a lazily-conjugated tensor (shared buffer, flag flip).
§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};
let data = vec![Complex64::new(1.0, 2.0), Complex64::new(3.0, -4.0)];
let a = Tensor::from_slice(&data, &[2], MemoryOrder::ColumnMajor).unwrap();
let a_conj = a.conj();
assert!(a_conj.is_conjugated());
impl<T> Tensor<T>
where
    T: Scalar,
pub fn deep_clone(&self) -> Tensor<T>
Create a deep copy with an exclusively-owned contiguous buffer.
Unlike clone (which is a shallow Arc refcount
bump), this always allocates a fresh buffer and copies element data.
The returned tensor is contiguous in column-major order and has
buffer.is_unique() == true, so set and
get_mut are guaranteed to succeed.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let a = Tensor::<f64>::from_slice(
&[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
let b = a.clone(); // shallow — shares buffer
let mut c = a.deep_clone(); // deep — independent buffer
c.set(&[0, 0], 99.0).unwrap();
assert_eq!(c.get(&[0, 0]), Some(&99.0));
assert_eq!(a.get(&[0, 0]), Some(&1.0)); // original unchanged
pub fn contiguous(&self, order: MemoryOrder) -> Tensor<T>
Return a contiguous copy of this tensor in the given memory order.
order controls the materialized output buffer only. It does not change
the internal column-major semantics used by view operations such as
reshape.
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let c = t.contiguous(MemoryOrder::RowMajor);
assert!(c.is_contiguous());
pub fn into_contiguous(self, order: MemoryOrder) -> Tensor<T>
pub fn into_column_major(self) -> Tensor<T>
Consume this tensor and return a contiguous column-major version.
This is a convenience wrapper around into_contiguous(MemoryOrder::ColumnMajor)
since column-major is tenferro’s canonical internal layout.
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let col_major = t.into_column_major();
assert!(col_major.is_col_major_contiguous());
pub fn to_vec(&self) -> Vec<T>
Copy tensor data into a flat Vec<T> in column-major order.
The returned vector has length self.len() with elements laid out
in column-major (Fortran) order. For a 2-D tensor with shape
[m, n], the first m elements are column 0, the next m are
column 1, and so on.
This method internally materializes a contiguous copy when the tensor is not already column-major contiguous, so it always returns owned data regardless of the original layout.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::from_row_major_slice(
&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
&[2, 3],
).unwrap();
// Matrix (row-major input):
// [[1, 2, 3],
// [4, 5, 6]]
// Column-major output: col0=[1,4], col1=[2,5], col2=[3,6]
assert_eq!(t.to_vec(), vec![1.0, 4.0, 2.0, 5.0, 3.0, 6.0]);
impl<T> Tensor<T>
pub fn get(&self, index: &[usize]) -> Option<&T>
Access a single element by multi-dimensional index.
Returns None if the index is out of bounds or the underlying buffer
is not CPU-accessible.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
// Column-major: data is laid out column by column.
// from_slice with ColumnMajor and data [1,2,3,4] gives:
// column 0 = [1, 2], column 1 = [3, 4]
// matrix = [[1, 3],
// [2, 4]]
let t = Tensor::<f64>::from_slice(
&[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
assert_eq!(t.get(&[0, 0]), Some(&1.0));
assert_eq!(t.get(&[1, 0]), Some(&2.0));
assert_eq!(t.get(&[0, 1]), Some(&3.0));
assert_eq!(t.get(&[1, 1]), Some(&4.0));
assert_eq!(t.get(&[2, 0]), None); // out of bounds
pub fn get_mut(&mut self, index: &[usize]) -> Option<&mut T>
Access a single element mutably by multi-dimensional index.
Returns None if the index is out of bounds, the buffer is not
CPU-accessible, or the buffer is shared (Arc refcount > 1).
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let mut t = Tensor::<f64>::from_slice(
&[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
*t.get_mut(&[0, 1]).unwrap() = 99.0;
assert_eq!(t.get(&[0, 1]), Some(&99.0));
// Out of bounds returns None:
assert!(t.get_mut(&[2, 0]).is_none());
pub fn set(&mut self, index: &[usize], value: T) -> Result<(), Error>
Write a value at the given multi-dimensional index.
Returns Ok(()) on success, or an error if the index is out of bounds,
the buffer is not CPU-accessible, or the buffer is shared
(Arc refcount > 1). Call deep_clone first to
obtain an exclusively-owned copy.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let mut t = Tensor::<f64>::from_slice(
&[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor,
).unwrap();
t.set(&[1, 0], 10.0).unwrap();
assert_eq!(t.get(&[1, 0]), Some(&10.0));
// Shared buffers cannot be written:
let shared = t.clone(); // refcount == 2
// t.set(&[0, 0], 5.0) would fail here because the buffer is shared
impl<T> Tensor<T>
pub fn buffer(&self) -> &DataBuffer<T>
pub fn buffer_mut(&mut self) -> &mut DataBuffer<T>
pub fn logical_memory_space(&self) -> LogicalMemorySpace
pub fn preferred_compute_device(&self) -> Option<ComputeDevice>
pub fn set_preferred_compute_device(&mut self, device: Option<ComputeDevice>)
pub fn is_conjugated(&self) -> bool
pub fn has_fw_grad(&self) -> bool
pub fn set_fw_grad(&mut self, grad: Tensor<T>)
pub fn detach_fw_grad(&mut self) -> Option<Tensor<T>>
Detach and return the forward-mode tangent, leaving None.
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let mut t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
t.set_fw_grad(Tensor::<f64>::ones(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap());
let _grad = t.detach_fw_grad().unwrap();
pub fn effective_compute_devices(
    &self,
    op_kind: OpKind,
) -> Result<Vec<ComputeDevice>, Error>
impl<T> Tensor<T>
where
    T: Scalar,
pub fn zero_trailing_by_counts<R>(
    &self,
    keep_counts: &Tensor<R>,
    axis: usize,
    structural_rank: usize,
) -> Result<Tensor<T>, Error>
where
    R: KeepCountScalar,
Return a contiguous tensor with trailing elements zeroed according to batch-local keep counts.
structural_rank splits the payload dims from the trailing batch dims.
axis is interpreted within the structural prefix [0, structural_rank).
Phase 1 supports main-memory tensors only.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let payload = Tensor::from_slice(&[1.0, 2.0, 3.0, 4.0], &[2, 2], MemoryOrder::ColumnMajor).unwrap();
let keep_counts = Tensor::from_slice(&[1.0], &[], MemoryOrder::ColumnMajor).unwrap();
let trimmed = payload.zero_trailing_by_counts(&keep_counts, 1, 2).unwrap();
assert_eq!(trimmed.buffer().as_slice().unwrap(), &[1.0, 2.0, 0.0, 0.0]);
pub fn merge_strict_lower_and_upper(
    lower: &Tensor<T>,
    upper: &Tensor<T>,
) -> Result<Tensor<T>, Error>
Merge a strict-lower source and an upper-with-diagonal source into one packed matrix.
lower must have shape [m, k, *batch] and upper must have shape [k, n, *batch]
where k = min(m, n). The output has shape [m, n, *batch] with entries selected
from lower when row > col and from upper otherwise.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let lower = Tensor::from_slice(&[1.0, 2.0, 1.0, 3.0], &[2, 2], MemoryOrder::ColumnMajor).unwrap();
let upper = Tensor::from_slice(&[4.0, 0.0, 5.0, 6.0], &[2, 2], MemoryOrder::ColumnMajor).unwrap();
let packed = Tensor::merge_strict_lower_and_upper(&lower, &upper).unwrap();
assert_eq!(packed.buffer().as_slice().unwrap(), &[4.0, 2.0, 5.0, 6.0]);
impl<T> Tensor<T>
where
    T: Scalar,
pub fn to_memory_space_async(
    &self,
    target: LogicalMemorySpace,
) -> Result<Tensor<T>, Error>
impl<T> Tensor<T>
pub fn view(&self, new_dims: &[usize]) -> Result<Tensor<T>, Error>
Return a zero-copy view with a different shape.
This is the strict metadata-only variant of reshape. The returned tensor
shares storage with self and therefore requires the input layout to be
contiguous (column-major). For PyTorch-style view-or-copy semantics that
handle non-contiguous inputs, use reshape instead.
§Errors
Returns StrideError if the tensor is not contiguous.
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let r = t.view(&[6]).unwrap();
assert_eq!(r.dims(), &[6]);
pub fn reshape(&self, new_dims: &[usize]) -> Result<Tensor<T>, Error>
where
    T: Scalar,
Reshape the tensor to a new shape.
Reshape follows tenferro’s internal column-major semantics and PyTorch-style view-or-copy behavior: it returns a zero-copy view when the current layout is compatible with column-major ordering; otherwise it first materializes a contiguous column-major copy and returns a view of that.
For strict zero-copy semantics that reject non-contiguous inputs, use
view instead.
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::RowMajor).unwrap();
let r = t.reshape(&[6]).unwrap();
assert_eq!(r.dims(), &[6]);
pub fn view_as_strided(
    &self,
    new_dims: Vec<usize>,
    new_strides: Vec<isize>,
) -> Result<Tensor<T>, Error>
pub fn unsqueeze(&self, dim: isize) -> Result<Tensor<T>, Error>
Insert a size-1 dimension at the specified position.
This is a zero-copy view operation. Negative dimensions are supported and count from the end.
§Arguments
dim - Position to insert the new dimension. Must be in range [-ndim-1, ndim].
§Errors
Returns an error if the dimension is out of range.
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::zeros(&[2, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let u = t.unsqueeze(0).unwrap();
assert_eq!(u.dims(), &[1, 2, 3]);
let u2 = t.unsqueeze(-1).unwrap();
assert_eq!(u2.dims(), &[2, 3, 1]);
pub fn squeeze_dim(&self, dim: isize) -> Result<Tensor<T>, Error>
Remove a specific size-1 dimension from the tensor.
This is a zero-copy view operation. Negative dimensions are supported and count from the end.
§Arguments
dim - Dimension to remove. Must be in range [-ndim, ndim-1] and have size 1.
§Errors
Returns an error if:
- The dimension is out of range
- The dimension does not have size 1
§Examples
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::zeros(&[2, 1, 3], LogicalMemorySpace::MainMemory, MemoryOrder::ColumnMajor).unwrap();
let s = t.squeeze_dim(1).unwrap();
assert_eq!(s.dims(), &[2, 3]);
let s2 = t.squeeze_dim(-2).unwrap();
assert_eq!(s2.dims(), &[2, 3]);
pub fn mT(&self) -> Result<Tensor<T>, Error>
Return a zero-copy view with the last two axes transposed.
This is a metadata-only operation. For batched matrices, leading batch axes are preserved and only the final two matrix axes are swapped.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let t = Tensor::<f64>::from_slice(
&[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
&[2, 3],
MemoryOrder::ColumnMajor,
)
.unwrap();
let mt = t.mT().unwrap();
assert_eq!(mt.dims(), &[3, 2]);
impl<T> Tensor<T>
where
    T: Conjugate,
pub fn mH(&self) -> Result<Tensor<T>, Error>
Return a zero-copy conjugate-transpose view over the last two axes.
This is equivalent to self.mT()?.conj(): swap the trailing matrix axes
and toggle the lazy conjugation flag.
§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};
let z = Tensor::<Complex64>::from_slice(
&[Complex64::new(1.0, 2.0), Complex64::new(3.0, 4.0)],
&[2, 1],
MemoryOrder::ColumnMajor,
)
.unwrap();
let mh = z.mH().unwrap();
assert_eq!(mh.dims(), &[1, 2]);
assert!(mh.is_conjugated());
impl Tensor<Complex<f32>>
pub fn view_as_real(&self) -> Result<Tensor<f32>, Error>
Return a zero-copy real view of a complex tensor.
The last logical axis is expanded to length 2, exposing the real and
imaginary parts as adjacent real-valued elements.
§Examples
use num_complex::Complex32;
use tenferro_device::LogicalMemorySpace;
use tenferro_tensor::{MemoryOrder, Tensor};
let z = Tensor::<Complex32>::from_slice(
&[Complex32::new(1.0, 2.0)],
&[1],
MemoryOrder::ColumnMajor,
).unwrap();
let r = z.view_as_real().unwrap();
assert_eq!(r.dims(), &[1, 2]);
assert_eq!(r.logical_memory_space(), LogicalMemorySpace::MainMemory);
pub fn real(&self) -> Result<Tensor<f32>, Error>
Return a zero-copy view of the real part of a resolved complex tensor.
This is implemented as view_as_real() followed by selecting the real
lane of the trailing size-2 axis.
§Examples
use num_complex::Complex32;
use tenferro_tensor::{MemoryOrder, Tensor};
let z = Tensor::<Complex32>::from_slice(
&[Complex32::new(1.0, 2.0)],
&[1],
MemoryOrder::ColumnMajor,
)
.unwrap();
let real = z.real().unwrap();
assert_eq!(real.dims(), &[1]);
pub fn imag(&self) -> Result<Tensor<f32>, Error>
Return a zero-copy view of the imaginary part of a resolved complex tensor.
This is implemented as view_as_real() followed by selecting the
imaginary lane of the trailing size-2 axis.
§Examples
use num_complex::Complex32;
use tenferro_tensor::{MemoryOrder, Tensor};
let z = Tensor::<Complex32>::from_slice(
&[Complex32::new(1.0, 2.0)],
&[1],
MemoryOrder::ColumnMajor,
)
.unwrap();
let imag = z.imag().unwrap();
assert_eq!(imag.dims(), &[1]);
impl Tensor<Complex<f64>>
pub fn view_as_real(&self) -> Result<Tensor<f64>, Error>
Return a zero-copy real view of a complex tensor.
The last logical axis is expanded to length 2, exposing the real and
imaginary parts as adjacent real-valued elements.
§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};
let z = Tensor::<Complex64>::from_slice(
&[Complex64::new(1.0, 2.0)],
&[1],
MemoryOrder::ColumnMajor,
).unwrap();
let r = z.view_as_real().unwrap();
assert_eq!(r.dims(), &[1, 2]);
pub fn real(&self) -> Result<Tensor<f64>, Error>
Return a zero-copy view of the real part of a resolved complex tensor.
This is implemented as view_as_real() followed by selecting the real
lane of the trailing size-2 axis.
§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};
let z = Tensor::<Complex64>::from_slice(
&[Complex64::new(1.0, 2.0)],
&[1],
MemoryOrder::ColumnMajor,
)
.unwrap();
let real = z.real().unwrap();
assert_eq!(real.dims(), &[1]);
pub fn imag(&self) -> Result<Tensor<f64>, Error>
Return a zero-copy view of the imaginary part of a resolved complex tensor.
This is implemented as view_as_real() followed by selecting the
imaginary lane of the trailing size-2 axis.
§Examples
use num_complex::Complex64;
use tenferro_tensor::{MemoryOrder, Tensor};
let z = Tensor::<Complex64>::from_slice(
&[Complex64::new(1.0, 2.0)],
&[1],
MemoryOrder::ColumnMajor,
)
.unwrap();
let imag = z.imag().unwrap();
assert_eq!(imag.dims(), &[1]);
impl Tensor<f32>
pub fn view_as_complex(&self) -> Result<Tensor<Complex<f32>>, Error>
Return a zero-copy complex view of a real tensor whose last dimension stores paired real and imaginary components.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let r = Tensor::<f32>::from_slice(&[1.0, 2.0], &[2], MemoryOrder::ColumnMajor).unwrap();
let z = r.view_as_complex().unwrap();
assert_eq!(z.dims(), &[] as &[usize]);
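`view_as_complex` is the inverse reinterpretation of `view_as_real`: it requires the trailing axis to pair adjacent (re, im) values, and each pair collapses into one complex element. A sketch under that assumption, with `C64` as an illustrative stand-in for `Complex64`:

```rust
// Hedged sketch: a buffer of adjacent (re, im) f64 pairs can be viewed
// as half as many complex values without copying. `C64` is an
// illustrative stand-in for num_complex::Complex64.
#[repr(C)]
#[derive(Clone, Copy, PartialEq, Debug)]
struct C64 {
    re: f64,
    im: f64,
}

// Reinterpret an even-length real slice as complex pairs.
fn as_complex_slice(r: &[f64]) -> &[C64] {
    assert!(r.len() % 2 == 0, "trailing axis must pair re/im values");
    unsafe { std::slice::from_raw_parts(r.as_ptr().cast::<C64>(), r.len() / 2) }
}

fn main() {
    let r = [1.0, 2.0, 3.0, 4.0];
    let z = as_complex_slice(&r);
    assert_eq!(z, &[C64 { re: 1.0, im: 2.0 }, C64 { re: 3.0, im: 4.0 }]);
}
```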
impl Tensor<f64>
pub fn view_as_complex(&self) -> Result<Tensor<Complex<f64>>, Error>
Return a zero-copy complex view of a real tensor whose last dimension stores paired real and imaginary components.
§Examples
use tenferro_tensor::{MemoryOrder, Tensor};
let r = Tensor::<f64>::from_slice(&[1.0, 2.0], &[2], MemoryOrder::ColumnMajor).unwrap();
let z = r.view_as_complex().unwrap();
assert_eq!(z.dims(), &[] as &[usize]);
impl<T> Tensor<T>
Methods that require no element-type bounds at all.
These operate only on tensor metadata (dims, strides, offset, buffer reference) and never read or write element values.
pub fn is_contiguous(&self) -> bool
pub fn is_col_major_contiguous(&self) -> bool
pub fn is_row_major_contiguous(&self) -> bool
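Since these checks depend only on metadata, they reduce to comparing each axis's stride against the running product of dimension sizes. A minimal sketch of a column-major contiguity check under that assumption (illustrative; not the crate's internals):

```rust
// Hedged sketch: column-major contiguity holds when each axis's stride
// equals the product of all earlier dims (size-1 axes are exempt).
// Purely a metadata computation; element values are never touched.
fn col_major_contiguous(dims: &[usize], strides: &[isize]) -> bool {
    let mut expected = 1isize;
    for (&d, &s) in dims.iter().zip(strides) {
        if d != 1 && s != expected {
            return false;
        }
        expected *= d as isize;
    }
    true
}

fn main() {
    // A freshly allocated 2x3 column-major tensor has strides [1, 2].
    assert!(col_major_contiguous(&[2, 3], &[1, 2]));
    // Permuting it to 3x2 yields strides [2, 1]: same buffer, but no
    // longer column-major contiguous.
    assert!(!col_major_contiguous(&[3, 2], &[2, 1]));
}
```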
Trait Implementations§
impl<T> Clone for Tensor<T>
fn clone(&self) -> Tensor<T>
Shallow clone: shares the underlying data buffer.
For a deep copy, materialize into a new allocation with
Tensor::contiguous or another explicit data-producing operation.
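The shallow-clone behavior can be pictured as cloning a reference-counted handle to the buffer rather than the elements. A sketch under that assumption (`Tensorish` is an illustrative model, not the crate's actual layout):

```rust
use std::sync::Arc;

// Hedged sketch of shallow-clone semantics: cloning bumps the reference
// count on a shared buffer handle instead of copying elements.
#[derive(Clone)]
struct Tensorish {
    buf: Arc<Vec<f64>>,
}

fn main() {
    let a = Tensorish { buf: Arc::new(vec![0.0; 6]) };
    let b = a.clone();
    // Both handles point at the same allocation: no element was copied.
    assert!(Arc::ptr_eq(&a.buf, &b.buf));
    assert_eq!(Arc::strong_count(&a.buf), 2);
}
```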
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl<T> Differentiable for Tensor<T> where T: Scalar
fn zero_tangent(&self) -> Tensor<T>
fn num_elements(&self) -> usize
fn seed_cotangent(&self) -> Tensor<T>
Auto Trait Implementations§
impl<T> Freeze for Tensor<T>
impl<T> !RefUnwindSafe for Tensor<T>
impl<T> Send for Tensor<T> where T: Send
impl<T> Sync for Tensor<T> where T: Sync
impl<T> Unpin for Tensor<T>
impl<T> !UnwindSafe for Tensor<T>
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> CloneToUninit for T where T: Clone
impl<T> DistributionExt for T where T: ?Sized
fn rand<T>(&self, rng: &mut (impl Rng + ?Sized)) -> T where Self: Distribution<T>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.