pub struct TracedTensor {
    pub id: TracedTensorId,
    pub rank: usize,
    pub dtype: DType,
    pub fragment: Arc<Fragment<StdTensorOp>>,
    pub val: LocalValId,
    pub data: Option<Arc<Tensor>>,
    /* private fields */
}

Implementations

impl TracedTensor
pub fn from_tensor_concrete_shape(tensor: Tensor) -> Self
Build a TracedTensor leaf from a concrete Tensor, keeping its
shape as a concrete shape_hint.
This is the common constructor when you have concrete tensor data that you want to use both for graph building and for evaluation. The resulting tensor is treated as a concrete-shape leaf by downstream passes (binary einsum decomposition, build-time reshape folding, etc.).
Examples
use tenferro::{Tensor, TracedTensor};
let a = TracedTensor::from_tensor_concrete_shape(
Tensor::from_vec(vec![2, 3], vec![1.0_f64, 2.0, 3.0, 4.0, 5.0, 6.0]),
);
assert_eq!(a.rank, 2);
assert!(a.is_concrete_shape());

pub fn from_tensor_symbolic_shape(tensor: Tensor) -> Self
Build a TracedTensor leaf from a concrete Tensor but advertise
a symbolic shape during graph construction.
The tensor data is still attached (so plain eval works without
bindings), but graph passes see the leaf as shape-symbolic. This is
useful for building a single traced program that should not bake in
shape-specific optimizations — e.g. mixing a known-shape tensor into
an einsum with other input_symbolic_shape placeholders forces the
einsum to be kept as a single NaryEinsum op rather than
decomposing at build time.
Examples
use tenferro::{Tensor, TracedTensor};
let t = TracedTensor::from_tensor_symbolic_shape(
Tensor::from_vec(vec![2, 3], vec![1.0_f64, 2.0, 3.0, 4.0, 5.0, 6.0]),
);
assert_eq!(t.rank, 2);
assert!(!t.is_concrete_shape());

pub fn input_concrete_shape(dtype: DType, shape: &[usize]) -> Self
Build a data-less placeholder leaf with a fixed (concrete) shape.
Must be bound via TracedTensor::eval_with_inputs before evaluation.
Use this when you know the exact shape of the input but want to build
the graph once and feed different concrete tensors at eval time.
Examples
use tenferro_tensor::DType;
use tenferro::TracedTensor;
let x = TracedTensor::input_concrete_shape(DType::F64, &[2, 3]);
assert_eq!(x.rank, 2);
assert!(x.is_concrete_shape());

pub fn input_symbolic_shape(dtype: DType, rank: usize) -> Self
Build a data-less placeholder leaf with the given rank but fully
symbolic shape (every dim is a distinct SymDim::TensorAxis).
Must be bound via TracedTensor::eval_with_inputs before
evaluation. Use this to build shape-agnostic graphs — in particular,
einsum calls containing at least one input_symbolic_shape input are
kept as a single NaryEinsum op so the contraction path can be
optimized at eval time against the actual bound shapes.
Examples
use tenferro_tensor::DType;
use tenferro::TracedTensor;
let x = TracedTensor::input_symbolic_shape(DType::F64, 2);
assert_eq!(x.rank, 2);
assert!(!x.is_concrete_shape());

pub fn from_vec<T: TensorScalar>(shape: Vec<usize>, data: Vec<T>) -> Self
Build a concrete-shape TracedTensor leaf from typed Vec<T>
data. Equivalent to
TracedTensor::from_tensor_concrete_shape(Tensor::from_vec(shape, data)).
Examples
use tenferro::TracedTensor;
let a = TracedTensor::from_vec(vec![2, 3], vec![1.0_f64, 2.0, 3.0, 4.0, 5.0, 6.0]);
assert_eq!(a.rank, 2);

pub fn is_concrete_shape(&self) -> bool
Returns true iff every dim of this tensor’s shape_hint is a
constant SymDim (i.e. the shape is fully known at graph-build time).
Examples
use tenferro_tensor::DType;
use tenferro::TracedTensor;
let a = TracedTensor::from_vec(vec![2, 3], vec![1.0_f64; 6]);
let b = TracedTensor::input_symbolic_shape(DType::F64, 2);
assert!(a.is_concrete_shape());
assert!(!b.is_concrete_shape());

pub fn input_key(&self) -> Option<TensorInputKey>
If this TracedTensor is a leaf (single-node input fragment),
return its input key. Computed tensors return None.
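The leaf-versus-computed distinction can be illustrated with a doctest-style sketch in the manner of the crate's other examples (illustrative, not from the shipped docs; it assumes only the `input_key` semantics stated above):

```rust
use tenferro_tensor::DType;
use tenferro::TracedTensor;

// A data-less placeholder is a single-node input fragment, hence a leaf.
let x = TracedTensor::input_concrete_shape(DType::F64, &[2, 3]);
// Adding two tensors yields a computed tensor, which has no input key.
let y = &x + &x;

assert!(x.input_key().is_some());
assert!(y.input_key().is_none());
```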
pub fn eval<B: TensorBackend>(&mut self, engine: &mut Engine<B>) -> Result<&Tensor>

Evaluate this traced tensor and return a reference to the resulting concrete Tensor. Every leaf must carry its own data; graphs containing data-less placeholder leaves must be evaluated via TracedTensor::eval_with_inputs instead.
pub fn eval_with_inputs<B: TensorBackend>(
    &mut self,
    engine: &mut Engine<B>,
    bindings: &[(&TracedTensor, &Tensor)],
) -> Result<&Tensor>
Evaluate this traced tensor, binding external tensors to any placeholder leaves present in the graph.
Each (placeholder, tensor) pair maps a placeholder
TracedTensor (built via
TracedTensor::input_concrete_shape or
TracedTensor::input_symbolic_shape) to the concrete
Tensor that should take its place during execution. Leaves that
carry their own data (from TracedTensor::from_vec,
TracedTensor::from_tensor_concrete_shape, or
TracedTensor::from_tensor_symbolic_shape) must not appear in
bindings.
Errors

- Error::UnexpectedBinding if a binding’s left side is not a data-less placeholder leaf.
- Error::DuplicateBinding if the same placeholder key appears more than once in bindings.
- Error::PlaceholderDtypeMismatch if a binding tensor’s dtype differs from the placeholder’s declared dtype.
- Error::PlaceholderShapeMismatch if a binding tensor’s shape differs from an input_concrete_shape placeholder’s fixed shape.
- Error::PlaceholderRankMismatch if a binding tensor’s rank differs from an input_symbolic_shape placeholder’s rank.
- Error::UnboundPlaceholder if the compiled graph contains a placeholder that has no entry in bindings.
Examples
use tenferro::{CpuBackend, Engine, Tensor, TracedTensor};
use tenferro_tensor::DType;
let mut engine = Engine::new(CpuBackend::new());
let x = TracedTensor::input_symbolic_shape(DType::F64, 1);
let mut y = &x + &x;
let concrete = Tensor::from_vec(vec![3], vec![1.0_f64, 2.0, 3.0]);
let out = y.eval_with_inputs(&mut engine, &[(&x, &concrete)]).unwrap();
assert_eq!(out.shape(), &[3]);

pub fn grad(&self, wrt: &TracedTensor) -> Result<TracedTensor>

pub fn try_grad(&self, wrt: &TracedTensor) -> Result<Option<TracedTensor>>
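Neither grad nor try_grad carries an example above; the following doctest-style sketch shows the intended usage. It is illustrative only: it assumes grad(wrt) returns a traced gradient with the same shape as wrt, and that try_grad returns Ok(None) for an unrelated input, mirroring the documented try_jvp semantics.

```rust
use tenferro::{CpuBackend, Engine, TracedTensor};

let mut engine = Engine::new(CpuBackend::new());
let x = TracedTensor::from_vec(vec![], vec![3.0_f64]);
let y = &x * &x;

// d(x * x)/dx = 2x; the gradient is itself a traced tensor.
let mut dy_dx = y.grad(&x).unwrap();
assert_eq!(dy_dx.eval(&mut engine).unwrap().shape(), &[] as &[usize]);

// Assumed: Ok(None) when the output does not depend on wrt.
let z = TracedTensor::from_vec(vec![], vec![1.0_f64]);
assert!(y.try_grad(&z).unwrap().is_none());
```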
pub fn checkpoint<B: TensorBackend>(&mut self, engine: &mut Engine<B>) -> Result<()>
Evaluate this tensor and replace its graph with a concrete leaf.
This keeps downstream forward evaluation rooted at the concrete value while retaining the original fragment chain for later reverse-mode AD.
Examples
use tenferro::{CpuBackend, Engine, TracedTensor};
let mut engine = Engine::new(CpuBackend::new());
let x = TracedTensor::from_vec(vec![], vec![3.0_f64]);
let mut y = &x * &x;
y.checkpoint(&mut engine).unwrap();
assert_eq!(y.eval(&mut engine).unwrap().shape(), &[] as &[usize]);

pub fn jvp(&self, wrt: &TracedTensor, tangent: &TracedTensor) -> TracedTensor

pub fn try_jvp(&self, wrt: &TracedTensor, tangent: &TracedTensor) -> Option<TracedTensor>
Like jvp but returns None when the output does not
depend on wrt (i.e. the tangent is structurally zero).
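A doctest-style sketch of that distinction, in the style of the crate's other examples (illustrative; not from the shipped docs):

```rust
use tenferro_tensor::DType;
use tenferro::TracedTensor;

let x = TracedTensor::input_symbolic_shape(DType::F64, 1);
let unrelated = TracedTensor::input_symbolic_shape(DType::F64, 1);
let tangent = TracedTensor::input_symbolic_shape(DType::F64, 1);
let y = &x + &x;

// y depends on x, so a forward-mode tangent exists.
assert!(y.try_jvp(&x, &tangent).is_some());
// y does not depend on `unrelated`: the tangent is structurally zero.
assert!(y.try_jvp(&unrelated, &tangent).is_none());
```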
pub fn vjp(&self, wrt: &TracedTensor, cotangent: &TracedTensor) -> TracedTensor
pub fn add(&self, other: &TracedTensor) -> TracedTensor

pub fn mul(&self, other: &TracedTensor) -> TracedTensor

pub fn div(&self, other: &TracedTensor) -> TracedTensor

pub fn neg(&self) -> TracedTensor

pub fn conj(&self) -> TracedTensor

pub fn abs(&self) -> TracedTensor

pub fn sign(&self) -> TracedTensor

pub fn scale_real(&self, factor: f64) -> TracedTensor

pub fn scale_complex(&self, factor: Complex64) -> TracedTensor
Scale by a complex scalar: y = factor * x.
This currently supports complex tensors only. For real scaling, prefer
scale_real.
Examples

use num_complex::Complex64;
let y = x.scale_complex(Complex64::new(0.0, 1.0)); // multiply by i

pub fn exp(&self) -> TracedTensor
pub fn log(&self) -> TracedTensor

pub fn sin(&self) -> TracedTensor

pub fn cos(&self) -> TracedTensor

pub fn tanh(&self) -> TracedTensor

pub fn sqrt(&self) -> TracedTensor

pub fn rsqrt(&self) -> TracedTensor

pub fn pow(&self, other: &TracedTensor) -> TracedTensor

pub fn expm1(&self) -> TracedTensor

pub fn log1p(&self) -> TracedTensor

pub fn convert(&self, to: DType) -> TracedTensor
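The elementwise methods above compose lazily, like the operator overloads; a doctest-style sketch (illustrative; it assumes elementwise ops preserve shape, consistent with the crate's other examples):

```rust
use tenferro::{CpuBackend, Engine, TracedTensor};

let mut engine = Engine::new(CpuBackend::new());
let x = TracedTensor::from_vec(vec![2], vec![1.0_f64, 4.0]);

// ln(1 + sqrt(x)), built as a graph and evaluated only on demand.
let mut y = x.sqrt().log1p();
assert_eq!(y.eval(&mut engine).unwrap().shape(), &[2]);
```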
pub fn dot_general(
    &self,
    other: &TracedTensor,
    config: DotGeneralConfig,
) -> TracedTensor
pub fn reduce_sum(&self, axes: &[usize]) -> TracedTensor

pub fn reshape(&self, shape: &[usize]) -> TracedTensor

pub fn reshape_sym(&self, shape: &[SymDim]) -> Result<TracedTensor>

pub fn broadcast_in_dim(&self, shape: &[usize], dims: &[usize]) -> TracedTensor

pub fn transpose(&self, perm: &[usize]) -> TracedTensor

pub fn extract_diag(&self, axis_a: usize, axis_b: usize) -> TracedTensor

pub fn embed_diag(&self, axis_a: usize, axis_b: usize) -> TracedTensor

pub fn sum(&self, axes: &[usize]) -> TracedTensor

pub fn broadcast(&self, shape: &[usize], dims: &[usize]) -> TracedTensor

pub fn shape_of(&self, axis: usize) -> TracedTensor
Return the runtime size of one axis as a scalar f64 tensor.
The result is metadata-derived and therefore has no gradient.
Examples
use tenferro::{CpuBackend, Engine, TracedTensor};
let mut engine = Engine::new(CpuBackend::new());
let x = TracedTensor::from_vec(vec![2, 3], vec![1.0_f64, 2.0, 3.0, 4.0, 5.0, 6.0]);
let mut cols = x.shape_of(1);
assert_eq!(cols.eval(&mut engine).unwrap().shape(), &[] as &[usize]);

pub fn dynamic_truncate(&self, size: &TracedTensor, axis: usize) -> TracedTensor
Truncate this tensor along axis to the first size elements.
size is read at runtime from a scalar traced tensor. Values are
rounded to the nearest integer, clamped to [0, self.shape[axis]],
and the output keeps the same element dtype as the input.
Examples
use tenferro::{CpuBackend, Engine, TracedTensor};
let mut engine = Engine::new(CpuBackend::new());
let x = TracedTensor::from_vec(vec![4], vec![1.0_f64, 2.0, 3.0, 4.0]);
let size = TracedTensor::from_vec(vec![], vec![2.0_f64]);
let mut y = x.dynamic_truncate(&size, 0);
assert_eq!(y.eval(&mut engine).unwrap().shape(), &[2]);

pub fn pad_to_match(&self, reference: &TracedTensor, axis: usize) -> TracedTensor
Pad this tensor with zeros along axis to match reference.shape[axis].
If reference is smaller along that axis, this is a no-op.
Examples
use tenferro::{CpuBackend, Engine, TracedTensor};
let mut engine = Engine::new(CpuBackend::new());
let x = TracedTensor::from_vec(vec![2], vec![1.0_f64, 2.0]);
let reference = TracedTensor::from_vec(vec![4], vec![0.0_f64, 0.0, 0.0, 0.0]);
let mut y = x.pad_to_match(&reference, 0);
assert_eq!(y.eval(&mut engine).unwrap().shape(), &[4]);

Trait Implementations

impl Add for &TracedTensor
    type Output = TracedTensor
    fn add(self, rhs: &TracedTensor) -> TracedTensor

impl Clone for TracedTensor
    fn clone(&self) -> TracedTensor
    fn clone_from(&mut self, source: &Self)

impl Div for &TracedTensor
    type Output = TracedTensor
    fn div(self, rhs: &TracedTensor) -> TracedTensor

impl Mul<&TracedTensor> for f64
    type Output = TracedTensor
    fn mul(self, rhs: &TracedTensor) -> TracedTensor

impl Mul<f64> for &TracedTensor
    type Output = TracedTensor

impl Mul for &TracedTensor
    type Output = TracedTensor
    fn mul(self, rhs: &TracedTensor) -> TracedTensor

impl Neg for &TracedTensor
    type Output = TracedTensor
    fn neg(self) -> TracedTensor

Auto Trait Implementations

impl Freeze for TracedTensor
impl RefUnwindSafe for TracedTensor
impl Send for TracedTensor
impl Sync for TracedTensor
impl Unpin for TracedTensor
impl UnsafeUnpin for TracedTensor
impl UnwindSafe for TracedTensor
Blanket Implementations

impl<Rhs, Lhs, Output> AddByRef<Rhs> for Lhs
    type Output = Output
    fn add_by_ref(&self, rhs: &Rhs) -> <Lhs as AddByRef<Rhs>>::Output

impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone

impl<T> DistributionExt for T where T: ?Sized
    fn rand<T>(&self, rng: &mut (impl Rng + ?Sized)) -> T where Self: Distribution<T>

impl<Rhs, Lhs, Output> DivByRef<Rhs> for Lhs
    type Output = Output
    fn div_by_ref(&self, rhs: &Rhs) -> <Lhs as DivByRef<Rhs>>::Output

impl<T> IntoEither for T
    fn into_either(self, into_left: bool) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left is true, and into a Right variant otherwise.
    fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
        Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, and into a Right variant otherwise.