Bridge between the Burn deep learning framework and tenferro tensor network operations.
This crate allows Burn tensors to be used with tenferro’s einsum and tensor network contraction routines, enabling seamless integration of tensor network methods into Burn-based deep learning pipelines.
§Examples
```rust
use burn::backend::NdArray;
use burn::tensor::Tensor;
use tenferro_burn::einsum;

// Matrix multiplication via einsum
let a: Tensor<NdArray<f64>, 2> = Tensor::ones([3, 4], &Default::default());
let b: Tensor<NdArray<f64>, 2> = Tensor::ones([4, 5], &Default::default());
let c: Tensor<NdArray<f64>, 2> = einsum("ij,jk->ik", vec![a, b]);
```

Modules§
- backward — Backward-mode (autodiff) implementation of `TensorNetworkOps` for the [Autodiff<B, C>] backend.
- convert — Conversion utilities between Burn tensor primitives and tenferro tensors.
- forward — Forward-mode (inference) implementation of `TensorNetworkOps` for the NdArray backend.
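To clarify what a contraction spec like `"ij,jk->ik"` asks a backend to compute, here is a minimal stdlib-only sketch of that particular contraction (plain matrix multiplication) on flat slices. This is purely illustrative; it uses neither Burn nor tenferro, and `einsum_ij_jk_ik` is a hypothetical helper, not part of this crate.

```rust
// Naive "ij,jk->ik" contraction on row-major flat slices:
// out[i][k] = sum over j of a[i][j] * b[j][k].
fn einsum_ij_jk_ik(a: &[f64], b: &[f64], i: usize, j: usize, k: usize) -> Vec<f64> {
    let mut out = vec![0.0; i * k];
    for ii in 0..i {
        for kk in 0..k {
            let mut acc = 0.0;
            for jj in 0..j {
                acc += a[ii * j + jj] * b[jj * k + kk];
            }
            out[ii * k + kk] = acc;
        }
    }
    out
}

fn main() {
    // Mirrors the crate example: ones([3, 4]) contracted with ones([4, 5])
    // yields a 3x5 result in which every entry is 4.0 (the shared dimension).
    let a = vec![1.0; 3 * 4];
    let b = vec![1.0; 4 * 5];
    let c = einsum_ij_jk_ik(&a, &b, 3, 4, 5);
    assert_eq!(c.len(), 15);
    assert!(c.iter().all(|&x| (x - 4.0).abs() < 1e-12));
}
```

A real backend implementation would of course use optimized contraction kernels rather than this triple loop; the sketch only fixes the semantics of the spec string.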
Traits§
- TensorNetworkOps — Trait for backends that support tenferro tensor network operations.
Functions§
- einsum — High-level einsum on Burn tensors, dispatching to the backend’s `TensorNetworkOps::tn_einsum` implementation.
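The dispatch pattern described above (a free `einsum` function forwarding to a backend trait method) can be sketched generically. Everything in this example is hypothetical scaffolding with toy names (`TensorNetworkOpsLike`, `SumBackend`) standing in for the crate's real trait and Burn backends; the point is only the shape of the trait-based dispatch, not the actual API.

```rust
// Hypothetical stand-in for the backend trait: each backend supplies its
// own contraction routine behind a common method.
trait TensorNetworkOpsLike {
    fn tn_einsum(spec: &str, operands: Vec<Vec<f64>>) -> Vec<f64>;
}

// Toy backend whose "contraction" is just an elementwise sum of operands,
// ignoring the spec string entirely.
struct SumBackend;

impl TensorNetworkOpsLike for SumBackend {
    fn tn_einsum(_spec: &str, operands: Vec<Vec<f64>>) -> Vec<f64> {
        let n = operands[0].len();
        let mut out = vec![0.0; n];
        for op in &operands {
            for (o, x) in out.iter_mut().zip(op) {
                *o += x;
            }
        }
        out
    }
}

// Free function mirroring the shape of a high-level einsum entry point:
// generic over the backend, forwarding to its trait method.
fn einsum<B: TensorNetworkOpsLike>(spec: &str, operands: Vec<Vec<f64>>) -> Vec<f64> {
    B::tn_einsum(spec, operands)
}

fn main() {
    let c = einsum::<SumBackend>("ij,ij->ij", vec![vec![1.0, 2.0], vec![3.0, 4.0]]);
    assert_eq!(c, vec![4.0, 6.0]);
}
```

Keeping the contraction behind a trait is what lets the same `einsum` call work for both the forward (NdArray) and backward (Autodiff) backends listed in the Modules section.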