Determinant AD Notes
Conventions
Unless noted otherwise, the Linearization and Transpose rules below are stated for the raw determinant maps on output space, before any DB observable projection. For complex tensors, Transpose means the adjoint under the real Frobenius inner product
\langle X, Y \rangle_{\mathbb{R}} = \operatorname{Re}\operatorname{tr}(X^\dagger Y).
Forward
This note covers two raw operators:
A \mapsto \det(A), \qquad A \mapsto (\operatorname{sign}, \operatorname{logabsdet}).
Linearization
For det,
\dot{d} = \det(A)\operatorname{tr}(A^{-1}\dot{A}).
For slogdet, if w = \operatorname{tr}(A^{-1}\dot{A}), then
\dot{\operatorname{logabsdet}} = \operatorname{Re}(w), \qquad \dot{\operatorname{sign}} = i\,\operatorname{Im}(w)\operatorname{sign}.
JVP
The JVP is the same linearization evaluated at the tangent matrix \dot{A}.
Transpose
For det, a raw output cotangent \bar{d} gives
\bar{A} = \bar{d}\,\overline{\det(A)}\,A^{-\mathsf{H}},
with the real case reducing to A^{-\mathsf{T}}.
For slogdet, a raw output cotangent on the pair (\bar{\operatorname{sign}}, \bar{\operatorname{logabsdet}}) yields a solve-style adjoint of the same form; the explicit rule is worked out in Section 2 below.
VJP (JAX convention)
JAX reads these transpose maps directly on the scalar or tuple output, with the same singularity caveats as the raw determinant formulas.
VJP (PyTorch convention)
PyTorch uses the same raw adjoint structure, together with the real-input projection and the singular-matrix fallback discussed below.
1. Determinant
Forward Definition
For
d = \det(A), \qquad A \in \mathbb{C}^{N \times N},
Jacobi’s formula gives
\dot{d} = \det(A) \cdot \operatorname{tr}(A^{-1}\dot{A}).
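As a quick sanity check, Jacobi's formula can be compared against a central finite difference. A minimal numpy sketch (the variable names are illustrative, not part of any library API):

```python
import numpy as np

# Check d_dot = det(A) * tr(A^{-1} A_dot) against a central difference.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A_dot = rng.standard_normal((4, 4))

# Jacobi's formula; solve(A, A_dot) computes A^{-1} A_dot without forming A^{-1}.
d_dot = np.linalg.det(A) * np.trace(np.linalg.solve(A, A_dot))

eps = 1e-6
fd = (np.linalg.det(A + eps * A_dot) - np.linalg.det(A - eps * A_dot)) / (2 * eps)
print(abs(d_dot - fd))  # small
```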
Reverse Rule
Given a cotangent \bar{d}:
- real case:
\bar{A} = \bar{d} \cdot \det(A) \cdot A^{-\mathsf{T}}
- complex case:
\bar{A} = \bar{d} \cdot \overline{\det(A)} \cdot A^{-\mathsf{H}}.
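The complex reverse rule can be verified directly against the defining adjoint identity \operatorname{Re}(\bar{d}^{*}\dot{d}) = \operatorname{Re}\operatorname{tr}(\bar{A}^{\dagger}\dot{A}) for random tangents and cotangents. A sketch (illustrative names only):

```python
import numpy as np

# Verify A_bar = d_bar * conj(det(A)) * A^{-H} via the real Frobenius pairing.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_dot = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
d_bar = complex(0.7, -0.3)

d = np.linalg.det(A)
Ainv = np.linalg.inv(A)
d_dot = d * np.trace(Ainv @ A_dot)          # forward (Jacobi)
A_bar = d_bar * np.conj(d) * Ainv.conj().T  # reverse rule, A^{-H} = (A^{-1})^dagger

lhs = (np.conj(d_bar) * d_dot).real          # Re(conj(d_bar) * d_dot)
rhs = np.trace(A_bar.conj().T @ A_dot).real  # Re tr(A_bar^H A_dot)
print(abs(lhs - rhs))  # ~0 up to roundoff
```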
Singular matrix handling
The inverse formula fails at singular matrices, but the adjugate interpretation still makes sense:
- rank N-1: the adjugate is rank 1 and can be reconstructed from an SVD
- rank \le N-2: the adjugate vanishes
The rank-N-1 adjugate can be reconstructed from the leave-one-out singular value products together with the orientation/phase factor carried by the singular vectors.
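The rank-N-1 reconstruction above can be sketched as follows: writing A = U\Sigma V^{\mathsf{H}} with \sigma_N = 0, one gets \operatorname{adj}(A) = \det(UV^{\mathsf{H}})\,(\prod_{i<N}\sigma_i)\,v_N u_N^{\mathsf{H}}, which the snippet below checks against a cofactor-based reference adjugate (helper names are hypothetical, assuming numpy):

```python
import numpy as np

def adjugate_cofactor(A):
    # Reference adjugate via cofactors: adj(A)[i, j] = (-1)^(i+j) * det(A
    # with row j and column i deleted).
    n = A.shape[0]
    adj = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

def adjugate_rank_nm1(A):
    # Rank N-1 case: leave-one-out singular value product times the
    # orientation/phase factor det(U V^H), carried on the outer product
    # of the last singular vectors.
    U, s, Vh = np.linalg.svd(A)
    coeff = np.linalg.det(U @ Vh) * np.prod(s[:-1])
    return coeff * np.outer(Vh[-1].conj(), U[:, -1].conj())

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = U @ np.diag([2.0, 1.5, 0.5, 0.0]) @ V.T  # rank 3 by construction

err = np.max(np.abs(adjugate_rank_nm1(A) - adjugate_cofactor(A)))
print(err)  # ~0 up to roundoff
```

Note the result is invariant to the SVD's sign/phase ambiguity in u_N and v_N: flipping either singular vector also flips \det(UV^{\mathsf{H}}), so the product is unchanged.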
2. slogdet
Forward Definition
(\operatorname{sign}, \operatorname{logabsdet}) = \operatorname{slogdet}(A).
If w = \operatorname{tr}(A^{-1}\dot{A}), then in the complex case
\dot{\operatorname{logabsdet}} = \operatorname{Re}(w), \qquad \dot{\operatorname{sign}} = i \operatorname{Im}(w)\operatorname{sign}.
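Both tangent formulas can be checked against central differences of numpy's slogdet (a minimal sketch; names are illustrative):

```python
import numpy as np

# Check l_dot = Re(w) and s_dot = i*Im(w)*sign against central differences.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A_dot = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

s, l = np.linalg.slogdet(A)
w = np.trace(np.linalg.solve(A, A_dot))  # w = tr(A^{-1} A_dot)
l_dot = w.real
s_dot = 1j * w.imag * s

eps = 1e-6
sp, lp = np.linalg.slogdet(A + eps * A_dot)
sm, lm = np.linalg.slogdet(A - eps * A_dot)
err_l = abs(l_dot - (lp - lm) / (2 * eps))
err_s = abs(s_dot - (sp - sm) / (2 * eps))
print(err_l, err_s)  # both small
```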
Reverse Rule
Given cotangents \bar{s} for the sign output and \bar{\ell} for the log-magnitude output:
- real case:
\bar{A} = \bar{\ell} \cdot A^{-\mathsf{T}}
- complex case:
\bar{A} = g \cdot A^{-\mathsf{H}}, \qquad g = \bar{\ell} - i \operatorname{Im}(\bar{s}^* s),
where s = \operatorname{sign}(A).
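The factor g can be verified against the real pairing on the output pair, \operatorname{Re}(\bar{s}^{*}\dot{s}) + \bar{\ell}\dot{\ell} = \operatorname{Re}\operatorname{tr}(\bar{A}^{\dagger}\dot{A}). A sketch with illustrative names:

```python
import numpy as np

# Verify A_bar = g * A^{-H}, g = l_bar - i*Im(conj(s_bar)*s), via the pairing.
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_dot = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
s_bar, l_bar = complex(0.2, 0.9), 1.3

s, _ = np.linalg.slogdet(A)
w = np.trace(np.linalg.solve(A, A_dot))
s_dot, l_dot = 1j * w.imag * s, w.real          # forward rule
g = l_bar - 1j * (np.conj(s_bar) * s).imag
A_bar = g * np.linalg.inv(A).conj().T           # reverse rule

lhs = (np.conj(s_bar) * s_dot).real + l_bar * l_dot
rhs = np.trace(A_bar.conj().T @ A_dot).real
print(abs(lhs - rhs))  # ~0 up to roundoff
```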
slogdet is not differentiable at singular matrices because \operatorname{logabsdet} = -\infty there.
Verification
- compare primal det(A) and slogdet(A) with direct evaluation
- compare JVP/VJP against finite differences away from singularity
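The finite-difference comparison can be done entrywise in the real case, where the slogdet adjoint is simply \bar{\ell}\,A^{-\mathsf{T}}. A minimal sketch (a well-conditioned A is used so the differences stay accurate; names are illustrative):

```python
import numpy as np

# Real case: A_bar = l_bar * A^{-T}; compare entrywise against central
# differences of l_bar * logabsdet(A).
rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # well away from singularity
l_bar = 0.8
A_bar = l_bar * np.linalg.inv(A).T

eps = 1e-6
fd = np.zeros_like(A)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(A)
        E[i, j] = eps
        fd[i, j] = l_bar * (np.linalg.slogdet(A + E)[1]
                            - np.linalg.slogdet(A - E)[1]) / (2 * eps)

err = np.max(np.abs(A_bar - fd))
print(err)  # small
```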
References
- C. G. J. Jacobi, “De formatione et proprietatibus determinantium,” 1841.
- M. B. Giles, “An extended collection of matrix derivative results for forward and reverse mode AD,” 2008.
DB Families
The DB publishes the determinant value directly.
The DB publishes the differentiable slogdet observable directly.