In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation, computational differentiation, and differentiation arithmetic,[Hend Dawood and Nefertiti Megahed (2023). Automatic Differentiation of Uncertainties: An Interval Computational Differentiation for First and Higher Derivatives with Implementation. PeerJ Computer Science 9:e1301. doi: 10.7717/peerj-cs.1301.][Hend Dawood and Nefertiti Megahed (2019). A Consistent and Categorical Axiomatization of Differentiation Arithmetic Applicable to First and Higher Order Derivatives. Punjab University Journal of Mathematics 51(11), pp. 77-100. doi: 10.5281/zenodo.3479546.] is a set of techniques to evaluate the partial derivative of a function specified by a computer program. Automatic differentiation is a subtle and central tool for automating the simultaneous computation of the numerical values of arbitrarily complex functions and their derivatives, with no need for a symbolic representation of the derivative; only the function rule, or an algorithm thereof, is required.
Auto-differentiation is thus neither numeric nor symbolic, nor is it a combination of both. It is also preferable to ordinary numerical methods: in contrast to the more traditional numerical methods based on finite differences, auto-differentiation is, in theory, exact, and in comparison to symbolic algorithms, it is computationally inexpensive.
[ Hend Dawood and Yasser Dawood (2022). Interval Root Finding and Interval Polynomials: Methods and Applications in Science and Engineering. In S. Chakraverty, editor, Polynomial Paradigms: Trends and Applications in Science and Engineering, chapter 15. IOP Publishing. ISBN 978-0-7503-5065-5. doi: 10.1088/978-0-7503-5067-9ch15. URL https://doi.org/10.1088/978-0-7503-5067-9ch15.]
Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor of more arithmetic operations than the original program.
Difference from other differentiation methods
Automatic differentiation is distinct from symbolic differentiation and numerical differentiation. Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to ''many'' inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.
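As a minimal C++ sketch of this contrast (an illustration only, assuming a hypothetical two-field dual-number type in the spirit of the one developed later in this article):

#include <cmath>
#include <cstdio>

// Hypothetical two-field dual number: value and derivative carried together.
struct Dual { double val, der; };

// Lifted primitives: compute the value and apply the chain rule to the derivative.
Dual dualMul(Dual a, Dual b) { return { a.val * b.val, a.der * b.val + a.val * b.der }; }
Dual dualSin(Dual a) { return { std::sin(a.val), std::cos(a.val) * a.der }; }

double f(double x) { return std::sin(x * x); }
Dual   f(Dual x)   { return dualSin(dualMul(x, x)); }

int main() {
    double x = 1.5;
    Dual ad = f(Dual{ x, 1.0 });                   // forward-mode AD, seed dx/dx = 1
    double h = 1e-6;
    double fd = (f(x + h) - f(x - h)) / (2 * h);   // central finite difference
    double exact = 2 * x * std::cos(x * x);        // analytic reference
    std::printf("AD: %.17g\nFD: %.17g\nexact: %.17g\n", ad.der, fd, exact);
    return 0;
}

The dual-number derivative agrees with the analytic value to working precision, while the finite-difference estimate is limited by the competing truncation and round-off errors in the choice of step size h.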
Applications
Currently, for its efficiency and accuracy in computing first and higher order derivatives, auto-differentiation is a celebrated technique with diverse applications in scientific computing and mathematics. It should therefore come as no surprise that there are numerous computational implementations of auto-differentiation. Among these, one mentions
INTLAB, Sollya, and InCLosure.
[Siegfried M. Rump (1999). INTLAB–INTerval LABoratory. In T. Csendes, editor, Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht.][S. Chevillard, M. Joldes, and C. Lauter (2010). Sollya: An Environment for the Development of Numerical Codes. In K. Fukuda, J. van der Hoeven, M. Joswig, and N. Takayama, editors, Mathematical Software - ICMS 2010, volume 6327 of Lecture Notes in Computer Science, pages 28–31, Heidelberg, Germany. Springer.][Hend Dawood (2022). InCLosure (Interval enCLosure): A Language and Environment for Reliable Scientific Computing. Computer Software, Version 4.0. Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt, September 2022. doi: 10.5281/zenodo.2702404.] In practice, there are two types (modes) of algorithmic differentiation: a forward type and a reverse type.
Presently, the two types are highly correlated and complementary, and both have a wide variety of applications in, e.g., non-linear optimization, sensitivity analysis, robotics, machine learning, computer graphics, and computer vision.
[Christian P. Fries (2019). Stochastic Automatic Differentiation: Automatic Differentiation for Monte-Carlo Simulations. Quantitative Finance 19(6):1043–1059. doi: 10.1080/14697688.2018.1556398.][Hend Dawood and Yasser Dawood (2020). Universal Intervals: Towards a Dependency-Aware Interval Algebra. In S. Chakraverty, editor, Mathematical Methods in Interdisciplinary Sciences, chapter 10, pages 167–214. John Wiley & Sons, Hoboken, New Jersey. ISBN 978-1-119-58550-3. doi: 10.1002/9781119585640.ch10.][Hend Dawood (2014). Interval Mathematics as a Potential Weapon against Uncertainty. In S. Chakraverty, editor, Mathematics of Uncertainty Modeling in the Analysis of Engineering and Science Problems, chapter 1, pages 1–38. IGI Global, Hershey, PA. ISBN 978-1-4666-4991-0.] Automatic differentiation is particularly important in the field of machine learning. For example, it allows one to implement backpropagation in a neural network without a manually computed derivative.
Forward and reverse accumulation
Chain rule of partial derivatives of composite functions
Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions. For the simple composition

y = f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3
w_0 = x
w_1 = h(w_0)
w_2 = g(w_1)
w_3 = f(w_2) = y

the chain rule gives

\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1} \frac{\partial w_1}{\partial x} = \frac{\partial f(w_2)}{\partial w_2} \frac{\partial g(w_1)}{\partial w_1} \frac{\partial h(w_0)}{\partial x}
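For a concrete instance (chosen here for illustration), take y = \sin(x^2), so that w_1 = h(x) = x^2 and y = f(w_1) = \sin(w_1); the same decomposition gives

\frac{\partial y}{\partial x} = \frac{\partial \sin(w_1)}{\partial w_1} \cdot \frac{\partial w_1}{\partial x} = \cos(x^2) \cdot 2x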
Two types of automatic differentiation
Usually, two distinct modes of automatic differentiation are presented.
* forward accumulation (also called bottom-up, forward mode, or tangent mode)
* reverse accumulation (also called top-down, reverse mode, or adjoint mode)
Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute \frac{\partial w_1}{\partial x}, then \frac{\partial w_2}{\partial w_1}, and lastly \frac{\partial y}{\partial w_2}), while reverse accumulation traverses from outside to inside (first compute \frac{\partial y}{\partial w_2}, then \frac{\partial w_2}{\partial w_1}, and lastly \frac{\partial w_1}{\partial x}). More succinctly,
* Forward accumulation computes the recursive relation \frac{\partial w_i}{\partial x} = \frac{\partial w_i}{\partial w_{i-1}} \frac{\partial w_{i-1}}{\partial x} with w_3 = y, and
* Reverse accumulation computes the recursive relation \frac{\partial y}{\partial w_i} = \frac{\partial y}{\partial w_{i+1}} \frac{\partial w_{i+1}}{\partial w_i} with w_0 = x.
The value of the partial derivative, called the ''seed'', is propagated forward or backward and is initially \frac{\partial x}{\partial x} = 1 or \frac{\partial y}{\partial y} = 1. Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable x_1, x_2, \dots, x_n a separate pass is therefore necessary in which the derivative with respect to that independent variable is set to one (\frac{\partial x_1}{\partial x_1} = 1) and of all others to zero (\frac{\partial x_2}{\partial x_1} = 0, \dots, \frac{\partial x_n}{\partial x_1} = 0). In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass.
Which of these two types should be used depends on the sweep count. The
computational complexity of one sweep is proportional to the complexity of the original code.
* Forward accumulation is more efficient than reverse accumulation for functions f : \mathbb{R}^n \to \mathbb{R}^m with n \ll m, as only n sweeps are necessary, compared to m sweeps for reverse accumulation.
* Reverse accumulation is more efficient than forward accumulation for functions f : \mathbb{R}^n \to \mathbb{R}^m with n \gg m, as only m sweeps are necessary, compared to n sweeps for forward accumulation.
Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse accumulation.
Forward accumulation was introduced by R.E. Wengert in 1964.
According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown.
Seppo Linnainmaa published reverse accumulation in 1976.
Forward accumulation

In forward accumulation AD, one first fixes the ''independent variable'' with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the ''inner'' functions in the chain rule:

\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_{n-1}} \frac{\partial w_{n-1}}{\partial x} = \frac{\partial y}{\partial w_{n-1}} \left( \frac{\partial w_{n-1}}{\partial w_{n-2}} \frac{\partial w_{n-2}}{\partial x} \right) = \cdots

This can be generalized to multiple variables as a matrix product of Jacobians.
Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable w is augmented with its derivative \dot w (stored as a numerical value, not a symbolic expression),

\dot w = \frac{\partial w}{\partial x}

as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule. Using the chain rule, if w_i has predecessors in the computational graph:

\dot w_i = \sum_{j \in \mathrm{pred}(i)} \frac{\partial w_i}{\partial w_j} \dot w_j
As an example, consider the function:

z = f(x_1, x_2) = x_1 x_2 + \sin x_1 = w_1 w_2 + \sin w_1 = w_3 + w_4 = w_5

For clarity, the individual sub-expressions have been labeled with the variables w_i. The choice of the independent variable to which differentiation is performed affects the ''seed'' values \dot w_1 and \dot w_2. Given interest in the derivative of this function with respect to x_1, the seed values should be set to:

\dot w_1 = \frac{\partial x_1}{\partial x_1} = 1, \qquad \dot w_2 = \frac{\partial x_2}{\partial x_1} = 0

With the seed values set, the values propagate using the chain rule as shown in the table below. Figure 2 shows a pictorial depiction of this process as a computational graph.

Operations to compute value | Operations to compute derivative
w_1 = x_1 | \dot w_1 = 1 (seed)
w_2 = x_2 | \dot w_2 = 0 (seed)
w_3 = w_1 \cdot w_2 | \dot w_3 = w_2 \cdot \dot w_1 + w_1 \cdot \dot w_2
w_4 = \sin w_1 | \dot w_4 = \cos w_1 \cdot \dot w_1
w_5 = w_3 + w_4 | \dot w_5 = \dot w_3 + \dot w_4
To compute the gradient of this example function, which requires not only \frac{\partial z}{\partial x_1} but also \frac{\partial z}{\partial x_2}, an ''additional'' sweep is performed over the computational graph using the seed values \dot w_1 = 0; \dot w_2 = 1.
Implementation
Pseudocode
Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression ''Z'' to be differentiated with respect to a variable ''V''. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, 0 otherwise. Then the partial function as well as the partial derivative are evaluated.
tuple evaluateAndDerive(Expression Z, Variable V)
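   // A sketch of the body (an assumption for illustration: expression nodes
   // for variables, sums, and products, as described in the paragraph above).
   if isVariable(Z)
      if (Z = V) return {valueOf(Z), 1};
      else return {valueOf(Z), 0};
   else if (Z = A + B)
      {a, da} = evaluateAndDerive(A, V);
      {b, db} = evaluateAndDerive(B, V);
      return {a + b, da + db};          // sum rule
   else if (Z = A * B)
      {a, da} = evaluateAndDerive(A, V);
      {b, db} = evaluateAndDerive(B, V);
      return {a * b, b * da + a * db};  // product rule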
C++
#include <iostream>
struct ValueAndPartial { float value, partial; };
struct Variable;
struct Expression {
   virtual ValueAndPartial evaluateAndDerive(Variable *variable) = 0;
};
struct Variable: public Expression {
   float value;
   Variable(float value): value(value) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      // The derivative of a variable is 1 with respect to itself, 0 otherwise.
      float partial = (this == variable) ? 1.0f : 0.0f;
      return {value, partial};
   }
};
struct Plus: public Expression {
   Expression *a, *b;
   Plus(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      // Sum rule: (a + b)' = a' + b'
      ValueAndPartial A = a->evaluateAndDerive(variable);
      ValueAndPartial B = b->evaluateAndDerive(variable);
      return {A.value + B.value, A.partial + B.partial};
   }
};
struct Multiply: public Expression {
   Expression *a, *b;
   Multiply(Expression *a, Expression *b): a(a), b(b) {}
   ValueAndPartial evaluateAndDerive(Variable *variable) {
      // Product rule: (a * b)' = a' b + a b'
      ValueAndPartial A = a->evaluateAndDerive(variable);
      ValueAndPartial B = b->evaluateAndDerive(variable);
      return {A.value * B.value, B.value * A.partial + A.value * B.partial};
   }
};
int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
   float xPartial = z.evaluateAndDerive(&x).partial;
   float yPartial = z.evaluateAndDerive(&y).partial;
   std::cout << "dz/dx = " << xPartial << ", dz/dy = " << yPartial << std::endl;
   // Output: dz/dx = 7, dz/dy = 8
   return 0;
}
Reverse accumulation

In reverse accumulation AD, the ''dependent variable'' to be differentiated is fixed and the derivative is computed ''with respect to'' each sub-expression recursively. In a pen-and-paper calculation, the derivative of the ''outer'' functions is repeatedly substituted in the chain rule:

\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x} = \left( \frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1} \right) \frac{\partial w_1}{\partial x} = \cdots

In reverse accumulation, the quantity of interest is the ''adjoint'', denoted with a bar \bar w_i; it is a derivative of a chosen dependent variable y with respect to a subexpression w_i:

\bar w_i = \frac{\partial y}{\partial w_i}

Using the chain rule, if w_i has successors in the computational graph:

\bar w_i = \sum_{j \in \mathrm{succ}(i)} \bar w_j \frac{\partial w_j}{\partial w_i}
Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only
half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list (however, Wengert published forward accumulation, not reverse accumulation
), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as
rematerialization.
Checkpointing is also used to save intermediary states.
The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order):

Operations to compute derivative
\bar w_5 = 1 (seed)
\bar w_4 = \bar w_5
\bar w_3 = \bar w_5
\bar w_2 = \bar w_3 \cdot w_1
\bar w_1 = \bar w_3 \cdot w_2 + \bar w_4 \cdot \cos w_1
The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function y = f(x) in the primal causes \bar x = \bar y f'(x) in the adjoint; etc.
Implementation
Pseudocode
Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression ''Z'' to be derived and ''seeded'' with the derived value of the parent expression. For the top expression, Z differentiated with respect to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current ''seed'' value to the derivative expression.
void derive(Expression Z, float seed)
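   // A sketch of the body (an assumption for illustration: expression nodes
   // for variables, sums, and products, mirroring the forward-mode pseudocode).
   if isVariable(Z)
      partialDerivativeOf(Z) += seed;
   else if (Z = A + B)
      derive(A, seed);              // addition passes the adjoint through unchanged
      derive(B, seed);
   else if (Z = A * B)
      derive(A, valueOf(B) * seed); // product rule: scale by the other factor
      derive(B, valueOf(A) * seed);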
C++
#include <iostream>
struct Expression {
   float value;
   virtual void evaluate() = 0;
   virtual void derive(float seed) = 0;
};
struct Variable: public Expression {
   float partial;
   Variable(float value) { this->value = value; partial = 0; }
   void evaluate() {}
   void derive(float seed) {
      // Accumulate the adjoint contributions arriving from all successors.
      partial += seed;
   }
};
struct Plus: public Expression {
   Expression *a, *b;
   Plus(Expression *a, Expression *b): a(a), b(b) {}
   void evaluate() { a->evaluate(); b->evaluate(); value = a->value + b->value; }
   void derive(float seed) {
      // Addition in the primal fans the adjoint out unchanged.
      a->derive(seed); b->derive(seed);
   }
};
struct Multiply: public Expression {
   Expression *a, *b;
   Multiply(Expression *a, Expression *b): a(a), b(b) {}
   void evaluate() { a->evaluate(); b->evaluate(); value = a->value * b->value; }
   void derive(float seed) {
      // Product rule: each factor receives the adjoint scaled by the other factor.
      a->derive(b->value * seed); b->derive(a->value * seed);
   }
};
int main () {
   // Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
   Variable x(2), y(3);
   Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
   z.evaluate();   // forward pass caches the intermediate values
   z.derive(1);    // reverse pass with seed dz/dz = 1
   std::cout << "dz/dx = " << x.partial << ", dz/dy = " << y.partial << std::endl;
   // Output: dz/dx = 7, dz/dy = 8
   return 0;
}
Beyond forward and reverse accumulation
Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of f : \mathbb{R}^n \to \mathbb{R}^m with a minimum number of arithmetic operations is known as the ''optimal Jacobian accumulation'' (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.
Automatic differentiation using dual numbers
Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers.
Replace every number x with the number x + x'\varepsilon, where x' is a real number, but \varepsilon is an abstract number with the property \varepsilon^2 = 0 (an infinitesimal; see ''Smooth infinitesimal analysis''). Using only this, regular arithmetic gives

(x + x'\varepsilon) + (y + y'\varepsilon) = x + y + (x' + y')\varepsilon
(x + x'\varepsilon) \cdot (y + y'\varepsilon) = xy + x y'\varepsilon + y x'\varepsilon + x' y'\varepsilon^2 = xy + (x y' + y x')\varepsilon

using \varepsilon^2 = 0.
Now, polynomials can be calculated in this augmented arithmetic. If P(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_n x^n, then

P(x + x'\varepsilon) = p_0 + p_1 (x + x'\varepsilon) + \cdots + p_n (x + x'\varepsilon)^n = P(x) + P^{(1)}(x)\, x'\varepsilon

where P^{(1)} denotes the derivative of P with respect to its first argument, and x', called a ''seed'', can be chosen arbitrarily.
The new arithmetic consists of ordered pairs, elements written \langle x, x' \rangle, with ordinary arithmetic on the first component, and first order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions gives a list of the basic arithmetic and some standard functions for the new arithmetic:

\langle u, u' \rangle + \langle v, v' \rangle = \langle u + v, u' + v' \rangle
\langle u, u' \rangle - \langle v, v' \rangle = \langle u - v, u' - v' \rangle
\langle u, u' \rangle \cdot \langle v, v' \rangle = \langle u v, u' v + u v' \rangle
\langle u, u' \rangle / \langle v, v' \rangle = \left\langle \frac{u}{v}, \frac{u' v - u v'}{v^2} \right\rangle \quad (v \neq 0)
\sin \langle u, u' \rangle = \langle \sin u, u' \cos u \rangle
\cos \langle u, u' \rangle = \langle \cos u, -u' \sin u \rangle
\exp \langle u, u' \rangle = \langle \exp u, u' \exp u \rangle
\log \langle u, u' \rangle = \langle \log u, u'/u \rangle \quad (u > 0)

and in general for the primitive function g,

g(\langle u, u' \rangle, \langle v, v' \rangle) = \langle g(u, v), g_u(u, v)\, u' + g_v(u, v)\, v' \rangle

where g_u and g_v are the derivatives of g with respect to its first and second arguments, respectively.
When a binary basic arithmetic operation is applied to mixed arguments (the pair \langle u, u' \rangle and the real number c), the real number is first lifted to \langle c, 0 \rangle. The derivative of a function f : \mathbb{R} \to \mathbb{R} at the point x_0 is now found by calculating f(\langle x_0, 1 \rangle) using the above arithmetic, which gives \langle f(x_0), f'(x_0) \rangle as the result.
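For example (a worked instance, with the function chosen arbitrarily), for f(x) = x^2 + 1 at x_0 = 3:

f(\langle 3, 1 \rangle) = \langle 3, 1 \rangle \cdot \langle 3, 1 \rangle + \langle 1, 0 \rangle = \langle 9, 6 \rangle + \langle 1, 0 \rangle = \langle 10, 6 \rangle

so f(3) = 10 and f'(3) = 6, in agreement with f'(x) = 2x.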
Implementation
An example implementation based on the dual number approach follows.
Pseudocode
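A sketch of the dual-number primitives (assuming accessors realPartOf and infinitesimalPartOf for the two components; the names are chosen here for illustration):

Dual plus(Dual A, Dual B)
   return { realPartOf(A) + realPartOf(B),
            infinitesimalPartOf(A) + infinitesimalPartOf(B) };
Dual multiply(Dual A, Dual B)
   return { realPartOf(A) * realPartOf(B),
            realPartOf(B) * infinitesimalPartOf(A) + realPartOf(A) * infinitesimalPartOf(B) };

// Partials of f at (x, y): evaluate f on dual arguments seeded with epsilon.
X = {x, 0}; Y = {y, 0}; Epsilon = {0, 1};
xPartial = infinitesimalPartOf(f(X + Epsilon, Y));
yPartial = infinitesimalPartOf(f(X, Y + Epsilon));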
C++
#include <iostream>
struct Dual {
   float realPart, infinitesimalPart;
   Dual(float realPart, float infinitesimalPart = 0)
      : realPart(realPart), infinitesimalPart(infinitesimalPart) {}
   Dual operator+(Dual other) {
      return Dual(realPart + other.realPart,
                  infinitesimalPart + other.infinitesimalPart);
   }
   Dual operator*(Dual other) {
      // Product rule on the infinitesimal component.
      return Dual(realPart * other.realPart,
                  other.realPart * infinitesimalPart + realPart * other.infinitesimalPart);
   }
};
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Dual f(Dual x, Dual y) { return x * (x + y) + y * y; }
int main () {
   Dual x = Dual(2);
   Dual y = Dual(3);
   Dual epsilon = Dual(0, 1);    // the infinitesimal seed
   Dual a = f(x + epsilon, y);   // differentiate with respect to x
   Dual b = f(x, y + epsilon);   // differentiate with respect to y
   std::cout << "dz/dx = " << a.infinitesimalPart
             << ", dz/dy = " << b.infinitesimalPart << std::endl;
   // Output: dz/dx = 7, dz/dy = 8
   return 0;
}
Vector arguments and functions
Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute y' = \nabla f(x) \cdot x', the directional derivative y' \in \mathbb{R}^m of f : \mathbb{R}^n \to \mathbb{R}^m at x \in \mathbb{R}^n in the direction x' \in \mathbb{R}^n may be calculated as

(\langle y_1, y'_1 \rangle, \ldots, \langle y_m, y'_m \rangle) = f(\langle x_1, x'_1 \rangle, \ldots, \langle x_n, x'_n \rangle)

using the same arithmetic as above. If all the elements of \nabla f are desired, then n function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient.
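For instance, reusing the Dual type and the function f from the C++ example above, replacing its main computes a directional derivative in one pass by seeding each argument's infinitesimal part with the corresponding direction component (the direction (1, 1) is an arbitrary choice for illustration):

int main () {
   // Directional derivative of z = x * (x + y) + y * y at (2, 3) in direction (1, 1).
   Dual zdir = f(Dual(2, 1), Dual(3, 1));
   std::cout << "directional derivative = " << zdir.infinitesimalPart << std::endl;
   // Output: 15 (= dz/dx + dz/dy = 7 + 8)
   return 0;
}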
High order and many variables
The above arithmetic can be generalized to calculate second order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated
Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted.
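As a minimal sketch of the idea (an order-2 truncated Taylor arithmetic for illustration, not the general construction; the type and function names are chosen here for the example):

#include <cmath>
#include <cstdio>

// Truncated Taylor arithmetic of order 2: carries f, f', f'' at a point.
struct Taylor2 { double v, d1, d2; };

Taylor2 mul(Taylor2 a, Taylor2 b) {
    // Leibniz rule: (ab)'' = a'' b + 2 a' b' + a b''
    return { a.v * b.v,
             a.d1 * b.v + a.v * b.d1,
             a.d2 * b.v + 2 * a.d1 * b.d1 + a.v * b.d2 };
}

Taylor2 sinT(Taylor2 a) {
    // Chain rule: (sin a)' = a' cos a, (sin a)'' = a'' cos a - (a')^2 sin a
    return { std::sin(a.v),
             a.d1 * std::cos(a.v),
             a.d2 * std::cos(a.v) - a.d1 * a.d1 * std::sin(a.v) };
}

int main() {
    double x = 1.5;
    Taylor2 X = { x, 1, 0 };        // seed: dx/dx = 1, second derivative 0
    Taylor2 y = sinT(mul(X, X));    // y = sin(x^2)
    // Reference: y' = 2x cos(x^2), y'' = 2 cos(x^2) - 4x^2 sin(x^2)
    std::printf("y = %g, y' = %g, y'' = %g\n", y.v, y.d1, y.d2);
    return 0;
}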
Implementation
Forward-mode AD is implemented by a
nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: ''source code transformation'' or ''operator overloading''.
Source code transformation (SCT)
The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.
Source code transformation can be implemented for all programming languages, and it is also easier for the compiler to do compile time optimizations. However, the implementation of the AD tool itself is more difficult and the build system is more complex.
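As a schematic illustration (hypothetical code, hand-written here in the style such a tool might generate, for the example function z = x_1 x_2 + \sin x_1 used earlier):

#include <cmath>

// Original source:
double f(double x1, double x2) {
    double w3 = x1 * x2;
    double w4 = std::sin(x1);
    return w3 + w4;
}

// Generated forward-mode source: every assignment is preceded by the
// corresponding derivative statement, seeded with dx1 and dx2.
double f_and_derivative(double x1, double x2, double dx1, double dx2, double *dz) {
    double dw3 = dx1 * x2 + x1 * dx2;   // product rule
    double w3  = x1 * x2;
    double dw4 = std::cos(x1) * dx1;    // chain rule through sin
    double w4  = std::sin(x1);
    *dz = dw3 + dw4;                    // sum rule
    return w3 + w4;
}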
Operator overloading (OO)
Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations. Due to the inherent operator overloading overhead on each loop, this approach usually demonstrates weaker speed performance.
Operator overloading and source code transformation
Overloaded operators can be used to extract the valuation graph, followed by automatic generation of the AD version of the primal function at run-time. Unlike the classic tape-based OO AAD, such an AD function does not change from one iteration to the next. Hence there is no OO or tape-interpretation run-time overhead per X_i sample.
With the AD function being generated at run-time, it can be optimised to take into account the current state of the program and to precompute certain values. In addition, it can be generated in a way that consistently utilizes native CPU vectorization to process user data in chunks of 4 (or 8) doubles (an AVX2/AVX-512 speed-up of 4x-8x). With multithreading taken into account, such an approach can lead to a final acceleration of the order of 8 × #Cores compared to traditional AAD tools. A reference implementation is available on GitHub.
See also
* Differentiable programming
External links
* www.autodiff.org: An "entry site to everything you want to know about automatic differentiation"
* Automatic Differentiation of Parallel OpenMP Programs
* Automatic Differentiation, C++ Templates and Photogrammetry
* Automatic Differentiation, Operator Overloading Approach
* Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface
* Automatic Differentiation of Fortran programs
* Description and example code for forward Automatic Differentiation in Scala
* finmath-lib stochastic automatic differentiation: Automatic differentiation for random variables (a Java implementation of stochastic automatic differentiation)
* Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem, an implementation
* Tangent
* Exact First- and Second-Order Greeks by Algorithmic Differentiation
* Adjoint Algorithmic Differentiation of a GPU Accelerated Application
* Adjoint Methods in Computational Finance
* Software Tool Support for Algorithmic Differentiation
* More than a Thousand Fold Speed Up for xVA Pricing Calculations with Intel Xeon Scalable Processors
* Sparse truncated Taylor series implementation with VBIC95 example for higher order derivatives