In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as (u \cdot v)' = u' \cdot v + u \cdot v' or in Leibniz's notation as \frac{d}{dx}(u\cdot v) = \frac{du}{dx} \cdot v + u \cdot \frac{dv}{dx}. The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.


Discovery

Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using "infinitesimals" (a precursor to the modern differential). (However, J. M. Child, a translator of Leibniz's papers, argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let ''u'' and ''v'' be functions. Then ''d''(''uv'') is the difference between two successive values of ''uv''; let one of these be ''uv'', and the other (''u'' + ''du'') times (''v'' + ''dv''); then: \begin{aligned} d(u\cdot v) & = (u + du)\cdot (v + dv) - u\cdot v \\ & = u\cdot dv + v\cdot du + du\cdot dv. \end{aligned} Since the term ''du''·''dv'' is "negligible" (compared to ''du'' and ''dv''), Leibniz concluded that d(u\cdot v) = v\cdot du + u\cdot dv, and this is indeed the differential form of the product rule. If we divide through by the differential ''dx'', we obtain \frac{d}{dx}(u\cdot v) = v \cdot \frac{du}{dx} + u \cdot \frac{dv}{dx}, which can also be written in Lagrange's notation as (u\cdot v)' = v\cdot u' + u\cdot v'.


Examples

*Suppose we want to differentiate f(x)=x^2 \sin(x). By using the product rule, one gets the derivative f'(x)=2x \sin(x)+x^2\cos(x) (since the derivative of x^2 is 2x, and the derivative of the sine function is the cosine function).
*One special case of the product rule is the constant multiple rule, which states: if ''c'' is a number, and f(x) is a differentiable function, then c\cdot f(x) is also differentiable, and its derivative is (cf)'(x)=c \cdot f'(x). This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear.
*The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable, but only says what its derivative is ''if'' it is differentiable.)
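The first example above can be checked numerically. This is an illustrative Python sketch (the function names are our own), comparing the product-rule derivative of x^2 \sin(x) against a symmetric difference quotient:

```python
import math

def f(x):
    return x**2 * math.sin(x)

def f_prime(x):
    # product rule: (x^2)' * sin(x) + x^2 * (sin(x))'
    return 2*x*math.sin(x) + x**2*math.cos(x)

def central_diff(fn, x, h=1e-6):
    # symmetric difference quotient, accurate to O(h^2)
    return (fn(x + h) - fn(x - h)) / (2*h)

for x in (0.5, 1.0, 2.3):
    assert abs(f_prime(x) - central_diff(f, x)) < 1e-6
```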


Proofs


Limit definition of derivative

Let h(x) = f(x)g(x), and suppose that f and g are each differentiable at x. We want to prove that h is differentiable at x and that its derivative, h'(x), is given by f'(x)g(x) + f(x)g'(x). To do this, f(x)g(x+\Delta x)-f(x)g(x+\Delta x) (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used. \begin{aligned} h'(x) &= \lim_{\Delta x\to 0} \frac{h(x+\Delta x)-h(x)}{\Delta x} \\ &= \lim_{\Delta x\to 0} \frac{f(x+\Delta x)g(x+\Delta x)-f(x)g(x)}{\Delta x} \\ &= \lim_{\Delta x\to 0} \frac{f(x+\Delta x)g(x+\Delta x)-f(x)g(x+\Delta x)+f(x)g(x+\Delta x)-f(x)g(x)}{\Delta x} \\ &= \lim_{\Delta x\to 0} \left[\frac{f(x+\Delta x)-f(x)}{\Delta x}\cdot g(x+\Delta x) + f(x)\cdot\frac{g(x+\Delta x)-g(x)}{\Delta x}\right] \\ &= \lim_{\Delta x\to 0} \frac{f(x+\Delta x)-f(x)}{\Delta x} \cdot \lim_{\Delta x\to 0} g(x+\Delta x) + \lim_{\Delta x\to 0} f(x) \cdot \lim_{\Delta x\to 0} \frac{g(x+\Delta x)-g(x)}{\Delta x} \\ &= f'(x)g(x)+f(x)g'(x). \end{aligned} The fact that \lim_{\Delta x\to 0} g(x+\Delta x) = g(x) follows from the fact that differentiable functions are continuous.
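The add-and-subtract step in this proof is an exact algebraic identity for every finite \Delta x, not only in the limit. A short Python sketch (with arbitrarily chosen functions) makes that concrete:

```python
def f(x): return x**3 - 2*x
def g(x): return 1.0 / (1.0 + x*x)

x, dx = 0.7, 0.1   # the identity holds for every finite dx
lhs = (f(x+dx)*g(x+dx) - f(x)*g(x)) / dx
rhs = (f(x+dx) - f(x))/dx * g(x+dx) + f(x) * (g(x+dx) - g(x))/dx
assert abs(lhs - rhs) < 1e-12
```

Only taking the limit dx → 0 is needed to turn the two difference quotients into f'(x) and g'(x).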


Linear approximations

By definition, if f, g: \mathbb{R} \to \mathbb{R} are differentiable at x, then we can write linear approximations: f(x+h) = f(x) + f'(x)h + \varepsilon_1(h) and g(x+h) = g(x) + g'(x)h + \varepsilon_2(h), where the error terms are small with respect to ''h'': that is, \lim_{h\to 0} \frac{\varepsilon_1(h)}{h} = \lim_{h\to 0} \frac{\varepsilon_2(h)}{h} = 0, also written \varepsilon_1, \varepsilon_2 \sim o(h). Then: \begin{aligned} f(x+h)g(x+h) - f(x)g(x) &= (f(x) + f'(x)h +\varepsilon_1(h))(g(x) + g'(x)h + \varepsilon_2(h)) - f(x)g(x) \\ &= f(x)g(x) + f'(x)g(x)h + f(x)g'(x)h - f(x)g(x) + \text{error terms} \\ &= f'(x)g(x)h + f(x)g'(x)h + o(h). \end{aligned} The "error terms" consist of items such as f(x)\varepsilon_2(h), f'(x)g'(x)h^2 and hf'(x)\varepsilon_1(h), which are easily seen to have magnitude o(h). Dividing by h and taking the limit h\to 0 gives the result.
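The o(h) behaviour of the remainder can be observed numerically. In this illustrative Python sketch, after subtracting the product-rule term (f'g + fg')h, the remainder divided by h shrinks as h does:

```python
import math

def remainder(h, x=1.0):
    # f = sin, g = exp; remainder of f(x+h)g(x+h) - f(x)g(x)
    # after removing the first-order (product-rule) term
    f, fp = math.sin, math.cos
    g, gp = math.exp, math.exp
    prod_rule = fp(x)*g(x) + f(x)*gp(x)
    return (f(x+h)*g(x+h) - f(x)*g(x)) - prod_rule*h

# the remainder is o(h): remainder/h -> 0 as h -> 0
ratios = [abs(remainder(h)/h) for h in (1e-2, 1e-3, 1e-4)]
assert ratios[0] > ratios[1] > ratios[2]
assert ratios[2] < 1e-3
```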


Quarter squares

This proof uses the chain rule and the quarter square function q(x)=\tfrac14 x^2 with derivative q'(x) = \tfrac12 x. We have: uv=q(u+v)-q(u-v), and differentiating both sides gives: \begin{aligned} (uv)' &= q'(u+v)(u'+v') - q'(u-v)(u'-v') \\ &= \tfrac12(u+v)(u'+v') - \tfrac12(u-v)(u'-v') \\ &= \tfrac12(uu' + vu' + uv' + vv') - \tfrac12(uu' - vu' - uv' + vv') \\ &= vu'+uv'. \end{aligned}
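Both the quarter-square identity and the differentiated form can be spot-checked in a few lines of Python (the choice of u and v below is just illustrative):

```python
import math

def q(x):
    return 0.25 * x * x   # quarter-square function, q'(x) = x/2

# the algebraic identity behind the proof: uv = q(u+v) - q(u-v)
for u, v in ((3.0, 5.0), (-1.5, 2.0), (0.0, 7.0)):
    assert abs(u*v - (q(u+v) - q(u-v))) < 1e-12

# chain-rule derivative of q(u+v) - q(u-v) at a point,
# with u = sin(x), v = exp(x)
x = 0.8
u, up = math.sin(x), math.cos(x)
v, vp = math.exp(x), math.exp(x)
lhs = 0.5*(u+v)*(up+vp) - 0.5*(u-v)*(up-vp)
assert abs(lhs - (up*v + u*vp)) < 1e-12
```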


Multivariable chain rule

The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function m(u,v) = uv: \frac{d(uv)}{dx} = \frac{\partial(uv)}{\partial u}\frac{du}{dx}+\frac{\partial(uv)}{\partial v}\frac{dv}{dx} = v \frac{du}{dx} + u \frac{dv}{dx}.
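This multivariable chain-rule computation can be checked numerically. A Python sketch (with arbitrarily chosen u and v) compares v\,u'(t) + u\,v'(t) against a difference quotient of the product:

```python
import math

# m(u, v) = u*v has partials dm/du = v and dm/dv = u, so the
# chain rule gives d/dt m(u(t), v(t)) = v*u'(t) + u*v'(t)
def u(t): return math.cos(t)
def v(t): return t**2 + 1.0

t, h = 1.2, 1e-6
chain = v(t)*(-math.sin(t)) + u(t)*(2*t)
numeric = (u(t+h)*v(t+h) - u(t-h)*v(t-h)) / (2*h)
assert abs(chain - numeric) < 1e-6
```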


Non-standard analysis

Let ''u'' and ''v'' be continuous functions in ''x'', and let ''dx'', ''du'' and ''dv'' be infinitesimals within the framework of non-standard analysis, specifically the hyperreal numbers. Using st to denote the standard part function that associates to a finite hyperreal number the real infinitely close to it, this gives \begin{aligned} \frac{d(uv)}{dx} &= \operatorname{st}\left(\frac{(u + du)(v + dv) - uv}{dx}\right) \\ &= \operatorname{st}\left(\frac{u\cdot dv + v\cdot du + du\cdot dv}{dx}\right) \\ &= \operatorname{st}\left(\frac{u\cdot dv + (v + dv)\cdot du}{dx}\right) \\ &= \operatorname{st}\left(u \frac{dv}{dx} + (v + dv) \frac{du}{dx}\right) \\ &= u \frac{dv}{dx} + v \frac{du}{dx}. \end{aligned} This was essentially Leibniz's proof exploiting the transcendental law of homogeneity (in place of the standard part above).


Smooth infinitesimal analysis

In the context of Lawvere's approach to infinitesimals, let dx be a nilsquare infinitesimal. Then du = u'\,dx and dv = v'\,dx, so that \begin{aligned} d(uv) & = (u + du)(v + dv) - uv \\ & = uv + u \cdot dv + v \cdot du + du \cdot dv - uv \\ & = u \cdot dv + v \cdot du + du \cdot dv \\ & = u \cdot dv + v \cdot du \end{aligned} since du \, dv = u' v' (dx)^2 = 0. Dividing by dx then gives \frac{d(uv)}{dx} = u \frac{dv}{dx} + v \frac{du}{dx} or (uv)' = u \cdot v' + v \cdot u'.


Logarithmic differentiation

Let h(x) = f(x) g(x). Taking the absolute value of each function and the natural log of both sides of the equation, \ln|h(x)| = \ln|f(x) g(x)|. Applying properties of the absolute value and logarithms, \ln|h(x)| = \ln|f(x)| + \ln|g(x)|. Taking the logarithmic derivative of both sides and then solving for h'(x): \frac{h'(x)}{h(x)} = \frac{f'(x)}{f(x)} + \frac{g'(x)}{g(x)}. Solving for h'(x) and substituting back f(x) g(x) for h(x) gives: \begin{aligned} h'(x) &= h(x)\left(\frac{f'(x)}{f(x)} + \frac{g'(x)}{g(x)}\right) \\ &= f(x) g(x)\left(\frac{f'(x)}{f(x)} + \frac{g'(x)}{g(x)}\right) \\ &= f'(x) g(x) + f(x) g'(x). \end{aligned} Note: Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because \tfrac{d}{dx}(\ln|u|) = \tfrac{u'}{u}, which justifies taking the absolute value of the functions for logarithmic differentiation.
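The central identity h'/h = f'/f + g'/g can be verified at a point where both factors are nonzero. An illustrative Python sketch with f(x) = x^3 and g(x) = e^{-x}:

```python
import math

# logarithmic-derivative identity h'/h = f'/f + g'/g for h = f*g
x = 2.0
f, fp = x**3, 3*x**2                     # f and f'
g, gp = math.exp(-x), -math.exp(-x)      # g and g'
h, hp = f*g, fp*g + f*gp                 # h and its product-rule derivative
assert abs(hp/h - (fp/f + gp/g)) < 1e-12
```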


Generalizations


Product of more than two factors

The product rule can be generalized to products of more than two factors. For example, for three factors we have \frac{d(uvw)}{dx} = \frac{du}{dx}vw + u\frac{dv}{dx}w + uv\frac{dw}{dx}. For a collection of functions f_1, \dots, f_k, we have \frac{d}{dx} \left[ \prod_{i=1}^k f_i(x) \right] = \sum_{i=1}^k \left(\left(\frac{d}{dx} f_i(x) \right) \prod_{j=1,\,j\neq i}^k f_j(x) \right) = \left( \prod_{i=1}^k f_i(x) \right) \left( \sum_{i=1}^k \frac{f'_i(x)}{f_i(x)} \right). The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion. The ''logarithmic derivative'' of a function ''f'', denoted here \operatorname{Logder}(f), is the derivative of the logarithm of the function. It follows that \operatorname{Logder}(f)=\frac{f'}{f}. Using that the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately \operatorname{Logder}(f_1\cdots f_k)= \sum_{i=1}^k\operatorname{Logder}(f_i). The last above expression of the derivative of a product is obtained by multiplying both members of this equation by the product of the f_i.
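The k-factor sum formula translates directly into code. This Python sketch (with three arbitrarily chosen factors) compares the generalized rule against a difference quotient of the full product:

```python
import math

# derivative of a k-fold product via the generalized rule:
# (f1...fk)' = sum_i f_i' * prod_{j != i} f_j
fs  = [math.sin, math.exp, lambda x: x**2 + 1.0]
dfs = [math.cos, math.exp, lambda x: 2.0*x]

def prod_rule_derivative(x):
    total = 0.0
    for i in range(len(fs)):
        term = dfs[i](x)
        for j in range(len(fs)):
            if j != i:
                term *= fs[j](x)
        total += term
    return total

def product(x):
    out = 1.0
    for f in fs:
        out *= f(x)
    return out

x, h = 0.9, 1e-6
numeric = (product(x+h) - product(x-h)) / (2*h)
assert abs(prod_rule_derivative(x) - numeric) < 1e-4
```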


Higher derivatives

It can also be generalized to the general Leibniz rule for the ''n''th derivative of a product of two factors, by symbolically expanding according to the binomial theorem: d^n(uv) = \sum_{k=0}^n \binom{n}{k} \cdot d^{n-k}(u)\cdot d^k(v). Applied at a specific point ''x'', the above formula gives: (uv)^{(n)}(x) = \sum_{k=0}^n \binom{n}{k} \cdot u^{(n-k)}(x)\cdot v^{(k)}(x). Furthermore, for the ''n''th derivative of an arbitrary number of factors, one has a similar formula with multinomial coefficients: \left(\prod_{i=1}^k f_i\right)^{(n)}=\sum_{j_1+j_2+\cdots+j_k=n}\binom{n}{j_1, j_2, \ldots, j_k}\prod_{i=1}^k f_i^{(j_i)}.
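For polynomials the general Leibniz rule can be verified with exact integer arithmetic. This Python sketch represents polynomials as coefficient lists and checks the binomial-sum formula against differentiating the product directly:

```python
from math import comb

# polynomials as coefficient lists [a0, a1, a2, ...], exact arithmetic
def pderiv(p, n=1):
    for _ in range(n):
        p = [i*c for i, c in enumerate(p)][1:] or [0]
    return p

def pmul(p, q):
    out = [0]*(len(p)+len(q)-1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i+j] += a*b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [0]*(n-len(p)); q = q + [0]*(n-len(q))
    return [a+b for a, b in zip(p, q)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

u = [1, 2, 0, 3]      # 1 + 2x + 3x^3
v = [0, 1, 1]         # x + x^2
n = 4

# general Leibniz rule: d^n(uv) = sum_k C(n,k) * u^(k) * v^(n-k)
leibniz = [0]
for k in range(n+1):
    term = [comb(n, k)*c for c in pmul(pderiv(u, k), pderiv(v, n-k))]
    leibniz = padd(leibniz, term)

direct = pderiv(pmul(u, v), n)
assert trim(leibniz) == trim(direct)
```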


Higher partial derivatives

For partial derivatives, we have \frac{\partial^n}{\partial x_1\cdots\partial x_n}(uv) = \sum_S \frac{\partial^{|S|} u}{\prod_{i\in S}\partial x_i} \cdot \frac{\partial^{n-|S|} v}{\prod_{i\notin S}\partial x_i} where the index ''S'' runs through all 2^n subsets of \{1, \ldots, n\}, and |S| is the cardinality of ''S''. For example, when n = 3, \begin{aligned} &\frac{\partial^3}{\partial x_1\,\partial x_2\,\partial x_3}(uv) \\ ={}& u \cdot \frac{\partial^3 v}{\partial x_1\,\partial x_2\,\partial x_3} + \frac{\partial u}{\partial x_1}\cdot\frac{\partial^2 v}{\partial x_2\,\partial x_3} + \frac{\partial u}{\partial x_2}\cdot\frac{\partial^2 v}{\partial x_1\,\partial x_3} + \frac{\partial u}{\partial x_3}\cdot\frac{\partial^2 v}{\partial x_1\,\partial x_2} \\ &+ \frac{\partial^2 u}{\partial x_1\,\partial x_2}\cdot\frac{\partial v}{\partial x_3} + \frac{\partial^2 u}{\partial x_1\,\partial x_3}\cdot\frac{\partial v}{\partial x_2} + \frac{\partial^2 u}{\partial x_2\,\partial x_3}\cdot\frac{\partial v}{\partial x_1} + \frac{\partial^3 u}{\partial x_1\,\partial x_2\,\partial x_3}\cdot v. \end{aligned}
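The subset formula can be checked by hand for n = 2. In this Python sketch, u = x_1 x_2 and v = x_1 + x_2 are chosen so that all partial derivatives are known in closed form:

```python
# subset formula for n = 2 with u = x1*x2 and v = x1 + x2;
# all partial derivatives are entered by hand
def u(x1, x2):   return x1*x2
def u1(x1, x2):  return x2        # du/dx1
def u2(x1, x2):  return x1        # du/dx2
def u12(x1, x2): return 1.0       # d^2 u / dx1 dx2
def v(x1, x2):   return x1 + x2
def v1(x1, x2):  return 1.0
def v2(x1, x2):  return 1.0
def v12(x1, x2): return 0.0

x1, x2 = 1.3, -0.4
# sum over subsets S of {1, 2}: (d_S u)(d_{S^c} v)
subset_sum = (u(x1, x2)*v12(x1, x2) + u1(x1, x2)*v2(x1, x2)
              + u2(x1, x2)*v1(x1, x2) + u12(x1, x2)*v(x1, x2))
# direct: d^2/dx1 dx2 of uv = x1^2 x2 + x1 x2^2 is 2 x1 + 2 x2
assert abs(subset_sum - (2*x1 + 2*x2)) < 1e-12
```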


Banach space

Suppose ''X'', ''Y'', and ''Z'' are Banach spaces (which includes Euclidean space) and ''B'' : ''X'' × ''Y'' → ''Z'' is a continuous bilinear operator. Then ''B'' is differentiable, and its derivative at the point (''x'',''y'') in ''X'' × ''Y'' is the linear map ''D''(''x'',''y'')''B'' : ''X'' × ''Y'' → ''Z'' given by (D_{(x,y)}\,B)\left( u,v \right) = B\left( u,y \right) + B\left( x,v \right)\qquad\forall (u,v)\in X \times Y. This result can be extended to more general topological vector spaces.


In vector calculus

The product rule extends to various product operations of vector functions on \mathbb{R}^n:
* For scalar multiplication: (f \cdot \mathbf g)' = f'\cdot \mathbf g + f \cdot \mathbf g'
* For the dot product: (\mathbf f \cdot \mathbf g)' = \mathbf f' \cdot \mathbf g + \mathbf f \cdot \mathbf g'
* For the cross product of vector functions on \mathbb{R}^3: (\mathbf f \times \mathbf g)' = \mathbf f' \times \mathbf g + \mathbf f \times \mathbf g'
There are also analogues for other analogs of the derivative: if ''f'' and ''g'' are scalar fields then there is a product rule with the gradient: \nabla (f \cdot g) = \nabla f \cdot g + f \cdot \nabla g. Such a rule will hold for any continuous bilinear product operation. Let ''B'' : ''X'' × ''Y'' → ''Z'' be a continuous bilinear map between vector spaces, and let ''f'' and ''g'' be differentiable functions into ''X'' and ''Y'', respectively. The only properties of multiplication used in the proof via the limit definition of the derivative are that multiplication is continuous and bilinear. So for any continuous bilinear operation, H(f, g)' = H(f', g) + H(f, g'). This is also a special case of the product rule for bilinear maps in Banach space.
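The dot-product case can be spot-checked numerically. An illustrative Python sketch (the curves f and g are arbitrary choices) compares the rule against a difference quotient:

```python
import math

# product rule for the dot product, checked with a central difference:
# (f . g)'(t) = f'(t) . g(t) + f(t) . g'(t)
def f(t):  return [math.cos(t), math.sin(t), t]
def fp(t): return [-math.sin(t), math.cos(t), 1.0]
def g(t):  return [t*t, 1.0, math.exp(t)]
def gp(t): return [2*t, 0.0, math.exp(t)]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

t, h = 0.6, 1e-6
rule = dot(fp(t), g(t)) + dot(f(t), gp(t))
numeric = (dot(f(t+h), g(t+h)) - dot(f(t-h), g(t-h))) / (2*h)
assert abs(rule - numeric) < 1e-5
```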


Derivations in abstract algebra and differential geometry

In abstract algebra, the product rule is the defining property of a derivation. In this terminology, the product rule states that the derivative operator is a derivation on functions. In differential geometry, a tangent vector to a manifold ''M'' at a point ''p'' may be defined abstractly as an operator on real-valued functions which behaves like a directional derivative at ''p'': that is, a linear functional ''v'' which is a derivation, v(fg) = v(f)\,g(p) + f(p)\,v(g). Generalizing (and dualizing) the formulas of vector calculus to an ''n''-dimensional manifold ''M'', one may take differential forms of degrees ''k'' and ''ℓ'', denoted \alpha\in \Omega^k(M), \beta\in \Omega^\ell(M), with the wedge or exterior product operation \alpha\wedge\beta\in \Omega^{k+\ell}(M), as well as the exterior derivative d:\Omega^m(M)\to\Omega^{m+1}(M). Then one has the graded Leibniz rule: d(\alpha\wedge\beta)= d\alpha \wedge \beta + (-1)^{k} \alpha\wedge d\beta.


Applications

Among the applications of the product rule is a proof that \frac{d}{dx} x^n = nx^{n-1} when ''n'' is a positive integer (this rule is true even if ''n'' is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent ''n''. If ''n'' = 0 then ''x''''n'' is constant and ''nx''''n'' − 1 = 0. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent ''n'', then for the next value, ''n'' + 1, we have \begin{aligned} \frac{d}{dx}x^{n+1} &= \frac{d}{dx} \left( x^n\cdot x\right) \\ &= x \frac{d}{dx} x^n + x^n \frac{d}{dx} x & \text{(the product rule is used here)} \\ &= x\left(n x^{n-1}\right) + x^n\cdot 1 & \text{(the induction hypothesis is used here)} \\ &= \left(n + 1\right) x^n. \end{aligned} Therefore, if the proposition is true for ''n'', it is true also for ''n'' + 1, and therefore for all natural ''n''.

