Separable Differential Equation

In mathematics, separation of variables (also known as the Fourier method) is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.


Ordinary differential equations (ODE)

A differential equation for the unknown f(x) is separable if it can be written in the form
:\frac{d}{dx} f(x) = g(x)h(f(x))
where g and h are given functions. This is perhaps more transparent when written using y = f(x) as:
:\frac{dy}{dx}=g(x)h(y).
So now as long as ''h''(''y'') ≠ 0, we can rearrange terms to obtain:
:\frac{dy}{h(y)} = g(x) \, dx,
where the two variables ''x'' and ''y'' have been separated. Note ''dx'' (and ''dy'') can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations. A formal definition of ''dx'' as a differential (infinitesimal) is somewhat advanced.


Alternative notation

Those who dislike Leibniz's notation may prefer to write this as
:\frac{1}{h(y)} \frac{dy}{dx} = g(x),
but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides of the equation with respect to x, we have
:\int \frac{1}{h(y)} \frac{dy}{dx} \, dx = \int g(x) \, dx,
or equivalently,
:\int \frac{1}{h(y)} \, dy = \int g(x) \, dx
because of the substitution rule for integrals. If one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative \frac{dy}{dx} as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below. (Note that we do not need to use two constants of integration, one on each side, as in
:\int \frac{1}{h(y)} \, dy + C_1 = \int g(x) \, dx + C_2,
because a single constant C = C_2 - C_1 is equivalent.)
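As an illustration (not part of the original article), here is a minimal SymPy sketch of this procedure applied to the sample separable equation dy/dx = x·y²; the equation and all symbol names are chosen purely for demonstration:

```python
import sympy as sp

x, Y, C = sp.symbols('x Y C')
y = sp.Function('y')

# Sample separable equation: dy/dx = g(x)*h(y) with g(x) = x and h(y) = y**2
lhs = sp.integrate(1 / Y**2, Y)      # integral of dy/h(y), using Y as a stand-in for y
rhs = sp.integrate(x, x) + C         # integral of g(x) dx, plus a single constant C

implicit = sp.Eq(lhs, rhs)           # -1/y = x**2/2 + C
explicit = sp.solve(implicit.subs(Y, y(x)), y(x))[0]
print(explicit)                      # -1/(C + x**2/2), an explicit solution family
```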


Example

Population growth is often modeled by the "logistic" differential equation
: \frac{dP}{dt}=kP\left(1-\frac{P}{K}\right)
where P is the population with respect to time t, k is the rate of growth, and K is the carrying capacity of the environment. Separation of variables now leads to
: \int\frac{dP}{P\left(1-\frac{P}{K}\right)}=\int k\,dt
which is readily integrated using partial fractions on the left side yielding
: P(t)=\frac{K}{1+Ae^{-kt}}
where A is the constant of integration. We can find A in terms of P\left(0\right)=P_0 at t=0. Noting e^0=1 we get
: A=\frac{K-P_0}{P_0}.
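A quick consistency check (an illustrative addition, not from the original text): a few lines of SymPy confirm that this P(t) satisfies both the logistic equation and the initial condition P(0) = P_0.

```python
import sympy as sp

t, k, K, P0 = sp.symbols('t k K P_0', positive=True)
A = (K - P0) / P0
P = K / (1 + A * sp.exp(-k * t))          # candidate solution from the separation above

residual = sp.diff(P, t) - k * P * (1 - P / K)
print(sp.simplify(residual))              # 0, so the logistic ODE is satisfied
print(sp.simplify(P.subs(t, 0)))          # P_0, so the initial condition holds
```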


Generalization of separable ODEs to the nth order

Much like one can speak of a separable first-order ODE, one can speak of a separable second-order, third-order or ''n''th-order ODE. Consider the separable first-order ODE:
:\frac{dy}{dx}=f(y)g(x)
The derivative can alternatively be written the following way to underscore that it is an operator working on the unknown function, ''y'':
:\frac{dy}{dx}=\frac{d}{dx}(y)
Thus, when one separates variables for first-order equations, one in fact moves the ''dx'' denominator of the operator to the side with the ''x'' variable, and the ''d''(''y'') is left on the side with the ''y'' variable. The second-derivative operator, by analogy, breaks down as follows:
:\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d}{dx}\left(\frac{d}{dx}(y)\right)
The third-, fourth- and ''n''th-derivative operators break down in the same way. Thus, much like a first-order separable ODE is reducible to the form
:\frac{dy}{dx}=f(y)g(x)
a separable second-order ODE is reducible to the form
:\frac{d^2y}{dx^2}=f\left(y'\right)g(x)
and an ''n''th-order separable ODE is reducible to
:\frac{d^ny}{dx^n}=f\!\left(y^{(n-1)}\right)g(x)


Example

Consider the simple nonlinear second-order differential equation
:y''=(y')^2.
This equation is an equation only of y'' and y', meaning it is reducible to the general form described above and is, therefore, separable. Since it is a second-order separable equation, collect all ''x'' variables on one side and all y' variables on the other to get
:\frac{d(y')}{(y')^2}=dx.
Now, integrate the right side with respect to ''x'' and the left with respect to y':
:\int \frac{d(y')}{(y')^2}=\int dx.
This gives
:-\frac{1}{y'}=x+C_1,
which simplifies to
:y'=-\frac{1}{x+C_1}~.
This is now a simple integral problem that gives the final answer
:y=C_2-\ln|x+C_1|.
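As a brief check (an illustrative addition), SymPy confirms the result on the branch x + C_1 > 0; the absolute value in the logarithm covers the other branch.

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C_1 C_2', positive=True)
y = C2 - sp.log(x + C1)                   # the solution on the branch x + C_1 > 0

residual = sp.diff(y, x, 2) - sp.diff(y, x)**2
print(sp.simplify(residual))              # 0, so y'' = (y')**2 is satisfied
```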


Partial differential equations

The method of separation of variables is also used to solve a wide range of linear partial differential equations with boundary and initial conditions, such as the heat equation, wave equation, Laplace equation, Helmholtz equation and biharmonic equation. The analytical method of separation of variables for solving partial differential equations has also been generalized into a computational method of decomposition in invariant structures that can be used to solve systems of partial differential equations.


Example: homogeneous case

Consider the one-dimensional heat equation
:\frac{\partial u}{\partial t} = \alpha\frac{\partial^2 u}{\partial x^2}.
The variable ''u'' denotes temperature. The boundary condition is homogeneous, that is
:u\big|_{x=0} = u\big|_{x=L} = 0.
Let us attempt to find a nontrivial solution satisfying the boundary conditions but with the following property: ''u'' is a product in which the dependence of ''u'' on ''x'', ''t'' is separated, that is:
:u(x,t) = X(x)T(t).
Substituting ''u'' back into the heat equation and using the product rule,
:\frac{T'(t)}{\alpha T(t)} = \frac{X''(x)}{X(x)} = -\lambda,
where ''λ'' must be constant since the right hand side depends only on ''x'' and the left hand side only on ''t''. Thus:
:T'(t) = -\lambda\alpha T(t),
and
:X''(x) = -\lambda X(x).
−''λ'' here is the eigenvalue for both differential operators, and ''T''(''t'') and ''X''(''x'') are corresponding eigenfunctions. We will now show that solutions for ''X''(''x'') for values of ''λ'' ≤ 0 cannot occur.

Suppose that ''λ'' < 0. Then there exist real numbers ''B'', ''C'' such that
:X(x) = B e^{\sqrt{-\lambda}\,x} + C e^{-\sqrt{-\lambda}\,x}.
From the boundary conditions we get
:X(0) = 0 = X(L),
and therefore ''B'' = 0 = ''C'', which implies ''u'' is identically 0.

Suppose that ''λ'' = 0. Then there exist real numbers ''B'', ''C'' such that
:X(x) = Bx + C.
From the boundary conditions we conclude in the same manner as before that ''u'' is identically 0.

Therefore, it must be the case that ''λ'' > 0. Then there exist real numbers ''A'', ''B'', ''C'' such that
:T(t) = A e^{-\lambda \alpha t},
and
:X(x) = B \sin(\sqrt{\lambda} \, x) + C \cos(\sqrt{\lambda} \, x).
From the boundary conditions we get ''C'' = 0 and that for some positive integer ''n'',
:\sqrt{\lambda} = n \frac{\pi}{L}.
This solves the heat equation in the special case that the dependence of ''u'' has the separated product form above. In general, a sum of such solutions which satisfy the boundary conditions also satisfies the heat equation and the boundary conditions. Hence a complete solution can be given as
:u(x,t) = \sum_{n=1}^{\infty} D_n \sin \frac{n\pi x}{L} \exp\left(-\frac{n^2 \pi^2 \alpha t}{L^2}\right),
where ''D''''n'' are coefficients determined by the initial condition. Given the initial condition
:u\big|_{t=0}=f(x),
we can get
:f(x) = \sum_{n=1}^{\infty} D_n \sin \frac{n\pi x}{L}.
This is the Fourier sine series expansion of ''f''(''x''), which is amenable to Fourier analysis. Multiplying both sides with \sin \frac{n\pi x}{L} and integrating over [0, L] results in
:D_n = \frac{2}{L} \int_0^L f(x) \sin \frac{n\pi x}{L} \, dx.
This method requires that the eigenfunctions ''X'', here \left\{\sin \frac{n\pi x}{L}\right\}_{n=1}^{\infty}, are orthogonal and complete. In general this is guaranteed by Sturm–Liouville theory.
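To make the recipe concrete (an illustrative addition, not from the original article), the following Python sketch computes the coefficients D_n by numerical quadrature for a sample initial profile f(x) = x(L − x) and evaluates the truncated series; L, α, the profile and the truncation level N are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

# Parameters chosen only for illustration
L, alpha, N = 1.0, 0.1, 50
f = lambda x: x * (L - x)                 # sample initial temperature profile

# D_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx
D = [2.0 / L * quad(lambda x, n=n: f(x) * np.sin(n * np.pi * x / L), 0, L)[0]
     for n in range(1, N + 1)]

def u(x, t):
    """Truncated separation-of-variables solution of the heat equation."""
    return sum(D[n - 1] * np.sin(n * np.pi * x / L)
               * np.exp(-n**2 * np.pi**2 * alpha * t / L**2)
               for n in range(1, N + 1))

print(u(0.5, 0.0))   # ~ f(0.5) = 0.25: the series reproduces the initial condition
print(u(0.5, 1.0))   # decays toward 0 as t grows
```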


Example: nonhomogeneous case

Suppose the equation is nonhomogeneous,
:\frac{\partial u}{\partial t} = \alpha\frac{\partial^2 u}{\partial x^2} + h(x,t),
with the boundary condition the same as in the homogeneous case. Expand ''h''(''x'',''t''), ''u''(''x'',''t'') and ''f''(''x'') into
:h(x,t)=\sum_{n=1}^{\infty} h_n(t) \sin \frac{n\pi x}{L},
:u(x,t)=\sum_{n=1}^{\infty} u_n(t) \sin \frac{n\pi x}{L},
:f(x)=\sum_{n=1}^{\infty} b_n \sin \frac{n\pi x}{L},
where ''h''''n''(''t'') and ''b''''n'' can be calculated by integration, while ''u''''n''(''t'') is to be determined. Substituting these expansions back into the equation and considering the orthogonality of sine functions we get
:u'_n(t)+\alpha\frac{n^2\pi^2}{L^2} u_n(t)=h_n(t),
which are a sequence of linear differential equations that can be readily solved with, for instance, the Laplace transform, or an integrating factor. Finally, we can get
:u_n(t)=e^{-\alpha\frac{n^2\pi^2}{L^2} t} \left(b_n+\int_0^t h_n(s) e^{\alpha\frac{n^2\pi^2}{L^2} s} \, ds \right).
If the boundary condition is nonhomogeneous, then the expansions above are no longer valid. One has to find a function ''v'' that satisfies the boundary condition only, and subtract it from ''u''. The function ''u'' − ''v'' then satisfies the homogeneous boundary condition, and can be solved with the above method.
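As a sanity check (an illustrative addition, not from the original), SymPy can verify that this closed form satisfies the mode equation u_n′ + α(nπ/L)² u_n = h_n(t); at t = 0 the integral is empty, so u_n(0) = b_n as required.

```python
import sympy as sp

t, s = sp.symbols('t s', nonnegative=True)
n = sp.symbols('n', positive=True, integer=True)
alpha, L, b_n = sp.symbols('alpha L b_n', positive=True)
h_n = sp.Function('h_n')

mu = alpha * n**2 * sp.pi**2 / L**2                 # decay rate of mode n
u_n = sp.exp(-mu * t) * (b_n + sp.Integral(h_n(s) * sp.exp(mu * s), (s, 0, t)))

# The claimed solution should satisfy u_n' + mu*u_n = h_n(t)
residual = sp.diff(u_n, t) + mu * u_n - h_n(t)
print(sp.simplify(residual))                        # 0
```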


Example: mixed derivatives

For some equations involving mixed derivatives, the equation does not separate as easily as the heat equation did in the first example above, but nonetheless separation of variables may still be applied. Consider the two-dimensional biharmonic equation
:\frac{\partial^4 u}{\partial x^4} + 2\frac{\partial^4 u}{\partial x^2 \partial y^2} + \frac{\partial^4 u}{\partial y^4} = 0.
Proceeding in the usual manner, we look for solutions of the form
:u(x,y) = X(x)Y(y)
and we obtain the equation
:\frac{X''''(x)}{X(x)} + 2\frac{X''(x)}{X(x)}\frac{Y''(y)}{Y(y)} + \frac{Y''''(y)}{Y(y)} = 0.
Writing this equation in the form
:E(x) + F(x)G(y) + H(y) = 0,
and taking the derivative of this expression with respect to ''x'' gives E'(x)+F'(x)G(y)=0, which means G(y)=\text{const.} or F'(x)=0; likewise, taking the derivative with respect to ''y'' leads to F(x)G'(y)+H'(y)=0 and thus F(x)=\text{const.} or G'(y)=0. Hence either ''F''(''x'') or ''G''(''y'') must be a constant, say −λ. This further implies that either -E(x)=F(x)G(y)+H(y) or -H(y)=E(x)+F(x)G(y) is constant. Returning to the equation for ''X'' and ''Y'', we have two cases
:\begin{align} X''(x) &= -\lambda_1 X(x) \\ X''''(x) &= \mu_1 X(x) \\ Y''''(y) - 2\lambda_1 Y''(y) &= -\mu_1 Y(y) \end{align}
and
:\begin{align} Y''(y) &= -\lambda_2 Y(y) \\ Y''''(y) &= \mu_2 Y(y) \\ X''''(x) - 2\lambda_2 X''(x) &= -\mu_2 X(x) \end{align}
which can each be solved by considering the separate cases for \lambda_i<0, \lambda_i=0, \lambda_i>0 and noting that \mu_i=\lambda_i^2.
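As a small illustration (not part of the original), a SymPy sketch shows that the product ansatz turns the biharmonic operator into exactly the separated combination E(x) + F(x)G(y) + H(y) used above.

```python
import sympy as sp

x, y = sp.symbols('x y')
X = sp.Function('X')(x)
Y = sp.Function('Y')(y)
u = X * Y                                  # the product ansatz u(x, y) = X(x) Y(y)

biharmonic = sp.diff(u, x, 4) + 2 * sp.diff(u, x, 2, y, 2) + sp.diff(u, y, 4)

# Dividing by X*Y exposes the separated structure E(x) + F(x)G(y) + H(y)
print(sp.expand(biharmonic / u))
# X''''/X + 2*(X''/X)*(Y''/Y) + Y''''/Y, printed with Derivative(...) terms
```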


Curvilinear coordinates

In orthogonal curvilinear coordinates, separation of variables can still be used, although the details differ in some respects from those in Cartesian coordinates. For instance, a regularity or periodicity condition may determine the eigenvalues in place of boundary conditions. See spherical harmonics for example.


Applicability


Partial differential equations

For many PDEs, such as the wave equation, Helmholtz equation and Schrödinger equation, the applicability of separation of variables is a result of the spectral theorem. In some cases, separation of variables may not be possible. Separation of variables may be possible in some coordinate systems but not others (John Renze and Eric W. Weisstein, ''Separation of Variables''), and which coordinate systems allow for separation depends on the symmetry properties of the equation (Willard Miller (1984), ''Symmetry and Separation of Variables'', Cambridge University Press). Below is an outline of an argument demonstrating the applicability of the method to certain linear equations, although the precise method may differ in individual cases (for instance in the biharmonic equation above).

Consider an initial boundary value problem for a function u(x,t) on D = \{(x,t) : 0 \leq x \leq l,\ t \geq 0\} in two variables:
:(Tu)(x,t) = (Su)(x,t)
where T is a differential operator with respect to x and S is a differential operator with respect to t, with boundary data:
:(Tu)(0,t) = (Tu)(l,t) = 0 for t \geq 0
:(Su)(x,0)=h(x) for 0 \leq x \leq l
where h is a known function. We look for solutions of the form u(x,t) = f(x) g(t). Dividing the PDE through by f(x)g(t) gives
:\frac{Tf}{f} = \frac{Sg}{g}.
The left hand side depends only on x and the right hand side only on t, so both must be equal to a constant K, which gives two ordinary differential equations
:Tf = Kf, \quad Sg = Kg,
which we can recognize as eigenvalue problems for the operators T and S. If T is a compact, self-adjoint operator on the space L^2[0,l] along with the relevant boundary conditions, then by the spectral theorem there exists a basis for L^2[0,l] consisting of eigenfunctions for T. Let the spectrum of T be E and let f_\lambda be an eigenfunction with eigenvalue \lambda \in E. Then for any function which at each time t is square-integrable with respect to x, we can write this function as a linear combination of the f_\lambda. In particular, we know the solution u can be written as
:u(x,t) = \sum_{\lambda \in E} c_\lambda(t) f_\lambda(x)
for some functions c_\lambda(t). In the separation of variables, these functions are given by solutions to Sg = Kg. Hence, the spectral theorem ensures that the separation of variables will (when it is possible) find all the solutions. For many differential operators, such as \frac{d^2}{dx^2}, we can show that they are self-adjoint by integration by parts. While these operators may not be compact, their inverses (when they exist) may be, as in the case of the wave equation, and these inverses have the same eigenfunctions and eigenvalues as the original operator, with the possible exception of zero (David Benson (2007), ''Music: A Mathematical Offering'', Cambridge University Press, Appendix W).
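A minimal numerical sketch (an illustrative addition, with interval and grid size chosen arbitrarily) of this spectral picture: discretizing T = −d²/dx² with Dirichlet conditions gives a symmetric matrix whose eigenpairs approximate the eigenvalues (nπ/l)² and sine eigenfunctions used in the heat-equation example.

```python
import numpy as np

# Finite-difference discretization of T = -d^2/dx^2 on [0, l] with Dirichlet boundaries
l, N = 1.0, 200
h = l / (N + 1)
main = 2.0 * np.ones(N) / h**2
off = -1.0 * np.ones(N - 1) / h**2
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigenvalues, eigenvectors = np.linalg.eigh(T)      # spectral decomposition of the symmetric matrix
print(eigenvalues[:3])                             # ~ (pi/l)^2, (2*pi/l)^2, (3*pi/l)^2
print([(n * np.pi / l)**2 for n in (1, 2, 3)])     # exact eigenvalues for comparison
```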


Matrices

The matrix form of the separation of variables is the Kronecker sum. As an example we consider the 2D discrete Laplacian on a regular grid:
:L = \mathbf{D_{xx}}\oplus\mathbf{D_{yy}}=\mathbf{D_{xx}}\otimes\mathbf{I}+\mathbf{I}\otimes\mathbf{D_{yy}}, \,
where \mathbf{D_{xx}} and \mathbf{D_{yy}} are 1D discrete Laplacians in the ''x''- and ''y''-directions, correspondingly, and \mathbf{I} are the identities of appropriate sizes. See the main article Kronecker sum of discrete Laplacians for details.
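For illustration (not from the original article), a few lines of NumPy build this Kronecker-sum Laplacian for a small grid; the grid sizes are arbitrary.

```python
import numpy as np

def laplacian_1d(n):
    """1D discrete Laplacian (second-difference matrix) with Dirichlet boundaries."""
    return np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

nx, ny = 4, 3                                   # grid sizes chosen only for illustration
Dxx, Dyy = laplacian_1d(nx), laplacian_1d(ny)
Ix, Iy = np.eye(nx), np.eye(ny)

# 2D Laplacian as a Kronecker sum: L = Dxx (+) Dyy = Dxx (x) I + I (x) Dyy
L = np.kron(Dxx, Iy) + np.kron(Ix, Dyy)
print(L.shape)                                  # (12, 12) = (nx*ny, nx*ny)
```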


Software and AI

Some mathematical programs are able to do separation of variables: Xcas among others.
Microsoft Copilot and ChatGPT can do separation of variables.
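For instance (an illustrative example, not drawn from the article), the open-source computer algebra system SymPy applies the method through its ODE solver's separable routine.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Ask the solver to use its 'separable' strategy explicitly on dy/dx = x*y
sol = sp.dsolve(sp.Eq(y(x).diff(x), x * y(x)), y(x), hint='separable')
print(sol)   # y(x) = C1*exp(x**2/2)
```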


See also

* Inseparable differential equation



External links

* John Renze and Eric W. Weisstein, "Separation of Variables" and "Differential Equation", MathWorld
* Methods of Generalized and Functional Separation of Variables at EqWorld: The World of Mathematical Equations
* Examples of separating variables to solve PDEs
* "A Short Justification of Separation of Variables"