Orthogonal coordinate system

In mathematics, orthogonal coordinates are defined as a set of ''d'' coordinates q = (''q''1, ''q''2, ..., ''q''''d'') in which the coordinate hypersurfaces all meet at right angles (note: superscripts are indices, not exponents). A coordinate surface for a particular coordinate ''q''''k'' is the curve, surface, or hypersurface on which ''q''''k'' is constant. For example, the three-dimensional Cartesian coordinates (''x'', ''y'', ''z'') form an orthogonal coordinate system, since the coordinate surfaces ''x'' = constant, ''y'' = constant, and ''z'' = constant are planes that meet at right angles to one another, i.e., are perpendicular. Orthogonal coordinates are a special but extremely common case of curvilinear coordinates.
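The right-angle condition can be checked numerically for any coordinate system. A minimal sketch, assuming NumPy is available; the `position` and `basis_vectors` helpers are illustrative names for spherical coordinates (''r'', ''θ'', ''φ''):

```python
import numpy as np

# Illustrative helper: Cartesian position for spherical coordinates (r, theta, phi).
def position(q):
    r, theta, phi = q
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

# Tangent vectors to the coordinate curves, approximated by central differences.
def basis_vectors(q, h=1e-6):
    e = []
    for i in range(3):
        dq = np.zeros(3)
        dq[i] = h
        e.append((position(q + dq) - position(q - dq)) / (2 * h))
    return e

e1, e2, e3 = basis_vectors(np.array([2.0, 0.7, 1.3]))
# The pairwise dot products vanish: the coordinate surfaces meet at right angles.
print(abs(np.dot(e1, e2)) < 1e-6, abs(np.dot(e2, e3)) < 1e-6, abs(np.dot(e1, e3)) < 1e-6)
```

The same check applied to a non-orthogonal system (e.g. skew coordinates) would produce non-zero dot products.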


Motivation

While vector operations and physical laws are normally easiest to derive in Cartesian coordinates, non-Cartesian orthogonal coordinates are often used instead for the solution of various problems, especially boundary value problems, such as those arising in field theories of quantum mechanics, fluid flow, electrodynamics, plasma physics and the diffusion of chemical species or heat.

The chief advantage of non-Cartesian coordinates is that they can be chosen to match the symmetry of the problem. For example, the pressure wave due to an explosion far from the ground (or other barriers) depends on 3D space in Cartesian coordinates; however, the pressure predominantly moves away from the center, so that in spherical coordinates the problem becomes very nearly one-dimensional (since the pressure wave dominantly depends only on time and the distance from the center). Another example is (slow) fluid in a straight circular pipe: in Cartesian coordinates, one has to solve a (difficult) two-dimensional boundary value problem involving a partial differential equation, but in cylindrical coordinates the problem becomes one-dimensional with an ordinary differential equation instead of a partial differential equation.

The reason to prefer orthogonal coordinates instead of general curvilinear coordinates is simplicity: many complications arise when coordinates are not orthogonal. For example, in orthogonal coordinates many problems may be solved by separation of variables. Separation of variables is a mathematical technique that converts a complex ''d''-dimensional problem into ''d'' one-dimensional problems that can be solved in terms of known functions. Many equations can be reduced to Laplace's equation or the Helmholtz equation. Laplace's equation is separable in 13 orthogonal coordinate systems (the 14 listed in the table below with the exception of toroidal), and the Helmholtz equation is separable in 11 orthogonal coordinate systems.

Orthogonal coordinates never have off-diagonal terms in their metric tensor. In other words, the infinitesimal squared distance ''ds''2 can always be written as a scaled sum of the squared infinitesimal coordinate displacements
:ds^2 = \sum_{k=1}^d \left( h_k \, dq^{k} \right)^2
where ''d'' is the dimension and the scaling functions (or scale factors)
:h_{k}(\mathbf{q})\ \stackrel{\text{def}}{=}\ \sqrt{g_{kk}(\mathbf{q})} = \left|\mathbf e_k\right|
equal the square roots of the diagonal components of the metric tensor, or the lengths of the local basis vectors \mathbf e_k described below. These scaling functions ''h''''i'' are used to calculate differential operators in the new coordinates, e.g., the gradient, the Laplacian, the divergence and the curl.

A simple method for generating orthogonal coordinate systems in two dimensions is by a conformal mapping of a standard two-dimensional grid of Cartesian coordinates. A complex number ''z'' = ''x'' + ''iy'' can be formed from the real coordinates ''x'' and ''y'', where ''i'' represents the imaginary unit. Any holomorphic function ''w'' = ''f''(''z'') with non-zero complex derivative will produce a conformal mapping; if the resulting complex number is written ''w'' = ''u'' + ''iv'', then the curves of constant ''u'' and ''v'' intersect at right angles, just as the original lines of constant ''x'' and ''y'' did.

Orthogonal coordinates in three and higher dimensions can be generated from an orthogonal two-dimensional coordinate system, either by projecting it into a new dimension (''cylindrical coordinates'') or by rotating the two-dimensional system about one of its symmetry axes. However, there are other orthogonal coordinate systems in three dimensions that cannot be obtained by projecting or rotating a two-dimensional system, such as the ellipsoidal coordinates. More general orthogonal coordinates may be obtained by starting with some necessary coordinate surfaces and considering their orthogonal trajectories.
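The conformal-mapping construction can be illustrated numerically with the holomorphic function ''f''(''z'') = ''z''2, which gives ''u'' = ''x''2 − ''y''2 and ''v'' = 2''xy''. A minimal sketch, assuming NumPy; the gradients of ''u'' and ''v'' are normal to their level curves, so a vanishing dot product shows the curves cross at right angles:

```python
import numpy as np

# w = f(z) = z^2 maps the Cartesian grid to an orthogonal curvilinear grid:
# u = x^2 - y^2, v = 2xy. f is holomorphic with f'(z) = 2z != 0 away from the origin.
def uv(x, y):
    w = (x + 1j * y) ** 2
    return w.real, w.imag

# Numerical gradient of a scalar function g(x, y), by central differences.
def grad(g, x, y, h=1e-6):
    return np.array([(g(x + h, y) - g(x - h, y)) / (2 * h),
                     (g(x, y + h) - g(x, y - h)) / (2 * h)])

x0, y0 = 1.3, 0.4
gu = grad(lambda x, y: uv(x, y)[0], x0, y0)
gv = grad(lambda x, y: uv(x, y)[1], x0, y0)
# Gradients are normal to the level curves of u and v; their dot product
# vanishes, so the curves of constant u and constant v are orthogonal.
print(abs(np.dot(gu, gv)) < 1e-6)
```

Any other holomorphic ''f'' with non-zero derivative (e.g. exp, log, sin) would pass the same check at points where ''f''′(''z'') ≠ 0.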


Basis vectors


Covariant basis

In Cartesian coordinates, the basis vectors are fixed (constant). In the more general setting of curvilinear coordinates, a point in space is specified by the coordinates, and at every such point there is bound a set of basis vectors, which generally are not constant: this is the essence of curvilinear coordinates in general and is a very important concept. What distinguishes orthogonal coordinates is that, though the basis vectors vary, they are always orthogonal with respect to each other. In other words,
:\mathbf e_i \cdot \mathbf e_j = 0 \quad \text{if} \quad i \neq j
These basis vectors are by definition the tangent vectors of the curves obtained by varying one coordinate, keeping the others fixed:
:\mathbf e_i = \frac{\partial \mathbf r}{\partial q^i}
where r is some point and ''q''''i'' is the coordinate for which the basis vector is extracted. In other words, a curve is obtained by fixing all but one coordinate; the unfixed coordinate is varied as in a parametric curve, and the derivative of the curve with respect to the parameter (the varying coordinate) is the basis vector for that coordinate.

Note that the vectors are not necessarily of equal length. The useful functions known as scale factors of the coordinates are simply the lengths h_i of the basis vectors \mathbf e_i (see table below). The scale factors are sometimes called Lamé coefficients, not to be confused with Lamé parameters (solid mechanics). The normalized basis vectors are notated with a hat and obtained by dividing by the length:
:\hat{\mathbf e}_i = \frac{\mathbf e_i}{h_i} = \frac{\mathbf e_i}{\left|\mathbf e_i\right|}
A vector field may be specified by its components with respect to the basis vectors or the normalized basis vectors, and one must be sure which case is meant. Components in the normalized basis are most common in applications for clarity of the quantities (for example, one may want to deal with tangential velocity instead of tangential velocity times a scale factor); in derivations the normalized basis is less common since it is more complicated.
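The scale factors can be computed directly from the definition h_i = |∂r/∂''q''''i''|. A minimal numerical sketch, assuming NumPy; the `position` helper is an illustrative name for the spherical-coordinate map, whose scale factors are known in closed form (''h''''r'' = 1, ''h''''θ'' = ''r'', ''h''''φ'' = ''r'' sin ''θ''):

```python
import numpy as np

# Illustrative helper: Cartesian position for spherical coordinates q = (r, theta, phi).
def position(q):
    r, theta, phi = q
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

# Scale factor h_i = |e_i| = |dr/dq^i|, approximated by central differences.
def scale_factors(q, h=1e-6):
    hs = []
    for i in range(3):
        dq = np.zeros(3)
        dq[i] = h
        e_i = (position(q + dq) - position(q - dq)) / (2 * h)
        hs.append(np.linalg.norm(e_i))
    return hs

r, theta = 2.0, 0.7
h_r, h_theta, h_phi = scale_factors(np.array([r, theta, 1.3]))
# Closed forms for spherical coordinates: h_r = 1, h_theta = r, h_phi = r sin(theta)
print(np.allclose([h_r, h_theta, h_phi], [1.0, r, r * np.sin(theta)], atol=1e-6))
```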


Contravariant basis

The basis vectors shown above are covariant basis vectors (because they "co-vary" with vectors). In the case of orthogonal coordinates, the contravariant basis vectors are easy to find since they will be in the same direction as the covariant vectors but of reciprocal length (for this reason, the two sets of basis vectors are said to be reciprocal with respect to each other):
:\mathbf e^i = \frac{\hat{\mathbf e}_i}{h_i} = \frac{\mathbf e_i}{h_i^2}
This follows from the fact that, by definition, \mathbf e_i \cdot \mathbf e^j = \delta^j_i, using the Kronecker delta. Note that:
:\hat{\mathbf e}_i = \frac{\mathbf e_i}{h_i} = h_i \mathbf e^i = \hat{\mathbf e}^i
We now face three different basis sets commonly used to describe vectors in orthogonal coordinates: the covariant basis e''i'', the contravariant basis e''i'', and the normalized basis ê''i''. While a vector is an ''objective quantity'', meaning its identity is independent of any coordinate system, the components of a vector depend on what basis the vector is represented in. To avoid confusion, the components of the vector x with respect to the e''i'' basis are represented as x''i'', while the components with respect to the e''i'' basis are represented as x''i'':
:\mathbf x = \sum_i x^i \mathbf e_i = \sum_i x_i \mathbf e^i
The position of the indices represents how the components are calculated (upper indices should not be confused with exponentiation). Note that the summation symbols Σ (capital Sigma) and the summation range, indicating summation over all basis vectors (''i'' = 1, 2, ..., ''d''), are often omitted. The components are related simply by:
:h_i^2 x^i = x_i
There is no distinguishing widespread notation in use for vector components with respect to the normalized basis; in this article we'll use subscripts for vector components and note that the components are calculated in the normalized basis.


Vector algebra

Vector addition and negation are done component-wise just as in Cartesian coordinates with no complication. Extra considerations may be necessary for other vector operations. Note, however, that all of these operations assume that two vectors in a vector field are bound to the same point (in other words, the tails of the vectors coincide). Since basis vectors generally vary in orthogonal coordinates, if two vectors are added whose components are calculated at different points in space, the different basis vectors require consideration.


Dot product

The dot product in Cartesian coordinates (Euclidean space with an orthonormal basis set) is simply the sum of the products of components. In orthogonal coordinates, the dot product of two vectors x and y takes this familiar form when the components of the vectors are calculated in the normalized basis:
:\mathbf x \cdot \mathbf y = \sum_i x_i \hat{\mathbf e}_i \cdot \sum_j y_j \hat{\mathbf e}_j = \sum_i x_i y_i
This is an immediate consequence of the fact that the normalized basis at some point can form a Cartesian coordinate system: the basis set is orthonormal. For components in the covariant or contravariant bases,
:\mathbf x \cdot \mathbf y = \sum_i h_i^2 x^i y^i = \sum_i \frac{x_i y_i}{h_i^2} = \sum_i x^i y_i = \sum_i x_i y^i
This can be readily derived by writing out the vectors in component form, normalizing the basis vectors, and taking the dot product. For example, in 2D:
:\begin{align} \mathbf x \cdot \mathbf y & = \left(x^1 \mathbf e_1 + x^2 \mathbf e_2\right) \cdot \left(y_1 \mathbf e^1 + y_2 \mathbf e^2\right) \\ & = \left(x^1 h_1 \hat{\mathbf e}_1 + x^2 h_2 \hat{\mathbf e}_2\right) \cdot \left(y_1 \frac{\hat{\mathbf e}_1}{h_1} + y_2 \frac{\hat{\mathbf e}_2}{h_2}\right) = x^1 y_1 + x^2 y_2 \end{align}
where the fact that the normalized covariant and contravariant bases are equal has been used.
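The equivalence of the four covariant/contravariant expressions for the dot product can be verified with a few lines of NumPy. A minimal sketch with illustrative scale factors and components chosen arbitrarily at one point:

```python
import numpy as np

# Arbitrary scale factors and contravariant components at one point (illustrative values).
h = np.array([1.0, 2.0, 1.3])
x_up = np.array([0.5, -1.0, 2.0])   # contravariant components x^i
y_up = np.array([1.5, 0.25, -0.7])  # contravariant components y^i

# Lowering indices: x_i = h_i^2 x^i
x_dn = h**2 * x_up
y_dn = h**2 * y_up

dot = np.sum(h**2 * x_up * y_up)    # sum_i h_i^2 x^i y^i
# All four expressions for the dot product agree.
print(np.allclose(dot, np.sum(x_dn * y_dn / h**2)),   # sum_i x_i y_i / h_i^2
      np.allclose(dot, np.sum(x_up * y_dn)),          # sum_i x^i y_i
      np.allclose(dot, np.sum(x_dn * y_up)))          # sum_i x_i y^i
```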


Cross product

The cross product in 3D Cartesian coordinates is:
:\mathbf x \times \mathbf y = (x_2 y_3 - x_3 y_2) \hat{\mathbf e}_1 + (x_3 y_1 - x_1 y_3) \hat{\mathbf e}_2 + (x_1 y_2 - x_2 y_1) \hat{\mathbf e}_3
The above formula then remains valid in orthogonal coordinates if the components are calculated in the normalized basis. To construct the cross product in orthogonal coordinates with covariant or contravariant bases we again must simply normalize the basis vectors, for example:
:\mathbf x \times \mathbf y = \sum_i x^i \mathbf e_i \times \sum_j y^j \mathbf e_j = \sum_i x^i h_i \hat{\mathbf e}_i \times \sum_j y^j h_j \hat{\mathbf e}_j
which, written expanded out,
:\mathbf x \times \mathbf y = \left(x^2 y^3 - x^3 y^2\right) \frac{h_2 h_3}{h_1} \mathbf e_1 + \left(x^3 y^1 - x^1 y^3\right) \frac{h_1 h_3}{h_2} \mathbf e_2 + \left(x^1 y^2 - x^2 y^1\right) \frac{h_1 h_2}{h_3} \mathbf e_3
Terse notation for the cross product, which simplifies generalization to non-orthogonal coordinates and higher dimensions, is possible with the Levi-Civita tensor, which will have components other than zeros and ones if the scale factors are not all equal to one.
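The scale-factor form of the cross product can be checked against the ordinary Cartesian cross product. A minimal sketch, assuming NumPy; for simplicity the normalized basis at the chosen point is taken to be aligned with the Cartesian axes (so e_i = h_i ê_i), with illustrative values:

```python
import numpy as np

# At one point, take the normalized basis aligned with the Cartesian axes,
# so e_i = h_i * ê_i; the scale factors below are illustrative values.
h = np.array([1.0, 2.0, 1.3])
x_up = np.array([0.5, -1.0, 2.0])   # contravariant components x^i
y_up = np.array([1.5, 0.25, -0.7])  # contravariant components y^j

# Cartesian representatives of x = sum_i x^i e_i and y = sum_j y^j e_j.
X = h * x_up
Y = h * y_up

# Cross product from the contravariant-component formula, expressed in the
# normalized basis: the ê_k component of (x^i y^j - x^j y^i)(h_i h_j / h_k) e_k
# is (x^i y^j - x^j y^i) h_i h_j.
formula = np.array([
    (x_up[1] * y_up[2] - x_up[2] * y_up[1]) * h[1] * h[2],
    (x_up[2] * y_up[0] - x_up[0] * y_up[2]) * h[2] * h[0],
    (x_up[0] * y_up[1] - x_up[1] * y_up[0]) * h[0] * h[1],
])
print(np.allclose(np.cross(X, Y), formula))
```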


Vector calculus


Differentiation

Looking at an infinitesimal displacement from some point, it's apparent that
:d\mathbf r = \sum_i \frac{\partial \mathbf r}{\partial q^i} \, dq^i = \sum_i \mathbf e_i \, dq^i
By definition, the gradient of a function must satisfy (this definition remains true if ''ƒ'' is any tensor)
:df = \nabla f \cdot d\mathbf r \quad \Rightarrow \quad df = \nabla f \cdot \sum_i \mathbf e_i \, dq^i
It follows then that the del operator must be:
:\nabla = \sum_i \mathbf e^i \frac{\partial}{\partial q^i}
and this happens to remain true in general curvilinear coordinates. Quantities like the gradient and Laplacian follow through proper application of this operator.
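Applying the del operator to a scalar gives ∇''f'' = Σ''i'' ê''i'' (1/''h''''i'') ∂''f''/∂''q''''i''. A minimal numerical sketch in spherical coordinates, assuming NumPy; `f_cart` and `spherical_to_cart` are illustrative helpers, and the result is compared with the directly computed Cartesian gradient:

```python
import numpy as np

# Illustrative scalar field f(x, y, z) = x^2 + y z, with gradient (2x, z, y).
def f_cart(p):
    x, y, z = p
    return x * x + y * z

def spherical_to_cart(q):
    r, th, ph = q
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

q = np.array([2.0, 0.7, 1.3])
r, th, ph = q
d = 1e-6

# Spherical scale factors h = (1, r, r sin(theta)).
h = np.array([1.0, r, r * np.sin(th)])
grad_q = np.zeros(3)
ehat = np.zeros((3, 3))
for i in range(3):
    dq = np.zeros(3); dq[i] = d
    # df/dq^i by central differences
    grad_q[i] = (f_cart(spherical_to_cart(q + dq))
                 - f_cart(spherical_to_cart(q - dq))) / (2 * d)
    # normalized basis vector ê_i
    e_i = (spherical_to_cart(q + dq) - spherical_to_cart(q - dq)) / (2 * d)
    ehat[i] = e_i / np.linalg.norm(e_i)

# Gradient via the orthogonal-coordinates formula: sum_i (1/h_i) (df/dq^i) ê_i
grad_formula = sum(grad_q[i] / h[i] * ehat[i] for i in range(3))

# Direct Cartesian gradient of f at the same point: (2x, z, y)
p = spherical_to_cart(q)
grad_cart = np.array([2 * p[0], p[2], p[1]])
print(np.allclose(grad_formula, grad_cart, atol=1e-5))
```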


Basis vector formulae

From dr and the normalized basis vectors ê''i'', the differential line, surface, and volume elements can be constructed, where
:J = \left| \frac{\partial \mathbf r}{\partial q^1} \cdot \left(\frac{\partial \mathbf r}{\partial q^2} \times \frac{\partial \mathbf r}{\partial q^3} \right)\right| = \left| \frac{\partial(x, y, z)}{\partial(q^1, q^2, q^3)} \right| = h_1 h_2 h_3
is the Jacobian determinant, which has the geometric interpretation of the deformation in volume from the infinitesimal cube d''x'' d''y'' d''z'' to the infinitesimal curved volume in the orthogonal coordinates.


Integration

Using the line element shown above, the line integral along a path \scriptstyle\mathcal P of a vector F is:
:\int_{\mathcal P} \mathbf F \cdot d\mathbf r = \int_{\mathcal P} \sum_i F_i \mathbf e^i \cdot \sum_j \mathbf e_j \, dq^j = \sum_i \int_{\mathcal P} F_i \, dq^i
An infinitesimal element of area for a surface described by holding one coordinate ''q''''k'' constant is:
:dA_k = \prod_{i \neq k} ds_i = \prod_{i \neq k} h_i \, dq^i
Similarly, the volume element is:
:dV = \prod_i ds_i = \prod_i h_i \, dq^i
where the large symbol Π (capital Pi) indicates a product the same way that a large Σ indicates summation. Note that the product of all the scale factors is the Jacobian determinant.

As an example, the surface integral of a vector function F over a ''q''1 = constant surface \scriptstyle\mathcal S in 3D is:
:\int_{\mathcal S} \mathbf F \cdot d\mathbf A = \int_{\mathcal S} \mathbf F \cdot \hat{\mathbf n} \ dA = \int_{\mathcal S} \mathbf F \cdot \hat{\mathbf e}_1 \ dA = \int_{\mathcal S} F_1 \frac{h_2 h_3}{h_1} \, dq^2 \, dq^3
Note that ''F''1/''h''1 is the component of F normal to the surface.
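The volume element can be exercised directly: in spherical coordinates ''h''1''h''2''h''3 = ''r''2 sin ''θ'', so integrating dV over a ball of radius ''R'' should recover 4π''R''3/3. A minimal sketch using the midpoint rule (grid resolution `n` is an illustrative choice):

```python
import math

# Volume of a ball of radius R computed with dV = h_r h_theta h_phi dr dtheta dphi
#                                             = r^2 sin(theta) dr dtheta dphi.
R, n = 1.0, 200
dr, dth = R / n, math.pi / n

vol = 0.0
for i in range(n):
    r = (i + 0.5) * dr              # midpoint rule in r
    for j in range(n):
        th = (j + 0.5) * dth        # midpoint rule in theta
        # the integrand is independent of phi, so the phi integral gives 2*pi
        vol += r * r * math.sin(th) * dr * dth * 2 * math.pi

print(abs(vol - 4 / 3 * math.pi * R**3) < 1e-3)
```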


Differential operators in three dimensions

Since these operations are common in application, all vector components in this section are presented with respect to the normalised basis: F_i = \mathbf F \cdot \hat{\mathbf e}_i. For a scalar field \phi and a vector field \mathbf F, the differential operators are:
:\nabla \phi = \sum_i \frac{1}{h_i} \frac{\partial \phi}{\partial q^i} \hat{\mathbf e}_i
:\nabla \cdot \mathbf F = \frac{1}{h_1 h_2 h_3} \left[ \frac{\partial}{\partial q^1} \left(F_1 h_2 h_3\right) + \frac{\partial}{\partial q^2} \left(F_2 h_1 h_3\right) + \frac{\partial}{\partial q^3} \left(F_3 h_1 h_2\right) \right]
:\nabla \times \mathbf F = \frac{1}{h_1 h_2 h_3} \begin{vmatrix} h_1 \hat{\mathbf e}_1 & h_2 \hat{\mathbf e}_2 & h_3 \hat{\mathbf e}_3 \\ \dfrac{\partial}{\partial q^1} & \dfrac{\partial}{\partial q^2} & \dfrac{\partial}{\partial q^3} \\ h_1 F_1 & h_2 F_2 & h_3 F_3 \end{vmatrix}
:\nabla^2 \phi = \frac{1}{h_1 h_2 h_3} \sum_i \frac{\partial}{\partial q^i} \left( \frac{h_1 h_2 h_3}{h_i^2} \frac{\partial \phi}{\partial q^i} \right)
The above expressions can be written in a more compact form using the Levi-Civita symbol \epsilon_{ijk} and the Jacobian determinant J = h_1 h_2 h_3, assuming summation over repeated indices; for example, the divergence becomes
:\nabla \cdot \mathbf F = \frac{1}{J} \frac{\partial}{\partial q^i} \left( \frac{J}{h_i} F_i \right)
Also notice the gradient of a scalar field can be expressed in terms of the Jacobian matrix J containing canonical partial derivatives:
:\mathbf J = \left[\frac{\partial \phi}{\partial q^1}, \frac{\partial \phi}{\partial q^2}, \frac{\partial \phi}{\partial q^3}\right]
upon a change of basis:
:\nabla \phi = \mathbf R \, \mathbf S \, \mathbf J^{\mathrm T}
where the rotation and scaling matrices are:
:\mathbf R = \left[\hat{\mathbf e}_1, \hat{\mathbf e}_2, \hat{\mathbf e}_3\right]
:\mathbf S = \mathrm{diag}\left(h_1^{-1}, h_2^{-1}, h_3^{-1}\right).
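The divergence formula can be tested on a field with a known divergence: for the radial field F = ''r'' ê''r'' in spherical coordinates, ∇ · F = 3 everywhere. A minimal sketch using only the standard library; the helper names are illustrative:

```python
import math

# Spherical scale factors at q = (r, theta, phi).
def h(q):
    r, th, ph = q
    return [1.0, r, r * math.sin(th)]

# Normalized components of the radial field F = r ê_r (its divergence is 3).
def F(q):
    return [q[0], 0.0, 0.0]

# Divergence in orthogonal coordinates:
# div F = (1/J) * sum_i d/dq^i [ F_i * J / h_i ],  with J = h1 h2 h3,
# the derivatives approximated by central differences.
def divergence(q, d=1e-6):
    J = math.prod(h(q))
    total = 0.0
    for i in range(3):
        qp = list(q); qp[i] += d
        qm = list(q); qm[i] -= d
        fp = F(qp)[i] * math.prod(h(qp)) / h(qp)[i]
        fm = F(qm)[i] * math.prod(h(qm)) / h(qm)[i]
        total += (fp - fm) / (2 * d)
    return total / J

print(abs(divergence([2.0, 0.7, 1.3]) - 3.0) < 1e-6)
```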


Table of orthogonal coordinates

Besides the usual Cartesian coordinates, several others are tabulated below (M.R. Spiegel, S. Lipschutz, D. Spellman, ''Vector Analysis'' (2nd Edition), Schaum's Outlines, McGraw Hill (USA), 2009). Interval notation is used for compactness in the coordinates column.


See also

* Curvilinear coordinates
* Geodetic coordinates
* Tensor
* Vector field
* Skew coordinates




References

* Korn GA and Korn TM. (1961) ''Mathematical Handbook for Scientists and Engineers'', McGraw-Hill, pp. 164–182.
* Margenau H. and Murphy GM. (1956) ''The Mathematics of Physics and Chemistry'', 2nd ed., Van Nostrand, pp. 172–192.
* Leonid P. Lebedev and Michael J. Cloud (2003) ''Tensor Analysis'', pp. 81–88.