In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal (or perpendicular along a line) unit vectors. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.


Intuitive overview

The construction of orthogonality of vectors is motivated by a desire to extend the intuitive notion of perpendicular vectors to higher-dimensional spaces. In the Cartesian plane, two vectors are said to be ''perpendicular'' if the angle between them is 90° (i.e. if they form a right angle). This definition can be formalized in Cartesian space by defining the dot product and specifying that two vectors in the plane are orthogonal if their dot product is zero. Similarly, the construction of the norm of a vector is motivated by a desire to extend the intuitive notion of the length of a vector to higher-dimensional spaces. In Cartesian space, the ''norm'' of a vector is the square root of the vector dotted with itself. That is,
: \| \mathbf{x} \| = \sqrt{\mathbf{x} \cdot \mathbf{x}}
Many important results in linear algebra deal with collections of two or more orthogonal vectors. But often, it is easier to deal with vectors of unit length; that is, it often simplifies things to consider only vectors whose norm equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be given a special name. Two vectors which are orthogonal and of length 1 are said to be ''orthonormal''.
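As a minimal numerical sketch (not part of the original article), the dot product and norm above can be checked directly with NumPy; the particular vectors chosen here are illustrative:

```python
import numpy as np

# Check orthonormality of a pair of vectors in the Cartesian plane.
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

dot = np.dot(u, v)              # orthogonality: the dot product should be 0
norm_u = np.sqrt(np.dot(u, u))  # norm as the square root of the vector dotted with itself
norm_v = np.linalg.norm(v)      # the equivalent built-in norm

print(dot, norm_u, norm_v)      # 0.0 1.0 1.0  ->  u and v are orthonormal
```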


Simple example

What does a pair of orthonormal vectors in 2-D Euclidean space look like? Let u = (x_1, y_1) and v = (x_2, y_2). Consider the restrictions on x_1, x_2, y_1, y_2 required to make u and v form an orthonormal pair.
* From the orthogonality restriction, u • v = 0.
* From the unit length restriction on u, ||u|| = 1.
* From the unit length restriction on v, ||v|| = 1.
Expanding these terms gives 3 equations:
# x_1 x_2 + y_1 y_2 = 0
# \sqrt{x_1^2 + y_1^2} = 1
# \sqrt{x_2^2 + y_2^2} = 1
Converting from Cartesian to polar coordinates, and considering Equation (2) and Equation (3), immediately gives the result r_1 = r_2 = 1. In other words, requiring the vectors be of unit length restricts the vectors to lie on the unit circle. After substitution, Equation (1) becomes \cos \theta_1 \cos \theta_2 + \sin \theta_1 \sin \theta_2 = 0. Rearranging gives \tan \theta_1 = - \cot \theta_2. Using a trigonometric identity to convert the cotangent term gives
: \tan ( \theta_1 ) = \tan \left( \theta_2 + \tfrac{\pi}{2} \right)
: \Rightarrow \theta_1 = \theta_2 + \tfrac{\pi}{2}
It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.
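The conclusion above can be verified numerically; the sketch below (not from the original article, with an arbitrarily chosen angle) checks that two unit-circle radii whose angles differ by π/2 are orthonormal:

```python
import numpy as np

# Two radii of the unit circle whose angles differ by 90 degrees form an orthonormal pair.
theta2 = 0.7                        # arbitrary angle, chosen for illustration
theta1 = theta2 + np.pi / 2

u = np.array([np.cos(theta1), np.sin(theta1)])
v = np.array([np.cos(theta2), np.sin(theta2)])

print(np.isclose(np.dot(u, v), 0.0))       # True: orthogonal
print(np.isclose(np.linalg.norm(u), 1.0))  # True: unit length
print(np.isclose(np.linalg.norm(v), 1.0))  # True: unit length
```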


Definition

Let \mathcal{V} be an inner-product space. A set of vectors
: \left\{ u_1, u_2, \ldots, u_n, \ldots \right\} \subseteq \mathcal{V}
is called orthonormal if and only if
: \forall i, j : \langle u_i, u_j \rangle = \delta_{ij}
where \delta_{ij} is the Kronecker delta and \langle \cdot, \cdot \rangle is the inner product defined over \mathcal{V}.
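The Kronecker delta condition says that the matrix of pairwise inner products is the identity. A small numerical sketch of this (not from the original article; the rotation angle is an arbitrary choice):

```python
import numpy as np

# For an orthonormal set, the matrix of inner products <u_i, u_j> equals the
# Kronecker delta, i.e. the identity matrix.
theta = 0.3                                        # arbitrary rotation angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # rows u_1, u_2 are orthonormal

gram = U @ U.T                                     # gram[i, j] = <u_i, u_j>
print(np.allclose(gram, np.eye(2)))                # True
```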


Significance

Orthonormal sets are not especially significant on their own. However, they display certain features that make them fundamental in exploring the notion of diagonalizability of certain operators on vector spaces.


Properties

Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.
* Theorem. If \{ \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n \} is an orthonormal list of vectors, then for all scalars a_1, a_2, \ldots, a_n,
: \| a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + \cdots + a_n \mathbf{e}_n \|^2 = | a_1 |^2 + | a_2 |^2 + \cdots + | a_n |^2
* Theorem. Every orthonormal list of vectors is linearly independent.
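The first theorem can be checked numerically for a random orthonormal list; the sketch below (not from the original article) obtains orthonormal vectors from a QR decomposition:

```python
import numpy as np

# Check the identity ||a_1 e_1 + ... + a_n e_n||^2 == |a_1|^2 + ... + |a_n|^2
# for an orthonormal list e_1, ..., e_n and arbitrary coefficients a_1, ..., a_n.
rng = np.random.default_rng(0)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # columns of Q are orthonormal
a = rng.standard_normal(n)                         # arbitrary coefficients

combo = Q @ a                                      # a_1 e_1 + ... + a_n e_n
print(np.isclose(np.linalg.norm(combo) ** 2, np.sum(np.abs(a) ** 2)))   # True
```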


Existence

* Gram-Schmidt theorem. If \{ v_1, v_2, \ldots, v_n \} is a linearly independent list of vectors in an inner-product space \mathcal{V}, then there exists an orthonormal list of vectors \{ e_1, e_2, \ldots, e_n \} in \mathcal{V} such that ''span''(e_1, e_2, ..., e_n) = ''span''(v_1, v_2, ..., v_n).
Proof of the Gram-Schmidt theorem is constructive, and is discussed at length elsewhere. The Gram-Schmidt theorem, together with the axiom of choice, guarantees that every vector space admits an orthonormal basis. This is possibly the most significant use of orthonormality, as this fact permits operators on inner-product spaces to be discussed in terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized by the spectral theorem.
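A minimal sketch of the constructive step behind the Gram-Schmidt theorem is given below (not from the original article; the function name and numerical tolerance are illustrative choices). Each vector has its components along the previously built orthonormal vectors removed and is then normalized:

```python
import numpy as np

def gram_schmidt(V):
    """Given linearly independent rows v_1, ..., v_n of V, return orthonormal
    rows e_1, ..., e_n with the same span (classical Gram-Schmidt)."""
    basis = []
    for v in V:
        w = v - sum(np.dot(v, e) * e for e in basis)  # remove components along earlier e_i
        norm = np.linalg.norm(w)
        if norm > 1e-12:                              # skip (nearly) dependent vectors
            basis.append(w / norm)                    # normalize to unit length
    return np.array(basis)

V = np.array([[3.0, 1.0], [2.0, 2.0]])
E = gram_schmidt(V)
print(np.allclose(E @ E.T, np.eye(len(E))))           # True: rows of E are orthonormal
```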


Examples


Standard basis

The standard basis for the coordinate space F^n is \{ e_1, e_2, \ldots, e_n \}, where
: e_1 = (1, 0, \ldots, 0)
: e_2 = (0, 1, \ldots, 0)
: \vdots
: e_n = (0, 0, \ldots, 1)
Any two vectors e_i, e_j with i ≠ j are orthogonal, and all vectors are clearly of unit length. So \{ e_1, e_2, \ldots, e_n \} forms an orthonormal basis.
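As an illustrative check (not from the original article), the rows of the identity matrix in NumPy are exactly these standard basis vectors:

```python
import numpy as np

# The rows of the identity matrix are the standard basis vectors e_1, ..., e_n of R^n.
n = 3
E = np.eye(n)                            # e_1 = (1, 0, 0), e_2 = (0, 1, 0), e_3 = (0, 0, 1)
print(np.allclose(E @ E.T, np.eye(n)))   # True: pairwise inner products equal delta_ij
```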


Real-valued functions

When referring to real-valued functions, usually the L² inner product is assumed unless otherwise stated. Two functions \phi(x) and \psi(x) are orthonormal over the interval [a, b] if
: (1) \quad \langle \phi(x), \psi(x) \rangle = \int_a^b \phi(x) \psi(x) \, dx = 0, \quad and
: (2) \quad \| \phi(x) \|_2 = \| \psi(x) \|_2 = \left[ \int_a^b | \phi(x) |^2 \, dx \right]^{\frac{1}{2}} = \left[ \int_a^b | \psi(x) |^2 \, dx \right]^{\frac{1}{2}} = 1.
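Both conditions can be checked by numerical quadrature. The sketch below (not from the original article) uses SciPy and two illustrative functions that are orthonormal over [−π, π]:

```python
import numpy as np
from scipy.integrate import quad

# Check that phi(x) = sin(x)/sqrt(pi) and psi(x) = cos(x)/sqrt(pi) are
# orthonormal over [-pi, pi] under the L^2 inner product.
phi = lambda x: np.sin(x) / np.sqrt(np.pi)
psi = lambda x: np.cos(x) / np.sqrt(np.pi)

inner, _ = quad(lambda x: phi(x) * psi(x), -np.pi, np.pi)    # condition (1)
norm_phi_sq, _ = quad(lambda x: phi(x) ** 2, -np.pi, np.pi)  # condition (2)
norm_psi_sq, _ = quad(lambda x: psi(x) ** 2, -np.pi, np.pi)

print(np.isclose(inner, 0.0),
      np.isclose(norm_phi_sq, 1.0),
      np.isclose(norm_psi_sq, 1.0))                          # True True True
```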


Fourier series

The Fourier series is a method of expressing a periodic function in terms of sinusoidal basis functions. Taking C[−π, π] to be the space of all real-valued functions continuous on the interval [−π, π] and taking the inner product to be
: \langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x) \, dx
it can be shown that
: \left\{ \frac{1}{\sqrt{2\pi}}, \frac{\sin(x)}{\sqrt{\pi}}, \frac{\sin(2x)}{\sqrt{\pi}}, \ldots, \frac{\sin(nx)}{\sqrt{\pi}}, \frac{\cos(x)}{\sqrt{\pi}}, \frac{\cos(2x)}{\sqrt{\pi}}, \ldots, \frac{\cos(nx)}{\sqrt{\pi}} \right\}, \quad n \in \mathbb{N}
forms an orthonormal set. However, this is of little consequence, because C[−π, π] is infinite-dimensional, and a finite set of vectors cannot span it. But removing the restriction that n be finite makes the set dense in C[−π, π] and therefore an orthonormal basis of C[−π, π].
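A sketch of a numerical check (not from the original article) forms the matrix of inner products for the first few of these functions by quadrature and compares it with the identity:

```python
import numpy as np
from scipy.integrate import quad

# Matrix of L^2 inner products over [-pi, pi] for a few Fourier basis functions;
# for an orthonormal set it should be (approximately) the identity.
funcs = [
    lambda x: 1.0 / np.sqrt(2 * np.pi),
    lambda x: np.sin(x) / np.sqrt(np.pi),
    lambda x: np.sin(2 * x) / np.sqrt(np.pi),
    lambda x: np.cos(x) / np.sqrt(np.pi),
    lambda x: np.cos(2 * x) / np.sqrt(np.pi),
]

gram = np.array([[quad(lambda x, f=f, g=g: f(x) * g(x), -np.pi, np.pi)[0]
                  for g in funcs] for f in funcs])
print(np.allclose(gram, np.eye(len(funcs)), atol=1e-8))   # True
```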


See also

* Orthogonalization

