Eigenmoments

EigenMoments are a set of orthogonal, noise-robust moments that are invariant to rotation, scaling and translation and sensitive to the distribution of the data. They are applied in signal processing and computer vision as descriptors of a signal or image, and the descriptors can subsequently be used for classification. EigenMoments are obtained by performing orthogonalization, via eigen analysis, on geometric moments (Hu, 1962).


Framework summary

EigenMoments are computed by performing eigen analysis on the moment space of an image, maximizing the signal-to-noise ratio (SNR) of the feature space in the form of a Rayleigh quotient. This approach has several benefits in image processing applications:
# The dependency of the moments in the moment space on the distribution of the images being transformed ensures decorrelation of the final feature space after eigen analysis on the moment space.
# The ability of EigenMoments to take the distribution of the images into account makes them versatile and adaptable for different genres of images.
# The generated moment kernels are orthogonal, so analysis in the moment space becomes easier. Transformation with orthogonal moment kernels into the moment space is analogous to projecting the image onto a number of orthogonal axes.
# Noisy components can be removed. This makes EigenMoments robust for classification applications.
# Optimal information compaction can be obtained, so only a small number of moments are needed to characterize the images.


Problem formulation

Assume that a signal vector s \in \mathcal{R}^n is taken from a certain distribution having correlation C \in \mathcal{R}^{n \times n}, i.e. C = E[ss^T], where E[\cdot] denotes expected value.

The dimension of the signal space, n, is often too large to be useful for practical applications such as pattern classification, so the signal space must be transformed into a space of lower dimensionality. This is performed by a two-step linear transformation:

q = W^T X^T s,

where q = [q_1,\dots,q_k]^T \in \mathcal{R}^k is the transformed signal, X = [x_1,\dots,x_m] \in \mathcal{R}^{n \times m} is a fixed transformation matrix which transforms the signal into the moment space, and W = [w_1,\dots,w_k] \in \mathcal{R}^{m \times k} is the transformation matrix to be determined by maximizing the SNR of the feature space resided by q. In the case of Geometric Moments, X consists of the monomials. If m = k = n, a full-rank transformation would result; however, usually m \leq n and k \leq m. This is especially the case when n is of high dimension.

Finding W amounts to maximizing the SNR of the feature space,

SNR_{transform} = \frac{w^T X^T C X w}{w^T X^T N X w},

where N is the correlation matrix of the noise signal. The problem can thus be formulated as

\{w_1,\dots,w_k\} = \underset{w}{\operatorname{argmax}} \frac{w^T X^T C X w}{w^T X^T N X w},

subject to the constraints

w_i^T X^T N X w_j = \delta_{ij},

where \delta_{ij} is the Kronecker delta. It can be observed that this maximization is a Rayleigh quotient: letting A = X^T C X and B = X^T N X, it can be written as

\{w_1,\dots,w_k\} = \underset{w}{\operatorname{argmax}} \frac{w^T A w}{w^T B w}, \quad w_i^T B w_j = \delta_{ij}.
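As a numeric illustration (not from the source; the data, sizes and white-noise model are assumptions), the SNR of a single candidate direction w in the moment space is simply the Rayleigh quotient of A = X^T C X and B = X^T N X:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 6
# Monomial moment basis: column p of X holds x^p sampled on [-1, 1].
X = np.vander(np.linspace(-1, 1, n), m, increasing=True)

S = rng.random((n, 200))          # 200 hypothetical sample signals
C = (S @ S.T) / S.shape[1]        # empirical correlation E[s s^T]
N = np.eye(n)                     # white-noise correlation (an assumption)

A = X.T @ C @ X                   # signal energy in the moment space
B = X.T @ N @ X                   # noise energy in the moment space

w = rng.standard_normal(m)        # an arbitrary candidate direction
snr = (w @ A @ w) / (w @ B @ w)   # Rayleigh quotient to be maximized over w
print(snr)
```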


Rayleigh quotient

Optimization of the Rayleigh quotient (Strang, 1980) has the form

\max_w R(w) = \max_w \frac{w^T A w}{w^T B w},

where A and B are both symmetric and B is positive definite and therefore invertible. Scaling w does not change the value of the objective function, hence an additional scalar constraint w^T B w = 1 can be imposed on w without losing any solution when the objective function is optimized. This constrained optimization problem can be solved using a Lagrange multiplier:

\max_w w^T A w subject to w^T B w = 1,

\max_w \mathcal{L}(w) = \max_w (w^T A w - \lambda w^T B w).

Equating the first derivative to zero gives

A w = \lambda B w,

which is an instance of the Generalized Eigenvalue Problem (GEP). For any pair (w, \lambda) that is a solution to this equation, w is called a generalized eigenvector and \lambda is called a generalized eigenvalue. Finding a w and \lambda that satisfy this equation produces the result which optimizes the Rayleigh quotient; in other words, maximizing the Rayleigh quotient reduces to solving the Generalized Eigenvalue Problem.

Dimension reduction can be performed by simply choosing the first k components w_i, i = 1,\dots,k, with the highest values of R(w) out of the m components, and discarding the rest. This transformation can be interpreted as rotating and scaling the moment space, transforming it into a feature space with maximized SNR; consequently, the first k components are the components with the k highest SNR values.

Another way to arrive at this solution is to use the concept of simultaneous diagonalization instead of the Generalized Eigenvalue Problem.
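The maximization can be sketched with SciPy's generalized symmetric eigensolver; here A and B are small synthetic symmetric/SPD stand-ins for X^T C X and X^T N X (an illustration under assumed data, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
m, k = 6, 4
Q = rng.standard_normal((m, m))
A = Q @ Q.T                              # symmetric "signal" matrix
B = np.eye(m) + np.diag(rng.random(m))   # SPD "noise" matrix

# eigh(A, B) solves A w = lambda B w; eigenvalues come back in ascending
# order, so the last k eigenvectors maximize the Rayleigh quotient R(w).
eigvals, eigvecs = eigh(A, B)
W = eigvecs[:, ::-1][:, :k]

# eigh returns B-orthonormal eigenvectors, so W^T B W = I, which is exactly
# the constraint w_i^T B w_j = delta_ij from the problem formulation.
assert np.allclose(W.T @ B @ W, np.eye(k))
```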


Simultaneous diagonalization

# Let A = X^T C X and B = X^T N X as mentioned earlier. We can write W as two separate transformation matrices: W = W_1 W_2.
# W_1 can be found by first diagonalizing B: P^T B P = D_B, where D_B is a diagonal matrix sorted in increasing order. Since B is positive definite, D_B > 0. We can discard those eigenvalues that are large and retain those close to 0, since this means the energy of the noise is close to 0 in this space; at this stage it is also possible to discard those eigenvectors that have large eigenvalues. Let \hat{P} be the first k columns of P; then \hat{P}^T B \hat{P} = \hat{D}_B, where \hat{D}_B is the k \times k principal submatrix of D_B.
# Let W_1 = \hat{P} \hat{D}_B^{-1/2}, and hence W_1^T B W_1 = (\hat{P} \hat{D}_B^{-1/2})^T B (\hat{P} \hat{D}_B^{-1/2}) = I. W_1 whitens B and reduces the dimensionality from m to k. The transformed space resided by q' = W_1^T X^T s is called the noise space.
# Then we diagonalize W_1^T A W_1: W_2^T W_1^T A W_1 W_2 = D_A, where W_2^T W_2 = I. D_A is the matrix with the eigenvalues of W_1^T A W_1 on its diagonal. We may retain all the eigenvalues and their corresponding eigenvectors, since most of the noise has already been discarded in the previous step.
# Finally the transformation is given by W = W_1 W_2, where W diagonalizes both the numerator and the denominator of the SNR:

W^T A W = D_A, \quad W^T B W = I,

and the transformation of the signal s is defined as q = W^T X^T s = W_2^T W_1^T X^T s.
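A compact sketch of these steps (again with synthetic A and B as placeholders; the truncation retains the k smallest noise eigenvalues, as described above):

```python
import numpy as np

def simultaneous_diagonalization(A, B, k):
    # Diagonalize B (np.linalg.eigh sorts eigenvalues ascending, so the
    # first k columns are the low-noise directions to retain).
    d, P = np.linalg.eigh(B)
    # W1 = P_hat D_hat^{-1/2} whitens B and reduces the dimension m -> k.
    W1 = P[:, :k] / np.sqrt(d[:k])
    # Diagonalize the whitened signal matrix W1^T A W1.
    d_A, W2 = np.linalg.eigh(W1.T @ A @ W1)
    # W = W1 W2 gives W^T A W = D_A and W^T B W = I.
    return W1 @ W2, d_A

rng = np.random.default_rng(0)
m, k = 6, 4
Q = rng.standard_normal((m, m))
A, B = Q @ Q.T, np.eye(m) + np.diag(rng.random(m))
W, d_A = simultaneous_diagonalization(A, B, k)
assert np.allclose(W.T @ B @ W, np.eye(k))
assert np.allclose(W.T @ A @ W, np.diag(d_A))
```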


Information loss

To find the information loss when we discard some of the eigenvalues and eigenvectors, we can perform the following analysis:

\begin{array}{lll} \eta &=& 1 - \frac{\operatorname{trace}(W_1^T A W_1)}{\operatorname{trace}(D_B^{-1/2} P^T A P D_B^{-1/2})}\\ &=& 1 - \frac{\operatorname{trace}(\hat{D}_B^{-1/2} \hat{P}^T A \hat{P} \hat{D}_B^{-1/2})}{\operatorname{trace}(D_B^{-1/2} P^T A P D_B^{-1/2})} \end{array}
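Assuming the reconstruction of \eta above, the ratio can be computed directly from the eigendecomposition of B (a hypothetical helper, reusing the names of the previous sketch):

```python
import numpy as np

def information_loss(A, B, k):
    d, P = np.linalg.eigh(B)
    W1 = P[:, :k] / np.sqrt(d[:k])   # truncated whitening  P_hat D_hat^{-1/2}
    Wf = P / np.sqrt(d)              # full whitening        P D_B^{-1/2}
    return 1.0 - np.trace(W1.T @ A @ W1) / np.trace(Wf.T @ A @ Wf)
```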


Eigenmoments

Eigenmoments are derived by applying the above framework to Geometric Moments. They can be derived for both 1D and 2D signals.


1D signal

If we let X = [1, x, x^2, \dots, x^{m-1}], i.e. the monomials, then after the transformation X^T we obtain the Geometric Moments, denoted by the vector M, of the signal s = [s(x)], i.e. M = X^T s.

In practice it is difficult to estimate the correlation of the signal due to an insufficient number of samples, therefore parametric approaches are utilized. One such model can be defined as

r(x_1, x_2) = r(0,0) e^{-c|x_1 - x_2|},

where r(0,0) = E[\operatorname{tr}(ss^T)]. This model of correlation can be replaced by other models; however, it covers general natural images. Since r(0,0) does not affect the maximization, it can be dropped:

A = X^T C X = \int_{-1}^{1}\int_{-1}^{1} x_1^j x_2^i e^{-c|x_1 - x_2|} dx_1 dx_2.

The correlation of the noise can be modelled as \sigma_n^2 \delta(x_1, x_2), where \sigma_n^2 is the energy of the noise. Again, \sigma_n^2 can be dropped because this constant does not have any effect on the maximization problem:

B = X^T N X = \int_{-1}^{1}\int_{-1}^{1} x_1^j x_2^i \delta(x_1, x_2) dx_1 dx_2 = \int_{-1}^{1} x_1^{i+j} dx_1 = X^T X.

Using the computed A and B and applying the algorithm discussed in the previous section, we find W and the set of transformed monomials

\Phi = [\phi_1, \dots, \phi_k] = XW,

which produces the moment kernels of EigenMoments. These moment kernels decorrelate the correlation in the image,

\Phi^T C \Phi = (XW)^T C (XW) = D_C,

and are orthogonal:

\begin{array}{lll} \Phi^T \Phi &=& (XW)^T (XW)\\ &=& W^T X^T X W\\ &=& W^T X^T N X W\\ &=& W^T B W\\ &=& I. \end{array}
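The pieces above can be assembled numerically. The following sketch (an illustration, not the authors' implementation) computes A by quadrature, B in closed form, and solves the generalized eigenproblem, using the parameter values of the example that follows:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.linalg import eigh

c, m, k = 0.5, 6, 4   # correlation decay, moment-space and feature-space dims

# A_{ij} = \int\int x1^j x2^i exp(-c |x1 - x2|) dx1 dx2 over [-1, 1]^2
A = np.array([[dblquad(lambda x1, x2: x1**j * x2**i * np.exp(-c * abs(x1 - x2)),
                       -1, 1, -1, 1)[0]
               for j in range(m)] for i in range(m)])

# B = X^T X for the monomials: B_{ij} = \int_{-1}^{1} x^{i+j} dx,
# which is 2/(i+j+1) when i+j is even and 0 when i+j is odd.
B = np.array([[2.0 / (i + j + 1) if (i + j) % 2 == 0 else 0.0
               for j in range(m)] for i in range(m)])

# Solve A w = lambda B w and keep the k highest-SNR kernels.
eigvals, eigvecs = eigh(A, B)
W = eigvecs[:, ::-1][:, :k]

# Column j of W holds the coefficients of the moment kernel
# phi_j(x) = w_0 + w_1 x + ... + w_{m-1} x^{m-1}.
print(np.round(W, 4))
```

Up to the sign and normalization conventions of the eigensolver, the columns of W correspond to kernels such as those listed in the following example.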


Example computation

Taking c = 0.5, the dimension of the moment space as m = 6 and the dimension of the feature space as k = 4, we will have:

W = \left( \begin{array}{cccc} 0.0 & 0.0 & -0.7745 & -0.8960 \\ 2.8669 & -4.4622 & 0.0 & 0.0 \\ 0.0 & 0.0 & 7.9272 & 2.4523 \\ -4.0225 & 20.6505 & 0.0 & 0.0 \\ 0.0 & 0.0 & -9.2789 & -0.1239 \\ -0.5092 & -18.4582 & 0.0 & 0.0 \end{array} \right)

and

\begin{array}{lll} \phi_1 &=& 2.8669x - 4.0225x^3 - 0.5092x^5 \\ \phi_2 &=& -4.4622x + 20.6505x^3 - 18.4582x^5 \\ \phi_3 &=& -0.7745 + 7.9272x^2 - 9.2789x^4 \\ \phi_4 &=& -0.8960 + 2.4523x^2 - 0.1239x^4 \end{array}


2D signal

The derivation for a 2D signal is the same as for a 1D signal, except that conventional Geometric Moments are directly employed to obtain the set of 2D EigenMoments. The definition of the Geometric Moment of order (p+q) for a 2D image signal f(x,y) is

m_{pq} = \int_{-1}^{1}\int_{-1}^{1} x^p y^q f(x,y) dx dy,

which can be collected into the matrix M = \{m_{j,i}\}_{j,i=0}^{m-1}. The set of 2D EigenMoments is then

\Omega = W^T M W,

where \Omega = \{\Omega_{j,i}\}_{j,i=0}^{k-1} is a matrix that contains the set of EigenMoments:

\Omega_{j,i} = \sum_{r=0}^{m-1} \sum_{s=0}^{m-1} w_{r,j} w_{s,i} m_{r,s}.
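A hypothetical sketch of the 2D case: approximate the geometric moments of a sampled image by a Riemann sum, then apply the kernel matrix W (m x k, e.g. from the 1D sketch above) on both sides:

```python
import numpy as np

def eigenmoments_2d(f, W):
    """2D EigenMoments Omega = W^T M W of an image f sampled on [-1, 1]^2."""
    n = f.shape[0]
    x = np.linspace(-1, 1, n)
    X = np.vander(x, W.shape[0], increasing=True)  # columns 1, x, ..., x^{m-1}
    dx = x[1] - x[0]
    # M[p, q] approximates m_{pq} by a Riemann sum (up to the row/column
    # convention chosen for f).
    M = X.T @ f @ X * dx * dx
    return W.T @ M @ W
```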


EigenMoment invariants (EMI)

In order to obtain a set of moment invariants, we can use normalized Geometric Moments \hat{M} instead of M. Normalized Geometric Moments are invariant to rotation, scaling and translation, and are defined by

\begin{array}{lll} \hat{m}_{pq} &=& \alpha^{p+q+2} \int_{-1}^{1}\int_{-1}^{1} [(x - x^c)\cos(\theta) + (y - y^c)\sin(\theta)]^p \\ && \times\, [-(x - x^c)\sin(\theta) + (y - y^c)\cos(\theta)]^q \\ && \times\, f(x,y) dx dy, \end{array}

where (x^c, y^c) = (m_{10}/m_{00}, m_{01}/m_{00}) is the centroid of the image f(x,y), and

\begin{array}{lll} \alpha &=& [m_{00}^{(r)}/m_{00}]^{1/2},\\ \theta &=& \frac{1}{2}\tan^{-1}\frac{2 m_{11}}{m_{20} - m_{02}}. \end{array}

Here m_{00}^{(r)} is a scaling factor depending on the image; it is usually set to 1 for binary images.
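A small sketch of the normalization parameters entering \hat{m}_{pq} (a hypothetical helper following the formulas above; np.arctan2 is used instead of a plain arctangent for quadrant robustness):

```python
import numpy as np

def normalization_params(m00, m10, m01, m11, m20, m02, m00_r=1.0):
    """Centroid, scale alpha and orientation theta from low-order moments."""
    xc, yc = m10 / m00, m01 / m00                    # centroid (x^c, y^c)
    alpha = np.sqrt(m00_r / m00)                     # scale normalization
    theta = 0.5 * np.arctan2(2.0 * m11, m20 - m02)   # orientation
    return xc, yc, alpha, theta
```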


See also

* Computer vision
* Signal processing
* Image moment


References

* M. K. Hu, "Visual Pattern Recognition by Moment Invariants", IRE Trans. Info. Theory, vol. IT-8, pp. 179–187, 1962.
* G. Strang, Linear Algebra and Its Applications, 2nd ed., Academic Press, New York, 1980.


External links


implementation of EigenMoments in Matlab