Higher-order singular value decomposition

In multilinear algebra, the higher-order singular value decomposition (HOSVD) of a tensor is a specific orthogonal Tucker decomposition. It may be regarded as one generalization of the matrix singular value decomposition. It has applications in computer vision, computer graphics, machine learning, scientific computing, and signal processing. Some aspects can be traced as far back as F. L. Hitchcock in 1928, but it was L. R. Tucker who developed the general Tucker decomposition for third-order tensors in the 1960s. The decomposition was further advocated by L. De Lathauwer ''et al.'' in their multilinear SVD work, which employs the power method, and by Vasilescu and Terzopoulos, who developed the M-mode SVD. The term HOSVD was coined by Lieven De Lathauwer, but the algorithm commonly referred to in the literature as the HOSVD, and attributed to either Tucker or De Lathauwer, was developed by Vasilescu and Terzopoulos under the name M-mode SVD (Vasilescu and Terzopoulos 2002, 2003, 2005); it is a particular orthogonal Tucker decomposition that is suitable for parallel computation.
Robust and L1-norm-based variants of HOSVD have also been proposed.


Definition

For the purpose of this article, the abstract tensor \mathcal{A} is assumed to be given in coordinates with respect to some basis as an M-way array, also denoted by \mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}, where M is the number of modes and the order of the tensor. \mathbb{C} is the field of complex numbers, which includes both the real numbers \mathbb{R} and the pure imaginary numbers.

Let \mathbf{U}_m \in \mathbb{C}^{I_m \times I_m} be a unitary matrix containing a basis of the left singular vectors of the standard mode-m flattening \mathcal{A}_{[m]} of \mathcal{A}, such that the jth column \mathbf{u}_j of \mathbf{U}_m corresponds to the jth largest singular value of \mathcal{A}_{[m]}. Observe that the mode/factor matrix \mathbf{U}_m does not depend on the specific definition of the mode-m flattening. By the properties of the multilinear multiplication, we have

\begin{align} \mathcal{A} &= \mathcal{A} \times (\mathbf{I}, \mathbf{I}, \ldots, \mathbf{I}) \\ &= \mathcal{A} \times (\mathbf{U}_1 \mathbf{U}_1^H, \mathbf{U}_2 \mathbf{U}_2^H, \ldots, \mathbf{U}_M \mathbf{U}_M^H) \\ &= \left(\mathcal{A} \times (\mathbf{U}_1^H, \mathbf{U}_2^H, \ldots, \mathbf{U}_M^H)\right) \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M), \end{align}

where \cdot^H denotes the conjugate transpose. The second equality holds because the \mathbf{U}_m's are unitary matrices. Define now the core tensor

\mathcal{S} := \mathcal{A} \times (\mathbf{U}_1^H, \mathbf{U}_2^H, \ldots, \mathbf{U}_M^H).

Then, the HOSVD of \mathcal{A} is the decomposition

\mathcal{A} = \mathcal{S} \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M).

The above construction shows that every tensor has a HOSVD.
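
To make these operations concrete, the following is a minimal numpy sketch of the standard mode-m flattening and the multilinear (mode-m) product, together with a check of the identity \mathcal{A} = \mathcal{A} \times_m (\mathbf{U}_m \mathbf{U}_m^H) used above. The helper names unfold and mode_dot are illustrative choices, not standard library functions.

```python
import numpy as np

def unfold(A, m):
    """Standard mode-m flattening A_[m]: mode m indexes the rows."""
    return np.moveaxis(A, m, 0).reshape(A.shape[m], -1)

def mode_dot(A, U, m):
    """Multilinear (mode-m) product A x_m U."""
    Am = np.moveaxis(A, m, 0)
    out = U @ Am.reshape(Am.shape[0], -1)
    return np.moveaxis(out.reshape((U.shape[0],) + Am.shape[1:]), 0, m)

# Sanity check of the identity A = A x_m (U_m U_m^H) for a unitary U_m:
A = np.random.randn(4, 5, 6)
U1 = np.linalg.svd(unfold(A, 0))[0]   # full left singular vectors of A_[1]
assert np.allclose(A, mode_dot(A, U1 @ U1.conj().T, 0))
```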


Compact HOSVD

As in the case of the compact singular value decomposition of a matrix, it is also possible to consider a compact HOSVD, which is very useful in applications.

Assume that \mathbf{U}_m \in \mathbb{C}^{I_m \times R_m} is a matrix with unitary columns containing a basis of the left singular vectors corresponding to the nonzero singular values of the standard factor-m flattening \mathcal{A}_{[m]} of \mathcal{A}. Let the columns of \mathbf{U}_m be sorted such that the r_m-th column \mathbf{u}_{r_m} of \mathbf{U}_m corresponds to the r_m-th largest nonzero singular value of \mathcal{A}_{[m]}. Since the columns of \mathbf{U}_m form a basis for the image of \mathcal{A}_{[m]}, we have

\mathcal{A}_{[m]} = \mathbf{U}_m \mathbf{U}_m^H \mathcal{A}_{[m]} = \bigl( \mathcal{A} \times_m (\mathbf{U}_m \mathbf{U}_m^H) \bigr)_{[m]},

where the first equality is due to the properties of orthogonal projections (in the Hermitian inner product) and the last equality is due to the properties of multilinear multiplication. As flattenings are bijective maps and the above formula is valid for all m = 1, 2, \ldots, M, we find as before that

\begin{align} \mathcal{A} &= \mathcal{A} \times (\mathbf{U}_1 \mathbf{U}_1^H, \mathbf{U}_2 \mathbf{U}_2^H, \ldots, \mathbf{U}_M \mathbf{U}_M^H) \\ &= \left(\mathcal{A} \times (\mathbf{U}_1^H, \mathbf{U}_2^H, \ldots, \mathbf{U}_M^H)\right) \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M) \\ &= \mathcal{S} \times (\mathbf{U}_1, \mathbf{U}_2, \ldots, \mathbf{U}_M), \end{align}

where the core tensor \mathcal{S} is now of size R_1 \times R_2 \times \cdots \times R_M.
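
A minimal sketch of the compact HOSVD under the same assumptions, reusing the unfold and mode_dot helpers from the sketch under Definition. The tolerance-based rank cut-off is an implementation choice for deciding which singular values count as nonzero in floating point.

```python
import numpy as np
# unfold and mode_dot as in the sketch under "Definition".

def compact_hosvd(A, tol=1e-12):
    """Compact HOSVD: keep only the left singular vectors whose singular
    values are numerically nonzero, so U_m has R_m orthonormal columns."""
    U = []
    for m in range(A.ndim):
        Um, s, _ = np.linalg.svd(unfold(A, m), full_matrices=False)
        R_m = max(1, int(np.sum(s > tol * s[0])))  # numerical rank of A_[m]
        U.append(Um[:, :R_m])
    S = A
    for m, Um in enumerate(U):
        S = mode_dot(S, Um.conj().T, m)   # core tensor of size R_1 x ... x R_M
    return S, U

# A tensor built from a rank-2 factor has a correspondingly small core:
A = (np.random.randn(10, 2) @ np.random.randn(2, 8 * 9)).reshape(10, 8, 9)
S, U = compact_hosvd(A)
print(S.shape)   # (2, 8, 9): the mode-1 dimension collapses to 2
```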


Multilinear rank

The multilinear rank of \mathcal{A} is denoted rank-(R_1, R_2, \ldots, R_M). The multilinear rank is a tuple in \mathbb{N}^M where R_m := \mathrm{rank}(\mathcal{A}_{[m]}). Not all tuples in \mathbb{N}^M are multilinear ranks. The multilinear ranks are bounded by 1 \le R_m \le I_m and satisfy the constraint R_m \le \prod_{i \neq m} R_i. The compact HOSVD is a rank-revealing decomposition in the sense that the dimensions of its core tensor correspond with the components of the multilinear rank of the tensor.
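
A small sketch, reusing the unfold helper from the Definition section, that computes the multilinear rank as the tuple of matrix ranks of the mode-m flattenings.

```python
import numpy as np
# unfold as in the sketch under "Definition".

def multilinear_rank(A):
    """The tuple (R_1, ..., R_M) of matrix ranks of the mode-m flattenings."""
    return tuple(int(np.linalg.matrix_rank(unfold(A, m))) for m in range(A.ndim))

# A generic tensor attains the upper bound R_m = min(I_m, prod of other I_i):
print(multilinear_rank(np.random.randn(4, 5, 6)))   # (4, 5, 6) almost surely
```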


Interpretation

The following geometric interpretation is valid for both the full and compact HOSVD. Let (R_1, R_2, \ldots, R_M) be the multilinear rank of the tensor \mathcal{A}. Since \mathcal{S} \in \mathbb{C}^{R_1 \times R_2 \times \cdots \times R_M} is a multidimensional array, we can expand it as

\mathcal{S} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_M=1}^{R_M} s_{r_1 r_2 \cdots r_M} \mathbf{e}_{r_1} \otimes \mathbf{e}_{r_2} \otimes \cdots \otimes \mathbf{e}_{r_M},

where \mathbf{e}_{r_m} is the r_m-th standard basis vector of \mathbb{C}^{R_m}. By definition of the multilinear multiplication, it holds that

\mathcal{A} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_M=1}^{R_M} s_{r_1 r_2 \cdots r_M} \mathbf{u}_{r_1} \otimes \mathbf{u}_{r_2} \otimes \cdots \otimes \mathbf{u}_{r_M},

where the \mathbf{u}_{r_m} are the columns of \mathbf{U}_m \in \mathbb{C}^{I_m \times R_m}. It is easy to verify that B = \{\mathbf{u}_{r_1} \otimes \mathbf{u}_{r_2} \otimes \cdots \otimes \mathbf{u}_{r_M}\}_{r_1, \ldots, r_M} is an orthonormal set of tensors. This means that the HOSVD can be interpreted as a way to express the tensor \mathcal{A} with respect to a specifically chosen orthonormal basis B, with the coefficients given by the multidimensional array \mathcal{S}.
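
This interpretation can be verified numerically. The sketch below, reusing the helpers from the Definition section, checks that the coefficient of \mathcal{A} along one basis tensor of B equals the corresponding entry of the core tensor \mathcal{S}.

```python
import numpy as np
# unfold and mode_dot as in the sketch under "Definition".

A = np.random.randn(3, 4, 5)
U = [np.linalg.svd(unfold(A, m), full_matrices=False)[0] for m in range(A.ndim)]
S = A
for m, Um in enumerate(U):
    S = mode_dot(S, Um.conj().T, m)

# The coefficient of A along the orthonormal basis tensor
# u_{r_1} (x) u_{r_2} (x) u_{r_3} is its Hermitian inner product with A,
# and it equals the core entry s_{r_1 r_2 r_3}:
r1, r2, r3 = 0, 1, 2
basis = np.einsum('i,j,k->ijk', U[0][:, r1], U[1][:, r2], U[2][:, r3])
assert np.isclose(np.vdot(basis, A), S[r1, r2, r3])
```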


Computation

Let \mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M} be a tensor of multilinear rank (R_1, R_2, \ldots, R_M), where \mathbb{C} contains the reals \mathbb{R} as a subset.


Classic computation

The strategy for computing the multilinear SVD and the M-mode SVD was introduced in the 1960s by L. R. Tucker, further advocated by L. De Lathauwer ''et al.'', and by Vasilescu and Terzopoulos. The term HOSVD was coined by Lieven De Lathauwer, but the algorithm typically referred to in the literature as HOSVD was introduced by Vasilescu and Terzopoulos under the name M-mode SVD. It is a parallel computation that employs the matrix SVD to compute the orthonormal mode matrices.


M-mode SVD:

* For m = 1, \ldots, M, do the following:
*# Construct the mode-m flattening \mathcal{A}_{[m]};
*# Compute the (compact) singular value decomposition \mathcal{A}_{[m]} = \mathbf{U}_m \Sigma_m \mathbf{V}_m^T, and store the left singular vectors \mathbf{U}_m;
* Compute the core tensor \mathcal{S} via the multilinear multiplication \mathcal{S} = \mathcal{A} \times_1 \mathbf{U}_1^H \times_2 \mathbf{U}_2^H \cdots \times_M \mathbf{U}_M^H.
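
A minimal numpy sketch of this classic computation, reusing the unfold and mode_dot helpers from the Definition section. The M per-mode SVDs are independent of one another, which is what makes the computation parallelizable; the function name hosvd is an illustrative choice.

```python
import numpy as np
# unfold and mode_dot as in the sketch under "Definition".

def hosvd(A):
    """Classic computation: one (compact) matrix SVD per mode -- the M SVDs
    are independent and could run in parallel -- then one multilinear product."""
    U = [np.linalg.svd(unfold(A, m), full_matrices=False)[0] for m in range(A.ndim)]
    S = A
    for m, Um in enumerate(U):
        S = mode_dot(S, Um.conj().T, m)   # S = A x_1 U_1^H ... x_M U_M^H
    return S, U

A = np.random.randn(4, 5, 6)
S, U = hosvd(A)
A_rec = S
for m, Um in enumerate(U):                # HOSVD: A = S x_1 U_1 ... x_M U_M
    A_rec = mode_dot(A_rec, Um, m)
assert np.allclose(A, A_rec)
```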


Interlacing computation

A strategy that is significantly faster when some or all R_m \ll I_m consists of interlacing the computation of the core tensor and the factor matrices, as follows:
* Set \mathcal{A}^0 = \mathcal{A};
* For m = 1, 2, \ldots, M, perform the following:
*# Construct the standard mode-m flattening \mathcal{A}^{m-1}_{[m]};
*# Compute the (compact) singular value decomposition \mathcal{A}^{m-1}_{[m]} = \mathbf{U}_m \Sigma_m \mathbf{V}_m^T, and store the left singular vectors \mathbf{U}_m \in \mathbb{C}^{I_m \times R_m};
*# Set \mathcal{A}^m = \mathcal{A}^{m-1} \times_m \mathbf{U}_m^H, or, equivalently, \mathcal{A}^m_{[m]} = \Sigma_m \mathbf{V}_m^T.
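
A sketch of the interlaced computation under the same assumptions; note that each step compresses mode m of the working tensor, so every subsequent flattening is smaller.

```python
import numpy as np
# unfold and mode_dot as in the sketch under "Definition".

def hosvd_interlaced(A):
    """Interlaced computation: compress the working tensor along mode m
    immediately after that mode's SVD, shrinking all later flattenings."""
    U = []
    S = A                                  # S plays the role of A^m
    for m in range(A.ndim):
        Um = np.linalg.svd(unfold(S, m), full_matrices=False)[0]
        U.append(Um)
        S = mode_dot(S, Um.conj().T, m)    # A^m = A^{m-1} x_m U_m^H
    return S, U
```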


Approximation

In applications, such as those mentioned below, a common problem consists of approximating a given tensor \mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M} by one with a reduced multilinear rank. Formally, if the multilinear rank of \mathcal{A} is denoted by rank-(R_1, R_2, \ldots, R_M), then computing the optimal \bar{\mathcal{A}} that approximates \mathcal{A} for a given reduced rank-(\bar R_1, \bar R_2, \ldots, \bar R_M) is a nonlinear, non-convex \ell_2-optimization problem

\min_{\bar{\mathcal{A}}} \frac{1}{2} \| \mathcal{A} - \bar{\mathcal{A}} \|_F^2 \quad \text{s.t.} \quad \bar{\mathcal{A}} \text{ has multilinear rank-}(\bar R_1, \bar R_2, \ldots, \bar R_M),

where (\bar R_1, \bar R_2, \ldots, \bar R_M) \in \mathbb{N}^M is the reduced multilinear rank with 1 \le \bar R_m < R_m \le I_m, and the norm \|\cdot\|_F is the Frobenius norm.

A simple idea for trying to solve this optimization problem is to truncate the (compact) SVD in step 2 of either the classic or the interlaced computation. A classically truncated HOSVD is obtained by replacing step 2 in the classic computation by
* Compute a rank-\bar R_m truncated SVD \mathcal{A}_{[m]} \approx \mathbf{U}_m \Sigma_m \mathbf{V}_m^T, and store the top \bar R_m left singular vectors \mathbf{U}_m \in \mathbb{C}^{I_m \times \bar R_m};
while a sequentially truncated HOSVD (or successively truncated HOSVD) is obtained by replacing step 2 in the interlaced computation by
* Compute a rank-\bar R_m truncated SVD \mathcal{A}^{m-1}_{[m]} \approx \mathbf{U}_m \Sigma_m \mathbf{V}_m^T, and store the top \bar R_m left singular vectors \mathbf{U}_m \in \mathbb{C}^{I_m \times \bar R_m}.
Unfortunately, truncation does not result in an optimal solution to the best low multilinear rank optimization problem. However, both the classically and the sequentially truncated HOSVD result in a quasi-optimal solution: if \mathcal{A}_t denotes the classically or sequentially truncated HOSVD and \bar{\mathcal{A}}^* denotes the optimal solution to the best low multilinear rank approximation problem, then

\| \mathcal{A} - \mathcal{A}_t \|_F \le \sqrt{M} \, \| \mathcal{A} - \bar{\mathcal{A}}^* \|_F;

in practice this means that if there exists an optimal solution with a small error, then a truncated HOSVD will for many intended purposes also yield a sufficiently good solution.
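
A sketch of the sequentially truncated HOSVD under the same assumptions, again reusing the unfold and mode_dot helpers; st_hosvd is an illustrative name, and the target rank tuple is supplied by the caller.

```python
import numpy as np
# unfold and mode_dot as in the sketch under "Definition".

def st_hosvd(A, ranks):
    """Sequentially truncated HOSVD: keep only the top ranks[m] left
    singular vectors in each mode of the interlaced computation."""
    U = []
    S = A
    for m, R in enumerate(ranks):
        Um = np.linalg.svd(unfold(S, m), full_matrices=False)[0][:, :R]
        U.append(Um)
        S = mode_dot(S, Um.conj().T, m)
    return S, U

A = np.random.randn(10, 12, 14)
S, U = st_hosvd(A, (3, 4, 5))              # quasi-optimal rank-(3, 4, 5) approximation
A_hat = S
for m, Um in enumerate(U):
    A_hat = mode_dot(A_hat, Um, m)
# The truncation error is within a factor sqrt(M) of the optimal error.
print(np.linalg.norm(A - A_hat))
```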


Applications

The HOSVD is most commonly applied to the extraction of relevant information from multi-way arrays. Starting in the early 2000s, Vasilescu addressed causal questions by reframing the data analysis, recognition and synthesis problems as multilinear tensor problems. The power of the tensor framework was showcased by decomposing and representing an image in terms of its causal factors of data formation, in the context of Human Motion Signatures for gait recognition (Vasilescu 2002), face recognition (TensorFaces; Vasilescu and Terzopoulos 2002, 2003), and computer graphics (TensorTextures; Vasilescu and Terzopoulos 2004).

The HOSVD has been successfully applied to signal processing and big data, e.g., in genomic signal processing. These applications also inspired a higher-order GSVD (HO GSVD) and a tensor GSVD. A combination of HOSVD and SVD has also been applied for real-time event detection from complex data streams (multivariate data with space and time dimensions) in disease surveillance. The HOSVD is also used in tensor product model transformation-based controller design. The concept of HOSVD was carried over to functions by Baranyi and Yam via the TP model transformation. This extension led to the definition of the HOSVD-based canonical form of tensor product functions and linear parameter-varying system models, and to convex hull manipulation based control optimization theory; see TP model transformation in control theories. HOSVD has also been proposed for multi-view data analysis and was successfully applied to in silico drug discovery from gene expression.


Robust L1-norm variant

L1-Tucker is the L1-norm-based, robust variant of the Tucker decomposition. L1-HOSVD is the analogue of the HOSVD for the solution of L1-Tucker.


References

* M. A. O. Vasilescu (2002). "Human Motion Signatures: Analysis, Synthesis, Recognition". Proc. International Conference on Pattern Recognition (ICPR 2002), Vol. 3, Quebec City, Canada, August 2002, pp. 456–460.
* M. A. O. Vasilescu, D. Terzopoulos (2002). "Multilinear Analysis of Image Ensembles: TensorFaces". Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark, May 2002; in Computer Vision -- ECCV 2002, Lecture Notes in Computer Science, Vol. 2350, A. Heyden et al. (Eds.), Springer-Verlag, Berlin, pp. 447–460.
* M. A. O. Vasilescu, D. Terzopoulos (2003). "Multilinear Subspace Analysis of Image Ensembles". Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'03), Vol. 2, Madison, WI, June 2003, pp. 93–99.
* M. A. O. Vasilescu, D. Terzopoulos (2004). "TensorTextures: Multilinear Image-Based Rendering". Proc. ACM SIGGRAPH 2004, Los Angeles, CA, August 2004; in Computer Graphics Proceedings, Annual Conference Series, pp. 336–342.
* M. A. O. Vasilescu, D. Terzopoulos (2005). "Multilinear Independent Component Analysis". Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 1, San Diego, CA, June 2005, pp. 547–553.