Partial Least Squares
Partial least squares (PLS) regression is a statistical method that bears some relation to principal components regression and is a reduced rank regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space of maximum covariance (see below). Because both the ''X'' and ''Y'' data are projected to new spaces, the PLS family of methods is known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when ''Y'' is categorical.

PLS is used to find the fundamental relations between two matrices (''X'' and ''Y''), i.e. a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the ''X'' space that explains the maximum multidimensional variance direction in the ''Y'' space.

PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among ''X'' values. By contrast, standard regression will fail in these cases (unless it is regularized).

Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS is ''projection to latent structures'', but the term ''partial least squares'' is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology.


Core idea

We are given a sample of n paired observations (\vec{x}_i, \vec{y}_i), i \in \{1, \ldots, n\}. In the first step j = 1, the partial least squares regression searches for the normalized direction pair \vec{p}_j, \vec{q}_j that maximizes the covariance

:\max_{\vec{p}_j, \vec{q}_j} \operatorname{E}\Big[\underbrace{(\vec{x}_i \cdot \vec{p}_j)}_{X \text{ score}} \; \underbrace{(\vec{y}_i \cdot \vec{q}_j)}_{Y \text{ score}}\Big]

Below, the algorithm is denoted in matrix notation.
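For centered data, the maximizing pair of unit-norm directions is given by the leading singular vectors of the cross-product matrix X^T Y. The following NumPy sketch (illustrative, not from the source; data and variable names are our own) computes the first pair of directions and the covariance they achieve:

    import numpy as np

    # Illustrative sketch: for centered X and Y, the unit-norm directions
    # maximizing the sample covariance of the scores X p and Y q are the
    # leading singular vectors of the cross-product matrix X^T Y.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 5))
    Y = X @ rng.standard_normal((5, 3)) + 0.1 * rng.standard_normal((100, 3))
    X -= X.mean(axis=0)            # center the predictors
    Y -= Y.mean(axis=0)            # center the responses

    U, s, Vt = np.linalg.svd(X.T @ Y)
    p1, q1 = U[:, 0], Vt[0, :]     # first pair of weight directions
    t, u = X @ p1, Y @ q1          # first X score and Y score
    print(np.cov(t, u)[0, 1])      # the maximized sample covariance (up to sign)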


Underlying model

The general underlying model of multivariate PLS with \ell components is

:X = T P^\mathrm{T} + E
:Y = U Q^\mathrm{T} + F

where
* ''X'' is an n \times m matrix of predictors
* ''Y'' is an n \times p matrix of responses
* ''T'' and ''U'' are n \times \ell matrices that are, respectively, projections of ''X'' (the ''X score'', ''component'' or ''factor'' matrix) and projections of ''Y'' (the ''Y scores'')
* ''P'' and ''Q'' are, respectively, m \times \ell and p \times \ell ''loading'' matrices
* the matrices ''E'' and ''F'' are the error terms, assumed to be independent and identically distributed random normal variables.

The decompositions of ''X'' and ''Y'' are made so as to maximise the covariance between ''T'' and ''U''. Note that this covariance is defined pair by pair: the covariance of column ''i'' of ''T'' (length ''n'') with column ''i'' of ''U'' (length ''n'') is maximized. Additionally, the covariance of column ''i'' of ''T'' with column ''j'' of ''U'' (with i \ne j) is zero. In PLSR, the loadings are thus chosen so that the scores form an orthogonal basis. This is a major difference from PCA, where orthogonality is imposed on the loadings (and not the scores).
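As a concrete illustration, the sketch below checks both properties numerically, assuming scikit-learn's PLSRegression and its attribute names (x_scores_ for ''T'', x_loadings_ for ''P''); the estimator centers the data internally:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Sketch mapping the model X = T P^T + E onto scikit-learn attributes
    # (assumption: x_scores_ is T and x_loadings_ is P).
    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 8))
    Y = X[:, :2] @ rng.standard_normal((2, 3)) + 0.1 * rng.standard_normal((60, 3))

    pls = PLSRegression(n_components=2).fit(X, Y)
    T, P = pls.x_scores_, pls.x_loadings_
    Xc = X - X.mean(axis=0)
    E = Xc - T @ P.T                               # residual term of the X block
    print(np.linalg.norm(E) / np.linalg.norm(Xc))  # relative size of E
    print(np.round(T.T @ T, 6))                    # diagonal: score columns are orthogonal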


Algorithms

A number of variants of PLS exist for estimating the factor and loading matrices ''T'', ''U'', ''P'' and ''Q''. Most of them construct estimates of the linear regression between ''X'' and ''Y'' as Y = X \tilde{B} + \tilde{B}_0. Some PLS algorithms are only appropriate for the case where ''Y'' is a column vector, while others deal with the general case of a matrix ''Y''. Algorithms also differ on whether they estimate the factor matrix ''T'' as an orthogonal (that is, orthonormal) matrix or not. The final prediction will be the same for all these varieties of PLS, but the components will differ.

PLS is composed of iteratively repeating the following steps ''k'' times (for ''k'' components):
# finding the directions of maximal covariance in input and output space
# performing least squares regression on the input score
# deflating the input ''X'' and/or target ''Y''


PLS1

PLS1 is a widely used algorithm appropriate for the vector ''Y'' case. It estimates ''T'' as an orthonormal matrix. In pseudocode it is expressed below (capital letters are matrices, lower case letters are vectors if they are superscripted and scalars if they are subscripted):

    function PLS1(X, y, ℓ)
        X(0) ← X
        w(0) ← X^T y / ||X^T y||, an initial estimate of w
        for k = 0 to ℓ − 1
            t(k) ← X(k) w(k)
            t_k ← t(k)^T t(k)               (note this is a scalar)
            t(k) ← t(k) / t_k
            p(k) ← X(k)^T t(k)
            q_k ← y^T t(k)                  (note this is a scalar)
            if q_k = 0
                ℓ ← k, break the for loop
            if k < (ℓ − 1)
                X(k+1) ← X(k) − t_k t(k) p(k)^T
                w(k+1) ← X(k+1)^T y
        end for
        define W to be the matrix with columns w(0), ..., w(ℓ−1); form the matrix P and the vector q in the same way
        B ← W (P^T W)^{−1} q
        B_0 ← q_0 − p(0)^T B
        return B, B_0

This form of the algorithm does not require centering of the input ''X'' and ''y'', as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrix ''X'' (subtraction of t_k t(k) p(k)^T), but deflation of the vector ''y'' is not performed, as it is not necessary (it can be proved that deflating ''y'' yields the same results as not deflating). The user-supplied variable ℓ is the limit on the number of latent factors in the regression; if it equals the rank of the matrix ''X'', the algorithm will yield the least squares regression estimates for ''B'' and B_0.
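A direct NumPy translation of this pseudocode is sketched below (illustrative only: it assumes a vector y, does minimal error handling, and the helper name pls1 is ours, not from the source):

    import numpy as np

    def pls1(X, y, n_factors):
        """Sketch of the PLS1 pseudocode above; X is n x m, y has length n."""
        n, m = X.shape
        l = n_factors
        W, P, q = np.zeros((m, l)), np.zeros((m, l)), np.zeros(l)
        Xk = X.astype(float).copy()
        w = Xk.T @ y
        w = w / np.linalg.norm(w)                     # initial estimate of w
        for k in range(l):
            t = Xk @ w
            tk = float(t @ t)                         # scalar normalizer
            t = t / tk
            p = Xk.T @ t
            qk = float(y @ t)                         # scalar regression coefficient
            W[:, k], P[:, k], q[k] = w, p, qk
            if qk == 0:
                W, P, q = W[:, :k], P[:, :k], q[:k]   # truncate to k factors
                break
            if k < l - 1:
                Xk = Xk - tk * np.outer(t, p)         # deflate X
                w = Xk.T @ y
        B = W @ np.linalg.solve(P.T @ W, q)
        B0 = q[0] - P[:, 0] @ B
        return B, B0

Predictions then follow as ŷ = X B + B_0.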


Extensions


OPLS

In 2002 a new method was published called orthogonal projections to latent structures (OPLS). In OPLS, continuous variable data is separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models. Similarly, OPLS-DA (Discriminant Analysis) may be applied when working with discrete variables, as in classification and biomarker studies. The general underlying model of OPLS is

:X = T P^\mathrm{T} + T_\text{Y-orth} P^\mathrm{T}_\text{Y-orth} + E
:Y = U Q^\mathrm{T} + F

or in O2-PLS

:X = T P^\mathrm{T} + T_\text{Y-orth} P^\mathrm{T}_\text{Y-orth} + E
:Y = U Q^\mathrm{T} + U_\text{X-orth} Q^\mathrm{T}_\text{X-orth} + F


L-PLS

Another extension of PLS regression, named L-PLS for its L-shaped matrices, connects 3 related data blocks to improve predictability. In brief, a new ''Z'' matrix, with the same number of columns as the ''X'' matrix, is added to the PLS regression analysis and may be suitable for including additional background information on the interdependence of the predictor variables.


3PRF

In 2015 partial least squares was related to a procedure called the three-pass regression filter (3PRF). Supposing the number of observations and variables is large, the 3PRF (and hence PLS) is asymptotically normal for the "best" forecast implied by a linear latent factor model. In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth.


Partial least squares SVD

A PLS version based on singular value decomposition (SVD) provides a memory efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware.
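The memory saving can be pictured as follows: the weight vectors come from an SVD of the cross-product matrix X^T Y, which can be accumulated over row chunks so that neither data block is ever held in memory in full. The sketch below illustrates this idea under our own assumptions (chunked, centered input; the function name is illustrative) and is not the published implementation:

    import numpy as np

    def pls_svd(chunks, n_components):
        """chunks: iterable of (X_block, Y_block) row blocks of centered data."""
        C = None
        for Xb, Yb in chunks:
            block = Xb.T @ Yb                  # m x p partial cross-product
            C = block if C is None else C + block
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        k = n_components
        return U[:, :k], s[:k], Vt[:k].T       # X weights, singular values, Y weights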


PLS correlation

PLS correlation (PLSC) is another methodology related to PLS regression, which has been used in neuroimaging and sport science to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks (sub-groups) each containing one or more variables, and then uses singular value decomposition (SVD) to establish the strength of any relationship (i.e. the amount of shared information) that might exist between the two component sub-groups. It does this by using SVD to determine the inertia (i.e. the sum of the singular values) of the covariance matrix of the sub-groups under consideration.
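A minimal sketch of the computation just described (our own illustrative version: two z-scored blocks with matched rows, SVD of their cross-correlation matrix, inertia as the sum of singular values):

    import numpy as np

    def plsc_inertia(A, B):
        """Minimal PLSC-style inertia of two data blocks with matched rows."""
        Az = (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)   # z-score block A
        Bz = (B - B.mean(axis=0)) / B.std(axis=0, ddof=1)   # z-score block B
        R = Az.T @ Bz / (len(A) - 1)       # cross-correlation matrix
        s = np.linalg.svd(R, compute_uv=False)
        return s.sum()                     # inertia: total shared information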


See also

* Canonical correlation
* Data mining
* Deming regression
* Feature extraction
* Machine learning
* Partial least squares path modeling
* Principal component analysis
* Regression analysis
* Total sum of squares
* Projection pursuit regression




External links


A short introduction to PLS regression and its history

Video: Derivation of PLS by Prof. H. Harry Asada