Multidimensional scaling
Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a data set. MDS is used to translate distances between each pair of n objects in a set into a configuration of n points mapped into an abstract Cartesian space. More technically, MDS refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix. It is a form of non-linear dimensionality reduction. Given a distance matrix with the distances between each pair of objects in a set, and a chosen number of dimensions, ''N'', an MDS algorithm places each object into ''N''-dimensional space (a lower-dimensional representation) such that the between-object distances are preserved as well as possible. For ''N'' = 1, 2, and 3, the resulting points can be visualized on a scatter plot. Core theoretical contributions to MDS were made by James O. Ramsay of McGill University, who is also regarded as the founder of functional data analysis.


Types

MDS algorithms fall into a taxonomy, depending on the meaning of the input matrix:


Classical multidimensional scaling

It is also known as Principal Coordinates Analysis (PCoA), Torgerson Scaling or Torgerson–Gower scaling. It takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes a loss function called ''strain'', which is given by
:\text{Strain}_D(x_1,x_2,...,x_n)=\Biggl(\frac{\sum_{i,j}\bigl(b_{ij}-x_i^T x_j\bigr)^2}{\sum_{i,j} b_{ij}^2}\Biggr)^{1/2},
where x_i denote vectors in ''N''-dimensional space, x_i^T x_j denotes the scalar product between x_i and x_j, and b_{ij} are the elements of the matrix B defined on step 2 of the following algorithm, which are computed from the distances.

Steps of a classical MDS algorithm:

Classical MDS uses the fact that the coordinate matrix X can be derived by eigenvalue decomposition from B=XX'. The matrix B can be computed from the proximity matrix D by using double centering.
:# Set up the squared proximity matrix D^{(2)}=[d_{ij}^2].
:# Apply double centering: B=-\frac{1}{2}CD^{(2)}C using the centering matrix C=I-\frac{1}{n}J_n, where n is the number of objects, I is the n \times n identity matrix, and J_n is an n \times n matrix of all ones.
:# Determine the m largest eigenvalues \lambda_1,\lambda_2,...,\lambda_m and corresponding eigenvectors e_1,e_2,...,e_m of B (where m is the number of dimensions desired for the output).
:# Now, X=E_m\Lambda_m^{1/2}, where E_m is the matrix of m eigenvectors and \Lambda_m is the diagonal matrix of m eigenvalues of B.

Classical MDS assumes metric distances, so it is not applicable for direct dissimilarity ratings.
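For illustration, here is a minimal NumPy sketch of these four steps (the function name and example data are our own, not from the source):

```python
import numpy as np

def classical_mds(D, m=2):
    """Classical MDS (PCoA): embed a symmetric distance matrix D into m dimensions."""
    n = D.shape[0]
    D2 = D ** 2                              # step 1: squared proximity matrix D^(2)
    C = np.eye(n) - np.ones((n, n)) / n      # centering matrix C = I - (1/n) J_n
    B = -0.5 * C @ D2 @ C                    # step 2: double centering
    evals, evecs = np.linalg.eigh(B)         # step 3: eigendecomposition of symmetric B
    idx = np.argsort(evals)[::-1][:m]        # keep the m largest eigenvalues
    lam, E = evals[idx], evecs[:, idx]
    return E * np.sqrt(np.maximum(lam, 0))   # step 4: X = E_m Lambda_m^(1/2)

# Example: pairwise distances of four collinear points recover a 1-D configuration
pts = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, m=1)
```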


Metric multidimensional scaling (mMDS)

It is a superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices of known distances with weights and so on. A useful loss function in this context is called ''stress'', which is often minimized using a procedure called stress majorization. Metric MDS minimizes the cost function called ''stress'', which is a residual sum of squares:
:\text{Stress}_D(x_1,x_2,...,x_n)=\sqrt{\sum_{i\ne j=1,...,n}\bigl(d_{ij}-\|x_i-x_j\|\bigr)^2}.
Metric scaling uses a power transformation with a user-controlled exponent p: d_{ij}^p and -d_{ij}^{2p} for distance. In classical scaling p=1. Non-metric scaling is defined by the use of isotonic regression to nonparametrically estimate a transformation of the dissimilarities.
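Stress majorization is exposed, for example, as ''sklearn.manifold.smacof'' in scikit-learn. A minimal usage sketch (the toy dissimilarity values are our own):

```python
import numpy as np
from sklearn.manifold import smacof

# Toy symmetric dissimilarity matrix with zero diagonal (values are made up)
D = np.array([[0.0, 2.0, 5.0],
              [2.0, 0.0, 4.0],
              [5.0, 4.0, 0.0]])

# Minimize metric stress by majorization (SMACOF); returns the 2-D
# configuration of points together with the final stress value.
X, stress = smacof(D, metric=True, n_components=2, random_state=0)
```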


Non-metric multidimensional scaling (NMDS)

In contrast to metric MDS, non-metric MDS finds both a non-parametric monotonic relationship between the dissimilarities in the item-item matrix and the Euclidean distances between items, and the location of each item in the low-dimensional space. Let d_{ij} be the dissimilarity between points i, j. Let \hat d_{ij} = \|x_i - x_j\| be the Euclidean distance between embedded points x_i, x_j. Now, for each choice of the embedded points x_i and a monotonically increasing function f, define the "stress" function:
:S(x_1, ..., x_n; f)=\sqrt{\frac{\sum_{i<j}\bigl(f(d_{ij})-\hat d_{ij}\bigr)^2}{\sum_{i<j}\hat d_{ij}^2}}.
The factor of \sum_{i<j} \hat d_{ij}^2 in the denominator is necessary to prevent a "collapse": if we instead defined S=\sqrt{\sum_{i<j}\bigl(f(d_{ij})-\hat d_{ij}\bigr)^2}, it could be trivially minimized by setting f = 0 and collapsing every point to the same point. A few variants of this cost function exist. MDS programs automatically minimize stress in order to obtain the MDS solution. The core of a non-metric MDS algorithm is a twofold optimization process. First, the optimal monotonic transformation of the proximities has to be found. Secondly, the points of a configuration have to be optimally arranged, so that their distances match the scaled proximities as closely as possible. NMDS needs to optimize these two objectives simultaneously. This is usually done iteratively:
:# Initialize x_i randomly, e.g. by sampling from a normal distribution.
:# Do until a stopping criterion is met (for example, S < \epsilon):
:## Solve for f = \arg\min_f S(x_1, ..., x_n ; f) by isotonic regression.
:## Solve for x_1, ..., x_n = \arg\min_{x_1, ..., x_n} S(x_1, ..., x_n ; f) by gradient descent or other methods.
:# Return x_i and f.
Louis Guttman's smallest space analysis (SSA) is an example of a non-metric MDS procedure.
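As a usage sketch (the toy dissimilarity values are our own), scikit-learn's ''sklearn.manifold.MDS'' performs this iteration when ''metric=False'':

```python
import numpy as np
from sklearn.manifold import MDS

# Toy ordinal dissimilarity matrix (symmetric, zero diagonal; values are made up)
D = np.array([[0.0, 1.0, 4.0, 6.0],
              [1.0, 0.0, 3.0, 5.0],
              [4.0, 3.0, 0.0, 2.0],
              [6.0, 5.0, 2.0, 0.0]])

# Non-metric MDS: only the rank order of the dissimilarities is used.
nmds = MDS(n_components=2, metric=False, dissimilarity='precomputed',
           random_state=0)
X = nmds.fit_transform(D)   # embedded coordinates, one row per object
print(nmds.stress_)         # final value of the stress function
```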


Generalized multidimensional scaling (GMDS)

An extension of metric multidimensional scaling, in which the target space is an arbitrary smooth non-Euclidean space. In cases where the dissimilarities are distances on a surface and the target space is another surface, GMDS allows finding the minimum-distortion embedding of one surface into another.


Super multidimensional scaling (SMDS)

An extension of MDS, known as Super MDS, incorporates both distance and angle information for improved source localization. Unlike traditional MDS, which uses only distance measurements, Super MDS processes both distance and angle-of-arrival (AOA) data algebraically (without iteration) to achieve better accuracy. The method proceeds in the following steps:
# Construct the reduced edge Gram kernel: for a network of N sources in an \eta-dimensional space, define the edge vectors as v_{ij} = x_i - x_j. The dissimilarity is given by k_{ij} = \langle v_i, v_j \rangle. Assemble these into the full kernel K = VV^T, and then form the reduced kernel \bar{K} = \bar{V}\bar{V}^T from the N-1 independent edge vectors.
# Eigendecomposition: compute the eigendecomposition \bar{K} = U \Lambda U^T.
# Estimate edge vectors: recover the edge vectors as \hat{\bar{V}} = \bigl( U_{\eta}\, \Lambda_{\eta}^{1/2} \bigr)^T.
# Procrustes alignment: retrieve \hat{V} from \hat{\bar{V}} via a Procrustes transformation.
# Compute coordinates: solve a linear system relating the recovered edge vectors (and any anchor positions) to the coordinate estimates \hat{X}.
This concise approach reduces the need for multiple anchors and enhances localization precision by leveraging angle constraints.


Details

The data to be analyzed is a collection of M objects (colors, faces, stocks, ...) on which a ''distance function'' is defined,
:d_{ij} := distance between the i-th and j-th objects.
These distances are the entries of the ''dissimilarity matrix''
:D := \begin{pmatrix} d_{11} & d_{12} & \cdots & d_{1M} \\ d_{21} & d_{22} & \cdots & d_{2M} \\ \vdots & \vdots & & \vdots \\ d_{M1} & d_{M2} & \cdots & d_{MM} \end{pmatrix}.
The goal of MDS is, given D, to find M vectors x_1,\ldots,x_M \in \mathbb{R}^N such that
:\|x_i - x_j\| \approx d_{ij} for all i,j \in \{1,\dots,M\},
where \|\cdot\| is a vector norm. In classical MDS, this norm is the Euclidean distance, but, in a broader sense, it may be a metric or arbitrary distance function (Kruskal, J. B., and Wish, M. (1978), ''Multidimensional Scaling'', Sage University Paper series on Quantitative Applications in the Social Sciences, 07-011. Beverly Hills and London: Sage Publications). For example, when dealing with mixed-type data that contain numerical as well as categorical descriptors, Gower's distance is a common alternative.

In other words, MDS attempts to find a mapping from the M objects into \mathbb{R}^N such that distances are preserved. If the dimension N is chosen to be 2 or 3, we may plot the vectors x_i to obtain a visualization of the similarities between the M objects. Note that the vectors x_i are not unique: with the Euclidean distance, they may be arbitrarily translated, rotated, and reflected, since these transformations do not change the pairwise distances \|x_i - x_j\|. (Note: the symbol \mathbb{R} indicates the set of real numbers, and the notation \mathbb{R}^N refers to the Cartesian product of N copies of \mathbb{R}, which is an N-dimensional vector space over the field of the real numbers.)

There are various approaches to determining the vectors x_i. Usually, MDS is formulated as an optimization problem, where (x_1,\ldots,x_M) is found as a minimizer of some cost function, for example,
:\min_{x_1,\ldots,x_M} \sum_{i<j} \bigl( \|x_i - x_j\| - d_{ij} \bigr)^2.
A solution may then be found by numerical optimization techniques. For some particularly chosen cost functions, minimizers can be stated analytically in terms of matrix eigendecompositions.
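As a sketch of this generic optimization formulation (the toy data and helper function are our own), the cost above can be minimized directly with a general-purpose solver such as SciPy's minimize:

```python
import numpy as np
from scipy.optimize import minimize

# Toy dissimilarity matrix (values are made up)
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])
M, N = D.shape[0], 2  # M objects embedded into R^N

def cost(flat_x):
    """Sum of squared deviations between embedded and target distances."""
    X = flat_x.reshape(M, N)
    return sum((np.linalg.norm(X[i] - X[j]) - D[i, j]) ** 2
               for i in range(M) for j in range(i + 1, M))

# Start from a random configuration and refine it with a quasi-Newton method.
rng = np.random.default_rng(0)
res = minimize(cost, rng.standard_normal(M * N))
X = res.x.reshape(M, N)  # rows are the embedded points x_1, ..., x_M
```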


Procedure

There are several steps in conducting MDS research:
# Formulating the problem – What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?
# Obtaining input data – For example, respondents are asked a series of questions. For each product pair, they are asked to rate similarity (usually on a 7-point Likert scale from very similar to very dissimilar). The first question could be for Coke/Pepsi, the next for Coke/Hires rootbeer, the next for Pepsi/Dr Pepper, the next for Dr Pepper/Hires rootbeer, etc. The number of questions is a function of the number of brands and can be calculated as Q = N(N − 1)/2, where ''Q'' is the number of questions and ''N'' is the number of brands (for example, N = 10 brands requires Q = 45 pairwise ratings). This approach is referred to as the "perception data: direct approach". There are two other approaches: the "perception data: derived approach", in which products are decomposed into attributes that are rated on a semantic differential scale, and the "preference data approach", in which respondents are asked their preference rather than similarity.
# Running the MDS statistical program – Software for running the procedure is available in many statistical software packages. Often there is a choice between metric MDS (which deals with interval- or ratio-level data) and nonmetric MDS (which deals with ordinal data).
# Deciding the number of dimensions – The researcher must decide on the number of dimensions they want the computer to create. Interpretability of the MDS solution is often important, and lower-dimensional solutions will typically be easier to interpret and visualize. However, dimension selection is also an issue of balancing underfitting and overfitting: lower-dimensional solutions may underfit by leaving out important dimensions of the dissimilarity data, while higher-dimensional solutions may overfit to noise in the dissimilarity measurements. Model selection tools like the Akaike information criterion (AIC), the Bayesian information criterion (BIC), Bayes factors, or cross-validation can thus be useful to select the dimensionality that balances underfitting and overfitting.
# Mapping the results and defining the dimensions – The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space). The proximity of products to each other indicates either how similar they are or how preferred they are, depending on which approach was used. How the dimensions of the embedding actually correspond to dimensions of system behavior, however, is not necessarily obvious. Here, a subjective judgment about the correspondence can be made (see perceptual mapping).
# Testing the results for reliability and validity – Compute R-squared to determine what proportion of variance of the scaled data can be accounted for by the MDS procedure. An R-squared of 0.6 is considered the minimum acceptable level; 0.8 is considered good for metric scaling and 0.9 for non-metric scaling. Other possible tests are Kruskal's stress, split-data tests, data-stability tests (i.e., eliminating one brand), and test–retest reliability.
# Reporting the results comprehensively – Along with the mapping, at least the distance measure (e.g., Sørensen index, Jaccard index) and reliability (e.g., stress value) should be given. It is also very advisable to give the algorithm (e.g., Kruskal, Mather), which is often defined by the program used (sometimes replacing the algorithm report), whether a start configuration was given or chosen at random, the number of runs, the assessment of dimensionality, the Monte Carlo method results, the number of iterations, the assessment of stability, and the proportional variance of each axis (r-squared).


Implementations

* ELKI includes two MDS implementations.
* MATLAB includes two MDS implementations (for classical (''cmdscale'') and non-classical (''mdscale'') MDS respectively).
* The R programming language offers several MDS implementations, e.g. the base ''cmdscale'' function, and the packages ''smacof'' (mMDS and nMDS) and ''vegan'' (weighted MDS).
* scikit-learn contains the function ''sklearn.manifold.MDS''.


See also

* Data clustering
* t-distributed stochastic neighbor embedding
* Factor analysis
* Discriminant analysis
* Dimensionality reduction
* Distance geometry
* Cayley–Menger determinant
* Sammon mapping
* Iconography of correlations

