In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation (''e.g.,'' scaling, rotation and translation) that aligns two point clouds. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model (or coordinate frame), and mapping a new measurement to a known data set to identify features or to estimate its pose. Raw 3D point cloud data are typically obtained from lidars and RGB-D cameras. 3D point clouds can also be generated from computer vision algorithms such as triangulation, bundle adjustment, and more recently, monocular image depth estimation using deep learning. For 2D point set registration used in image processing and feature-based image registration, a point set may be 2D pixel coordinates obtained by feature extraction from an image, for example corner detection. Point cloud registration has extensive applications in autonomous driving, motion estimation and 3D reconstruction, object detection and pose estimation, robotic manipulation, simultaneous localization and mapping (SLAM), panorama stitching, virtual and augmented reality, and medical imaging. As a special case, registration of two point sets that differ only by a 3D rotation (''i.e.,'' there is no scaling and translation) is called the Wahba problem and is also related to the orthogonal Procrustes problem.


Formulation

The problem may be summarized as follows: Let \lbrace\mathcal{M},\mathcal{S}\rbrace be two finite size point sets in a finite-dimensional real vector space \mathbb{R}^d, which contain M and N points respectively (''e.g.,'' d=3 recovers the typical case of when \mathcal{M} and \mathcal{S} are 3D point sets). The problem is to find a transformation to be applied to the moving "model" point set \mathcal{M} such that the difference (typically defined in the sense of point-wise Euclidean distance) between \mathcal{M} and the static "scene" set \mathcal{S} is minimized. In other words, a mapping from \mathbb{R}^d to \mathbb{R}^d is desired which yields the best alignment between the transformed "model" set and the "scene" set. The mapping may consist of a rigid or non-rigid transformation. The transformation model may be written as T, using which the transformed, registered model point set is:

:T(\mathcal{M}) = \lbrace T(m) : m \in \mathcal{M} \rbrace

The output of a point set registration algorithm is therefore the ''optimal transformation'' T^\star such that \mathcal{M} is best aligned to \mathcal{S}, according to some defined notion of distance function \operatorname{dist}(\cdot,\cdot):

:T^\star = \underset{T \in \mathcal{T}}{\arg\min}\ \operatorname{dist}(T(\mathcal{M}), \mathcal{S})

where \mathcal{T} is used to denote the set of all possible transformations that the optimization tries to search for. The most popular choice of the distance function is to take the square of the Euclidean distance for every pair of points:

:\operatorname{dist}(T(\mathcal{M}), \mathcal{S}) = \sum_{m \in \mathcal{M}} \lVert T(m) - s_m \rVert_2^2

where \lVert \cdot \rVert_2 denotes the vector 2-norm, s_m is the ''corresponding point'' in set \mathcal{S} that attains the ''shortest distance'' to a given point m in set \mathcal{M} after transformation. Minimizing such a function in rigid registration is equivalent to solving a least squares problem.
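As an illustration, the squared-Euclidean cost above can be evaluated for a candidate rigid transformation by a nearest-neighbour search. The following is a minimal Python sketch (not part of the original formulation; function and variable names are illustrative) that finds the corresponding point s_m for each transformed model point with a k-d tree.

 # Minimal sketch: evaluate the squared-Euclidean registration cost of a
 # candidate rigid transformation (R, t) applied to the model point set.
 import numpy as np
 from scipy.spatial import cKDTree
 
 def registration_cost(model, scene, R, t):
     """model: (M, d) array, scene: (N, d) array, R: (d, d), t: (d,)."""
     transformed = model @ R.T + t        # T(m) for every model point
     tree = cKDTree(scene)
     dists, _ = tree.query(transformed)   # distance to the closest scene point s_m
     return np.sum(dists ** 2)            # sum of squared point-wise residuals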


Types of algorithms

When the correspondences (''i.e.,'' s_m \leftrightarrow m) are given before the optimization, for example, using feature matching techniques, then the optimization only needs to estimate the transformation. This type of registration is called correspondence-based registration. On the other hand, if the correspondences are unknown, then the optimization is required to jointly estimate the correspondences and the transformation. This type of registration is called simultaneous pose and correspondence registration.


Rigid registration

Given two point sets, rigid registration yields a rigid transformation which maps one point set to the other. A rigid transformation is defined as a transformation that does not change the distance between any two points. Typically such a transformation consists of translation and rotation. In rare cases, the point set may also be mirrored. In robotics and computer vision, rigid registration has the most applications.


Non-rigid registration

Given two point sets, non-rigid registration yields a non-rigid transformation which maps one point set to the other. Non-rigid transformations include affine transformations such as scaling and shear mapping. However, in the context of point set registration, non-rigid registration typically involves nonlinear transformation. If the eigenmodes of variation of the point set are known, the nonlinear transformation may be parametrized by the eigenvalues. A nonlinear transformation may also be parametrized as a thin plate spline.


Other types

Some approaches to point set registration use algorithms that solve the more general graph matching problem. However, the computational complexity of such methods tends to be high and they are limited to rigid registrations. In this article, we will only consider algorithms for rigid registration, where the transformation is assumed to contain 3D rotations and translations (possibly also including a uniform scaling). The PCL (Point Cloud Library) is an open-source framework for n-dimensional point cloud and 3D geometry processing. It includes several point registration algorithms.


Correspondence-based registration

Correspondence-based methods assume the putative correspondences m \leftrightarrow s_m are given for every point m \in \mathcal{M}. Therefore, we arrive at a setting where both point sets \mathcal{M} and \mathcal{S} have N points and the correspondences m_i \leftrightarrow s_i, i=1,\dots,N are given.


Outlier-free registration

In the simplest case, one can assume that all the correspondences are correct, meaning that the points m_i, s_i \in \mathbb{R}^3 are generated as follows:

:s_i = l R m_i + t + \epsilon_i, \quad i = 1, \dots, N

where l > 0 is a uniform scaling factor (in many cases l=1 is assumed), R \in \text{SO}(3) is a proper 3D rotation matrix (\text{SO}(d) is the special orthogonal group of degree d), t \in \mathbb{R}^3 is a 3D translation vector and \epsilon_i \in \mathbb{R}^3 models the unknown additive noise (''e.g.,'' Gaussian noise). Specifically, if the noise \epsilon_i is assumed to follow a zero-mean isotropic Gaussian distribution with standard deviation \sigma_i, ''i.e.,'' \epsilon_i \sim \mathcal{N}(0,\sigma_i^2 I_3), then the following optimization can be shown to yield the maximum likelihood estimate for the unknown scale, rotation and translation:

:\min_{l > 0,\ R \in \text{SO}(3),\ t \in \mathbb{R}^3}\ \sum_{i=1}^N \frac{1}{\sigma_i^2} \lVert s_i - l R m_i - t \rVert_2^2

Note that when the scaling factor is 1 and the translation vector is zero, then the optimization recovers the formulation of the Wahba problem. Despite the non-convexity of this optimization due to non-convexity of the set \text{SO}(3), seminal work by Berthold K.P. Horn showed that it actually admits a closed-form solution, by decoupling the estimation of scale, rotation and translation. Similar results were discovered by Arun ''et al''. In addition, in order to find a unique transformation (l,R,t), at least N=3 non-collinear points in each point set are required. More recently, Briales and Gonzalez-Jimenez have developed a semidefinite relaxation using Lagrangian duality, for the case where the model set \mathcal{M} contains different 3D primitives such as points, lines and planes (which is the case when the model \mathcal{M} is a 3D mesh). Interestingly, the semidefinite relaxation is empirically tight, ''i.e.,'' a certifiably globally optimal solution can be extracted from the solution of the semidefinite relaxation.
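The flavour of the closed-form solution can be sketched as follows. This is a hedged Python illustration in the spirit of the SVD-based solutions associated with Horn and Arun ''et al.'', restricted to the case l=1 with equal weights; the function name is illustrative, not the authors' implementation.

 # Sketch of an SVD-based closed-form rigid alignment with known
 # correspondences m_i <-> s_i, assuming unit scale and equal noise weights.
 import numpy as np
 
 def rigid_align(M_pts, S_pts):
     """M_pts, S_pts: (N, 3) arrays of corresponding points. Returns R, t."""
     mu_m, mu_s = M_pts.mean(axis=0), S_pts.mean(axis=0)
     H = (M_pts - mu_m).T @ (S_pts - mu_s)     # 3x3 cross-covariance matrix
     U, _, Vt = np.linalg.svd(H)
     D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
     R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
     t = mu_s - R @ mu_m                       # translation from the centroids
     return R, t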


Robust registration

The least squares formulation above is known to perform arbitrarily badly in the presence of outliers. An outlier correspondence is a pair of measurements s_i \leftrightarrow m_i that departs from the generative model above. In this case, one can consider a different generative model as follows:

:s_i = \begin{cases} l R m_i + t + \epsilon_i & \text{if the } i\text{-th pair is an inlier} \\ o_i & \text{if the } i\text{-th pair is an outlier} \end{cases}

where if the i-th pair s_i \leftrightarrow m_i is an inlier, then it obeys the outlier-free model, ''i.e.,'' s_i is obtained from m_i by a spatial transformation plus some small noise; however, if the i-th pair s_i \leftrightarrow m_i is an outlier, then s_i can be any arbitrary vector o_i. Since one does not know which correspondences are outliers beforehand, robust registration under this generative model is of paramount importance for computer vision and robotics deployed in the real world, because current feature matching techniques tend to output highly corrupted correspondences where over 95\% of the correspondences can be outliers. Next, we describe several common paradigms for robust registration.


Maximum consensus

Maximum consensus seeks to find the largest set of correspondences that are consistent with the generative model above for some choice of spatial transformation (l,R,t). Formally speaking, maximum consensus solves the following optimization:

:\max_{\mathcal{I} \subseteq \lbrace 1,\dots,N \rbrace,\ l>0,\ R \in \text{SO}(3),\ t \in \mathbb{R}^3}\ \vert \mathcal{I} \vert \quad \text{subject to} \quad \frac{\lVert s_i - l R m_i - t \rVert_2^2}{\sigma_i^2} \leq \xi,\ \forall i \in \mathcal{I}

where \vert \mathcal{I} \vert denotes the cardinality of the set \mathcal{I}. The constraint enforces that every pair of measurements in the inlier set \mathcal{I} must have residuals smaller than a pre-defined threshold \xi. Unfortunately, recent analyses have shown that globally solving this problem is NP-hard, and global algorithms typically have to resort to branch-and-bound (BnB) techniques that take exponential-time complexity in the worst case. Although solving consensus maximization exactly is hard, there exist efficient heuristics that perform quite well in practice. One of the most popular heuristics is the Random Sample Consensus (RANSAC) scheme. RANSAC is an iterative hypothesize-and-verify method. At each iteration, the method first randomly samples 3 out of the total number of N correspondences and computes a hypothesis (l,R,t) using Horn's method, then the method evaluates the constraints above to count how many correspondences actually agree with such a hypothesis (''i.e.,'' it computes the residual \lVert s_i - l R m_i - t \rVert_2^2 / \sigma_i^2 and compares it with the threshold \xi for each pair of measurements). The algorithm terminates either after it has found a consensus set that has enough correspondences, or after it has reached the total number of allowed iterations. RANSAC is highly efficient because the main computation of each iteration is carrying out the closed-form solution in Horn's method. However, RANSAC is non-deterministic and only works well in the low-outlier-ratio regime (''e.g.,'' below 50\%), because its runtime grows exponentially with respect to the outlier ratio. To fill the gap between the fast but inexact RANSAC scheme and the exact but exhaustive BnB optimization, recent research has developed deterministic approximate methods to solve consensus maximization.
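A hedged sketch of the RANSAC loop described above follows. It reuses the illustrative rigid_align solver from the earlier sketch (so it assumes l=1 rather than estimating the scale as Horn's method does), and the fixed iteration budget and refit step are simplifications.

 # Sketch of hypothesize-and-verify RANSAC for correspondence-based rigid
 # registration: sample 3 correspondences, fit (R, t), count the consensus set.
 import numpy as np
 
 def ransac_registration(M_pts, S_pts, sigma2, xi, iters=1000, seed=0):
     rng = np.random.default_rng(seed)
     best_R, best_t, best_inliers = np.eye(3), np.zeros(3), np.array([], dtype=int)
     for _ in range(iters):
         idx = rng.choice(len(M_pts), size=3, replace=False)   # minimal sample
         R, t = rigid_align(M_pts[idx], S_pts[idx])            # hypothesis
         res = np.sum((S_pts - M_pts @ R.T - t) ** 2, axis=1) / sigma2
         inliers = np.flatnonzero(res < xi)                    # agreeing correspondences
         if len(inliers) > len(best_inliers):
             best_R, best_t, best_inliers = R, t, inliers
     if len(best_inliers) >= 3:                                # refit on the consensus set
         best_R, best_t = rigid_align(M_pts[best_inliers], S_pts[best_inliers])
     return best_R, best_t, best_inliers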


Outlier removal

Outlier removal methods seek to pre-process the set of highly corrupted correspondences before estimating the spatial transformation. The motivation of outlier removal is to significantly reduce the number of outlier correspondences, while maintaining inlier correspondences, so that optimization over the transformation becomes easier and more efficient (''e.g.,'' RANSAC works poorly when the outlier ratio is above 95\% but performs quite well when the outlier ratio is below 50\%). Parra ''et al.'' have proposed a method called Guaranteed Outlier Removal (GORE) that uses geometric constraints to prune outlier correspondences while guaranteeing to preserve inlier correspondences. GORE has been shown to be able to drastically reduce the outlier ratio, which can significantly boost the performance of consensus maximization using RANSAC or BnB. Yang and Carlone have proposed to build pairwise translation-and-rotation-invariant measurements (TRIMs) from the original set of measurements and embed TRIMs as the edges of a graph whose nodes are the 3D points. Since inliers are pairwise consistent in terms of the scale, they must form a clique within the graph. Therefore, using efficient algorithms for computing the maximum clique of a graph can find the inliers and effectively prune the outliers. The maximum clique based outlier removal method is also shown to be quite useful in real-world point set registration problems. Similar outlier removal ideas were also proposed by Parra ''et al.''.
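The clique idea can be illustrated as follows: assuming known scale l=1, two correspondences are mutually consistent if the distance between their two model points matches the distance between their two scene points up to a noise bound (a pairwise translation-and-rotation-invariant check), and inliers then form a clique in the resulting consistency graph. The Python sketch below uses illustrative names and enumerates maximal cliques with networkx rather than using the specialized maximum-clique solvers used in practice.

 # Sketch of clique-based outlier pruning on a pairwise consistency graph.
 import numpy as np
 import networkx as nx
 
 def prune_outliers(M_pts, S_pts, delta):
     n = len(M_pts)
     G = nx.Graph()
     G.add_nodes_from(range(n))
     for i in range(n):
         for j in range(i + 1, n):
             dm = np.linalg.norm(M_pts[i] - M_pts[j])
             ds = np.linalg.norm(S_pts[i] - S_pts[j])
             if abs(dm - ds) <= delta:          # pairwise consistency (scale l = 1)
                 G.add_edge(i, j)
     # keep the largest set of mutually consistent correspondences
     return max(nx.find_cliques(G), key=len)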


M-estimation

M-estimation replaces the least squares objective function above with a robust cost function that is less sensitive to outliers. Formally, M-estimation seeks to solve the following problem:

:\min_{l > 0,\ R \in \text{SO}(3),\ t \in \mathbb{R}^3}\ \sum_{i=1}^N \rho\!\left( \frac{\lVert s_i - l R m_i - t \rVert_2}{\sigma_i} \right)

where \rho(\cdot) represents the choice of the robust cost function. Note that choosing \rho(x) = x^2 recovers the least squares estimation above. Popular robust cost functions include \ell_1-norm loss, Huber loss, Geman-McClure loss and truncated least squares loss. M-estimation has been one of the most popular paradigms for robust estimation in robotics and computer vision. Because robust objective functions are typically non-convex (''e.g.,'' the truncated least squares loss vs. the least squares loss), algorithms for solving the non-convex M-estimation are typically based on local optimization, where first an initial guess is provided, followed by iterative refinements of the transformation to keep decreasing the objective function. Local optimization tends to work well when the initial guess is close to the global minimum, but it is also prone to getting stuck in local minima if provided with a poor initialization.
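A common way to solve such M-estimation problems locally is iteratively reweighted least squares (IRLS): residuals are turned into per-correspondence weights by the robust loss, and a weighted closed-form alignment is re-solved. The Python sketch below uses a Geman-McClure-style weight and an illustrative scale parameter c; it is a generic illustration under these assumptions, not a specific published solver.

 # Sketch of IRLS M-estimation with a Geman-McClure-style robust weight.
 import numpy as np
 
 def weighted_rigid_align(M_pts, S_pts, w):
     """Closed-form fit of (R, t) minimizing the w-weighted squared residuals."""
     mu_m = (w[:, None] * M_pts).sum(0) / w.sum()
     mu_s = (w[:, None] * S_pts).sum(0) / w.sum()
     H = (M_pts - mu_m).T @ (w[:, None] * (S_pts - mu_s))
     U, _, Vt = np.linalg.svd(H)
     D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
     R = Vt.T @ D @ U.T
     return R, mu_s - R @ mu_m
 
 def irls_geman_mcclure(M_pts, S_pts, c=0.1, iters=20):
     R, t = np.eye(3), np.zeros(3)           # initial guess (identity)
     for _ in range(iters):
         r2 = np.sum((S_pts - M_pts @ R.T - t) ** 2, axis=1)
         w = (c ** 2 / (c ** 2 + r2)) ** 2   # small weight for large residuals
         R, t = weighted_rigid_align(M_pts, S_pts, w)
     return R, t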


Graduated non-convexity

Graduated non-convexity (GNC) is a general-purpose framework for solving non-convex optimization problems without initialization. It has achieved success in early vision and machine learning applications. The key idea behind GNC is to solve the hard non-convex problem by starting from an easy convex problem. Specifically, for a given robust cost function \rho(\cdot), one can construct a surrogate function \rho_{\mu}(\cdot) with a hyper-parameter \mu, tuning which can gradually increase the non-convexity of the surrogate function \rho_{\mu}(\cdot) until it converges to the target function \rho(\cdot). Therefore, at each level of the hyper-parameter \mu, the following optimization is solved:

:\min_{l > 0,\ R \in \text{SO}(3),\ t \in \mathbb{R}^3}\ \sum_{i=1}^N \rho_{\mu}\!\left( \frac{\lVert s_i - l R m_i - t \rVert_2}{\sigma_i} \right)

Black and Rangarajan proved that the objective function of each such optimization can be dualized into a sum of weighted least squares and a so-called outlier process function on the weights that determine the confidence of the optimization in each pair of measurements. Using Black-Rangarajan duality and GNC tailored for the Geman-McClure function, Zhou ''et al.'' developed the fast global registration algorithm that is robust against about 80\% outliers in the correspondences. More recently, Yang ''et al.'' showed that the joint use of GNC (tailored to the Geman-McClure function and the truncated least squares function) and Black-Rangarajan duality can lead to a general-purpose solver for robust registration problems, including point cloud and mesh registration.
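A minimal sketch of such a GNC loop, in the spirit of the fast global registration of Zhou ''et al.'', is shown below. It reuses the illustrative weighted_rigid_align function from the previous sketch; the surrogate weights and annealing schedule are one common choice under these assumptions, not necessarily the exact ones in the cited works.

 # Sketch of graduated non-convexity with a Geman-McClure-style surrogate:
 # start nearly convex (large mu), alternate weight updates and weighted
 # least-squares alignment, then shrink mu to increase non-convexity.
 import numpy as np
 
 def gnc_registration(M_pts, S_pts, mu0=1.0, mu_min=1e-3, inner_iters=4):
     R, t = np.eye(3), np.zeros(3)
     mu = mu0
     while mu > mu_min:
         for _ in range(inner_iters):
             r2 = np.sum((S_pts - M_pts @ R.T - t) ** 2, axis=1)
             w = (mu / (mu + r2)) ** 2          # Black-Rangarajan outlier-process weights
             R, t = weighted_rigid_align(M_pts, S_pts, w)
         mu /= 1.4                              # anneal the surrogate towards the target loss
     return R, t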


Certifiably robust registration

Almost none of the robust registration algorithms mentioned above (except the BnB algorithm that runs in exponential time in the worst case) comes with performance guarantees, which means that these algorithms can return completely incorrect estimates without notice. Therefore, these algorithms are undesirable for safety-critical applications like autonomous driving. Very recently, Yang ''et al.'' have developed the first certifiably robust registration algorithm, named ''Truncated least squares Estimation And SEmidefinite Relaxation'' (TEASER). For point cloud registration, TEASER not only outputs an estimate of the transformation, but also quantifies the optimality of the given estimate. TEASER adopts the following truncated least squares (TLS) estimator:

:\min_{l > 0,\ R \in \text{SO}(3),\ t \in \mathbb{R}^3}\ \sum_{i=1}^N \min\!\left( \frac{\lVert s_i - l R m_i - t \rVert_2^2}{\sigma_i^2},\ \bar{c}^2 \right)

which is obtained by choosing the TLS robust cost function \rho(x) = \min(x^2, \bar{c}^2), where \bar{c}^2 is a pre-defined constant that determines the maximum allowed residuals to be considered inliers. The TLS objective function has the property that for inlier correspondences (\lVert s_i - l R m_i - t \rVert_2^2 / \sigma_i^2 < \bar{c}^2), the usual least squares penalty is applied; while for outlier correspondences (\lVert s_i - l R m_i - t \rVert_2^2 / \sigma_i^2 > \bar{c}^2), no penalty is applied and the outliers are discarded. If the TLS optimization is solved to global optimality, then it is equivalent to running Horn's method on only the inlier correspondences. However, solving the TLS problem is quite challenging due to its combinatorial nature. TEASER solves it as follows: (i) it builds invariant measurements such that the estimation of scale, rotation and translation can be decoupled and solved separately, a strategy that is inspired by the original Horn's method; (ii) the same TLS estimation is applied for each of the three sub-problems, where the scale TLS problem can be solved exactly using an algorithm called adaptive voting, the rotation TLS problem can be relaxed to a semidefinite program (SDP) where the relaxation is exact in practice, even with a large amount of outliers, and the translation TLS problem can be solved using component-wise adaptive voting. A fast implementation leveraging GNC is open-source. In practice, TEASER can tolerate more than 99\% outlier correspondences and runs in milliseconds. In addition to developing TEASER, Yang ''et al.'' also prove that, under some mild conditions on the point cloud data, TEASER's estimated transformation has bounded errors from the ground-truth transformation.
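The decoupling idea can be illustrated for the scale sub-problem: pairwise distance ratios are invariant to rotation and translation, and a robust scalar estimate is obtained by voting over confidence intervals. The Python sketch below is a simplified illustration of this idea with assumed (heuristic) noise bounds and illustrative names; it is not the authors' implementation.

 # Sketch of interval voting for a robust scalar (here, scale) estimate.
 import numpy as np
 
 def adaptive_voting(values, bounds):
     """Return the estimate consistent with the most intervals [v - b, v + b]."""
     lo, hi = values - bounds, values + bounds
     endpoints = np.sort(np.concatenate([lo, hi]))
     midpoints = 0.5 * (endpoints[:-1] + endpoints[1:])     # candidate estimates
     counts = np.array([np.sum((lo <= x) & (x <= hi)) for x in midpoints])
     best = midpoints[np.argmax(counts)]
     consensus = (lo <= best) & (best <= hi)
     return values[consensus].mean()                         # average the consensus set
 
 def estimate_scale(M_pts, S_pts, pairs, sigma):
     """pairs: (K, 2) index pairs of correspondences; sigma: noise level."""
     i, j = pairs[:, 0], pairs[:, 1]
     dm = np.linalg.norm(M_pts[i] - M_pts[j], axis=1)
     ds = np.linalg.norm(S_pts[i] - S_pts[j], axis=1)
     ratios = ds / dm                                        # rotation/translation invariant
     return adaptive_voting(ratios, 3.0 * sigma / dm)        # heuristic per-pair bounds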


Simultaneous pose and correspondence registration


Iterative closest point

The iterative closest point (ICP) algorithm was introduced by Besl and McKay. The algorithm performs rigid registration in an iterative fashion by alternating between (i) given the transformation, finding the closest point in \mathcal{S} for every point in \mathcal{M}; and (ii) given the correspondences, finding the best rigid transformation by solving the least squares problem from the formulation above. As such, it works best if the initial pose of \mathcal{M} is sufficiently close to \mathcal{S}. In pseudocode, the basic algorithm is implemented as follows:

 algorithm ICP(''M'', ''S'')
     ''θ'' := ''θ''0
     while not registered:
         ''X'' := ∅
         for ''m''i ∈ ''T''(''M'', ''θ''):
             ''ŝ''i := closest point in ''S'' to ''m''i
             ''X'' := ''X'' + ⟨''m''i, ''ŝ''i⟩
         ''θ'' := least_squares(''X'')
     return ''θ''

Here, the function least_squares performs least squares optimization to minimize the distance in each of the \langle m_i, \hat{s}_i \rangle pairs, using the closed-form solutions by Horn and Arun. Because the cost function of registration depends on finding the closest point in \mathcal{S} to every point in \mathcal{M}, it can change as the algorithm is running. As such, it is difficult to prove that ICP will in fact converge exactly to the local optimum. In fact, empirically, ICP and EM-ICP do not converge to the local minimum of the cost function. Nonetheless, because ICP is intuitive to understand and straightforward to implement, it remains the most commonly used point set registration algorithm. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. For example, the expectation-maximization algorithm is applied to the ICP algorithm to form the EM-ICP method, and the Levenberg-Marquardt algorithm is applied to the ICP algorithm to form the LM-ICP method.
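A compact Python sketch of the ICP loop above is given below; it reuses the illustrative rigid_align solver from the correspondence-based section and a simple change-in-error stopping test, both of which are assumptions rather than part of the original algorithm description.

 # Sketch of the basic ICP loop: alternate nearest-neighbour correspondence
 # search with a closed-form rigid least-squares fit.
 import numpy as np
 from scipy.spatial import cKDTree
 
 def icp(model, scene, max_iters=50, tol=1e-6):
     R, t = np.eye(3), np.zeros(3)
     tree = cKDTree(scene)
     prev_err = np.inf
     for _ in range(max_iters):
         transformed = model @ R.T + t
         dists, idx = tree.query(transformed)     # closest scene point for each model point
         R, t = rigid_align(model, scene[idx])    # least-squares fit on the current pairs
         err = np.mean(dists ** 2)
         if abs(prev_err - err) < tol:            # stop when the error stops improving
             break
         prev_err = err
     return R, t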


Robust point matching

Robust point matching (RPM) was introduced by Gold et al. The method performs registration using deterministic annealing and soft assignment of correspondences between point sets. Whereas in ICP the correspondence generated by the nearest-neighbour heuristic is binary, RPM uses a ''soft'' correspondence where the correspondence between any two points can be anywhere from 0 to 1, although it ultimately converges to either 0 or 1. The correspondence found in RPM is always one-to-one, which is not always the case in ICP. Let m_i be the ith point in \mathcal{M} and s_j be the jth point in \mathcal{S}. The ''match matrix'' \mathbf{\mu} is defined as such:

:\mu_{ij} = \begin{cases} 1 & \text{if point } m_i \text{ corresponds to point } s_j \\ 0 & \text{otherwise} \end{cases}

The problem is then defined as: given two point sets \mathcal{M} and \mathcal{S}, find the affine transformation T and the match matrix \mathbf{\mu} that best relates them. Knowing the optimal transformation makes it easy to determine the match matrix, and vice versa. However, the RPM algorithm determines both simultaneously. The transformation may be decomposed into a translation vector and a transformation matrix:

:T(m) = \mathbf{A}m + \mathbf{t}

The matrix \mathbf{A} in 2D is composed of four separate parameters \lbrace a, \theta, b, c\rbrace, which are scale, rotation, and the vertical and horizontal shear components respectively. The cost function is then:

:\operatorname{cost} = \sum_{j=1}^N \sum_{i=1}^M \mu_{ij} \left( \lVert s_j - \mathbf{t} - \mathbf{A} m_i \rVert^2 - \alpha \right) + g(\mathbf{A})

subject to \forall j~\sum_{i=1}^M \mu_{ij} \leq 1, \forall i~\sum_{j=1}^N \mu_{ij} \leq 1, \forall ij~\mu_{ij} \in \lbrace0, 1\rbrace. The \alpha term biases the objective towards stronger correlation by decreasing the cost if the match matrix has more ones in it. The function g(\mathbf{A}) serves to regularize the affine transformation by penalizing large values of the scale and shear components:

:g(\mathbf{A}(a,\theta, b, c)) = \gamma(a^2 + b^2 + c^2)

for some regularization parameter \gamma. The RPM method optimizes the cost function using the Softassign algorithm. The 1D case will be derived here. Given a set of variables \lbrace Q_j\rbrace where Q_j \in \mathbb{R}^1, a variable \mu_j is associated with each Q_j such that \sum_{j=1}^J \mu_j = 1. The goal is to find \mathbf{\mu} that maximizes \sum_{j=1}^J \mu_j Q_j. This can be formulated as a continuous problem by introducing a control parameter \beta > 0. In the deterministic annealing method, the control parameter \beta is slowly increased as the algorithm runs. Let \mathbf{\mu} be:

:\mu_j = \frac{\exp(\beta Q_j)}{\sum_{k=1}^{J} \exp(\beta Q_k)}

This is known as the softmax function. As \beta increases, it approaches the desired binary value. The problem may now be generalized to the 2D case, where instead of maximizing \sum_{j=1}^J \mu_j Q_j, the following is maximized:

:\sum_{j=1}^N \sum_{i=1}^M \mu_{ij} Q_{ij}

where

:Q_{ij} = -\left(\lVert s_j - \mathbf{t} - \mathbf{A} m_i \rVert^2 - \alpha\right) = -\frac{\partial \operatorname{cost}}{\partial \mu_{ij}}

This is straightforward, except that now the constraints on \mu are doubly stochastic matrix constraints: \forall j~\sum_{i=1}^M \mu_{ij} = 1 and \forall i~\sum_{j=1}^N \mu_{ij} = 1. As such the softmax denominator cannot be expressed as simply for the 2D case. To satisfy the constraints, it is possible to use a result due to Sinkhorn, which states that a doubly stochastic matrix is obtained from any square matrix with all positive entries by the iterative process of alternating row and column normalizations. Thus the algorithm is written as such:

 algorithm RPM(''M'', ''S'')
     ''β'' := ''β''0
     while ''β'' < ''β''f:
         while ''μ'' has not converged:
             ''// update correspondence parameters by softassign''
             ''// apply Sinkhorn's method''
             ''// update pose parameters by coordinate descent''
             update ''θ'' using analytical solution
             update ''t'' using analytical solution
             update ''a'', ''b'', ''c'' using Newton's method
         ''β'' := ''β''r''β''
     return ''A'' and ''t''

where the deterministic annealing control parameter \beta is initially set to \beta_0 and increases by factor \beta_r until it reaches the maximum value \beta_f. The summations in the normalization steps sum to M+1 and N+1 instead of just M and N because the constraints on \mu are inequalities. As such the M+1th and N+1th elements are slack variables. The algorithm can also be extended for point sets in 3D or higher dimensions. The constraints on the correspondence matrix \mathbf{\mu} are the same in the 3D case as in the 2D case. Hence the structure of the algorithm remains unchanged, with the main difference being how the rotation and translation matrices are solved.
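The core softassign-plus-Sinkhorn step can be sketched as follows in Python; for brevity the sketch omits the slack row and column used to handle outliers, so the names and details are illustrative only.

 # Sketch of one softassign step: softmax of the benefit matrix Q at
 # temperature beta, followed by Sinkhorn row/column normalizations.
 import numpy as np
 
 def softassign(Q, beta, sinkhorn_iters=30):
     """Q: (M, N) benefit matrix; returns a soft correspondence matrix."""
     mu = np.exp(beta * Q)                      # positive entries, softmax numerator
     for _ in range(sinkhorn_iters):
         mu /= mu.sum(axis=1, keepdims=True)    # normalize rows
         mu /= mu.sum(axis=0, keepdims=True)    # normalize columns
     return mu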


Thin plate spline robust point matching

The thin plate spline robust point matching (TPS-RPM) algorithm by Chui and Rangarajan augments the RPM method to perform non-rigid registration by parametrizing the transformation as a thin plate spline. However, because the thin plate spline parametrization only exists in three dimensions, the method cannot be extended to problems involving four or more dimensions.


Kernel correlation

The kernel correlation (KC) approach of point set registration was introduced by Tsin and Kanade. Compared with ICP, the KC algorithm is more robust against noisy data. Unlike ICP, where, for every model point, only the closest scene point is considered, here every scene point affects every model point. As such this is a ''multiply-linked'' registration algorithm. For some kernel function K, the kernel correlation KC of two points x_i, x_j is defined thus:

:KC(x_i, x_j) = \int K(x, x_i) \, K(x, x_j) \, dx

The kernel function K chosen for point set registration is typically a symmetric and non-negative kernel, similar to the ones used in the Parzen window density estimation. The Gaussian kernel is typically used for its simplicity, although other ones like the Epanechnikov kernel and the tricube kernel may be substituted. The kernel correlation of an entire point set \mathcal{X} is defined as the sum of the kernel correlations of every point in the set to every other point in the set:

:KC(\mathcal{X}) = \sum_{i \neq j} KC(x_i, x_j)

The logarithm of KC of a point set is proportional, within a constant factor, to the information entropy. Observe that the KC is a measure of a "compactness" of the point set; trivially, if all points in the point set were at the same location, the KC would evaluate to a large value. The cost function of the point set registration algorithm for some transformation parameter \theta is defined thus:

:\operatorname{COST}(\mathcal{S}, \mathcal{M}, \theta) = -\sum_{s \in \mathcal{S}} \sum_{m \in \mathcal{M}} KC(s, T(m, \theta))

Some algebraic manipulation yields:

:KC(\mathcal{S} \cup T(\mathcal{M}, \theta)) = KC(\mathcal{S}) + KC(T(\mathcal{M}, \theta)) - 2\operatorname{COST}(\mathcal{S}, \mathcal{M}, \theta)

The expression is simplified by observing that KC(\mathcal{S}) is independent of \theta. Furthermore, assuming rigid registration, KC(T(\mathcal{M}, \theta)) is invariant when \theta is changed because the Euclidean distance between every pair of points stays the same under rigid transformation. So the above equation may be rewritten as:

:\operatorname{COST}(\mathcal{S}, \mathcal{M}, \theta) = C - \tfrac{1}{2} KC(\mathcal{S} \cup T(\mathcal{M}, \theta))

where C is a constant. The kernel density estimates are defined as:

:P_{\mathcal{M}}(x, \theta) = \frac{1}{M} \sum_{m \in \mathcal{M}} K(x, T(m, \theta))
:P_{\mathcal{S}}(x) = \frac{1}{N} \sum_{s \in \mathcal{S}} K(x, s)

The cost function can then be shown to be proportional to the (negative) correlation of the two kernel density estimates:

:\operatorname{COST}(\mathcal{S}, \mathcal{M}, \theta) \propto -\int P_{\mathcal{M}}(x, \theta) \, P_{\mathcal{S}}(x) \, dx

Having established the cost function, the algorithm simply uses gradient descent to find the optimal transformation. It is computationally expensive to compute the cost function from scratch on every iteration, so a discrete version of the cost function is used instead. The kernel density estimates P_{\mathcal{M}}, P_{\mathcal{S}} can be evaluated at grid points and stored in a lookup table. Unlike the ICP and related methods, it is not necessary to find the nearest neighbour, which allows the KC algorithm to be comparatively simple in implementation. Compared to ICP and EM-ICP for noisy 2D and 3D point sets, the KC algorithm is less sensitive to noise and results in correct registration more often.
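For a Gaussian kernel, the discrete kernel-correlation style cost reduces to a sum over all scene-model point pairs, which the following Python sketch evaluates for a candidate rigid transformation; the bandwidth h and function names are illustrative, not the authors' implementation.

 # Sketch of a multiply-linked, Gaussian-kernel registration cost: every scene
 # point interacts with every transformed model point, so the cost is smooth
 # in the transformation and amenable to gradient descent.
 import numpy as np
 
 def kc_cost(model, scene, R, t, h=0.05):
     transformed = model @ R.T + t
     # pairwise squared distances between all scene points and all model points
     d2 = np.sum((scene[:, None, :] - transformed[None, :, :]) ** 2, axis=2)
     return -np.sum(np.exp(-d2 / (2.0 * h ** 2)))   # more overlap -> lower cost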


Gaussian mixture model

The kernel density estimates are sums of Gaussians and may therefore be represented as Gaussian mixture models (GMM). Jian and Vemuri use the GMM version of the KC registration algorithm to perform non-rigid registration parametrized by thin plate splines.


Coherent point drift

Coherent point drift (CPD) was introduced by Myronenko and Song. The algorithm takes a probabilistic approach to aligning point sets, similar to the GMM KC method. Unlike earlier approaches to non-rigid registration which assume a thin plate spline transformation model, CPD is agnostic with regard to the transformation model used. The point set \mathcal{M} represents the Gaussian mixture model (GMM) centroids. When the two point sets are optimally aligned, the correspondence is the maximum of the GMM posterior probability for a given data point. To preserve the topological structure of the point sets, the GMM centroids are forced to move coherently as a group. The expectation-maximization algorithm is used to optimize the cost function. Let there be M points in \mathcal{M} and N points in \mathcal{S}. The GMM probability density function for a point s is:

:p(s) = \sum_{i=1}^{M+1} P(i) \, p(s \mid i)

where, in D dimensions, p(s \mid i) is the Gaussian distribution centered on point m_i \in \mathcal{M}:

:p(s \mid i) = \frac{1}{(2\pi\sigma^2)^{D/2}} \exp\left( -\frac{\lVert s - m_i \rVert^2}{2\sigma^2} \right)

The membership probability P(i) = \frac{1}{M} is equal for all GMM components. The weight of the uniform distribution is denoted as w \in [0,1]. The mixture model is then:

:p(s) = w \frac{1}{N} + (1 - w) \sum_{i=1}^{M} \frac{1}{M} \, p(s \mid i)

The GMM centroids are re-parametrized by a set of parameters \theta estimated by maximizing the likelihood. This is equivalent to minimizing the negative log-likelihood function:

:E(\theta, \sigma^2) = -\sum_{j=1}^N \log \sum_{i=1}^{M+1} P(i) \, p(s_j \mid i)

where it is assumed that the data is independent and identically distributed. The correspondence probability between two points m_i and s_j is defined as the posterior probability of the GMM centroid given the data point:

:P(i \mid s_j) = \frac{P(i) \, p(s_j \mid i)}{p(s_j)}

The expectation-maximization (EM) algorithm is used to find \theta and \sigma^2. The EM algorithm consists of two steps. First, in the E-step or ''estimation'' step, it guesses the values of parameters ("old" parameter values) and then uses Bayes' theorem to compute the posterior probability distributions P^{\text{old}}(i \mid s_j) of mixture components. Second, in the M-step or ''maximization'' step, the "new" parameter values are then found by minimizing the expectation of the complete negative log-likelihood function, ''i.e.'' the cost function:

:Q = -\sum_{j=1}^N \sum_{i=1}^{M+1} P^{\text{old}}(i \mid s_j) \log\left( P^{\text{new}}(i) \, p^{\text{new}}(s_j \mid i) \right)

Ignoring constants independent of \theta and \sigma, the cost function can be expressed thus:

:Q(\theta, \sigma^2) = \frac{1}{2\sigma^2} \sum_{j=1}^N \sum_{i=1}^{M} P^{\text{old}}(i \mid s_j) \, \lVert s_j - T(m_i, \theta) \rVert^2 + \frac{N_\mathbf{P} D}{2} \log \sigma^2

where

:N_\mathbf{P} = \sum_{j=1}^N \sum_{i=1}^M P^{\text{old}}(i \mid s_j) \leq N

with N = N_\mathbf{P} only if w=0. The posterior probabilities of GMM components computed using the previous parameter values are:

:P^{\text{old}}(i \mid s_j) = \frac{\exp\left( -\frac{1}{2(\sigma^{\text{old}})^2} \lVert s_j - T(m_i, \theta^{\text{old}}) \rVert^2 \right)}{\sum_{k=1}^{M} \exp\left( -\frac{1}{2(\sigma^{\text{old}})^2} \lVert s_j - T(m_k, \theta^{\text{old}}) \rVert^2 \right) + \left(2\pi(\sigma^{\text{old}})^2\right)^{D/2} \frac{w}{1-w} \frac{M}{N}}

Minimizing the cost function necessarily decreases the negative log-likelihood function unless it is already at a local minimum. Thus, the algorithm can be expressed using the following pseudocode, where the point sets \mathcal{M} and \mathcal{S} are represented as M\times D and N\times D matrices \mathbf{M} and \mathbf{S} respectively:

 algorithm CPD(''M'', ''S'', ''w'')
     ''θ'' := ''θ''0
     while not registered:
         ''// E-step, compute P''
         ''P'' := posterior(''M'', ''S'', ''θ'', ''σ''2, ''w'')
         ''// M-step, solve for optimal transformation''
         (''θ'', ''σ''2) := solve(''M'', ''S'', ''P'')
     return ''θ''

where the vector \mathbf{1} is a column vector of ones. The solve function differs by the type of registration performed. For example, in rigid registration, the output is a scale a, a rotation matrix \mathbf{R}, and a translation vector \mathbf{t}. The parameter \theta can be written as a tuple of these:

:\theta = \lbrace a, \mathbf{R}, \mathbf{t} \rbrace

which is initialized to one, the identity matrix, and a column vector of zeroes:

:\theta_0 = \lbrace 1, \mathbf{I}, \mathbf{0} \rbrace

The aligned point set is:

:T(\mathbf{M}) = a\mathbf{M}\mathbf{R}^T + \mathbf{1}\mathbf{t}^T

The solve_rigid function for rigid registration computes the new values of a, \mathbf{R}, \mathbf{t} and \sigma^2 in closed form from \mathbf{M}, \mathbf{S} and \mathbf{P} (using the \mathbf{P}-weighted centroids of both point sets and a singular value decomposition of the weighted cross-covariance matrix), with derivation of the algebra explained in Myronenko's 2010 paper; in that derivation, diag(''ξ'') denotes the diagonal matrix formed from vector ''ξ'' and tr(\cdot) the trace of a matrix. For affine registration, where the goal is to find an affine transformation instead of a rigid one, the output is an affine transformation matrix \mathbf{B} and a translation \mathbf{t} such that the aligned point set is:

:T(\mathbf{M}) = \mathbf{M}\mathbf{B}^T + \mathbf{1}\mathbf{t}^T

The solve_affine function for affine registration can likewise be written in closed form, with derivation of the algebra explained in Myronenko's 2010 paper. It is also possible to use CPD with non-rigid registration using a parametrization derived using calculus of variations. Sums of Gaussian distributions can be computed in linear time using the fast Gauss transform (FGT). Consequently, the time complexity of CPD is O(M+N), which is asymptotically much faster than O(MN) methods.
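The E-step described above can be sketched compactly in Python: the posterior of each GMM component is compared against the uniform outlier component of weight w. Variable names are illustrative, and the M-step (the solve function) is omitted.

 # Sketch of the CPD E-step: posterior responsibilities P(i | s_j) for the
 # Gaussian components, with the uniform outlier term in the denominator.
 import numpy as np
 
 def cpd_e_step(transformed_model, scene, sigma2, w):
     M, D = transformed_model.shape
     N = scene.shape[0]
     d2 = np.sum((scene[:, None, :] - transformed_model[None, :, :]) ** 2, axis=2)
     num = np.exp(-d2 / (2.0 * sigma2))                       # (N, M) Gaussian terms
     c = (2.0 * np.pi * sigma2) ** (D / 2.0) * (w / (1.0 - w)) * (M / N)
     P = num / (num.sum(axis=1, keepdims=True) + c)           # posterior P(i | s_j)
     return P.T                                               # (M, N), as in the pseudocode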


Bayesian coherent point drift (BCPD)

A variant of coherent point drift, called Bayesian coherent point drift (BCPD), was derived through a Bayesian formulation of point set registration. BCPD has several advantages over CPD, e.g., (1) nonrigid and rigid registrations can be performed in a single algorithm, (2) the algorithm can be accelerated regardless of the Gaussianity of a Gram matrix to define motion coherence, (3) the algorithm is more robust against outliers because of a more reasonable definition of an outlier distribution. Additionally, in the Bayesian formulation, motion coherence was introduced through a prior distribution of displacement vectors, providing a clear difference between tuning parameters that control motion coherence. BCPD was further accelerated by a method called BCPD++, which is a three-step procedure composed of (1) downsampling of point sets, (2) registration of downsampled point sets, and (3) interpolation of a deformation field. The method can register point sets composed of more than 10M points while maintaining its registration accuracy.


Coherent point drift with local surface geometry (LSG-CPD)

A variant of coherent point drift, called coherent point drift with local surface geometry (LSG-CPD), was proposed for rigid point cloud registration. The method adaptively adds different levels of point-to-plane penalization on top of the point-to-point penalization based on the flatness of the local surface. This results in GMM components with anisotropic covariances, instead of the isotropic covariances in the original CPD. The anisotropic covariance matrix \Sigma_m of the m-th point in the target set is constructed from \mathbf{n}_m, the normal vector corresponding to the same point, and an identity matrix \mathbf{I} that serves as a regularizer, pulling the problem away from ill-posedness. \alpha_m is the penalization coefficient (a modified sigmoid function), which is set adaptively to add different levels of point-to-plane penalization depending on how flat the local surface is. This is realized by evaluating the surface variation \kappa_m within the neighborhood of the m-th target point. \alpha_{\max} is the upper bound of the penalization. The point cloud registration is formulated as a maximum likelihood estimation (MLE) problem and solved with the expectation-maximization (EM) algorithm. In the E-step, the correspondence computation is recast into simple matrix manipulations and efficiently computed on a GPU. In the M-step, an unconstrained optimization on a matrix Lie group is designed to efficiently update the rigid transformation of the registration. Taking advantage of the local geometric covariances, the method shows a superior performance in accuracy and robustness to noise and outliers, compared with the baseline CPD. An enhanced runtime performance is expected thanks to the GPU-accelerated correspondence calculation. An implementation of LSG-CPD is open-source.


Sorting the Correspondence Space (SCS)

This algorithm was introduced in 2013 by H. Assalih to accommodate sonar image registration. These types of images tend to have high amounts of noise, so it is expected to have many outliers in the point sets to match. SCS delivers high robustness against outliers and can surpass ICP and CPD performance in the presence of outliers. SCS doesn't use iterative optimization in high dimensional space and is neither probabilistic nor spectral. SCS can match rigid and non-rigid transformations, and performs best when the target transformation is between three and six degrees of freedom.


See also

* Point feature matching
* Point-set triangulation
* Normal distributions transform


References


External links

* Reference implementation of thin plate spline robust point matching
* Reference implementation of kernel correlation point set registration
* Reference implementation of coherent point drift
* Reference implementation of ICP variants
* Reference implementation of Bayesian coherent point drift
* Reference implementation of LSG-CPD