Kernel Machines

In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified ''feature map''; in contrast, kernel methods require only a user-specified ''kernel'', i.e., a similarity function over all pairs of data points, computed using inner products. Although the feature map in kernel machines may be infinite-dimensional, by the representer theorem only a finite-dimensional matrix of kernel evaluations over the training data is required. Without parallel processing, kernel machines are slow to compute for datasets larger than a couple of thousand examples.

Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, ''implicit'' feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates, and the approach is called the "kernel trick". Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.

Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal component analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.

Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).


Motivation and informal explanation

Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the i-th training example (\mathbf{x}_i, y_i) and learn for it a corresponding weight w_i. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function k, called a kernel, between the unlabeled input \mathbf{x}' and each of the training inputs \mathbf{x}_i. For instance, a kernelized binary classifier typically computes a weighted sum of similarities

\hat{y} = \sgn \sum_{i=1}^n w_i y_i k(\mathbf{x}_i, \mathbf{x}'),

where
* \hat{y} \in \{-1, +1\} is the kernelized binary classifier's predicted label for the unlabeled input \mathbf{x}' whose hidden true label y is of interest;
* k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R} is the kernel function that measures similarity between any pair of inputs \mathbf{x}, \mathbf{x}' \in \mathcal{X};
* the sum ranges over the n labeled examples \{(\mathbf{x}_i, y_i)\}_{i=1}^n in the classifier's training set, with y_i \in \{-1, +1\};
* the w_i \in \mathbb{R} are the weights for the training examples, as determined by the learning algorithm;
* the sign function \sgn determines whether the predicted classification \hat{y} comes out positive or negative.

A sketch of this prediction rule in code follows the list below. Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron. They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
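
To make the prediction rule concrete, here is a minimal sketch in Python/NumPy; the toy data, the uniform weights, and the choice of a Gaussian similarity function are illustrative assumptions, not the output of any particular training algorithm.

    import numpy as np

    def rbf_similarity(x, x_prime, gamma=1.0):
        """Similarity k(x, x') = exp(-gamma * ||x - x'||^2)."""
        return np.exp(-gamma * np.sum((x - x_prime) ** 2))

    def predict(x_new, X_train, y_train, weights, kernel=rbf_similarity):
        """Kernelized binary prediction: sgn(sum_i w_i * y_i * k(x_i, x_new))."""
        score = sum(w * y * kernel(x_i, x_new)
                    for w, y, x_i in zip(weights, y_train, X_train))
        return int(np.sign(score))

    # Toy training set: two points with labels in {-1, +1} and uniform weights.
    X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
    y_train = np.array([-1, 1])
    weights = np.array([1.0, 1.0])

    # The query point lies nearer the positive example, so the weighted
    # sum of similarities comes out positive.
    print(predict(np.array([0.9, 0.8]), X_train, y_train, weights))  # -> 1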


Mathematics: the kernel trick

The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all \mathbf{x} and \mathbf{x}' in the input space \mathcal{X}, certain functions k(\mathbf{x}, \mathbf{x}') can be expressed as an inner product in another space \mathcal{V}. The function k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R} is often referred to as a ''kernel'' or a ''kernel function''. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.

Certain problems in machine learning have more structure than an arbitrary weighting function k. The computation is made much simpler if the kernel can be written in the form of a "feature map" \varphi \colon \mathcal{X} \to \mathcal{V} which satisfies

k(\mathbf{x}, \mathbf{x}') = \langle \varphi(\mathbf{x}), \varphi(\mathbf{x}') \rangle_\mathcal{V}.

The key restriction is that \langle \cdot, \cdot \rangle_\mathcal{V} must be a proper inner product. On the other hand, an explicit representation for \varphi is not necessary, as long as \mathcal{V} is an inner product space.
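
As a concrete check of this identity, the following sketch (assuming NumPy; the degree-2 polynomial kernel and the feature map \varphi(x) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2) are a standard textbook pairing) verifies numerically that k(\mathbf{x}, \mathbf{x}') = (\mathbf{x} \cdot \mathbf{x}')^2 equals the inner product of the explicit feature vectors:

    import numpy as np

    def phi(x):
        """Explicit degree-2 feature map on R^2: (x1^2, sqrt(2)*x1*x2, x2^2)."""
        return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

    def poly_kernel(x, y):
        """Homogeneous polynomial kernel of degree 2: k(x, y) = (x . y)^2."""
        return np.dot(x, y) ** 2

    x = np.array([1.0, 2.0])
    y = np.array([3.0, 4.0])

    # Both sides agree: the kernel evaluates the feature-space inner
    # product without ever forming phi.
    assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
    print(poly_kernel(x, y))  # 121.0, since (1*3 + 2*4)^2 = 11^2

Here the kernel evaluates a 3-dimensional inner product at the cost of one 2-dimensional dot product and a squaring; for degree-d kernels on high-dimensional inputs the explicit feature space grows combinatorially while the cost of a kernel evaluation does not, which is the computational point of the trick.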
The alternative follows from Mercer's theorem: an implicitly defined function \varphi exists whenever the space \mathcal{X} can be equipped with a suitable measure ensuring the function k satisfies Mercer's condition.

Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure \mu(T) = |T| for all T \subset \mathcal{X}, which counts the number of points inside the set T, then the integral in Mercer's theorem reduces to a summation

\sum_{i=1}^n \sum_{j=1}^n k(\mathbf{x}_i, \mathbf{x}_j) c_i c_j \geq 0.

If this summation holds for all finite sequences of points (\mathbf{x}_1, \dotsc, \mathbf{x}_n) in \mathcal{X} and all choices of n real-valued coefficients (c_1, \dotsc, c_n) (cf. positive-definite kernel), then the function k satisfies Mercer's condition.

Some algorithms that depend on arbitrary relationships in the native space \mathcal{X} would, in fact, have a linear interpretation in a different setting: the range space of \varphi. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute \varphi directly during computation, as is the case with support-vector machines. Some cite this running-time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.

Theoretically, a Gram matrix \mathbf{K} \in \mathbb{R}^{n \times n} with respect to \{\mathbf{x}_1, \dotsc, \mathbf{x}_n\} (sometimes also called a "kernel matrix"), where K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j), must be positive semi-definite (PSD). Empirically, for machine learning heuristics, choices of a function k that do not satisfy Mercer's condition may still perform reasonably if k at least approximates the intuitive idea of similarity. Regardless of whether k is a Mercer kernel, k may still be referred to as a "kernel". If the kernel function k is also a covariance function as used in Gaussian processes, then the Gram matrix \mathbf{K} can also be called a covariance matrix.
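
A minimal numerical sketch of the PSD property (assuming NumPy; the points and the Gaussian kernel are arbitrary illustrative choices): build the Gram matrix for a Mercer kernel and confirm that its eigenvalues are non-negative up to floating-point tolerance.

    import numpy as np

    def rbf_kernel(x, y, gamma=0.5):
        """Gaussian (RBF) kernel, a standard Mercer kernel."""
        return np.exp(-gamma * np.sum((x - y) ** 2))

    # A few arbitrary points in R^2.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
    n = len(X)

    # Gram (kernel) matrix K_ij = k(x_i, x_j); symmetric by construction.
    K = np.array([[rbf_kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

    # A Mercer kernel yields a PSD Gram matrix: all eigenvalues >= 0
    # (up to numerical round-off).
    eigenvalues = np.linalg.eigvalsh(K)
    print(eigenvalues)
    assert np.all(eigenvalues >= -1e-10)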


Applications

Application areas of kernel methods are diverse and include geostatistics, kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.


Popular kernels

* Fisher kernel
* Graph kernels
* Kernel smoother
* Polynomial kernel
* Radial basis function kernel (RBF)
* String kernels
* Neural tangent kernel
* Neural network Gaussian process (NNGP) kernel


See also

* Kernel methods for vector output
* Kernel density estimation
* Representer theorem
* Similarity learning
* Cover's theorem




External links


Kernel-Machines Org, a community website
Kernel Methods article at onlineprediction.net