In statistics, kernel Fisher discriminant analysis (KFD), also known as generalized discriminant analysis and kernel discriminant analysis, is a kernelized version of linear discriminant analysis
(LDA). It is named after Ronald Fisher.
Linear discriminant analysis
Intuitively, the idea of LDA is to find a projection where class separation is maximized. Given two sets of labeled data, $\mathbf{C}_1$ and $\mathbf{C}_2$, we can calculate the mean value of each class, $\mathbf{m}_1$ and $\mathbf{m}_2$, as

: $\mathbf{m}_i = \frac{1}{l_i}\sum_{n=1}^{l_i}\mathbf{x}_n^i,$

where $l_i$ is the number of examples of class $\mathbf{C}_i$. The goal of linear discriminant analysis is to give a large separation of the class means while also keeping the in-class variance small.
This is formulated as maximizing, with respect to $\mathbf{w}$, the following ratio:

: $J(\mathbf{w}) = \frac{\mathbf{w}^{\mathrm{T}}\mathbf{S}_B\mathbf{w}}{\mathbf{w}^{\mathrm{T}}\mathbf{S}_W\mathbf{w}},$

where $\mathbf{S}_B$ is the between-class covariance matrix and $\mathbf{S}_W$ is the total within-class covariance matrix:

: $\mathbf{S}_B = (\mathbf{m}_2-\mathbf{m}_1)(\mathbf{m}_2-\mathbf{m}_1)^{\mathrm{T}}$
: $\mathbf{S}_W = \sum_{i=1,2}\sum_{n=1}^{l_i}(\mathbf{x}_n^i-\mathbf{m}_i)(\mathbf{x}_n^i-\mathbf{m}_i)^{\mathrm{T}}.$

The maximum of the above ratio is attained at

: $\mathbf{w} \propto \mathbf{S}_W^{-1}(\mathbf{m}_2-\mathbf{m}_1),$
as can be shown by the
Lagrange multiplier
method (sketch of proof):
Maximizing $J(\mathbf{w})$ is equivalent to maximizing

: $\mathbf{w}^{\mathrm{T}}\mathbf{S}_B\mathbf{w}$

subject to

: $\mathbf{w}^{\mathrm{T}}\mathbf{S}_W\mathbf{w} = 1.$
This, in turn, is equivalent to maximizing $I(\mathbf{w},\lambda) = \mathbf{w}^{\mathrm{T}}\mathbf{S}_B\mathbf{w} - \lambda\left(\mathbf{w}^{\mathrm{T}}\mathbf{S}_W\mathbf{w} - 1\right)$, where $\lambda$ is the Lagrange multiplier.
At the maximum, the derivatives of $I$ with respect to $\mathbf{w}$ and $\lambda$ must be zero. Taking $\frac{dI}{d\mathbf{w}} = \mathbf{0}$ yields

: $\mathbf{S}_B\mathbf{w} = \lambda\mathbf{S}_W\mathbf{w},$

which is trivially satisfied by $\mathbf{w} = c\,\mathbf{S}_W^{-1}(\mathbf{m}_2-\mathbf{m}_1)$ and $\lambda = (\mathbf{m}_2-\mathbf{m}_1)^{\mathrm{T}}\mathbf{S}_W^{-1}(\mathbf{m}_2-\mathbf{m}_1)$.
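As a numeric sanity check, the following NumPy sketch (the Gaussian toy data are made up, not part of the source) computes the closed-form direction and confirms that it satisfies $\mathbf{S}_B\mathbf{w} = \lambda\mathbf{S}_W\mathbf{w}$ with the stated $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(30, 3))   # made-up samples for class C_1
X2 = rng.normal(2.0, 1.0, size=(35, 3))   # made-up samples for class C_2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)  # class means
d = m2 - m1

Sb = np.outer(d, d)                                      # between-class scatter
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # within-class scatter

w = np.linalg.solve(Sw, d)   # closed-form direction, w = S_W^{-1}(m2 - m1)
lam = d @ w                  # lambda = (m2 - m1)^T S_W^{-1} (m2 - m1)

print(np.allclose(Sb @ w, lam * (Sw @ w)))   # True: S_B w = lambda S_W w
```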
Extending LDA
To extend LDA to non-linear mappings, the data, given as the $\ell$ points $\mathbf{x}_i$, can be mapped to a new feature space, $F$, via some function $\phi$. In this new feature space, the function that needs to be maximized is

: $J(\mathbf{w}) = \frac{\mathbf{w}^{\mathrm{T}}\mathbf{S}_B^{\phi}\mathbf{w}}{\mathbf{w}^{\mathrm{T}}\mathbf{S}_W^{\phi}\mathbf{w}},$

where

: $\mathbf{S}_B^{\phi} = \left(\mathbf{m}_2^{\phi}-\mathbf{m}_1^{\phi}\right)\left(\mathbf{m}_2^{\phi}-\mathbf{m}_1^{\phi}\right)^{\mathrm{T}}$
: $\mathbf{S}_W^{\phi} = \sum_{i=1,2}\sum_{n=1}^{l_i}\left(\phi(\mathbf{x}_n^i)-\mathbf{m}_i^{\phi}\right)\left(\phi(\mathbf{x}_n^i)-\mathbf{m}_i^{\phi}\right)^{\mathrm{T}}$

and

: $\mathbf{m}_i^{\phi} = \frac{1}{l_i}\sum_{j=1}^{l_i}\phi(\mathbf{x}_j^i).$
Further, note that $\mathbf{w} \in F$. Explicitly computing the mappings $\phi(\mathbf{x}_i)$ and then performing LDA can be computationally expensive, and in many cases intractable. For example, $F$ may be infinite-dimensional. Thus, rather than explicitly mapping the data to $F$, the data can be implicitly embedded by rewriting the algorithm in terms of dot products and using kernel functions, in which the dot product in the new feature space is replaced by a kernel function, $k(\mathbf{x},\mathbf{y}) = \phi(\mathbf{x})\cdot\phi(\mathbf{y})$.
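For concreteness, one common choice is the Gaussian (RBF) kernel $k(\mathbf{x},\mathbf{y}) = \exp(-\gamma\|\mathbf{x}-\mathbf{y}\|^2)$. A minimal sketch, in which the bandwidth $\gamma$ and the toy data are assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """k(x, y) = exp(-gamma * ||x - y||^2), evaluated for all pairs of rows."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.random.default_rng(2).normal(size=(5, 2))
K = rbf_kernel(X, X)   # Gram matrix of implicit feature-space dot products
print(K.shape)         # (5, 5)
```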
LDA can be reformulated in terms of dot products by first noting that $\mathbf{w}$ will have an expansion of the form

: $\mathbf{w} = \sum_{i=1}^{\ell}\alpha_i\phi(\mathbf{x}_i).$

Then note that

: $\mathbf{w}^{\mathrm{T}}\mathbf{m}_i^{\phi} = \frac{1}{l_i}\sum_{j=1}^{\ell}\sum_{k=1}^{l_i}\alpha_j k(\mathbf{x}_j,\mathbf{x}_k^i) = \boldsymbol{\alpha}^{\mathrm{T}}\mathbf{M}_i,$

where

: $(\mathbf{M}_i)_j = \frac{1}{l_i}\sum_{k=1}^{l_i}k(\mathbf{x}_j,\mathbf{x}_k^i).$
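In code, $(\mathbf{M}_i)_j$ is simply the row mean of the kernel matrix between all training points and the class-$i$ points. A short sketch, with the same assumed kernel and made-up data as above:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(3)
X1 = rng.normal(0.0, 1.0, size=(30, 2))   # class 1
X2 = rng.normal(2.0, 1.0, size=(25, 2))   # class 2
X = np.vstack([X1, X2])                   # all ell = 55 points

M1 = rbf_kernel(X, X1).mean(axis=1)       # (M_1)_j = mean over k of k(x_j, x_k^1)
M2 = rbf_kernel(X, X2).mean(axis=1)
print(M1.shape, M2.shape)                 # (55,) (55,)
```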
The numerator of $J(\mathbf{w})$ can then be written as:

: $\mathbf{w}^{\mathrm{T}}\mathbf{S}_B^{\phi}\mathbf{w} = \boldsymbol{\alpha}^{\mathrm{T}}\mathbf{M}\boldsymbol{\alpha}, \qquad \text{where } \mathbf{M} = (\mathbf{M}_2-\mathbf{M}_1)(\mathbf{M}_2-\mathbf{M}_1)^{\mathrm{T}}.$

Similarly, the denominator can be written as

: $\mathbf{w}^{\mathrm{T}}\mathbf{S}_W^{\phi}\mathbf{w} = \boldsymbol{\alpha}^{\mathrm{T}}\mathbf{N}\boldsymbol{\alpha}, \qquad \text{where } \mathbf{N} = \sum_{j=1,2}\mathbf{K}_j\left(\mathbf{I}-\mathbf{1}_{l_j}\right)\mathbf{K}_j^{\mathrm{T}},$

with the $(n,m)^{\text{th}}$ component of $\mathbf{K}_j$ defined as $k(\mathbf{x}_n,\mathbf{x}_m^j)$, $\mathbf{I}$ the identity matrix, and $\mathbf{1}_{l_j}$ the matrix with all entries equal to $1/l_j$. This identity can be derived by starting out with the expression for $\mathbf{w}^{\mathrm{T}}\mathbf{S}_W^{\phi}\mathbf{w}$ and using the expansion of $\mathbf{w}$ and the definitions of $\mathbf{S}_W^{\phi}$ and $\mathbf{m}_i^{\phi}$:

: $\begin{aligned}\mathbf{w}^{\mathrm{T}}\mathbf{S}_W^{\phi}\mathbf{w} &= \left(\sum_{i=1}^{\ell}\alpha_i\phi^{\mathrm{T}}(\mathbf{x}_i)\right)\left(\sum_{j=1,2}\sum_{n=1}^{l_j}\left(\phi(\mathbf{x}_n^j)-\mathbf{m}_j^{\phi}\right)\left(\phi(\mathbf{x}_n^j)-\mathbf{m}_j^{\phi}\right)^{\mathrm{T}}\right)\left(\sum_{k=1}^{\ell}\alpha_k\phi(\mathbf{x}_k)\right)\\ &= \sum_{j=1,2}\left(\boldsymbol{\alpha}^{\mathrm{T}}\mathbf{K}_j\mathbf{K}_j^{\mathrm{T}}\boldsymbol{\alpha} - \boldsymbol{\alpha}^{\mathrm{T}}\mathbf{K}_j\mathbf{1}_{l_j}\mathbf{K}_j^{\mathrm{T}}\boldsymbol{\alpha}\right)\\ &= \boldsymbol{\alpha}^{\mathrm{T}}\mathbf{N}\boldsymbol{\alpha}.\end{aligned}$
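The identity can also be checked numerically. The sketch below uses the linear kernel $k(\mathbf{x},\mathbf{y}) = \mathbf{x}\cdot\mathbf{y}$, for which $\phi$ is the identity map, so that $\mathbf{w}^{\mathrm{T}}\mathbf{S}_W^{\phi}\mathbf{w}$ can be computed explicitly and compared with $\boldsymbol{\alpha}^{\mathrm{T}}\mathbf{N}\boldsymbol{\alpha}$ (the data and expansion coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal(0.0, 1.0, size=(6, 3))   # tiny made-up classes
X2 = rng.normal(2.0, 1.0, size=(5, 3))
X = np.vstack([X1, X2])
alpha = rng.normal(size=len(X))          # arbitrary expansion coefficients

# Left side: w^T S_W^phi w computed explicitly with phi(x) = x
w = X.T @ alpha
Sw = sum((Xi - Xi.mean(0)).T @ (Xi - Xi.mean(0)) for Xi in (X1, X2))
lhs = w @ Sw @ w

# Right side: alpha^T N alpha with N = sum_j K_j (I - 1_{l_j}) K_j^T
N = np.zeros((len(X), len(X)))
for Xj in (X1, X2):
    Kj = X @ Xj.T                        # linear kernel: k(x, y) = x . y
    lj = len(Xj)
    N += Kj @ (np.eye(lj) - np.full((lj, lj), 1.0 / lj)) @ Kj.T
rhs = alpha @ N @ alpha

print(np.allclose(lhs, rhs))             # True
```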
With these equations for the numerator and denominator of $J(\mathbf{w})$, the equation for $J$ can be rewritten as

: $J(\boldsymbol{\alpha}) = \frac{\boldsymbol{\alpha}^{\mathrm{T}}\mathbf{M}\boldsymbol{\alpha}}{\boldsymbol{\alpha}^{\mathrm{T}}\mathbf{N}\boldsymbol{\alpha}}.$

Then, differentiating and setting equal to zero gives

: $\left(\boldsymbol{\alpha}^{\mathrm{T}}\mathbf{M}\boldsymbol{\alpha}\right)\mathbf{N}\boldsymbol{\alpha} = \left(\boldsymbol{\alpha}^{\mathrm{T}}\mathbf{N}\boldsymbol{\alpha}\right)\mathbf{M}\boldsymbol{\alpha}.$

Since only the direction of $\mathbf{w}$, and hence the direction of $\boldsymbol{\alpha}$, matters, the above can be solved for $\boldsymbol{\alpha}$ as

: $\boldsymbol{\alpha} = \mathbf{N}^{-1}(\mathbf{M}_2-\mathbf{M}_1).$

Note that in practice, $\mathbf{N}$ is usually singular, and so a multiple of the identity is added to it:

: $\mathbf{N}_{\epsilon} = \mathbf{N} + \epsilon\mathbf{I}.$
Given the solution for $\boldsymbol{\alpha}$, the projection of a new data point is given by

: $y(\mathbf{x}) = (\mathbf{w}\cdot\phi(\mathbf{x})) = \sum_{i=1}^{\ell}\alpha_i k(\mathbf{x}_i,\mathbf{x}).$
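Putting the pieces together, a minimal end-to-end sketch of two-class KFD, training on made-up Gaussian blobs and projecting new points; the RBF kernel, $\gamma$, and $\epsilon$ are assumed choices, not prescribed by the derivation above:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2) for all pairs."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(30, 2))          # made-up class 1
X2 = rng.normal(2.0, 1.0, size=(25, 2))          # made-up class 2
X = np.vstack([X1, X2])                          # all ell = 55 training points
ell = len(X)

# M_1, M_2 and N built from kernel evaluations, as derived above
M1 = rbf_kernel(X, X1).mean(axis=1)
M2 = rbf_kernel(X, X2).mean(axis=1)
N = np.zeros((ell, ell))
for Xj in (X1, X2):
    Kj = rbf_kernel(X, Xj)
    lj = Xj.shape[0]
    N += Kj @ (np.eye(lj) - np.full((lj, lj), 1.0 / lj)) @ Kj.T

# Regularized solve: alpha = (N + eps I)^{-1} (M_2 - M_1); eps is an assumed value
eps = 1e-3
alpha = np.linalg.solve(N + eps * np.eye(ell), M2 - M1)

# Projection of new points: y(x) = sum_i alpha_i k(x_i, x)
def project(X_new):
    return rbf_kernel(np.atleast_2d(X_new), X) @ alpha

print(project(X1).mean(), project(X2).mean())    # the two classes separate
```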
Multi-class KFD
The extension to cases where there are more than two classes is relatively straightforward.
Let $c$ be the number of classes. Then multi-class KFD involves projecting the data into a $(c-1)$-dimensional space using $c-1$ discriminant functions

: $y_i = \mathbf{w}_i^{\mathrm{T}}\phi(\mathbf{x}), \qquad i = 1,\ldots,c-1.$

This can be written in matrix notation

: $\mathbf{y} = \mathbf{W}^{\mathrm{T}}\phi(\mathbf{x}),$

where the $\mathbf{w}_i$ are the columns of $\mathbf{W}$.
Further, the between-class covariance matrix is now

: $\mathbf{S}_B^{\phi} = \sum_{i=1}^{c} l_i\left(\mathbf{m}_i^{\phi}-\mathbf{m}^{\phi}\right)\left(\mathbf{m}_i^{\phi}-\mathbf{m}^{\phi}\right)^{\mathrm{T}},$

where $\mathbf{m}^{\phi}$ is the mean of all the data in the new feature space. The within-class covariance matrix is

: $\mathbf{S}_W^{\phi} = \sum_{i=1}^{c}\sum_{n=1}^{l_i}\left(\phi(\mathbf{x}_n^i)-\mathbf{m}_i^{\phi}\right)\left(\phi(\mathbf{x}_n^i)-\mathbf{m}_i^{\phi}\right)^{\mathrm{T}}.$
The solution is now obtained by maximizing

: $J(\mathbf{W}) = \frac{\left|\mathbf{W}^{\mathrm{T}}\mathbf{S}_B^{\phi}\mathbf{W}\right|}{\left|\mathbf{W}^{\mathrm{T}}\mathbf{S}_W^{\phi}\mathbf{W}\right|}.$

The kernel trick can again be used, and the goal of multi-class KFD becomes

: $\mathbf{A}^{*} = \underset{\mathbf{A}}{\operatorname{argmax}}\,\frac{\left|\mathbf{A}^{\mathrm{T}}\mathbf{M}\mathbf{A}\right|}{\left|\mathbf{A}^{\mathrm{T}}\mathbf{N}\mathbf{A}\right|},$

where $\mathbf{A} = [\boldsymbol{\alpha}_1,\ldots,\boldsymbol{\alpha}_{c-1}]$ and

: $\mathbf{M} = \sum_{j=1}^{c} l_j\left(\mathbf{M}_j-\mathbf{M}_{*}\right)\left(\mathbf{M}_j-\mathbf{M}_{*}\right)^{\mathrm{T}}$
: $\mathbf{N} = \sum_{j=1}^{c}\mathbf{K}_j\left(\mathbf{I}-\mathbf{1}_{l_j}\right)\mathbf{K}_j^{\mathrm{T}}.$

The $\mathbf{M}_j$ are defined as in the two-class case above, and $\mathbf{M}_{*}$ is defined as

: $(\mathbf{M}_{*})_j = \frac{1}{\ell}\sum_{k=1}^{\ell}k(\mathbf{x}_j,\mathbf{x}_k).$
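A rough end-to-end sketch of the multi-class case follows. The determinant ratio is maximized by the top $c-1$ generalized eigenvectors of the pair $(\mathbf{M},\,\mathbf{N}+\epsilon\mathbf{I})$, a standard reformulation, which the sketch obtains with SciPy; the three Gaussian blobs, $\gamma$, and $\epsilon$ are assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=0.5):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(6)
classes = [rng.normal(loc=c, scale=1.0, size=(20, 2)) for c in (0.0, 3.0, 6.0)]
X = np.vstack(classes)                    # all ell = 60 points, c = 3 classes
ell = len(X)

M_star = rbf_kernel(X, X).mean(axis=1)    # (M_*)_j = mean over all training points
M = np.zeros((ell, ell))
N = np.zeros((ell, ell))
for Xj in classes:
    lj = len(Xj)
    Kj = rbf_kernel(X, Xj)
    Mj = Kj.mean(axis=1)                  # (M_j)_n = mean over class-j points
    M += lj * np.outer(Mj - M_star, Mj - M_star)
    N += Kj @ (np.eye(lj) - np.full((lj, lj), 1.0 / lj)) @ Kj.T

eps = 1e-3
vals, vecs = eigh(M, N + eps * np.eye(ell))   # generalized eigenproblem, ascending
A = vecs[:, -2:]                              # top c - 1 = 2 discriminant directions
Y = rbf_kernel(X, X) @ A                      # projected training data
print(Y.shape)                                # (60, 2)
```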