In machine learning, Platt scaling or Platt calibration is a way of transforming the outputs of a classification model into a probability distribution over classes. The method was invented by John Platt in the context of support vector machines, replacing an earlier method by Vladimir Vapnik, but it can be applied to other classification models. Platt scaling works by fitting a logistic regression model to a classifier's scores.


Problem formalization

Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1. We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)). For many problems, it is convenient to get a probability P(y = 1 | x), i.e. a classification that not only gives an answer, but also a degree of certainty about the answer. Some classification models do not provide such a probability, or give poor probability estimates.
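To make the gap concrete, the following minimal sketch (Python with scikit-learn, on an assumed synthetic dataset) shows that a margin classifier's raw scores are unbounded real numbers, not probabilities:

```python
# Sketch: an SVM's decision scores f(x) are arbitrary real numbers,
# so they cannot be read directly as P(y=1|x).
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)  # synthetic data
svm = SVC(kernel="rbf").fit(X, y)       # no probability calibration requested
scores = svm.decision_function(X[:5])   # real-valued scores f(x)
print(scores)                           # unbounded; only sign(f(x)) gives the label
```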


Algorithm

Platt scaling is an algorithm to solve the aforementioned problem. It produces probability estimates

:P(y = 1 \mid x) = \frac{1}{1 + \exp(A f(x) + B)},

i.e., a logistic transformation of the classifier output f(x), where A and B are two scalar parameters that are learned by the algorithm. After scaling, labels can be predicted as y = 1 if and only if P(y = 1 | x) > 1/2; note that if B ≠ 0, the probability estimates contain a correction compared with the old decision function y = sign(f(x)). The parameters A and B are estimated using a maximum likelihood method that optimizes on the same training set as that for the original classifier f. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used, but Platt additionally suggests transforming the labels y to target probabilities

:t_+ = \frac{N_+ + 1}{N_+ + 2} for positive samples (y = 1), and

:t_- = \frac{1}{N_- + 2} for negative samples (y = −1).

Here, N_+ and N_- are the numbers of positive and negative samples, respectively. This transformation follows by applying Bayes' rule to a model of out-of-sample data that has a uniform prior over the labels. The constants 1 and 2, in the numerator and denominator respectively, are derived from the application of Laplace smoothing. Platt himself suggested using the Levenberg–Marquardt algorithm to optimize the parameters, but a Newton algorithm was later proposed that should be more numerically stable.
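As a concrete illustration, the sketch below fits A and B by maximizing the likelihood of Platt's smoothed targets. It is a minimal reconstruction, not Platt's original implementation: it uses SciPy's Nelder–Mead optimizer rather than the Levenberg–Marquardt or Newton methods mentioned above, and the function names are illustrative.

```python
# Sketch of Platt scaling: fit A, B in P(y=1|x) = 1/(1 + exp(A*f(x) + B)).
# Labels are encoded 0/1 here, with 1 the positive class.
import numpy as np
from scipy.optimize import minimize

def fit_platt(scores, labels):
    """scores: classifier outputs f(x); labels: 0/1 array (ideally held out)."""
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Platt's Laplace-smoothed target probabilities t_+ and t_-
    t = np.where(labels == 1,
                 (n_pos + 1.0) / (n_pos + 2.0),
                 1.0 / (n_neg + 2.0))

    def neg_log_likelihood(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(a * scores + b))
        p = np.clip(p, 1e-12, 1.0 - 1e-12)          # numerical safety
        return -np.sum(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))

    # A < 0 maps positive scores to probabilities above 1/2
    result = minimize(neg_log_likelihood, x0=[-1.0, 0.0], method="Nelder-Mead")
    return result.x                                  # fitted (A, B)

def platt_probability(score, a, b):
    return 1.0 / (1.0 + np.exp(a * score + b))
```

In practice, scikit-learn exposes the same idea as CalibratedClassifierCV(method="sigmoid"), which also handles the cross-validation for the calibration set.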


Analysis

Platt scaling has been shown to be effective for SVMs as well as other types of classification models, including boosted models and even naive Bayes classifiers, which produce distorted probability distributions. It is particularly effective for max-margin methods such as SVMs and boosted trees, which show sigmoidal distortions in their predicted probabilities, but it has less of an effect with well-calibrated models such as logistic regression, multilayer perceptrons, and random forests. An alternative approach to probability calibration is to fit an isotonic regression model to an ill-calibrated probability model. This has been shown to work better than Platt scaling, in particular when enough training data is available. Platt scaling can also be applied to deep neural network classifiers. In image classification, for example on CIFAR-100, small networks like LeNet-5 have good calibration but low accuracy, while large networks like ResNet have high accuracy but are overconfident in their predictions. A 2017 paper proposed ''temperature scaling'', which simply multiplies the output logits of a network by a constant 1/T before taking the softmax. During training, T is set to 1. After training, T is optimized on a held-out calibration set to minimize the calibration loss.
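A minimal sketch of temperature scaling as just described (plain NumPy, illustrative names; a practical implementation would typically optimize T by gradient descent on a held-out set's logits):

```python
# Sketch of temperature scaling: scale logits by 1/T before the softmax,
# choosing T on a held-out calibration set.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels at temperature T."""
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels):
    """Grid search over T > 0; a simple stand-in for a proper optimizer."""
    grid = np.linspace(0.5, 5.0, 91)
    return grid[np.argmin([nll(val_logits, val_labels, T) for T in grid])]
```

Note that T rescales all logits uniformly, so it changes the confidence of the predictions without changing which class is predicted.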


See also

* Relevance vector machine: probabilistic alternative to the support vector machine

