AdaBoost, short for ''Adaptive Boosting'', is a statistical classification meta-algorithm formulated by Yoav Freund and Robert Schapire in 1995, who won the 2003 Gödel Prize for their work. It can be used in conjunction with many other types of learning algorithms to improve performance. The output of the other learning algorithms ('weak learners') is combined into a weighted sum that represents the final output of the boosted classifier. Usually, AdaBoost is presented for binary classification, although it can be generalized to multiple classes or bounded intervals on the real line.
AdaBoost is adaptive in the sense that subsequent weak learners are tweaked in favor of those instances misclassified by previous classifiers. In some problems it can be less susceptible to the
overfitting
problem than other learning algorithms. The individual learners can be weak, but as long as the performance of each one is slightly better than random guessing, the final model can be proven to converge to a strong learner.
Although AdaBoost is typically used to combine weak base learners (such as
decision stumps), it has been shown that it can also effectively combine strong base learners (such as deep
decision trees), producing an even more accurate model.
Every learning algorithm tends to suit some problem types better than others, and typically has many different parameters and configurations to adjust before it achieves optimal performance on a dataset. AdaBoost (with
decision trees as the weak learners) is often referred to as the best out-of-the-box classifier. When used with decision tree learning, information gathered at each stage of the AdaBoost algorithm about the relative 'hardness' of each training sample is fed into the tree growing algorithm such that later trees tend to focus on harder-to-classify examples.
Training
AdaBoost refers to a particular method of training a boosted classifier. A boosted classifier is a classifier of the form
:
C_T(x) = \sum_{t=1}^{T} f_t(x),
where each f_t is a weak learner that takes an object x as input and returns a value indicating the class of the object. For example, in the two-class problem, the sign of the weak learner's output identifies the predicted object class and the absolute value gives the confidence in that classification. Similarly, the T-th classifier is positive if the sample is in a positive class and negative otherwise.
Each weak learner produces an output hypothesis h which fixes a prediction h(x_i) for each sample in the training set. At each iteration m, a weak learner k_m is selected and assigned a coefficient \alpha_m such that the total exponential error E of the resulting m-stage boosted classifier C_m = C_{m-1} + \alpha_m k_m is minimized:
:
E = \sum_{i=1}^N e^{-y_i C_m(x_i)} = \sum_{i=1}^N w_i^{(m)} e^{-y_i \alpha_m k_m(x_i)},
where w_i^{(1)} = 1 and w_i^{(m)} = e^{-y_i C_{m-1}(x_i)} for m > 1. Splitting this sum between the points correctly classified by k_m (so y_i k_m(x_i) = 1) and those misclassified (so y_i k_m(x_i) = -1) gives
:
E = \sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m} = \sum_{i=1}^N w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \left(e^{\alpha_m} - e^{-\alpha_m}\right).
Since the only part of the right-hand side that depends on k_m is \sum_{y_i \neq k_m(x_i)} w_i^{(m)}, the k_m that minimizes E (assuming that \alpha_m > 0) is the weak classifier with the lowest weighted error (with weights w_i^{(m)} = e^{-y_i C_{m-1}(x_i)}).
To determine the desired weight
\alpha_m that minimizes
E with the
k_m that we just determined, we differentiate:
:
\frac{dE}{d\alpha_m} = \frac{d\left(\sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m}\right)}{d\alpha_m} = -\sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m}
Setting this to zero and solving for
\alpha_m yields:
:
\alpha_m = \frac{1}{2}\ln\left(\frac{\sum_{y_i = k_m(x_i)} w_i^{(m)}}{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}}\right)
We calculate the weighted error rate of the weak classifier to be
\epsilon_m = \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \Big/ \sum_{i=1}^N w_i^{(m)}, so it follows that:
:
\alpha_m = \frac{1}{2}\ln\left( \frac{1 - \epsilon_m}{\epsilon_m}\right)
which is the negative logit function multiplied by 0.5. Due to the convexity of
E as a function of
\alpha_m, this new expression for
\alpha_m gives the global minimum of the loss function.
Note: This derivation only applies when
k_m(x_i) \in \{-1, 1\}, though it can be a good starting guess in other cases, such as when the weak learner is biased (k_m(x) \in \{a, b\}, a \neq -b), has multiple leaves (k_m(x) \in \{a, b, \dots, n\}) or is some other function k_m(x) \in \mathbb{R}.
Thus we have derived the AdaBoost algorithm: At each iteration, choose the classifier
k_m, which minimizes the total weighted error
\sum_{y_i \neq k_m(x_i)} w_i^{(m)}, use this to calculate the error rate
\epsilon_m = \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \Big/ \sum_{i=1}^N w_i^{(m)}, use this to calculate the weight
\alpha_m = \frac{1}{2}\ln\left( \frac{1 - \epsilon_m}{\epsilon_m}\right), and finally use this to improve the boosted classifier
C_{m-1} to
C_m = C_{m-1} + \alpha_m k_m.
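As a quick numerical illustration of the coefficient and weight-update rules above (a minimal Python sketch; the weighted error rate of 0.3 is an arbitrary example value, not taken from the text):
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical weighted error rate epsilon_m of the chosen weak classifier k_m
eps_m = 0.3

# Coefficient that minimizes the exponential loss E
alpha_m = 0.5 * np.log((1 - eps_m) / eps_m)   # about 0.42

# Multiplicative weight update w_i <- w_i * exp(-y_i * alpha_m * k_m(x_i)):
# correctly classified samples are scaled down, misclassified ones scaled up.
down_weight = np.exp(-alpha_m)   # about 0.65, applied when y_i == k_m(x_i)
up_weight = np.exp(alpha_m)      # about 1.53, applied when y_i != k_m(x_i)

print(alpha_m, down_weight, up_weight)
</syntaxhighlight>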
Statistical understanding of boosting
Boosting is a form of linear
regression
in which the features of each sample
x_i are the outputs of some weak learner
h applied to
x_i.
While regression tries to fit F(x) to y(x) as precisely as possible without loss of generalization, typically using least-squares error E(f) = (y(x) - f(x))^2, the AdaBoost error function E(f) = e^{-y(x)f(x)} takes into account the fact that only the sign of the final result is used, so |F(x)| can be far larger than 1 without increasing the error. However, the error for sample x_i increases exponentially as -y(x_i)f(x_i) increases, resulting in excessive weights being assigned to outliers.
One feature of the choice of exponential error function is that the error of the final additive model is the product of the error of each stage, that is,
e^{-y_i \sum_t f_t(x_i)} = \prod_t e^{-y_i f_t(x_i)}. Thus it can be seen that the weight update in the AdaBoost algorithm is equivalent to recalculating the error on
F_t(x) after each stage.
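For instance, a minimal sketch verifying this equivalence on made-up data (the toy labels and scaled weak-learner outputs are drawn at random; the weights here are left unnormalized, since renormalization does not affect the argument):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=5)                  # toy labels
f_stages = [0.4 * rng.choice([-1.0, 1.0], size=5)    # toy alpha_t * h_t(x_i)
            for _ in range(3)]                       # three boosting stages

# Incremental (unnormalized) weight update, as in AdaBoost
w = np.ones(5)
for f in f_stages:
    w = w * np.exp(-y * f)

# Direct evaluation of the exponential error of the additive model F_3
F = sum(f_stages)
assert np.allclose(w, np.exp(-y * F))   # product over stages equals error on F_t
</syntaxhighlight>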
There is a lot of flexibility allowed in the choice of loss function. As long as the loss function is
monotonic and
continuously differentiable, the classifier is always driven toward purer solutions.
Zhang (2004) provides a loss function based on least squares, a modified
Huber loss function
:
\phi(y,f(x)) = \begin{cases} -4y f(x) & \mbox{if } y f(x) < -1, \\ (y f(x) - 1)^2 & \mbox{if } -1 \le y f(x) \le 1, \\ 0 & \mbox{if } y f(x) > 1. \end{cases}
This function is more well-behaved than LogitBoost for
f(x) close to 1 or -1, does not penalise ‘overconfident’ predictions (
y f(x) > 1), unlike unmodified least squares, and only penalises samples misclassified with confidence greater than 1 linearly, as opposed to quadratically or exponentially, and is thus less susceptible to the effects of outliers.
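A brief sketch comparing these error functions as functions of the margin y f(x) (the function names and sample margin values are illustrative):
<syntaxhighlight lang="python">
import numpy as np

def exponential_loss(margin):
    # AdaBoost loss, with margin = y * f(x)
    return np.exp(-margin)

def squared_loss(margin):
    # Unmodified least squares; for y in {-1, 1}, (y - f)^2 = (margin - 1)^2,
    # so "overconfident" margins greater than 1 are also penalised
    return (margin - 1.0) ** 2

def modified_huber_loss(margin):
    # Zhang (2004): linear for margin < -1, quadratic on [-1, 1], zero above 1
    return np.where(margin < -1.0, -4.0 * margin,
                    np.where(margin <= 1.0, (margin - 1.0) ** 2, 0.0))

margins = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(exponential_loss(margins))     # grows exponentially for negative margins
print(squared_loss(margins))         # penalises margin = 3 even though it is correct
print(modified_huber_loss(margins))  # linear for badly misclassified points, 0 above 1
</syntaxhighlight>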
Boosting as gradient descent
Boosting can be seen as minimization of a
convex loss function over a
convex set
of functions. Specifically, the loss being minimized by AdaBoost is the exponential loss
\sum_i \phi(i,y,f) = \sum_i e^{-y_i f(x_i)},
whereas LogitBoost performs logistic regression, minimizing
\sum_i \phi(i,y,f) = \sum_i \ln\left(1+e^{-y_i f(x_i)}\right).
In the gradient descent analogy, the output of the classifier for each training point is considered a point
\left(F_t(x_1), \dots, F_t(x_n)\right) in n-dimensional space, where each axis corresponds to a training sample, each weak learner
h(x) corresponds to a vector of fixed orientation and length, and the goal is to reach the target point
(y_1, \dots, y_n) (or any region where the value of loss function
E_T(x_1, \dots, x_n) is less than the value at that point), in the fewest steps. Thus AdaBoost algorithms perform either
Cauchy
(find
h(x) with the steepest gradient, choose
\alpha to minimize test error) or
Newton (choose some target point, find
\alpha h(x) that brings
F_t closest to that point) optimization of training error.
Example algorithm (Discrete AdaBoost)
With:
* Samples
x_1 \dots x_n
* Desired outputs
y_1 \dots y_n, y \in \{-1, 1\}
* Initial weights
w_{1,1} \dots w_{n,1} set to
\frac{1}{n}
* Error function
E(f(x), y, i) = e^{-y_i f(x_i)}
* Weak learners
h\colon x \rightarrow \{-1, 1\}
For
t in
1 \dots T:
* Choose
h_t(x):
** Find weak learner
h_t(x) that minimizes
\epsilon_t, the weighted sum error for misclassified points
\epsilon_t = \sum^n_{i=1,\ h_t(x_i) \neq y_i} w_{i,t}
** Choose
\alpha_t = \frac{1}{2} \ln \left(\frac{1 - \epsilon_t}{\epsilon_t}\right)
* Add to ensemble:
**
F_t(x) = F_{t-1}(x) + \alpha_t h_t(x)
* Update weights:
**
w_{i,t+1} = w_{i,t} e^{-y_i \alpha_t h_t(x_i)} for
i in
1 \dots n
** Renormalize
w_{i,t+1} such that
\sum_i w_{i,t+1} = 1
**(Note: It can be shown that
\sum_{h_t(x_i) = y_i} w_{i,t+1} = \sum_{h_t(x_i) \neq y_i} w_{i,t+1} at every step, i.e. the correctly and incorrectly classified samples carry equal total weight after the update, which can simplify the calculation of the new weights.)
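A minimal sketch of this procedure using single-feature threshold stumps as the weak learners (the stump parametrization, the exhaustive threshold search and the function names are illustrative choices, not part of the algorithm's definition):
<syntaxhighlight lang="python">
import numpy as np

def fit_discrete_adaboost(X, y, T=50):
    """Minimal Discrete AdaBoost with single-feature threshold stumps.

    X is an (n, d) float array and y an (n,) array of -1/+1 labels.
    Returns a list of (alpha_t, feature, threshold, polarity) tuples.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                  # initial weights w_{i,1} = 1/n
    ensemble = []
    for _ in range(T):
        # Find the stump h_t minimizing the weighted error epsilon_t
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for polarity in (1, -1):
                    pred = polarity * np.where(X[:, j] > thr, 1, -1)
                    eps = w[pred != y].sum()
                    if best is None or eps < best[0]:
                        best = (eps, j, thr, polarity, pred)
        eps_t, j, thr, polarity, pred = best
        if eps_t >= 0.5:                     # no learner better than chance: stop
            break
        eps_t = max(eps_t, 1e-10)            # guard against log(0) for a perfect stump
        alpha_t = 0.5 * np.log((1 - eps_t) / eps_t)
        ensemble.append((alpha_t, j, thr, polarity))
        w = w * np.exp(-y * alpha_t * pred)  # update weights
        w = w / w.sum()                      # renormalize so they sum to 1
    return ensemble

def predict_discrete_adaboost(ensemble, X):
    """Sign of the weighted sum F_T(x) of the stump outputs."""
    F = np.zeros(X.shape[0])
    for alpha_t, j, thr, polarity in ensemble:
        F += alpha_t * polarity * np.where(X[:, j] > thr, 1, -1)
    return np.sign(F)
</syntaxhighlight>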
Variants
Real AdaBoost
The output of decision trees is a class probability estimate
p(x) = P(y=1 \mid x), the probability that
x is in the positive class.
Friedman, Hastie and Tibshirani derive an analytical minimizer for
e^{-y f(x)} for some fixed
p(x) (typically chosen using weighted least squares error):
:
f_t(x) = \frac{1}{2} \ln\left(\frac{p(x)}{1-p(x)}\right).
Thus, rather than multiplying the output of the entire tree by some fixed value, each leaf node is changed to output half the
logit transform of its previous value.
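A sketch of that leaf transformation (the clipping constant guarding against p(x) equal to 0 or 1 is an illustrative safeguard, not specified above):
<syntaxhighlight lang="python">
import numpy as np

def real_adaboost_leaf_value(p, eps=1e-10):
    """Half-logit transform of a leaf's class-probability estimate p(x) = P(y=1 | x).

    p is clipped away from 0 and 1 so the logarithm stays finite.
    """
    p = np.clip(p, eps, 1.0 - eps)
    return 0.5 * np.log(p / (1.0 - p))

# A leaf estimating P(y=1 | x) = 0.8 contributes about +0.69 to F(x)
print(real_adaboost_leaf_value(0.8))
</syntaxhighlight>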
LogitBoost
LogitBoost represents an application of established
logistic regression
techniques to the AdaBoost method. Rather than minimizing error with respect to y, weak learners are chosen to minimize the (weighted least-squares) error of
f_t(x) with respect to
:
z_t = \frac{y^* - p_t(x)}{2 p_t(x)(1 - p_t(x))},
where
:
p_t(x) = \frac{e^{F_{t-1}(x)}}{e^{F_{t-1}(x)} + e^{-F_{t-1}(x)}},
:
w_t = p_t(x)(1 - p_t(x))
:
y^* = \frac{y+1}{2}.
That is,
z_t is the
Newton–Raphson approximation of the minimizer of the log-likelihood error at stage
t, and the weak learner
f_t is chosen as the learner that best approximates
z_t by weighted least squares.
As p approaches either 1 or 0, the value of
p_t(x_i)(1 - p_t(x_i)) becomes very small and the ''z'' term, which is large for misclassified samples, can become
numerically unstable due to machine-precision rounding errors. This can be overcome by enforcing some limit on the absolute value of ''z'' and the minimum value of ''w''.
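A sketch of one LogitBoost round's working response and weights, including the clamping safeguard just described (the particular clamping constants and the function name are illustrative):
<syntaxhighlight lang="python">
import numpy as np

def logitboost_targets(y, F_prev, z_max=4.0, w_min=1e-5):
    """Working response z_t and weights w_t for one LogitBoost round.

    y holds -1/+1 labels and F_prev the current ensemble outputs F_{t-1}(x_i).
    z_max and w_min are illustrative clamping constants guarding against the
    numerical instability described above.
    """
    y_star = (y + 1.0) / 2.0                            # map labels to {0, 1}
    p = np.exp(F_prev) / (np.exp(F_prev) + np.exp(-F_prev))
    w = np.maximum(p * (1.0 - p), w_min)                # enforce a minimum weight
    z = (y_star - p) / (2.0 * p * (1.0 - p) + 1e-12)    # Newton-step working response
    z = np.clip(z, -z_max, z_max)                       # limit |z|
    return z, w   # the next weak learner f_t is fit to z by weighted least squares
</syntaxhighlight>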
Gentle AdaBoost
While previous boosting algorithms choose
f_t greedily, minimizing the overall test error as much as possible at each step, GentleBoost features a bounded step size.
f_t is chosen to minimize
\sum_i w_{t,i} (y_i-f_t(x_i))^2, and no further coefficient is applied. Thus, in the case where a weak learner exhibits perfect classification performance, GentleBoost chooses
f_t(x) = \alpha_t h_t(x) exactly equal to
y, while steepest descent algorithms try to set
\alpha_t = \infty. Empirical observations about the good performance of GentleBoost appear to back up Schapire and Singer's remark that allowing excessively large values of
\alpha can lead to poor generalization performance.
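A sketch of a GentleBoost update with a regression stump, where each side of the split outputs the weighted mean of y, which is the weighted-least-squares minimizer on that side (the single-feature stump parametrization is an illustrative choice):
<syntaxhighlight lang="python">
import numpy as np

def gentleboost_stump(X, y, w, feature, threshold):
    """Regression stump f_t for one GentleBoost round.

    Each side of the split outputs the weighted mean of y, minimizing
    sum_i w_i (y_i - f_t(x_i))^2 over that side; no coefficient alpha_t is applied.
    """
    mask = X[:, feature] > threshold
    left = np.average(y[~mask], weights=w[~mask]) if (~mask).any() else 0.0
    right = np.average(y[mask], weights=w[mask]) if mask.any() else 0.0
    return np.where(mask, right, left)    # f_t(x_i) for every training sample
</syntaxhighlight>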
Early Termination
A technique for speeding up processing of boosted classifiers, early termination refers to only testing each potential object with as many layers of the final classifier as are necessary to meet some confidence threshold, speeding up computation for cases where the class of the object can easily be determined. One such scheme is the object detection framework introduced by Viola and Jones: in an application with significantly more negative samples than positive, a cascade of separate boost classifiers is trained, the output of each stage biased such that some acceptably small fraction of positive samples is mislabeled as negative, and all samples marked as negative after each stage are discarded. If 50% of negative samples are filtered out by each stage, only a very small number of objects will pass through the entire classifier, reducing computation effort. This method has since been generalized, with a formula provided for choosing optimal thresholds at each stage to achieve some desired false positive and false negative rate.
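A sketch of cascade evaluation with early termination (representing each stage as a callable returning a real-valued score, with per-stage rejection thresholds, is an assumption for illustration):
<syntaxhighlight lang="python">
def cascade_classify(x, stages, thresholds):
    """Early termination in the style of the Viola-Jones cascade.

    stages is assumed to be a list of boosted classifiers (callables returning a
    real-valued score) and thresholds their per-stage rejection thresholds,
    both chosen during training.
    """
    for stage, threshold in zip(stages, thresholds):
        if stage(x) < threshold:
            return -1       # rejected by this stage: negative, stop early
    return +1               # passed every stage: positive
</syntaxhighlight>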
In the field of statistics, where AdaBoost is more commonly applied to problems of moderate dimensionality,
early stopping is used as a strategy to reduce
overfitting. A validation set of samples is separated from the training set, performance of the classifier on the samples used for training is compared to performance on the validation samples, and training is terminated if performance on the validation samples is seen to decrease even as performance on the training set continues to improve.
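A sketch of such validation-based early stopping for a boosting loop (the callables boost_round and validation_error, and the patience window, are hypothetical placeholders, not part of the method described above):
<syntaxhighlight lang="python">
def train_with_early_stopping(boost_round, validation_error, max_rounds=500, patience=10):
    """Validation-based early stopping for a boosting loop.

    boost_round(t) is assumed to add the t-th weak learner to the model and
    validation_error() to return the current error on a held-out validation set.
    """
    best_error, best_round, stale = float("inf"), 0, 0
    for t in range(1, max_rounds + 1):
        boost_round(t)
        err = validation_error()
        if err < best_error:
            best_error, best_round, stale = err, t, 0
        else:
            stale += 1
            if stale >= patience:
                break                 # validation error no longer improving
    return best_round                 # truncate the ensemble to this many rounds
</syntaxhighlight>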
Totally corrective algorithms
For steepest descent versions of AdaBoost, where
\alpha_t is chosen at each layer ''t'' to minimize test error, the next layer added is said to be ''maximally independent'' of layer ''t'': it is unlikely to choose a weak learner ''t+1'' that is similar to learner ''t''. However, there remains the possibility that ''t+1'' produces similar information to some other earlier layer. Totally corrective algorithms, such as
LPBoost, optimize the value of every coefficient after each step, such that new layers added are always maximally independent of every previous layer. This can be accomplished by backfitting,
linear programming
or some other method.
Pruning
Pruning is the process of removing poorly performing weak classifiers to improve memory and execution-time cost of the boosted classifier. The simplest methods, which can be particularly effective in conjunction with totally corrective training, are weight- or margin-trimming: when the coefficient, or the contribution to the total test error, of some weak classifier falls below a certain threshold, that classifier is dropped. Margineantu & Dietterich suggested an alternative criterion for trimming: weak classifiers should be selected such that the diversity of the ensemble is maximized. If two weak learners produce very similar outputs, efficiency can be improved by removing one of them and increasing the coefficient of the remaining weak learner.
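A sketch of the simplest coefficient-based trimming (the threshold value and the representation of the ensemble as (alpha, weak-learner) pairs are illustrative assumptions):
<syntaxhighlight lang="python">
def trim_ensemble(ensemble, min_coefficient=0.01):
    """Weight trimming: drop weak classifiers whose coefficient alpha_t falls
    below a threshold. The ensemble is assumed to be a list of
    (alpha_t, weak_learner) pairs.
    """
    return [(alpha, h) for alpha, h in ensemble if alpha >= min_coefficient]
</syntaxhighlight>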
See also
*
Bootstrap aggregating
*
CoBoosting
*
BrownBoost
*
Gradient boosting
References
Further reading
* The original paper of Yoav Freund and Robert E. Schapire in which AdaBoost was first introduced.
* On the margin explanation of boosting algorithm.
* On the doubt about margin explanation of boosting.
{{DEFAULTSORT:AdaBoost}}
Classification algorithms
Ensemble learning