In machine learning, normalization is a statistical technique with various applications. There are two main forms of normalization: data normalization and activation normalization. Data normalization, or feature scaling, is a general technique in statistics; it includes methods that rescale input data so that they have well-behaved range, mean, variance, and other statistical properties. Activation normalization is specific to deep learning; it includes methods that rescale the activations of hidden neurons inside a neural network. Normalization is often used to obtain faster training convergence, less sensitivity to variations in input data, less overfitting, and better generalization to unseen data. These methods are often theoretically justified as reducing internal covariate shift, smoothing the optimization landscape, or increasing regularization, though they are mainly justified by empirical success.


Batch normalization

Batch normalization (BatchNorm) operates on the activations of a layer for each mini-batch. Consider a simple feedforward network, defined by chaining together modules:

x^{(0)} \mapsto x^{(1)} \mapsto x^{(2)} \mapsto \cdots

where each network module can be a linear transform, a nonlinear activation function, a convolution, etc. Here x^{(0)} is the input vector, x^{(1)} is the output vector from the first module, and so on. BatchNorm is a module that can be inserted at any point in the feedforward network. For example, suppose it is inserted just after x^{(l)}; then the network would operate accordingly:

\cdots \mapsto x^{(l)} \mapsto \mathrm{BN}(x^{(l)}) \mapsto x^{(l+1)} \mapsto \cdots

The BatchNorm module does not operate over individual inputs. Instead, it must operate over one batch of inputs at a time. Concretely, suppose we have a batch of inputs x^{(0)}_{(1)}, x^{(0)}_{(2)}, \dots, x^{(0)}_{(B)}, fed all at once into the network. We would obtain in the middle of the network some vectors

x^{(l)}_{(1)}, x^{(l)}_{(2)}, \dots, x^{(l)}_{(B)}.

The BatchNorm module computes the coordinate-wise mean and variance of these vectors:

\begin{aligned} \mu^{(l)}_i &= \frac{1}{B} \sum_{b=1}^B x^{(l)}_{(b),i} \\ (\sigma^{(l)}_i)^2 &= \frac{1}{B} \sum_{b=1}^B (x^{(l)}_{(b),i} - \mu^{(l)}_i)^2 \end{aligned}

where i indexes the coordinates of the vectors, and b indexes the elements of the batch. In other words, we are considering the i-th coordinate of each vector in the batch, and computing the mean and variance of this collection of numbers. It then normalizes each coordinate to have zero mean and unit variance:

\hat{x}^{(l)}_{(b),i} = \frac{x^{(l)}_{(b),i} - \mu^{(l)}_i}{\sqrt{(\sigma^{(l)}_i)^2 + \epsilon}}

The \epsilon is a small positive constant such as 10^{-8} added to the variance for numerical stability, to avoid division by zero. Finally, it applies a linear transform:

y^{(l)}_{(b),i} = \gamma_i \hat{x}^{(l)}_{(b),i} + \beta_i

Here, \gamma and \beta are parameters inside the BatchNorm module. They are learnable parameters, typically trained by gradient descent. The following code illustrates BatchNorm:

import numpy as np

def batchnorm(x, gamma, beta, epsilon=1e-8):
    # Mean and variance of each feature, computed over the batch dimension
    mu = np.mean(x, axis=0)       # shape (N,)
    sigma2 = np.var(x, axis=0)    # shape (N,)

    # Normalize the activations
    x_hat = (x - mu) / np.sqrt(sigma2 + epsilon)  # shape (B, N)

    # Apply the learnable linear transform
    y = gamma * x_hat + beta      # shape (B, N)

    return y
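As a usage sketch of the function above (the shapes and values are illustrative assumptions, not part of the original example), the per-feature outputs have approximately zero mean and unit variance across the batch:

# Assumed shapes: a batch of B = 4 vectors with N = 3 features each
x = np.random.randn(4, 3)
gamma = np.ones(3)   # per-feature scale, initialized to 1
beta = np.zeros(3)   # per-feature shift, initialized to 0

y = batchnorm(x, gamma, beta)
print(y.mean(axis=0))  # approximately 0 for each feature
print(y.var(axis=0))   # approximately 1 for each feature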


Interpretation

\gamma and \beta allow the network to learn to undo the normalization, if this is beneficial. Because a neural network can always be topped with a linear transform layer, BatchNorm can be interpreted as removing the purely linear transformations, so that its layers focus purely on modelling the nonlinear aspects of data. The original publication claims that BatchNorm works by reducing "internal covariate shift", though the claim has both supporters and detractors.


Special cases

The original paper recommended using BatchNorm only after a linear transform, not after a nonlinear activation. That is, something like \mathrm{BN}(Wx + b), not \mathrm{BN}(\phi(Wx + b)). Also, the bias b does not matter, since it will be canceled by the subsequent mean subtraction, so the layer is of the form \mathrm{BN}(Wx). That is, if a BatchNorm is preceded by a linear transform, then that linear transform's bias term is set to constant zero.

For convolutional neural networks (CNNs), BatchNorm must preserve the translation invariance of the CNN, which means that it must treat all outputs of the same kernel as if they are different data points within a batch. Concretely, suppose we have a 2-dimensional convolutional layer defined by

x^{(l)}_{h,w,c} = \sum_{\Delta h, \Delta w, c'} K^{(l)}_{\Delta h, \Delta w, c, c'} \, x^{(l-1)}_{h+\Delta h, w+\Delta w, c'} + b^{(l)}_c

where:

* x^{(l)}_{h,w,c} is the activation of the neuron at position (h, w) in the c-th channel of the l-th layer.
* K^{(l)} is a kernel tensor. Each channel c corresponds to a kernel K^{(l)}_{\cdot,\cdot,c,\cdot}, with indices \Delta h, \Delta w, c'.
* b^{(l)}_c is the bias term for the c-th channel of the l-th layer.

In order to preserve translational invariance, BatchNorm treats all outputs from the same kernel in the same batch as more data in a batch. That is, it is applied once per kernel c (equivalently, once per channel c), not per activation x^{(l)}_{h,w,c}:

\begin{aligned} \mu^{(l)}_c &= \frac{1}{BHW} \sum_{b=1}^B \sum_{h=1}^H \sum_{w=1}^W x^{(l)}_{(b),h,w,c} \\ (\sigma^{(l)}_c)^2 &= \frac{1}{BHW} \sum_{b=1}^B \sum_{h=1}^H \sum_{w=1}^W (x^{(l)}_{(b),h,w,c} - \mu^{(l)}_c)^2 \end{aligned}

where B is the batch size, H is the height of the feature map, and W is the width of the feature map. That is, even though there are only B data points in a batch, all BHW outputs from the kernel in this batch are treated equally. Subsequently, normalization and the linear transform are also done per kernel:

\begin{aligned} \hat{x}^{(l)}_{(b),h,w,c} &= \frac{x^{(l)}_{(b),h,w,c} - \mu^{(l)}_c}{\sqrt{(\sigma^{(l)}_c)^2 + \epsilon}} \\ y^{(l)}_{(b),h,w,c} &= \gamma_c \hat{x}^{(l)}_{(b),h,w,c} + \beta_c \end{aligned}

Similar considerations apply for BatchNorm for n-dimensional convolutions. The following code illustrates BatchNorm for 2D convolutions:

import numpy as np

def batchnorm_cnn(x, gamma, beta, epsilon=1e-8):
    # Calculate the mean and variance for each channel,
    # over the batch, height, and width dimensions.
    mean = np.mean(x, axis=(0, 1, 2), keepdims=True)
    var = np.var(x, axis=(0, 1, 2), keepdims=True)

    # Normalize the input tensor.
    x_hat = (x - mean) / np.sqrt(var + epsilon)

    # Scale and shift the normalized tensor.
    y = gamma * x_hat + beta

    return y
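As a usage sketch of the function above (the layout and shapes are assumptions, not from the original): with activations stored as (batch, height, width, channels), gamma and beta carry one entry per channel.

# Assumed layout: x has shape (B, H, W, C) = (2, 5, 5, 3)
x = np.random.randn(2, 5, 5, 3)
gamma = np.ones((1, 1, 1, 3))   # one scale per channel
beta = np.zeros((1, 1, 1, 3))   # one shift per channel

y = batchnorm_cnn(x, gamma, beta)
print(y.mean(axis=(0, 1, 2)))   # approximately 0 per channel
print(y.var(axis=(0, 1, 2)))    # approximately 1 per channel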


Layer normalization

Layer normalization (LayerNorm) is a common competitor to BatchNorm. Unlike BatchNorm, which normalizes activations across the batch dimension for a given feature, LayerNorm normalizes across all the features within a single data sample. Compared to BatchNorm, LayerNorm's performance is not affected by batch size. It is a key component of Transformers.

For a given data input and layer, LayerNorm computes the mean \mu and variance \sigma^2 over all the neurons in the layer. Similar to BatchNorm, learnable parameters \gamma (scale) and \beta (shift) are applied. It is defined by

\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}, \quad y_i = \gamma_i \hat{x}_i + \beta_i

where \mu = \frac{1}{D} \sum_{i=1}^D x_i and \sigma^2 = \frac{1}{D} \sum_{i=1}^D (x_i - \mu)^2, and i ranges over the neurons in that layer.
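The following is a minimal NumPy sketch of this definition, in the same style as the BatchNorm code above (the function name and shapes are assumptions, not from the original):

import numpy as np

def layernorm(x, gamma, beta, epsilon=1e-8):
    # Mean and variance over the features of each individual sample
    mu = np.mean(x, axis=-1, keepdims=True)        # shape (B, 1)
    sigma2 = np.var(x, axis=-1, keepdims=True)     # shape (B, 1)

    # Normalize each sample independently of the rest of the batch
    x_hat = (x - mu) / np.sqrt(sigma2 + epsilon)   # shape (B, D)

    # Apply the learnable per-feature scale and shift
    return gamma * x_hat + beta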


Examples

For example, in a CNN, a LayerNorm applies to all activations in a layer. In the previous notation, we have

\begin{aligned} \mu^{(l)} &= \frac{1}{HWC} \sum_{h=1}^H \sum_{w=1}^W \sum_{c=1}^C x^{(l)}_{h,w,c} \\ (\sigma^{(l)})^2 &= \frac{1}{HWC} \sum_{h=1}^H \sum_{w=1}^W \sum_{c=1}^C (x^{(l)}_{h,w,c} - \mu^{(l)})^2 \\ \hat{x}^{(l)}_{h,w,c} &= \frac{x^{(l)}_{h,w,c} - \mu^{(l)}}{\sqrt{(\sigma^{(l)})^2 + \epsilon}} \\ y^{(l)}_{h,w,c} &= \gamma^{(l)} \hat{x}^{(l)}_{h,w,c} + \beta^{(l)} \end{aligned}

Notice that the batch index b is removed, while the channel index c is added. A code sketch for the CNN case is given below.

In recurrent neural networks and Transformers, LayerNorm is applied individually to each timestep. For example, if the hidden vector in an RNN at timestep t is x^{(t)} \in \mathbb{R}^D, where D is the dimension of the hidden vector, then LayerNorm will be applied with

\hat{x}^{(t)}_i = \frac{x^{(t)}_i - \mu^{(t)}}{\sqrt{(\sigma^{(t)})^2 + \epsilon}}, \quad y^{(t)}_i = \gamma_i \hat{x}^{(t)}_i + \beta_i

where \mu^{(t)} = \frac{1}{D} \sum_{i=1}^D x^{(t)}_i and (\sigma^{(t)})^2 = \frac{1}{D} \sum_{i=1}^D (x^{(t)}_i - \mu^{(t)})^2.
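A minimal sketch of LayerNorm over a CNN feature map, assuming a (batch, height, width, channels) layout; the function name and layout are illustrative assumptions:

import numpy as np

def layernorm_cnn(x, gamma, beta, epsilon=1e-8):
    # Mean and variance over all activations (H, W, C) of each sample,
    # so each sample in the batch is normalized independently.
    mu = np.mean(x, axis=(1, 2, 3), keepdims=True)      # shape (B, 1, 1, 1)
    sigma2 = np.var(x, axis=(1, 2, 3), keepdims=True)   # shape (B, 1, 1, 1)

    x_hat = (x - mu) / np.sqrt(sigma2 + epsilon)

    # In the per-layer formulation above, gamma and beta are scalars.
    return gamma * x_hat + beta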


Root mean square layer normalization

Root mean square layer normalization (RMSNorm) changes LayerNorm by

\hat{x}_i = \frac{x_i}{\sqrt{\frac{1}{D} \sum_{j=1}^D x_j^2}}, \quad y_i = \gamma \hat{x}_i + \beta

Essentially, it is LayerNorm where we enforce \mu, \epsilon = 0.
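A minimal NumPy sketch of RMSNorm under the definition above (the function name is an assumption):

import numpy as np

def rmsnorm(x, gamma, beta):
    # Root mean square over the features of each sample (no mean subtraction)
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True))
    x_hat = x / rms
    return gamma * x_hat + beta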


Other normalizations

Weight normalization (WeightNorm) is a technique inspired by BatchNorm. It normalizes weight matrices in a neural network, rather than its neural activations. Gradient normalization (GradNorm) normalizes gradient vectors during backpropagation. Adaptive layer norm (adaLN) computes the \gamma, \beta in a LayerNorm not from the layer activation itself, but from other data.
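As an illustration of the weight normalization idea (a sketch, not code from the original article): a weight vector w is reparametrized as w = g v / \lVert v \rVert, so that the direction v and the scale g are learned separately.

import numpy as np

def weightnorm(v, g):
    # Reparametrize a weight vector as w = g * v / ||v||,
    # separating the learnable direction (v) from the learnable scale (g).
    return g * v / np.linalg.norm(v)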


CNN-specific normalization

There are some activation normalization techniques that are only used for CNNs.


Local response normalization

Local response normalization was used in AlexNet. It was applied in a convolutional layer, just after a nonlinear activation function. It was defined by

b_{x,y}^i = \frac{a_{x,y}^i}{\left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} (a_{x,y}^j)^2 \right)^\beta}

where a_{x,y}^i is the activation of the neuron at location (x, y) and channel i, and N is the number of channels. In words, each pixel in a channel is suppressed by the activations of the same pixel in its adjacent channels. The numbers k, n, \alpha, \beta are hyperparameters picked by using a validation set.

It was a variant of the earlier local contrast normalization,

b_{x,y}^i = \frac{a_{x,y}^i}{\left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} (a_{x,y}^j - \bar{a}_{x,y}^j)^2 \right)^\beta}

where \bar{a}_{x,y}^j is the average activation in a small window centered on location (x, y) and channel j. The numbers k, n, \alpha, \beta, and the size of the small window, are hyperparameters picked by using a validation set.

Similar methods were called divisive normalization, as they divide activations by a number depending on the activations. They were originally inspired by biology, where they were used to explain nonlinear responses of cortical neurons and nonlinear masking in visual perception. Both kinds of local normalization were obsoleted by batch normalization, which is a more global form of normalization. A code sketch of local response normalization is given below.
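A minimal NumPy sketch of local response normalization as defined above, assuming activations stored as (batch, height, width, channels); the function name and default hyperparameter values are illustrative assumptions:

import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    # a has shape (B, H, W, N) with N channels.
    B, H, W, N = a.shape
    b = np.empty_like(a)
    for i in range(N):
        # Sum of squared activations over the channels adjacent to channel i
        lo = max(0, i - n // 2)
        hi = min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[..., lo:hi + 1] ** 2, axis=-1)) ** beta
        b[..., i] = a[..., i] / denom
    return b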


Group normalization

Group normalization (GroupNorm) is a technique only used for CNNs. It can be understood as LayerNorm for CNNs applied once per channel group. Suppose at a layer l there are channels 1, 2, \dots, C; we partition them into groups g_1, \dots, g_G, and then apply LayerNorm to each group. A code sketch is given below.
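A minimal sketch of GroupNorm, assuming a (batch, height, width, channels) layout and that the channels divide evenly into the groups (names and shapes are assumptions):

import numpy as np

def groupnorm(x, gamma, beta, num_groups, epsilon=1e-8):
    B, H, W, C = x.shape
    # Split the channels into groups and normalize each group per sample.
    xg = x.reshape(B, H, W, num_groups, C // num_groups)
    mu = np.mean(xg, axis=(1, 2, 4), keepdims=True)
    sigma2 = np.var(xg, axis=(1, 2, 4), keepdims=True)
    x_hat = ((xg - mu) / np.sqrt(sigma2 + epsilon)).reshape(B, H, W, C)
    # gamma and beta carry one entry per channel, e.g. shape (1, 1, 1, C).
    return gamma * x_hat + beta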


Instance normalization

Instance normalization (InstanceNorm), or contrast normalization, is a technique first developed for neural style transfer, and is only used for CNNs. It can be understood as the LayerNorm for CNN applied once per channel, or equivalently, as group normalization where each group consists of a single channel:

\begin{aligned} \mu^{(l)}_c &= \frac{1}{HW} \sum_{h=1}^H \sum_{w=1}^W x^{(l)}_{h,w,c} \\ (\sigma^{(l)}_c)^2 &= \frac{1}{HW} \sum_{h=1}^H \sum_{w=1}^W (x^{(l)}_{h,w,c} - \mu^{(l)}_c)^2 \\ \hat{x}^{(l)}_{h,w,c} &= \frac{x^{(l)}_{h,w,c} - \mu^{(l)}_c}{\sqrt{(\sigma^{(l)}_c)^2 + \epsilon}} \\ y^{(l)}_{h,w,c} &= \gamma^{(l)}_c \hat{x}^{(l)}_{h,w,c} + \beta^{(l)}_c \end{aligned}
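A minimal sketch of InstanceNorm in the same assumed (batch, height, width, channels) layout (the function name is an assumption):

import numpy as np

def instancenorm(x, gamma, beta, epsilon=1e-8):
    # Mean and variance per sample and per channel, over the spatial dimensions only
    mu = np.mean(x, axis=(1, 2), keepdims=True)      # shape (B, 1, 1, C)
    sigma2 = np.var(x, axis=(1, 2), keepdims=True)   # shape (B, 1, 1, C)
    x_hat = (x - mu) / np.sqrt(sigma2 + epsilon)
    # gamma and beta carry one entry per channel.
    return gamma * x_hat + beta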


Adaptive instance normalization

Adaptive instance normalization (AdaIN) is a variant of instance normalization, designed specifically for neural style transfer with CNNs, not for CNNs in general. In the AdaIN method of style transfer, we take a CNN and two input images, one content and one style. Each image is processed through the same CNN, and at a certain layer l, AdaIN is applied. Let x^{(l)} be the activation in the content image, and x'^{(l)} be the activation in the style image. Then, AdaIN first computes the mean and variance of the activations of the style image x'^{(l)}, then uses those as the \gamma, \beta for InstanceNorm on x^{(l)}. Note that x'^{(l)} itself remains unchanged. Explicitly, we have

\begin{aligned} y^{(l)}_{h,w,c} &= \sigma'^{(l)}_c \left( \frac{x^{(l)}_{h,w,c} - \mu^{(l)}_c}{\sqrt{(\sigma^{(l)}_c)^2 + \epsilon}} \right) + \mu'^{(l)}_c \end{aligned}

where \mu'^{(l)}_c, \sigma'^{(l)}_c are the per-channel mean and standard deviation of the style activations x'^{(l)}, and \mu^{(l)}_c, \sigma^{(l)}_c are those of the content activations x^{(l)}.
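A minimal sketch of AdaIN built on the InstanceNorm statistics above, assuming (batch, height, width, channels) activations for the content and style images (the function and argument names are assumptions):

import numpy as np

def adain(x_content, x_style, epsilon=1e-8):
    # Per-sample, per-channel statistics over the spatial dimensions
    mu_c = np.mean(x_content, axis=(1, 2), keepdims=True)
    sigma_c = np.std(x_content, axis=(1, 2), keepdims=True)
    mu_s = np.mean(x_style, axis=(1, 2), keepdims=True)
    sigma_s = np.std(x_style, axis=(1, 2), keepdims=True)

    # Normalize the content activations, then rescale and shift them
    # with the style statistics; the style activations stay unchanged.
    x_hat = (x_content - mu_c) / np.sqrt(sigma_c ** 2 + epsilon)
    return sigma_s * x_hat + mu_s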


See also

* Data preprocessing
* Feature scaling


Further reading

* "Normalization Layers", labml.ai Deep Learning Paper Implementations, https://nn.labml.ai/normalization/index.html (accessed 2024-08-07).

