In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It is part of the families of probabilistic graphical models and variational Bayesian methods. In addition to being seen as an autoencoder neural network architecture, variational autoencoders can also be studied within the mathematical formulation of variational Bayesian methods, connecting a neural encoder network to its decoder through a probabilistic latent space (for example, as a multivariate Gaussian distribution) that corresponds to the parameters of a variational distribution.

Thus, the encoder maps each point (such as an image) from a large complex dataset into a distribution within the latent space, rather than to a single point in that space. The decoder has the opposite function, which is to map from the latent space to the input space, again according to a distribution (although in practice, noise is rarely added during the decoding stage). By mapping a point to a distribution instead of a single point, the network can avoid overfitting the training data. Both networks are typically trained together with the usage of the reparameterization trick, although the variance of the noise model can be learned separately.

Although this type of model was initially designed for unsupervised learning, its effectiveness has been proven for semi-supervised learning and supervised learning.


Overview of architecture and operation

A variational autoencoder is a generative model with a prior and a noise distribution. Usually such models are trained using the expectation-maximization meta-algorithm (e.g. probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likelihood, which is usually computationally intractable, and in doing so requires the discovery of q-distributions, or variational posteriors. These q-distributions are normally parameterized for each individual data point in a separate optimization process. However, variational autoencoders use a neural network as an amortized approach to jointly optimize across data points. In that way, the same parameters are reused for multiple data points, which can result in massive memory savings.

The first neural network takes as input the data points themselves, and outputs parameters for the variational distribution. As it maps from a known input space to the low-dimensional latent space, it is called the encoder. The decoder is the second neural network of this model. It is a function that maps from the latent space to the input space, e.g. as the means of the noise distribution. It is possible to use another neural network that maps to the variance; however, this can be omitted for simplicity. In such a case, the variance can be optimized with gradient descent.

To optimize this model, one needs to know two terms: the "reconstruction error", and the Kullback–Leibler divergence (KL-D). Both terms are derived from the free energy expression of the probabilistic model, and therefore differ depending on the noise distribution and the assumed prior of the data, here referred to as the p-distribution. For example, a standard VAE task such as ImageNet is typically assumed to have Gaussian noise; however, tasks such as binarized MNIST require Bernoulli noise. The KL-D from the free energy expression maximizes the probability mass of the q-distribution that overlaps with the p-distribution, which unfortunately can result in mode-seeking behaviour. The "reconstruction" term is the remainder of the free energy expression, and requires a sampling approximation to compute its expectation value. More recent approaches replace the Kullback–Leibler divergence (KL-D) with various statistical distances; see "Statistical distance VAE variants" below.
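The two networks described above can be made concrete with a minimal sketch. This is an illustrative PyTorch example under assumptions not in the text: the module names, hidden width, and use of fully connected layers are all hypothetical choices, and the decoder outputs only the mean of the noise distribution, with its variance omitted as discussed above.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Amortized inference network: maps a data point x to the parameters
    (mean, log-variance) of the variational distribution q_phi(z | x)."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

class Decoder(nn.Module):
    """Maps a latent code z back to the input space, here to the mean of a
    Gaussian noise model (the variance is omitted for simplicity)."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, z):
        return self.net(z)
```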


Formulation

From the point of view of probabilistic modeling, one wants to maximize the likelihood of the data x under a chosen parameterized probability distribution p_\theta(x) = p(x\mid\theta). This distribution is usually chosen to be a Gaussian N(x\mid\mu,\sigma), parameterized by \mu and \sigma, and as a member of the exponential family it is easy to work with as a noise distribution. Simple distributions are easy enough to maximize; however, distributions where a prior is assumed over the latents z result in intractable integrals. Let us find p_\theta(x) via marginalizing over z:

: p_\theta(x) = \int_z p_\theta(x, z) \, dz,

where p_\theta(x, z) represents the joint distribution under p_\theta of the observable data x and its latent representation or encoding z. According to the chain rule, the equation can be rewritten as

: p_\theta(x) = \int_z p_\theta(x\mid z)\, p_\theta(z) \, dz.

In the vanilla variational autoencoder, z is usually taken to be a finite-dimensional vector of real numbers, and p_\theta(x\mid z) to be a Gaussian distribution. Then p_\theta(x) is a mixture of Gaussian distributions. It is now possible to define the set of relationships between the input data and its latent representation as

* Prior: p_\theta(z)
* Likelihood: p_\theta(x\mid z)
* Posterior: p_\theta(z\mid x)

Unfortunately, the computation of p_\theta(z\mid x) is expensive and in most cases intractable. To make the computation feasible, it is necessary to introduce a further function to approximate the posterior distribution as

: q_\phi(z\mid x) \approx p_\theta(z\mid x),

with \phi defined as the set of real values that parametrize q. This is sometimes called ''amortized inference'', since by "investing" in finding a good q_\phi, one can later infer z from x quickly without doing any integrals. In this way, the problem is to find a good probabilistic autoencoder, in which the conditional likelihood distribution p_\theta(x\mid z) is computed by the ''probabilistic decoder'', and the approximated posterior distribution q_\phi(z\mid x) is computed by the ''probabilistic encoder''. Parametrize the encoder as E_\phi and the decoder as D_\theta.
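To make the intractability of the marginal concrete, here is a naive Monte Carlo estimate of p_\theta(x) obtained by sampling z from the prior. This is only an illustrative NumPy sketch: `decoder_mean` is a hypothetical callable standing in for D_\theta, and a unit-variance Gaussian noise model is assumed. In high dimensions almost no prior samples land where p_\theta(x\mid z) is non-negligible, so this estimator is extremely inefficient, which motivates the variational approach.

```python
import numpy as np

def log_gaussian(x, mean, var):
    # log density of N(x | mean, var * I), summed over the data dimensions
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var), axis=-1)

def naive_marginal_log_likelihood(x, decoder_mean, z_dim, n_samples=10_000, noise_var=1.0):
    # p_theta(x) = integral of p_theta(x|z) p_theta(z) dz, estimated by
    # averaging p_theta(x|z) over samples z drawn from the prior p_theta(z)
    z = np.random.randn(n_samples, z_dim)                   # z ~ N(0, I)
    log_px_given_z = log_gaussian(x, decoder_mean(z), noise_var)
    m = log_px_given_z.max()                                 # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_px_given_z - m)))
```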


Evidence lower bound (ELBO)

Like many deep learning approaches that use gradient-based optimization, VAEs require a differentiable loss function to update the network weights through backpropagation. For variational autoencoders, the idea is to jointly optimize the generative model parameters \theta to reduce the reconstruction error between the input and the output, and \phi to make q_\phi(z\mid x) as close as possible to p_\theta(z\mid x). As reconstruction loss, mean squared error and cross entropy are often used. As distance loss between the two distributions, the Kullback–Leibler divergence D_{\text{KL}}(q_\phi(z\mid x)\parallel p_\theta(z\mid x)) is a good choice to squeeze q_\phi(z\mid x) under p_\theta(z\mid x). The distance loss just defined is expanded as

: \begin{align} D_{\text{KL}}(q_\phi(z\mid x)\parallel p_\theta(z\mid x)) &= \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln \frac{q_\phi(z\mid x)}{p_\theta(z\mid x)}\right] \\ &= \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln \frac{q_\phi(z\mid x)\, p_\theta(x)}{p_\theta(x, z)}\right] \\ &= \ln p_\theta(x) + \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln \frac{q_\phi(z\mid x)}{p_\theta(x, z)}\right] \end{align}

Now define the evidence lower bound (ELBO):

: L_{\theta,\phi}(x) := \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln \frac{p_\theta(x, z)}{q_\phi(z\mid x)}\right] = \ln p_\theta(x) - D_{\text{KL}}(q_\phi(\cdot\mid x)\parallel p_\theta(\cdot\mid x))

Maximizing the ELBO,

: \theta^*,\phi^* = \underset{\theta,\phi}{\operatorname{arg\,max}} \, L_{\theta,\phi}(x),

is equivalent to simultaneously maximizing \ln p_\theta(x) and minimizing D_{\text{KL}}(q_\phi(\cdot\mid x)\parallel p_\theta(\cdot\mid x)). That is, maximizing the log-likelihood of the observed data, and minimizing the divergence of the approximate posterior q_\phi(\cdot\mid x) from the exact posterior p_\theta(\cdot\mid x).

The form given is not very convenient for maximization, but the following, equivalent form, is:

: L_{\theta,\phi}(x) = \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln p_\theta(x\mid z)\right] - D_{\text{KL}}(q_\phi(\cdot\mid x)\parallel p_\theta(\cdot))

where \ln p_\theta(x\mid z) is implemented as -\frac{1}{2}\|x - D_\theta(z)\|^2_2, since that is, up to an additive constant, what x\mid z \sim \mathcal N(D_\theta(z), I) yields. That is, we model the distribution of x conditional on z to be a Gaussian distribution centered on D_\theta(z). The distributions q_\phi(z\mid x) and p_\theta(z) are often also chosen to be Gaussians as z\mid x \sim \mathcal N(E_\phi(x), \sigma_\phi(x)^2 I) and z \sim \mathcal N(0, I), with which we obtain, by the formula for the KL divergence of Gaussians:

: L_{\theta,\phi}(x) = -\frac 12\mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\|x - D_\theta(z)\|_2^2\right] - \frac 12 \left( N\sigma_\phi(x)^2 + \|E_\phi(x)\|_2^2 - 2N\ln\sigma_\phi(x) \right) + Const

Here N is the dimension of z. For a more detailed derivation and more interpretations of the ELBO and its maximization, see its main page.
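The Gaussian-case objective above can be written as a loss to minimize (the negative ELBO). The following PyTorch sketch assumes the illustrative encoder/decoder modules from earlier; it uses a per-dimension (diagonal) log-variance rather than the scalar \sigma_\phi(x) of the formula, a common generalization, and approximates the expectation with a single reparameterized sample of z (the trick is explained in the next section).

```python
import torch

def negative_elbo(x, encoder, decoder):
    """Single-sample estimate of -L_{theta,phi}(x) for the Gaussian case,
    written as a loss to be minimized."""
    mu, log_var = encoder(x)                          # parameters of q_phi(z|x)
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn_like(std)              # one reparameterized sample of z
    x_hat = decoder(z)                                # D_theta(z)
    # Reconstruction term: -E_q[ln p_theta(x|z)] up to an additive constant,
    # for x|z ~ N(D_theta(z), I)
    recon = 0.5 * torch.sum((x - x_hat) ** 2, dim=-1)
    # Closed-form KL(q_phi(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = 0.5 * torch.sum(mu ** 2 + log_var.exp() - log_var - 1.0, dim=-1)
    return (recon + kl).mean()
```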


Reparameterization

To efficiently search for

: \theta^*,\phi^* = \underset{\theta,\phi}{\operatorname{arg\,max}} \, L_{\theta,\phi}(x),

the typical method is gradient ascent. It is straightforward to find

: \nabla_\theta \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln \frac{p_\theta(x, z)}{q_\phi(z\mid x)}\right] = \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\nabla_\theta \ln \frac{p_\theta(x, z)}{q_\phi(z\mid x)}\right].

However,

: \nabla_\phi \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln \frac{p_\theta(x, z)}{q_\phi(z\mid x)}\right]

does not allow one to put the \nabla_\phi inside the expectation, since \phi appears in the probability distribution itself. The reparameterization trick (also known as stochastic backpropagation) bypasses this difficulty.

The most important example is when z \sim q_\phi(\cdot\mid x) is normally distributed, as \mathcal N(\mu_\phi(x), \Sigma_\phi(x)). This can be reparametrized by letting \boldsymbol\epsilon \sim \mathcal N(0, \boldsymbol I) be a "standard random number generator", and constructing z as z = \mu_\phi(x) + L_\phi(x)\epsilon. Here, L_\phi(x) is obtained by the Cholesky decomposition:

: \Sigma_\phi(x) = L_\phi(x)L_\phi(x)^T

Then we have

: \nabla_\phi \mathbb E_{z\sim q_\phi(\cdot\mid x)} \left[\ln \frac{p_\theta(x, z)}{q_\phi(z\mid x)}\right] = \mathbb E_{\epsilon\sim \mathcal N(0, \boldsymbol I)}\left[\nabla_\phi \ln \frac{p_\theta(x, \mu_\phi(x) + L_\phi(x)\epsilon)}{q_\phi(\mu_\phi(x) + L_\phi(x)\epsilon \mid x)}\right],

and so we obtain an unbiased estimator of the gradient, allowing stochastic gradient descent.

Since we reparametrized z, we need to find q_\phi(z\mid x). Let q_0 be the probability density function for \epsilon; then

: \ln q_\phi(z\mid x) = \ln q_0(\epsilon) - \ln|\det(\partial_\epsilon z)|,

where \partial_\epsilon z is the Jacobian matrix of z with respect to \epsilon. Since z = \mu_\phi(x) + L_\phi(x)\epsilon, this is

: \ln q_\phi(z\mid x) = -\frac 12 \|\epsilon\|^2 - \ln|\det L_\phi(x)| - \frac n2 \ln(2\pi).
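A sketch of the reparameterized sample for the full-covariance Gaussian case above, for a single data point (PyTorch). Here mu and Sigma stand in for the encoder outputs \mu_\phi(x) and \Sigma_\phi(x); in the common diagonal-covariance case this reduces to z = mu + sigma * eps.

```python
import math
import torch

def reparameterized_sample(mu, Sigma):
    """Draw z ~ N(mu, Sigma) as z = mu + L @ eps with Sigma = L L^T, so that
    gradients flow through mu and Sigma; also return log q_phi(z|x)."""
    L = torch.linalg.cholesky(Sigma)                  # Sigma = L L^T
    eps = torch.randn_like(mu)                        # eps ~ N(0, I), independent of phi
    z = mu + L @ eps                                  # differentiable in mu and Sigma
    n = mu.shape[-1]
    # ln q_phi(z|x) = -||eps||^2 / 2 - ln|det L_phi(x)| - (n/2) ln(2*pi)
    log_q = (-0.5 * eps.pow(2).sum()
             - torch.log(torch.diagonal(L)).sum()
             - 0.5 * n * math.log(2 * math.pi))
    return z, log_q
```

PyTorch's torch.distributions module implements the same idea: rsample() on a distribution object produces reparameterized (pathwise-differentiable) samples.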


Variations

Many applications and extensions of variational autoencoders have been developed to adapt the architecture to other domains and improve its performance.

\beta-VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representations. With this implementation, it is possible to force manifold disentanglement for \beta values greater than one. This architecture can discover disentangled latent factors without supervision (see the sketch below).

The conditional VAE (CVAE) inserts label information in the latent space to force a deterministic constrained representation of the learned data. Some structures directly deal with the quality of the generated samples or implement more than one latent space to further improve the representation learning. Some architectures mix VAE and generative adversarial networks to obtain hybrid models.

It is not necessary to use gradients to update the encoder. In fact, the encoder is not necessary for the generative model.
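As a minimal illustration of the \beta-VAE variant mentioned above, the only change to the standard objective is a weight on the KL term. The helper below assumes the per-example `recon` and `kl` tensors from the earlier loss sketch; the value \beta = 4 is purely illustrative.

```python
def beta_vae_negative_elbo(recon, kl, beta=4.0):
    """beta-VAE objective: the standard negative ELBO with the KL term scaled
    by beta > 1 to encourage disentangled latent factors."""
    return (recon + beta * kl).mean()
```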


Statistical distance VAE variants

After the initial work of Diederik P. Kingma and Max Welling, several procedures were proposed to formulate the operation of the VAE in a more abstract way. In these approaches the loss function is composed of two parts:

* the usual reconstruction error part, which seeks to ensure that the encoder-then-decoder mapping x \mapsto D_\theta(E_\phi(x)) is as close to the identity map as possible; the sampling is done at run time from the empirical distribution \mathbb P^{data} of objects available (e.g., for MNIST or ImageNet this will be the empirical probability law of all images in the dataset). This gives the term \mathbb E_{x \sim \mathbb P^{data}} \left[\|x - D_\theta(E_\phi(x))\|_2^2\right].
* a variational part that ensures that, when the empirical distribution \mathbb P^{data} is passed through the encoder E_\phi, we recover the target distribution, denoted here \mu(dz), which is usually taken to be a multivariate normal distribution. We will denote E_\phi \sharp \mathbb P^{data} this pushforward measure, which in practice is just the empirical distribution obtained by passing all dataset objects through the encoder E_\phi. In order to make sure that E_\phi \sharp \mathbb P^{data} is close to the target \mu(dz), a statistical distance d is invoked and the term d\left( \mu(dz), E_\phi \sharp \mathbb P^{data} \right)^2 is added to the loss.

We obtain the final formula for the loss:

: L_{\theta,\phi} = \mathbb E_{x \sim \mathbb P^{data}} \left[\|x - D_\theta(E_\phi(x))\|_2^2\right] + d\left( \mu(dz), E_\phi \sharp \mathbb P^{data} \right)^2

The statistical distance d requires special properties; for instance, it has to possess a formula as an expectation, because the loss function needs to be optimized by stochastic optimization algorithms. Several distances can be chosen, and this gave rise to several flavors of VAEs:

* the sliced Wasserstein distance used by S. Kolouri et al. in their VAE
* the energy distance implemented in the Radon Sobolev Variational Auto-Encoder
* the Maximum Mean Discrepancy distance used in the MMD-VAE (sketched below)
* the Wasserstein distance used in the WAEs
* kernel-based distances used in the Kernelized Variational Autoencoder (K-VAE)
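As an illustration of the loss above, the following PyTorch sketch takes d^2 to be a (biased, batch-based) squared Maximum Mean Discrepancy estimate with a Gaussian kernel, as in the MMD-VAE flavor in the list. The kernel, its bandwidth, the weighting `lam`, and the use of a deterministic encoder returning a single code z (unlike the probabilistic encoder sketched earlier) are assumptions for illustration, with \mu(dz) = N(0, I).

```python
import torch

def gaussian_kernel(a, b, bandwidth=1.0):
    # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2)), pairwise over batches
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))

def mmd(z_encoded, z_target, bandwidth=1.0):
    # Squared maximum mean discrepancy between the encoded batch and samples
    # drawn from the target latent distribution mu(dz)
    return (gaussian_kernel(z_target, z_target, bandwidth).mean()
            + gaussian_kernel(z_encoded, z_encoded, bandwidth).mean()
            - 2 * gaussian_kernel(z_target, z_encoded, bandwidth).mean())

def statistical_distance_vae_loss(x, encoder, decoder, lam=1.0):
    """Reconstruction term plus a statistical-distance term, with d^2 estimated
    by the squared MMD and mu(dz) = N(0, I)."""
    z = encoder(x)                                    # deterministic encoder E_phi assumed
    x_hat = decoder(z)                                # D_theta(E_phi(x))
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()     # E ||x - D_theta(E_phi(x))||^2
    z_target = torch.randn_like(z)                    # samples from mu(dz) = N(0, I)
    return recon + lam * mmd(z, z_target)
```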


See also

* Autoencoder
* Artificial neural network
* Deep learning
* Generative adversarial network
* Representation learning
* Sparse dictionary learning
* Data augmentation
* Backpropagation

