Fréchet Inception Distance

The Fréchet inception distance (FID) is a metric used to assess the quality of images created by a generative model, like a generative adversarial network (GAN). Unlike the earlier inception score (IS), which evaluates only the distribution of generated images, the FID compares the distribution of generated images with the distribution of a set of real images ("ground truth"). The FID metric was introduced in 2017 and, as of 2020, is the standard metric for assessing the quality of generative models. It has been used to measure the quality of many recent models, including the high-resolution StyleGAN1 and StyleGAN2 networks.


Definition

For any two probability distributions \mu, \nu over \R^n having finite mean and variances, their Fréchet distance is

d_F(\mu, \nu) := \left( \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{\R^n \times \R^n} \lVert x - y \rVert^2 \, \mathrm{d}\gamma(x, y) \right)^{1/2},

where \Gamma(\mu, \nu) is the set of all measures on \R^n \times \R^n with marginals \mu and \nu on the first and second factors respectively. (The set \Gamma(\mu, \nu) is also called the set of all couplings of \mu and \nu.) In other words, it is the 2-Wasserstein distance on \R^n.

For two multidimensional Gaussian distributions \mathcal{N}(\mu, \Sigma) and \mathcal{N}(\mu', \Sigma'), it is explicitly solvable as

d_F(\mathcal{N}(\mu, \Sigma), \mathcal{N}(\mu', \Sigma'))^2 = \lVert \mu - \mu' \rVert_2^2 + \operatorname{tr}\left(\Sigma + \Sigma' - 2\left(\Sigma^{\frac{1}{2}} \cdot \Sigma' \cdot \Sigma^{\frac{1}{2}}\right)^{\frac{1}{2}}\right).

This allows the FID to be defined in pseudocode form:
INPUT a function f: \Omega_X \to \R^n.
INPUT two datasets S, S' \subset \Omega_X.
Compute f(S), f(S') \subset \R^n.
Fit two Gaussian distributions \mathcal{N}(\mu, \Sigma), \mathcal{N}(\mu', \Sigma'), respectively for f(S), f(S').
RETURN d_F(\mathcal{N}(\mu, \Sigma), \mathcal{N}(\mu', \Sigma'))^2.
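
A minimal NumPy/SciPy sketch of this pseudocode is given below. The helper names frechet_gaussian_distance and fid_from_features, and the use of scipy.linalg.sqrtm for the matrix square root, are illustrative assumptions rather than a reference implementation.

import numpy as np
from scipy import linalg


def frechet_gaussian_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """Squared 2-Wasserstein distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # When sigma1 is nonsingular, sigma1 @ sigma2 is similar to
    # sigma1^(1/2) @ sigma2 @ sigma1^(1/2), so their square roots have the same
    # trace; the cheaper product form is used here.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if not np.isfinite(covmean).all():
        # Regularize near-singular covariances with a small ridge (assumed eps).
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset), disp=False)
    covmean = covmean.real  # drop tiny imaginary parts caused by numerical error
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))


def fid_from_features(feats_real, feats_gen):
    """Fit a Gaussian to each feature set and return their squared Frechet distance."""
    mu1, sigma1 = feats_real.mean(axis=0), np.cov(feats_real, rowvar=False)
    mu2, sigma2 = feats_gen.mean(axis=0), np.cov(feats_gen, rowvar=False)
    return frechet_gaussian_distance(mu1, sigma1, mu2, sigma2)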
In most practical uses of the FID, \Omega_X is the space of images, and f is an Inception v3 model trained on ImageNet, with its final classification layer removed. Technically, f returns the 2048-dimensional activation vector of the model's ''pool3'' layer.
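
As a concrete illustration, such a feature extractor can be built, for example, from torchvision's pretrained Inception v3 by replacing the final classification layer with an identity mapping, so that the network outputs the 2048-dimensional pooled activations. The sketch below assumes a recent torchvision; the preprocessing pipeline and the inception_features helper are hypothetical, and torchvision's weights differ from the original TensorFlow Inception weights, so the resulting scores are not directly comparable to reference FID values.

import torch
from torchvision import models, transforms

# Illustrative preprocessing (an assumption): Inception v3 expects 299x299 inputs
# normalized with the standard ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def inception_features(images, device="cpu"):
    """Map a batch of preprocessed image tensors to 2048-dim pool3 activations."""
    model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
    model.fc = torch.nn.Identity()  # drop the final classification layer
    model.eval().to(device)
    with torch.no_grad():
        feats = model(images.to(device))  # shape (batch, 2048) in eval mode
    return feats.cpu().numpy()

The arrays of features computed this way for the real and the generated image sets can then be passed to fid_from_features above.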


Interpretation

Rather than directly comparing images pixel by pixel (for example, as the L2 norm does), the FID compares the mean and covariance of the activations of one of the deepest layers in Inception v3. These deep layers are closer to the output nodes that correspond to real-world objects such as a specific breed of dog or an airplane, and further from the shallow layers near the input image. As a result, they tend to mimic human perception of similarity in images.


Variants

Specialized variants of the FID have been proposed as evaluation metrics for other domains: the Fréchet Audio Distance (FAD) for music enhancement algorithms, the Fréchet Video Distance (FVD) for generative models of video, and the Fréchet ChemNet Distance (FCD) for AI-generated molecules.


Limitations

Chong and Forsyth showed the FID to be statistically biased, in the sense that its expected value over finite data is not its true value. Also, because the FID measures the Wasserstein distance to the ground-truth distribution, it is inadequate for evaluating the quality of generators in domain adaptation setups, or in zero-shot generation. Finally, while the FID is more consistent with human judgement than the previously used inception score, there are cases where the FID is inconsistent with human judgment (e.g. Figures 3 and 5 in Liu ''et al.'').
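
The finite-sample bias can be illustrated with a small simulation: when both sample sets are drawn from the same distribution the true Fréchet distance is zero, yet the estimate computed from fitted Gaussians is strictly positive and shrinks only as the sample size grows. The sketch below reuses the hypothetical fid_from_features helper from the Definition section.

import numpy as np

rng = np.random.default_rng(0)
dim = 64  # toy feature dimension; actual FID features are 2048-dimensional

for n in (100, 1_000, 10_000):
    real = rng.standard_normal((n, dim))
    fake = rng.standard_normal((n, dim))
    # Both samples come from N(0, I), so the true Frechet distance is 0,
    # yet the estimate below is strictly positive and decreases as n grows.
    print(n, fid_from_features(real, fake))  # helper sketched in the Definition section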


See also

* Fréchet distance


References
