In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution, i.e. every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g. time or space.
The concept of Gaussian processes is named after Carl Friedrich Gauss because it is based on the notion of the Gaussian distribution (normal distribution). Gaussian processes can be seen as an infinite-dimensional generalization of multivariate normal distributions.
Gaussian processes are useful in statistical modelling, benefiting from properties inherited from the normal distribution. For example, if a random process is modelled as a Gaussian process, the distributions of various derived quantities can be obtained explicitly. Such quantities include the average value of the process over a range of times and the error in estimating the average using sample values at a small set of times. While exact models often scale poorly as the amount of data increases, multiple approximation methods have been developed which often retain good accuracy while drastically reducing computation time.
Definition
A time continuous stochastic process \{X_t ; t \in T\} is Gaussian if and only if for every finite set of indices t_1, \ldots, t_k in the index set T,
: \mathbf{X}_{t_1, \ldots, t_k} = (X_{t_1}, \ldots, X_{t_k})
is a multivariate Gaussian random variable. That is the same as saying every linear combination of (X_{t_1}, \ldots, X_{t_k}) has a univariate normal (or Gaussian) distribution.
Using characteristic functions of random variables, the Gaussian property can be formulated as follows: \{X_t ; t \in T\} is Gaussian if and only if, for every finite set of indices t_1, \ldots, t_k, there are real-valued \sigma_{\ell j}, \mu_\ell with \sigma_{jj} > 0 such that the following equality holds for all s_1, s_2, \ldots, s_k \in \mathbb{R}:
: \operatorname{E}\left[\exp\left(i \sum_{\ell=1}^k s_\ell \, X_{t_\ell}\right)\right] = \exp\left(-\frac{1}{2} \sum_{\ell, j} \sigma_{\ell j} s_\ell s_j + i \sum_\ell \mu_\ell s_\ell\right),
where i denotes the imaginary unit such that i^2 = -1.
The numbers \sigma_{\ell j} and \mu_\ell can be shown to be the covariances and means of the variables in the process.
Variance
The variance of a Gaussian process is finite at any time t, formally
: \operatorname{E}\left[\left|X_t\right|^2\right] < \infty \quad \text{for all } t \in T.
Stationarity
For general stochastic processes, strict-sense stationarity implies wide-sense stationarity, but not every wide-sense stationary stochastic process is strict-sense stationary. However, for a Gaussian stochastic process the two concepts are equivalent: a Gaussian stochastic process is strict-sense stationary if and only if it is wide-sense stationary.
Example
There is an explicit representation for stationary Gaussian processes. A simple example of this representation is
: X_t = \cos(at)\,\xi_1 + \sin(at)\,\xi_2,
where \xi_1 and \xi_2 are independent random variables with the standard normal distribution.
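The stationarity of this representation can be checked numerically; the following is a minimal Monte Carlo sketch (variable names are illustrative, not from the article):

```python
import numpy as np

# X_t = cos(a t) * xi1 + sin(a t) * xi2 has covariance
# E[X_t X_s] = cos(a t)cos(a s) + sin(a t)sin(a s) = cos(a (t - s)),
# which depends only on the lag t - s, hence stationarity.
rng = np.random.default_rng(0)
a = 2.0
t, s = 0.7, 0.3

n_draws = 200_000
xi1 = rng.standard_normal(n_draws)
xi2 = rng.standard_normal(n_draws)

x_t = np.cos(a * t) * xi1 + np.sin(a * t) * xi2
x_s = np.cos(a * s) * xi1 + np.sin(a * s) * xi2

empirical_cov = np.mean(x_t * x_s)     # Monte Carlo estimate of E[X_t X_s]
theoretical_cov = np.cos(a * (t - s))  # depends only on the lag t - s
```

The same experiment with any other pair (t, s) of equal lag yields the same covariance, which is exactly the stationarity property.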
Covariance functions
A key fact of Gaussian processes is that they can be completely defined by their second-order statistics. Thus, if a Gaussian process is assumed to have mean zero, defining the covariance function completely defines the process's behaviour. Importantly, the non-negative definiteness of this function enables its spectral decomposition using the Karhunen–Loève expansion. Basic aspects that can be defined through the covariance function are the process's stationarity, isotropy, smoothness and periodicity.
Stationarity refers to the process's behaviour regarding the separation of any two points x and x'. If the process is stationary, the covariance function depends only on x - x'. For example, the Ornstein–Uhlenbeck process is stationary.
If the process depends only on |x - x'|, the Euclidean distance (not the direction) between x and x', then the process is considered isotropic. A process that is concurrently stationary and isotropic is considered to be homogeneous; in practice these properties reflect the differences (or rather the lack of them) in the behaviour of the process given the location of the observer.
Ultimately Gaussian processes translate as taking priors on functions, and the smoothness of these priors can be induced by the covariance function. If we expect that for "near-by" input points x and x' their corresponding output points y and y' are also "near-by", then the assumption of continuity is present. If we wish to allow for significant displacement then we might choose a rougher covariance function. Extreme examples of this behaviour are the Ornstein–Uhlenbeck covariance function and the squared exponential, where the former is never differentiable and the latter infinitely differentiable. Periodicity refers to inducing periodic patterns within the behaviour of the process. Formally, this is achieved by mapping the input x to a two dimensional vector u(x) = (\cos(x), \sin(x)).
Usual covariance functions

There are a number of common covariance functions:
*Constant: K_\text{C}(x, x') = C
*Linear: K_\text{L}(x, x') = x^\mathsf{T} x'
*White Gaussian noise: K_\text{GN}(x, x') = \sigma^2 \delta_{x, x'}
*Squared exponential: K_\text{SE}(x, x') = \exp\left(-\frac{|d|^2}{2\ell^2}\right)
*Ornstein–Uhlenbeck: K_\text{OU}(x, x') = \exp\left(-\frac{|d|}{\ell}\right)
*Matérn: K_\text{Matern}(x, x') = \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{\sqrt{2\nu}|d|}{\ell}\right)^\nu K_\nu\left(\frac{\sqrt{2\nu}|d|}{\ell}\right)
*Periodic: K_\text{P}(x, x') = \exp\left(-\frac{2\sin^2\left(\frac{d}{2}\right)}{\ell^2}\right)
*Rational quadratic: K_\text{RQ}(x, x') = \left(1 + |d|^2\right)^{-\alpha}, \quad \alpha \geq 0
Here d = x - x'. The parameter \ell is the characteristic length-scale of the process (practically, "how close" two points x and x' have to be to influence each other significantly), \delta is the Kronecker delta and \sigma the standard deviation of the noise fluctuations. Moreover, K_\nu is the modified Bessel function of order \nu and \Gamma(\nu) is the gamma function evaluated at \nu. Importantly, a complicated covariance function can be defined as a linear combination of other simpler covariance functions in order to incorporate different insights about the data-set at hand.
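A few of these kernels, together with a simple linear combination, can be written as follows. This is a hedged sketch (function and parameter names are illustrative, not from a particular library):

```python
import numpy as np

# d = x - x'; ell is the characteristic length-scale.

def squared_exponential(x, xp, ell=1.0):
    d = x - xp
    return np.exp(-d**2 / (2 * ell**2))   # infinitely differentiable kernel

def ornstein_uhlenbeck(x, xp, ell=1.0):
    d = x - xp
    return np.exp(-np.abs(d) / ell)       # rough: sample paths nowhere differentiable

def periodic(x, xp, ell=1.0):
    d = x - xp
    return np.exp(-2 * np.sin(d / 2)**2 / ell**2)

# A sum of covariance functions is again a valid covariance function,
# so simpler kernels can be combined to encode several beliefs at once.
def combined(x, xp, ell=1.0):
    return squared_exponential(x, xp, ell) + periodic(x, xp, ell)
```

Each kernel equals 1 at zero separation and decays (or, for the periodic kernel, repeats with period 2π) as the separation d grows.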
Clearly, the inferential results are dependent on the values of the hyperparameters \theta (e.g. \ell and \sigma) defining the model's behaviour. A popular choice for \theta is to provide ''maximum a posteriori'' (MAP) estimates of it with some chosen prior. If the prior is very near uniform, this is the same as maximizing the marginal likelihood of the process; the marginalization being done over the observed process values y. This approach is also known as ''maximum likelihood II'', ''evidence maximization'', or ''empirical Bayes''.
Continuity
For a Gaussian process, continuity in probability is equivalent to mean-square continuity, and continuity with probability one is equivalent to sample continuity. The latter implies, but is not implied by, continuity in probability. Continuity in probability holds if and only if the mean and autocovariance are continuous functions. In contrast, sample continuity was challenging even for stationary Gaussian processes (as probably noted first by Andrey Kolmogorov), and more challenging for more general processes. As usual, by a sample continuous process one means a process that admits a sample continuous modification.
Stationary case
For a stationary Gaussian process X = (X_t)_{t \in \mathbb{R}}, some conditions on its spectrum are sufficient for sample continuity, but fail to be necessary. A necessary and sufficient condition, sometimes called the Dudley–Fernique theorem, involves the function \sigma defined by
: \sigma(h) = \sqrt{\operatorname{E}\left[(X_{t+h} - X_t)^2\right]}
(the right-hand side does not depend on t due to stationarity). Continuity of X in probability is equivalent to continuity of \sigma at 0. When convergence of \sigma(h) to \sigma(0) = 0 (as h \to 0) is too slow, sample continuity of X may fail. Convergence of the following integrals matters:
: I(\sigma) = \int_0^1 \frac{\sigma(h)}{h \sqrt{\log(1/h)}} \, dh = \int_0^\infty 2\sigma\left(e^{-x^2}\right) dx,
these two integrals being equal according to integration by substitution h = e^{-x^2}, x = \sqrt{\log(1/h)}. The first integrand need not be bounded as h \to 0+, thus the integral may converge (I(\sigma) < \infty) or diverge (I(\sigma) = \infty). Taking for example \sigma\left(e^{-x^2}\right) = x^{-a} for large x, that is, \sigma(h) = (\log(1/h))^{-a/2} for small h, one obtains I(\sigma) < \infty when a > 1 and I(\sigma) = \infty when 0 < a \leq 1. In these two cases the function \sigma is increasing on [0, \infty), but generally it is not. Moreover, the condition
: (*) \quad \sigma \text{ is continuous and } I(\sigma) < \infty
does not follow from continuity of \sigma and the evident relations \sigma(h) \geq 0 (for all h) and \sigma(0) = 0.
Some history.
Sufficiency was announced by Xavier Fernique in 1964, but the first proof was published by Richard M. Dudley in 1967.
Necessity was proved by Michael B. Marcus and Lawrence Shepp in 1970.
There exist sample continuous processes X such that they violate condition (*). An example found by Marcus and Shepp is a random lacunary Fourier series
: X_t = \sum_{n=1}^\infty c_n \left(\xi_n \cos \lambda_n t + \eta_n \sin \lambda_n t\right),
where \xi_n, \eta_n are independent random variables with standard normal distribution; frequencies 0 < \lambda_1 < \lambda_2 < \cdots are a fast growing sequence; and coefficients c_n > 0 satisfy \sum_n c_n < \infty. The latter relation implies \operatorname{E} \sum_n c_n (|\xi_n| + |\eta_n|) < \infty, whence \sum_n c_n (|\xi_n| + |\eta_n|) < \infty almost surely, which ensures uniform convergence of the Fourier series almost surely, and sample continuity of X. Its autocovariation function
: \operatorname{E}\left[X_t X_0\right] = \sum_{n=1}^\infty c_n^2 \cos \lambda_n t
is nowhere monotone, as well as the corresponding function \sigma.
Brownian motion as the integral of Gaussian processes
A Wiener process (also known as Brownian motion) is the integral of a white noise generalized Gaussian process. It is not stationary, but it has stationary increments.
The Ornstein–Uhlenbeck process is a stationary Gaussian process.
The Brownian bridge is (like the Ornstein–Uhlenbeck process) an example of a Gaussian process whose increments are not independent.
The fractional Brownian motion is a Gaussian process whose covariance function is a generalisation of that of the Wiener process.
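The "integral of white noise" description of the Wiener process can be sketched numerically: cumulative sums of independent Gaussian increments approximate Brownian motion, whose variance grows linearly in time (Var(W_t) = t). Variable names below are illustrative:

```python
import numpy as np

# Approximate W_t on [0, 1] by summing independent N(0, dt) increments.
rng = np.random.default_rng(2)
dt = 0.01
n_steps, n_paths = 100, 50_000
increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(increments, axis=1)   # W at t = dt, 2*dt, ..., 1.0
var_at_1 = W[:, -1].var()           # empirical Var(W_1), should be near 1.0
```

The increments over disjoint intervals are independent and identically distributed given the interval length, which is the stationary-increments property mentioned above.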
Driscoll's zero-one law
Driscoll's zero-one law is a result characterizing the sample functions generated by a Gaussian process.
Let f be a mean-zero Gaussian process \{f(t) : t \in T\} with non-negative definite covariance function K. Let \mathcal{H}(R) be a reproducing kernel Hilbert space with positive definite kernel R. Then
: \lim_{n \to \infty} \operatorname{tr}\left[K_n R_n^{-1}\right] < \infty,
where K_n and R_n are the covariance matrices of all possible pairs of n points, implies \Pr[f \in \mathcal{H}(R)] = 1. Moreover,
: \lim_{n \to \infty} \operatorname{tr}\left[K_n R_n^{-1}\right] = \infty
implies \Pr[f \in \mathcal{H}(R)] = 0. This has significant implications when K = R, as
: \lim_{n \to \infty} \operatorname{tr}\left[K_n K_n^{-1}\right] = \lim_{n \to \infty} n = \infty.
As such, almost all sample paths of a mean-zero Gaussian process with positive definite kernel K will lie outside of the Hilbert space \mathcal{H}(K).
Linearly constrained Gaussian processes
For many applications of interest some pre-existing knowledge about the system at hand is already given. Consider, e.g., the case where the output of the Gaussian process corresponds to a magnetic field; here, the real magnetic field is bound by Maxwell's equations, and a way to incorporate this constraint into the Gaussian process formalism would be desirable, as this would likely improve the accuracy of the algorithm.
A method to incorporate linear constraints into Gaussian processes already exists: consider the (vector valued) output function f(x) which is known to obey the linear constraint (i.e. \mathcal{F}_X is a linear operator)
: \mathcal{F}_X(f(x)) = 0.
Then the constraint can be fulfilled by choosing f(x) = \mathcal{G}_X(g(x)), where g(x) \sim \mathcal{GP}(\mu_g, K_g) is modelled as a Gaussian process, and finding \mathcal{G}_X such that
: \mathcal{F}_X(\mathcal{G}_X(g)) = 0 \qquad \forall g.
Given \mathcal{G}_X and using the fact that Gaussian processes are closed under linear transformations, the Gaussian process for f obeying the constraint becomes
: f(x) = \mathcal{G}_X g \sim \mathcal{GP}\left(\mathcal{G}_X \mu_g, \; \mathcal{G}_X K_g \mathcal{G}_{X'}^\mathsf{T}\right).
Hence, linear constraints can be encoded into the mean and covariance function of a Gaussian process.
Applications
A Gaussian process can be used as a prior probability distribution over functions in Bayesian inference. Given any set of ''N'' points in the desired domain of your functions, take a multivariate Gaussian whose covariance matrix parameter is the Gram matrix of your ''N'' points with some desired kernel, and sample from that Gaussian. For solution of the multi-output prediction problem, Gaussian process regression for vector-valued functions was developed. In this method, a 'big' covariance is constructed, which describes the correlations between all the input and output variables taken in ''N'' points in the desired domain. This approach was elaborated in detail for the matrix-valued Gaussian processes and generalised to processes with 'heavier tails' like Student-t processes.
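The single-output prior-sampling recipe described above (build the Gram matrix at ''N'' points, then take one multivariate normal draw) can be sketched as follows; the squared-exponential kernel and all names are illustrative choices, not prescribed by the text:

```python
import numpy as np

# Draw one function sample from a zero-mean GP prior at N grid points.
rng = np.random.default_rng(1)

def gram(xs, ell=1.0):
    # Squared-exponential Gram matrix over a 1-D point set.
    d = xs[:, None] - xs[None, :]
    return np.exp(-d**2 / (2 * ell**2))

xs = np.linspace(0.0, 5.0, 50)
K = gram(xs) + 1e-8 * np.eye(len(xs))      # tiny jitter for numerical stability
L = np.linalg.cholesky(K)                  # K = L L^T
sample = L @ rng.standard_normal(len(xs))  # one draw from N(0, K)
```

Repeating the last line with fresh standard-normal vectors yields independent prior function samples on the same grid.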
Inference of continuous values with a Gaussian process prior is known as Gaussian process regression, or kriging; extending Gaussian process regression to multiple target variables is known as ''cokriging''. Gaussian processes are thus useful as a powerful non-linear multivariate interpolation tool.
Gaussian processes are also commonly used to tackle numerical analysis problems such as numerical integration, solving differential equations, or optimisation in the field of probabilistic numerics.
Gaussian processes can also be used in the context of mixture of experts models, for example. The underlying rationale of such a learning framework rests on the assumption that a given mapping cannot be well captured by a single Gaussian process model. Instead, the observation space is divided into subsets, each of which is characterized by a different mapping function; each of these is learned via a different Gaussian process component in the postulated mixture.
Gaussian process prediction, or Kriging

When concerned with a general Gaussian process regression problem (Kriging), it is assumed that for a Gaussian process f observed at coordinates x, the vector of values f(x) is just one sample from a multivariate Gaussian distribution of dimension equal to the number of observed coordinates n. Therefore, under the assumption of a zero-mean distribution, f(x) \sim N(0, K(\theta, x, x')), where K(\theta, x, x') is the covariance matrix between all possible pairs (x, x') for a given set of hyperparameters \theta.
As such the log marginal likelihood is:
: \log p(f(x) \mid \theta, x) = -\frac{1}{2} \left(f(x)^\mathsf{T} K(\theta, x, x')^{-1} f(x) + \log \det(K(\theta, x, x')) + n \log 2\pi\right)
and maximizing this marginal likelihood towards \theta provides the complete specification of the Gaussian process f. One can briefly note at this point that the first term corresponds to a penalty term for a model's failure to fit observed values and the second term to a penalty term that increases proportionally to a model's complexity. Having specified \theta, making predictions about unobserved values f(x^*) at coordinates x^* is then only a matter of drawing samples from the predictive distribution p(y^* \mid x^*, f(x), x) = N(y^* \mid A, B), where the posterior mean estimate A is defined as
: A = K(\theta, x^*, x) K(\theta, x, x')^{-1} f(x)
and the posterior variance estimate ''B'' is defined as:
: B = K(\theta, x^*, x^*) - K(\theta, x^*, x) K(\theta, x, x')^{-1} K(\theta, x^*, x)^\mathsf{T}
where K(\theta, x^*, x) is the covariance between the new coordinate of estimation ''x''* and all other observed coordinates ''x'' for a given hyperparameter vector \theta, K(\theta, x, x') and f(x) are defined as before and K(\theta, x^*, x^*) is the variance at point x^* as dictated by \theta. It is important to note that practically the posterior mean estimate A (the "point estimate") is just a linear combination of the observations f(x); in a similar manner the variance B is actually independent of the observations f(x). A known bottleneck in Gaussian process prediction is that the computational complexity of inference and likelihood evaluation is cubic in the number of points ,''x'',, and as such can become unfeasible for larger data sets.
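The posterior mean and variance computations above can be sketched in a few lines. This is a hedged, minimal implementation assuming a squared-exponential kernel, noisy scalar observations, and illustrative names throughout:

```python
import numpy as np

def kernel(a, b, ell=1.0):
    # Squared-exponential kernel matrix between two 1-D point sets.
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))

def gp_predict(xs, ys, x_star, noise=1e-2):
    # Zero-mean GP regression: posterior mean A and variance diag(B).
    K = kernel(xs, xs) + noise * np.eye(len(xs))   # K(x, x') plus noise term
    K_star = kernel(x_star, xs)                    # K(x*, x)
    mean = K_star @ np.linalg.solve(K, ys)         # linear in the observations
    cov = kernel(x_star, x_star) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, np.diag(cov)                      # cov is independent of ys

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.sin(xs)                                    # toy observations
mean, var = gp_predict(xs, ys, np.array([1.0]))
```

Note that both `np.linalg.solve` calls factor the same n-by-n matrix, which is the cubic-cost bottleneck mentioned above; in practice one Cholesky factorization is computed once and reused.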
Works on sparse Gaussian processes, that usually are based on the idea of building a ''representative set'' for the given process ''f'', try to circumvent this issue.
The kriging method can be used in the latent level of a nonlinear mixed-effects model for a spatial functional prediction: this technique is called latent kriging.
Often, the covariance has the form K(\theta, x, x') = \sigma^2 \tilde{K}(\theta, x, x'), where \sigma^2 is a scaling parameter. Examples are the Matérn class covariance functions. If this scaling parameter \sigma^2 is either known or unknown (i.e. must be marginalized), then the posterior probability p(\theta \mid D), i.e. the probability for the hyperparameters \theta given a set of data pairs D of observations of x and f(x), admits an analytical expression.
Bayesian neural networks as Gaussian processes
Bayesian neural networks are a particular type of Bayesian network that results from treating deep learning and artificial neural network models probabilistically, and assigning a prior distribution to their parameters. Computation in artificial neural networks is usually organized into sequential layers of artificial neurons. The number of neurons in a layer is called the layer width. As layer width grows large, many Bayesian neural networks reduce to a Gaussian process with a closed form compositional kernel. This Gaussian process is called the Neural Network Gaussian Process (NNGP). It allows predictions from Bayesian neural networks to be more efficiently evaluated, and provides an analytic tool to understand deep learning models.
Computational issues
In practical applications, Gaussian process models are often evaluated on a grid, leading to multivariate normal distributions. Using these models for prediction or parameter estimation with maximum likelihood requires evaluating a multivariate Gaussian density, which involves calculating the determinant and the inverse of the covariance matrix. Both of these operations have cubic computational complexity, which means that even for grids of modest sizes both operations can have a prohibitive computational cost. This drawback led to the development of multiple approximation methods.
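In practice the determinant and inverse are not formed explicitly: a single Cholesky factorization supplies both the quadratic term (via triangular solves) and the log-determinant. A minimal sketch, with an illustrative helper name:

```python
import numpy as np

def mvn_logpdf(y, K):
    # Log-density of N(0, K) at y, via one Cholesky factorization K = L L^T.
    # Still O(n^3) overall, but more stable than inv(K) and det(K),
    # and the factor L can be reused across evaluations.
    n = len(y)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))           # log det(K)
    return -0.5 * (y @ alpha + log_det + n * np.log(2 * np.pi))
```

Because det(K) can overflow or underflow for modest n while log det(K) stays representable, the Cholesky route is the standard choice in GP likelihood evaluation.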
See also
*Bayes linear statistics
*Bayesian interpretation of regularization
*Kriging
*Gaussian free field
*Gauss–Markov process
*Gradient-enhanced kriging (GEK)
*Student's t-process
References
External links
*The Gaussian Processes Web Site, including the text of Rasmussen and Williams' Gaussian Processes for Machine Learning
*A gentle introduction to Gaussian processes
*A Review of Gaussian Random Fields and Correlation Functions
*Efficient Reinforcement Learning using Gaussian Processes
Software
*GPML: A comprehensive Matlab toolbox for GP regression and classification
*STK: a Small (Matlab/Octave) Toolbox for Kriging and GP modeling
*Kriging module in UQLab framework (Matlab)
*Matlab/Octave function for stationary Gaussian fields
*Yelp MOE – A black box optimization engine using Gaussian process learning
*ooDACE – A flexible object-oriented Kriging Matlab toolbox
*GPstuff – Gaussian process toolbox for Matlab and Octave
*GPy – A Gaussian processes framework in Python
*GSTools – A geostatistical toolbox, including Gaussian process regression, written in Python
*Interactive Gaussian process regression demo
*Basic Gaussian process library written in C++11
*scikit-learn – A machine learning library for Python which includes Gaussian process regression and classification
*KriKit – The Kriging toolKit, developed at the Institute of Bio- and Geosciences 1 (IBG-1) of Forschungszentrum Jülich (FZJ)
Video tutorials
*Gaussian Process Basics by David MacKay
*Learning with Gaussian Processes by Carl Edward Rasmussen
*Bayesian inference and Gaussian processes by Carl Edward Rasmussen
{{DEFAULTSORT:Gaussian Process}}
Stochastic processes
Kernel methods for machine learning
Nonparametric Bayesian statistics
Normal distribution