Catalog Of Articles In Probability Theory

This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to the number of random variables involved and the type of the distribution. For example, (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are simply abbreviations for topics. The list of codes can be found in the table of contents.


Core probability: selected topics

Probability theory


Basic notions (bsc)

* Random variable
* Continuous probability distribution / (1:C)
* Cumulative distribution function / (1:DCR)
* Discrete probability distribution / (1:D)
* Independent and identically-distributed random variables / (FS:BDCR)
* Joint probability distribution / (F:DC)
* Marginal distribution / (2F:DC)
* Probability density function / (1:C)
* Probability distribution / (1:DCRG)
* Probability distribution function
* Probability mass function / (1:D)
* Sample space
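The basic notions above fit together simply for a finite sample space. A minimal sketch in Python, using a fair six-sided die as an illustrative example (not drawn from any listed article):

```python
from fractions import Fraction

# Probability mass function of a fair six-sided die: a discrete distribution
# over the sample space {1, ..., 6}.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    """Cumulative distribution function F(x) = P(X <= x), obtained by
    summing the PMF over all outcomes not exceeding x."""
    return sum(p for k, p in pmf.items() if k <= x)

assert sum(pmf.values()) == 1    # a PMF sums to 1 over the sample space
assert cdf(3) == Fraction(1, 2)  # P(X <= 3) = 1/2 for a fair die
assert cdf(6) == 1               # the CDF reaches 1 at the largest outcome
```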


Instructive examples (paradoxes) (iex)

* Berkson's paradox / (2:B)
* Bertrand's box paradox / (F:B)
* Borel–Kolmogorov paradox / cnd (2:CM)
* Boy or Girl paradox / (2:B)
* Exchange paradox / (2:D)
* Intransitive dice
* Monty Hall problem / (F:B)
* Necktie paradox
* Simpson's paradox
* Sleeping Beauty problem
* St. Petersburg paradox / mnt (1:D)
* Three Prisoners problem
* Two envelopes problem
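Several of the paradoxes above yield readily to simulation. A minimal sketch of the Monty Hall problem (the helper name and trial count are choices of this sketch):

```python
import random

def monty_hall(switch, trials=100_000, seed=0):
    """Estimate the win probability of the stay/switch strategies by simulation."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# Switching wins about 2/3 of the time; staying wins about 1/3.
```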


Moments (mnt)

* Expected value / (12:DCR)
* Canonical correlation / (F:R)
* Carleman's condition / anl (1:R)
* Central moment / (1:R)
* Coefficient of variation / (1:R)
* Correlation / (2:R)
* Correlation function / (U:R)
* Covariance / (2F:R) (1:G)
* Covariance function / (U:R)
* Covariance matrix / (F:R)
* Cumulant / (12F:DCR)
* Factorial moment / (1:R)
* Factorial moment generating function / anl (1:R)
* Fano factor
* Geometric standard deviation / (1:R)
* Hamburger moment problem / anl (1:R)
* Hausdorff moment problem / anl (1:R)
* Isserlis Gaussian moment theorem / Gau
* Jensen's inequality / (1:DCR)
* Kurtosis / (1:CR)
* Law of the unconscious statistician / (1:DCR)
* Moment / (12FU:CRG)
* Law of total covariance / (F:R)
* Law of total cumulance / (F:R)
* Law of total expectation / (F:DR)
* Law of total variance / (F:R)
* Logmoment generating function
* Marcinkiewicz–Zygmund inequality / inq
* Method of moments / lmt (L:R)
* Moment problem / anl (1:R)
* Moment-generating function / anl (1F:R)
* Second moment method / (1FL:DR)
* Skewness / (1:R)
* St. Petersburg paradox / iex (1:D)
* Standard deviation / (1:DCR)
* Standardized moment / (1:R)
* Stieltjes moment problem / anl (1:R)
* Trigonometric moment problem / anl (1:R)
* Uncorrelated / (2:R)
* Variance / (12F:DCR)
* Variance-to-mean ratio / (1:R)
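Central and standardized moments (covering variance, skewness, and kurtosis above) can be estimated directly from a sample. A minimal sketch, using a standard normal sample as an illustration:

```python
import math
import random

def standardized_moment(xs, k):
    """k-th standardized moment: the k-th central moment divided by sigma^k."""
    n = len(xs)
    mean = sum(xs) / n
    central = sum((x - mean) ** k for x in xs) / n      # k-th central moment
    sigma = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return central / sigma ** k

rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(100_000)]

skew = standardized_moment(sample, 3)  # ~0 for a symmetric distribution
kurt = standardized_moment(sample, 4)  # ~3 for a normal distribution
```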


Inequalities (inq)

* Chebyshev's inequality / (1:R)
* An inequality on location and scale parameters / (1:R)
* Azuma's inequality / (F:BR)
* Bennett's inequality / (F:R)
* Bernstein inequalities / (F:R)
* Bhatia–Davis inequality
* Chernoff bound / (F:B)
* Doob's martingale inequality / (FU:R)
* Dudley's theorem / Gau
* Entropy power inequality
* Etemadi's inequality / (F:R)
* Gauss's inequality
* Hoeffding's inequality / (F:R)
* Khintchine inequality / (F:B)
* Kolmogorov's inequality / (F:R)
* Marcinkiewicz–Zygmund inequality / mnt
* Markov's inequality / (1:R)
* McDiarmid's inequality
* Multidimensional Chebyshev's inequality
* Paley–Zygmund inequality / (1:R)
* Pinsker's inequality / (2:R)
* Vysochanskiï–Petunin inequality / (1:C)
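The two most elementary bounds above, Markov's and Chebyshev's inequalities, can be checked empirically. A minimal sketch against an exponential sample (the distribution and thresholds are illustrative choices):

```python
import random

rng = random.Random(42)
sample = [rng.expovariate(1.0) for _ in range(100_000)]  # mean 1, variance 1
n = len(sample)
mean = sum(sample) / n
sigma = (sum((x - mean) ** 2 for x in sample) / n) ** 0.5

# Markov's inequality: P(X >= a) <= E[X] / a for a nonnegative random variable.
a = 3.0
p_markov = sum(x >= a for x in sample) / n
assert p_markov <= mean / a

# Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1 / k^2.
k = 2.0
p_cheb = sum(abs(x - mean) >= k * sigma for x in sample) / n
assert p_cheb <= 1 / k ** 2
```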


Markov chains, processes, fields, networks (Mar)

* Markov chain / (FLSU:D)
* Additive Markov chain
* Bayesian network / Bay
* Birth–death process / (U:D)
* CIR process / scl
* Chapman–Kolmogorov equation / (F:DC)
* Cheeger bound / (L:D)
* Conductance
* Contact process
* Continuous-time Markov process / (U:D)
* Detailed balance / (F:D)
* Examples of Markov chains / (FL:D)
* Feller process / (U:G)
* Fokker–Planck equation / scl anl
* Foster's theorem / (L:D)
* Gauss–Markov process / Gau
* Geometric Brownian motion / scl
* Hammersley–Clifford theorem / (F:C)
* Harris chain / (L:DC)
* Hidden Markov model / (F:D)
* Hidden Markov random field
* Hunt process / (U:R)
* Kalman filter / (F:C)
* Kolmogorov backward equation / scl
* Kolmogorov's criterion / (F:D)
* Kolmogorov's generalized criterion / (U:D)
* Krylov–Bogolyubov theorem / anl
* Lumpability
* Markov additive process
* Markov blanket / Bay
* Markov chain mixing time / (L:D)
* Markov decision process
* Markov information source
* Markov kernel
* Markov logic network
* Markov network
* Markov process / (U:D)
* Markov property / (F:D)
* Markov random field
* Master equation / phs (U:D)
* Milstein method / scl
* Moran process
* Ornstein–Uhlenbeck process / Gau scl
* Partially observable Markov decision process
* Product-form solution / spr
* Quantum Markov chain / phs
* Semi-Markov process
* Stochastic matrix / anl
* Telegraph process / (U:B)
* Variable-order Markov model
* Wiener process / Gau scl
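A finite Markov chain, its stochastic matrix, and its stationary distribution can be sketched in a few lines. The two-state "weather" chain below is an invented illustration; iterating the Chapman–Kolmogorov step drives any initial distribution toward the stationary one:

```python
# A 2-state weather chain; each row of the stochastic matrix sums to 1.
P = [[0.9, 0.1],   # sunny -> (sunny, rainy)
     [0.5, 0.5]]   # rainy -> (sunny, rainy)

def step(dist, P):
    """One Chapman-Kolmogorov step: the next distribution is dist @ P."""
    m = len(P)
    return [sum(dist[i] * P[i][j] for i in range(m)) for j in range(m)]

dist = [1.0, 0.0]        # start surely sunny
for _ in range(100):     # iterate toward the stationary distribution
    dist = step(dist, P)

# The stationary distribution solves pi = pi P; here pi = (5/6, 1/6).
```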


Gaussian random variables, vectors, functions (Gau)

* Normal distribution / spd
* Abstract Wiener space
* Brownian bridge
* Classical Wiener space
* Concentration dimension
* Dudley's theorem / inq
* Estimation of covariance matrices
* Fractional Brownian motion
* Gaussian isoperimetric inequality
* Gaussian measure / anl
* Gaussian random field
* Gauss–Markov process / Mar
* Integration of the normal density function / spd anl
* Gaussian process
* Isserlis Gaussian moment theorem / mnt
* Karhunen–Loève theorem
* Large deviations of Gaussian random functions / lrd
* Lévy's modulus of continuity theorem / (U:R)
* Matrix normal distribution / spd
* Multivariate normal distribution / spd
* Ornstein–Uhlenbeck process / Mar scl
* Paley–Wiener integral / anl
* Pregaussian class
* Schilder's theorem / lrd
* Wiener process / Mar scl
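The Wiener process listed above can be simulated by summing independent Gaussian increments. A minimal sketch (step counts, seeds, and the helper name are choices of this sketch), checking that the variance of W(T) is close to T:

```python
import math
import random

def wiener_path(n_steps=250, T=1.0, seed=0):
    """Sample a Wiener process path on [0, T] from independent N(0, dt) increments."""
    rng = random.Random(seed)
    dt = T / n_steps
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))  # increment ~ N(0, dt)
        path.append(w)
    return path

# W(0) = 0 and Var[W(T)] = T, so the empirical second moment of the
# endpoints over many independent paths should be close to T = 1.
endpoints = [wiener_path(seed=s)[-1] for s in range(2000)]
var = sum(x * x for x in endpoints) / len(endpoints)
```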


Conditioning (cnd)

* Conditioning / (2:BDCR)
* Bayes' theorem / (2:BCG)
* Borel–Kolmogorov paradox / iex (2:CM)
* Conditional expectation / (2:BDR)
* Conditional independence / (3F:BR)
* Conditional probability
* Conditional probability distribution / (2:DC)
* Conditional random field / (F:R)
* Disintegration theorem / anl (2:G)
* Inverse probability / Bay
* Luce's choice axiom
* Regular conditional probability / (2:G)
* Rule of succession / (F:B)
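Several entries in this section (conditional probability, Bayes' theorem) admit a short numeric illustration. A minimal sketch, assuming a hypothetical diagnostic test whose sensitivity, specificity, and prevalence figures are purely illustrative:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical example: a diagnostic test (all numbers are illustrative).
def bayes_posterior(prior, sensitivity, specificity):
    """Posterior probability of the condition given a positive test."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# With 1% prevalence, 99% sensitivity and 95% specificity,
# a positive result is still far from conclusive.
post = bayes_posterior(prior=0.01, sensitivity=0.99, specificity=0.95)
print(round(post, 3))  # ≈ 0.167
```

The low posterior despite a highly accurate test is the classic base-rate effect that the conditioning articles above formalize.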


Specific distributions (spd)

* Binomial distribution / (1:D)
* (a,b,0) class of distributions / (1:D)
* Anscombe transform
* Bernoulli distribution / (1:B)
* Beta distribution / (1:C)
* Bose–Einstein statistics / (F:D)
* Cantor distribution / (1:C)
* Cauchy distribution / (1:C)
* Chi-squared distribution / (1:C)
* Compound Poisson distribution / (F:DR)
* Degenerate distribution / (1:D)
* Dirichlet distribution / (F:C)
* Discrete phase-type distribution / (1:D)
* Erlang distribution / (1:C)
* Exponential-logarithmic distribution / (1:C)
* Exponential distribution / (1:C)
* F-distribution / (1:C)
* Fermi–Dirac statistics / (1F:D)
* Fisher–Tippett distribution / (1:C)
* Gamma distribution / (1:C)
* Generalized normal distribution / (1:C)
* Geometric distribution / (1:D)
* Half circle distribution / (1:C)
* Hypergeometric distribution / (1:D)
* Normal distribution / Gau
* Integration of the normal density function / Gau anl
* Lévy distribution / (1:C)
* Matrix normal distribution / Gau
* Maxwell–Boltzmann statistics / (F:D)
* McCullagh's parametrization of the Cauchy distributions / (1:C)
* Multinomial distribution / (F:D)
* Multivariate normal distribution / Gau
* Negative binomial distribution / (1:D)
* Pareto distribution / (1:C)
* Phase-type distribution / (1:C)
* Poisson distribution / (1:D)
* Power law / (1:C)
* Skew normal distribution / (1:C)
* Stable distribution / (1:C)
* Student's t-distribution / (1:C)
* Tracy–Widom distribution / rmt
* Triangular distribution / (1:C)
* Weibull distribution / (1:C)
* Wigner semicircle distribution / (1:C)
* Wishart distribution / (F:C)
* Zeta distribution / (1:D)
* Zipf's law / (1:D)
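The distributions in this section are interrelated; for instance, the binomial distribution is the law of a sum of independent Bernoulli trials. A seeded Monte Carlo sketch of that relationship (the parameter values are illustrative):

```python
import random

# Binomial(n, p) as the law of a sum of n independent Bernoulli(p) trials,
# linking two entries above.  Seeded so the run is reproducible.
def binomial_sample(n, p, rng):
    return sum(1 for _ in range(n) if rng.random() < p)

rng = random.Random(0)
samples = [binomial_sample(20, 0.3, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to the theoretical mean n * p = 6.0
```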


Empirical measure (emm)

* Donsker's theorem / (LU:C)
* Empirical distribution function
* Empirical measure / (FL:RG) (U:D)
* Empirical process / (FL:RG) (U:D)
* Glivenko–Cantelli theorem / (FL:RG) (U:D)
* Khmaladze transformation / (FL:RG) (U:D)
* Vapnik–Chervonenkis theory
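The Glivenko–Cantelli theorem listed above states that the empirical distribution function converges uniformly to the true CDF. A minimal sketch for Uniform(0,1) samples, where the true CDF is F(x) = x and the sup-distance can be evaluated exactly at the jump points of the eCDF:

```python
import random

# Sup-distance between the empirical CDF of a sample and the Uniform(0,1)
# CDF F(x) = x.  At the i-th order statistic the eCDF jumps from
# (i-1)/n to i/n, so the supremum is attained at these points.
def ecdf_sup_distance(sample):
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs(i / n - x), abs((i - 1) / n - x))
               for i, x in enumerate(xs, start=1))

rng = random.Random(1)
for n in (10, 100, 10_000):
    sample = [rng.random() for _ in range(n)]
    print(n, round(ecdf_sup_distance(sample), 3))
# The printed sup-distances shrink as n grows, roughly like 1/sqrt(n).
```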


Limit theorems (lmt)

* Central limit theorem / (L:R)
* Berry–Esseen theorem / (F:R)
* Characteristic function / anl (1F:DCR)
* De Moivre–Laplace theorem / (L:BD)
* Helly–Bray theorem / anl (L:R)
* Illustration of the central limit theorem / (L:DC)
* Lindeberg's condition
* Lyapunov's central limit theorem / (L:R)
* Lévy's continuity theorem / anl (L:R)
* Lévy's convergence theorem / (S:R)
* Martingale central limit theorem / (S:R)
* Method of moments / mnt (L:R)
* Slutsky's theorem / anl
* Weak convergence of measures / anl
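The central limit theorem heading this section can be illustrated directly: standardized sums of i.i.d. Uniform(0,1) variables (mean 1/2, variance 1/12) are approximately standard normal. A seeded sketch with illustrative sample sizes:

```python
import random

# CLT sketch: standardize a sum of n Uniform(0,1) variables and check
# that the resulting sample behaves like N(0, 1).
def standardized_sum(n, rng):
    s = sum(rng.random() for _ in range(n))
    return (s - n / 2) / (n / 12) ** 0.5

rng = random.Random(2)
zs = [standardized_sum(30, rng) for _ in range(5_000)]
mean = sum(zs) / len(zs)
var = sum(z * z for z in zs) / len(zs) - mean ** 2
print(round(mean, 2), round(var, 2))  # near 0 and 1
# About 95% of the values should fall in (-1.96, 1.96), as for N(0, 1).
frac = sum(abs(z) < 1.96 for z in zs) / len(zs)
```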


Large deviations (lrd)

* Large deviations theory
* Contraction principle
* Cramér's theorem
* Exponentially equivalent measures
* Freidlin–Wentzell theorem
* Laplace principle
* Large deviations of Gaussian random functions / Gau
* Rate function
* Schilder's theorem / Gau
* Tilted large deviation principle
* Varadhan's lemma


Random graphs (rgr)

* Random graph
* BA model
* Barabási–Albert model
* Erdős–Rényi model
* Percolation theory / phs (L:B)
* Percolation threshold / phs
* Random geometric graph
* Random regular graph
* Watts and Strogatz model
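The Erdős–Rényi model listed above is simple enough to sketch in a few lines: in G(n, p), each of the n(n−1)/2 possible edges is present independently with probability p. A seeded sketch with illustrative parameters:

```python
import random

# Erdos-Renyi G(n, p): include each possible edge independently
# with probability p.  Seeded for reproducibility.
def gnp(n, p, rng):
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

rng = random.Random(3)
edges = gnp(200, 0.1, rng)
expected = 0.1 * 200 * 199 / 2  # = 1990 expected edges
print(len(edges))  # close to 1990
```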


Random matrices (rmt)

* Random matrix
* Circular ensemble
* Gaussian matrix ensemble
* Tracy–Widom distribution / spd
* Weingarten function / anl


Stochastic calculus (scl)

* Itô calculus
* Bessel process
* CIR process / Mar
* Doléans-Dade exponential
* Dynkin's formula
* Euler–Maruyama method
* Feynman–Kac formula
* Filtering problem
* Fokker–Planck equation / Mar anl
* Geometric Brownian motion / Mar
* Girsanov theorem
* Green measure
* Heston model / fnc
* Hörmander's condition / anl
* Infinitesimal generator
* Itô diffusion
* Itô isometry
* Itô's lemma
* Kolmogorov backward equation / Mar
* Local time
* Milstein method / Mar
* Novikov's condition
* Ornstein–Uhlenbeck process / Gau Mar
* Quadratic variation
* Random dynamical system / rds
* Reversible diffusion
* Runge–Kutta method
* Russo–Vallois integral
* Schramm–Loewner evolution
* Semimartingale
* Stochastic calculus
* Stochastic differential equation
* Stochastic processes and boundary value problems / anl
* Stratonovich integral
* Tanaka equation
* Tanaka's formula
* Wiener process / Gau Mar
* Wiener sausage
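The Euler–Maruyama method listed above approximates solutions of SDEs by replacing Brownian increments with Gaussian steps. A minimal sketch for the Ornstein–Uhlenbeck process dX = −θX dt + σ dW, whose stationary variance σ²/(2θ) gives a simple check (parameter values are illustrative):

```python
import random

# Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
#   dX_t = -theta * X_t dt + sigma * dW_t.
# Stationary variance is sigma**2 / (2 * theta).
def euler_maruyama_ou(theta, sigma, x0, dt, steps, rng):
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Brownian increment ~ N(0, dt)
        x += -theta * x * dt + sigma * dw
    return x

rng = random.Random(4)
end_points = [euler_maruyama_ou(1.0, 0.5, 0.0, 0.01, 1_000, rng)
              for _ in range(2_000)]
var = sum(x * x for x in end_points) / len(end_points)
print(round(var, 3))  # near the stationary variance 0.5**2 / 2 = 0.125
```

The discretization introduces an O(dt) bias in the variance, which is why a small step size is used here.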


Malliavin calculus (Mal)

* Malliavin calculus
* Clark–Ocone theorem
* H-derivative
* Integral representation theorem for classical Wiener space
* Integration by parts operator
* Malliavin derivative
* Malliavin's absolute continuity lemma
* Ornstein–Uhlenbeck operator
* Skorokhod integral


Random dynamical systems (rds)

Random dynamical system / scl
* Absorbing set
* Base flow
* Pullback attractor


Analytic aspects (including measure theoretic) (anl)

*
Probability space In probability theory, a probability space or a probability triple (\Omega, \mathcal, P) is a mathematical construct that provides a formal model of a random process or "experiment". For example, one can define a probability space which models ...
*
Carleman's condition In mathematics, particularly, in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure \mu satisfies Carleman's condition, there is no other measure \nu having the same moment ...
 / mnt (1:R) *
Characteristic function In mathematics, the term "characteristic function" can refer to any of several distinct concepts: * The indicator function of a subset, that is the function \mathbf_A\colon X \to \, which for a given subset ''A'' of ''X'', has value 1 at points ...
 / lmt (1F:DCR) * Contiguity#Probability theory *
Càdlàg In mathematics, a càdlàg (), RCLL ("right continuous with left limits"), or corlol ("continuous on (the) right, limit on (the) left") function is a function defined on the real numbers (or a subset of them) that is everywhere right-continuous an ...
* Disintegration theorem / cnd (2:G) *
Dynkin system A Dynkin system, named after Eugene Dynkin, is a collection of subsets of another universal set \Omega satisfying a set of axioms weaker than those of -algebra. Dynkin systems are sometimes referred to as -systems (Dynkin himself used this term ...
*
Exponential family In probability and statistics, an exponential family is a parametric set of probability distributions of a certain form, specified below. This special form is chosen for mathematical convenience, including the enabling of the user to calculate ...
*
Factorial moment generating function In probability theory and statistics, the factorial moment generating function (FMGF) of the probability distribution of a real-valued random variable ''X'' is defined as :M_X(t)=\operatorname\bigl ^\bigr/math> for all complex numbers ''t'' for w ...
 / mnt (1:R) *
Filtration Filtration is a physical separation process that separates solid matter and fluid from a mixture using a ''filter medium'' that has a complex structure through which only the fluid can pass. Solid particles that cannot pass through the filte ...
*
Fokker–Planck equation In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag (physi ...
 / scl Mar *
Gaussian measure In mathematics, Gaussian measure is a Borel measure on finite-dimensional Euclidean space \mathbb{R}^n, closely related to the normal distribution in statistics. There is also a generalization to infinite-dimensional spaces. Gaussian measures are na ...
 / Gau *
Hamburger moment problem In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence (''m''0, ''m''1, ''m''2, ...), does there exist a positive Borel measure (for instance, the measure determined by the cumulative distribution function o ...
 / mnt (1:R) *
Hausdorff moment problem In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence be the sequence of moments :m_n = \int_0^1 x^n\,d\mu(x) of some Borel measure supported on the clos ...
 / mnt (1:R) * Helly–Bray theorem / lmt (L:R) * Hörmander's condition / scl * Integration of the normal density function / spd Gau *
Kolmogorov extension theorem In mathematics, the Kolmogorov extension theorem (also known as Kolmogorov existence theorem, the Kolmogorov consistency theorem or the Daniell-Kolmogorov theorem) is a theorem that guarantees that a suitably "consistent" collection of finite-dim ...
 / (SU:R) * Krylov–Bogolyubov theorem / Mar *
Law (stochastic processes) In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables in a probability space, where the index of the family often has the interpretation of time. Stoc ...
 / (U:G) * Location-scale family * Lévy's continuity theorem / lmt (L:R) * Minlos' theorem *
Moment problem In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure \mu to the sequence of moments :m_n = \int_^\infty x^n \,d\mu(x)\,. More generally, one may consider :m_n = \int_^\infty M_n(x) \,d\mu( ...
 / mnt (1:R) *
Moment-generating function In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compare ...
 / mnt (1F:R) *
Natural filtration In the theory of stochastic processes in mathematics and statistics, the generated filtration or natural filtration associated to a stochastic process is a filtration associated to the process which records its "past behaviour" at each time. It is ...
 / (U:G) * Paley–Wiener integral / Gau *
Sazonov's theorem In mathematics, Sazonov's theorem, named after Vyacheslav Vasilievich Sazonov, is a theorem in functional analysis. It states that a bounded linear operator between two Hilbert spaces is ''γ''-radonifying if it is a Hilbert–Schmidt op ...
*
Slutsky's theorem In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables. The theorem was named after Eugen Slutsky. Slutsky's theorem is also attributed to ...
 / lmt *
Standard probability space In probability theory, a standard probability space (also called a Lebesgue–Rokhlin probability space) is a probability space satisfying certain assumptions introduced by Vladimir Rokhlin in 1940. Up to isomorphism, such a space consists of an interval with Lebesgue measure together with at most countably many atoms ...
*
Stieltjes moment problem In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (''m''0, ''m''1, ''m''2, ...) to be of the form :m_n = \int_0^\infty x^n\,d\mu(x) for some measure ''μ'' ...
 / mnt (1:R) *
Stochastic matrix In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability. It is also called a probability matrix, transition matrix, ''s ...
 / Mar * Stochastic processes and boundary value problems / scl * Trigonometric moment problem / mnt (1:R) *
Weak convergence of measures In mathematics, more specifically measure theory, there are various notions of the convergence of measures. For an intuitive general sense of what is meant by ''convergence of measures'', consider a sequence of measures on a space, sharing a com ...
 / lmt * Weingarten function / rmt
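Several entries above, notably the stochastic matrix, describe directly computable objects. As a minimal illustrative sketch in pure Python (the matrix ''P'' is a made-up two-state example, not drawn from any of the linked articles), the stationary distribution of a row-stochastic matrix can be approximated by power iteration:

```python
# Row-stochastic transition matrix: each row is a probability vector.
P = [[0.9, 0.1],
     [0.5, 0.5]]
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12

# Power iteration pi -> pi P converges to the stationary distribution
# because this chain is irreducible and aperiodic.
pi = [1.0, 0.0]
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

print(pi)  # approaches [5/6, 1/6]
```

The fixed point solves pi = pi·P; for this particular matrix that gives pi = (5/6, 1/6).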


Core probability: other articles, by number and type of random variables


A single random variable (1:)


Binary (1:B)

*
Bernoulli trial In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is ...
 / (1:B) *
Complementary event In probability theory, the complement of any event ''A'' is the event [not ''A''], i.e. the event that ''A'' does not occur.Robert R. Johnson, Patricia J. Kuby: ''Elementary Statistics''. Cengage Learning 2007, p. 229. The event ''A'' and ...
 / (1:B) *
Entropy In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's possible outcomes; for a discrete variable with probability mass function ''p'' it equals −Σ ''p''(''x'') log ''p''(''x'') ...
 / (1:BDC) *
Event In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned ...
 / (1:B) *
Indecomposable distribution In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: ''Z'' ≠ ''X'' + ''Y''. I ...
 / (1:BDCR) *
Indicator function In mathematics, an indicator function or a characteristic function of a subset of a set is a function that maps elements of the subset to one, and all other elements to zero. That is, if is a subset of some set , then the indicator functio ...
 / (1F:B)
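The binary entries above (Bernoulli trial, indicator function, entropy) can be illustrated with a short sketch; the die-based event is an invented example, not taken from any of the linked articles:

```python
from math import log2

# Entropy (in bits) of a Bernoulli(p) random variable.
def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Indicator function of a subset A: maps elements of A to 1, others to 0.
def indicator(A):
    return lambda x: 1 if x in A else 0

one_A = indicator({2, 4, 6})   # "the die shows an even number"
print(binary_entropy(0.5))     # 1.0: a fair coin carries one bit
print(one_A(4), one_A(3))      # 1 0
```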


Discrete (1:D)

*
Binomial probability In probability theory and statistics, the binomial distribution with parameters ''n'' and ''p'' is the discrete probability distribution of the number of successes in a sequence of ''n'' independent experiments, each asking a yes–no question, and each wi ...
 / (1:D) *
Continuity correction In mathematics, a continuity correction is an adjustment made when a discrete object is approximated using a continuous object. Examples Binomial If a random variable ''X'' has a binomial distribution with parameters ''n'' and ''p'', i.e., '' ...
 / (1:DC) *
Entropy In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's possible outcomes; for a discrete variable with probability mass function ''p'' it equals −Σ ''p''(''x'') log ''p''(''x'') ...
 / (1:BDC) *
Equiprobable Equiprobability is a property for a collection of events that each have the same probability of occurring. In statistics and probability theory it is applied in the discrete uniform distribution and the equidistribution theorem for rational num ...
 / (1:D) *
Hann function The Hann function is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing or hanning; the function of length ''L'' and amplitude 1/''L'' is a raised cosine supported on an interval of length ''L'' ...
 / (1:D) *
Indecomposable distribution In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: ''Z'' ≠ ''X'' + ''Y''. I ...
 / (1:BDCR) *
Infinite divisibility In probability theory, a probability distribution is infinitely divisible if, for every positive integer ''n'', it can be represented as the distribution of the sum of ''n'' independent identically distributed random variables; examples include the normal, Poisson and gamma distributions ...
 / (1:DCR) * Le Cam's theorem / (F:B) (1:D) * Limiting density of discrete points / (1:DC) * Mean difference / (1:DCR) *
Memorylessness In probability and statistics, memorylessness is a property of probability distributions. It describes situations where previous failures or elapsed time does not affect future trials or further wait time. Only the geometric and exponential distr ...
 / (1:DCR) *
Probability vector In mathematics and statistics, a probability vector or stochastic vector is a vector with non-negative entries that add up to one. The positions (indices) of a probability vector represent the possible outcomes of a discrete random variable, and ...
 / (1:D) *
Probability-generating function In probability theory, the probability generating function of a discrete random variable is a power series representation (the generating function) of the probability mass function of the random variable. Probability generating functions are often ...
 / (1:D) * Tsallis entropy / (1:DC)
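Two of the discrete entries above, the probability vector and the probability-generating function, fit a short numerical sketch; the Binomial(5, 0.3) example below is invented for illustration and uses the standard closed form of the binomial PGF:

```python
from math import comb

# Probability vector of a Binomial(n, p) distribution.
n, p = 5, 0.3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
assert abs(sum(pmf) - 1.0) < 1e-9   # non-negative entries summing to one

# Probability-generating function G(s) = E[s^X], evaluated numerically.
def pgf(s):
    return sum(pk * s**k for k, pk in enumerate(pmf))

# For the binomial, G(s) has the closed form (1 - p + p*s)**n.
print(pgf(0.5), (1 - p + p * 0.5) ** n)  # the two values agree
```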


Continuous (1:C)

*
Almost surely In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (with respect to the probability measure). In other words, the set of outcomes on which the event does not occur ha ...
 / (1:C) (LS:D) *
Continuity correction In mathematics, a continuity correction is an adjustment made when a discrete object is approximated using a continuous object. Examples Binomial If a random variable ''X'' has a binomial distribution with parameters ''n'' and ''p'', i.e., '' ...
 / (1:DC) *
Edgeworth series In probability theory, the Gram–Charlier A series (named in honor of Jørgen Pedersen Gram and Carl Charlier), and the Edgeworth series (named in honor of Francis Ysidro Edgeworth) are series that approximate a probability distribution over th ...
 / (1:C) *
Entropy In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's possible outcomes; for a discrete variable with probability mass function ''p'' it equals −Σ ''p''(''x'') log ''p''(''x'') ...
 / (1:BDC) *
Indecomposable distribution In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: ''Z'' ≠ ''X'' + ''Y''. I ...
 / (1:BDCR) *
Infinite divisibility In probability theory, a probability distribution is infinitely divisible if, for every positive integer ''n'', it can be represented as the distribution of the sum of ''n'' independent identically distributed random variables; examples include the normal, Poisson and gamma distributions ...
 / (1:DCR) * Limiting density of discrete points / (1:DC) *
Location parameter In statistics, a location parameter of a probability distribution is a scalar- or vector-valued parameter x_0, which determines the "location" or shift of the distribution. In the literature of location parameter estimation, the probability distr ...
 / (1:C) * Mean difference / (1:DCR) *
Memorylessness In probability and statistics, memorylessness is a property of probability distributions. It describes situations where previous failures or elapsed time does not affect future trials or further wait time. Only the geometric and exponential distr ...
 / (1:DCR) *
Monotone likelihood ratio In statistics, two density functions ''f''(''x'') and ''g''(''x'') have the monotone likelihood ratio property when the ratio ''f''(''x'')/''g''(''x'') is a monotone function of ''x''; the property plays a central role in the theory of uniformly most powerful tests ...
 / (1:C) *
Scale parameter In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions. The larger the scale parameter, the more spread out the distribution. Definition If a family ...
 / (1:C) *
Stability In probability theory, the stability of a random variable means that a linear combination of independent copies of it has the same distribution, up to location and scale parameters; the distributions with this property form the family of stable distributions, which includes the normal and Cauchy distributions ...
 / (1:C) *
Stein's lemma Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods & ...
 / (12:C) *
Truncated distribution In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or ...
 / (1:C) * Tsallis entropy / (1:DC)
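Memorylessness, listed above, is a property worth seeing numerically: for an exponential variable, the conditional survival probability P(X > s + t | X > s) equals the unconditional P(X > t). A minimal sketch, with an invented rate of 2.0:

```python
from math import exp

# Survival function of an Exponential(rate) variable: P(X > x) = exp(-rate*x).
def surv(x, rate=2.0):
    return exp(-rate * x)

# Memorylessness: P(X > s + t | X > s) equals P(X > t).
s, t = 0.7, 1.3
conditional = surv(s + t) / surv(s)
print(conditional, surv(t))  # the two values agree
```

Among continuous distributions, only the exponential family of rates has this property; among discrete ones, only the geometric.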


Real-valued, arbitrary (1:R)

*
Heavy-tailed distribution In probability theory, heavy-tailed distributions are probability distributions whose tails are not exponentially bounded: that is, they have heavier tails than the exponential distribution. Roughly speaking, “heavy-tailed” means the distribu ...
 / (1:R) *
Indecomposable distribution In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: ''Z'' ≠ ''X'' + ''Y''. I ...
 / (1:BDCR) *
Infinite divisibility In probability theory, a probability distribution is infinitely divisible if, for every positive integer ''n'', it can be represented as the distribution of the sum of ''n'' independent identically distributed random variables; examples include the normal, Poisson and gamma distributions ...
 / (1:DCR) *
Locality / (1:R) * Mean difference / (1:DCR) *
Memorylessness In probability and statistics, memorylessness is a property of probability distributions. It describes situations where previous failures or elapsed time does not affect future trials or further wait time. Only the geometric and exponential distr ...
 / (1:DCR) *
Quantile In statistics and probability, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities or dividing the observations in a sample in the same way. There is one fewer quantile t ...
 / (1:R) *
Survival function The survival function is a function that gives the probability that a patient, device, or other object of interest will survive past a certain time. The survival function is also known as the survivor function or reliability function. The term ...
 / (1:R) *
Taylor expansions for the moments of functions of random variables In probability theory, it is possible to approximate the moments of a function ''f'' of a random variable ''X'' using Taylor expansions, provided that ''f'' is sufficiently differentiable and that the moments of ''X'' are finite. A simulatio ...
 / (1:R)
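The quantile and survival function entries above have simple empirical counterparts. A minimal sketch on an invented five-point sample (the values are arbitrary):

```python
# Empirical CDF, quantile function (generalized inverse), and survival
# function of a small sample.
sample = sorted([3.1, 0.5, 2.2, 0.5, 4.0])

def ecdf(x):
    return sum(1 for v in sample if v <= x) / len(sample)

def quantile(q):
    # Smallest observation x with ecdf(x) >= q.
    return next(v for v in sample if ecdf(v) >= q)

def survival(x):
    return 1.0 - ecdf(x)

print(ecdf(2.2))      # 0.6: three of five observations are <= 2.2
print(quantile(0.5))  # 2.2, the empirical median
print(survival(2.2))  # 0.4
```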


Random point of a manifold (1:M)

* Bertrand's paradox / (1:M)


General (random element of an abstract space) (1:G)

* Pitman–Yor process / (1:G) *
Random compact set In mathematics, a random compact set is essentially a compact set-valued random variable. Random compact sets are useful in the study of attractors for random dynamical systems. Definition Let (M, d) be a complete separable metric space. Let \ma ...
 / (1:G) *
Random element In probability theory, random element is a generalization of the concept of random variable to more complicated spaces than the simple real line. The concept was introduced by who commented that the “development of probability theory and expansio ...
 / (1:G)


Two random variables (2:)


Binary (2:B)

*
Coupling In probability theory, coupling is a proof technique in which two random variables or processes are constructed jointly on a single probability space so that their realizations can be compared; it is used, for example, to bound distances between probability distributions ...
 / (2:BRG) * Craps principle / (2:B)


Discrete (2:D)

*
Kullback–Leibler divergence In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence), denoted D_{\text{KL}}(P \parallel Q), is a type of statistical distance: a measure of how much a model probability distribution is diff ...
 / (2:DCR) *
Mutual information In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual statistical dependence between the two variables. More specifically, it quantifies the "Information conten ...
 / (23F:DC)
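Both discrete entries above, the Kullback–Leibler divergence and mutual information, are one-liners over probability vectors; mutual information is the KL divergence between a joint distribution and the product of its marginals. A sketch on an invented 2×2 joint pmf:

```python
from math import log2

# Kullback–Leibler divergence D_KL(P || Q) between discrete distributions,
# in bits; requires Q to put mass wherever P does.
def kl(p, q):
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Mutual information from a joint pmf: I(X;Y) = D_KL(joint || px * py).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(v for (a, b), v in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (a, b), v in joint.items() if b == y) for y in (0, 1)}
mi = sum(v * log2(v / (px[x] * py[y])) for (x, y), v in joint.items())
print(mi)  # positive: X and Y are dependent
```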


Continuous (2:C)

* Copula / (2F:C) * Cramér's theorem / (2:C) *
Kullback–Leibler divergence In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence), denoted D_{\text{KL}}(P \parallel Q), is a type of statistical distance: a measure of how much a model probability distribution is diff ...
 / (2:DCR) *
Mutual information In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual statistical dependence between the two variables. More specifically, it quantifies the "Information conten ...
 / (23F:DC) *
Normally distributed and uncorrelated does not imply independent In probability theory, uncorrelated random variables are not necessarily independent, even when each is normally distributed; this article presents examples of pairs of normally distributed, uncorrelated random variables that are not independent ...
 / (2:C) *
Posterior probability The posterior probability is a type of conditional probability that results from updating the prior probability with information summarized by the likelihood via an application of Bayes' rule. From an epistemological perspective, the posteri ...
 / Bay (2:C) *
Stein's lemma Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods & ...
 / (12:C)


Real-valued, arbitrary (2:R)

*
Coupling In probability theory, coupling is a proof technique in which two random variables or processes are constructed jointly on a single probability space so that their realizations can be compared; it is used, for example, to bound distances between probability distributions ...
 / (2:BRG) *
Hellinger distance In probability and statistics, the Hellinger distance (closely related to, although different from, the Bhattacharyya distance) is used to quantify the similarity between two probability distributions. It is a type of ''f''-divergence. The Hell ...
 / (2:R) *
Kullback–Leibler divergence In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence), denoted D_{\text{KL}}(P \parallel Q), is a type of statistical distance: a measure of how much a model probability distribution is diff ...
 / (2:DCR) *
Lévy metric In mathematics, the Lévy metric is a metric on the space of cumulative distribution functions of one-dimensional random variables. It is a special case of the Lévy–Prokhorov metric, and is named after the French mathematician Paul Lévy. Def ...
 / (2:R) * Total variation#Total variation distance in probability theory / (2:R)


General (random element of an abstract space) (2:G)

*
Coupling In probability theory, coupling is a proof technique in which two random variables or processes are constructed jointly on a single probability space so that their realizations can be compared; it is used, for example, to bound distances between probability distributions ...
 / (2:BRG) *
Lévy–Prokhorov metric In mathematics, the Lévy–Prokhorov metric (sometimes known just as the Prokhorov metric) is a metric (i.e., a definition of distance) on the collection of probability measures on a given metric space. It is named after the French mathematician P ...
 / (2:G) *
Wasserstein metric In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space M. It is named after Leonid Vaseršteĭn. Intuitively, if each distribution ...
 / (2:G)


Three random variables (3:)


Binary (3:B)

*
Pairwise independence In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is p ...
 / (3:B) (F:R)


Discrete (3:D)

*
Mutual information In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual statistical dependence between the two variables. More specifically, it quantifies the "Information conten ...
 / (23F:DC)


Continuous (3:C)

*
Mutual information In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual statistical dependence between the two variables. More specifically, it quantifies the "Information conten ...
 / (23F:DC)


Finitely many random variables (F:)


Binary (F:B)

*
Bertrand's ballot theorem In combinatorics, Bertrand's ballot problem is the question: "In an election where candidate A receives ''p'' votes and candidate B receives ''q'' votes with ''p'' > ''q'', what is the probability that A will be strictly ahead of B throug ...
 / (F:B) *
Boole's inequality In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the indiv ...
 / (FS:B) *
Coin flipping Coin flipping, coin tossing, or heads or tails is using the thumb to make a coin go up while spinning in the air and checking which side is showing when it is down onto a surface, in order to randomly choose between two alternatives. It is a for ...
 / (F:B) *
Collectively exhaustive events In probability theory and logic, a set of events is jointly or collectively exhaustive if at least one of the events must occur. For example, when rolling a six-sided die, the events 1, 2, 3, 4, 5, and 6 are collectively exhaustive, because t ...
 / (F:B) *
Inclusion–exclusion principle In combinatorics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as : |A \cup B| = |A| + |B| - |A \cap B| ...
 / (F:B) *
Independence In probability theory, two events are independent if the occurrence of one does not affect the probability of the other; formally, events ''A'' and ''B'' are independent when P(''A'' ∩ ''B'') = P(''A'')P(''B''), and random variables are independent when the events they determine are ...
 / (F:BR) *
Indicator function In mathematics, an indicator function or a characteristic function of a subset of a set is a function that maps elements of the subset to one, and all other elements to zero. That is, if is a subset of some set , then the indicator functio ...
 / (1F:B) *
Law of total probability In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome which can be realized via several distinct ev ...
 / (F:B) * Le Cam's theorem / (F:B) (1:D) * Leftover hash lemma / (F:B) *
Lovász local lemma In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive (possibly small) probability that none of the events will occur. The Lovász local lemma allows a s ...
 / (F:B) * Mutually exclusive / (F:B) * Random walk / (FLS:BD) (U:C) * Schuette–Nesbitt formula / (F:B)
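Boole's inequality and the inclusion–exclusion principle, both listed above, can be checked directly on a small finite sample space; the three events below are invented for illustration:

```python
from itertools import combinations

# Three events as subsets of a 10-point sample space with uniform measure.
omega = set(range(10))
A = [{0, 1, 2, 3}, {2, 3, 4}, {3, 4, 5, 6}]
prob = lambda S: len(S) / len(omega)

union = set().union(*A)

# Boole's inequality (union bound): P(union) <= sum of the P(A_i).
assert prob(union) <= sum(prob(S) for S in A)

# Inclusion–exclusion recovers P(union) exactly: alternating sums of
# probabilities of k-wise intersections.
ie = sum((-1) ** (k + 1)
         * sum(prob(set.intersection(*c)) for c in combinations(A, k))
         for k in range(1, len(A) + 1))
print(prob(union), ie)  # both 0.7
```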


Discrete (F:D)

* Coupon collector's problem / gmb (F:D) * Graphical model / (F:D) * Kirkwood approximation / (F:D) *
Mutual information In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual statistical dependence between the two variables. More specifically, it quantifies the "Information conten ...
 / (23F:DC) * Random field / (F:D) * Random walk / (FLS:BD) (U:C) * Stopped process / (FU:DG)


Continuous (F:C)

* Anderson's theorem#Application to probability theory / (F:C) * Autoregressive integrated moving average / (FS:C) * Autoregressive model / (FS:C) * Autoregressive moving average model / (FS:C) * Copula / (2F:C) * Maxwell's theorem / (F:C) * Moving average model / (FS:C) *
Mutual information In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual statistical dependence between the two variables. More specifically, it quantifies the "Information conten ...
 / (23F:DC) * Schrödinger method / (F:C)


Real-valued, arbitrary (F:R)

* Bapat–Beg theorem / (F:R) * Comonotonicity / (F:R) * Doob martingale / (F:R) *
Independence In probability theory, two events are independent if the occurrence of one does not affect the probability of the other; formally, events ''A'' and ''B'' are independent when P(''A'' ∩ ''B'') = P(''A'')P(''B''), and random variables are independent when the events they determine are ...
 / (F:BR) * Littlewood–Offord problem / (F:R) * Lévy flight / (F:R) (U:C) * Martingale (probability theory) / (FU:R) * Martingale difference sequence / (F:R) * Maximum likelihood / (FL:R) * Multivariate random variable / (F:R) * Optional stopping theorem / (FS:R) *
Pairwise independence In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is p ...
 / (3:B) (F:R) * Stopping time / (FU:R) * Time series / (FS:R) * Wald's equation / (FS:R) * Wick product / (F:R)


General (random element of an abstract space) (F:G)

* Finite-dimensional distribution / (FU:G) * Hitting time / (FU:G) * Stopped process / (FU:DG)


A large number of random variables (finite but tending to infinity) (L:)


Binary (L:B)

* Random walk / (FLS:BD) (U:C)


Discrete (L:D)

*
Almost surely In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (with respect to the probability measure). In other words, the set of outcomes on which the event does not occur ha ...
 / (1:C) (LS:D) * Gambler's ruin / gmb (L:D) * Loop-erased random walk / (L:D) (U:C) * Preferential attachment / (L:D) * Random walk / (FLS:BD) (U:C) * Typical set / (L:D)
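Gambler's ruin and the random walk, both listed above, combine naturally in a short simulation; the starting point 10, absorbing barriers 0 and 20, and seed are invented parameters. For a fair walk, the ruin probability from k with upper barrier N is 1 − k/N, here 1/2:

```python
import random

# Gambler's ruin for a fair simple random walk: start at 10, absorb at 0 or 20.
random.seed(1)

def ruined(start=10, top=20):
    x = start
    while 0 < x < top:
        x += random.choice((-1, 1))
    return x == 0

runs = 2000
freq = sum(ruined() for _ in range(runs)) / runs
print(freq)  # close to the theoretical value 0.5
```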


Real-valued, arbitrary (L:R)

* Convergence of random variables / (LS:R) * Law of large numbers / (LS:R) * Maximum likelihood / (FL:R) * Stochastic convergence / (LS:R)


An infinite sequence of random variables (S:)


Binary (S:B)

* Bernoulli process / (S:B) *
Boole's inequality In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the indiv ...
 / (FS:B) * Borel–Cantelli lemma / (S:B) * De Finetti's theorem / (S:B) * Exchangeable random variables / (S:BR) * Random walk / (FLS:BD) (U:C)


Discrete (S:D)

*
Almost surely In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (with respect to the probability measure). In other words, the set of outcomes on which the event does not occur ha ...
 / (1:C) (LS:D) * Asymptotic equipartition property / (S:DC) * Bernoulli scheme / (S:D) * Branching process / (S:D) * Chinese restaurant process / (S:D) * Galton–Watson process / (S:D) * Information source (mathematics) / (S:D) * Random walk / (FLS:BD) (U:C)


Continuous (S:C)

* Asymptotic equipartition property / (S:DC) * Autoregressive integrated moving average / (FS:C) * Autoregressive model / (FS:C) * Autoregressive–moving-average model / (FS:C) * Moving-average model / (FS:C)


Real-valued, arbitrary (S:R)

* Big O in probability notation / (S:R) * Convergence of random variables / (LS:R) * Doob's martingale convergence theorems / (SU:R) * Ergodic theory / (S:R) * Exchangeable random variables / (S:BR) * Hewitt–Savage zero–one law / (S:RG) * Kolmogorov's zero–one law / (S:R) * Law of large numbers / (LS:R) * Law of the iterated logarithm / (S:R) * Maximal ergodic theorem / (S:R) * Op (statistics) / (S:R) * Optional stopping theorem / (FS:R) * Stationary process / (SU:R) * Stochastic convergence / (LS:R) * Stochastic process / (SU:RG) * Time series / (FS:R) * Uniform integrability / (S:R) * Wald's equation / (FS:R)
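The law of large numbers, listed above, is easy to observe in simulation; the Uniform(0, 1) source, sample size, and seed below are invented parameters:

```python
import random

# Law of large numbers: sample means of iid Uniform(0,1) draws
# converge to the expectation 1/2 as n grows.
random.seed(0)
n = 100_000
mean = sum(random.random() for _ in range(n)) / n
print(mean)  # near 0.5
```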


General (random element of an abstract space) (S:G)

* Hewitt–Savage zero–one law / (S:RG) * Mixing (mathematics) / (S:G) * Skorokhod's representation theorem / (S:G) * Stochastic process / (SU:RG)


Uncountably many random variables (continuous-time processes etc) (U:)


Discrete (U:D)

* Counting process / (U:D) * Cox process / (U:D) * Dirichlet process / (U:D) * Lévy process / (U:DC) * Non-homogeneous Poisson process / (U:D) * Point process / (U:D) * Poisson process / (U:D) * Poisson random measure / (U:D) * Random measure / (U:D) * Renewal theory / (U:D) * Stopped process / (FU:DG)


Continuous (U:C)

* Brownian motion / phs (U:C) * Gamma process / (U:C) * Loop-erased random walk / (L:D) (U:C) * Lévy flight / (F:R) (U:C) * Lévy process / (U:DC) * Martingale representation theorem / (U:C) * Random walk / (FLS:BD) (U:C) * Skorokhod's embedding theorem / (U:C)
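Brownian motion, listed above, is routinely approximated on a grid by summing independent Gaussian increments with variance equal to the step size; the grid size and seed below are invented parameters:

```python
import random
from math import sqrt

# Brownian motion on [0, 1] approximated by scaled Gaussian increments:
# W(t + dt) - W(t) ~ Normal(0, dt).
random.seed(7)
n = 1000
dt = 1.0 / n
path = [0.0]
for _ in range(n):
    path.append(path[-1] + random.gauss(0.0, sqrt(dt)))
print(len(path), path[-1])  # 1001 grid points; the endpoint is ~ N(0, 1)
```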


Real-valued, arbitrary (U:R)

* Compound Poisson process / (U:R) * Continuous stochastic process / (U:RG) * Doob's martingale convergence theorems / (SU:R) * Doob–Meyer decomposition theorem / (U:R) * Feller-continuous process / (U:R) * Kolmogorov continuity theorem / (U:R) * Local martingale / (U:R) * Martingale (probability theory) / (FU:R) * Stationary process / (SU:R) * Stochastic process / (SU:RG) * Stopping time / (FU:R)


General (random element of an abstract space) (U:G)

* Adapted process / (U:G) * Continuous stochastic process / (U:RG) * Finite-dimensional distribution / (FU:G) * Hitting time / (FU:G) * Killed process / (U:G) * Progressively measurable process / (U:G) * Sample-continuous process / (U:G) * Stochastic process / (SU:RG) * Stopped process / (FU:DG)


Around the core


General aspects (grl)

* Average * Bean machine * Cox's theorem * Equipossible * Exotic probability * Extractor (mathematics) * Free probability * Frequency (statistics) * Frequency probability * Impossible event * Infinite monkey theorem * Information geometry * Law of Truly Large Numbers * Littlewood's law * Observational error * Principle of indifference * Principle of maximum entropy * Probability * Probability interpretations * Propensity probability * Random number generator * Random sequence * Randomization * Randomness * Statistical dispersion * Statistical regularity * Uncertainty * Upper and lower probabilities * Urn problem


Foundations of probability theory (fnd)

* Algebra of random variables * Belief propagation * Dempster–Shafer theory * Dutch book * Elementary event * Normalizing constant * Possibility theory * Probability axioms * Transferable belief model * Unit measure


Gambling (gmb)

* Betting * Bookmaker * Coherence (philosophical gambling strategy) * Coupon collector's problem / (F:D) * Coupon collector's problem (generating function approach) / (F:D) * Gambler's fallacy * Gambler's ruin / (L:D) * Game of chance * Inverse gambler's fallacy * Lottery * Lottery machine * Luck * Martingale (betting system) * Odds * Pachinko * Parimutuel betting * Parrondo's paradox * Pascal's wager * Poker probability * Poker probability (Omaha) * Poker probability (Texas hold 'em) * Pot odds * Proebsting's paradox * Roulette * Spread betting * Charles Wells (gambler)


Coincidence (cnc)

* Bible code * Birthday paradox * Birthday problem * Index of coincidence * Spurious relationship
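The birthday problem listed above makes a compact worked example. This sketch computes the probability of at least one shared birthday among ''n'' people under the usual simplification of 365 equally likely birthdays:

```python
def birthday_collision_probability(n: int) -> float:
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely birthdays and no leap years."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365  # k-th person avoids the first k birthdays
    return 1.0 - p_distinct

# With only 23 people the probability already exceeds 1/2 (about 0.507).
```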


Algorithmics (alg)

* Algorithmic Lovász local lemma * Box–Muller transform * Gibbs sampling * Inverse transform sampling method * Las Vegas algorithm * Metropolis algorithm * Monte Carlo method * Panjer recursion * Probabilistic Turing machine * Probabilistic algorithm * Probabilistically checkable proof * Probable prime * Stochastic programming
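Several entries above (Box–Muller transform, inverse transform sampling) are sampling algorithms. As one small illustration, a minimal sketch of the Box–Muller transform, which maps two independent Uniform(0,1) samples to two independent standard normal samples (names here are illustrative):

```python
import math
import random

def box_muller(u1: float, u2: float) -> tuple[float, float]:
    """Box–Muller transform: two independent Uniform(0,1) samples in,
    two independent standard normal samples out."""
    r = math.sqrt(-2.0 * math.log(u1))   # radius; requires u1 in (0, 1]
    theta = 2.0 * math.pi * u2           # uniformly random angle
    return r * math.cos(theta), r * math.sin(theta)

# Usage: 1.0 - random.random() lies in (0, 1], guarding the log against 0.
z1, z2 = box_muller(1.0 - random.random(), random.random())
```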


Bayesian approach (Bay)

* Bayes factor * Bayesian model comparison * Bayesian network / Mar * Bayesian probability * Bayesian programming * Bayesianism * Checking if a coin is fair * Conjugate prior * Factor graph * Good–Turing frequency estimation * Imprecise probability * Inverse probability / cnd * Marginal likelihood * Markov blanket / Mar * Posterior probability / (2:C) * Prior probability * SIPTA * Subjective logic * Subjectivism#Subjectivism in probability / hst


Financial mathematics (fnc)

* Allais paradox * Black–Scholes * Cox–Ingersoll–Ross model * Forward measure * Heston model / scl * Jump process * Jump-diffusion model * Kelly criterion * Market risk * Mathematics of bookmaking * Risk * Risk-neutral measure * Ruin theory * Sethi model * Technical analysis * Value at risk * Variance gamma process / spr * Vasicek model * Volatility (finance), Volatility


Physics (phs)

* Boltzmann factor * Brownian motion / (U:C) * Brownian ratchet * Cosmic variance * Critical phenomena * Diffusion-limited aggregation * Fluctuation theorem * Gibbs state * Information entropy * Lattice model (physics), Lattice model * Master equation / Mar (U:D) * Negative probability * Nonextensive entropy * Partition function (mathematics), Partition function * Percolation theory / rgr (L:B) * Percolation threshold / rgr * Probability amplitude * Quantum Markov chain / Mar * Quantum probability * Scaling limit * Statistical mechanics * Statistical physics * Vacuum expectation value


Genetics (gnt)

* Ewens's sampling formula * Hardy–Weinberg principle * Population genetics * Punnett square * Ronald Fisher


Stochastic process (spr)

* Anomaly time series * Arrival theorem * Beverton–Holt model * Burke's theorem * Buzen's algorithm * Disorder problem * Erlang unit * G-network * Gordon–Newell theorem * Innovation (signal processing), Innovation * Interacting particle system * Jump diffusion * M/M/1 model * M/M/c model * Mark V Shaney * Markov chain Monte Carlo * Markov switching multifractal * Oscillator linewidth * Poisson hidden Markov model * Population process * Stochastic cellular automata, Probabilistic cellular automata * Product-form solution / Mar * Quasireversibility * Queueing theory * Recurrence period density entropy * Variance gamma process / fnc * Wiener equation


Geometric probability (geo)

* Boolean model (probability theory), Boolean model * Buffon's needle * Geometric probability * Hadwiger's theorem * Integral geometry * Random coil * Stochastic geometry * Vitale's random Brunn–Minkowski inequality


Empirical findings (emp)

* Benford's law * Pareto principle
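Benford's law, listed above, has a simple closed form for the distribution of leading digits in many naturally occurring data sets; an illustrative sketch:

```python
import math

def benford_pmf(d: int) -> float:
    """Benford's law: probability that the leading digit of a number
    drawn from a Benford-distributed data set is d, for d in 1..9."""
    return math.log10(1 + 1 / d)

# The digit 1 leads about 30.1% of the time; 9 leads only about 4.6%.
```

The nine probabilities telescope to exactly 1, since their sum is log10(2/1) + log10(3/2) + ... + log10(10/9) = log10(10).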


Historical (hst)

* History of probability * Newton–Pepys problem * Problem of points * Subjectivism#Subjectivism in probability / Bay * Sunrise problem * The Doctrine of Chances


Miscellany (msc)

* B-convex space * Conditional event algebra * Error function * Goodman–Nguyen–van Fraassen algebra * List of mathematical probabilists * Nuisance variable * Probabilistic encryption * Probabilistic logic * Probabilistic proofs of non-probabilistic theorems * Pseudocount


Counters of articles

* "Core": 455 (570) * "Around": 198 (200) * "Core selected": 311 (358) * "Core others": 144 (212)

Here ''k''(''n'') means: ''n'' links to ''k'' articles. (Some articles are linked more than once.)