List of probability topics

This is a list of probability topics. It overlaps with the (alphabetical) list of statistical topics. There are also the outline of probability and the catalog of articles in probability theory. For distributions, see the list of probability distributions. For journals, see the list of probability journals. For contributors to the field, see the list of mathematical probabilists and the list of statisticians.


General aspects

* Probability
* Randomness, Pseudorandomness, Quasirandomness
* Randomization, hardware random number generator
* Random number generation
* Random sequence
* Uncertainty
* Statistical dispersion
* Observational error
* Equiprobable
** Equipossible
* Average
* Probability interpretations
* Markovian
* Statistical regularity
* Central tendency
* Bean machine
* Relative frequency
* Frequency probability
* Maximum likelihood
* Bayesian probability
* Principle of indifference
* Credal set
* Cox's theorem
* Principle of maximum entropy
* Information entropy
* Urn problems
* Extractor
* Free probability
* Exotic probability
* Schrödinger method
* Empirical measure
* Glivenko–Cantelli theorem (illustrated by the simulation sketch after this list)
* Zero–one law
** Kolmogorov's zero–one law
** Hewitt–Savage zero–one law
* Law of truly large numbers
** Littlewood's law
** Infinite monkey theorem
* Littlewood–Offord problem
* Inclusion–exclusion principle
* Impossible event
* Information geometry
* Talagrand's concentration inequality
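
The entries above are links rather than explanations, but a short simulation can make one of them concrete. The following minimal sketch, in Python, illustrates the Glivenko–Cantelli theorem: the empirical distribution function of an i.i.d. sample converges uniformly to the true distribution function as the sample grows. The standard normal base distribution, the sample sizes, and the use of NumPy are illustrative assumptions, not part of the list itself.

import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(0)  # fixed seed so the run is reproducible
for n in (100, 1_000, 10_000, 100_000):
    sample = np.sort(rng.standard_normal(n))
    ecdf_right = np.arange(1, n + 1) / n   # empirical CDF just after each order statistic
    ecdf_left = np.arange(0, n) / n        # empirical CDF just before each order statistic
    true_cdf = np.array([normal_cdf(x) for x in sample])
    # Uniform (Kolmogorov) distance between the empirical and the true CDF
    sup_dist = max(np.max(np.abs(ecdf_right - true_cdf)),
                   np.max(np.abs(ecdf_left - true_cdf)))
    print(f"n = {n:>6}: sup |F_n - F| = {sup_dist:.4f}")

The printed distance shrinks roughly like 1/sqrt(n), in line with the Dvoretzky–Kiefer–Wolfowitz inequality.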


Foundations of probability theory

* Probability theory
* Probability space
** Sample space
** Standard probability space
** Random element
*** Random compact set
** Dynkin system
* Probability axioms
* Normalizing constant
* Event (probability theory)
** Complementary event
* Elementary event
* Mutually exclusive
* Boole's inequality
* Probability density function
* Cumulative distribution function
* Law of total cumulance
* Law of total expectation
* Law of total probability
* Law of total variance
* Almost surely
* Cox's theorem
* Bayesianism
* Prior probability
* Posterior probability (see the worked update sketch after this list)
* Borel's paradox
* Bertrand's paradox
* Coherence (philosophical gambling strategy)
* Dutch book
* Algebra of random variables
* Belief propagation
* Transferable belief model
* Dempster–Shafer theory
* Possibility theory
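
To make the prior and posterior probability entries above concrete, here is a minimal, self-contained sketch of a Bayesian update over a discrete set of hypotheses about a coin's bias. The candidate biases, the uniform prior, and the observed tosses are made-up illustrative values.

# Candidate values of P(heads) and a uniform prior over them (illustrative values)
heads_prob = [0.3, 0.5, 0.7]
prior = [1 / 3, 1 / 3, 1 / 3]
observed = [1, 1, 0, 1]   # observed tosses: 1 = heads, 0 = tails

# Likelihood of the observed tosses under each hypothesis
likelihood = []
for p in heads_prob:
    l = 1.0
    for toss in observed:
        l *= p if toss == 1 else (1.0 - p)
    likelihood.append(l)

# Posterior is proportional to prior times likelihood; divide by the normalizing constant
unnormalized = [pr * li for pr, li in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]
for p, post in zip(heads_prob, posterior):
    print(f"P(heads) = {p}: posterior {post:.3f}")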


Random variables

* Discrete random variable
** Probability mass function
* Constant random variable
* Expected value
** Jensen's inequality
* Variance
** Standard deviation
** Geometric standard deviation
* Multivariate random variable
** Joint probability distribution
** Marginal distribution
** Kirkwood approximation
* Independent and identically-distributed random variables
* Statistical independence
** Conditional independence
** Pairwise independence
** Covariance
** Covariance matrix
** De Finetti's theorem
* Correlation
** Uncorrelated
** Correlation function
* Canonical correlation
* Convergence of random variables
** Weak convergence of measures
*** Helly–Bray theorem
*** Slutsky's theorem
** Skorokhod's representation theorem
** Lévy's continuity theorem
** Uniform integrability
* Markov's inequality
* Chebyshev's inequality
* Chernoff bound
* Chernoff's inequality
* Bernstein inequalities (probability theory)
** Hoeffding's inequality (compared numerically with Chebyshev's inequality in the sketch after this list)
* Kolmogorov's inequality
* Etemadi's inequality
* Chung–Erdős inequality
* Khintchine inequality
* Paley–Zygmund inequality
* Laws of large numbers
** Asymptotic equipartition property
** Typical set
** Law of large numbers
** Kolmogorov's two-series theorem
* Random field
** Conditional random field
* Borel–Cantelli lemma
* Wick product
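
Several of the tail bounds listed above can be compared numerically. The following minimal Monte Carlo sketch contrasts the empirical tail probability of a sum of independent Bernoulli(1/2) variables with the bounds from Chebyshev's inequality and Hoeffding's inequality; the parameter values, the number of trials, and the use of NumPy are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, p, t = 1_000, 0.5, 50            # n coin flips, deviation threshold t
trials = 200_000

# Each entry of `sums` is a sum S_n of n independent Bernoulli(p) variables
sums = rng.binomial(n, p, size=trials)
empirical = np.mean(np.abs(sums - n * p) >= t)   # estimate of P(|S_n - E S_n| >= t)

chebyshev = n * p * (1 - p) / t**2               # Var(S_n) / t^2
hoeffding = 2 * np.exp(-2 * t**2 / n)            # two-sided bound for [0, 1]-valued summands

print(f"empirical tail probability ~ {empirical:.5f}")
print(f"Chebyshev bound            = {chebyshev:.5f}")
print(f"Hoeffding bound            = {hoeffding:.5f}")

Both bounds hold, but the exponential Hoeffding bound comes much closer to the simulated probability than the polynomial Chebyshev bound.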


Conditional probability

* Conditioning (probability)
* Conditional expectation
* Conditional probability distribution
* Regular conditional probability
* Disintegration theorem
* Bayes' theorem (see the worked example after this list)
* de Finetti's theorem
** Exchangeable random variables
* Rule of succession
* Conditional independence
* Conditional event algebra
** Goodman–Nguyen–van Fraassen algebra
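
As a worked example of Bayes' theorem from the list above, consider the standard textbook setting of a diagnostic test; the prevalence, sensitivity, and false-positive rate below are invented illustrative numbers.

prevalence = 0.01        # P(disease)
sensitivity = 0.95       # P(positive | disease)
false_positive = 0.05    # P(positive | no disease)

# Law of total probability: overall chance of a positive result
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {posterior:.3f}")   # about 0.161

Despite the seemingly accurate test, the posterior probability is only about 16%, because the low prior probability (the prevalence) dominates the update.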


Theory of probability distributions

* Probability distribution
* Probability distribution function
* Probability density function
* Probability mass function
* Cumulative distribution function
* Quantile
* Moment (mathematics)
** Moment about the mean
** Standardized moment (sample versions are computed in the sketch after this list)
*** Skewness
*** Kurtosis
*** Locality
** Cumulant
** Factorial moment
** Expected value
*** Law of the unconscious statistician
** Second moment method
** Variance
*** Coefficient of variation
*** Variance-to-mean ratio
** Covariance function
** An inequality on location and scale parameters
** Taylor expansions for the moments of functions of random variables
** Moment problem
*** Hamburger moment problem
**** Carleman's condition
*** Hausdorff moment problem
*** Trigonometric moment problem
*** Stieltjes moment problem
* Prior probability distribution
* Total variation distance
* Hellinger distance
* Wasserstein metric
* Lévy–Prokhorov metric
** Lévy metric
* Continuity correction
* Heavy-tailed distribution
* Truncated distribution
* Infinite divisibility
* Stability (probability)
* Indecomposable distribution
* Power law
* Anderson's theorem
* Probability bounds analysis
* Probability box
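
To ground the moment-related entries above, here is a minimal sketch that estimates the first standardized moments of a sample: variance, skewness, and excess kurtosis. The exponential sample, the sample size, and the use of NumPy are illustrative assumptions; for the exponential distribution the true values are skewness 2 and excess kurtosis 6.

import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=1_000_000)

mean = x.mean()
centered = x - mean
variance = np.mean(centered**2)                            # second central moment
skewness = np.mean(centered**3) / variance**1.5            # third standardized moment
excess_kurtosis = np.mean(centered**4) / variance**2 - 3   # fourth standardized moment minus 3

print(f"mean {mean:.3f}, variance {variance:.3f}")
print(f"skewness {skewness:.3f}, excess kurtosis {excess_kurtosis:.3f}")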


Properties of probability distributions

* Central limit theorem (see the simulation sketch at the end of this section)
** Illustration of the central limit theorem
** Concrete illustration of the central limit theorem
** Berry–Esséen theorem
** De Moivre–Laplace theorem
** Lyapunov's central limit theorem
** Misconceptions about the normal distribution
** Martingale central limit theorem
** Infinite divisibility (probability)
** Method of moments (probability theory)
** Stability (probability)
** Stein's lemma
* Characteristic function (probability theory)
** Lévy continuity theorem
* Darmois–Skitovich theorem
* Edgeworth series
* Helly–Bray theorem
* Kac–Bernstein theorem
* Location parameter
* Maxwell's theorem
* Moment-generating function
** Factorial moment generating function
* Negative probability
* Probability-generating function
* Vysochanskiï–Petunin inequality
* Mutual information
* Kullback–Leibler divergence
* Le Cam's theorem
* Large deviations theory
** Contraction principle (large deviations theory)
** Varadhan's lemma
** Tilted large deviation principle
** Rate function
** Laplace principle (large deviations theory)
** Exponentially equivalent measures
** Cramér's theorem (second part)
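
The following minimal simulation sketch illustrates the central limit theorem that heads the list above: sums of i.i.d. uniform variables, once centred and scaled, have tail probabilities close to those of the standard normal distribution. The uniform base distribution, the sample sizes, and the use of NumPy are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
n, trials = 200, 50_000
u = rng.random((trials, n))                       # i.i.d. Uniform(0, 1) variables

# A Uniform(0, 1) variable has mean 1/2 and variance 1/12, so the sum of n of them
# has mean n/2 and standard deviation sqrt(n/12); normalize accordingly.
z = (u.sum(axis=1) - n / 2) / np.sqrt(n / 12)

# Compare empirical upper-tail probabilities with the standard normal values
for c, normal_tail in [(1.0, 0.1587), (2.0, 0.0228), (3.0, 0.0013)]:
    print(f"P(Z > {c}) ~ {np.mean(z > c):.4f}   (standard normal: {normal_tail})")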


Applied probability

* ''Empirical findings''
** Benford's law (checked numerically in the sketch after this list)
** Pareto principle
** Zipf's law
* Boy or Girl paradox
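
As a small numerical check of Benford's law from the empirical findings above, the sketch below tabulates the leading digits of the first 10,000 powers of 2, which are known to follow the Benford distribution log10(1 + 1/d); the cutoff of 10,000 is an arbitrary illustrative choice.

import math
from collections import Counter

leading = Counter(int(str(2 ** k)[0]) for k in range(1, 10_001))
for d in range(1, 10):
    observed = leading[d] / 10_000
    benford = math.log10(1 + 1 / d)
    print(f"leading digit {d}: observed {observed:.4f}, Benford {benford:.4f}")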


Stochastic processes

*
Adapted process In the study of stochastic processes, a stochastic process is adapted (also referred to as a non-anticipating or non-anticipative process) if information about the value of the process at a given time is available at that same time. An informal int ...
*
Basic affine jump diffusion In mathematical probability theory, a basic affine jump diffusion (basic AJD) is a stochastic process Z of the form dZ_t = \kappa(\theta - Z_t)\,dt + \sigma\sqrt{Z_t}\,dB_t + dJ_t, \qquad t \geq 0, \; Z_0 \geq 0, where B is a standard Brownian motion, and ...
*
Bernoulli process In probability and statistics, a Bernoulli process (named after Jacob Bernoulli) is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The ...
**
Bernoulli scheme In mathematics, the Bernoulli scheme or Bernoulli shift is a generalization of the Bernoulli process to more than two possible outcomes. Bernoulli schemes appear naturally in symbolic dynamics, and are thus important in the study of dynamical syst ...
*
Branching process In probability theory, a branching process is a type of mathematical object known as a stochastic process, which consists of collections of random variables indexed by some set, usually natural or non-negative real numbers. The original purpose of ...
*
Point process In statistics and probability theory, a point process or point field is a set of a random number of mathematical points randomly located on a mathematical space such as the real line or Euclidean space. Kallenberg, O. (1986). ''Random Measures'', ...
*
Chapman–Kolmogorov equation In mathematics, specifically in the theory of Markovian stochastic processes in probability theory, the Chapman–Kolmogorov equation (CKE) is an identity relating the joint probability distributions of different sets of coordinates on a stochastic ...
*
Chinese restaurant process In probability theory, the Chinese restaurant process is a discrete-time stochastic process, analogous to seating customers at tables in a restaurant. Imagine a restaurant with an infinite number of circular tables, each with infinite capacity. Cu ...
*
Coupling (probability) In probability theory, coupling is a proof technique that allows one to compare two unrelated random variables (distributions) X and Y by creating a random vector W whose marginal distributions correspond to X and Y respectively. The choice of W is ge ...
*
Ergodic theory Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems; it is the study of ergodicity. In this context, "statistical properties" refers to properties which are expressed through the behav ...
** Maximal ergodic theorem
**
Ergodic (adjective) In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies th ...
*
Galton–Watson process The Galton–Watson process, also called the Bienaymé-Galton-Watson process or the Galton-Watson branching process, is a branching stochastic process arising from Francis Galton's statistical investigation of the extinction of family names. The ...
*
Gauss–Markov process Gauss–Markov stochastic processes (named after Carl Friedrich Gauss and Andrey Markov) are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes. A stationary Gauss–Markov process is unique up to r ...
*
Gaussian process In probability theory and statistics, a Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution. The di ...
**
Gaussian random field In statistics, a Gaussian random field (GRF) is a random field involving Gaussian probability density functions of the variables. A one-dimensional GRF is also called a Gaussian process. An important special case of a GRF is the Gaussian free fi ...
**
Gaussian isoperimetric inequality The Gaussian isoperimetric inequality, proved by Vladimir Sudakov and Boris Tsirelson and independently by Christer Borell, states that among all sets of a given Gaussian measure in Euclidean space, half-spaces have the minimal Gaussian boundary measure ...
** Large deviations of Gaussian random functions
*
Girsanov's theorem In probability theory, Girsanov's theorem or the Cameron-Martin-Girsanov theorem explains how stochastic processes change under changes in measure. The theorem is especially important in the theory of financial mathematics as it explains how to ...
* Hawkes process
*
Increasing process An increasing process is a stochastic process (X_t)_{t \geq 0} where the random variables X_t which make up the process are increasing almost surely, i.e. with probability 1 ...
*
Itô's lemma In mathematics, Itô's lemma (or Itô's formula) is an identity used in Itô calculus to find the differential of a time-dependent function of a stochastic process; it is the stochastic-calculus counterpart of the chain rule ...
*
Jump diffusion Jump diffusion is a stochastic process that involves jumps and diffusion. It has important applications in magnetic reconnection, coronal mass ejections, condensed matter physics, and pattern theory and computationa ...
*
Law of the iterated logarithm In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Ya. Khinchin (1924). Another state ...
*
Lévy flight A Lévy flight is a random walk in which the step-lengths have a stable distribution, a probability distribution that is heavy-tailed; when defined in a space of dimension greater than one, the steps are made in isotropic random directions ...
*
Lévy process In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which disp ...
*
Loop-erased random walk In mathematics, loop-erased random walk is a model for a random simple path with important applications in combinatorics, physics and quantum field theory. It is intimately connected to the uniform spanning tree, a model for a random tree. See al ...
*
Markov chain In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally ...
**
Examples of Markov chains This article contains examples of Markov chains and Markov processes in action ...
**
Detailed balance The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elem ...
**
Markov property In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process, which means that its future evolution is independent of its history. It is named after the Russian mathematician Andrey Ma ...
**
Hidden Markov model A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or ''hidden'') Markov process (referred to as X). An HMM requires that there be an observable process Y whose outcomes depend on the outcomes of X ...
**
Maximum-entropy Markov model In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminat ...
**
Markov chain mixing time In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady state distribution. More precisely, a fundamental result about Markov chains is that a finite state irreducible aperiodic chain h ...
*
Markov partition A Markov partition in mathematics is a tool used in dynamical systems theory, allowing the methods of symbolic dynamics to be applied to the study of hyperbolic dynamics. By using a Markov partition, the system can be made to resemble a discrete- ...
*
Markov process In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, ...
**
Continuous-time Markov process A continuous-time Markov chain (CTMC) is a continuous stochastic process in which, for each state, the process will change state according to an exponential random variable and then move to a different state as specified by the probabilities of a ...
**
Piecewise-deterministic Markov process In probability theory, a piecewise-deterministic Markov process (PDMP) is a process whose behaviour is governed by random jumps at points in time, but whose evolution is deterministically governed by an ordinary differential equation between those ...
* Martingale
**
Doob martingale In the mathematical theory of probability, a Doob martingale (named after Joseph L. Doob, also known as a Levy martingale) is a stochastic process that approximates a given random variable and has the martingale property with respect to the g ...
** Optional stopping theorem
**
Martingale representation theorem In probability theory, the martingale representation theorem states that a random variable with finite variance that is measurable with respect to the filtration generated by a Brownian motion can be written in terms of an Itô integral with res ...
**
Azuma's inequality In probability theory, the Azuma–Hoeffding inequality (named after Kazuoki Azuma and Wassily Hoeffding) gives a concentration result for the values of martingales that have bounded differences. Suppose \{X_k : k = 0, 1, 2, \ldots\} is a martingale (or super-martingale ...
**
Wald's equation In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, ...
*
Poisson process In probability theory, statistics and related fields, a Poisson point process (also known as: Poisson random measure, Poisson random point field and Poisson point field) is a type of mathematical object that consists of points ...
**
Poisson random measure Let (E, \mathcal{A}, \mu) be some measure space with \sigma-finite measure \mu. The Poisson random measure with intensity measure \mu is a family of random variables \{N_A\}_{A \in \mathcal{A}} defined on some probability space (\Omega, \mathcal{F}, \mathrm{P}) such that ...
* Population process
* Process with independent increments
*
Progressively measurable process In mathematics, progressive measurability is a property in the theory of stochastic processes. A progressively measurable process, while defined quite technically, is important because it implies the stopped process is measurable. Being progressivel ...
*
Queueing theory Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because th ...
**
Erlang unit The erlang (symbol E) is a dimensionless unit that is used in telephony as a measure of offered load or carried load on service-providing elements such as telephone circuits or telephone switching equipment. A single cord circuit has the capacit ...
*
Random walk In mathematics, a random walk, sometimes known as a drunkard's walk, is a stochastic process that describes a path that consists of a succession of random steps on some mathematical space. An elementary example of a rand ... (a simulation sketch follows this list)
* Random walk Monte Carlo
*
Renewal theory Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Instead of exponentially distributed holding times, a renewal process may have any independent and identically distributed (IID) h ...
* Skorokhod's embedding theorem
*
Stationary process In mathematics and statistics, a stationary process (also called a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose statistical properties, such as mean and variance, do not change over time. M ...
*
Stochastic calculus Stochastic calculus is a branch of mathematics that operates on stochastic processes. It allows a consistent theory of integration to be defined for integrals of stochastic processes with respect to stochastic processes. This field was created an ...
**
Itô calculus Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations. The cent ...
**
Malliavin calculus In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the mathematical field of calculus of variations from deterministic functions to stochastic processes. In particular, it allows ...
**
Stratonovich integral In stochastic processes, the Stratonovich integral or Fisk–Stratonovich integral (developed simultaneously by Ruslan Stratonovich and Donald Fisk) is a stochastic integral, the most common alternative to the Itô integral. Although the Itô in ...
*
Time series analysis In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. ...
**
Autoregressive model In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregre ...
**
Moving average model In time series analysis, the moving-average model (MA model), also known as moving-average process, is a common approach for modeling univariate time series. The moving-average model specifies that the output variable is cross-correlated with a n ...
**
Autoregressive moving average model In the statistical analysis of time series, autoregressive–moving-average (ARMA) models are a way to describe a (weakly) stationary stochastic process using autoregression (AR) and a moving average (MA), each with a polynomial. They are a too ...
** Autoregressive integrated moving average model
** Anomaly time series
* Voter model
*
Wiener process In mathematics, the Wiener process (or Brownian motion, due to its historical connection with Brownian motion, the physical process of the same name) is a real-valued continuous-time stochastic process discovered by Norbert Wiener. It is one o ...
**
Brownian motion Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). The traditional mathematical formulation of Brownian motion is that of the Wiener process, which is often called Brownian motion, even in mathematical ...
**
Geometric Brownian motion A geometric Brownian motion (GBM) (also known as exponential Brownian motion) is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion (also called a Wiener process) with drift. It ...
**
Donsker's theorem In probability theory, Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem), named after Monroe D. Donsker, is a functional extension of the central limit theorem for empirical distribution fun ...
**
Empirical process In probability theory, an empirical process is a stochastic process that characterizes the deviation of the empirical distribution function from its expectation. In mean field theory, limit theorems (as the number of objects becomes large) are con ...
**
Wiener equation A simple mathematical representation of Brownian motion, the Wiener equation, named after Norbert Wiener, assumes the current velocity of a fluid particle fluctuates randomly: \mathbf{v} = \frac{d\mathbf{x}}{dt} = g(t), where \mathbf{v} is velocity, \mathbf{x} is position, ''d/dt'' ...
** Wiener sausage
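
As a small illustration of the random walk entry above, the following Python sketch simulates simple symmetric random walks; the step count, number of paths and seed are arbitrary illustrative choices:

```python
# A minimal sketch: simulate simple symmetric random walks on the integers.
# After n steps the endpoint has mean 0 and variance n, so its sample standard
# deviation should be close to sqrt(n). All parameters are arbitrary choices.
import random
import statistics

def random_walk_endpoint(n_steps, rng):
    """Terminal position of a +/-1 simple symmetric random walk."""
    position = 0
    for _ in range(n_steps):
        position += 1 if rng.random() < 0.5 else -1
    return position

rng = random.Random(0)
n_steps, n_paths = 400, 2_000
endpoints = [random_walk_endpoint(n_steps, rng) for _ in range(n_paths)]
print("sample mean (should be near 0):   ", statistics.fmean(endpoints))
print("sample std  (should be near 20.0):", statistics.pstdev(endpoints))
```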


Geometric probability

*
Buffon's needle In probability theory, Buffon's needle problem is a question first posed in the 18th century by Georges-Louis Leclerc, Comte de Buffon: :Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the ...
*
Integral geometry In mathematics, integral geometry is the theory of measures on a geometrical space invariant under the symmetry group of that space. In more recent times, the meaning has been broadened to include a view of invariant (or equivariant) transformati ...
*
Hadwiger's theorem In integral geometry (otherwise called geometric probability theory), Hadwiger's theorem characterises the valuations on convex bodies in \R^n. It was proved by Hugo Hadwiger ...
* Wendel's theorem
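
As a sketch of the Buffon's needle entry above, the following Monte Carlo simulation estimates π from the needle-crossing frequency; the needle length, line spacing, sample size and seed are arbitrary assumptions of the example:

```python
# A minimal sketch of a Monte Carlo version of Buffon's needle: a needle of
# length l is dropped onto a floor ruled with parallel lines a distance t
# apart (l <= t). The crossing probability is 2*l/(pi*t), so the observed hit
# rate yields an estimate of pi. Sample size and seed are arbitrary choices.
import math
import random

def estimate_pi(n_drops, l=1.0, t=1.0, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_drops):
        x = rng.uniform(0.0, t / 2.0)          # distance of centre to nearest line
        theta = rng.uniform(0.0, math.pi / 2)  # acute angle with the lines
        if x <= (l / 2.0) * math.sin(theta):   # the needle crosses a line
            hits += 1
    return 2.0 * l * n_drops / (t * hits)

print(estimate_pi(200_000))  # roughly 3.14, up to Monte Carlo error
```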


Gambling Gambling (also known as betting or gaming) is the wagering of something of value ("the stakes") on a random event with the intent of winning something else of value, where instances of strategy (ga ...

*
Luck Luck is the phenomenon and belief that defines the experience of improbable events, especially improbably positive or negative ones. The naturalistic interpretation is that positive and negative events may happen at a ...
*
Game of chance A game of chance is in contrast with a game of skill. It is a game whose outcome is strongly influenced by some randomizing device. Common devices used include dice, spinning tops, playing cards, roulette wheels, numbered balls, or in the case ...
*
Odds In probability theory, odds provide a measure of the probability of a particular outcome. Odds are commonly used in gambling and statistics. For example for an event that is 40% probable, one could say that the odds are or When gambling, o ...
*
Gambler's fallacy The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the belief that, if an event (whose occurrences are independent and identically dis ...
*
Inverse gambler's fallacy The inverse gambler's fallacy, named by philosopher Ian Hacking, is a formal fallacy of Bayesian inference which is an inverse of the better known gambler's fallacy. It is the fallacy of concluding, on the basis of an unlikely outcome of a random p ...
*
Parrondo's paradox Parrondo's paradox, a paradox in game theory, describes how a combination of losing strategies can become a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996. A simple example involves two coin fl ...
* Pascal's wager
*
Gambler's ruin In statistics, gambler's ruin is the fact that a gambler playing a game with negative expected value will eventually go bankrupt, regardless of their betting system. The concept was initially stated: A persistent gambler wh ... (a formula-and-simulation sketch follows this list)
*
Poker probability In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands. Probability and gambling have been ideas since long before the invention of p ...
**
Poker probability (Omaha) In Omaha hold 'em, the probability of each type of hand can be computed by calculating the proportion of hands of that type among all possible hands under the Omaha dealing rules ...
**
Poker probability (Texas hold 'em) In Texas hold 'em, the probability of each type of hand can be computed by calculating the proportion of hands of that type among all possible combinations of hole cards and community cards ...
**
Pot odds In poker, pot odds are the ratio of the current size of the pot to the cost of a contemplated call. Pot odds are compared to the odds of winning a hand with a future card in order to estimate the call's expected value. The purpose of this is to s ...
*
Roulette Roulette (named after the French word meaning "little wheel") is a casino game which was likely developed from the Italian game Biribi. In the game, a player may choose to place a bet on a single number, various grouping ...
**
Martingale (betting system) A martingale is a class of betting strategies that originated from and were popular in 18th-century France. The simplest of these strategies was designed for a game in which the gambler wins the stake if a coin comes up heads and loses if it co ...
** The man who broke the bank at Monte Carlo
*
Lottery A lottery (or lotto) is a form of gambling that involves the drawing of numbers at random for a prize. Some governments outlaw lotteries, while others endorse it to the extent of organizing a national or state lottery. It is common to find som ...
**
Lottery machine A lottery machine is the machine used to draw the winning numbers for a lottery. Early lotteries were done by drawing numbers, or winning tickets, from a container. In the UK, numbers of winning Premium Bonds ...
**
Pachinko is a mechanical game originating in Japan that is used as an arcade game, and much more frequently for gambling. Pachinko fills a niche in Japanese gambling comparable to that of the slot machine in the West as a form of l ...
*
Coherence (philosophical gambling strategy) In decision theory, economics, and probability theory, the Dutch book arguments are a set of results showing that agents must satisfy the axioms of rational choice to avoid a kind of self-contradiction called a Dutch book. A Dutch book, some ...
*
Coupon collector's problem In probability theory, the coupon collector's problem refers to mathematical analysis of "collect all coupons and win" contests. It asks the following question: if each box of a given product (e.g., breakfast cereals) contains a coupon, and there ...
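
As a sketch of the gambler's ruin entry above, the following compares the classical ruin-probability formula with a simulation; the starting capital, target, win probability and trial count are arbitrary illustrative choices:

```python
# A minimal sketch of gambler's ruin: a gambler starts with i units, bets one
# unit per round with win probability p, and stops at 0 (ruin) or at a target
# N. The classical ruin probability is compared with a simulation. All
# parameter values are arbitrary illustrative choices.
import random

def ruin_probability(i, N, p):
    """Probability of hitting 0 before N, starting from i, win prob p per bet."""
    if abs(p - 0.5) < 1e-12:
        return 1.0 - i / N
    r = (1.0 - p) / p
    return 1.0 - (1.0 - r ** i) / (1.0 - r ** N)

def simulate_ruin(i, N, p, n_trials, seed=0):
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_trials):
        x = i
        while 0 < x < N:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / n_trials

i, N, p = 10, 20, 0.48                      # a slightly unfavourable game
print("formula:   ", ruin_probability(i, N, p))
print("simulation:", simulate_ruin(i, N, p, 10_000))
```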


Coincidence

*
Birthday paradox In probability theory, the birthday problem asks for the probability that, in a set of randomly chosen people, at least two will share the same birthday. The birthday paradox is the counterintuitive fact that only 23 people are needed for that ...
**
Birthday problem The same question as the birthday paradox above: in a set of randomly chosen people, what is the probability that at least two share a birthday? The counterintuitive answer is that only 23 people are needed for that probability to exceed 50% (a worked calculation follows this list).
*
Index of coincidence In cryptography, coincidence counting is the technique (invented by William F. Friedman) of putting two texts side-by-side and counting the number of times that identical letters appear in the same position in both texts. This count, either as a r ...
*
Bible code The Bible code (, ), also known as the Torah code, is a purported set of encoded words within a Hebrew text of the Torah that, according to proponents, has predicted significant historical events. The statistical likelihood of the Bible code a ...
*
Spurious relationship In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but '' not'' causally related, due to either coincidence or the presence of a certain third, u ...
*
Monty Hall problem The Monty Hall problem is a brain teaser, in the form of a probability puzzle, based nominally on the American television game show ''Let's Make a Deal'' and named after its original host, Monty Hall. The problem was originally posed (and solved ...
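
As the worked calculation promised in the birthday problem entry above, the following computes the shared-birthday probability directly from the product formula (assuming 365 equally likely birthdays):

```python
# A minimal sketch of the birthday problem: with 365 equally likely birthdays,
# the probability that at least two of n people share a birthday is
# 1 - (365/365) * (364/365) * ... * ((365-n+1)/365).
def p_shared_birthday(n, days=365):
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

for n in (10, 23, 50):
    print(n, round(p_shared_birthday(n), 4))  # n = 23 gives about 0.5073
```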


Algorithmics

*
Probable prime In number theory, a probable prime (PRP) is an integer that satisfies a specific condition that is satisfied by all prime numbers, but which is not satisfied by most composite numbers. Different types of probable primes have different specific co ...
*
Probabilistic algorithm (= randomised algorithm) A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performan ...
*
Monte Carlo method Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be ...
*
Las Vegas algorithm In computing, a Las Vegas algorithm is a randomized algorithm that always gives correct results; that is, it always produces the correct result or it informs about the failure. However, the runtime of a Las Vegas alg ...
*
Probabilistic Turing machine In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Tur ...
*
Stochastic programming In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain, ...
*
Probabilistically checkable proof In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is ...
*
Box–Muller transform The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a ...
*
Metropolis algorithm The Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult ... (a minimal sampler sketch follows this list)
*
Gibbs sampling In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for sampling from a specified multivariate probability distribution when direct sampling from the joint distribution is dif ...
* Inverse transform sampling method
*
Walk-on-spheres method In mathematics, the walk-on-spheres method (WoS) is a numerical probabilistic algorithm, or Monte-Carlo method, used mainly in order to approximate the solutions of some specific boundary value problem for partial differential equations (PDEs). The ...
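
As a sketch of the Metropolis algorithm entry above, here is a minimal random-walk Metropolis sampler; the standard normal target, step size, chain length and burn-in are illustrative assumptions, not part of the algorithm's definition:

```python
# A minimal sketch of a random-walk Metropolis sampler. The target is the
# standard normal density, supplied only up to a normalising constant (as is
# typical in MCMC). Step size, chain length and burn-in are arbitrary choices.
import math
import random
import statistics

def metropolis(log_target, x0, step, n_samples, seed=0):
    rng = random.Random(seed)
    x, log_px = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)            # symmetric proposal
        log_pprop = log_target(proposal)
        # Accept with probability min(1, target(proposal)/target(x)).
        if rng.random() < math.exp(min(0.0, log_pprop - log_px)):
            x, log_px = proposal, log_pprop
        samples.append(x)
    return samples

log_target = lambda x: -0.5 * x * x      # unnormalised log N(0, 1) density
chain = metropolis(log_target, x0=0.0, step=1.0, n_samples=50_000)
kept = chain[5_000:]                     # discard burn-in
print("mean (should be near 0):", statistics.fmean(kept))
print("std  (should be near 1):", statistics.pstdev(kept))
```

The successive samples of such a chain are correlated, so in practice one also monitors the acceptance rate and effective sample size; that bookkeeping is omitted here.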


Financial mathematics Mathematical finance, also known as quantitative finance and financial mathematics, is a field of applied mathematics, concerned with mathematical modeling in the Finance#Quantitative_finance, financial field. In general, there exist two separate ...

*
Risk In simple terms, risk is the possibility of something bad happening. Risk involves uncertainty about the effects/implications of an activity with respect to something that humans value (such as health, well-being, wealth, property or the environ ...
*
Value at risk Value at risk (VaR) is a measure of the risk of loss of investment/capital. It estimates how much a set of investments might lose (with a given probability), given normal market conditions, in a set time period such as a day. VaR is typically us ...
*
Market risk Market risk is the risk of losses in positions arising from movements in market variables like prices and volatility. There is no unique classification as each classification may refer to different aspects of market risk. Nevertheless, the m ...
*
Risk-neutral measure In mathematical finance, a risk-neutral measure (also called an equilibrium measure, or '' equivalent martingale measure'') is a probability measure such that each share price is exactly equal to the discounted expectation of the share price un ...
* Volatility
* SWOT analysis (Marketing)
*
Kelly criterion In probability theory, the Kelly criterion (or Kelly strategy or Kelly bet) is a formula for sizing a sequence of bets by maximizing the long-term expected value of the logarithm of wealth, which is equivalent to maximizing the long-term expected ... (a small worked example follows this list)
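
As a small worked example of the Kelly criterion entry above, the following computes the Kelly fraction for a simple b-to-1 bet and simulates betting it versus over-betting; the win probability, payoff and bet count are arbitrary illustrative choices:

```python
# A minimal sketch of the Kelly criterion for a repeated bet paying b-to-1
# with win probability p: the growth-optimal fraction of wealth to stake is
# f* = (b*p - q)/b with q = 1 - p. A short simulation contrasts betting f*
# with betting twice f*. All parameter values are arbitrary choices.
import random

def kelly_fraction(p, b):
    return (b * p - (1.0 - p)) / b

def final_wealth(fraction, p, b, n_bets, seed=0):
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(n_bets):
        stake = fraction * wealth
        wealth += stake * b if rng.random() < p else -stake
    return wealth

p, b = 0.6, 1.0                         # 60% win probability, even-money payoff
f_star = kelly_fraction(p, b)           # = 0.2
print("Kelly fraction:", f_star)
print("wealth betting f*:  ", final_wealth(f_star, p, b, 1_000))
print("wealth betting 2*f*:", final_wealth(2 * f_star, p, b, 1_000))
```

Staking more than the Kelly fraction raises the variance of wealth without raising the long-run growth rate of log-wealth, which is why over-betting tends to fall behind over many bets.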


Genetics Genetics is the study of genes, genetic variation, and heredity in organisms.Hartl D, Jones E (2005) It is an important branch in biology because heredity is vital to organisms' evolution. Gregor Mendel, a Moravian Augustinians, Augustinian ...

*
Punnett square The Punnett square is a square diagram that is used to predict the genotypes of a particular cross or breeding experiment. It is named after Reginald C. Punnett, who devised the approach in 1905. The diagram is used by biologists to determine ...
*
Hardy–Weinberg principle In population genetics, the Hardy–Weinberg principle, also known as the Hardy–Weinberg equilibrium, model, theorem, or law, states that allele and genotype frequencies in a population will remain constant from generation ... (a small calculation follows this list)
*
Ewens's sampling formula In population genetics, Ewens's sampling formula describes the probabilities associated with counts of how many different alleles are observed a given number of times in the sample. Ewens's sampling formula, introduced by Warren Ewen ...
*
Population genetics Population genetics is a subfield of genetics that deals with genetic differences within and among populations, and is a part of evolutionary biology. Studies in this branch of biology examine such phenomena as adaptation, s ...
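
As a small calculation for the Hardy–Weinberg principle entry above, the following evaluates the equilibrium genotype frequencies for an assumed allele frequency:

```python
# A minimal sketch of Hardy-Weinberg genotype frequencies for a biallelic
# locus: with allele frequencies p (A) and q = 1 - p (a), the equilibrium
# genotype frequencies are AA: p^2, Aa: 2pq, aa: q^2. The value p = 0.7 is an
# arbitrary illustrative choice.
def hardy_weinberg(p):
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

freqs = hardy_weinberg(0.7)
for genotype, frequency in freqs.items():
    print(genotype, round(frequency, 4))   # AA 0.49, Aa 0.42, aa 0.09
print(round(sum(freqs.values()), 10))      # the three frequencies sum to 1
```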


Historical

*
History of probability Probability has a dual aspect: on the one hand the likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins. The study of the former is historically old ...
*
''The Doctrine of Chances'' ''The Doctrine of Chances'' was the first textbook on probability theory, written by 18th-century French mathematician Abraham de Moivre and first published in 1718. De Moivre wrote in English because he resided in England at the time, having ...