0–9
*
1.96
*
2SLS (two-stage least squares) redirects to
instrumental variable
In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to ...
*3SLS – see
three-stage least squares
*
68–95–99.7 rule
*
100-year flood
A 100-year flood, also called a 1% flood (Holmes, R.R., Jr., and Dinicola, K. (2010), ''100-Year Flood – It's All About Chance'', U.S. Geological Survey General Information Product 106), is a flood event at a level that is reached or exceeded onc ...
A
*
A priori probability
A prior probability distribution of an uncertain quantity, simply called the prior, is its assumed probability distribution before some evidence is taken into account. For example, the prior could be the probability distribution representing the ...
*
Abductive reasoning
Abductive reasoning (also called abduction, abductive inference, or retroduction) is a form of logical inference that seeks the simplest and most likely conclusion from a set of observations. It was formulated and advanced by Ameri ...
*
Absolute deviation
In mathematics and statistics, deviation serves as a measure to quantify the disparity between an observed value of a variable and another designated value, frequently the mean of that variable. Deviations with respect to the sample mean and th ...
*
Absolute risk reduction
The risk difference (RD), excess risk, or attributable risk is the difference between the risk of an outcome in the exposed group and the unexposed group. It is computed as I_e - I_u, where I_e is the incidence in the exposed group, and I_u is t ...
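As a worked illustration with hypothetical incidences: if I_e = 0.15 in the exposed group and I_u = 0.10 in the unexposed group, then
: RD = I_e - I_u = 0.15 - 0.10 = 0.05,
an excess risk of five percentage points attributable to the exposure.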
*
Absorbing Markov chain
*
ABX test
An ABX test is a method of comparing two choices of sensory stimuli to identify detectable differences between them. A subject is presented with two known samples (sample A, the first reference, and sample B, the second reference) followed by one un ...
*
Accelerated failure time model
*
Acceptable quality limit
*
Acceptance sampling
Acceptance sampling uses statistical sampling to determine whether to accept or reject a production lot of material. It has been a common quality control technique used in industry.
It is usually done as products leave the factory, or in some ...
*
Accidental sampling
*
Accuracy and precision
Accuracy and precision are two measures of ''observational error''.
''Accuracy'' is how close a given set of measurements (observations or readings) are to their ''true value''.
''Precision'' is how close the measurements are to each other.
The ...
*
Accuracy paradox
*
Acquiescence bias
*
Actuarial science
Actuarial science is the discipline that applies mathematical and statistical methods to assess risk in insurance, pension, finance, investment and other industries and professions.
Actuaries a ...
*
Adapted process
In the study of stochastic processes, a stochastic process is adapted (also referred to as a non-anticipating or non-anticipative process) if information about the value of the process at a given time is available at that same time. An informal int ...
*
Adaptive estimator
*
Additive Markov chain
*
Additive model
In statistics, an additive model (AM) is a nonparametric regression method. It was suggested by Jerome H. Friedman and Werner Stuetzle (1981) and is an essential part of the ACE algorithm. The ''AM'' uses a one-dimensional smoother to build a ...
*
Additive smoothing
In statistics, additive smoothing, also called Laplace smoothing or Lidstone smoothing, is a technique used to smooth count data, eliminating issues caused by certain values having 0 occurrences. Given a set of observation counts \mathbf = \lang ...
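A minimal sketch of the formula in Python (the counts are made up; ''alpha'' = 1 gives classical Laplace smoothing, and the function name is illustrative):

    def additive_smoothing(counts, alpha=1.0):
        # Smoothed probability estimates (x_i + alpha) / (N + alpha * d).
        n = sum(counts)   # total number of observations N
        d = len(counts)   # number of categories
        return [(x + alpha) / (n + alpha * d) for x in counts]

    # A category with 0 occurrences still gets nonzero probability:
    print(additive_smoothing([3, 0, 1]))  # [0.5714..., 0.1428..., 0.2857...]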
*
Additive white Gaussian noise
Additive white Gaussian noise (AWGN) is a basic noise model used in information theory to mimic the effect of many random processes that occur in nature. The modifiers denote specific characteristics:
* ''Additive'' because it is added to any nois ...
*Adjusted Rand index – see
Rand index
The Rand index or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index may be defined that is adjusted for the chance ...
(subsection)
*
ADMB software
*
Admissible decision rule
In statistical decision theory, an admissible decision rule is a rule for making a decision such that there is no other rule that is always "better" than it (or at least sometimes better and never worse), in the precise sense of "better" define ...
*
Age adjustment
*
Age-standardized mortality rate
*
Age stratification
In sociology, age stratification refers to the hierarchical ranking of people into age groups within a society. Age stratification could also be defined as a system of inequalities linked to age. In Western societies, for example, both the old an ...
*
Aggregate data
Aggregate data is high-level data which is acquired by combining individual-level data. For instance, the output of an industry is an aggregate of the firms’ individual outputs within that industry. Aggregate data are applied in statistics, d ...
*
Aggregate pattern
*
Akaike information criterion
The Akaike information criterion (AIC) is an estimator of prediction error and thereby relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to ...
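In the standard notation, with ''k'' the number of estimated parameters and \hat{L} the maximized value of the likelihood function,
: \mathrm{AIC} = 2k - 2\ln(\hat{L}),
and among candidate models the one with the smallest AIC is preferred.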
*
Algebra of random variables
In statistics, the algebra of random variables provides rules for the symbolic manipulation of random variables, while avoiding delving too deeply into the mathematically sophisticated ideas of probability theory. Its symbolism allows the treat ...
*
Algebraic statistics
*
Algorithmic inference
Algorithmic inference gathers new developments in the statistical inference methods made feasible by the powerful computing devices widely available to any data analyst. Cornerstones in this field are computational learning theory, granular computin ...
*
Algorithms for calculating variance
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical insta ...
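As a sketch of one numerically stable approach, here is Welford's online algorithm in Python (one pass, no explicit sum of squares; the function name is illustrative):

    def online_variance(data):
        # Welford's online algorithm: update the mean and the sum of squared
        # deviations (m2) incrementally, avoiding catastrophic cancellation.
        n, mean, m2 = 0, 0.0, 0.0
        for x in data:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        return m2 / (n - 1) if n > 1 else float("nan")  # sample variance

    print(online_variance([4.0, 7.0, 13.0, 16.0]))  # 30.0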
*
All models are wrong
"All models are wrong" is a common aphorism and anapodoton in statistics. It is often expanded as "All models are wrong, but some are useful". The aphorism acknowledges that statistical models always fall short of the complexities of reality but ca ...
*
All-pairs testing
In computer science, all-pairs testing or pairwise testing is a combinatorial method of software testing that, for ''each pair'' of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those par ...
*
Allan variance
The Allan variance (AVAR), also known as two-sample variance, is a measure of frequency stability in clocks, oscillators and amplifiers. It is named after David W. Allan and expressed mathematically as \sigma_y^2(\tau).
The Allan deviation (ADEV ...
*
Alignments of random points
The study of alignments of random points in a plane seeks to discover subsets of points that occupy an approximately straight line within a larger set of points that are randomly placed in a planar region.
Studies have shown that such near-align ...
*
Almost surely
In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (with respect to the probability measure). In other words, the set of outcomes on which the event does not occur ha ...
*
Alpha beta filter
An alpha beta filter (also called alpha-beta filter, f-g filter or g-h filter; see Eli Brookner, ''Tracking and Kalman Filtering Made Easy'', Wiley-Interscience, 1st edition, 1998) is a simplified form of state observer for estimation, data smo ...
*
Alternative hypothesis
In statistical hypothesis testing, the alternative hypothesis is one of the proposed propositions in the hypothesis test. In general, the goal of a hypothesis test is to demonstrate that in the given condition, there is sufficient evidence supporting ...
*
Analyse-it – software
*
Analysis of categorical data
*
Analysis of covariance
Analysis of covariance (ANCOVA) is a general linear model that blends ANOVA and regression. ANCOVA evaluates whether the means of a dependent variable (DV) are equal across levels of one or more categori ...
*
Analysis of molecular variance
*
Analysis of rhythmic variance
*
Analysis of variance
Analysis of variance (ANOVA) is a family of statistical methods used to compare the means of two or more groups by analyzing variance. Specifically, ANOVA compares the amount of variation ''between'' the group means to the amount of variati ...
*
Analytic and enumerative statistical studies
*
Ancestral graph
*
Anchor test
In psychometrics, an anchor test is a common set of test items administered in combination with two or more alternative forms of the test with the aim of establishing the equivalence of the test scores on the alternative forms. The purpose of th ...
*
Ancillary statistic
In statistics, ancillarity is a property of a statistic computed on a sample dataset in relation to a parametric model of the dataset. An ancillary statistic has the same distribution regardless of the value of the parameters and thus provides no i ...
*
ANCOVA redirects to
Analysis of covariance
Analysis of covariance (ANCOVA) is a general linear model that blends ANOVA and regression. ANCOVA evaluates whether the means of a dependent variable (DV) are equal across levels of one or more categori ...
*
Anderson–Darling test
The Anderson–Darling test is a statistical test of whether a given sample of data is drawn from a given probability distribution. In its basic form, the test assumes that there are no parameters to be estimated in the distribution being tested, i ...
*
ANOVA
Analysis of variance (ANOVA) is a family of statistical methods used to compare the means of two or more groups by analyzing variance. Specifically, ANOVA compares the amount of variation ''between'' the group means to the amount of variation ''w ...
*
ANOVA on ranks
*
ANOVA–simultaneous component analysis
*
Anomaly detection
In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of ...
*
Anomaly time series
*
Anscombe transform
In statistics, the Anscombe transform, named after Francis Anscombe, is a variance-stabilizing transformation that transforms a random variable with a Poisson distribution into one with an approximately standard Gaussian distribution. The Anscom ...
*
Anscombe's quartet
*
Antecedent variable
*
Antithetic variates
In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Considering that the error in the simulated signal (using Monte Carlo methods) has a one-over square root convergence, a very large number ...
*
Approximate Bayesian computation
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters.
In all model-based statistical inference, the likel ...
*
Approximate entropy
In statistics, an approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations over time-series data. For example, consider two series of data:
: Series A: (0, 1, 0, 1, 0, 1, 0, 1, 0, ...
*
Arcsine distribution
*
Area chart
An area chart or area graph graphically displays quantitative data. It is based on the line chart. The area between axis and line is commonly emphasized with colors, textures and hatchings. Commonly one compares two or more quantities with an a ...
*
Area compatibility factor
*
ARGUS distribution
*
Arithmetic mean
In mathematics and statistics, the arithmetic mean, arithmetic average, or just the ''mean'' or ''average'' is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results fr ...
*
Armitage–Doll multistage model of carcinogenesis
The Armitage–Doll model is a statistical model of carcinogenesis, proposed in 1954 by Peter Armitage and Richard Doll, in which a series of discrete mutations result in cancer (Armitage, P. and Doll, R. (1954), "The Age ...
*
Arrival theorem
*
Artificial neural network
In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a computational model inspired by the structure and functions of biological neural networks.
A neural network consists of connected ...
*
Ascertainment bias
In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower or higher sampling probability than others. It results in a b ...
*
ASReml software
*
Association (statistics)
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statistics ...
*
Association mapping
In genetics, association mapping, also known as "linkage disequilibrium mapping", is a method of mapping quantitative trait loci (QTLs) that takes advantage of historic linkage disequilibrium to link phenotypes (observable ...
*
Association scheme
The theory of association schemes arose in statistics, in the theory of experimental design for the analysis of variance. In mathematics, association schemes belong to both algebra and combinatorics. In algebraic combinatori ...
*
Assumed mean
*
Astrostatistics
*
Asymptotic distribution
In mathematics and statistics, an asymptotic distribution is a probability distribution that is in a sense the limiting distribution of a sequence of distributions. One of the main uses of the idea of an asymptotic distribution is in providing appr ...
*
Asymptotic equipartition property
In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression.
Roughly speaking, the t ...
(information theory)
*
Asymptotic normality redirects to
Asymptotic distribution
*
Asymptotic relative efficiency redirects to
Efficiency (statistics)
In statistics, efficiency is a measure of quality of an estimator, of an experimental design, or of a hypothesis testing procedure. Essentially, a more efficient estimator needs fewer input data or observations than a less efficient one to achiev ...
*
Asymptotic theory (statistics)
In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is often assumed that the sample size may grow indefinitely; the properties of estimato ...
*
Atkinson index
The Atkinson index (also known as the Atkinson measure or Atkinson inequality measure) is a measure of income inequality developed by British economist Anthony Barnes Atkinson. The measure is useful in determining which end of the distribution cont ...
*
Attack rate
In epidemiology, the attack rate is the proportion of an at-risk population that contracts the disease during a specified time interval. It is used in hypothetical predictions and during actual outbreaks of disease. An at-risk population is defined ...
*
Augmented Dickey–Fuller test
In statistics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis depends on which version of the test is used, but is usually stationarity or trend-s ...
*
Aumann's agreement theorem
Aumann's agreement theorem states that two Bayesian agents with the same prior beliefs cannot "agree to disagree" about the probability of an event if their individual beliefs are common knowledge. In other words, if it is commonly known what eac ...
*
Autocorrelation
Autocorrelation, sometimes known as serial correlation in the discrete time case, measures the correlation of a signal with a delayed copy of itself. Essentially, it quantifies the similarity between observations of a random variable at differe ...
**
Autocorrelation plot redirects to
Correlogram
*
Autocovariance
In probability theory and statistics, given a stochastic process, the autocovariance is a function that gives the covariance of the process with itself at pairs of time points. Autocovariance is closely related to the autocorrelation of the proces ...
*
Autoregressive conditional duration
In financial econometrics, an autoregressive conditional duration (ACD, Engle and Russell (1998)) model considers irregularly spaced and autocorrelated intertrade durations. ACD is analogous to GARCH. In a continuous double auction (a common trading ...
*
Autoregressive conditional heteroskedasticity
In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time ...
*
Autoregressive fractionally integrated moving average
*
Autoregressive integrated moving average
In time series analysis used in statistics and econometrics, autoregressive integrated moving average (ARIMA) and seasonal ARIMA (SARIMA) models are generalizations of the autoregressive moving average (ARMA) model to non-stationary series and pe ...
*
Autoregressive model
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregre ...
*
Autoregressive–moving-average model
In the statistical analysis of time series, autoregressive–moving-average (ARMA) models are a way to describe a (weakly) stationary stochastic process using autoregression (AR) and a moving average (MA), each with a polynomial. They are a too ...
*
Auxiliary particle filter
*
Average
In colloquial, ordinary language, an average is a single number or value that best represents a set of data. The type of average taken as most typically representative of a list of numbers is the arithmetic mean: the sum of the numbers divided by ...
*
Average treatment effect
The average treatment effect (ATE) is a measure used to compare treatments (or interventions) in randomized experiments, evaluation of policy interventions, and medical trials. The ATE measures the difference in mean (average) outcomes between unit ...
*
Averaged one-dependence estimators
*
Azuma's inequality
In probability theory, the Azuma–Hoeffding inequality (named after Kazuoki Azuma and Wassily Hoeffding) gives a concentration result for the values of martingales that have bounded differences.
Suppose \{X_k : k = 0, 1, 2, \ldots\} is a martingale (or super-martingale ...
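Stated explicitly in the standard form: if |X_k - X_{k-1}| \le c_k almost surely, then for all positive integers N and all t > 0,
: P(X_N - X_0 \ge t) \le \exp\!\left(\frac{-t^2}{2\sum_{k=1}^{N} c_k^2}\right).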
B
*
BA model – model for a random network
*
Backfitting algorithm
*
Balance equation
In probability theory, a balance equation is an equation that describes the probability flux associated with a Markov chain.
*
Balanced incomplete block design redirects to
Block design
In combinatorial mathematics, a block design is an incidence structure consisting of a set together with a family of subsets known as ''blocks'', chosen such that number of occurrences of each element satisfies certain conditions making the co ...
*
Balanced repeated replication
*
Balding–Nichols model
*
Banburismus
Banburismus was a cryptanalytic process developed by Alan Turing at Bletchley Park in Britain during the Second World War. It was used by Bletchley Park's Hut 8 to help break German ''Kriegsmarine'' (naval) message ...
related to Bayesian networks
*
Bangdiwala's B
Bangdiwala's B statistic was created by Shrikant Bangdiwala in 1985 and is a measure of inter-rater agreement. (Bangdiwala S (1985), "A graphical test for observer agreement", Proc 45th Int Stats Institute Meeting, Amsterdam, 1, 307–308; Bangdiwala K (1 ...
*
Bapat–Beg theorem
In probability theory, the Bapat–Beg theorem gives the joint probability distribution of order statistics of independent but not necessarily identically distributed random variables in terms of the cumulative distributio ...
*
Bar chart
A bar chart or bar graph is a chart or graph that presents categorical data with rectangular bars with heights or lengths proportional to the values that they represent. The bars can be plotted vertically or horizontally. A ...
*
Barabási–Albert model
The Barabási–Albert (BA) model is an algorithm for generating random scale-free complex networks using a preferential attachment mechanism. Several natural and human-made systems, including the Internet, the Worl ...
*
Barber–Johnson diagram
*
Barnard's test
*
Barnardisation
*
Barnes interpolation
Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unevenly spread data points from a set of measurements of an unknown function in two dimensions into an analytic function of two variables. An example of a situation where ...
*
Bartlett's method
*
Bartlett's test
*
Bartlett's theorem
*
Base rate
In probability and statistics, the base rate (also known as prior probabilities) is the class of probabilities unconditional on "featural evidence" (likelihoods).
It is the proportion of individuals in a population who have a certain characte ...
*
Baseball statistics
Baseball statistics include a variety of metrics used to evaluate player and team performance in the sport of baseball.
Because the flow of a baseball game has natural breaks to it, and player activity is characteristically distinguishable ind ...
*
Basu's theorem
In statistics, Basu's theorem states that any boundedly complete and sufficient statistic is independent of any ancillary statistic. This is a 1955 result of Debabrata Basu.
It is often used in statistics as a tool to prove independence of two s ...
*
Bates distribution
*
Baum–Welch algorithm
In electrical engineering, statistical computing and bioinformatics, the Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the ...
*
Bayes classifier
In statistical classification, the Bayes classifier is the classifier having the smallest probability of misclassification of all classifiers using the same set of features.
Definition
Suppose a pair (X, Y) takes values in \mathbb{R}^d \times \{1, 2, \dots, K\}, whe ...
*
Bayes error rate
*
Bayes estimator
In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss). Equivalently, it maximizes the ...
*
Bayes factor
The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis ...
*
Bayes linear statistics
*
Bayes' rule
Bayes' theorem (alternatively Bayes' law or Bayes' rule, after Thomas Bayes) gives a mathematical rule for inverting conditional probabilities, allowing one to find the probability of a cause given its effect. For example, if the risk of develo ...
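Stated in its simplest form, for events A and B with P(B) > 0,
: P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.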
*
Bayes' theorem
Bayes' theorem (alternatively Bayes' law or Bayes' rule, after Thomas Bayes) gives a mathematical rule for inverting conditional probabilities, allowing one to find the probability of a cause given its effect. For exampl ...
**
Evidence under Bayes theorem
*
Bayesian – disambiguation
*
Bayesian average
*
Bayesian brain
*
Bayesian econometrics
*
Bayesian experimental design
*
Bayesian game
In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players may hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games mo ...
*
Bayesian inference
Bayesian inference is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and update it as more information becomes available. Fundamentally, Bayesian infer ...
*
Bayesian inference in marketing
*
Bayesian inference in phylogeny
Bayesian inference of phylogeny combines the information in the prior and in the data likelihood to create the so-called posterior probability of trees, which is the probability that the tree is correct given the data, ...
*
Bayesian inference using Gibbs sampling
Bayesian inference using Gibbs sampling (BUGS) is statistical software for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods. It was developed by David Spiegelhalter at the Medical Research Council Biostatistics Unit i ...
*
Bayesian information criterion
In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on ...
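In the standard notation, with ''k'' the number of estimated parameters, ''n'' the number of observations, and \hat{L} the maximized likelihood,
: \mathrm{BIC} = k\ln(n) - 2\ln(\hat{L}).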
*
Bayesian linear regression
Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well ...
*Bayesian model comparison – see
Bayes factor
The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis ...
*
Bayesian multivariate linear regression
*
Bayesian network
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Whi ...
*
Bayesian probability
Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quant ...
*
Bayesian search theory
*
Bayesian spam filtering
In statistics, naive (sometimes simple or idiot's) Bayes classifiers are a family of " probabilistic classifiers" which assumes that the features are conditionally independent, given the target class. In other words, a naive Bayes model assumes th ...
*
Bayesian statistics
Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability, where probability expresses a ''degree of belief'' in an event. The degree of belief may be based on prior knowledge about ...
*
Bayesian tool for methylation analysis
*
Bayesian vector autoregression
*
BCMP network queueing theory
*
Bean machine
The Galton board, also known as the Galton box or quincunx or bean machine (or incorrectly Dalton board), is a device invented by Francis Galton to demonstrate the central limit theorem, in particular that with sufficient sample size the binomi ...
*
Behrens–Fisher distribution
*
Behrens–Fisher problem
*
Belief propagation
Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for ea ...
*
Belt transect
*
Benford's law
*
Benini distribution
*
Bennett's inequality
*
Berkson error model
The Berkson error model is a description of random error (or misclassification) in measurement. Unlike classical error, Berkson error causes little or no bias in the measurement. It was proposed by Joseph Berkson in an article entitled "Are ther ...
*
Berkson's paradox
Berkson's paradox, also known as Berkson's bias, collider bias, or Berkson's fallacy, is a result in conditional probability and statistics which is often found to be counterintuitive, and hence a veridical paradox. It is a complicating factor ar ...
*
Berlin procedure
*
Bernoulli distribution
In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability p and the value 0 with pro ...
*
Bernoulli process
In probability and statistics, a Bernoulli process (named after Jacob Bernoulli) is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The ...
*
Bernoulli sampling
In the theory of finite population sampling, Bernoulli sampling is a sampling process where each element of the population is subjected to an independent Bernoulli trial which determines whether the ...
*
Bernoulli scheme
In mathematics, the Bernoulli scheme or Bernoulli shift is a generalization of the Bernoulli process to more than two possible outcomes. Bernoulli schemes appear naturally in symbolic dynamics, and are thus important in the study of dynamical syst ...
*
Bernoulli trial
In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is ...
*
Bernstein inequalities (probability theory)
In probability theory, Bernstein inequalities give bounds on the probability that the sum of random variables deviates from its mean. In the simplest case, let ''X''₁, ..., ''X''ₙ be independent Bernoulli random variables taking valu ...
*
Bernstein–von Mises theorem
In Bayesian inference, the Bernstein–von Mises theorem provides the basis for using Bayesian credible sets for confidence statements in parametric models. It states that under some conditions, a posterior distribution converges in total variat ...
*
Berry–Esseen theorem
*
Bertrand's ballot theorem
In combinatorics, Bertrand's ballot problem is the question: "In an election where candidate A receives ''p'' votes and candidate B receives ''q'' votes with ''p'' > ''q'', what is the probability that A will be strictly ahead of B throug ...
*
Bertrand's box paradox
Bertrand's box paradox is a veridical paradox in elementary probability theory. It was first posed by Joseph Bertrand in his 1889 work ''Calcul des Probabilités''.
There are three boxes:
# a box containing two gol ...
*
Bessel process
*
Bessel's correction
In statistics, Bessel's correction is the use of ''n'' − 1 instead of ''n'' in the formula for the sample variance and sample standard deviation, where ''n'' is the number of observations in a sample. This method corrects the bias in ...
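Written out, the corrected sample variance is
: s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2,
whereas dividing by ''n'' would systematically underestimate the population variance.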
*
Best linear unbiased prediction
*
Beta (finance)
In finance, the beta (or market beta or beta coefficient) is a statistic that measures the expected increase or decrease of an individual stock price in proportion to movements of the stock market as a whole. Beta can be used to indicate the c ...
*
Beta-binomial distribution
In probability theory and statistics, the beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising when the probability of success in each of a fixed or known number of Ber ...
*
Beta-binomial model
*
Beta distribution
In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] or (0, 1) in terms of two positive parameters, denoted by ''alpha'' (''α'') an ...
*
Beta function
In mathematics, the beta function, also called the Euler integral of the first kind, is a special function that is closely related to the gamma function and to binomial coefficients. It is defined by the integral
: \Beta(z_1,z_2) = \int_0^1 t^ ...
for incomplete beta function
*
Beta negative binomial distribution
In probability theory, a beta negative binomial distribution is the probability distribution of a discrete random variable X equal to the number of failures needed to get r successes in a sequence of indepe ...
*
Beta prime distribution
In probability theory and statistics, the beta prime distribution (also known as inverted beta distribution or beta distribution of the second kind; Johnson et al (1995), p 248) is an absolutely continuous probability distribution. If p \in [0, 1] ...
*
Beta rectangular distribution
*
Beverton–Holt model
*
Bhatia–Davis inequality
*
Bhattacharya coefficient redirects to
Bhattacharyya distance
*
Bias (statistics)
In the field of statistics, bias is a systematic tendency in which the methods used to gather data and estimate a sample statistic present an inaccurate, skewed or distorted (''biased'') depiction of reality. Statistical bias exists in numer ...
*
Bias of an estimator
In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called ''unbiased''. In st ...
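In symbols, for an estimator \hat{\theta} of a parameter \theta,
: \operatorname{Bias}(\hat{\theta}) = \operatorname{E}[\hat{\theta}] - \theta,
and \hat{\theta} is unbiased exactly when this quantity is zero.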
*
Biased random walk (biochemistry)
*Biased sample – see
Sampling bias
In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower or higher sampling probability than others. It results in a b ...
*
Biclustering
*
Big O in probability notation
*
Bienaymé–Chebyshev inequality
*
Bills of Mortality
Bills of mortality were the weekly mortality statistics in London, designed to monitor burials from 1592 to 1595 and then continuously from 1603. The responsibility to produce the statistics was chartered in 1611 to the Worshipful Company of Pari ...
*
Bimodal distribution
In statistics, a multimodal distribution is a probability distribution with more than one mode (i.e., more than one local peak of the distribution). These appear as distinct peaks (local maxima) in the probability density function, as shown ...
*
Binary classification
Binary classification is the task of classifying the elements of a set into one of two groups (each called ''class''). Typical binary classification problems include:
* Medical testing to determine if a patient has a certain disease or not;
* Qual ...
*
Bingham distribution
In statistics, the Bingham distribution, named after Christopher Bingham, is an antipodally symmetric probability distribution on the ''n''-sphere. It is a generalization of the Watson distribution and a special case of the Kent and Fisher–Bin ...
*
Binomial distribution
In probability theory and statistics, the binomial distribution with parameters ''n'' and ''p'' is the discrete probability distribution of the number of successes in a sequence of independent experiment ...
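In the standard notation, the probability of exactly ''k'' successes in ''n'' trials with success probability ''p'' is
: P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}, \qquad k = 0, 1, \ldots, n.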
*
Binomial proportion confidence interval
In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments (Bernoulli trials). In other words, a binomial proportion conf ...
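A minimal Python sketch of the simplest such interval, the normal-approximation (Wald) interval; the article also treats better-behaved alternatives, and the inputs below are made up:

    import math

    def wald_interval(successes, n, z=1.96):
        # Normal-approximation CI for a binomial proportion (z = 1.96 ~ 95%).
        p_hat = successes / n
        half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
        return (p_hat - half_width, p_hat + half_width)

    print(wald_interval(40, 100))  # approximately (0.304, 0.496)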
*
Binomial regression
In statistics, binomial regression is a regression analysis technique in which the response (often referred to as ''Y'') has a binomial distribution: it is the number of successes in a series of independent Bernoulli trials, where each trial ha ...
*
Binomial test
The binomial test is an exact test of the statistical significance of deviations from a theoretically expected distribution of observations into two categories using sample data.
Usage
A binomial test is a statistical hypothesis test used to deter ...
*
Bioinformatics
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, ...
*
Biometrics (statistics)
redirects to
Biostatistics
*
Biostatistics
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experimen ...
*
Biplot
Biplots are a type of exploratory graph used in statistics, a generalization of the simple two-variable scatterplot.
A biplot overlays a ''score plot'' with a ''loading plot''.
A biplot allows information on both samples and variables of a d ...
*
Birnbaum–Saunders distribution
*
Birth–death process
The birth–death process (or birth-and-death process) is a special case of continuous-time Markov process where the state transitions are of only two types: "births", which increase the state variable by one and "deaths", which decrease the stat ...
*
Bispectrum
*
Bivariate analysis
Bivariate analysis is one of the simplest forms of quantitative (statistical) analysis (Earl R. Babbie, ''The Practice of Social Research'', 12th edition, Wadsworth Publishing, 2009, pp. 436–440). It involves the analysis of two variables (oft ...
*
Bivariate von Mises distribution
In probability theory and statistics, the bivariate von Mises distribution is a probability distribution describing values on a torus. It may be thought of as an analogue on the torus of the bivariate normal distribution. The distribution belongs ...
*
Black–Scholes
*
Bland–Altman plot
A Bland–Altman plot (difference plot) in analytical chemistry or biomedicine is a method of data plotting used in analyzing the agreement between two different assays. It is identical to a Tukey mean-difference plot, the name by wh ...
*
Blind deconvolution
*
Blind experiment
In a blind or blinded experiment, information which may influence the participants of the experiment is withheld until after the experiment is complete. Good blinding can reduce or eliminate experimental biases that arise from participants' expe ...
*
Block design
In combinatorial mathematics, a block design is an incidence structure consisting of a set together with a family of subsets known as ''blocks'', chosen such that number of occurrences of each element satisfies certain conditions making the co ...
*
Blocking (statistics)
In the statistical theory of the design of experiments, blocking is the arranging of experimental units that are similar to one another in groups (blocks) based on one or more variables. These variables are chosen carefully to minimize the effect ...
*
Blumenthal's zero–one law
*
BMDP software
*
Bochner's theorem
In mathematics, Bochner's theorem (named for Salomon Bochner) characterizes the Fourier transform of a positive finite Borel measure on the real line. More generally in harmonic analysis, Bochner's theorem asserts that under Fourier transform a c ...
*
Bonferroni correction
In statistics, the Bonferroni correction is a method to counteract the multiple comparisons problem.
Background
The method is named for its use of the Bonferroni inequalities.
Application of the method to confidence intervals was described by ...
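A minimal sketch of the correction in Python (hypothetical p-values; each of the ''m'' hypotheses is tested at level alpha/m, and the function name is illustrative):

    def bonferroni_reject(p_values, alpha=0.05):
        # Reject H_i when p_i <= alpha / m, controlling the familywise error rate.
        m = len(p_values)
        return [p <= alpha / m for p in p_values]

    # With m = 4 tests the per-test threshold is 0.05 / 4 = 0.0125:
    print(bonferroni_reject([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]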
*
Bonferroni inequalities redirects to
Boole's inequality
*
Boole's inequality
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the indiv ...
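In symbols, for any finite or countable collection of events A_1, A_2, ...,
: P\!\left(\bigcup_i A_i\right) \le \sum_i P(A_i).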
*
Boolean analysis
Boolean analysis was introduced by Flament (1976) ("L'analyse booléenne de questionnaire", Paris: Mouton). The goal of a Boolean analysis is to detect deterministic dependencies between the items of a questionnaire or similar data- ...
*
Bootstrap aggregating
Bootstrap aggregating, also called bagging (from bootstrap aggregating) or bootstrapping, is a machine learning (ML) ensemble meta-algorithm designed to improve the stability and accuracy of ML classification and regression algorithms. It also ...
*
Bootstrap error-adjusted single-sample technique
*
Bootstrapping (statistics)
Bootstrapping is a procedure for estimating the distribution of an estimator by resampling (often with replacement) one's data or a model estimated from the data. Bootstrapping assigns measures of accuracy ( bias, variance, confidence interval ...
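A minimal Python sketch of the percentile-bootstrap idea for a confidence interval for the mean (the data, resample count, and level are arbitrary choices):

    import random

    def bootstrap_ci_mean(data, n_resamples=10_000, level=0.95):
        # Resample with replacement, recompute the mean each time,
        # then take empirical quantiles of the resampled means.
        means = []
        for _ in range(n_resamples):
            resample = random.choices(data, k=len(data))
            means.append(sum(resample) / len(resample))
        means.sort()
        lo = means[int(n_resamples * (1 - level) / 2)]
        hi = means[int(n_resamples * (1 + level) / 2) - 1]
        return lo, hi

    random.seed(0)
    print(bootstrap_ci_mean([2.1, 3.4, 2.9, 4.0, 3.2, 2.7]))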
*
Bootstrapping populations
Bootstrapping populations in statistics and mathematics starts with a sample \{x_1, \ldots, x_m\} observed from a random variable.
When ''X'' has a given distribution law with a set of non-fixed parameters, we d ...
*
Borel–Cantelli lemma
In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who gave statement to the lemma in the first d ...
*
Bose–Mesner algebra
In mathematics, a Bose–Mesner algebra is a special set of matrices which arise from a combinatorial structure known as an association scheme, together with the usual set of rules for combining (forming the products of) those matrices, such that th ...
*
Box–Behnken design
*
Box–Cox distribution
In statistics, the Box–Cox distribution (also known as the power-normal distribution) is the distribution of a random variable ''X'' for which the Box–Cox transformation on ''X'' follows a truncated normal distribution. It is a continuous pro ...
*
Box–Cox transformation redirects to
Power transform
*
Box–Jenkins
*
Box–Muller transform
*
Box–Pierce test
*
Box plot
In descriptive statistics, a box plot or boxplot is a method for demonstrating graphically the locality, spread and skewness groups of numerical data through their quartiles.
In addition to the box on a box plot, there can be lines (which are ca ...
*
Branching process
In probability theory, a branching process is a type of mathematical object known as a stochastic process, which consists of collections of random variables indexed by some set, usually natural or non-negative real numbers. The original purpose of ...
*
Bregman divergence
In mathematics, specifically statistics and information geometry, a Bregman divergence or Bregman distance is a measure of difference between two points, defined in terms of a strictly convex function; they form an important class of divergences. ...
*
Breusch–Godfrey test
*
Breusch–Pagan statistic redirects to
Breusch–Pagan test
*
Breusch–Pagan test
*
Brown–Forsythe test
*
Brownian bridge
A Brownian bridge is a continuous-time Gaussian process ''B''(''t'') whose probability distribution is the conditional probability distribution of a standard Wiener process ''W''(''t'') (a mathematical model of Brownian motion) subject to the con ...
*
Brownian excursion
*
Brownian motion
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). The traditional mathematical formulation of Brownian motion is that of the Wiener process, which is often called Brownian motion, even in mathematical ...
*
Brownian tree
*
Bruck–Ryser–Chowla theorem
The Bruck–Ryser–Chowla theorem is a result on the combinatorics of symmetric block designs that implies nonexistence of certain kinds of design. It states that if a (''v'', ''b'', ''r'', ''k'', ''λ'')-design exists with ''v'' = ''b'' (implying ''r'' = ''k''), then:
* if ''v'' is even, then ''k'' − ''λ'' is a squa ...
*
Burke's theorem
*
Burr distribution
In probability theory, statistics and econometrics, the Burr Type XII distribution or simply the Burr distribution is a continuous probability distribution for a non-negative random variable. It is also known as the Singh–Maddala distribution a ...
*
Business statistics
Statistics (from German ''Statistik'', "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social ...
*
Bühlmann model
In credibility theory, a branch of study in actuarial science, the Bühlmann model is a random effects model (or "variance components model" or hierarchical linear model) used to determine the appropriate premium for a group of insurance cont ...
*
Buzen's algorithm
*
BV4.1 (software)
C
*
c-chart
*
Càdlàg
In mathematics, a càdlàg, RCLL ("right continuous with left limits"), or corlol ("continuous on (the) right, limit on (the) left") function is a function defined on the real numbers (or a subset of them) that is everywhere right-continuous an ...
*
Calculating demand forecast accuracy
Demand forecasting, also known as ''demand planning and sales forecasting'' (DP&SF), involves the prediction of the quantity of goods and services that will be demanded by consumers or business customers at a future point in time. More s ...
*
Calculus of predispositions
*
Calibrated probability assessment
Calibrated probability assessments are subjective probabilities assigned by individuals who have been trained to assess probabilities in a way that historically represents their uncertainty. (S. Lichtenstein, B. Fischhoff, and L. D. Phillips, "Calib ...
*
Calibration (probability) – subjective probability; redirects to
Calibrated probability assessment
*
Calibration (statistics)
There are two main uses of the term calibration in statistics that denote special types of statistical inference problems. Calibration can mean
:*a reverse process to regression, where instead of a future dependent variable being predicted from ...
''the statistical calibration problem''
*
Cancer cluster
*
Candlestick chart
A candlestick chart (also called Japanese candlestick chart or K-line) is a style of financial chart used to describe price movements of a security, derivative, or currency.
While similar in appearance to a bar chart, each candlestick represent ...
*
Canonical analysis
In statistics, canonical analysis (from Ancient Greek κανών: bar, measuring rod, ruler) belongs to the family of regression methods for data analysis. Regression analysis quantifies a relationship between a predictor variable and a criterion variable by the coefficie ...
*
Canonical correlation
In statistics, canonical-correlation analysis (CCA), also called canonical variates analysis, is a way of inferring information from cross-covariance matrices. If we have two vectors ''X'' = (''X''₁, ..., ''X''ₙ) and ''Y'' ...
*
Canopy clustering algorithm
*
Cantor distribution
The Cantor distribution is the probability distribution whose cumulative distribution function is the Cantor function.
This distribution has neither a probability density function nor a probability mass function, since although its cumulative ...
*
Carpet plot
A carpet plot is any of a few different specific types of plot. The more common plot referred to as a carpet plot is one that illustrates the interaction between two or more independent variables and one or more dependent vari ...
*
Cartogram
A cartogram (also called a value-area map or an anamorphic map, the latter common among German-speakers) is a thematic map of a set of features (countries, provinces, etc.), in which their geographic size is altered to be proportional ...
*
Case-control redirects to
Case-control study
*
Case-control study
*
Catastro of Ensenada – a census of part of Spain
*
Categorical data
In statistics, a categorical variable (also called qualitative variable) is a variable that can take on one of a limited, and usually fixed, number of possible values, assigning each individual or other unit of observation to a ...
*
Categorical distribution
In probability theory and statistics, a categorical distribution (also called a generalized Bernoulli distribution, multinoulli distribution) is a discrete probability distribution that describes the possible results of a random variable that can ...
*
Categorical variable
In statistics, a categorical variable (also called qualitative variable) is a variable that can take on one of a limited, and usually fixed, number of possible values, assigning each individual or other unit of observation to a particular group or ...
*
Cauchy distribution
The Cauchy distribution, named after Augustin-Louis Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) ...
*
Cauchy–Schwarz inequality
The Cauchy–Schwarz inequality (also called Cauchy–Bunyakovsky–Schwarz inequality) is an upper bound on the absolute value of the inner product between two vectors in an inner product space in terms of the product of the vector norms. It is ...
*
Causal Markov condition
*
CDF-based nonparametric confidence interval
*
Ceiling effect (statistics)
The "ceiling effect" is one type of scale attenuation effect; the other scale attenuation effect is the " floor effect". The ceiling effect is observed when an independent variable no longer has an effect on a dependent variable, or the level abov ...
*
Cellular noise
*
Censored regression model
Censored regression models are a class of models in which the dependent variable is censored above or below a certain threshold. A commonly used likelihood-based model to accommodate a censored sample is the Tobit model, but quantile ...
*
Censoring (clinical trials)
*
Censoring (statistics)
In statistics, censoring is a condition in which the value of a measurement or observation is only partially known.
For example, suppose a study is conducted to measure the impact of a drug on mortality rate. In such a study, ...
*
Centering matrix
In mathematics and multivariate statistics, the centering matrix (John I. Marden, ''Analyzing and Modeling Rank Data'', Chapman & Hall, 1995, page 59) is a symmetric and idempotent matrix, which when multiplied with a vector has the same effect a ...
*
Centerpoint (geometry)
In statistics and computational geometry, the notion of centerpoint is a generalization of the median to data in higher-dimensional Euclidean space. Given a set of points in ''d''-dimensional space, a centerpoint of the set is a point such that a ...
to which
Tukey median
redirects
*
Central composite design
*
Central limit theorem
In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distributi ...
**
Central limit theorem (illustration) redirects to
Illustration of the central limit theorem
**
Central limit theorem for directional statistics
**
Lyapunov's central limit theorem
In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution. This holds even if the original variables ...
**
Martingale central limit theorem
*
Central moment
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random ...
*
Central tendency
In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution. (Weisberg H.F. (1992), ''Central Tendency and Variability'', Sage University Paper Series on Quantitative Applications in ...
*
Census
A census (from Latin ''censere'', 'to assess') is the procedure of systematically acquiring, recording, and calculating population information about the members of a given population, usually displayed in the form of stati ...
*
Cepstrum
In Fourier analysis, the cepstrum (plural ''cepstra'', adjective ''cepstral'') is the result of computing the inverse Fourier transform (IFT) of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic st ...
*
CHAID
Chi-square automatic interaction detection (CHAID) is a decision tree technique based on adjusted significance testing (Bonferroni correction, Holm-Bonferroni testing).
History
CHAID is based on a formal extension of AID (Automatic Interaction De ...
– CHi-squared Automatic Interaction Detector
*
Chain rule for Kolmogorov complexity
*
Challenge–dechallenge–rechallenge
*
Champernowne distribution
In statistics, the Champernowne distribution is a symmetric, continuous probability distribution, describing random variables that take both positive and negative values. It is a generalization of the logistic distribution that was introduced by D. ...
*
Change detection
In statistical analysis, change detection or change point detection tries to identify times when the probability distribution of a stochastic process or time series changes. In general the problem concerns both detecting whether or not a change ...
**
Change detection (GIS)
*
Chapman–Kolmogorov equation
*
Chapman–Robbins bound
*
Characteristic function (probability theory)
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is ...
*
Chauvenet's criterion
*
Chebyshev center
*
Chebyshev's inequality
In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) provides an upper bound on the probability of deviation of a random variable (with finite variance) from its mean. More specifically, the probability ...
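In symbols (the standard statement): for any random variable ''X'' with mean \mu and finite variance \sigma^2, and any k > 0,
:\Pr(|X - \mu| \ge k\sigma) \le \frac{1}{k^2},
so, for example, at most 1/9 of the probability mass can lie three or more standard deviations from the mean.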
*
Checking if a coin is biased redirects to
Checking whether a coin is fair
*
Checking whether a coin is fair
*
Cheeger bound
*
Chemometrics
Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, ap ...
*
Chernoff bound
In probability theory, a Chernoff bound is an exponentially decreasing upper bound on the tail of a random variable based on its moment generating function. The minimum of all such exponential bounds forms ''the'' Chernoff or Chernoff-Cramér boun ...
a special case of Chernoff's inequality
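In symbols (the generic moment-generating-function form): for a random variable ''X'' with moment generating function M_X(t) = \operatorname{E}[e^{tX}],
:\Pr(X \ge a) \le \inf_{t > 0} e^{-ta} M_X(t),
which follows by applying Markov's inequality to e^{tX} and optimizing over ''t''.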
*
Chernoff face
*
Chernoff's distribution
*
Chernoff's inequality
*
Chi distribution
In probability theory and statistics, the chi distribution is a continuous probability distribution over the non-negative real line. It is the distribution of the positive square root of a sum of squared independent Gaussian random variables. E ...
*
Chi-squared distribution
In probability theory and statistics, the \chi^2-distribution with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random vari ...
*
Chi-squared test
A chi-squared test (also chi-square or χ² test) is a statistical hypothesis test used in the analysis of contingency tables when the sample sizes are large. In simpler terms, this test is primarily used to examine w ...
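A minimal usage sketch with SciPy's chi2_contingency, on a made-up 2x2 table purely for illustration:

    # Sketch: chi-squared test of independence on a toy contingency table.
    from scipy.stats import chi2_contingency

    observed = [[25, 15],   # made-up counts: rows = groups,
                [10, 30]]   # columns = outcomes
    chi2, p, dof, expected = chi2_contingency(observed)
    print(chi2, p, dof)     # a small p suggests rows and columns are associated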
*
Chinese restaurant process
In probability theory, the Chinese restaurant process is a discrete-time stochastic process, analogous to seating customers at tables in a restaurant.
Imagine a restaurant with an infinite number of circular tables, each with infinite capacity. Cu ...
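A minimal simulation sketch of the seating process (the concentration parameter alpha is a free choice here):

    # Sketch: Chinese restaurant process. Customer n+1 sits at an occupied
    # table k with probability |table k| / (n + alpha), or at a new table
    # with probability alpha / (n + alpha).
    import random

    def crp(num_customers, alpha=1.0, seed=0):
        random.seed(seed)
        tables = []                      # tables[k] = customers at table k
        for n in range(num_customers):
            weights = tables + [alpha]   # existing tables, then "new table"
            k = random.choices(range(len(weights)), weights=weights)[0]
            if k == len(tables):
                tables.append(1)         # open a new table
            else:
                tables[k] += 1
        return tables

    print(crp(100))                      # a random partition of 100 customers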
*
Choropleth map
A choropleth map () is a type of statistical thematic map that uses pseudocolor, meaning color corresponding with an aggregate summary of a geographic characteristic within spatial enumeration units, such as population density or per-capita inco ...
*
Chow test
The Chow test (), proposed by econometrician Gregory Chow in 1960, is a statistical test of whether the true coefficients in two linear regressions on different data sets are equal. In econometrics, it is most commonly used in time series analysis ...
*
Chronux software
*
Circular analysis
*
Circular distribution
*
Circular error probable
Circular error probable (CEP) (''Circular Error Probable (CEP)'', Air Force Operational Test and Evaluation Center Technical Paper 6, Ver. 2, July 1987, p. 1), also circular error probability or circle of equal probability, is a measure of a weapon's ...
*
Circular statistics redirects to
Directional statistics
Directional statistics (also circular statistics or spherical statistics) is the subdiscipline of statistics that deals with directions (unit vectors in R''n''), axes (lines through the origin in R''n'') or rotations in R''n''. ...
*
Circular uniform distribution
*
Civic statistics
*
Clark–Ocone theorem
*
Class membership probabilities
*
Classic data sets
*
Classical definition of probability
The classical definition of probability or classical interpretation of probability is identified with the works of Jacob Bernoulli and Pierre-Simon Laplace. This definition is essentially a consequence of the principle of indifference. If eleme ...
*
Classical test theory
Classical test theory (CTT) is a body of related psychometric theory that predicts outcomes of psychological testing such as the difficulty of items or the ability of test-takers. It is a theory of testing based on the idea that ...
– psychometrics
*
Classification rule Given a population whose members each belong to one of a number of different sets or classes, a classification rule or classifier is a procedure by which the elements of the population set are each predicted to belong to one of the classes. A perfe ...
*
Classifier (mathematics)
When classification is performed by a computer, statistical methods are normally used to develop the algorithm.
Often, the individual observations are analyzed into a set of quantifiable properties, known variously as explanatory variables or ''f ...
*
Climate ensemble
A climate ensemble involves slightly different models of the climate system.
The ensemble average is expected to perform better than individual model runs.
There are at least five different types, to be described below. The aim of running a ...
*
Climograph
A climograph is a graphical representation of a location's basic climate. Climographs display data for two variables: monthly average temperature and monthly average precipitation. These are useful tools to quickly describe a location's climate ...
*
Clinical significance
In medicine and psychology, clinical significance is the practical importance of a treatment effect, i.e. whether it has a real, genuine, palpable, noticeable effect on daily life ...
*
Clinical study design
Clinical study design is the formulation of clinical trials and other experiments, as well as observational studies, in medical research involving human beings and involving clinical aspects, including epidemiology. It is the design of experime ...
*
Clinical trial
Clinical trials are prospective biomedical or behavioral research studies on human participants designed to answer specific questions about biomedical or behavioral interventions, including new treatments (such as novel v ...
*
Clinical utility of diagnostic tests
*
Cliodynamics
Cliodynamics is a transdisciplinary area of research that integrates cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the ''longue durée'', and the construction and ...
*
Closed testing procedure
In statistics, the closed testing procedure is a general method for performing more than one hypothesis test simultaneously.
The closed testing principle: suppose there are ''k'' hypotheses ''H''1, ..., ''Hk'' to be tested and the overall type ...
*
Cluster analysis
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the ...
*
Cluster randomised controlled trial
*
Cluster sampling
In statistics, cluster sampling is a sampling plan used when mutually homogeneous yet internally heterogeneous groupings are evident in a statistical population. It is often used in marketing research.
In this sampling plan, the total populat ...
*
Cluster-weighted modeling In data mining, cluster-weighted modeling (CWM) is an algorithm-based approach to non-linear prediction of outputs ( dependent variables) from inputs (independent variables) based on density estimation using a set of models (clusters) that are each ...
*
Clustering high-dimensional data Clustering high-dimensional data is the cluster analysis of data with anywhere from a few dozen to many thousands of dimensions. Such high-dimensional spaces of data are often encountered in areas such as medicine, where DNA microarray technology ca ...
*
CMA-ES
Covariance matrix adaptation evolution strategy (CMA-ES) is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical ...
(Covariance Matrix Adaptation Evolution Strategy)
*
Coalescent theory
Coalescent theory is a model of how alleles sampled from a population may have originated from a most recent common ancestor. In the simplest case, coalescent theory assumes no genetic recombination ...
*
Cochran's C test
*
Cochran's Q test
*
Cochran's theorem
*
Cochran–Armitage test for trend
*
Cochran–Mantel–Haenszel statistics
In statistics, the Cochran–Mantel–Haenszel test (CMH) is a test used in the analysis of stratified or matched categorical data. It allows an investigator to test the association between a binary predictor or treatment and a binary outcome su ...
*
Cochrane–Orcutt estimation
*
Coding (social sciences)
In the social sciences, coding is an analytical process in which data, in either quantitative form (such as questionnaire results) or qualitative form (such as interview transcripts), are categorized to facilitate analysis.
One purpose of coding ...
*
Coefficient of coherence redirects to
Coherence (statistics)
*
Coefficient of determination
In statistics, the coefficient of determination, denoted ''R''2 or ''r''2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).
It is a statistic used in t ...
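In symbols (one standard definition, via residual and total sums of squares): for observations y_i, fitted values \hat{y}_i and mean \bar{y},
:R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},
so R^2 = 1 for a perfect fit and R^2 = 0 for a model that predicts no better than the mean.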
*
Coefficient of dispersion
*
Coefficient of variation
In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability ...
*
Cognitive pretesting
Cognitive pretesting, or cognitive interviewing, is a field research method where data is collected on how the subject answers interview questions. It is the evaluation of a test or questionnaire before it's administered. It allows survey research ...
*
Cohen's class distribution function – a time-frequency distribution function
*
Cohen's kappa
Cohen's kappa coefficient ('κ', lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than ...
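In symbols: with p_o the observed agreement between two raters and p_e the agreement expected by chance,
:\kappa = \frac{p_o - p_e}{1 - p_e},
so \kappa = 1 indicates perfect agreement and \kappa = 0 indicates agreement no better than chance.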
*
Coherence (signal processing)
In signal processing, the coherence is a statistic that can be used to examine the relation between two signals or data sets. It is commonly used to estimate the power transfer between input and output of a linear system. If the signals are ergod ...
*
Coherence (statistics)
*
Cohort (statistics)
In statistics, epidemiology, marketing and demography, a cohort is a group of subjects who share a defining characteristic (typically subjects who experienced a common event in a selected time period, such as birth or graduatio ...
*
Cohort effect
*
Cohort study
A cohort study is a particular form of longitudinal study that samples a cohort (a group of people who share a defining characteristic, typically those who experienced a common event in a selected period, such as birth or gra ...
*
Cointegration
In econometrics, cointegration is a statistical property describing a long-term, stable relationship between two or more time series variables, even if those variables themselves are individually non-stationary (i.e., they have trends). This means ...
*
Collectively exhaustive events
In probability theory and logic, a set of events is jointly or collectively exhaustive if at least one of the events must occur. For example, when rolling a six-sided die, the events 1, 2, 3, 4, 5, and 6 are collectively exhaustive, because t ...
*
Collider (epidemiology)
In statistics and causal graphs, a variable is a collider when it is causally influenced by two or more variables. The name "collider" reflects the fact that in graphical models, the arrow heads from variables that lead into the collider appear to ...
*
Combinatorial data analysis In statistics, combinatorial data analysis (CDA) is the study of data sets where the order in which objects are arranged is important. CDA can be used either to determine how well a given combinatorial construct reflects the observed data, or to se ...
*
Combinatorial design
Combinatorial design theory is the part of combinatorial mathematics that deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of ''balance'' and/or ''symmetry''. These co ...
*
Combinatorial meta-analysis
Combinatorial meta-analysis (CMA) is the study of the behaviour of statistical properties of combinations of studies from a meta-analytic dataset (typically in social science research). In an article that develops the notion of "gravity" in the co ...
*
Common-method variance
*
Common mode failure
*
Common cause and special cause (statistics)
*
Comonotonicity In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable. In two dimensions ...
*
Comparing means
*
Comparison of general and generalized linear models
*
Comparison of statistical packages
The following tables compare general and technical information for many statistical analysis software packages.
They cover general information, operating system support, and support for various ANOVA and regression methods ...
*
Comparisonwise error rate
*
Complementary event
In probability theory, the complement of any event ''A'' is the event [not ''A''], i.e. the event that ''A'' does not occur. Robert R. Johnson, Patricia J. Kuby: ''Elementary Statistics''. Cengage Learning 2007, p. 229. The event ''A'' and ...
*
Complete-linkage clustering
Complete-linkage clustering is one of several methods of agglomerative hierarchical clustering. At the beginning of the process, each element is in a cluster of its own. The clusters are then sequentially combined into larger clusters until all ...
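A minimal sketch with SciPy's hierarchical-clustering routines, on made-up 2-D points:

    # Sketch: agglomerative clustering with the complete-linkage criterion,
    # i.e. cluster distance = maximum pairwise distance between members.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],   # made-up data
                       [5.0, 5.0], [5.1, 4.9]])
    Z = linkage(points, method='complete')     # merge history (dendrogram data)
    labels = fcluster(Z, t=2, criterion='maxclust')
    print(labels)                              # two clusters, e.g. [1 1 1 2 2]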
*
Complete spatial randomness
Complete spatial randomness (CSR) describes a point process whereby point events occur within a given study area in a completely random fashion. It is synonymous with a ''homogeneous spatial Poisson process''. O. Maimon, L. Rokach, ''Data Mining a ...
*
Completely randomized design In the design of experiments, completely randomized designs are for studying the effects of one primary factor without the need to take other nuisance variables into account. This article describes completely randomized designs that have one primary ...
*
Completeness (statistics)
In statistics, completeness is a property of a statistic computed on a sample dataset in relation to a parametric model of the dataset. It is opposed to the concept of an ancillary statistic. While an ancillary statistic contains no information ...
*
Compositional data
*
Composite bar chart
*
Compound Poisson distribution
*
Compound Poisson process
A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process and the size of the jumps is also random, with a specified probability distribution. To be precise, a compound ...
*
Compound probability distribution
In probability and statistics, a compound probability distribution (also known as a mixture distribution or contagious distribution) is the probability distribution that results from assuming that a random variable is distributed according to some ...
*Computational formula for the variance
*
Computational learning theory
In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.
Theoretical results in machine learning m ...
*
Computational statistics
Computational statistics, or statistical computing, is the study which is the intersection of statistics and computer science, and refers to the statistical methods that are enabled by using computational methods. It is the area of computational ...
*
Computer experiment
A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other simila ...
*
Computer-assisted survey information collection
*
Concomitant (statistics)
*
Concordance correlation coefficient
In statistics, the concordance correlation coefficient measures the agreement between two variables, e.g., to evaluate reproducibility or for inter-rater reliability.
Definition: for paired samples with means \mu_x, \mu_y, variances \sigma_x^2, \sigma_y^2 and Pearson correlation \rho, the concordance correlation coefficient is
:\rho_c = \frac{2\rho\sigma_x\sigma_y}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2} ...
*
Concordant pair In statistics, a concordant pair is a pair of observations, each on two variables, (X_1, Y_1) and (X_2, Y_2), having the property that
:\sgn(X_2 - X_1) = \sgn(Y_2 - Y_1),
where "sgn" refers to whether a number is positive, zero, o ...
*
Concrete illustration of the central limit theorem
*
Concurrent validity Concurrent validity is a type of evidence that can be gathered to defend the use of a test for predicting other outcomes. It is a parameter used in sociology, psychology, and other psychometric or behavioral sciences. Concurrent validity is demonst ...
*
Conditional change model
*Conditional distribution – see
Conditional probability distribution
In probability theory and statistics, the conditional probability distribution is a probability distribution that describes the probability of an outcome given the occurrence of a particular event. Given two jointly distributed random variables X ...
*
Conditional dependence
*
Conditional expectation
In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value evaluated with respect to the conditional probability distribution. If the random variable can take on ...
*
Conditional independence
In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probabi ...
*
Conditional probability
In probability theory, conditional probability is a measure of the probability of an Event (probability theory), event occurring, given that another event (by assumption, presumption, assertion or evidence) is already known to have occurred. This ...
*
Conditional probability distribution
In probability theory and statistics, the conditional probability distribution is a probability distribution that describes the probability of an outcome given the occurrence of a particular event. Given two jointly distributed random variables X ...
*
Conditional random field
Conditional random fields (CRFs) are a class of statistical modeling methods often applied in pattern recognition and machine learning and used for structured prediction. Whereas a classifier predicts a label for a single sample without consi ...
*
Conditional variance
In probability theory and statistics, a conditional variance is the variance of a random variable given the value(s) of one or more other variables.
Particularly in econometrics, the conditional variance is also known as the scedastic function or s ...
*
Conditionality principle
*
Confidence band redirects to
Confidence and prediction bands
*
Confidence distribution
*
Confidence interval
*
Confidence region In statistics, a confidence region is a multi-dimensional generalization of a confidence interval. For a bivariate normal distribution, it is an ellipse, also known as the error ellipse. More generally, it is a set of points in an ''n''-dimension ...
*
Configural frequency analysis
*
Confirmation bias
Confirmation bias (also confirmatory bias, myside bias, or congeniality bias) is the tendency to search for, interpret, favor and recall information in a way that confirms or supports one's prior beliefs or values ...
*
Confirmatory factor analysis
In statistics, confirmatory factor analysis (CFA) is a special form of factor analysis, most commonly used in social science research. Kline, R. B. (2010). ''Principles and practice of structural equation modeling (3rd ed.).'' New York, New York: Gu ...
*
Confounding
In causal inference, a confounder is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlatio ...
*
Confounding factor
In causal inference, a confounder is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlati ...
*
Confusion of the inverse
Confusion of the inverse, also called the conditional probability fallacy or the inverse fallacy, is a logical fallacy whereupon a conditional probability is equated with its inverse; that is, given two events ''A'' and ''B'', the probability of ...
*
Congruence coefficient In multivariate statistics, the congruence coefficient is an index of the similarity between factors that have been derived in a factor analysis. It was introduced in 1948 by Cyril Burt who referred to it as ''unadjusted correlation''. It is also ...
*
Conjoint analysis
Conjoint analysis is a survey-based statistical technique used in market research that helps determine how people value different attributes (feature, function, benefits) that make up an individual product or service.
The objective of conjoint an ...
**
Conjoint analysis (in healthcare)
**
Conjoint analysis (in marketing)
Conjoint analysis is a survey-based statistical technique used in market research that helps determine how people value different attributes (feature, function, benefits) that make up an individual product or service.
The objective of conjoint an ...
*
Conjugate prior
In Bayesian probability theory, if, given a likelihood function
p(x \mid \theta), the posterior distribution p(\theta \mid x) is in the same probability distribution family as the prior probability distribution p(\theta), the prior and posteri ...
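A minimal worked sketch of the classic Beta-binomial conjugacy (the prior hyperparameters and data are arbitrary illustrations):

    # Sketch: a Beta(a, b) prior on a coin's heads probability is conjugate to
    # the binomial likelihood: after k heads in n flips the posterior is
    # Beta(a + k, b + n - k), so no numerical integration is needed.
    a, b = 2.0, 2.0        # arbitrary prior pseudo-counts
    n, k = 20, 14          # made-up data: 14 heads in 20 flips
    a_post, b_post = a + k, b + (n - k)
    posterior_mean = a_post / (a_post + b_post)
    print(a_post, b_post, posterior_mean)   # Beta(16, 8), mean = 2/3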
*
Consensus-based assessment
*
Consensus clustering
Consensus clustering is a method of aggregating (potentially conflicting) results from multiple clustering algorithms. Also called cluster ensembles or aggregation of clustering (or partitions), it refers to the situation in which a number of diff ...
*
Consensus forecast
A consensus forecast is a prediction of the future created by combining several separate forecasts which have often been created using different methodologies. They are used in a number of sciences, ranging from econometrics to meteorology, and a ...
*
Conservatism (belief revision)
*
Consistency (statistics) In statistics, consistency of procedures, such as computing confidence intervals or conducting hypothesis tests, is a desired property of their behaviour as the number of items in the data set to which they are applied increases indefinitely. In p ...
*
Consistent estimator
In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter ''θ''0—having the property that as the number of data points used increases indefinitely, the result ...
*
Constant elasticity of substitution
Constant elasticity of substitution (CES) is a common specification of many production functions and utility functions ...
*
Constant false alarm rate
Constant false alarm rate (CFAR) detection is a common form of adaptive algorithm used in radar systems to detect target returns against a background of noise, clutter and interference.
In the radar receiver, the returning echoes are ...
*
Constraint (information theory)
Constraint in information theory is the degree of statistical dependence between or among variables. Garner (''Uncertainty and Structure as Psychological Concepts'', John Wiley & Sons, New York, 1962) provides a thorough discussion of ...
*
Consumption distribution
*
Contact process (mathematics)
The contact process is a stochastic process used to model population growth on the set of sites S of a graph in which occupied sites become vacant at a constant rate, while vacant sites become occupied at a rate proportional to the number of occupi ...
*
Content validity
In psychometrics, content validity (also known as logical validity) refers to the extent to which a measure represents all facets of a given construct. For example, a depression scale may lack content validity if it only assesses the affective di ...
*
Contiguity (probability theory) In probability theory, two sequences of probability measures are said to be contiguous if asymptotically they share the same support. Thus the notion of contiguity extends the concept of absolute continuity to the sequences of measures.
The concep ...
*
Contingency table
In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the multivariate frequency distribution of the variables. They are heavily used in survey research, business int ...
*
Continuity correction
*Continuous distribution – see
Continuous probability distribution
In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment. It is a mathematical descri ...
*
Continuous mapping theorem
In probability theory, the continuous mapping theorem states that continuous functions preserve limits even if their arguments are sequences of random variables. A continuous function, in Heine's definition, is such a function that maps converge ...
*
Continuous probability distribution
In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment. It is a mathematical descri ...
*
Continuous stochastic process
In probability theory, a continuous stochastic process is a type of stochastic process that may be said to be "continuous" as a function of its "time" or index parameter. Continuity is a nice property for (the sample paths of) a process to have, ...
*
Continuous-time Markov process
A continuous-time Markov chain (CTMC) is a continuous stochastic process in which, for each state, the process will change state according to an exponential random variable and then move to a different state as specified by the probabilities of a ...
*
Continuous-time stochastic process In probability theory and statistics, a continuous-time stochastic process, or a continuous-space-time stochastic process is a stochastic process for which the index variable takes a continuous set of values, as contrasted with a discrete-time proc ...
*
Contrast (statistics)
In statistics, particularly in the analysis of variance and linear regression, a contrast is a linear combination of variables ( parameters or statistics) whose coefficients add up to zero, allowing comparison of different treatments.
...
*
Control chart
Control charts are graphical plots used in production control to determine whether quality and manufacturing processes are being controlled under stable conditions. (ISO 7870-1)
The hourly status is arranged on the graph, and the occurrence of ...
*
Control event rate
*
Control limits
*
Control variate
*
Controlling for a variable
In causal models, controlling for a variable means binning data according to measured values of the variable. This is typically done so that the variable can no longer act as a confounder in, for example, an observational study or experiment.
...
*
Convergence of measures
*
Convergence of random variables
In probability theory, there exist several different notions of convergence of sequences of random variables, including ''convergence in probability'', ''convergence in distribution'', and ''almost sure convergence''. The different notions of conve ...
*
Convex hull
In geometry, the convex hull, convex envelope or convex closure of a shape is the smallest convex set that contains it. The convex hull may be defined either as the intersection of all convex sets containing a given subset of a Euclidean space, ...
*
Convolution of probability distributions
*
Convolution random number generator
In statistics and computer software, a convolution random number generator is a pseudo-random number sampling method used to generate random variates from certain classes of probability distribution. The particular advantage of this ...
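A minimal sketch of the textbook example: an Erlang(k, lam) variate generated as the sum of k independent exponentials:

    # Sketch: convolution method. If X = X_1 + ... + X_k with X_i ~ Exp(lam),
    # then X ~ Erlang(k, lam), so summing exponential draws samples the Erlang.
    import random

    def erlang(k, lam):
        return sum(random.expovariate(lam) for _ in range(k))

    random.seed(0)
    draws = [erlang(3, 2.0) for _ in range(10_000)]
    print(sum(draws) / len(draws))   # close to the Erlang mean k/lam = 1.5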
*
Conway–Maxwell–Poisson distribution
In probability theory and statistics, the Conway–Maxwell–Poisson (CMP or COM–Poisson) distribution is a discrete probability distribution named after Richard W. Conway, William L. Maxwell, and Siméon Denis Poisson that generalizes the Pois ...
*
Cook's distance
*
Cophenetic correlation In statistics, and especially in biostatistics, cophenetic correlation (more precisely, the cophenetic correlation coefficient) is a measure of how faithfully a dendrogram preserves the pairwise distances between the original unmodeled data points. ...
*
Copula (statistics)
In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Cop ...
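A minimal Gaussian-copula sketch, assuming NumPy and SciPy; the correlation value is an arbitrary choice:

    # Sketch: Gaussian copula. Draw correlated normals, then push each margin
    # through the standard normal CDF; the results have Uniform(0, 1) margins
    # but keep the dependence of the underlying normals.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    rho = 0.7                                  # arbitrary correlation
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
    u = norm.cdf(z)                            # each column is ~ Uniform(0, 1)
    print(u.min(), u.max(), np.corrcoef(u.T)[0, 1])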
*
Cornish–Fisher expansion
*
Correct sampling
During sampling of granular materials (whether airborne, suspended in liquid, aerosol, or aggregated), correct sampling is defined in Gy's sampling theory as a sampling scenario in which all particles in a population have the same probability ...
*
Correction for attenuation
*
Correlation
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statistics ...
*
Correlation and dependence
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statistics ...
*
Correlation does not imply causation
The phrase "correlation does not imply causation" refers to the inability to legitimately deduce a cause-and-effect relationship between two events or variables solely on the basis of an observed association or correlation between them. The id ...
*
Correlation clustering
Clustering is the problem of partitioning data points into groups based on their similarity. Correlation clustering provides a method for clustering a set of objects into the optimum number of clusters without specifying that number in advance.
D ...
*
Correlation function
A correlation function is a function that gives the statistical correlation between random variables, contingent on the spatial or temporal distance between those variables. If one considers the correlation function between random variables ...
**
Correlation function (astronomy)
**
Correlation function (quantum field theory)
In quantum field theory, correlation functions, often referred to as correlators or Green's functions, are vacuum expectation values of time-ordered products of field operators. They are a key object of study in quantum field theory where the ...
**
Correlation function (statistical mechanics)
In statistical mechanics, the correlation function is a measure of the order in a system, as characterized by a mathematical correlation function. Correlation functions describe how microscopic variables, such as spin and density, at different p ...
*
Correlation inequality
*
Correlation ratio In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample. The measure is defined as the ''ratio'' of tw ...
*
Correlogram
*
Correspondence analysis
Correspondence analysis (CA) is a multivariate statistical technique proposed by Herman Otto Hartley (Hirschfeld) and later developed by Jean-Paul Benzécri. It is conceptually similar to principal component analysis, but applies to categorical ...
*
Cosmic variance
In cosmology, cosmic variance is the statistical uncertainty inherent in observations of the universe at extreme distances ...
*
Cost-of-living index
A cost-of-living index is a theoretical price index that measures relative cost of living over time or regions. It is an index that measures differences in the price of goods and services, and allows for substitutions with other items as pric ...
*
Count data
In statistics, count data is a type of data in which the observations can take only non-negative integer values that arise from counting rather than ranking or measuring on a continuous scale ...
*
Counternull
*
Counting process
A counting process is a stochastic process with values that are non-negative integers and non-decreasing in time, recording the number of events that have occurred up to each time ...
*
Covariance
In probability theory and statistics, covariance is a measure of the joint variability of two random variables.
The sign of the covariance, therefore, shows the tendency in the linear relationship between the variables. If greater values of one ...
*
Covariance and correlation
In probability theory and statistics, the mathematical concepts of covariance and correlation are very similar. Both describe the degree to which two random variables or sets of random variables tend to deviate from their expected values in sim ...
*
Covariance intersection
*
Covariance matrix
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of ...
*
Covariance function
In probability theory and statistics, the covariance function describes how much two random variables change together (their ''covariance'') with varying spatial or temporal separation. For a random field or stochastic process ''Z''(''x'') on a dom ...
*
Covariate
A variable is considered dependent if it depends on (or is hypothesized to depend on) an independent variable. Dependent variables are studied under the supposition or demand that they depend, by some law or rule (e.g., by a mathematical function ...
*
Cover's theorem
*
Coverage probability
In statistical estimation theory, the coverage probability, or coverage for short, is the probability that a confidence interval or confidence region will include the true value (parameter) of interest.
It can be defined as the proportion of i ...
*
Cox process
In probability theory, a Cox process, also known as a doubly stochastic Poisson process, is a point process which is a generalization of a Poisson process where the intensity varies across the underlying mathematical space (often space or time) ...
*
Cox's theorem
Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates. This derivation justifies the so-called "logical" interpretation of probability, as the laws of pr ...
*
Cox–Ingersoll–Ross model
In mathematical finance, the Cox–Ingersoll–Ross (CIR) model describes the evolution of interest rates. It is a type of "one factor model" (short-rate model) as it describes interest rate movements as driven by only one source of market risk. T ...
*
Cramér–Rao bound
In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic (fixed, though unknown) parameter. The result is named in honor of Harald Cramér and Calyampudi Radhakrishna Rao, but has also been d ...
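In symbols (the scalar, unbiased case): for an unbiased estimator \hat{\theta} of \theta,
:\operatorname{var}(\hat{\theta}) \ge \frac{1}{I(\theta)}, \qquad I(\theta) = \operatorname{E}\left[\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2\right],
where I(\theta) is the Fisher information.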
*
Cramér–von Mises criterion
*
Cramér's decomposition theorem
*
Cramér's theorem (large deviations)
*
Cramér's V
In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φ''c'') is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic an ...
*
Craps principle
*
Credal set
*
Credible interval
In Bayesian statistics, a credible interval is an interval used to characterize a probability distribution. It is defined such that an unobserved parameter value has a particular probability \gamma to fall within it. For example, in an experime ...
*
Cricket statistics
Cricket is a sport that generates a variety of statistics.
Statistics are recorded for each player during a match, and aggregated over a career. At the professional level, statistics for Test cricket, One Day International (ODI), and first-class ...
*
Crime statistics
Crime statistics refer to systematic, quantitative results about crime, as opposed to crime news or anecdotes. Notably, crime statistics can be the result of two rather different processes:
* scientific research, such as criminological studies, vi ...
*
Critical region
redirects to
Statistical hypothesis testing
A statistical hypothesis test is a method of statistical inference used to decide whether the data provide sufficient evidence to reject a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. T ...
*
Cromwell's rule
Cromwell's rule, named by statistician Dennis Lindley, states that the use of prior probabilities of 1 ("the event will definitely occur") or 0 ("the event will definitely not occur") should be avoided, except when applied to statements that ar ...
*
Cronbach's α
*
Cross-correlation
In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a ''sliding dot product'' or ''sliding inner-product''. It is commonly used f ...
*
Cross-covariance
In probability and statistics, given two stochastic processes \{X_t\} and \{Y_t\}, the cross-covariance is a function that gives the covariance of one process with the other at pairs of time points. With the usual notation \operatorname{E} for th ...
*
Cross-entropy method
*
Cross-sectional data
In statistics and econometrics, cross-sectional data is a type of data collected by observing many subjects (such as individuals, firms, countries, or regions) at a single point or period of time. Analysis of cross-sectional data usually consists ...
*
Cross-sectional regression In statistics and econometrics, a cross-sectional regression is a type of regression in which the explained and explanatory variables are all associated with the same single period or point i ...
*
Cross-sectional study
In statistics and econometrics, cross-sectional data is a type of data collected by observing many subjects (such as individuals, firms, countries, or regions) at a single point or period of time. Analysis of cross-sectional data usually consists ...
*
Cross-spectrum
*
Cross tabulation
In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the multivariate frequency distribution of the variables. They are heavily used in survey research, business int ...
*
Cross-validation (statistics)
Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to ...
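A minimal k-fold sketch in plain NumPy; fit and score here are hypothetical callables standing in for any estimator and evaluation metric:

    # Sketch: k-fold cross-validation. Hold out each fold in turn, train on
    # the remaining k-1 folds, and evaluate on the held-out fold.
    import numpy as np

    def kfold_scores(X, y, fit, score, k=5, seed=0):
        idx = np.random.default_rng(seed).permutation(len(X))
        folds = np.array_split(idx, k)
        scores = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            model = fit(X[train], y[train])          # user-supplied training
            scores.append(score(model, X[test], y[test]))
        return scores    # average these for the cross-validated estimate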
*
Crossover study
In medicine, a crossover study or crossover trial is a longitudinal study in which subjects receive a sequence of different treatments (or exposures). While crossover studies can be observational studies, many important crossover studies are c ...
*
Crystal Ball function – a probability distribution
*
Cumulant
In probability theory and statistics, the cumulants of a probability distribution are a set of quantities that provide an alternative to the '' moments'' of the distribution. Any two probability distributions whose moments are identical will have ...
*
Cumulant generating function
redirects to
cumulant
*
Cumulative accuracy profile
*
Cumulative distribution function
In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable X, or just distribution function of X, evaluated at x, is the probability that X will take a value less than or equal to x.
Ever ...
*
Cumulative frequency analysis
*
Cumulative incidence
In epidemiology, incidence reflects the number of new cases of a given medical condition in a population within a specified period of time.
Incidence proportion (IP), also known as cumulative incidence, is defined as the p ...
*
Cunningham function
*
CURE data clustering algorithm
*
Curve fitting
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is ...
*
CUSUM
In statistical quality control, the CUSUM (or cumulative sum control chart) is a sequential analysis technique developed by E. S. Page of the University of Cambridge. It is typically used for monitoring change detecti ...
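A minimal one-sided CUSUM sketch (the target, slack k and threshold h are tuning choices, and restarting after an alarm is one common convention):

    # Sketch: upper CUSUM. S_t = max(0, S_{t-1} + x_t - target - k);
    # signal an upward mean shift when S_t exceeds the threshold h.
    def cusum_upper(xs, target, k, h):
        s, alarms = 0.0, []
        for t, x in enumerate(xs):
            s = max(0.0, s + x - target - k)
            if s > h:
                alarms.append(t)   # change signalled at time t
                s = 0.0            # restart after an alarm
        return alarms

    data = [0.1, -0.2, 0.0, 1.2, 1.0, 1.4, 1.1]           # made-up observations
    print(cusum_upper(data, target=0.0, k=0.5, h=1.5))    # -> [5]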
*
Cuzick–Edwards test
In statistics, the Cuzick–Edwards test is a significance test whose aim is to detect the possible clustering of sub-populations within a clustered or non-uniformly-spread overall population. Possible applications of the test include examining the ...
*
Cyclostationary process
A cyclostationary process is a signal having statistical properties that vary cyclically with time.
A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City c ...
D
*
d-separation
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Whi ...
*
D/M/1 queue
*
D'Agostino's K-squared test
*
Dagum distribution
The Dagum distribution (or Mielke Beta-Kappa distribution) is a continuous probability distribution defined over positive real numbers. It is named after Camilo Dagum, who proposed it in a series of papers in the 1970s. The Dagum distribution ar ...
*
DAP open source software
*
Data analysis
Data analysis is the process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Da ...
*
Data assimilation
Data assimilation refers to a large group of methods that update information from numerical computer models with information from observations. Data assimilation is used to update model states, model trajectories over time, model parameters, and ...
*
Data binning
Data binning, also called data discrete binning or data bucketing, is a data pre-processing technique used to reduce the effects of minor observation errors. The original data values which fall into a given small interval, a '' bin'', are replace ...
*
Data classification (business intelligence)
*
Data cleansing
Data cleansing or data cleaning is the process of identifying and correcting (or removing) corrupt, inaccurate, or irrelevant records from a dataset, table, or database. It involves detecting incomplete, incorrect, or inaccurate parts of the dat ...
*
Data clustering
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the analyst) to each o ...
*
Data collection
Data collection or data gathering is the process of gathering and measuring information on targeted variables in an established system, which then enables one to answer relevant questions and evaluate outcomes. Data collection is a research com ...
*
Data Desk software
*
Data dredging
Data dredging, also known as data snooping or ''p''-hacking is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing and understating the risk of false positives. Th ...
*
Data fusion
*
Data generating process
*
Data mining
Data mining is the process of extracting and finding patterns in massive data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and ...
*
Data reduction Data reduction is the transformation of numerical or alphabetical digital information derived empirically or experimentally into a corrected, ordered, and simplified form. The purpose of data reduction can be two-fold: reduce the number of data rec ...
*
Data point
In statistics, a unit of observation is the unit described by the data that one analyzes. A study may treat groups as a unit of observation with a country as the unit of analysis, drawing conclusions on group characteristics from data collected a ...
*
Data quality assurance
Data quality assurance is the process of profiling data to discover inconsistencies and other anomalies, and performing data cleansing activities (e.g. removing outliers or filling in missing values) to improve data quality ...
*
Data set
A data set (or dataset) is a collection of data. In the case of tabular data, a data set corresponds to one or more database tables, where every column of a table represents a particular variable ...
*
Data-snooping bias
*
Data stream clustering
*
Data transformation (statistics)
In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point ''zi'' is replaced with the transformed value ''yi'' = ''f''(''zi''), where ''f'' is a functi ...
*
Data visualization
Data and information visualization (data viz/vis or info viz/vis) is the practice of designing and creating graphic or visual representations of a large amount of complex quantitative and qualitative data and i ...
*
DataDetective software
*
Dataplot software
*
Davies–Bouldin index
The Davies–Bouldin index (DBI), introduced by David L. Davies and Donald W. Bouldin in 1979, is a metric for evaluating clustering algorithms. This is an internal evaluation scheme, where the validation of how well the clustering has been d ...
*
Davis distribution
*
De Finetti's game
*
De Finetti's theorem
In probability theory, de Finetti's theorem states that exchangeable observations are conditionally independent relative to some latent variable. An epistemic probability distribution could ...
*
DeFries–Fulker regression
*
de Moivre's law
*
De Moivre–Laplace theorem
In probability theory, the de Moivre–Laplace theorem, which is a special case of the central limit theorem, states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions. In particul ...
*
Decision boundary
In a statistical-classification problem with two classes, a decision boundary or decision surface is a hypersurface that partitions the underlying vector space into two sets, one for each class. The classifier will classify all the poin ...
*
Decision theory
Decision theory or the theory of rational choice is a branch of probability, economics, and analytic philosophy that uses expected utility and probability to model how individuals would behave ratio ...
*
Decomposition of time series
The decomposition of time series is a statistical task that deconstructs a time series into several components, each representing one of the underlying categories of patterns. There are two principal types of decomposition, which are outlined belo ...
*
Degenerate distribution
In probability theory, a degenerate distribution on a measure space (E, \mathcal{A}, \mu) is a probability distribution whose support is a null set with respect to \mu. For instance, in the ''n''-dimensional space endowed with the Lebesgue measure, an ...
*
Degrees of freedom (statistics)
In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.
Estimates of statistical parameters can be based upon different amounts of information or data. The number of i ...
*
Delaporte distribution
*
Delphi method The Delphi method or Delphi technique (also known as Estimate-Talk-Estimate or ETE) is a structured communication technique or method, originally developed as a systematic, interactive forecasting method that relies on a panel of experts. Delphi ...
*
Delta method
In statistics, the delta method is a method of deriving the asymptotic distribution of a random variable. It is applicable when the random variable being considered can be defined as a differentiable function of a random variable which is Asymptoti ...
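In symbols (the univariate first-order case): if \sqrt{n}\,(X_n - \mu) \to N(0, \sigma^2) in distribution and ''g'' is differentiable at \mu with g'(\mu) \ne 0, then
:\sqrt{n}\,\bigl(g(X_n) - g(\mu)\bigr) \to N\bigl(0, \sigma^2\, g'(\mu)^2\bigr).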
*
Demand forecasting
Demand forecasting, also known as ''demand planning and sales forecasting'' (DP&SF), involves the prediction of the quantity of goods and services that will be demanded by consumers or business customers at a future point in time. More specifical ...
*
Deming regression
In statistics, Deming regression, named after W. Edwards Deming, is an errors-in-variables model that tries to find the line of best fit for a two-dimensional data set. It differs from the simple linear regression in that it accounts for errors ...
*
Demographics
Demography () is the statistical study of human populations: their size, composition (e.g., ethnic group, age), and how they change through the interplay of fertility (births), mortality (deaths), and migration.
Demographic analysis examin ...
*
Demography
Demography () is the statistical study of human populations: their size, composition (e.g., ethnic group, age), and how they change through the interplay of fertility (births), mortality (deaths), and migration.
Demographic analysis examine ...
**
Demographic statistics
*
Dendrogram
A dendrogram is a diagram representing a tree graph. This diagrammatic representation is frequently used in different contexts:
* in hierarchical clustering, it illustrates the arrangement of the clusters produced by ...
*
Density estimation
In statistics, probability density estimation or simply density estimation is the construction of an estimate, based on observed data, of an unobservable underlying probability density function. The unobservable density function is thought o ...
*
Dependent and independent variables
A variable is considered dependent if it depends on (or is hypothesized to depend on) an independent variable. Dependent variables are studied under the supposition or demand that they depend, by some law or rule (e.g., by a mathematical function ...
*
Descriptive research
Descriptive research is used to describe characteristics of a population or phenomenon being studied. It does not answer questions about how/when/why the characteristics occurred. Rather it addresses the "what" question (what are the characterist ...
*
Descriptive statistics
A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and an ...
*
Design effect
*
Design matrix
In statistics and in particular in regression analysis, a design matrix, also known as model matrix or regressor matrix and often denoted by X, is a matrix of values of explanatory variables of a set of objects. Each row represents an individual o ...
*
Design of experiments
The design of experiments (DOE), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. ...
**
The Design of Experiments
''The Design of Experiments'' is a 1935 book by the English statistician Ronald Fisher about the design of experiments and is considered a foundational work in experimental design. Among other contributions, the book introduced the concept of th ...
(book by Fisher)
*
Detailed balance
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elem ...
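In symbols, for a Markov chain with transition probabilities P_{ij} and stationary distribution \pi, detailed balance holds when
:\pi_i P_{ij} = \pi_j P_{ji} \quad \text{for all } i, j,
i.e. the probability flux between every pair of states balances at equilibrium.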
*
Detection theory
Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (c ...
*
Determining the number of clusters in a data set
*
Detrended correspondence analysis
Detrended correspondence analysis (DCA) is a multivariate statistical technique widely used by ecologists to find the main factors or gradients in large, species-rich but usually sparse data matrices that typify community ecol ...
*
Detrended fluctuation analysis
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analysing time series that appear to be long-memory proc ...
*
Deviance (statistics)
In statistics, deviance is a goodness-of-fit statistic for a statistical model; it is often used for statistical hypothesis testing and generalizes the use of the sum of squared residuals in ordinary least squares to models fitted by maximum likelihood ...
*
Deviance information criterion
The deviance information criterion (DIC) is a hierarchical modeling generalization of the Akaike information criterion (AIC). It is particularly useful in Bayesian model selection problems where the posterior distributions of the models have been ...
*
Deviation (statistics)
In mathematics and statistics, deviation serves as a measure to quantify the disparity between an observed value of a variable and another designated value, frequently the mean of that variable. Deviations with respect to the sample mean and the ...
*
Deviation analysis (disambiguation)
*
DFFITS
In statistics, DFFIT and DFFITS ("difference in fit(s)") are diagnostics meant to show how influential a point is in a linear regression, first proposed in 1980.
DFFIT is the change in the predicted value for a point, obtained when that point is ...
a regression diagnostic
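A brute-force sketch of the diagnostic: refit the regression without each point and standardize the change in that point's fitted value. The data are synthetic, with one point nudged to be influential.

```python
# Minimal DFFITS sketch via leave-one-out refits.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 20)
y = 1.0 + 0.5 * x + rng.normal(0, 0.5, 20)
y[15] += 3.0                               # make one point influential
X = np.column_stack([np.ones_like(x), x])

H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

n, p = X.shape
dffits = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    s_i = np.sqrt(np.sum((y[keep] - X[keep] @ beta_i) ** 2) / (n - 1 - p))
    dffits[i] = (X[i] @ beta_full - X[i] @ beta_i) / (s_i * np.sqrt(H[i, i]))

print("largest |DFFITS| at point", np.argmax(np.abs(dffits)))  # 15
```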
*
Diagnostic odds ratio
*
Dickey–Fuller test
*
Difference in differences
Difference in differences (DID or DD) is a statistical technique used in econometrics and quantitative research in the social sciences that attempts to mimic an experimental research design using observational study data, by studying the differe ...
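In the simplest two-group, two-period case the estimate reduces to arithmetic on group means; the numbers below are made up for illustration.

```python
# Minimal 2x2 difference-in-differences sketch: the treated group's change
# minus the control group's change.
treated_before, treated_after = 10.0, 14.0
control_before, control_after = 9.0, 11.0

did = (treated_after - treated_before) - (control_after - control_before)
print("DID estimate of the treatment effect:", did)   # 4 - 2 = 2
```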
*
Differential entropy
Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy (a measure of average surprisal) of a random variable, to continu ...
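A minimal sketch comparing a Monte Carlo estimate of h(X) = -E[log f(X)] with the Gaussian closed form 0.5 * log(2 * pi * e * sigma^2); sigma is an arbitrary illustrative value.

```python
# Minimal sketch: differential entropy of a normal variable, estimated by
# averaging -log f(X) over samples, versus the closed form.
import numpy as np

sigma = 2.0
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, 100_000)

log_f = -0.5 * np.log(2 * np.pi * sigma**2) - x**2 / (2 * sigma**2)
print("Monte Carlo:", -log_f.mean())
print("closed form:", 0.5 * np.log(2 * np.pi * np.e * sigma**2))
```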
*
Diffusion process
In probability theory and statistics, diffusion processes are a class of continuous-time Markov processes with almost surely continuous sample paths. Diffusion processes are stochastic in nature and hence are used to model many real-life stochastic sy ...
*
Diffusion-limited aggregation
Diffusion-limited aggregation (DLA) is the process whereby particles undergoing a random walk due to Brownian motion cluster together to form aggregates of such particles. This theory, proposed by T.A. Witten Jr. and L.M. Sander in 1981, is ap ...
*
Dimension reduction
Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally ...
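A minimal sketch using principal component analysis via the SVD, one standard linear method; the 3-D data are synthetic and essentially 2-dimensional by construction.

```python
# Minimal sketch: project 3-D data onto its top two principal components.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 2))
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])   # embed 2D into 3D
X = Z @ A.T + 0.01 * rng.standard_normal((200, 3))

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_low = Xc @ Vt[:2].T            # 200 x 2 low-dimensional coordinates
explained = S[:2] ** 2 / np.sum(S ** 2)
print("variance explained by 2 components:", explained.sum())  # ~1.0
```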
*
Dilution assay
*
Direct relationship
In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio. The ratio is called ''coefficient of proportionality'' (or ''proportionality c ...
*
Directional statistics
Directional statistics (also circular statistics or spherical statistics) is the subdiscipline of statistics that deals with directions (unit vectors in Euclidean space, R''n''), axes ( lines through the origin in R''n'') or rotations in R''n''. ...
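A minimal example of why directions need special treatment: averaging 350 and 10 degrees arithmetically gives 180, but the circular mean, computed via unit vectors, gives the sensible answer of 0.

```python
# Minimal sketch: the circular mean of a set of angles.
import math

angles_deg = [350.0, 10.0]
s = sum(math.sin(math.radians(a)) for a in angles_deg)
c = sum(math.cos(math.radians(a)) for a in angles_deg)
mean_deg = math.degrees(math.atan2(s, c)) % 360.0
print("circular mean:", mean_deg)   # 0.0 (up to rounding)
```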
*
Dirichlet distribution
In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted \operatorname(\boldsymbol\alpha), is a family of continuous multivariate probability distributions parameterized by a vector of pos ...
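A minimal sketch drawing Dirichlet samples via the standard normalized-gamma construction; the alpha vector is an arbitrary illustrative choice.

```python
# Minimal sketch: Dirichlet(alpha) samples as normalized independent
# Gamma(alpha_i, 1) variables.
import numpy as np

alpha = np.array([2.0, 3.0, 5.0])
rng = np.random.default_rng(0)

g = rng.gamma(shape=alpha, scale=1.0, size=(10_000, 3))
samples = g / g.sum(axis=1, keepdims=True)   # each row sums to 1

print("empirical mean:", samples.mean(axis=0))
print("theoretical   :", alpha / alpha.sum())   # (0.2, 0.3, 0.5)
```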
*
Dirichlet-multinomial distribution
*
Dirichlet process
In probability theory, Dirichlet processes (after the distribution associated with Peter Gustav Lejeune Dirichlet) are a family of stochastic processes whose realizations are probability distributions. In other words, a Dirichlet process is a pro ...
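A minimal sketch of a truncated stick-breaking construction, one standard way to realize a Dirichlet process; the concentration alpha and truncation level K are illustrative choices.

```python
# Minimal sketch: truncated stick-breaking. The realization is itself a
# discrete distribution over atoms drawn from the base measure (standard
# normal here).
import numpy as np

rng = np.random.default_rng(0)
alpha, K = 2.0, 100                     # concentration, truncation level

v = rng.beta(1.0, alpha, size=K)        # stick-breaking proportions
remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
weights = v * remaining                 # sums to ~1 for large K
atoms = rng.standard_normal(K)          # atoms from the base measure

print("total weight captured:", weights.sum())
print("one draw from the random distribution:",
      rng.choice(atoms, p=weights / weights.sum()))
```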
*
Disattenuation
*
Discrepancy function
*
Discrete choice
In economics, discrete choice models, or qualitative choice models, describe, explain, and predict choices between two or more discrete alternatives, such as entering or not entering the labor market, or choosing between modes of transport. Such c ...
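For the multinomial logit, the workhorse discrete choice model, the probability of alternative j is exp(V_j) / sum_k exp(V_k); the utilities below are made up for illustration.

```python
# Minimal sketch: multinomial logit choice probabilities.
import numpy as np

V = np.array([1.2, 0.4, 0.9])      # systematic utilities: car, bus, rail
expV = np.exp(V - V.max())         # subtract max for numerical stability
p = expV / expV.sum()
print("choice probabilities:", p)  # sums to 1
```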
*
Discrete choice analysis
*
Discrete distribution
redirects to section of
Probability distribution
In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment. It is a mathematical description of a random phenomenon in terms of its sample spac ...
*
Discrete frequency domain
*
Discrete phase-type distribution
*Discrete probability distribution redirects to section of
Probability distribution
In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment. It is a mathematical descri ...
*Discrete time
*Discretization of continuous features
*Discriminant function analysis
*Discriminative model
*Disorder problem
*Distance correlation
*Distance sampling
*Distributed lag
*Distribution fitting
*Divergence (statistics)
*Diversity index
*Divisia index
*Divisia monetary aggregates index
*Dixon's Q test
*Dominating decision rule
*Donsker's theorem
*Doob decomposition theorem
*Doob martingale
*Doob's martingale convergence theorems
*Doob's martingale inequality
*Doob–Meyer decomposition theorem
*Doomsday argument
*Dot plot (bioinformatics)
*Dot plot (statistics)
*Double counting (fallacy)
*Double descent
*Double exponential distribution (disambiguation)
*Double mass analysis
*Doubly stochastic model
*Drift rate redirects to Stochastic drift
*Dudley's theorem
*Dummy variable (statistics)
*Duncan's new multiple range test
*Dunn index
*Dunnett's test
*Durbin test
*Durbin–Watson statistic
*Dutch book
*Dvoretzky–Kiefer–Wolfowitz inequality
*Dyadic distribution
*Dynamic Bayesian network
*Dynamic factor
*Dynamic topic model
E
*E-statistic
*Earth mover's distance
*Eaton's inequality
*Ecological correlation
*Ecological fallacy
*Ecological regression
*Ecological study
*Econometrics
*Econometric model
*Econometric software redirects to
Comparison of statistical packages
The following tables compare general and technical information for many statistical analysis software packages: general information, operating system support, and support for various ANOVA and regression ...
*Economic data
*Economic epidemiology
*Economic statistics
*Eddy covariance
*Edgeworth series
*Effect size
*
Efficiency (statistics)
In statistics, efficiency is a measure of quality of an estimator, of an experimental design, or of a hypothesis testing procedure. Essentially, a more efficient estimator needs fewer input data or observations than a less efficient one to achiev ...
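A minimal simulation of relative efficiency: under normality the sample median needs more data than the sample mean to match its variance, with asymptotic ratio var(mean)/var(median) = 2/pi, about 0.64.

```python
# Minimal sketch: relative efficiency of the median vs. the mean for
# normal data, estimated by simulation.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((20_000, 100))

var_mean = samples.mean(axis=1).var()
var_median = np.median(samples, axis=1).var()
print("relative efficiency of median:", var_mean / var_median)  # ~0.64
```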
*Efficient estimator
*Ehrenfest model
*Elastic map
*Elliptical distribution
*Ellsberg paradox
*Elston–Stewart algorithm
*EMG distribution
*Empirical
*Empirical Bayes method
*Empirical distribution function
*Empirical likelihood
*Empirical measure
*Empirical orthogonal functions
*Empirical probability
*Empirical process
*Empirical statistical laws
*Endogeneity (econometrics)
*End point of clinical trials
*Energy distance
*Energy statistics (disambiguation)
*Encyclopedia of Statistical Sciences (book)
*Engelbert–Schmidt zero–one law
*Engineering statistics
*Engineering tolerance
*Engset calculation
*Ensemble forecasting
*Ensemble Kalman filter
*Entropy (information theory)
*Entropy estimation
*Entropy power inequality
*Environmental statistics
*Epi Info software
*Epidata software
*Epidemic model
*Epidemiological methods
*Epilogism
*Epitome (image processing)
*Epps effect
*Equating test equating
*Equipossible
*Equiprobable
*Erdős–Rényi model
*Erlang distribution
*Ergodic theory
*Ergodicity
*Error bar
*Error correction model
*Error function
*Errors and residuals in statistics
*Errors-in-variables models
*An Essay Towards Solving a Problem in the Doctrine of Chances
*Estimating equations
*Estimation theory
*Estimation of covariance matrices
*Estimation of signal parameters via rotational invariance techniques
*Estimator
*Etemadi's inequality
*Ethical problems using children in clinical trials
*Event (probability theory)
*Event study
*Evidence lower bound
*
Evidence under Bayes theorem
*Evolutionary data mining
*Ewens's sampling formula
*EWMA chart
*Exact statistics
*Exact test
*Examples of Markov chains
*Excess risk
*Exchange paradox
*Exchangeable random variables
*Expander walk sampling
*Expectation–maximization algorithm
*Expectation propagation
*Expected mean squares
*Expected utility hypothesis
*Expected value
*Expected value of sample information
*Experiment
*Experimental design diagram
*Experimental event rate
*Experimental uncertainty analysis
*Experimenter's bias
*Experimentwise error rate
*Explained sum of squares
*Explained variation
*Explanatory variable
*Exploratory data analysis
*Exploratory factor analysis
*Exponential dispersion model
*Exponential distribution
*Exponential family
*Exponential-logarithmic distribution
*Exponential power distribution redirects to Generalized normal distribution
*Exponential random numbers redirect to subsection of Exponential distribution
*Exponential smoothing
*Exponentially modified Gaussian distribution
*Exponentiated Weibull distribution
*Exposure variable
*Extended Kalman filter
*Extended negative binomial distribution
*Extensions of Fisher's method
*External validity
*Extrapolation domain analysis
*Extreme value theory
*Extremum estimator
F
*F-distribution
*F-divergence
*F-statistics population genetics
*F-test
*F-test of equality of variances
*F1 score
*Facet theory
*Factor analysis
*Factor regression model
*Factor graph
*Factorial code
*Factorial experiment
*Factorial moment
*Factorial moment generating function
*Failure rate
*Fair coin
*Falconer's formula
*False discovery rate
*False nearest neighbor algorithm
*False negative – see Type I and type II errors
*False positive – see Type I and type II errors
*False positive rate
*False positive paradox
*Family-wise error rate
*Fan chart (time series)
*Fano factor
*Fast Fourier transform
*Fast Kalman filter
*FastICA fast independent component analysis
*Fat-tailed distribution
*Feasible generalized least squares
*Feature extraction
*Feller process
*Feller's coin-tossing constants
*Feller-continuous process
*Felsenstein's tree-pruning algorithm statistical genetics
*Fides (reliability)
*Fiducial inference
*Field experiment
*Fieller's theorem
*File drawer problem
*Filtering problem (stochastic processes)
*Financial econometrics
*Financial models with long-tailed distributions and volatility clustering
*Finite-dimensional distribution
*First-hitting-time model
*First-in-man study
*Fishburn–Shepp inequality
*Fisher consistency
*Fisher information
*Fisher information metric
*Fisher kernel
*Fisher transformation
*Fisher's exact test
*Fisher's inequality
*Fisher's linear discriminator
*Fisher's method
*Fisher's noncentral hypergeometric distribution
*Fisher's z-distribution
*Fisher–Tippett distribution redirects to Generalized extreme value distribution
*Fisher–Tippett–Gnedenko theorem
*Five-number summary
*Fixed effects estimator and Fixed effects estimation redirect to Fixed effects model
*Fixed-effect Poisson model
*FLAME clustering
*Fleiss' kappa
*Fleming–Viot process
*Flood risk assessment
*Floor effect
*Focused information criterion
*Fokker–Planck equation
*Folded normal distribution
*Forecast bias
*Forecast error
*Forecast skill
*Forecasting
*Forest plot
*Fork-join queue
*Formation matrix
*Forward measure
*Foster's theorem
*Foundations of statistics
*Founders of statistics
*Fourier analysis
*Fowlkes–Mallows index
*Fraction of variance unexplained
*Fractional Brownian motion
*Fractional factorial design
*Fréchet distribution
*Fréchet mean
*Free statistical software
*Freedman's paradox
*Freedman–Diaconis rule
*Freidlin–Wentzell theorem
*Frequency (statistics)
*Frequency distribution
*Frequency domain
*Frequency probability
*Frequentist inference
*Friedman test
*Friendship paradox
*Frisch–Waugh–Lovell theorem
*Fully crossed design
*Function approximation
*Functional boxplot
*Functional data analysis
*Funnel plot
*Fuzzy logic
*Fuzzy measure theory
*FWL theorem relating regression and projection
G
*G/G/1 queue
*G-network
*G-test
*Galbraith plot
*Gallagher Index
*Galton–Watson process
*Galton's problem
*Gambler's fallacy
*Gambler's ruin
*Gambling and information theory
*Game of chance
*Gamma distribution
*Gamma test (statistics)
*Gamma process
*Gamma variate
*GAUSS (software)
*Gauss's inequality
*Gauss–Kuzmin distribution
*Gauss–Markov process
*Gauss–Markov theorem
*Gauss–Newton algorithm
*Gaussian function
*Gaussian isoperimetric inequality
*Gaussian measure
*Gaussian noise
*Gaussian process
*Gaussian process emulator
*Gaussian q-distribution
*Geary's C
*GEH statistic – a statistic comparing modelled and observed counts
*General linear model
*Generalizability theory
*Generalized additive model
*Generalized additive model for location, scale and shape
*Generalized beta distribution
*Generalized canonical correlation
*Generalized chi-squared distribution
*Generalized Dirichlet distribution
*Generalized entropy index
*Generalized estimating equation
*Generalized expected utility
*Generalized extreme value distribution
*Generalized gamma distribution
*Generalized Gaussian distribution
*Generalised hyperbolic distribution
*Generalized inverse Gaussian distribution
*Generalized least squares
*Generalized linear array model
*Generalized linear mixed model
*Generalized linear model
*Generalized logistic distribution
*Generalized method of moments
*Generalized multidimensional scaling
*Generalized multivariate log-gamma distribution
*Generalized normal distribution
*Generalized p-value
*Generalized Pareto distribution
*Generalized Procrustes analysis
*Generalized randomized block design
*Generalized Tobit
*Generalized Wiener process
*Generative model
*Genetic epidemiology
*GenStat software
*Geo-imputation
*Geodemographic segmentation
*Geometric Brownian motion
*Geometric data analysis
*Geometric distribution
*Geometric median
*Geometric standard deviation
*Geometric stable distribution
*Geospatial predictive modeling
*Geostatistics
*German tank problem
*Gerschenkron effect
*Gibbs sampling
*Gillespie algorithm
*Gini coefficient
*Girsanov theorem
*Gittins index
*GLIM (software) software
*Glivenko–Cantelli theorem
*GLUE (uncertainty assessment)
*Goldfeld–Quandt test
*Gompertz distribution
*Gompertz function
*Gompertz–Makeham law of mortality
*Good–Turing frequency estimation
*Goodhart's law
*Goodman and Kruskal's gamma
*Goodman and Kruskal's lambda
*Goodness of fit
*Gordon–Newell network
*Gordon–Newell theorem
*Graeco-Latin square
*Grand mean
*Granger causality
*Graph cuts in computer vision a potential application of Bayesian analysis
*Graphical model
*Graphical models for protein structure
*GraphPad InStat software
*GraphPad Prism software
*Gravity model of trade
*Greenwood statistic
*Gretl
*Group family
*Group method of data handling
*Group size measures
*Grouped data
*Grubbs's test for outliers
*Guess value
*Guesstimate
*Gumbel distribution
*Guttman scale
*Gy's sampling theory
H
*h-index
*Hájek–Le Cam convolution theorem
*Half circle distribution
*Half-logistic distribution
*Half-normal distribution
*Halton sequence
*Hamburger moment problem
*Hannan–Quinn information criterion
*Harris chain
*Hardy–Weinberg principle statistical genetics
*Hartley's test
*Hat matrix
*Hammersley–Clifford theorem
*Hausdorff moment problem
*Hausman specification test redirects to Hausman test
*Haybittle–Peto boundary
*Hazard function redirects to Failure rate
*Hazard ratio
*Heaps' law
*Health care analytics
*Heart rate variability
*Heavy-tailed distribution
*Heckman correction
*Hedonic regression
*Hellin's law
*Hellinger distance
*Helmert–Wolf blocking
*Herdan's law
*Herfindahl index
*Heston model
*Heteroscedasticity
*Heteroscedasticity-consistent standard errors
*Heteroskedasticity – see Heteroscedasticity
*Hewitt–Savage zero–one law
*Hidden Markov model
*Hidden Markov random field
*Hidden semi-Markov model
*Hierarchical Bayes model
*Hierarchical clustering
*Hierarchical hidden Markov model
*Hierarchical linear modeling
*High-dimensional statistics
*Higher-order factor analysis
*Higher-order statistics
*Hirschman uncertainty
*Histogram
*Historiometry
*History of randomness
*History of statistics
*Hitting time
*Hodges' estimator
*Hodges–Lehmann estimator
*Hoeffding's independence test
*Hoeffding's lemma
*Hoeffding's inequality
*Holm–Bonferroni method
*Holtsmark distribution
*Homogeneity (statistics)
*Homogenization (climate)
*Homoscedasticity
*Hoover index (a.k.a. Robin Hood index)
*Horvitz–Thompson estimator
*Hosmer–Lemeshow test
*Hotelling's T-squared distribution
*How to Lie with Statistics (book)
*Howland will forgery trial
*Hubbert curve
*Huber–White standard error – see Heteroscedasticity-consistent standard errors
*Huber loss function
*Human subject research
*Hurst exponent
*Hyper-exponential distribution
*Hyper-Graeco-Latin square design
*Hyperbolic distribution
*Hyperbolic secant distribution
*Hypergeometric distribution
*Hyperparameter (Bayesian statistics)
*Hyperparameter (machine learning)
*Hyperprior
*Hypoexponential distribution
I
*Idealised population
*Idempotent matrix
*Identifiability
*Ignorability
*
Illustration of the central limit theorem
*Image denoising
*Importance sampling
*Imprecise probability
*Impulse response
*Imputation (statistics)
*Incidence (epidemiology)
*Increasing process
*Indecomposable distribution
*Independence of irrelevant alternatives
*Independent component analysis
*Independent and identically distributed random variables
*Index (economics)
*Index number
*Index of coincidence
*Index of dispersion
*Index of dissimilarity
*Indicators of spatial association
*Indirect least squares
*Inductive inference
*An inequality on location and scale parameters – see
Chebyshev's inequality
In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) provides an upper bound on the probability of deviation of a random variable (with finite variance) from its mean. More specifically, the probability ...
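A minimal empirical check of the bound P(|X - mu| >= k * sigma) <= 1/k^2, here on an exponential sample; any distribution with finite variance would do.

```python
# Minimal sketch: Chebyshev's inequality holds empirically.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)   # mu = 1, sigma = 1
mu, sigma = x.mean(), x.std()

for k in (1.5, 2.0, 3.0):
    emp = np.mean(np.abs(x - mu) >= k * sigma)
    print(f"k={k}: empirical {emp:.4f} <= bound {1 / k**2:.4f}")
```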
*Inference
*Inferential statistics redirects to Statistical inference
*Infinite divisibility (probability)
*Infinite monkey theorem
*Influence diagram
*Info-gap decision theory
*Information bottleneck method
*Information geometry
*Information gain ratio
*Information ratio finance
*Information source (mathematics)
*Information theory
*Inherent bias
*Inherent zero
*Injury prevention application
*Innovation (signal processing)
*Innovations vector
*Institutional review board
*Instrumental variable
*Integrated nested Laplace approximations
*Intention to treat analysis
*Interaction (statistics)
*Interaction variable see Interaction (statistics)
*Interclass correlation
*Interdecile range
*Interim analysis
*Internal consistency
*Internal validity
*Interquartile mean
*Interquartile range
*Inter-rater reliability
*Interval estimation
*Intervening variable
*Intra-rater reliability
*Intraclass correlation
*Invariant estimator
*Invariant extended Kalman filter
*Inverse distance weighting
*Inverse distribution
*Inverse Gaussian distribution
*Inverse matrix gamma distribution
*Inverse Mills ratio
*Inverse probability
*Inverse probability weighting
*Inverse relationship
*Inverse-chi-squared distribution
*Inverse-gamma distribution
*Inverse transform sampling
*Inverse-variance weighting
*Inverse-Wishart distribution
*Iris flower data set
*Irwin–Hall distribution
*Isomap
*Isotonic regression
*Isserlis' theorem
*Item response theory
*Item-total correlation
*Item tree analysis
*Iterative proportional fitting
*Iteratively reweighted least squares
*Itô calculus
*Itô isometry
*Itô's lemma
J
*Jaccard index
*Jackknife (statistics)
*Jackson network
*Jackson's theorem (queueing theory)
*Jadad scale
*James–Stein estimator
*Jarque–Bera test
*Jeffreys prior
*Jensen's inequality
*Jensen–Shannon divergence
*JMulTi software
*Johansen test
*Johnson SU distribution
*Joint probability distribution
*Jonckheere's trend test
*JMP (statistical software)
*Jump process
*Jump-diffusion model
*Junction tree algorithm
K
*K-distribution
*K-means algorithm redirects to k-means clustering
*K-means++
*K-medians clustering
*K-medoids
*K-statistic
*Kalman filter
*Kaplan–Meier estimator
*Kappa coefficient
*Kappa statistic
*Karhunen–Loève theorem
*Kendall tau distance
*Kendall tau rank correlation coefficient
*Kendall's notation
*Kendall's W Kendall's coefficient of concordance
*Kent distribution
*Kernel density estimation
*Kernel Fisher discriminant analysis
*Kernel methods
*Kernel principal component analysis
*Kernel regression
*Kernel smoother
*Kernel (statistics)
*Khmaladze transformation (probability theory)
*Killed process
*Khintchine inequality
*Kingman's formula
*Kirkwood approximation
*Kish grid
*Kitchen sink regression
*Klecka's tau
*Knightian uncertainty
*Kolmogorov backward equation
*Kolmogorov continuity theorem
*Kolmogorov extension theorem
*Kolmogorov's criterion
*Kolmogorov's generalized criterion
*Kolmogorov's inequality
*Kolmogorov's zero–one law
*Kolmogorov–Smirnov test
*KPSS test
*Kriging
*Kruskal–Wallis one-way analysis of variance
*Kuder–Richardson Formula 20
*Kuiper's test
*Kullback's inequality
*Kullback–Leibler divergence
*Kumaraswamy distribution
*Kurtosis
*Kushner equation
L
*L-estimator
*L-moment
*Labour Force Survey
*Lack-of-fit sum of squares
*Lady tasting tea
*Lag operator
*Lag windowing
*Lambda distribution (disambiguation)
*Landau distribution
*Lander–Green algorithm
*Language model
*Laplace distribution
*Laplace principle (large deviations theory)
*LaplacesDemon software
*Large deviations theory
*Large deviations of Gaussian random functions
*LARS – see least-angle regression
*Latent variable, latent variable model
*Latent class model
*Latent Dirichlet allocation
*Latent growth modeling
*Latent semantic analysis
*Latin rectangle
*Latin square
*Latin hypercube sampling
*Law (stochastic processes)
*Law of averages
*Law of comparative judgment
*Law of large numbers
*Law of the iterated logarithm
*Law of the unconscious statistician
*Law of total covariance
*Law of total cumulance
*Law of total expectation
*Law of total probability
*Law of total variance
*Law of truly large numbers
*Layered hidden Markov model
*Le Cam's theorem
*Lead time bias
*Least absolute deviations
*Least-angle regression
*Least squares
*Least-squares spectral analysis
*Least squares support vector machine
*Least trimmed squares
*Learning theory (statistics)
*Leftover hash-lemma
*Lehmann–Scheffé theorem
*Length time bias
*Levene's test
*Level of analysis
*Level of measurement
*Levenberg–Marquardt algorithm
*Leverage (statistics)
*Levey–Jennings chart redirects to Laboratory quality control
*Lévy's convergence theorem
*Lévy's continuity theorem
*Lévy arcsine law
*Lévy distribution
*Lévy flight
*Lévy process
*Lewontin's Fallacy
*Lexis diagram
*Lexis ratio
*Lies, damned lies, and statistics
*Life expectancy
*Life table
*Lift (data mining)
*Likelihood function
*Likelihood principle
*Likelihood-ratio test
*Likelihood ratios in diagnostic testing
*Likert scale
*Lilliefors test
*Limited dependent variable
*Limiting density of discrete points
*Lincoln index
*Lindeberg's condition
*Lindley equation
*Lindley's paradox
*Line chart
*Line-intercept sampling
*Linear classifier
*Linear discriminant analysis
*Linear least squares
*Linear model
*Linear prediction
*Linear probability model
*Linear regression
*Linguistic demography
*Linnik distribution redirects to Geometric stable distribution
*LISREL proprietary statistical software package
*List of basic statistics topics redirects to Outline of statistics
*List of convolutions of probability distributions
*List of graphical methods
*List of information graphics software
*List of probability topics
*List of random number generators
*List of scientific journals in statistics
*List of statistical packages
*List of statisticians
*Listwise deletion
*Little's law
*Littlewood's law
*Ljung–Box test
*Local convex hull
*Local independence
*Local martingale
*Local regression
*Location estimation redirects to Location parameter
*Location estimation in sensor networks
*Location parameter
*Location test
*Location-scale family
*Local asymptotic normality
*Locality (statistics)
*Loess curve redirects to Local regression
*Log-Cauchy distribution
*Log-Laplace distribution
*Log-normal distribution
*Log-linear analysis
*Log-linear model
*Log-linear modeling redirects to Poisson regression
*Log-log plot
*Log-logistic distribution
*Logarithmic distribution
*Logarithmic mean
*Logistic distribution
*Logistic function
*Logistic regression
*Logit
*Logit analysis in marketing
*Logit-normal distribution
*Logrank test
*Lomax distribution
*Long-range dependency
*Long Tail
*Long-tail traffic
*Longitudinal study
*Longstaff–Schwartz model
*Lorenz curve
*Loss function
*Lot quality assurance sampling
*Lotka's law
*Low birth weight paradox
*Lucia de Berk prob/stats related court case
*Lukacs's proportion-sum independence theorem
*Lumpability
*Lusser's law
*
Lyapunov's central limit theorem
In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution. This holds even if the original variables ...
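A minimal simulation of the statement: standardized means of skewed (exponential) samples behave like a standard normal, so about 95% of them should fall within plus or minus 1.96. Sample size and replication counts are arbitrary.

```python
# Minimal CLT sketch with exponential data (mu = 1, sigma = 1).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 50_000
means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
z = (means - 1.0) / (1.0 / np.sqrt(n))     # standardized sample means

print("P(|Z| <= 1.96) ~", np.mean(np.abs(z) <= 1.96))   # ~0.95
```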
M
*M/D/1 queue
*M/G/1 queue
*M/M/1 queue
*M/M/c queue
*M-estimator
**Redescending M-estimator
*M-separation
*Mabinogion sheep problem
*Machine learning
*Mahalanobis distance
*Main effect
*Mallows's ''Cp''
*MANCOVA
*Manhattan plot
*Mann–Whitney U
*MANOVA
*Mantel test
*MAP estimator redirects to Maximum a posteriori estimation
*Marchenko–Pastur distribution
*Marcinkiewicz–Zygmund inequality
*Marcum Q-function
*Margin of error
*Marginal conditional stochastic dominance
*Marginal distribution
*Marginal likelihood
*Marginal model
*Marginal variable redirects to Marginal distribution
*Mark and recapture
*Markov additive process
*Markov blanket
*Markov chain
**Markov chain geostatistics
**Markov chain mixing time
*Markov chain Monte Carlo
*Markov decision process
*Markov information source
*Markov kernel
*Markov logic network
*Markov model
*Markov network
*Markov process
*Markov property
*Markov random field
*Markov renewal process
*Markov's inequality
*Markovian arrival processes
*Marsaglia polar method
*Martingale (probability theory)
*Martingale difference sequence
*Martingale representation theorem
*Master equation
*Matched filter
*Matching pursuit
*Matching (statistics)
*Matérn covariance function
*Mathematica – software
*Mathematical biology
*Mathematical modelling in epidemiology
*Mathematical modelling of infectious disease
*Mathematical statistics
*Matthews correlation coefficient
*Matrix gamma distribution
*Matrix normal distribution
*Matrix population models
*Matrix t-distribution
*Mauchly's sphericity test
*Maximal ergodic theorem
*Maximal information coefficient
*Maximum a posteriori estimation
*Maximum entropy classifier redirects to Logistic regression
*Maximum-entropy Markov model
*Maximum entropy method redirects to Principle of maximum entropy
*Maximum entropy probability distribution
*Maximum entropy spectral estimation
*Maximum likelihood
*Maximum likelihood sequence estimation
*Maximum parsimony
*Maximum spacing estimation
*Maxwell speed distribution
*Maxwell–Boltzmann distribution
*Maxwell's theorem
*Mazziotta–Pareto index
*MCAR (missing completely at random)
*McCullagh's parametrization of the Cauchy distributions
*McDiarmid's inequality
*McDonald–Kreitman test statistical genetics
*McKay's approximation for the coefficient of variation
*McNemar's test
*Meadow's law
*Mean
*Mean see also expected value
*Mean absolute error
*Mean absolute percentage error
*Mean absolute scaled error
*Mean and predicted response
*Mean deviation (disambiguation)
*Mean difference – see Mean absolute difference
*Mean integrated squared error
*Mean of circular quantities
*Mean percentage error
*Mean preserving spread
*Mean reciprocal rank
*Mean signed difference
*Mean square quantization error
*Mean square weighted deviation
*Mean squared error
*Mean squared prediction error
*Mean time between failures
*Mean-reverting process redirects to Ornstein–Uhlenbeck process
*Mean value analysis
*Measurement, level of – see level of measurement.
*Measurement invariance
*MedCalc software
*Median
*Median absolute deviation
*Median polish
*Median test
*Mediation (statistics)
*Medical statistics
*Medoid
*Memorylessness
*Mendelian randomization
*Meta-analysis
*Meta-regression
*Metalog distribution
*Method of moments (statistics)
*Method of simulated moments
*Method of support
*Metropolis–Hastings algorithm
*Mexican paradox
*Microdata (statistics)
*Midhinge
*Mid-range
*MinHash
*Minimax
*Minimax estimator
*Minimisation (clinical trials)
*Minimum chi-square estimation
*Minimum distance estimation
*Minimum mean square error
*Minimum-variance unbiased estimator
*Minimum viable population
*Minitab
*MINQUE minimum norm quadratic unbiased estimation
*Misleading graph
*Missing completely at random
*Missing data
*Missing values – see Missing data
*Mittag–Leffler distribution
*Mixed logit
*Misconceptions about the normal distribution
*Misuse of statistics
*Mixed data sampling
*Mixed-design analysis of variance
*Mixed model
*Mixing (mathematics)
*Mixture distribution
*Mixture model
*Mixture (probability)
*MLwiN
*Mode (statistics)
*Model output statistics
*Model selection
*Model specification
*Moderator variable redirects to Moderation (statistics)
*Modifiable areal unit problem
*Moffat distribution
*Moment (mathematics)
*Moment-generating function
*Moments, method of see method of moments (statistics)
*Moment problem
*Monotone likelihood ratio
*Monte Carlo integration
*Monte Carlo method
*Monte Carlo method for photon transport
*Monte Carlo methods for option pricing
*Monte Carlo methods in finance
*Monte Carlo molecular modeling
*Moral graph
*Moran process
*Moran's ''I''
*Morisita's overlap index
*Morris method
*Mortality rate
*Most probable number
*Moving average
*Moving-average model
*Moving average representation redirects to Wold's theorem
*Moving least squares
*Multi-armed bandit
*Multi-vari chart
*Multiclass classification
*Multiclass LDA (linear discriminant analysis) redirects to Linear discriminant analysis
*Multicollinearity
*Multidimensional analysis
*Multidimensional Chebyshev's inequality
*Multidimensional panel data
*Multidimensional scaling
*Multifactor design of experiments software
*Multifactor dimensionality reduction
*Multilevel model
*Multilinear principal component analysis
*Multinomial distribution
*Multinomial logistic regression
*Multinomial logit – see Multinomial logistic regression
*Multinomial probit
*Multinomial test
*Multiple baseline design
*Multiple comparisons
*Multiple correlation
*Multiple correspondence analysis
*Multiple discriminant analysis
*Multiple-indicator kriging
*Multiple Indicator Cluster Survey
*Multiple of the median
*Multiple testing correction redirects to Multiple comparisons
*Multiple-try Metropolis
*Multiresolution analysis
*Multiscale decision making
*Multiscale geometric analysis
*Multistage testing
*Multitaper
*Multitrait-multimethod matrix
*Multivariate adaptive regression splines
*Multivariate analysis
*Multivariate analysis of variance
*Multivariate distribution – see Joint probability distribution
*Multivariate kernel density estimation
*Multivariate normal distribution
*Multivariate Pareto distribution
*Multivariate Pólya distribution
*Multivariate probit redirects to Multivariate probit model
*Multivariate random variable
*Multivariate stable distribution
*Multivariate statistics
*Multivariate Student distribution redirects to Multivariate t-distribution
*Multivariate t-distribution
N
*''n'' = 1 fallacy
*N of 1 trial
*Naive Bayes classifier
*Nakagami distribution
*National and international statistical services – see List of national and international statistical services
*Nash–Sutcliffe model efficiency coefficient
*National Health Interview Survey
*Natural experiment
*Natural exponential family
*Natural process variation
*NCSS (statistical software)
*Nearest-neighbor chain algorithm
*Negative binomial distribution
*Negative multinomial distribution
*Negative predictive value
*Negative relationship
*Negentropy
*Neighbourhood components analysis
*Neighbor joining
*Nelson rules
*Nelson–Aalen estimator
*Nemenyi test
*Nested case-control study
*Nested sampling algorithm
*Network probability matrix
*Neutral vector
*Newcastle–Ottawa scale
*Newey–West estimator
*Newman–Keuls method
*Neyer d-optimal test
*Neyman construction
*Neyman–Pearson lemma
*Nicholson–Bailey model
*Nominal category
*Noncentral beta distribution
*Noncentral chi distribution
*Noncentral chi-squared distribution
*Noncentral F-distribution
*Noncentral hypergeometric distributions
*Noncentral t-distribution
*Noncentrality parameter
*Nonlinear autoregressive exogenous model
*Nonlinear dimensionality reduction
*Non-linear iterative partial least squares
*Nonlinear regression
*Non-homogeneous Poisson process
*Non-linear least squares
*Non-negative matrix factorization
*Nonparametric skew
*Non-parametric statistics
*Non-response bias
*Non-sampling error
*Nonparametric regression
*Nonprobability sampling
*Normal curve equivalent
*Normal distribution
*Normal probability plot see also rankit
*Normal score see also rankit and Z score
*Normal variance-mean mixture
*Normal-exponential-gamma distribution
*Normal-gamma distribution
*Normal-inverse Gaussian distribution
*Normal-scaled inverse gamma distribution
*Normality test
*Normalization (statistics)
*Notation in probability and statistics
*Novikov's condition
*np-chart
*Null distribution
*Null hypothesis
*Null result
*Nuisance parameter
*Nuisance variable
*Numerical data
*Numerical methods for linear least squares
*Numerical parameter redirects to statistical parameter
*Numerical smoothing and differentiation
*Nuremberg Code
O
*Observable variable
*Observational equivalence
*Observational error
*Observational study
*Observed information
*Occupancy frequency distribution
*Odds
*Odds algorithm
*Odds ratio
*Official statistics
*Ogden tables
*Ogive (statistics)
*Omitted-variable bias
*Omnibus test
*One- and two-tailed tests
*One-class classification
*One-factor-at-a-time method
*One-tailed test redirects to One- and two-tailed tests
*One-way analysis of variance
*Online NMF Online Non-negative Matrix Factorisation
*Open-label trial
*OpenEpi software
*OpenBUGS software
*Operational confound
*Operational sex ratio
*Operations research
*Opinion poll
*Optimal decision
*Optimal design
*Optimal discriminant analysis
*Optimal matching
*Optimal stopping
*Optimality criterion
*Optimistic knowledge gradient
*Optional stopping theorem
*Order of a kernel
*Order of integration
*Order statistic
*Ordered logit
*Ordered probit
*Ordered subset expectation maximization
*Ordinal regression
*Ordinary least squares
*Ordination (statistics)
*Ornstein–Uhlenbeck process
*Orthogonal array testing
*Orthogonality
*Orthogonality principle
*Outlier
*Outliers ratio
*Outline of probability
*Outline of regression analysis
*Outline of statistics
*Overdispersion
*Overfitting
*Owen's T function
*OxMetrics software
P
*p-chart
*p-rep
*P-value
*P–P plot
*Page's trend test
*Paid survey
*Paired comparison analysis
*Paired difference test
*Pairwise comparison – see Pairwise comparison (psychology)
*Pairwise independence
*Panel analysis
*Panel data
*Panjer recursion a class of discrete compound distributions
*Paley–Zygmund inequality
*Parabolic fractal distribution
*PARAFAC (parallel factor analysis)
*Parallel coordinates – graphical display of data
*Parallel factor analysis redirects to PARAFAC
*Paradigm (experimental)
*Parameter identification problem
*Parameter space
*Parametric family
*Parametric model
*Parametric statistics
*Pareto analysis
*Pareto chart
*Pareto distribution
*Pareto index
*Pareto interpolation
*Pareto principle
*Park test
*Partial autocorrelation redirects to Partial autocorrelation function
*Partial autocorrelation function
*Partial correlation
*Partial least squares
*Partial least squares regression
*Partial leverage
*Partial regression plot
*Partial residual plot
*Particle filter
*Partition of sums of squares
*Parzen window
*Path analysis (statistics)
*Path coefficient
*Path space (disambiguation)
*Pattern recognition
*Pearson's chi-squared test (one of various chi-squared tests)
*Pearson distribution
*Pearson product-moment correlation coefficient
*Pedometric mapping
*People v. Collins (prob/stats related court case)
*Per capita
*Per-comparison error rate
*Per-protocol analysis
*Percentile
*Percentile rank
*Periodic variation redirects to Seasonality
*Periodogram
*Peirce's criterion
*Pensim2 an econometric model
*Percentage point
*Permutation test redirects to Resampling (statistics)
*Pharmaceutical statistics
*Phase dispersion minimization
*Phase-type distribution
*Phi coefficient
*Phillips–Perron test
*Philosophy of probability
*Philosophy of statistics
*Pickands–Balkema–de Haan theorem
*Pie chart
*Piecewise-deterministic Markov process
*Pignistic probability
*Pinsker's inequality
*Pitman closeness criterion
*Pitman–Koopman–Darmois theorem
*Pitman–Yor process
*Pivotal quantity
*Placebo-controlled study
*Plackett–Burman design
*Plate notation
*Plot (graphics)
*Pocock boundary
*Poincaré plot
*Point-biserial correlation coefficient
*Point estimation
*Point pattern analysis
*Point process
*Poisson binomial distribution
*Poisson distribution
*Poisson hidden Markov model
*Poisson limit theorem
*Poisson process
*Poisson regression
*Poisson random numbers redirects to section of Poisson distribution
*Poisson sampling
*Polar distribution – see Circular distribution
*Policy capturing
*Political forecasting
*Pollaczek–Khinchine formula
*Pollyanna Creep
*Polykay
*Poly-Weibull distribution
*Polychoric correlation
*Polynomial and rational function modeling
*Polynomial chaos
*Polynomial regression
*Polytree (Bayesian networks)
*Pooled standard deviation redirects to Pooled variance
*Pooling design
*Popoviciu's inequality on variances
*Population – see Statistical population
*Population dynamics
*Population ecology application
*Population modeling
*Population process
*Population pyramid
*Population statistics
*Population variance
*Population viability analysis
*Portmanteau test
*Positive predictive value
*Post-hoc analysis
*Posterior predictive distribution
*Posterior probability
*Power law
*
Power transform
*Prais–Winsten estimation
*Pre- and post-test probability
*Precision (statistics)
*Precision and recall
*Prediction interval
*Predictive analytics
*Predictive inference
*Predictive informatics
*Predictive intake modelling
*Predictive modelling
*Predictive validity
*Preference regression (in marketing)
*Preferential attachment process – see Preferential attachment
*PRESS statistic
*Prevalence
*Principal component analysis
**Multilinear principal-component analysis
*Principal component regression
*Principal geodesic analysis
*Principal stratification
*Principle of indifference
*Principle of marginality
*Principle of maximum entropy
*Prior knowledge for pattern recognition
*Prior probability
*Prior probability distribution redirects to Prior probability
*Probabilistic causation
*Probabilistic design
*Probabilistic forecasting
*Probabilistic latent semantic analysis
*Probabilistic metric space
*Probabilistic proposition
*Probabilistic relational model
*Probability
*Probability bounds analysis
*Probability box
*Probability density function
*
Probability distribution
In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment. It is a mathematical descri ...
*Probability distribution function (disambiguation) – see Probability density function
*Probability integral transform
*Probability interpretations
*Probability mass function
*Probability matching
*Probability metric
*Probability of error
*Probability of precipitation
*Probability plot (disambiguation)
*Probability plot correlation coefficient redirects to Q–Q plot
*Probability plot correlation coefficient plot
*Probability space
*Probability theory
*Probability-generating function
*Probable error
*Probit
*Probit model
*Procedural confound
*Process control
*Process Window Index
*Procrustes analysis
*Proebsting's paradox
*Product distribution
*Product form solution
*Profile likelihood redirects to Likelihood function
*Progressively measurable process
*Prognostics
*Projection pursuit
*Projection pursuit regression
*Proof of Stein's example
*Propagation of uncertainty
*Propensity probability
*Propensity score
*Propensity score matching
*Proper linear model
*Proportional hazards models
*Proportional reduction in loss
*Prosecutor's fallacy
*Proxy (statistics)
*Psephology
*Pseudo-determinant
*Pseudo-random number sampling
*Pseudocount
*Pseudolikelihood
*Pseudomedian
*Pseudoreplication
*PSPP (free software)
*Psychological statistics
*Psychometrics
*Pythagorean expectation
Q
*Q test
*Q-exponential distribution
*Q-function
*Q-Gaussian distribution
*Q–Q plot
*Q-statistic (disambiguation)
*Quadrat
*Quadrant count ratio
*Quadratic classifier
*Quadratic form (statistics)
*Quadratic variation
*Qualitative comparative analysis
*Qualitative data
*Qualitative variation
*Quality control
*Quantile
*Quantile function
*Quantile normalization
*Quantile regression
*Quantile-parameterized distribution
*Quantitative marketing research
*Quantitative psychological research
*Quantitative research
*Quartile
*Quartile coefficient of dispersion
*Quasi-birth–death process
*Quasi-experiment
*Quasi-experimental design – see Design of quasi-experiments
*Quasi-likelihood
*Quasi-maximum likelihood
*Quasireversibility
*Quasi-variance
*Questionnaire
*Queueing model
*Queueing theory
*Queuing delay
*Queuing theory in teletraffic engineering – see Teletraffic queuing theory
*Quota sampling
R
*R programming language – see R (programming language)
*R v Adams (prob/stats related court case)
*Radar chart
*Rademacher distribution
*Radial basis function network
*Raikov's theorem
*Raised cosine distribution
*Ramaswami's formula
*Ramsey RESET test the Ramsey Regression Equation Specification Error Test
*
Rand index
The Rand index or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index may be defined that is adjusted for the chance ...
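A minimal sketch computing the index from pair agreements: pairs placed together in both clusterings plus pairs placed apart in both, divided by all pairs. The two labelings are made up.

```python
# Minimal Rand index sketch over all pairs of items.
from itertools import combinations

a = [0, 0, 1, 1, 2, 2]     # clustering 1
b = [0, 0, 1, 2, 2, 2]     # clustering 2

agree = sum((a[i] == a[j]) == (b[i] == b[j])
            for i, j in combinations(range(len(a)), 2))
total = len(a) * (len(a) - 1) // 2
print("Rand index:", agree / total)
```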
*Random assignment
*Random compact set
*Random data see randomness
*Random effects estimation – see Random effects model
*Random effects model
*Random element
*Random field
*Random function
*Random graph
*Random matrix
*Random measure
*Random multinomial logit
*Random naive Bayes
*Random permutation statistics
*Random regular graph
*Random sample
*Random sampling
*Random sequence
*Random variable
*Random variate
*Random walk
*Random walk hypothesis
*Randomization
*Randomized block design
*Randomized controlled trial
*Randomized decision rule
*Randomized experiment
*Randomized response
*Randomness
*Randomness tests
*Range (statistics)
*Rank abundance curve
*Rank correlation mainly links to two following
**Spearman's rank correlation coefficient
**Kendall tau rank correlation coefficient
*Rank product
*Rank-size distribution
*Ranking
*Rankit
*Ranklet
*RANSAC
*Rao–Blackwell theorem
*Rao-Blackwellisation – see Rao–Blackwell theorem
*Rasch model
**Polytomous Rasch model
*Rasch model estimation
*Ratio distribution
*Ratio estimator
*Rational quadratic covariance function
*Rayleigh distribution
*Rayleigh mixture distribution
*Raw score
*Realization (probability)
*Recall bias
*Receiver operating characteristic
*Reciprocal distribution
*Rectified Gaussian distribution
*Recurrence period density entropy
*Recurrence plot
*Recurrence quantification analysis
*Recursive Bayesian estimation
*Recursive least squares
*Recursive partitioning
*Reduced form
*Reference class problem
*Reflected Brownian motion
*Regenerative process
*Regression analysis see also linear regression
*Regression Analysis of Time Series proprietary software
*Regression control chart
*Regression diagnostic
*Regression dilution
*Regression discontinuity design
*Regression estimation
*Regression fallacy
*Regression-kriging
*Regression model validation
*Regression toward the mean
*Regret (decision theory)
*Reification (statistics)
*Rejection sampling
*Relationships among probability distributions
*Relative change and difference
*Relative efficiency redirects to
Efficiency (statistics)
In statistics, efficiency is a measure of quality of an estimator, of an experimental design, or of a hypothesis testing procedure. Essentially, a more efficient estimator needs fewer input data or observations than a less efficient one to achiev ...
*Relative index of inequality
*Relative likelihood
*Relative risk
*Relative risk reduction
*Relative standard deviation
*Relative standard error redirects to Relative standard deviation
*Relative variance redirects to Relative standard deviation
*Relative survival
*Relativistic Breit–Wigner distribution
*Relevance vector machine
*Reliability (statistics)
*Reliability block diagram
*Reliability engineering
*Reliability theory
*Reliability theory of aging and longevity
*Rencontres numbers a discrete distribution
*Renewal theory
*Repeatability
*Repeated measures design
*Replication (statistics)
*Representation validity
*Reproducibility
*Resampling (statistics)
*Rescaled range
*Resentful demoralization experimental design
*Residual. See errors and residuals in statistics.
*Residual sum of squares
*Response bias
*Response rate (survey)
*Response surface methodology
*Response variable
*Restricted maximum likelihood
*Restricted randomization
*Reversible-jump Markov chain Monte Carlo
*Reversible dynamics
*Rind et al. controversy interpretations of paper involving meta-analysis
*Rice distribution
*Richardson–Lucy deconvolution
*Ridge regression redirects to Tikhonov regularization
*Ridit scoring
*Risk adjusted mortality rate
*Risk factor
*Risk function
*Risk perception
*Risk theory
*Risk–benefit analysis
*Robbins lemma
*Robust Bayesian analysis
*Robust confidence intervals
*Robust measures of scale
*Robust regression
*Robust statistics
*Root mean square
*Root-mean-square deviation
*Root mean square deviation (bioinformatics)
*Root mean square fluctuation
*Ross's conjecture
*Rossmo's formula
*Rothamsted Experimental Station
*Round robin test
*Rubin causal model
*Ruin theory
*Rule of succession
*Rule of three (medicine)
*Run chart
*RV coefficient
S
*S (programming language)
*S-PLUS
*Safety in numbers
*Sally Clark (prob/stats related court case)
*Sammon projection
*Sample mean and covariance redirects to Sample mean and sample covariance
*Sample mean and sample covariance
*Sample maximum and minimum
*Sample size determination
*Sample space
*Sample (statistics)
*Sample-continuous process
*Sampling (statistics)
**Simple random sampling
**Snowball sampling
**Systematic sampling
**Stratified sampling
**
Cluster sampling
In statistics, cluster sampling is a sampling plan used when mutually homogeneous yet internally heterogeneous groupings are evident in a statistical population. It is often used in marketing research.
In this sampling plan, the total populat ...
**distance sampling
**Multistage sampling
**Nonprobability sampling
**Slice sampling
*
Sampling bias
In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower or higher sampling probability than others. It results in a b ...
*Sampling design
*Sampling distribution
*Sampling error
*Sampling fraction
*Sampling frame
*Sampling probability
*Sampling risk
*Samuelson's inequality
*Sargan test
*SAS (software)
*SAS language
* SAS System – see SAS (software)
*Savitzky–Golay smoothing filter
*Sazonov's theorem
*Saturated array
*Scale analysis (statistics)
*Scale parameter
*Scaled-inverse-chi-squared distribution
*Scaling pattern of occupancy
*Scatter matrix
*Scatter plot
*Scatterplot smoothing
*Scheffé's method
*Scheirer–Ray–Hare test
*Schilder's theorem
*Schramm–Loewner evolution
*Schuette–Nesbitt formula
*Schwarz criterion
*Score (statistics)
*Score test
*Scoring algorithm
*Scoring rule
*SCORUS
*Scott's Pi
*SDMX a standard for exchanging statistical data
*Seasonal adjustment
*Seasonality
*Seasonal subseries plot
*Seasonal variation
*Seasonally adjusted annual rate
*Second moment method
*Secretary problem
*Secular variation
*Seed-based d mapping
*Seemingly unrelated regressions
*Seismic to simulation
*Selection bias
*Selective recruitment
*Self-organizing map
*Self-selection bias
*Self-similar process
*Segmented regression
*Seismic inversion
*Self-similarity matrix
*Semantic mapping (statistics)
*Semantic relatedness
*Semantic similarity
*Semi-Markov process
*Semi-log graph
*Semidefinite embedding
*Semimartingale
*Semiparametric model
*Semiparametric regression
*Semivariance
*Sensitivity (tests)
*Sensitivity analysis
*Sensitivity and specificity
*Sensitivity index
*Separation test
*Sequential analysis
*Sequential estimation
*Sequential Monte Carlo methods redirects to Particle filter
*Sequential probability ratio test
*Serial dependence
*Seriation (archaeology)
*SETAR (model) a time series model
*Sethi model
*Seven-number summary
*Sexual dimorphism measures
*Shannon–Hartley theorem
*Shape of the distribution
*Shape parameter
*Shapiro–Wilk test
*Sharpe ratio
*SHAZAM (software)
*Shewhart individuals control chart
*Shifted Gompertz distribution
*Shifted log-logistic distribution
*Shifting baseline
*Shrinkage (statistics)
*Shrinkage estimator
*Sichel distribution
*Siegel–Tukey test
*Sieve estimator
*Sigma-algebra
*SigmaStat software
*Sign test
*Signal-to-noise ratio
*Signal-to-noise statistic
*Significance analysis of microarrays
*Silhouette (clustering)
*Simfit software
*Similarity matrix
*Simon model
*Simple linear regression
*Simple moving average crossover
*Simple random sample
*Simpson's paradox
*Simulated annealing
*Simultaneous equation methods (econometrics)
*Simultaneous equations model
*Single equation methods (econometrics)
*Single-linkage clustering
*Singular distribution
*Singular spectrum analysis
*Sinusoidal model
*Sinkov statistic
*Size (statistics)
*Skellam distribution
*Skew normal distribution
*Skewness
*Skorokhod's representation theorem
*Slash distribution
*Slice sampling
*Sliced inverse regression
*Slutsky's theorem
*Small area estimation
*Smearing retransformation
*Smoothing
*Smoothing spline
*Smoothness (probability theory)
*Snowball sampling
*Sobel test
*Social network change detection
*Social statistics
*SOFA Statistics software
*Soliton distribution redirects to Luby transform code
*Somers' D
*Sørensen similarity index
*Spaghetti plot
*Sparse binary polynomial hashing
*Sparse PCA sparse principal components analysis
*Sparsity-of-effects principle
*Spatial analysis
*Spatial dependence
*Spatial descriptive statistics
*Spatial distribution
*Spatial econometrics
*Spatial statistics redirects to Spatial analysis
*Spatial variability
*Spearman's rank correlation coefficient
*Spearman–Brown prediction formula
*Species discovery curve
*Specification (regression) redirects to Statistical model specification
*Specificity (tests)
*Spectral clustering – (cluster analysis)
*Spectral density
*Spectral density estimation
*Spectrum bias
*Spectrum continuation analysis
*Speed prior
*Spherical design
*Split normal distribution
*SPRT redirects to Sequential probability ratio test
*SPSS software
*SPSS Clementine software (data mining)
*Spurious relationship
*Square root biased sampling
*Squared deviations
*St. Petersburg paradox
*Stability (probability)
*Stable distribution
*Stable and tempered stable distributions with volatility clustering – see Financial models with long-tailed distributions and volatility clustering
*Standard deviation
*Standard error
*Standard normal deviate
*Standard normal table
*Standard probability space
*Standard score
*Standardized coefficient
*Standardized moment
*Standardised mortality rate
*Standardized mortality ratio
*Standardized rate
*Stanine
*STAR model a time series model
*Star plot redirects to Radar chart
*Stata
*State space representation
*Statgraphics software
*Static analysis
*Stationary distribution
*Stationary ergodic process
*Stationary process
*Stationary sequence
*Stationary subspace analysis
*Statistic
*STATISTICA software
*Statistical arbitrage
*Statistical assembly
*Statistical assumption
*Statistical benchmarking
*Statistical classification
*Statistical conclusion validity
*Statistical consultant
*Statistical deviance – see deviance (statistics)
*Statistical dispersion
*Statistical distance
*Statistical efficiency
*Statistical epidemiology
*Statistical estimation redirects to Estimation theory
*Statistical finance
*Statistical genetics redirects to population genetics
*Statistical geography
*Statistical graphics
*
Statistical hypothesis testing
A statistical hypothesis test is a method of statistical inference used to decide whether the data provide sufficient evidence to reject a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. T ...
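A minimal sketch of the workflow just described: compute a test statistic (here a one-sample t-statistic for H0: mu = 0) and its p-value. The data are illustrative, and scipy is assumed to be available.

```python
# Minimal hypothesis-test sketch: one-sample t-test of H0: mu = 0.
import numpy as np
from scipy import stats

x = np.array([0.4, -0.2, 0.9, 1.3, 0.1, 0.7, 0.5, 1.1])
t = x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
p = 2 * stats.t.sf(abs(t), df=len(x) - 1)   # two-sided p-value
print(f"t = {t:.3f}, p = {p:.4f}")          # small p => reject H0
```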
*Statistical independence
*Statistical inference
*Statistical interference
*Statistical Lab software
*Statistical learning theory
*Statistical literacy
*Statistical model
*Statistical model specification
*Statistical model validation
*Statistical noise
*Statistical package – see List of statistical packages
*Statistical parameter
*Statistical parametric mapping
*Statistical parsing
*Statistical population
*Statistical power
*Statistical probability
*Statistical process control
*Statistical proof
*Statistical randomness
*Statistical range see range (statistics)
*Statistical regularity
*Statistical relational learning
*Statistical sample
*Statistical semantics
*Statistical shape analysis
*Statistical signal processing
*Statistical significance
*Statistical survey
*Statistical syllogism
*Statistical theory
*Statistical unit
*Statisticians' and engineers' cross-reference of statistical terms
*Statistics
*Statistics education
*Statistics Online Computational Resource training materials
*StatPlus
*StatXact software
*Stein's example
**Proof of Stein's example
*Stein's lemma
*Stein's unbiased risk estimate
*Steiner system
*Stemplot – see Stem-and-leaf display
*Step detection
*Stepwise regression
*Stieltjes moment problem
*Stimulus-response model
*Stochastic
*Stochastic approximation
*Stochastic calculus
*Stochastic convergence
*Stochastic differential equation
*Stochastic dominance
*Stochastic drift
*Stochastic equicontinuity
*Stochastic gradient descent
*Stochastic grammar
*Stochastic investment model
*Stochastic kernel estimation
*Stochastic matrix
*Stochastic modelling (insurance)
*Stochastic optimization
*Stochastic ordering
*Stochastic process
*Stochastic rounding
*Stochastic simulation
*Stopped process
*Stopping time
*Stratified sampling
*Stratonovich integral
*Streamgraph
*Stress majorization
*Strong law of small numbers
*Strong prior
*Structural break
*Structural equation modeling
*Structural estimation
*Structured data analysis (statistics)
*Studentized range
*Studentized residual
*Student's t-distribution
*Student's t-statistic
*Student's t-test
*Student's t-test for Gaussian scale mixture distributions – see Location testing for Gaussian scale mixture distributions
*Studentization
*Study design
*Study heterogeneity
*Subcontrary mean redirects to Harmonic mean
*Subgroup analysis
*Subindependence
*Substitution model
*SUDAAN software
*Sufficiency (statistics) – see Sufficient statistic
*Sufficient dimension reduction
*Sufficient statistic
*Sum of normally distributed random variables
*Sum of squares (disambiguation) general disambiguation
*Sum of squares (statistics) – see Partition of sums of squares
*Summary statistic
*Support curve
*Support vector machine
*Surrogate model
*Survey data collection
*Survey sampling
*Survey methodology
*Survival analysis
*Survival rate
*Survival function
*Survivorship bias
*Symmetric design
*Symmetric mean absolute percentage error
*SYSTAT software – see SYSTAT (statistics)
*System dynamics
*System identification
*Systematic error (also see bias (statistics) and errors and residuals in statistics)
*Systematic review
T
*t-distribution – see Student's t-distribution (includes table)
*T distribution (disambiguation)
*t-statistic
*Tag cloud graphical display of info
*Taguchi loss function
*Taguchi methods
*Tajima's D
*Taleb distribution
*Tampering (quality control)
*Taylor expansions for the moments of functions of random variables
*Taylor's law empirical variance-mean relations
*Telegraph process
*Test for structural change
*Test–retest reliability
*Test score
*Test set
*Test statistic
*Testimator
*Testing hypotheses suggested by the data
*Text analytics
*The Long Tail possibly seminal magazine article
*The Unscrambler software
*Theil index
*Theil–Sen estimator
*Theory of conjoint measurement
*Therapeutic effect
*Thompson sampling
*Three-point estimation
*Three-stage least squares
*Threshold model
*Thurstone scale
*Thurstonian model
*Time–frequency analysis
*Time–frequency representation
*Time reversibility
*Time series
*Time-series regression
*Time use survey
*Time-varying covariate
*Timeline of probability and statistics
*TinkerPlots proprietary software for schools
*Tobit model
*Tolerance interval
*Top-coded
*Topic model (statistical natural language processing)
*Topological data analysis
*Tornqvist index
*Total correlation
*Total least squares
*Total sum of squares
*Total survey error
*Total variation distance a statistical distance measure
*TPL Tables software
*Tracy–Widom distribution
*Traffic equations
*Training set
*Transect
*Transferable belief model
*Transiogram
*Transition rate matrix
*Treatment and control groups
*Trend analysis
*Trend estimation
*Trend-stationary process
*Treynor ratio
*Triangular distribution
*Trimean
*Trimmed estimator
*Trispectrum
*True experiment
*True variance
*Truncated distribution
*Truncated mean
*Truncated normal distribution
*Truncated regression model
*Truncation (statistics)
*Tsallis distribution
*Tsallis statistics
*Tschuprow's T
*Tucker decomposition
*Tukey's range test multiple comparisons
*Tukey's test of additivity interaction in two-way ANOVA
*Tukey–Duckworth test
*Tukey–Kramer method
*Tukey lambda distribution
*Tweedie distribution
*Twisting properties
*Two stage least squares redirects to Instrumental variable
*Two-tailed test
*Two-way analysis of variance
*Type I and type II errors
*Type-1 Gumbel distribution
*Type-2 Gumbel distribution
*Tyranny of averages
U
*u-chart
*U-quadratic distribution
*U-statistic
*U test
*Umbrella sampling
*Unbiased estimator – see bias (statistics)
*Unbiased estimation of standard deviation
*Uncertainty
*Uncertainty coefficient
*Uncertainty quantification
*Uncomfortable science
*Uncorrelated
*Underdispersion redirects to Overdispersion
*Underfitting redirects to Overfitting
*Underprivileged area score
*Unevenly spaced time series
*Unexplained variation – see Explained variation
*Uniform distribution (continuous)
*Uniform distribution (discrete)
*Uniformly most powerful test
*Unimodal distribution redirects to Unimodal function (has some stats context)
*Unimodality
*Unit (statistics)
*Unit of observation
*Unit root
*Unit root test
*Unit-weighted regression
*Unitized risk
*Univariate
*Univariate analysis
*Univariate distribution
*Unmatched count
*Unseen species problem
*Unsolved problems in statistics
*Upper and lower probabilities
*Upside potential ratio finance
*Urn problem
*Ursell function
*Utility maximization problem
*Utilization distribution
V
*Validity (statistics)
*Van der Waerden test
*Van Houtum distribution
*Vapnik–Chervonenkis theory
*Varadhan's lemma
*Variable (mathematics)
*Variable kernel density estimation
*Variable-order Bayesian network
*Variable-order Markov model
*Variable rules analysis
*Variance
*Variance decomposition of forecast errors
*Variance gamma process
*Variance inflation factor
*Variance-gamma distribution
*Variance reduction
*Variance-stabilizing transformation
*Variance-to-mean ratio
*Variation ratio
*Variational Bayesian methods
*Variational message passing
*Variogram
*Varimax rotation
*Vasicek model
*VC dimension
*VC theory
*Vector autoregression
*VEGAS algorithm
*Violin plot
*ViSta software – see ViSta, The Visual Statistics system
*Voigt profile
*Volatility (finance)
*Volcano plot (statistics)
*Von Mises distribution
*Von Mises–Fisher distribution
*V-optimal histograms
*V-statistic
*Vuong's closeness test
*Vysochanskiï–Petunin inequality
W
*Wait list control group
*Wald distribution redirects to Inverse Gaussian distribution
*Wald test
*Wald–Wolfowitz runs test
*Wallenius' noncentral hypergeometric distribution
*Wang and Landau algorithm
*Ward's method
*Watterson estimator
*Watts and Strogatz model
*Weibull chart redirects to Weibull distribution
*Weibull distribution
*Weibull modulus
*Weight function
*Weighted covariance matrix redirects to Sample mean and sample covariance
*Weighted mean
*Weighted median
*Weighted sample redirects to Sample mean and sample covariance
*Welch's method spectral density estimation
*Welch's t test
*Welch–Satterthwaite equation
*Well-behaved statistic
*Whipple's index
*White noise
*White test
*Wick product
*Wide and narrow data
*Wiener deconvolution
*Wiener filter
*Wiener process
*Wigner quasi-probability distribution
*Wigner semicircle distribution
*Wike's law of low odd primes
*Wilcoxon signed-rank test
*Wilks' lambda distribution
*Wilks' theorem
*Will Rogers phenomenon
*WinBUGS software
*Window function
*Winpepi software
*Winsorising
*Winsorized mean
*Wishart distribution
*Wold's theorem
*Wombling
*Working–Hotelling procedure
*World Programming System software
*Wrapped Cauchy distribution
*Wrapped distribution
*Wrapped exponential distribution
*Wrapped Lévy distribution
*Wrapped normal distribution
*Writer invariant
X
*X-12-ARIMA
*X-bar chart, x̄ chart
*Xbar and R chart, x̄ and R chart
*Xbar and s chart, x̄ and s chart
*XLispStat software
*XploRe software
Y
*Yamartino method
*Yates analysis
*Yates's correction for continuity
*Youden's J statistic
*Yule–Simon distribution
Z
*z-factor
*z-score
*z statistic
*Z-test
*Z-transform
*Zakai equation
*Zelen's design
*Zero degrees of freedom
*Zero–one law (disambiguation)
*Zeta distribution
*Ziggurat algorithm
*Zipf–Mandelbrot law a discrete distribution
*Zipf's law
See also
Supplementary lists
These lists include items that are related to statistics but are not included in this index:
* List of statisticians
* List of important publications in statistics
* List of scientific journals in statistics
Topic lists
* Outline of statistics
* List of probability topics
* Glossary of probability and statistics
* Glossary of experimental design
* Notation in probability and statistics
* List of probability distributions
* List of graphical methods
* List of fields of application of statistics
* List of stochastic processes topics
* Lists of statistics topics
* List of statistical packages
External links
ISI Glossary of Statistical Terms (multilingual), International Statistical Institute