Rate Parameter
In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions. The larger the scale parameter, the more spread out the distribution. Definition: if a family of probability distributions is such that there is a parameter ''s'' (and other parameters ''θ'') for which the cumulative distribution function satisfies F(x;s,\theta) = F(x/s;1,\theta), then ''s'' is called a scale parameter, since its value determines the "scale" or statistical dispersion of the probability distribution. If ''s'' is large, then the distribution will be more spread out; if ''s'' is small, then it will be more concentrated. If the probability density exists for all values of the complete parameter set, then the density (as a function of the scale parameter only) satisfies f_s(x) = f(x/s)/s, where ''f'' is the density of a standardized version of the distribution, i.e. f(x) \equiv f_1(x). An estimator of a scale ...
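To make the density relation concrete, the following sketch checks it numerically, using the normal distribution from SciPy as the standardized density; the scale value 2.5 and the use of SciPy are illustrative assumptions, not part of the excerpt above.

```python
# Numerical check of the scale-parameter relation f_s(x) = f(x/s) / s.
# The normal family and the scale value are illustrative assumptions.
import numpy as np
from scipy.stats import norm

s = 2.5                          # hypothetical scale parameter
x = np.linspace(-5.0, 5.0, 11)   # evaluation points

lhs = norm.pdf(x, scale=s)       # density of the scaled distribution
rhs = norm.pdf(x / s) / s        # standardized density, rescaled by 1/s

assert np.allclose(lhs, rhs)     # the relation holds at every test point
```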
Probability Theory
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms of probability. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is no ...
Normal Distribution
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}. The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma^2 is the variance. The standard deviation of the distribution is \sigma (sigma). A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution c ...
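As a quick illustration of the density formula, the sketch below evaluates it directly and compares the result with SciPy's implementation; the values of \mu and \sigma are arbitrary examples.

```python
# Evaluate the normal density from the formula and compare with SciPy.
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 2.0             # arbitrary example parameters
x = np.linspace(-5.0, 7.0, 9)

pdf_formula = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
pdf_scipy = norm.pdf(x, loc=mu, scale=sigma)

assert np.allclose(pdf_formula, pdf_scipy)
```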
Invariant Estimator
In statistics, the concept of being an invariant estimator is a criterion that can be used to compare the properties of different estimators for the same quantity. It is a way of formalising the idea that an estimator should have certain intuitively appealing qualities. Strictly speaking, "invariant" would mean that the estimates themselves are unchanged when both the measurements and the parameters are transformed in a compatible way, but the meaning has been extended to allow the estimates to change in appropriate ways with such transformations. The term equivariant estimator is used in formal mathematical contexts that include a precise description of the relation of the way the estimator changes in response to changes to the dataset and parameterisation: this corresponds to the use of "equivariance" in more general mathematics. General setting: Background: in statistical inference, there are several approaches to estimation theory that can be used to decide immediately what es ...
Central Tendency
In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution (Weisberg, H.F. (1992) ''Central Tendency and Variability'', Sage University Paper Series on Quantitative Applications in the Social Sciences, p. 2). Colloquially, measures of central tendency are often called ''averages''. The term ''central tendency'' dates from the late 1920s. The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote "the tendency of quantitative data to cluster around some central value" (Upton, G.; Cook, I. (2008) ''Oxford Dictionary of Statistics'', OUP, entry for "central tendency"; Dodge, Y. (2003) ''The Oxford Dictionary of Statistical Terms'', OUP for International Statistical Institute, entry for "cent ...
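A minimal sketch of the three common measures on a small made-up sample (the data values are hypothetical):

```python
import statistics

data = [2, 3, 3, 5, 7, 10]           # hypothetical sample
print(statistics.mean(data))         # arithmetic mean: 5.0
print(statistics.median(data))       # median: 4.0
print(statistics.mode(data))         # mode: 3
```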
Average Absolute Deviation
The average absolute deviation (AAD) of a data set is the average of the absolute deviations from a central point. It is a summary statistic of statistical dispersion or variability. In the general form, the central point can be a mean, median, mode, or the result of any other measure of central tendency or any reference value related to the given data set. AAD includes the mean absolute deviation and the ''median absolute deviation'' (both abbreviated as MAD). Measures of dispersion: several measures of statistical dispersion are defined in terms of the absolute deviation. The term "average absolute deviation" does not uniquely identify a measure of statistical dispersion, as there are several measures that can be used to measure absolute deviations, and there are several measures of central tendency that can be used as well. Thus, to uniquely identify the absolute deviation it is necessary to specify both the measure of deviation and the measure of central tendency. The st ...
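The following sketch illustrates why the central point must be specified: the average absolute deviation about the mean differs from the one about the median, here computed for the same small data set used in the median absolute deviation example below.

```python
import statistics

data = [1, 1, 2, 2, 4, 6, 9]

mean_c = statistics.mean(data)       # 25/7, about 3.571
median_c = statistics.median(data)   # 2

aad_mean = sum(abs(x - mean_c) for x in data) / len(data)      # about 2.37
aad_median = sum(abs(x - median_c) for x in data) / len(data)  # about 2.14
print(aad_mean, aad_median)
```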
Median Absolute Deviation
In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample. For a univariate data set ''X''1, ''X''2, ..., ''Xn'', the MAD is defined as the median of the absolute deviations from the data's median \tilde{X} = \operatorname{median}(X): \operatorname{MAD} = \operatorname{median}(|X_i - \tilde{X}|), that is, starting with the residuals (deviations) from the data's median, the MAD is the median of their absolute values. Example: consider the data (1, 1, 2, 2, 4, 6, 9). It has a median value of 2. The absolute deviations about 2 are (1, 1, 0, 0, 2, 4, 7) which in turn have a median value of 1 (because the sorted absolute deviations are (0, 0, 1, 1, 2, 4, 7)). So the median absolute deviation for this data is 1. Uses: the median absolute deviation is a measure of statistical dispersion. Moreover, the MAD is a rob ...
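A short sketch reproducing the worked example above:

```python
import statistics

data = [1, 1, 2, 2, 4, 6, 9]
med = statistics.median(data)            # 2
abs_dev = [abs(x - med) for x in data]   # [1, 1, 0, 0, 2, 4, 7]
print(statistics.median(abs_dev))        # 1, the MAD of this data
```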
Quantile Function
In probability and statistics, the quantile function is a function Q: [0,1] \mapsto \mathbb{R} which maps some probability x \in [0,1] of a random variable v to the value of the variable y such that P(v\leq y) = x according to its probability distribution. In other words, the function returns the value of the variable below which the specified cumulative probability is contained. For example, if the distribution is a standard normal distribution then Q(0.5) will return 0, as 0.5 of the probability mass is contained below 0. The quantile function is also called the percentile function (after the percentile), percent-point function, inverse cumulative distribution function (after the cumulative distribution function or c.d.f.) or inverse distribution function. Definition: Strictly increasing distribution function: with reference to a continuous and strictly increasing cumulative distribution function (c.d.f.) F_X\colon \mathbb{R} \to [0,1] of a random variable ''X'', the quantile function ...
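As an illustration, SciPy's percent-point function (ppf) is its implementation of the quantile function; the example below uses the standard normal distribution, and the probability values are arbitrary.

```python
from scipy.stats import norm

print(norm.ppf(0.5))    # 0.0: half of the probability mass lies below 0
print(norm.ppf(0.975))  # about 1.96, the familiar 97.5% point
print(norm.cdf(norm.ppf(0.25)))  # 0.25: ppf inverts the cdf
```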
Standard Deviation
In statistics, the standard deviation is a measure of the amount of variation of the values of a variable about its mean. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used in the determination of what constitutes an outlier and what does not. Standard deviation may be abbreviated SD or std dev, and is most commonly represented in mathematical texts and equations by the lowercase Greek letter σ (sigma), for the population standard deviation, or the Latin letter ''s'', for the sample standard deviation. The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. (For a finite population, v ...
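A minimal sketch of the relation between standard deviation and variance, using a small hypothetical sample and the population (ddof = 0) convention:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]      # hypothetical sample
var = statistics.pvariance(data)     # population variance: 4.0
sd = statistics.pstdev(data)         # population standard deviation: 2.0
assert math.isclose(sd, math.sqrt(var))
```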
Scale Factor
In affine geometry, uniform scaling (or isotropic scaling) is a linear transformation that enlarges (increases) or shrinks (diminishes) objects by a ''scale factor'' that is the same in all directions (isotropically). The result of uniform scaling is similar (in the geometric sense) to the original. A scale factor of 1 is normally allowed, so that congruent shapes are also classed as similar. Uniform scaling happens, for example, when enlarging or reducing a photograph, or when creating a scale model of a building, car, airplane, etc. More general is scaling with a separate scale factor for each axis direction. Non-uniform scaling (anisotropic scaling) is obtained when at least one of the scaling factors is different from the others; a special case is directional scaling or stretching (in one direction). Non-uniform scaling changes the shape of the object; e.g. a square may change into a rectangle, or into a parallelogram if the sides of the square are not parallel to the s ...
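A brief sketch contrasting uniform and non-uniform scaling applied to the corners of a unit square; the factors 2 and (2, 0.5) are arbitrary examples.

```python
import numpy as np

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

uniform = np.diag([2.0, 2.0])        # same factor in every direction: result is similar to the original
non_uniform = np.diag([2.0, 0.5])    # different factors: the square becomes a rectangle

print(square @ uniform.T)
print(square @ non_uniform.T)
```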
Consistent Estimator
In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter ''θ''0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to ''θ''0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to ''θ''0 converges to one. In practice one constructs an estimator as a function of an available sample of size ''n'', and then imagines being able to keep collecting data and expanding the sample ''ad infinitum''. In this way one would obtain a sequence of estimates indexed by ''n'', and consistency is a property of what occurs as the sample size "grows to infinity". If the sequence of estimates can be mathematically shown to converge in probability to the true value '' ...
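A quick simulation sketch of this idea, assuming normally distributed observations with a true mean of 3.0 (both choices are arbitrary): the sample mean, a consistent estimator of the mean, moves closer to the true value as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 3.0
for n in (10, 1_000, 100_000):
    sample = rng.normal(loc=true_mean, scale=2.0, size=n)
    print(n, sample.mean())   # estimates concentrate around 3.0 as n increases
```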
Statistical Dispersion
In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range. For instance, when the variance of data in a set is large, the data is widely scattered. On the other hand, when the variance is small, the data in the set is clustered. Dispersion is contrasted with location or central tendency, and together they are the most used properties of distributions. Measures of statistical dispersion: a measure of statistical dispersion is a nonnegative real number that is zero if all the data are the same and increases as the data become more diverse. Most measures of dispersion have the same units as the quantity being measured. In other words, if the measurements are in metres or seconds, so is the measure of dispersion. Examples of dispersion measures include:
* Standard deviation
* Interquartile ...
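A small sketch computing three common dispersion measures for one hypothetical sample (variance is reported in squared units, the other two in the units of the data):

```python
import numpy as np

data = np.array([2, 4, 4, 4, 5, 5, 7, 9], dtype=float)

print(data.var())   # population variance: 4.0 (squared units)
print(data.std())   # standard deviation: 2.0
print(np.percentile(data, 75) - np.percentile(data, 25))   # interquartile range
```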