Mean Signed Difference
In statistics, the mean signed difference (MSD), also known as the mean signed deviation, mean signed error, or mean bias error, is a sample statistic that summarizes how well a set of estimates \hat{\theta}_i match the quantities \theta_i that they are supposed to estimate. It is one of a number of statistics that can be used to assess an estimation procedure, and it would often be used in conjunction with a sample version of the mean squared error. For example, suppose a linear regression model has been estimated over a sample of data and is then used to predict the dependent variable out of sample, after the out-of-sample data points have become available. Then \theta_i would be the ''i''-th out-of-sample value of the dependent variable and \hat{\theta}_i would be its predicted value. The mean signed difference is the average value of \hat{\theta}_i - \theta_i.


Definition

The mean signed difference is derived from a set of ''n'' pairs, (\hat{\theta}_i, \theta_i), where \hat{\theta}_i is an estimate of the parameter \theta in a case where it is known that \theta = \theta_i. In many applications, all the quantities \theta_i will share a common value. When applied to forecasting in a time series analysis context, a forecasting procedure might be evaluated using the mean signed difference, with \hat{\theta}_i being the predicted value of a series at a given lead time and \theta_i being the value of the series eventually observed for that time point. The mean signed difference is defined to be

:\operatorname{MSD}(\hat{\theta}) = \frac{1}{n}\sum_{i=1}^{n} \left( \hat{\theta}_i - \theta_i \right).
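The definition above is straightforward to compute directly. As an illustrative sketch (the function name and the sample forecast values are invented for this example), the MSD of a set of predictions against eventually observed values can be calculated as:

```python
def mean_signed_difference(estimates, true_values):
    """Average of (estimate - true value) over all n pairs."""
    n = len(estimates)
    return sum(e - t for e, t in zip(estimates, true_values)) / n

# Hypothetical forecasts vs. the values eventually observed
predicted = [102.0, 98.5, 101.0, 99.5]
observed = [100.0, 100.0, 100.0, 100.0]

print(mean_signed_difference(predicted, observed))  # 0.25
```

Note that, unlike the mean squared error, positive and negative errors cancel here: the differences 2.0, -1.5, 1.0, and -0.5 average to 0.25, so the MSD reports the net direction of the errors rather than their typical magnitude.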


Use cases

The mean signed difference is often useful when the estimates \hat{\theta}_i are biased from the true values \theta_i in a certain direction. If the estimator that produces the \hat{\theta}_i values is unbiased, then \operatorname{MSD}(\hat{\theta}) = 0. However, if the estimates are produced by a biased estimator, then the mean signed difference is a useful tool to understand the direction of the estimator's bias.
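This use of the MSD as a bias diagnostic can be sketched as follows. The scenario is hypothetical: a "estimator" that adds Gaussian noise plus a deliberate +0.5 offset to a known true parameter value, so the sign of the resulting MSD should reveal the upward bias.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def mean_signed_difference(estimates, true_values):
    """Average of (estimate - true value) over all n pairs."""
    n = len(estimates)
    return sum(e - t for e, t in zip(estimates, true_values)) / n

# All cases share a common true value (theta_i = 10.0 for every i).
theta = 10.0
true_values = [theta] * 1000

# A deliberately biased estimator: true value + noise + a +0.5 offset.
estimates = [theta + random.gauss(0, 1) + 0.5 for _ in true_values]

msd = mean_signed_difference(estimates, true_values)
print(msd > 0)  # True: a positive MSD indicates upward bias
```

With an unbiased estimator the noise terms would average toward zero and the MSD would be close to 0; here the persistent +0.5 offset survives the averaging, so the MSD comes out positive and points to the direction of the bias.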


See also

* Bias of an estimator
* Deviation (statistics)
* Mean absolute difference
* Mean absolute error

