Outlier Rejection Technique

In statistics, an outlier is a data point that differs significantly from other observations. An outlier may be due to variability in the measurement, an indication of novel data, or it may be the result of experimental error; the latter are sometimes excluded from the data set. An outlier can be an indication of an exciting possibility, but can also cause serious problems in statistical analyses.

Outliers can occur by chance in any distribution, but they can indicate novel behaviour or structures in the data set, measurement error, or that the population has a heavy-tailed distribution. In the case of measurement error, one wishes to discard the outliers or use statistics that are robust to them, while in the case of heavy-tailed distributions, they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by a mixture model.

In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition).

Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.

Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean; naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.

Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not.
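
The oven example is easy to check numerically. Below is a minimal sketch in Python (the nine room readings are hypothetical values chosen to lie between 20 and 25 °C):

    from statistics import mean, median

    # Nine room-temperature objects (illustrative values) plus one oven.
    temps = [20.1, 21.5, 22.0, 22.4, 23.0, 23.3, 24.1, 24.6, 25.0, 175.0]

    print(mean(temps))    # 38.1 °C: pulled above every room object by the oven
    print(median(temps))  # 23.15 °C: unaffected by the single extreme value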


Occurrence and causes

In the case of normally distributed data, the three sigma rule means that roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation. In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number (see Poisson distribution), and does not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.

In general, if the nature of the population distribution is known ''a priori'', it is possible to test whether the number of outliers deviates significantly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability ''p'') of a given distribution, the number of outliers will follow a binomial distribution with parameter ''p'', which can generally be well-approximated by the Poisson distribution with λ = ''pn''. Thus if one takes a normal distribution with a cutoff 3 standard deviations from the mean, ''p'' is approximately 0.3%, and for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3.
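
A minimal sketch of this calculation, assuming SciPy is available (any binomial or Poisson implementation would do equally well):

    from scipy.stats import norm, poisson

    n = 1000
    p = 2 * norm.sf(3)        # two-sided tail probability beyond 3 sigma, ~0.0027
    lam = n * p               # Poisson approximation to Binomial(n, p), ~2.7

    # Probability of seeing five or more 3-sigma observations in 1000 draws:
    print(poisson.sf(4, lam)) # ~0.14 -- five such points are not anomalous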


Causes

Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (the king effect).


Definitions and detection

There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise. There are various methods of outlier detection, some of which are treated as synonymous with novelty detection. Some are graphical, such as normal probability plots. Others are model-based. Box plots are a hybrid.

Model-based methods which are commonly used for identification assume that the data are from a normal distribution, and identify observations which are deemed "unlikely" based on mean and standard deviation (a minimal sketch of this approach follows the list):
* Chauvenet's criterion
* Grubbs's test for outliers
* Dixon's ''Q'' test
* ASTM E178: Standard Practice for Dealing With Outlying Observations
* Mahalanobis distance and leverage are often used to detect outliers, especially in the development of linear regression models.
* Subspace and correlation based techniques for high-dimensional numerical data
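
As a concrete sketch of the mean-and-standard-deviation approach above (the three-standard-deviation threshold is a common convention, not mandated by any of the listed tests):

    import numpy as np

    def flag_by_zscore(x, threshold=3.0):
        """Mark points deemed 'unlikely' under a normal model:
        |x - mean| exceeding `threshold` sample standard deviations."""
        x = np.asarray(x, dtype=float)
        z = np.abs(x - x.mean()) / x.std(ddof=1)  # sample standard deviation
        return z > threshold

    data = np.array([9.7, 9.9, 10.0, 10.1, 10.2, 9.8,
                     10.0, 10.1, 9.9, 10.0, 10.3, 25.0])
    print(flag_by_zscore(data))  # flags only the 25.0 entry

Note that this approach suffers from masking: the outlier itself inflates the sample standard deviation, which is one motivation for the robust alternatives discussed elsewhere in this article.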


Peirce's criterion

"It is proposed to determine in a series of ''m'' observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as ''n'' such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations." (Quoted in the editorial note on page 516 to Peirce (1982 edition) from ''A Manual of Astronomy'' 2:558 by Chauvenet.)


Tukey's fences

Other methods flag observations based on measures such as the interquartile range. For example, if Q_1 and Q_3 are the lower and upper quartiles respectively, then one could define an outlier to be any observation outside the range:

: \big[\, Q_1 - k (Q_3 - Q_1),\; Q_3 + k (Q_3 - Q_1) \,\big]

for some nonnegative constant k. John Tukey proposed this test, where k = 1.5 indicates an "outlier", and k = 3 indicates data that is "far out".
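
A minimal sketch of Tukey's fences in Python; quartile conventions differ between implementations (NumPy's default linear interpolation is assumed here), so borderline points may be flagged differently elsewhere:

    import numpy as np

    def tukey_fences(x, k=1.5):
        """Flag observations outside [Q1 - k*IQR, Q3 + k*IQR].
        k=1.5 gives Tukey's 'outlier' fences; k=3 gives 'far out'."""
        x = np.asarray(x, dtype=float)
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        return (x < q1 - k * iqr) | (x > q3 + k * iqr)

    data = np.array([2.1, 2.4, 2.5, 2.6, 2.8, 3.0, 3.1, 9.0])
    print(tukey_fences(data))          # flags 9.0 at k=1.5
    print(tukey_fences(data, k=3.0))   # 9.0 remains flagged even as 'far out'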


In anomaly detection

In various domains such as, but not limited to, statistics, signal processing, finance, econometrics, manufacturing, networking and data mining, the task of ''anomaly detection'' may take other approaches. Some of these may be distance-based and density-based, such as Local Outlier Factor (LOF). Some approaches may use the distance to the k-nearest neighbors to label observations as outliers or non-outliers.
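
As a rough illustration of the distance-based idea (a generic k-nearest-neighbour distance score, not any one published algorithm):

    import numpy as np

    def knn_distance_scores(X, k=3):
        """Score each row of X by the Euclidean distance to its k-th
        nearest neighbour; larger scores indicate more isolated points."""
        X = np.asarray(X, dtype=float)
        # Pairwise Euclidean distances (brute force, O(n^2) memory).
        diff = X[:, None, :] - X[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        np.fill_diagonal(dist, np.inf)       # a point is not its own neighbour
        return np.sort(dist, axis=1)[:, k - 1]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, size=(30, 2)), [[8.0, 8.0]]])
    print(knn_distance_scores(X).argmax())   # index 30: the isolated point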


Modified Thompson Tau test

The modified Thompson Tau test is a method used to determine if an outlier exists in a data set. The strength of this method lies in the fact that it takes into account a data set's standard deviation and average, and provides a statistically determined rejection zone, thus giving an objective method to determine if a data point is an outlier.

How it works: first, a data set's average is determined. Next, the absolute deviation between each data point and the average is determined. Thirdly, a rejection region is determined using the formula

: \text{Rejection Region} = \frac{t_{\alpha/2}\,(n-1)}{\sqrt{n}\,\sqrt{n-2+t_{\alpha/2}^2}},

where t_{\alpha/2} is the critical value from the Student's ''t'' distribution with ''n''−2 degrees of freedom, ''n'' is the sample size, and ''s'' is the sample standard deviation. To determine if a value is an outlier, calculate \delta = |(X - \text{mean}(X))/s|. If ''δ'' > Rejection Region, the data point is an outlier; if ''δ'' ≤ Rejection Region, the data point is not an outlier.

The modified Thompson Tau test is used to find one outlier at a time (the largest value of ''δ'' is removed if it is an outlier). Meaning, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process is continued until no outliers remain in the data set (a sketch of a single pass is given at the end of this section).

Some work has also examined outliers for nominal (or categorical) data. In the context of a set of examples (or instances) in a data set, instance hardness measures the probability that an instance will be misclassified (1 - p(y \mid x), where ''y'' is the assigned class label and ''x'' represents the input attribute values for an instance in the training set ''t''). Ideally, instance hardness would be calculated by summing over the set of all possible hypotheses ''H'':

: \begin{align} IH(\langle x, y\rangle) &= \sum_{h \in H} (1 - p(y \mid x, h))\,p(h \mid t)\\ &= \sum_{h \in H} p(h \mid t) - p(y \mid x, h)\,p(h \mid t)\\ &= 1 - \sum_{h \in H} p(y \mid x, h)\,p(h \mid t). \end{align}

Practically, this formulation is unfeasible as ''H'' is potentially infinite and calculating p(h \mid t) is unknown for many algorithms. Thus, instance hardness can be approximated using a diverse subset L \subset H:

: IH_L(\langle x, y\rangle) = 1 - \frac{1}{|L|} \sum_{j=1}^{|L|} p(y \mid x, g_j(t, \alpha)),

where g_j(t, \alpha) is the hypothesis induced by learning algorithm g_j trained on training set ''t'' with hyperparameters \alpha. Instance hardness provides a continuous value for determining if an instance is an outlier instance.
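
A sketch of a single pass of the test as described, assuming SciPy for the Student's ''t'' quantile and a conventional α = 0.05 (the original text does not fix α):

    import numpy as np
    from scipy.stats import t

    def thompson_tau_once(x, alpha=0.05):
        """One pass of the modified Thompson tau test: return (index, is_outlier)
        for the point with the largest standardized deviation delta."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = x.std(ddof=1)
        delta = np.abs(x - x.mean()) / s
        t_crit = t.ppf(1 - alpha / 2, df=n - 2)      # two-sided critical value
        tau = t_crit * (n - 1) / (np.sqrt(n) * np.sqrt(n - 2 + t_crit ** 2))
        i = delta.argmax()
        return i, delta[i] > tau

    data = np.array([48.9, 49.2, 49.2, 49.3, 49.3, 49.8, 50.1, 55.1])
    print(thompson_tau_once(data))  # (7, True): the 55.1 reading is rejected
    # In practice the flagged point is removed and the test repeated with a
    # new average and rejection region until no further outliers are found.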


Working with outliers

The choice of how to deal with an outlier should depend on the cause. Some estimators are highly sensitive to outliers, notably the estimation of covariance matrices.


Retention

Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not be automatically discarded when they occur. Instead, one should use a method that is robust to outliers to model or analyze data with naturally occurring outliers.


Exclusion

When deciding whether to remove an outlier, the cause has to be considered. As mentioned earlier, if the outlier's origin can be attributed to an experimental error, or if it can be otherwise determined that the outlying data point is erroneous, it is generally recommended to remove it. However, it is more desirable to correct the erroneous value, if possible. Removing a data point solely because it is an outlier, on the other hand, is a controversial practice, often frowned upon by many scientists and science instructors, as it typically invalidates statistical results. While mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known.

The two common approaches to exclude outliers are truncation (or trimming) and Winsorising. Trimming discards the outliers whereas Winsorising replaces the outliers with the nearest "nonsuspect" data. Exclusion can also be a consequence of the measurement process, such as when an experiment is not entirely capable of measuring such extreme values, resulting in censored data.

In regression problems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the estimated coefficients, using a measure such as Cook's distance.

If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report.
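
A brief sketch contrasting the two approaches, using 5th/95th percentile fences as an illustrative (not prescribed) choice of cutoffs; note that clipping to the fences only approximates classical Winsorising, which replaces extremes with the nearest retained observation:

    import numpy as np

    data = np.array([1.2, 1.5, 1.7, 1.8, 2.0, 2.1, 2.3, 2.4, 2.6, 19.0])
    lo, hi = np.percentile(data, [5, 95])

    trimmed    = data[(data >= lo) & (data <= hi)]  # discard suspect points
    winsorized = np.clip(data, lo, hi)              # pull them to the fences

    # The trimmed and winsorized means are both far less affected by the
    # extreme 19.0 reading than the raw mean:
    print(trimmed.mean(), winsorized.mean(), data.mean())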


Non-normal distributions

The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution, the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. Even a slight difference in the fatness of the tails can make a large difference in the expected number of extreme values.
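
The failure of the Cauchy sample mean to converge is easy to observe empirically; a minimal sketch (seed and sample sizes arbitrary):

    import numpy as np

    rng = np.random.default_rng(42)
    for n in (10**2, 10**4, 10**6):
        print(n, rng.standard_cauchy(n).mean())
    # The sample means do not settle down as n grows, unlike for a normal
    # sample, because the Cauchy distribution has no finite mean.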


Set-membership uncertainties

A set membership approach considers that the uncertainty corresponding to the ''i''th measurement of an unknown random vector ''x'' is represented by a set ''X''i (instead of a probability density function). If no outliers occur, ''x'' should belong to the intersection of all ''X''i's. When outliers occur, this intersection could be empty, and we should relax a small number of the sets ''X''i (as small as possible) in order to avoid any inconsistency. This can be done using the notion of the ''q''-relaxed intersection: the set of all ''x'' which belong to all sets except ''q'' of them. Sets ''X''i that do not intersect the ''q''-relaxed intersection could be suspected to be outliers.
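
For scalar measurements represented as intervals, the ''q''-relaxed intersection can be computed by a sweep over interval endpoints. A minimal sketch with illustrative intervals and q = 1:

    def q_relaxed_intersection(intervals, q):
        """Return sub-intervals covered by at least len(intervals) - q of the
        given [lo, hi] intervals (the q-relaxed intersection on the real line)."""
        events = sorted({e for lo, hi in intervals for e in (lo, hi)})
        need = len(intervals) - q
        out = []
        for a, b in zip(events, events[1:]):
            mid = (a + b) / 2
            if sum(lo <= mid <= hi for lo, hi in intervals) >= need:
                if out and out[-1][1] == a:
                    out[-1][1] = b          # merge adjacent covered pieces
                else:
                    out.append([a, b])
        return out

    # Three consistent measurements and one suspect interval:
    X = [(2.0, 5.0), (3.0, 6.0), (2.5, 5.5), (9.0, 10.0)]
    print(q_relaxed_intersection(X, q=1))   # [[3.0, 5.0]]: (9, 10) is relaxed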


Alternative models

In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model or a mixture model.


See also

* Anomaly (natural sciences)
* Novelty detection
* Anscombe's quartet
* Data transformation (statistics)
* Extreme value theory
* Influential observation
* Random sample consensus
* Robust regression
* Studentized residual
* Winsorizing


External links

* Grubbs test described by the NIST manual