Tukey G- And H-distribution
John Wilder Tukey (June 16, 1915 – July 26, 2000) was an American mathematician and statistician, best known for the development of the fast Fourier transform (FFT) algorithm and the box plot. The Tukey range test, the Tukey lambda distribution, the Tukey test of additivity, and the Teichmüller–Tukey lemma all bear his name. He is also credited with coining the term ''bit'' and with the first published use of the word ''software''. Biography Tukey was born in New Bedford, Massachusetts, in 1915, to a Latin-teacher father and a mother who worked as a private tutor. He was mainly taught by his mother and attended regular classes only for certain subjects, such as French. Tukey obtained a B.A. in 1936 and an M.S. in 1937, both in chemistry, from Brown University, before moving to Princeton University, where in 1939 he received a PhD in mathematics after completing a doctoral dissertation titled "On denumerability in topology". During World War II, Tukey worked at the Fire Control Research Office and collaborated ...

New Bedford, Massachusetts
New Bedford is a city in Bristol County, Massachusetts, United States. It is located on the Acushnet River in what is known as the South Coast region. At the 2020 census, New Bedford had a population of 101,079, making it the state's ninth-largest city and the largest of the South Coast region. It is the second-largest city in the Providence-New Bedford, RI-MA Metropolitan Statistical Area, which is also a part of the greater Boston, Massachusetts Combined Statistical Area. Up through the 17th century, the area was the territory of the Wampanoag Indians. English colonists bought the land on which New Bedford would later be built from the Wampanoag in 1652, and the original colonial settlement that would later become the city was founded by English Quakers in the late 17th century. The town of New Bedford itself was officially incorporated in 1787. During the first half of the 19th century, New Bedford was one of the world's most important whaling ports. At its economic hei ...

Anscombe Transform
In statistics, the Anscombe transform, named after Francis Anscombe, is a variance-stabilizing transformation that transforms a random variable with a Poisson distribution into one with an approximately standard Gaussian distribution. The Anscombe transform is widely used in photon-limited imaging (astronomy, X-ray) where images naturally follow the Poisson law. The Anscombe transform is usually used to pre-process the data in order to make the standard deviation approximately constant. Then denoising algorithms designed for the framework of additive white Gaussian noise are used; the final estimate is then obtained by applying an inverse Anscombe transformation to the denoised data. Definition For the Poisson distribution the mean m and variance v are not independent: m = v. The Anscombe transform : A : x \mapsto 2\sqrt{x + \tfrac{3}{8}} \, aims at transforming the data so that the variance becomes approximately 1 for large enough mean; for mean zero, the variance is still zero. It transform ...
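The variance-stabilizing effect described above can be checked empirically. The sketch below, a minimal illustration rather than a production implementation, draws Poisson counts (using Knuth's simple multiplication method, an assumption of convenience; any Poisson sampler would do) and shows that after the transform the sample variance sits near 1:

```python
import math
import random

def anscombe(x):
    """Anscombe transform: 2 * sqrt(x + 3/8), stabilizes Poisson variance."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def poisson(lam):
    """Knuth's simple Poisson sampler (illustrative, not the fastest)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)
lam = 20.0
raw = [poisson(lam) for _ in range(20000)]
transformed = [anscombe(x) for x in raw]

print(variance(raw))          # near lam = 20, since mean = variance for Poisson
print(variance(transformed))  # near 1 after the transform
```

The raw counts have variance equal to their mean (here 20), while the transformed values have variance close to 1 regardless of the mean, which is what lets Gaussian denoising machinery be applied afterwards.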

Outlier
In statistics, an outlier is a data point that differs significantly from other observations. An outlier may be due to variability in the measurement, an indication of novel data, or the result of experimental error; the latter are sometimes excluded from the data set. An outlier can be an indication of an exciting possibility, but can also cause serious problems in statistical analyses. Outliers can occur by chance in any distribution, but they can indicate novel behaviour or structure in the data set, measurement error, or that the population has a heavy-tailed distribution. In the case of measurement error, one wishes to discard them or use statistics that are robust to outliers, while in the case of heavy-tailed distributions, they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, wh ...
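One widely used rule for flagging outliers, due to Tukey and underlying the whiskers of his box plot, marks any point outside the fences Q1 − 1.5·IQR and Q3 + 1.5·IQR. A minimal sketch (the quartile interpolation convention is one of several in use):

```python
def quartiles(xs):
    """Lower and upper quartiles via linear interpolation (one common convention)."""
    s = sorted(xs)
    def q(p):
        idx = p * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.75)

def tukey_fence_outliers(xs, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR], Tukey's boxplot rule."""
    q1, q3 = quartiles(xs)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in xs if x < lo or x > hi]

data = [10, 12, 11, 13, 12, 11, 95]   # 95 looks like a gross error
print(tukey_fence_outliers(data))     # [95]
```

Because the fences are built from quartiles rather than the mean and standard deviation, the rule itself is resistant to the very outliers it is trying to detect.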

Robust Statistics
Robust statistics are statistics that maintain their properties even if the underlying distributional assumptions are incorrect. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from a parametric distribution. For example, robust methods work well for mixtures of two normal distributions with different standard deviations; under this model, non-robust methods like a t-test work poorly. Introduction Robust statistics seek to provide methods that emulate popular statistical methods, but are not unduly affected by outliers or other small departures from model assumptions. In statistics, classical e ...
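The contrast between a classical and a robust location estimator is easy to see numerically. In this small sketch, a single gross outlier drags the sample mean far from the bulk of the data, while the median (a robust estimator with a 50% breakdown point) barely moves:

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    """Middle order statistic, or the average of the two middle ones."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = clean + [1000.0]   # one gross outlier

print(mean(clean), median(clean))                 # both near 10
print(mean(contaminated), median(contaminated))   # mean ruined, median stable
```

A single bad value out of six is enough to move the mean by more than an order of magnitude, which is exactly the sensitivity that robust methods are designed to avoid.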

Tukey Depth
In statistics and computational geometry, the Tukey depth or half-space depth is a measure of the depth of a point in a fixed set of points. The concept is named after its inventor, John Tukey. Given a set of ''n'' points \mathcal{X}_n = \{X_1, \dots, X_n\} in ''d''-dimensional space, Tukey's depth of a point ''x'' is the smallest fraction (or number) of points in any closed halfspace that contains ''x''. Tukey's depth measures how extreme a point is with respect to a point cloud. It is used to define the bagplot, a bivariate generalization of the boxplot. For example, for any extreme point of the convex hull there is always a (closed) halfspace that contains only that point, and hence its Tukey depth as a fraction is 1/''n''. Definitions ''Sample Tukey's depth'' of a point ''x'', or Tukey's depth of ''x'' with respect to the point cloud \mathcal{X}_n, is defined as D(x; \mathcal{X}_n) = \inf_{\|v\|=1} \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{v^{\top} X_i \ge v^{\top} x\}, where \mathbf{1}\{\cdot\} is the indicator function ...
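The definition translates directly into code. The sketch below approximates the sample Tukey depth in two dimensions by sweeping a fine grid of unit directions instead of taking a true infimum; exact algorithms exist, but a direction sweep is enough to illustrate the idea on small examples:

```python
import math

def tukey_depth_2d(x, points, n_dirs=3600):
    """Approximate sample Tukey depth of x: the minimum, over directions v,
    of the fraction of points in the closed halfspace {y : v.(y - x) >= 0}.
    A direction grid is approximate but adequate for illustration."""
    n = len(points)
    best = 1.0
    for k in range(n_dirs):
        t = 2 * math.pi * k / n_dirs
        vx, vy = math.cos(t), math.sin(t)
        count = sum(1 for (px, py) in points
                    if vx * (px - x[0]) + vy * (py - x[1]) >= 0)
        best = min(best, count / n)
    return best

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(tukey_depth_2d((0.5, 0.5), square))  # centre of the square: depth 1/2
print(tukey_depth_2d((0, 0), square))      # hull vertex: depth 1/4 = 1/n
```

The hull vertex attains the minimum possible depth 1/''n'' stated above, while the central point is the deepest, matching the intuition that depth measures how extreme a point is relative to the cloud.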

Centerpoint (geometry)
In statistics and computational geometry, the notion of centerpoint is a generalization of the median to data in higher-dimensional Euclidean space. Given a set of points in ''d''-dimensional space, a centerpoint of the set is a point such that any hyperplane that goes through that point divides the set of points into two roughly equal subsets: the smaller part should have at least a 1/(''d'' + 1) fraction of the points. Like the median, a centerpoint need not be one of the data points. Every non-empty set of points (with no duplicates) has at least one centerpoint. Related concepts Closely related concepts are the Tukey depth of a point (the minimum number of sample points on one side of a hyperplane through the point) and a Tukey median of a point set (a point maximizing the Tukey depth). A centerpoint is a point of depth at least ''n''/(''d'' + 1), and a Tukey median must be a centerpoint, but not every centerpoint is a Tukey median. Both terms are named a ...
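In one dimension the definition specializes neatly: a "hyperplane" is a single point, 1/(''d'' + 1) = 1/2, and a centerpoint is exactly a median. A minimal sketch of that check (the helper names are illustrative, not from any library):

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def is_centerpoint_1d(c, xs):
    """In 1-D a hyperplane through c is just the point c, so c is a
    centerpoint iff each closed side holds at least n/(d+1) = n/2 points."""
    n = len(xs)
    left = sum(1 for x in xs if x <= c)
    right = sum(1 for x in xs if x >= c)
    return left >= n / 2 and right >= n / 2

data = [3, 1, 4, 1, 5, 9, 2, 6, 5]
print(is_centerpoint_1d(median(data), data))  # True: the median qualifies
print(is_centerpoint_1d(max(data), data))     # False: an extreme point fails
```

For higher dimensions the guaranteed fraction drops from 1/2 to 1/(''d'' + 1), which is why a centerpoint is a weaker notion than a coordinatewise median but is guaranteed to exist for every point set.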

Bland–Altman Plot
A Bland–Altman plot (difference plot) in analytical chemistry or biomedicine is a method of data plotting used in analyzing the agreement between two different assays. It is identical to a Tukey mean-difference plot, the name by which it is known in other fields, but was popularised in medical statistics by J. Martin Bland and Douglas G. Altman. Construction Consider a sample consisting of n observations (for example, objects of unknown volume). Both assays (for example, different methods of volume measurement) are performed on each sample, resulting in 2n data points. Each of the n samples is then represented on the graph by assigning the mean of the two measurements as the x-value, and the difference between the two values as the y-value. The Cartesian coordinates of a given sample S with values of S_1 and S_2 determined by the two assays are : S(x,y) = \left( \frac{S_1 + S_2}{2},\ S_1 - S_2 \right). For comparing the dissimilarities ...
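The construction above is a one-line transformation of paired measurements. A minimal sketch that computes the plot coordinates and the mean difference (the systematic bias between the assays; the data values are illustrative):

```python
def bland_altman_points(assay1, assay2):
    """Coordinates for a Bland-Altman (Tukey mean-difference) plot:
    x = mean of the two measurements, y = their difference."""
    return [((a + b) / 2, a - b) for a, b in zip(assay1, assay2)]

# Hypothetical paired measurements of the same four samples by two assays.
assay1 = [10.2, 9.9, 10.5, 10.1]
assay2 = [10.0, 10.1, 10.2, 10.0]

pts = bland_altman_points(assay1, assay2)
diffs = [y for _, y in pts]
bias = sum(diffs) / len(diffs)   # average disagreement between the assays

print(pts)
print(bias)
```

In practice the scatter of points is drawn together with horizontal lines at the bias and at the "limits of agreement" (conventionally the bias plus or minus 1.96 standard deviations of the differences), which summarize how far apart the two assays can be expected to fall.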

Tukey's Test Of Additivity
In statistics, Tukey's test of additivity, named for John Tukey, is an approach used in two-way ANOVA (regression analysis involving two qualitative factors) to assess whether the factor variables (categorical variables) are additively related to the expected value of the response variable. It can be applied when there are no replicated values in the data set, a situation in which it is impossible to directly estimate a fully general non-additive regression structure and still have information left to estimate the error variance. The test statistic proposed by Tukey has one degree of freedom under the null hypothesis, hence this is often called "Tukey's one-degree-of-freedom test." Introduction The most common setting for Tukey's test of additivity is a two-way factorial analysis of variance (ANOVA) with one observation per cell. The response variable ''Y''''ij'' is observed in a table of cells with the rows indexed by ''i'' = 1,..., ''m'' and the columns indexed ...
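A sketch of the one-degree-of-freedom computation, assuming the standard formulation: the non-additivity sum of squares is the squared regression of the cell values on the product of row and column effects, carved out of the total interaction sum of squares. (The full test then compares this 1-df component against the remaining interaction via an F ratio; only the two sums of squares are computed here.)

```python
def tukey_additivity(table):
    """Tukey's one-degree-of-freedom decomposition for a two-way layout with
    one observation per cell (table: list of equal-length rows).
    Returns (ss_nonadd, ss_interaction)."""
    m, n = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (m * n)
    row_eff = [sum(row) / n - grand for row in table]
    col_eff = [sum(table[i][j] for i in range(m)) / m - grand for j in range(n)]

    # 1-df sum of squares for Tukey's multiplicative interaction term.
    num = sum(table[i][j] * row_eff[i] * col_eff[j]
              for i in range(m) for j in range(n))
    den = sum(a * a for a in row_eff) * sum(b * b for b in col_eff)
    ss_nonadd = num * num / den

    # Total interaction (residual) sum of squares.
    ss_inter = sum((table[i][j] - row_eff[i] - col_eff[j] - grand) ** 2
                   for i in range(m) for j in range(n))
    return ss_nonadd, ss_inter

additive = [[i + 2 * j for j in range(3)] for i in range(3)]       # no interaction
multiplicative = [[r * c for c in (1, 2, 4)] for r in (1, 2, 3)]   # y_ij = y_i. * y_.j / y..

print(tukey_additivity(additive))        # (0.0, 0.0): perfectly additive
print(tukey_additivity(multiplicative))  # the two sums of squares coincide
```

On a perfectly additive table the 1-df component vanishes, while on a purely multiplicative table it absorbs the entire interaction sum of squares, which is exactly the alternative Tukey's test is designed to detect.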

Trimean
In statistics, the trimean (TM), or Tukey's trimean, is a measure of a probability distribution's location defined as a weighted average of the distribution's median and its two quartiles: : TM = \frac{Q_1 + 2Q_2 + Q_3}{4} This is equivalent to the arithmetic mean of the median and the midhinge: : TM = \frac{1}{2}\left(Q_2 + \frac{Q_1 + Q_3}{2}\right) The foundations of the trimean were part of Arthur Bowley's teachings, and it was later popularized by statistician John Tukey in his 1977 book, which gave its name to a set of techniques called exploratory data analysis. Like the median and the midhinge, but unlike the sample mean, it is a statistically resistant L-estimator with a breakdown point of 25%. This beneficial property has been described as follows: Efficiency Despite its simplicity, the trimean is a remarkably efficient estimator of the population mean. More precisely, for a large data set (over 100 points) from a symmetric population, the average of the 18th, 50th, and 82nd percentiles is the most efficient 3-p ...
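The two equivalent formulas above reduce to a few lines of code. A minimal sketch (the quartile interpolation convention is one of several in common use):

```python
def quartiles(xs):
    """Q1, Q2, Q3 by linear interpolation between order statistics."""
    s = sorted(xs)
    def q(p):
        idx = p * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.5), q(0.75)

def trimean(xs):
    """Tukey's trimean: (Q1 + 2*Q2 + Q3) / 4."""
    q1, q2, q3 = quartiles(xs)
    return (q1 + 2 * q2 + q3) / 4

data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
q1, q2, q3 = quartiles(data)
print(trimean(data))                    # 5.0
print((q2 + (q1 + q3) / 2) / 2)        # 5.0: mean of median and midhinge
```

Both expressions give the same value, illustrating the equivalence between the weighted-quartile form and the median-plus-midhinge form.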

Tukey Lambda Distribution
Formalized by John Tukey, the Tukey lambda distribution is a continuous, symmetric probability distribution defined in terms of its quantile function. It is typically used to identify an appropriate distribution (see the comments below) and not used in statistical models directly. The Tukey lambda distribution has a single shape parameter, λ, and as with other probability distributions, it can be transformed with a location parameter, μ, and a scale parameter, σ. Since the general form of the probability distribution can be expressed in terms of the standard distribution, the subsequent formulas are given for the standard form of the function. Quantile function For the standard form of the Tukey lambda distribution, the quantile function, ~ Q(p) ~, (i.e. the inverse function to the cumulative distribution function) and the quantile density function, ~ q = \frac{dQ}{dp}\ , are : \ Q\left(\ p\ ; \lambda\ \right) ~=~ \begin{cases} \tfrac{1}{\lambda} \left[\ p^\lambda - (1 - p)^\lambda\ \right], &\ \mbox{if}\ \lambda \neq 0 ...
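Because the distribution is defined through its quantile function, sampling from it is a direct application of inverse-transform sampling: feed uniform variates through Q. A minimal sketch, including the λ = 0 limiting case (the logistic quantile, log(p/(1 − p))):

```python
import math
import random

def tukey_lambda_quantile(p, lam):
    """Quantile function of the standard Tukey lambda distribution."""
    if not 0 < p < 1:
        raise ValueError("p must lie strictly between 0 and 1")
    if lam == 0:
        return math.log(p / (1 - p))   # limiting case: logistic quantile
    return (p ** lam - (1 - p) ** lam) / lam

print(tukey_lambda_quantile(0.5, 0.14))   # 0.0: symmetric about zero
print(tukey_lambda_quantile(0.9, 0.0))    # log(9), the logistic case

# Inverse-transform sampling: uniforms pushed through the quantile function.
random.seed(1)
sample = [tukey_lambda_quantile(random.random(), 0.14) for _ in range(1000)]
```

The symmetry Q(p) = −Q(1 − p) holds for every λ, and λ = 0.14 is a common choice when using the distribution to approximate the normal in probability-plot studies.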

Tukey's Range Test
Tukey's range test, also known as Tukey's test, Tukey's method, Tukey's honest significance test, or Tukey's HSD (honestly significant difference) test, is a single-step multiple comparison procedure and statistical test. It can be used to correctly interpret the statistical significance of the difference between means that have been selected for comparison because of their extreme values. The method was initially developed and introduced by John Tukey for use in analysis of variance (ANOVA), and has usually been taught only in connection with ANOVA. However, the studentized range distribution used to determine the level of significance of the differences considered in Tukey's test has vastly broader application: It is useful for researchers who have searched their collected data for remarkable differences between groups, but then cannot validly determine how significant their discovered stand-out difference is using standar ...
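The core of the procedure is simple once a critical value of the studentized range is available: every pair of group means is compared against a single threshold, the HSD. The sketch below assumes equal group sizes and takes the critical value as an input (in practice it comes from studentized-range tables or a statistics library; the 3.95 used here is an approximate tabled value for 3 groups and 9 error degrees of freedom at the 5% level):

```python
import math

def tukey_hsd_pairs(groups, q_crit):
    """Return the (i, j) index pairs of groups whose mean difference
    exceeds Tukey's honestly significant difference.
    groups: equal-sized samples; q_crit: studentized-range critical value
    for (k groups, N - k error df) at the chosen alpha, supplied by caller."""
    k = len(groups)
    n = len(groups[0])
    means = [sum(g) / n for g in groups]
    # Pooled within-group variance: the ANOVA mean square error.
    ms_within = sum(sum((x - means[i]) ** 2 for x in g)
                    for i, g in enumerate(groups)) / (k * (n - 1))
    hsd = q_crit * math.sqrt(ms_within / n)
    return [(i, j) for i in range(k) for j in range(i + 1, k)
            if abs(means[i] - means[j]) > hsd]

# Illustrative data: groups a and b agree, group c is clearly shifted.
a = [10.1, 9.9, 10.0, 10.2]
b = [10.0, 10.3, 9.8, 10.1]
c = [12.0, 12.2, 11.9, 12.1]
print(tukey_hsd_pairs([a, b, c], q_crit=3.95))   # [(0, 2), (1, 2)]
```

Because every pairwise comparison is judged against the same studentized-range threshold, the family-wise error rate is controlled at the chosen level, which is what distinguishes the method from running unadjusted pairwise t-tests.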