In statistics, the ''t''-statistic is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. It is used in hypothesis testing via Student's ''t''-test to determine whether to support or reject the null hypothesis. It is very similar to the ''z''-score, with the difference that the ''t''-statistic is used when the sample size is small or the population standard deviation is unknown. For example, the ''t''-statistic is used in estimating the population mean from a sampling distribution of sample means if the population standard deviation is unknown. It is also used along with the p-value when running hypothesis tests, where the p-value tells us the probability of observing a result at least as extreme as the one obtained, assuming the null hypothesis is true.


Definition and features

Let \hat\beta be an estimator of a parameter ''β'' in some statistical model. Then a ''t''-statistic for this parameter is any quantity of the form

: t_{\hat\beta} = \frac{\hat\beta - \beta_0}{\operatorname{s.e.}(\hat\beta)},

where ''β''0 is a non-random, known constant, which may or may not match the actual unknown parameter value ''β'', and \operatorname{s.e.}(\hat\beta) is the standard error of the estimator \hat\beta for ''β''.

By default, statistical packages report the ''t''-statistic with ''β''0 = 0 (these ''t''-statistics are used to test the significance of the corresponding regressor). However, when the ''t''-statistic is needed to test a hypothesis of the form ''H''0: ''β'' = ''β''0, a non-zero ''β''0 may be used.

If \hat\beta is an ordinary least squares estimator in the classical linear regression model (that is, with normally distributed and homoscedastic error terms), and if the true value of the parameter ''β'' is equal to ''β''0, then the sampling distribution of the ''t''-statistic is the Student's ''t''-distribution with ''n'' − ''k'' degrees of freedom, where ''n'' is the number of observations and ''k'' is the number of regressors (including the intercept).

In the majority of models, the estimator \hat\beta is consistent for ''β'' and its distribution is asymptotically normal. If the true value of the parameter ''β'' is equal to ''β''0, and the quantity \operatorname{s.e.}(\hat\beta) correctly estimates the asymptotic variance of this estimator, then the ''t''-statistic will asymptotically have the standard normal distribution. In some models the distribution of the ''t''-statistic differs from the normal distribution even asymptotically. For example, when a
time series with a unit root is regressed in the augmented Dickey–Fuller test, the test ''t''-statistic will asymptotically have one of the Dickey–Fuller distributions (depending on the test setting).
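As a minimal sketch of the regression case above, the following Python snippet (using hypothetical simulated data) fits an ordinary least squares model and computes the ''t''-statistic of each coefficient against the default ''β''0 = 0, dividing the estimate by its standard error with ''n'' − ''k'' residual degrees of freedom:

```python
import numpy as np

# Hypothetical data: y = 2 + 0.5*x + noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

# Design matrix with an intercept column (k = 2 regressors, including the intercept)
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residual variance estimated with n - k degrees of freedom
n, k = X.shape
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)

# Standard errors: square roots of the diagonal of s2 * (X'X)^{-1}
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

# t-statistic for each coefficient against beta0 = 0 (the package default)
t = beta_hat / se
print(t)  # under H0: beta = 0, each entry follows Student's t with n - k df
```

This mirrors what statistical packages report in a regression summary table; a non-zero ''β''0 would simply be subtracted from `beta_hat` before dividing.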


Use

Most frequently, ''t'' statistics are used in Student's ''t''-tests, a form of
statistical hypothesis testing
, and in the computation of certain confidence intervals. The key property of the ''t''-statistic is that it is a pivotal quantity – while defined in terms of the sample mean, its sampling distribution does not depend on the population parameters, and thus it can be used regardless of what these may be. One can also divide a residual by the sample standard deviation:

: g(x, X) = \frac{x - \overline{X}}{s}

to compute an estimate of the number of standard deviations a given sample lies from the mean, as a sample version of a ''z''-score (the ''z''-score requiring the population parameters).
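A short sketch of the one-sample case, with hypothetical measurement data: the ''t''-statistic divides the departure of the sample mean from the hypothesized mean by the estimated standard error s/\sqrt{n}:

```python
import numpy as np

# Hypothetical sample; test H0: mu = 5.0
sample = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 5.4, 4.7, 5.2])
mu0 = 5.0

n = sample.size
xbar = sample.mean()
s = sample.std(ddof=1)  # sample standard deviation (n - 1 denominator)

# t = (xbar - mu0) / (s / sqrt(n)); pivotal under H0, whatever mu and sigma are
t = (xbar - mu0) / (s / np.sqrt(n))
print(t)  # ≈ 0.58 for this data
```

Under the null hypothesis, `t` follows a Student's ''t''-distribution with ''n'' − 1 degrees of freedom, which is what makes it usable without knowing the population parameters.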


Prediction

Given a normal distribution N(\mu, \sigma^2) with unknown mean and variance, the ''t''-statistic of a future observation X_{n+1}, after one has made ''n'' observations, is an ancillary statistic – a pivotal quantity (it does not depend on the values of ''μ'' and ''σ''2) that is a statistic (computed from observations). This allows one to compute a frequentist
prediction interval
(a predictive confidence interval), via the following ''t''-distribution:

: \frac{X_{n+1} - \overline{X}_n}{s_n \sqrt{1 + 1/n}} \sim T^{n-1}.

Solving for X_{n+1} yields the prediction distribution

: \overline{X}_n + s_n \sqrt{1 + 1/n} \cdot T^{n-1},

from which one may compute predictive confidence intervals – given a probability ''p'', one may compute intervals such that 100''p''% of the time, the next observation X_{n+1} will fall in that interval.
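The interval above can be computed directly. This sketch, using hypothetical simulated observations and SciPy's ''t''-distribution quantile function, builds a 95% prediction interval \overline{X}_n \pm t_{0.975,\,n-1}\, s_n \sqrt{1 + 1/n} for the next observation:

```python
import numpy as np
from scipy import stats

# Hypothetical: n observations from a normal distribution with unknown mu, sigma
rng = np.random.default_rng(1)
obs = rng.normal(loc=10.0, scale=2.0, size=20)

n = obs.size
xbar = obs.mean()
s = obs.std(ddof=1)

# 95% prediction interval for X_{n+1}:
# xbar +/- t_{0.975, n-1} * s * sqrt(1 + 1/n)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s * np.sqrt(1 + 1 / n)
lo, hi = xbar - half_width, xbar + half_width
print(lo, hi)
```

The extra 1/''n'' under the square root accounts for the uncertainty in \overline{X}_n itself, which is why a prediction interval is wider than a confidence interval for the mean.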


History

The term "''t''-statistic" is abbreviated from "hypothesis test statistic". The ''t''-distribution was first derived as a posterior distribution in 1876 by Helmert and Lüroth. It also appeared in a more general form as the Pearson Type IV distribution in Karl Pearson's 1895 paper. However, the ''t''-distribution, also known as Student's ''t''-distribution, takes its name from William Sealy Gosset, who was the first to publish the result in English, in his 1908 paper "The Probable Error of a Mean" in ''Biometrika''. Because his employer preferred its staff to use pen names when publishing scientific papers, he wrote under the pseudonym "Student" to hide his identity. Gosset worked at the
Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley, where sample sizes might be as few as 3. Hence a second version of the etymology of the term "Student" is that Guinness did not want competitors to know that they were using the ''t''-test to determine the quality of raw material. Although the pen name "Student" belonged to William Gosset, it was through the work of Ronald Fisher that the distribution became well known as "Student's distribution" and "Student's ''t''-test".


Related concepts

* ''z''-score (standardization): If the population parameters are known, then rather than computing the ''t''-statistic, one can compute the ''z''-score; analogously, rather than using a ''t''-test, one uses a ''z''-test. This is rare outside of standardized testing.
* Studentized residual: In regression analysis, the standard errors of the estimators at different data points vary (compare the middle versus the endpoints of a simple linear regression), and thus one must divide the different residuals by different estimates for the error, yielding what are called studentized residuals.
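The ''z''-score/''t''-statistic contrast can be shown side by side. In this sketch the data and the "known" population standard deviation are hypothetical; the only difference between the two statistics is whether the denominator uses the assumed population ''σ'' or the sample estimate ''s'':

```python
import numpy as np

# Hypothetical sample; test the hypothesized mean 12.0
sample = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3])
hypothesized_mean = 12.0

# z-score: usable only when the population sigma is known (assumed 0.25 here)
sigma = 0.25
z = (sample.mean() - hypothesized_mean) / (sigma / np.sqrt(sample.size))

# t-statistic: replaces the unknown sigma with the sample standard deviation
s = sample.std(ddof=1)
t = (sample.mean() - hypothesized_mean) / (s / np.sqrt(sample.size))
print(z, t)
```

Because ''s'' is itself random, the ''t''-statistic is referred to a Student's ''t''-distribution rather than the standard normal, with the difference mattering most at small sample sizes like this one.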


See also

* ''F''-test
* ''t''2-statistic
* Student's ''t''-distribution
* Student's ''t''-test
* Hypothesis testing
* Folded-''t'' and half-''t'' distributions
* Chi-squared distribution

