In survey methodology, the design effect (generally denoted as Deff) is a measure of the expected impact of a sampling design on the variance of an estimator for some parameter. It is calculated as the ratio of the variance of an estimator based on a sample from an (often) complex sampling design, to the variance of an alternative estimator based on a simple random sample (SRS) of the same number of elements.
The Deff (be it estimated, or known a priori) can be used to adjust the variance of an estimator in cases where the sample is not drawn using simple random sampling. It may also be useful in sample size calculations and for quantifying the representativeness of a sample. The term "design effect" was coined by Leslie Kish in 1965.
The design effect is a positive real number that indicates an inflation (Deff > 1) or deflation (Deff < 1) in the variance of an estimator for some parameter, due to the study not using SRS (with Deff = 1 when the variances are identical).
Some complex sampling designs that could introduce a Deff different from 1 include: cluster sampling (such as when there is correlation between observations), stratified sampling, cluster randomized controlled trials, disproportional (unequal probability) samples, non-coverage, non-response, statistical adjustments of the data, etc.
Deff can be used in sample size calculations, in quantifying the representativeness of a sample (relative to a target population), as well as in adjusting (often inflating) the variance of some estimator (in cases where we can calculate that estimator's variance assuming SRS). Since Leslie Kish coined the term "design effect" in 1965, many calculations (and estimators) have been proposed in the literature for describing the effect of a known sampling design on the increase or decrease in the variance of estimators of interest. In general, the design effect varies between statistics of interest, such as the total or the ratio mean; it also matters whether the design (e.g., the selection probabilities) is correlated with the outcome of interest; and lastly, it is influenced by the distribution of the outcome itself. All of these should be considered when estimating and using a design effect in practice.
Definitions
Deff
The design effect (Deff) is the ratio of two theoretical variances for estimators of some parameter θ:
:* In the numerator is the actual variance for an estimator of the parameter (θ̂) under a given sampling design p;
:* In the denominator is the variance assuming the same sample size, but as if the sample were obtained using the estimator we would use for a simple random sample ''without'' replacement (srswor).
So that:
: Deff(θ̂) = Var_p(θ̂) / Var_srswor(θ̂_srs)
Put differently, Deff measures by how much the variance has increased (or decreased, in some cases) because our sample was drawn and adjusted to a specific sampling design (e.g., using weights or other measures), compared to what it would be if the sample had come from simple random sampling (without replacement). There are many ways of calculating Deff, depending on the parameter of interest (e.g., population total, population mean, quantiles, ratio of quantities, etc.), the estimator used, and the sampling design (e.g., clustered sampling, stratified sampling, post-stratification, multi-stage sampling, etc.).
For estimating the population mean, the Deff (for some sampling design p) is:
: Deff = Var_p(ȳ) / ( (1 − f) S² / n )
where n is the sample size, f = n/N is the fraction of the sample from the population, (1 − f) is the finite population correction (FPC), and S² is the unbiased sample variance.
Multiplying the element (unit) variance by Deff gives a variance estimate that incorporates all the complexities of the sample design.
Notice that the definition of Deff is based on parameters of the population that we often do not know (i.e., the variances of estimators under two different sampling designs). The process of estimating Deff for specific designs is described in the following section.[Kalton, G., J. M. Brick, and T. Le. "Estimating components of design effects for use in sample design." In Household Sample Surveys in Developing and Transition Countries (Sales No. E.05.XVII.6), Department of Economic and Social Affairs, Statistics Division, United Nations, New York (2005)]
A general formula for the (theoretical) design effect of estimating a total (rather than the mean), for some design, is given in Cochran 1977.
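To make the variance ratio concrete, the following sketch uses a Monte Carlo simulation on an assumed toy population (none of the numbers come from the text) to compare the variance of the sample mean under one-stage cluster sampling against its variance under an SRS of the same size:

```python
import random
import statistics

random.seed(0)

# Toy population (assumed for illustration): 100 clusters of 20 units;
# units within a cluster share a cluster-level effect, which induces
# intra-cluster correlation.
clusters = []
for _ in range(100):
    effect = random.gauss(0, 3)
    clusters.append([effect + random.gauss(0, 1) for _ in range(20)])
population = [y for cl in clusters for y in cl]

def srs_mean(n):
    # mean of a simple random sample (without replacement) of n units
    return statistics.mean(random.sample(population, n))

def cluster_mean(k):
    # mean from sampling k whole clusters and measuring every unit inside
    chosen = random.sample(clusters, k)
    return statistics.mean([y for cl in chosen for y in cl])

reps = 2000
var_srs = statistics.pvariance([srs_mean(100) for _ in range(reps)])
var_cluster = statistics.pvariance([cluster_mean(5) for _ in range(reps)])  # 5 * 20 = 100 units

deff = var_cluster / var_srs  # ratio of the two Monte Carlo variances
print(f"Estimated Deff: {deff:.1f}")
```

With strong intra-cluster correlation, as here, the simulated Deff comes out well above 1, illustrating the variance inflation that clustering induces.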
Deft
A related quantity to Deff, proposed by Kish in 1995, is called Deft (Design Effect Factor). It is defined as the square root of the variance ratio, and its denominator uses a simple random sample ''with'' replacement (srswr), instead of ''without'' replacement (srswor):
: Deft(θ̂) = sqrt( Var_p(θ̂) / Var_srswr(θ̂_srs) )
In this later definition (proposed in 1995, vs. 1965) it was argued that srs "without replacement" (with its positive effect on the variance) should be captured in the definition of the design effect, since it is part of the sampling design. This definition is also more directly related to use in inference, since confidence intervals are typically built from the standard error (scaled by Deft) rather than from the variance (scaled by Deff); moreover, the finite population correction (FPC) is harder to compute in some situations. In many cases, when the population is very large, Deft is (almost) the square root of Deff (Deft ≈ sqrt(Deff)).
The original intention for ''Deft'' was to have it "express the effects of sample design beyond the elemental variability, removing both the unit of measurement and sample size as nuisance parameters". This was done in order to make the design effect generalizable (relevant) to many statistics and variables within the same survey (and even between surveys). However, followup works have shown that the calculation of the design effect, for parameters such as a population total or mean, depends on the variability of the outcome measure, which limits Kish's original aspiration for this measure. That said, this statement may loosely (i.e., under some conditions) hold for the weighted mean.
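A small numeric sketch of the relationship between Deff and Deft; all variances below are assumed for illustration only:

```python
import math

# Illustrative variances (assumed, not from the text):
var_design = 4.0   # variance of the estimator under the complex design
var_srswor = 2.0   # variance under SRS without replacement
var_srswr = 2.1    # variance under SRS with replacement (no FPC, a bit larger)

deff = var_design / var_srswor            # Kish's 1965 Deff
deft = math.sqrt(var_design / var_srswr)  # Kish's 1995 Deft

print(deff, deft, math.sqrt(deff))
# For a very large population var_srswor ≈ var_srswr, so Deft ≈ sqrt(Deff).
```

The two definitions differ only through the FPC, so they converge as the sampling fraction shrinks.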
Effective sample size
The effective sample size, also defined by Kish in 1965, is the original sample size divided by the design effect.[Kish, Leslie. "Weighting for unequal Pi." Journal of Official Statistics 8.2 (1992): 183–200] This quantity reflects the sample size that would be needed to achieve the current variance of the estimator (for some parameter) under the existing design, had the sample design (and its relevant parameter estimator) been based on a simple random sample.
Namely:
: n_eff = n / Deff
Put differently, it says how many responses we are effectively left with when using an estimator that correctly adjusts for the design effect of the sampling design; for example, when using the weighted mean with inverse probability weighting instead of the simple mean. It is also possible to get the effective sample size ratio by taking the inverse of Deff (i.e., n_eff / n = 1 / Deff).
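A minimal numeric illustration of the effective sample size (the sample size and design effect below are assumed):

```python
# Effective sample size: n_eff = n / Deff (values assumed for illustration)
n = 1000      # nominal number of responses
deff = 2.5    # design effect of the sampling design

n_eff = n / deff
print(n_eff)  # 400.0 -- the complex sample is "worth" 400 SRS responses
```

The same relation, read in reverse, is what inflates required sample sizes at the planning stage: to match the precision of an SRS of 400, this design needs 1000 responses.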
When using Kish's design effect for unequal weights, you may use the following simplified formula for "Kish's effective sample size":
: n_eff = (Σᵢ wᵢ)² / Σᵢ wᵢ²
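Kish's effective sample size can be computed directly from a set of case weights (the weights below are hypothetical):

```python
# Kish's approximate effective sample size for unequal case weights:
#   n_eff = (sum of weights)^2 / (sum of squared weights)
weights = [1.0, 1.0, 2.0, 4.0]  # hypothetical case weights

n_eff_kish = sum(weights) ** 2 / sum(w ** 2 for w in weights)
print(n_eff_kish)  # 64 / 22 ≈ 2.91, out of 4 actual responses
```

Note that n_eff equals the actual sample size only when all weights are equal; any spread in the weights pulls it below the number of responses.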
Design effect for well-known sampling designs
Sampling design dictates how design effect should be calculated
Different sampling designs differ substantially in their impact on estimators (such as the mean) in terms of their bias and variance.
For example, in the cluster sampling case the units may have equal or unequal selection probabilities, irrespective of their intra-class correlation (and its negative effect of increasing the variance of our estimators). In the case of stratified sampling, the probabilities may be equal (EPSEM) or unequal. Regardless, using prior information on the stratum sizes in the population during the sampling stage can improve the statistical efficiency of our estimators. For example: if we know that gender is correlated with our outcome of interest, and also know that the male-female ratio in some population is 50%-50%, then by making sure to sample exactly half of each gender we reduce the variance of the estimators, because we have removed the variability caused by unequal proportions of males and females in our sample.
Lastly, in the case of adjusting for non-coverage, non-response, or some stratum split of the population (unavailable during the sampling stage), we may use statistical procedures (e.g., post-stratification and others). The result of such procedures may be estimates of the sampling probabilities that are similar to, or very different from, the true sampling probabilities of the units. The quality of these estimates depends on the quality of the auxiliary information and on the missing-at-random assumptions used in creating them. Even when these sampling probability estimators (propensity scores) manage to capture most of the phenomena that produced them, the impact of the variable selection probabilities on the estimators may be small or large, depending on the data (details in the next section).
Due to the large variety of sampling designs (with or without an effect on unequal selection probabilities), different formulas have been developed to capture the potential design effect, as well as to estimate the correct variance of estimators. Sometimes these different design effects can be compounded together (as in the case of unequal selection probability and cluster sampling; more details in the following sections). Whether to use these formulas, or just assume SRS, depends on the expected amount of bias reduced versus the increase in estimator variance (and on the overhead of methodological and technical complexity).
Unequal selection probabilities
Sources for unequal selection probabilities
There are various ways to sample units so that each unit has exactly the same probability of selection. Such methods are called equal probability sampling (EPSEM) methods. Some of the more basic methods include simple random sampling (SRS, either with or without replacement) and systematic sampling for getting a fixed sample size, and Bernoulli sampling for a random sample size. More advanced techniques such as stratified sampling and cluster sampling can also be designed to be EPSEM. For example, in cluster sampling we can sample clusters with equal probability and then measure all the units inside each sampled cluster, so that every unit inherits its cluster's selection probability. A more complex method for cluster sampling is two-stage sampling, by which we sample clusters at the first stage with probability proportional to their size, and then sample a fixed number of units from each sampled cluster at the second stage using SRS; the size factors cancel, giving every unit in the population the same overall selection probability.[Source: Frerichs, R.R. Rapid Surveys (unpublished), © 2004, chapter 4 - Equal Probability of Selection]
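As a quick arithmetic check of the self-weighting two-stage variant (clusters drawn proportionally to size, then a fixed number of units per sampled cluster), with assumed cluster sizes:

```python
# Hypothetical cluster sizes; one cluster is drawn with probability
# proportional to size (PPS), then m units are drawn by SRS within
# the sampled cluster.
sizes = [10, 20, 30, 40]
N = sum(sizes)  # 100 units in the population
m = 2           # fixed number of units taken per sampled cluster

# P(unit selected) = P(its cluster is drawn) * P(unit | cluster)
#                  = (size_i / N) * (m / size_i) = m / N for every unit
probs = [(s / N) * (m / s) for s in sizes]
print(probs)  # the same value, m/N = 0.02, for units of every cluster
```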
In their works, Kish and others highlight several known reasons that lead to unequal selection probabilities:
# Disproportional sampling due to selection frame or procedure. This happens when a researcher purposefully designs their sample so as to over- or under-sample specific sub-populations or clusters. There are many cases in which this might happen. For example:
#:* In stratified sampling, when units from some strata are known to have a larger variance than others. In such cases, the intention of the researcher may be to use this prior knowledge about the variance between strata in order to reduce the overall variance of an estimator of some population-level parameter of interest (e.g., the mean). This can be achieved by a strategy known as ''optimum allocation'', in which a stratum is over-sampled proportionally to a higher standard deviation and a lower sampling cost (i.e., n_h ∝ N_h S_h / √c_h, where S_h is the standard deviation of the outcome in stratum h, and c_h relates to the cost of recruiting one element from stratum h). An example of optimum allocation is ''Neyman's optimal allocation'', in which, when the cost of recruiting is fixed across strata, the sample size per stratum is: n_h = n · (W_h S_h) / Σ_h (W_h S_h). Here the summation is over all strata; n is the total sample size; n_h is the sample size for stratum h; W_h = N_h/N is the relative size of stratum h as compared to the entire population of size N; and S_h is the standard deviation of the outcome in stratum h. A related concept to optimum allocation is optimal experimental design.
#:* If there is interest in comparing two strata (e.g., people from two specific socio-demographic groups, or from two regions, etc.), in which case the smaller group may be over-sampled. This way, the variance of the estimator that compares the two groups is reduced.
#:* In cluster sampling, there may be clusters of different sizes but the procedure samples clusters using SRS and then measures all elements within each sampled cluster (for example, if the cluster sizes are not known upfront at the sampling stage).
#:* When using two-stage sampling in which, at the first stage, the clusters are sampled proportionally to their size (a.k.a. PPS, Probability Proportional to Size), but at the second stage only a specific fixed number of units (e.g., one or two) is selected from each cluster - this may happen due to convenience or budget considerations. A similar case is when the first stage attempts to sample using PPS but the number of elements recorded for each cluster is inaccurate (so that a smaller cluster may have a higher-than-it-should chance of being selected, and vice versa for larger clusters with a too-small chance of being sampled). In such cases, the larger the errors in the sampling frame used at the first stage, the larger the resulting inequality in selection probabilities.
#:* When the frame used for sampling includes duplications of some of the items, leading some items to have a larger probability of being sampled than others (e.g., if the sampling frame was created by merging several lists; or if recruiting users from several ad channels, where some users are reachable through several of the channels while others are reachable through only one). In each of these cases, different units have different sampling probabilities, making the sampling procedure not EPSEM.
#:* When several different samples/frames are combined. For example, if running different ad campaigns for recruiting respondents, or when combining results from several studies done by different researchers and/or at different times (i.e., meta-analysis).
#: When disproportional sampling happens due to sampling design decisions, the researcher may (sometimes) be able to trace back the decisions and accurately calculate the exact inclusion probabilities. When these selection probabilities are hard to trace back, they may be estimated using some propensity score model combined with information from auxiliary variables (e.g., age, gender, etc.).
# Non-coverage. This happens, for example, if people are sampled based on some pre-defined list that doesn't include all the people in the population (e.g., a phone book, or using ads to recruit people to a survey). These units are missing due to some failure in creating the sampling frame, as opposed to deliberate exclusion of some people (e.g., minors, people who cannot vote, etc.). The effect of non-coverage on sampling probabilities is considered difficult to measure (and adjust for) in various survey situations, unless strong assumptions are made.
# Non-response. This refers to the failure to obtain measurements on sampled units that were intended to be measured. Reasons for non-response are varied and depend on the context. A person may be temporarily unavailable, for example if they cannot pick up the phone when the survey is conducted. A person may also refuse to answer the survey for a variety of reasons, e.g.: different tendencies of people from different ethnic/demographic/socio-economic groups to respond in general; insufficient incentive to spend the time or share data; the identity of the institution running the survey; inability to respond (e.g., due to illness, illiteracy, or a language barrier); the respondent not being found (e.g., they have moved to a new apartment); or the response being lost/destroyed during encoding or transmission (i.e., measurement error). In the context of surveys, these reasons may relate to answering the entire survey or just specific questions.
# Statistical adjustments. These may include methods such as post-stratification, raking, or propensity score (estimation) models - used to perform an ad-hoc adjustment of the sample to some known (or estimated) stratum sizes. Such procedures are used to mitigate issues in the sampling, ranging from sampling error and under-coverage of the sampling frame to non-response.[Kott, Phillip S. "Using calibration weighting to adjust for nonresponse and coverage errors." Survey Methodology 32.2 (2006): 133] For example, if a simple random sample is used, post-stratification (using some auxiliary information) does not offer an estimator that is uniformly better than the unweighted estimator, though it can be viewed as a more "robust" estimator. Alternatively, these methods can be used to make the sample more similar to some target "controls" (i.e., a population of interest), a process also known as "standardization". In such cases, the adjustments help to provide unbiased estimators (often at the cost of increased variance, as seen in the following sections). If the original sample is a nonprobability sample, then post-stratification adjustments are simply similar to an ad-hoc quota sampling.
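Neyman's optimal allocation from the stratified-sampling point above can be checked numerically. The strata sizes, standard deviations, and total sample size below are hypothetical; absolute sizes N_h are used instead of relative sizes W_h, which is equivalent because the normalization by N cancels in the ratio:

```python
# Neyman allocation with fixed per-unit cost:
#   n_h = n * (N_h * S_h) / sum_h(N_h * S_h)
# Hypothetical strata sizes and outcome standard deviations:
N_h = [500, 300, 200]
S_h = [2.0, 5.0, 10.0]
n = 100  # total sample size

products = [N * S for N, S in zip(N_h, S_h)]  # N_h * S_h per stratum
total = sum(products)
n_alloc = [n * p / total for p in products]
print(n_alloc)  # larger and/or higher-variance strata get more sample
```

Here the smallest stratum receives the largest allocation because its standard deviation dominates the size-times-spread product.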
When the sampling design is fully known (leading to some probability of selection p_h for an element from stratum h), and the non-response is measurable (i.e., we know that only r_h of the n_h sampled observations answered in stratum h), then an exactly known inverse probability weight can be calculated for each element i from stratum h using w_hi = 1 / (p_h · φ_h), where φ_h = r_h / n_h is the response rate. Sometimes a statistical adjustment, such as post-stratification or raking, is used for estimating the selection probability - e.g., when comparing the sample we have with some target population, also known as matching to controls. The estimation process may be focused only on adjusting the existing population to an alternative population (for example, when trying to extrapolate from a panel drawn from several regions to an entire country). In such a case, the adjustment might be focused on some calibration factor f_i, with the weights calculated as w_i = f_i / p_i. In other cases, the under-coverage and non-response are all modeled in one go as part of the statistical adjustment, which leads to an estimate of the overall sampling probability (say, p̂_i); in such a case, the weights are simply w_i = 1 / p̂_i. Notice that when statistical adjustments are used, the selection probability is often estimated based on some model. The formulations in the following sections assume this probability is known, which is not true for statistical adjustments (since we only have p̂_i). However, if the estimation error of p̂_i is assumed to be very small, then the following sections can be used as if the probability were known. Whether this assumption holds depends on the size of the sample used for modeling, and is worth keeping in mind during analysis.
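Under the assumptions of the paragraph above (known per-stratum selection probabilities and measurable non-response), one common construction of the inverse probability weight multiplies the selection probability by the observed response rate and inverts the product. All numbers below are assumed for illustration:

```python
# Sketch (assumed numbers): known per-stratum selection probabilities
# p_h, with measurable non-response; the weight per element in stratum
# h is w_h = 1 / (p_h * phi_h), with phi_h = r_h / n_h the response rate.
p_h = {"A": 0.10, "B": 0.05}       # selection probability per stratum
sampled = {"A": 200, "B": 400}     # n_h: units drawn per stratum
responded = {"A": 150, "B": 100}   # r_h: units that actually answered

weights = {}
for h in p_h:
    phi = responded[h] / sampled[h]     # response rate in stratum h
    weights[h] = 1.0 / (p_h[h] * phi)   # inverse probability weight

print(weights)
```

Stratum B, which is both under-selected and under-responding, ends up with a much larger weight than stratum A.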
When the selection probabilities may differ, the sample size is random, and the pairwise selection probabilities are independent, we call this Poisson sampling.
"Design based" vs "model based" for describing properties of estimators
When adjusting for unequal probability selection through "individual case weights" (e.g., inverse probability weighting), we get various types of estimators for quantities of interest. Estimators such as the Horvitz–Thompson estimator yield unbiased estimators (if the selection probabilities are indeed known, or approximately known) for the total and the mean of the population. Deville and Särndal (1992) coined the term "calibration estimator" for estimators using weights that satisfy some condition, such as having the sum of the weights equal the population size - or, more generally, that the weighted sum of some auxiliary variable equals a known population quantity (e.g., that the weighted count of respondents equals the population size within each age bucket).[Deville, Jean-Claude, and Carl-Erik Särndal. "Calibration estimators in survey sampling." Journal of the American Statistical Association 87.418 (1992): 376-382.]
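A minimal sketch of the Horvitz–Thompson estimator of a population total, with hypothetical outcomes and inclusion probabilities:

```python
# Horvitz–Thompson estimator of a population total (sketch with
# hypothetical data): T_hat = sum over sampled units of y_i / pi_i,
# where pi_i is unit i's known inclusion probability.
y = [3.0, 5.0, 2.0]     # observed outcomes of the sampled units
pi = [0.1, 0.25, 0.5]   # their inclusion probabilities

total_hat = sum(yi / p for yi, p in zip(y, pi))
print(total_hat)  # 3/0.1 + 5/0.25 + 2/0.5 = 54 (up to float rounding)
```

Each observation stands in for 1/pi_i population units, which is exactly the inverse probability weighting idea discussed above.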
The two primary ways to argue about the properties of calibration estimators are:
# Randomization based (or sampling design based) - in these cases, the weights (w_i) and the values of the outcome of interest that are measured in the sample are all treated as known. In this framework, there is variability in the (known) values of the outcome (Y), but the only randomness comes from which of the elements in the population were picked into the sample (often denoted by I_i, equal to 1 if element i is in the sample and 0 if it is not). For a simple random sample, each I_i will be an i.i.d. Bernoulli random variable with some parameter p. For a general EPSEM (equal probability sampling) design, the I_i will still be Bernoulli with some parameter p, but they will no longer be independent random variables. For something like post-stratification, the number of elements in each stratum can be modeled as a multinomial distribution, with different inclusion probabilities for elements belonging to different strata. In these cases, the sample size itself can be a random variable.
# Model based - in these cases the sample is fixed and the weights are fixed, but the outcome of interest is treated as a random variable. For example, in the case of post-stratification, the outcome can be modeled as some linear regression function in which the independent variables are indicator variables mapping each observation to its relevant stratum, and the variability comes from the error term.
As we will see later, some proofs in the literature rely on the randomization-based framework, while others focus on the model-based perspective. When moving from the mean to the weighted mean, more complexity is added. For example, in the context of survey methodology, the population size itself is often considered an unknown quantity that is estimated. So the calculation of the weighted mean is in fact based on a ratio estimator, with an estimator of the total in the numerator and an estimator of the population size in the denominator (making the variance calculation more complex).
Common types of weights
There are many types (and subtypes) of weights, with different ways to use and interpret them. With some weights their absolute value has some important meaning, while with other weights the important part is the relative values of the weights to each other. This section presents some of the more common types of weights so that they can be referenced in followup sections.
* Frequency weights are a basic type of weighting, presented in introductory statistics courses. With these, each weight is an integer that indicates the absolute frequency of an item in the sample. These are also sometimes termed repeat (or occurrence) weights. The specific value has an absolute meaning that is lost if the weights are transformed (e.g., by scaling). For example: if we have the numbers 10 and 20 with frequency weights of 2 and 3, then "spreading" our data gives: 10, 10, 20, 20, 20 (with a weight of 1 for each of these items). Frequency weights encode the amount of information contained in a dataset, and thus allow things like unbiased weighted variance estimation using Bessel's correction. Notice that such weights are often random variables, since the specific number of items we will see from each value in the dataset is random.
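As a small illustration of the equivalence described above, the following Python sketch (the values are the 10/20 example from the text) checks that frequency-weighted statistics with Bessel's correction match the ordinary statistics computed on the "spread" data:

```python
import statistics

# Frequency-weighted sample: values with integer repeat counts.
values = [10, 20]
freqs = [2, 3]

# "Spreading" the data according to the frequency weights.
spread = [v for v, f in zip(values, freqs) for _ in range(f)]

# Weighted mean, and weighted variance with Bessel's correction (n - 1),
# where n is the total frequency count.
n = sum(freqs)
wmean = sum(v * f for v, f in zip(values, freqs)) / n
wvar = sum(f * (v - wmean) ** 2 for v, f in zip(values, freqs)) / (n - 1)

# These match the ordinary sample statistics on the spread-out data.
assert spread == [10, 10, 20, 20, 20]
assert wmean == statistics.mean(spread)
assert wvar == statistics.variance(spread)
```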
* Inverse-variance weighting is when each element is assigned a weight that is the inverse of its (known) variance. When all elements have the same expectation, using such weights for calculating the weighted average yields the smallest variance among all weighted averages. In the common formulation, these weights are known and not random (this seems related to reliability weights).
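The variance-minimizing property can be checked numerically; the sketch below (with assumed, illustrative known variances) compares inverse-variance weights against equal weights for two uncorrelated measurements:

```python
# Two unbiased measurements of the same quantity, with (assumed known)
# variances; inverse-variance weights are w_i proportional to 1/sigma_i^2.
variances = [1.0, 4.0]
weights = [1 / v for v in variances]
total = sum(weights)
norm = [w / total for w in weights]  # normalized (convex) weights

# Variance of a weighted average of uncorrelated measurements:
# Var = sum(w_i^2 * sigma_i^2). With inverse-variance weights this equals
# 1 / sum(1 / sigma_i^2), the minimum over all convex weight choices.
var_weighted = sum(w ** 2 * v for w, v in zip(norm, variances))
assert abs(var_weighted - 1 / sum(1 / v for v in variances)) < 1e-12

# Equal weights (the plain mean) do worse:
var_equal = sum(0.5 ** 2 * v for v in variances)
assert var_weighted < var_equal
```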
* Normalized (convex) weights are a set of weights that form a convex combination, i.e., each weight is a number between 0 and 1 and the sum of all weights is equal to 1. Any set of (non-negative) weights can be turned into normalized weights by dividing each weight by the sum of all weights.
: A related form are weights normalized to sum to the sample size (n). These (non-negative) weights sum to the sample size, and their mean is 1. Any set of weights can be normalized to the sample size by dividing each weight by the average of all weights. These weights have a nice relative interpretation: elements with a weight larger than 1 are more "important" (in terms of their relative influence on, say, the weighted mean) than the average observation, while elements with a weight smaller than 1 are less "important" than the average observation.
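The two normalizations above can be sketched as follows (the weight values are illustrative):

```python
# Illustrative non-negative weights.
weights = [2.0, 3.0, 5.0, 10.0]
n = len(weights)

# Normalized (convex) weights: divide by the sum; they sum to 1.
convex = [w / sum(weights) for w in weights]
assert abs(sum(convex) - 1) < 1e-12

# Weights normalized to the sample size: divide by the mean; they sum
# to n and have mean 1.
mean_w = sum(weights) / n
to_n = [w / mean_w for w in weights]
assert abs(sum(to_n) - n) < 1e-12

# Relative interpretation: an element with normalized weight above 1 is
# more "important" than the average observation.
assert to_n == [0.4, 0.6, 1.0, 2.0]
```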
* Inverse probability weighting is when each element is given a weight that is (proportional to) the inverse of its probability of selection, e.g., by using w_i = 1/π_i. With inverse probability weights, we learn how many items each element "represents" in the target population. Hence, the sum of such weights returns the size of the target population of interest. Inverse probability weights can be normalized to sum to 1 or to the sample size (n), and many of the calculations in the following sections will yield the same results.
: When a sample is EPSEM, all the selection probabilities are equal, and the inverses of the selection probabilities yield weights that are all equal to one another (they are all equal to N/n, where n is the sample size and N is the population size). Such a sample is called a self-weighting sample.
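A minimal sketch of the "representation" interpretation, assuming a hypothetical two-group design in which one group is over-sampled:

```python
# Hypothetical two-group population: group "a" (1,000 people) is sampled
# at 10%, group "b" (9,000 people) at 1%; weights are w = 1 / pi.
N_a, N_b = 1000, 9000
pi_a, pi_b = 0.10, 0.01

sample = [("a", 1 / pi_a)] * round(N_a * pi_a) + \
         [("b", 1 / pi_b)] * round(N_b * pi_b)

# Each weight says how many population members the sampled unit
# "represents", so the weights sum to the target population size.
total_w = sum(w for _, w in sample)
assert abs(total_w - (N_a + N_b)) < 1e-6
```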
There are also indirect ways of applying "weighted" adjustments. For example, existing cases may be duplicated to impute missing observations (e.g., from non-response), with variance estimated using methods such as multiple imputation. A complementary treatment is to remove some cases (give them a weight of 0), for example when wanting to reduce the influence of over-sampled groups that are less essential for some analysis. Both cases are similar in nature to inverse probability weighting, but in practice they add or remove rows of data (making the input potentially simpler to use in some software implementations) instead of applying an extra column of weights. Nevertheless, the consequences of such implementations are similar to just using weights. So while in the case of removing observations the data can easily be handled by common software implementations, the case of adding rows requires special adjustments to the uncertainty estimations. Not doing so may lead to erroneous conclusions (i.e., there is no free lunch when using an alternative representation of the underlying issues).
The term "haphazard weights", coined by Kish, refers to weights that correspond to unequal selection probabilities, but ones that are not related to the expectation or variance of the selected elements.
Haphazard weights with estimated ratio-mean () - Kish's design effect
= Formula =
When taking an unrestricted sample of ''n'' elements, we can then randomly split these elements into ''H'' disjoint strata, each of them containing some n_h elements, so that the n_h sum to n. All elements in each stratum ''h'' have some (known) non-negative weight w_h assigned to them. The weight can be produced by the inverse of some unequal selection probability for elements in each stratum (i.e., inverse probability weighting, following something like post-stratification). In this setting, Kish's design effect, for the increase in variance of the sample weighted mean due to this design (reflected in the weights) versus SRS of some outcome variable y (when there is no correlation between the weights and the outcome, i.e., haphazard weights), is:
: Deff = n · Σ_h n_h w_h² / (Σ_h n_h w_h)²
By treating each item as coming from its own stratum, Kish (in 1992) simplified the above formula to the following (well known) version:[Henry, Kimberly A., and Richard Valliant. "A design effect measure for calibration weighting in single-stage samples." Survey Methodology 41.2 (2015): 315–331.]
: Deff = n · Σ_i w_i² / (Σ_i w_i)²
This version of the formula is valid both when one stratum has several observations taken from it (each having the same weight) and when there are many strata, each with one observation taken from it but several of them sharing the same probability of selection. While the interpretation is slightly different, the calculation in the two scenarios comes out the same.
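The simplified formula can be computed directly; the following sketch (the function name kish_deff is our own) also illustrates two of its basic properties:

```python
# A minimal sketch of Kish's design effect for unequal ("haphazard")
# weights: deff = n * sum(w_i^2) / (sum(w_i))^2.
def kish_deff(weights):
    n = len(weights)
    return n * sum(w ** 2 for w in weights) / sum(weights) ** 2

# Equal weights give no design effect:
assert kish_deff([5.0, 5.0, 5.0, 5.0]) == 1.0

# Unequal weights inflate the variance of the weighted mean:
assert kish_deff([1.0, 1.0, 1.0, 3.0]) > 1.0

# The formula is invariant to rescaling all the weights:
assert abs(kish_deff([1, 2, 3]) - kish_deff([10, 20, 30])) < 1e-12
```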
Notice that Kish's definition of the design effect is closely tied to the coefficient of variation (also termed ''relative variance'', ''relvariance'' or ''relvar'' for short) of the weights (when using the uncorrected, population-level, sample standard deviation for estimation). This has several notations in the literature:
: Deff = 1 + L = 1 + CV(w)² = 1 + σ_w² / w̄²
Where σ_w² is the population variance of the weights and w̄ is their mean. When the weights are normalized to the sample size (so that their sum is equal to n and their mean is equal to 1), then w̄ = 1 and the formula reduces to 1 + σ_w². While it is true we assume the weights are fixed, we can think of their variance as the variance of an empirical distribution defined by sampling (with equal probability) one weight from our set of weights (similar to how we would think about the correlation of x and y in a simple linear regression).
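The tie between Kish's formula and the relvariance of the weights is an algebraic identity, which the following sketch verifies numerically for an arbitrary (illustrative) set of weights:

```python
# Check that Kish's formula equals 1 + relvariance of the weights,
# 1 + sigma_w^2 / wbar^2 (uncorrected, population-level variance).
weights = [1.0, 2.0, 2.0, 4.0, 6.0]
n = len(weights)

deff = n * sum(w ** 2 for w in weights) / sum(weights) ** 2

wbar = sum(weights) / n
sigma2_w = sum((w - wbar) ** 2 for w in weights) / n  # uncorrected variance
assert abs(deff - (1 + sigma2_w / wbar ** 2)) < 1e-12

# With weights normalized to sum to n (mean 1), this is just 1 + sigma_w^2:
norm = [w / wbar for w in weights]
sigma2_norm = sum((w - 1) ** 2 for w in norm) / n
assert abs(deff - (1 + sigma2_norm)) < 1e-12
```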
= Assumptions and proofs =
The above formula gives the increase in the variance of the weighted mean based on "haphazard" weights, where y are observations selected using unequal selection probabilities (with no within-cluster correlation, and no relationship to the expectation or variance of the outcome measurement), and y' are the observations we would have had from a simple random sample.
From a model-based perspective,[Gabler, Siegfried, Sabine Häder, and Partha Lahiri. "A model based justification of Kish's formula for design effects for weighting and clustering." Survey Methodology 25 (1999): 105–106.] this formula holds when all n observations y_i are (at least approximately) uncorrelated, with the same variance σ² in the response variable of interest (y). It also assumes the weights themselves are not random variables but rather some known constants, e.g., the inverse of the probability of selection for some pre-determined and known sampling design.
The following is a simplified proof for when there are no clusters (i.e., no intraclass correlation between elements of the sample) and each stratum includes only one observation:
Transitions:
# From the definition of the weighted mean.
# Using the normalized (convex) weights definition (weights that sum to 1).
# Sum of uncorrelated random variables.
# If the weights are constants (from the basic properties of the variance). Another way to say this is that the weights are known upfront for each observation ''i''.
# When all observations have the same variance σ².
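Under the model-based assumptions above (fixed convex weights, uncorrelated outcomes with equal variance), the variance of the weighted mean can be checked by simulation; this Monte Carlo sketch uses illustrative weights:

```python
import random

random.seed(42)

# Fixed convex weights (illustrative) and i.i.d. outcomes with variance
# sigma^2; then Var(weighted mean) = sigma^2 * sum(w_i^2), i.e.
# (1 + L) times the SRS variance sigma^2 / n.
weights = [0.1, 0.1, 0.2, 0.2, 0.4]
n, sigma = len(weights), 1.0
reps = 100_000

draws = [sum(w * random.gauss(0, sigma) for w in weights) for _ in range(reps)]
mean = sum(draws) / reps
var_weighted = sum((d - mean) ** 2 for d in draws) / reps

theory = sigma ** 2 * sum(w ** 2 for w in weights)  # = 0.26
deff = var_weighted / (sigma ** 2 / n)              # ratio vs. SRS variance

assert abs(var_weighted - theory) < 0.01
assert abs(deff - n * sum(w ** 2 for w in weights)) < 0.05
```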
The conditions on y are trivially held if the y observations are i.i.d. with the same expectation and variance. In such a case the formula holds, and the variance σ² can be estimated using the ordinary sample variance of y. If the y's do not all have the same expectation, then we cannot use that estimated variance in the calculation, since the estimation assumes that all y_i share the same expectation. Specifically, if there is a correlation between the weights and the outcome variable y, then the expectation of y is not the same for all observations (but rather depends on the specific weight value of each observation). In such a case, while the design effect formula might still be correct (if the other conditions are met), it would require a different estimator for the variance of the weighted mean. For example, it might be better to use a weighted variance estimator.
If different y_i's have different variances, then while the weighted variance could capture the correct population-level variance, Kish's formula for the design effect may no longer be true.
A similar issue happens if there is some correlation structure in the samples (such as when using cluster sampling).
= Alternative definitions in the literature =
It is worth noting that some sources in the literature give the following alternative definition to Kish's design effect, stating it is: "the ratio of the variance of the weighted survey mean under disproportionate stratified sampling to the variance under proportionate stratified sampling when all stratum unit variances are equal".
This definition can be slightly misleading, since it might be interpreted to mean that "proportionate stratified sampling" was achieved via stratified sampling in which a pre-determined number of units is selected from each stratum. Such a selection will yield reduced variance (as compared with a simple random sample), since it removes some of the uncertainty in the specific number of elements per stratum. This is different from Kish's original definition, which compared the variance of the design to that of a simple random sample (which would yield approximately proportionate allocation, but not exactly, due to the variance in the sample sizes in each stratum). Park and Lee (2006) reflect on this by stating that "The rationale behind the above derivation is that the loss in precision of the weighted mean due to haphazard unequal weighting can be approximated by the ratio of the variance under disproportionate stratified sampling to that under the proportionate stratified sampling". How far these two definitions differ from each other is not mentioned in the literature. In his book from 1977, Cochran provides a formula for the proportional increase in variance due to deviation from optimum allocation (what, in Kish's formulas, would be called ''L''). However, the connection from that formula to Kish's ''L'' is not apparent.
= Alternative naming conventions =
Earlier papers would use the generic term. As more definitions of design effect appeared, Kish's design effect for unequal selection probabilities came to be denoted with its own notation, or simply abbreviated for short.[Valliant, Richard, Jill A. Dever, and Frauke Kreuter. Practical Tools for Designing and Weighting Survey Samples. New York: Springer, 2013.] Kish's design effect is also known as the "Unequal Weighting Effect" (or just UWE), a term coined by Liu et al. in 2002.[Liu, Jun, Vince Iannacchione, and Margie Byron. "Decomposing design effects for stratified sampling." Proceedings of the Survey Research Methods Section, American Statistical Association, 2002.]
When the outcome correlates with the selection probabilities
= Spencer's Deff for estimated total =
The estimator for the total is the "p-expanded with replacement" estimator (a.k.a. the ''pwr-estimator'', due to Hansen and Hurwitz). It is based on a simple random sample with replacement (denoted ''SIR'') of ''m'' items from a population of size ''N''. Each item ''k'' (k from 1 to N) has a probability p_k of being drawn in a single draw (i.e., a multinomial distribution). The probability that a specific item will appear at least once in our sample is 1 - (1 - p_k)^m. The "p-expanded with replacement" value of a drawn item is y_k/p_k, whose single-draw expectation is the population total of y. Hence the pwr-estimator, the average of the ''m'' expanded values, is an unbiased estimator for the sum total of y.
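A sketch of the pwr-estimator, using a small hypothetical population; the probabilities and sample size are illustrative:

```python
import random

random.seed(7)

# Hypothetical population values and single-draw probabilities.
y = [10.0, 20.0, 30.0, 40.0]   # population total = 100
p = [0.4, 0.3, 0.2, 0.1]       # probabilities sum to 1 (multinomial draws)
m = 100_000                    # with-replacement sample size

# Draw m items, expand each drawn value by 1/p_k, and average:
draws = random.choices(range(len(y)), weights=p, k=m)
t_pwr = sum(y[k] / p[k] for k in draws) / m

# The pwr-estimator is unbiased for the total of y.
assert abs(t_pwr - sum(y)) < 3.0
```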
In 2000, Bruce D. Spencer proposed a formula for estimating the design effect for the variance of estimating the total (not the mean) of some quantity, when there is correlation between the selection probabilities of the elements and the outcome variable of interest.[Spencer, Bruce D. "An approximate design effect for unequal weighting when measurements may correlate with selection probabilities." Survey Methodology 26 (2000): 137–138.]
In this setup, a sample of size ''n'' is drawn (with replacement) from a population of size ''N''. Each item ''i'' is drawn with probability P_i (where the P_i sum to 1, i.e., a multinomial distribution). The selection probabilities are used to define the normalized (convex) weights. Notice that for some random set of ''n'' items, the sum of the weights will be equal to 1 only in expectation, with some variability of the sum around it (i.e., the sum of elements from a Poisson binomial distribution). The relationship between y_i and P_i is defined by the following (population) simple linear regression:
: y_i = α + β·P_i + ε_i
Where y_i is the outcome of element ''i'', which linearly depends on P_i with intercept α and slope β. The residual from the fitted line is ε_i. We can also define the population variances of the outcome and of the residuals as σ_y² and σ_ε², and the correlation between y and P as ρ.
Spencer's (approximate) design effect, for estimating the total of ''y'', is:[Park, Inho, and Hyunshik Lee. "The design effect: do we know all about it." Proceedings of the Annual Meeting of the American Statistical Association, 2001.]
:
Where:
* â estimates the intercept α,
* B̂ estimates the slope β,
* σ̂_y² estimates the population variance σ_y², and
* L is the relative variance of the weights, as defined in Kish's formula: L = σ_w²/w̄².
This assumes that the regression model fits well, so that the probability of selection and the residuals are independent, since this leads to the residuals, and the squared residuals, being uncorrelated with the weights, i.e., ρ(w, ε) = 0 and ρ(w, ε²) = 0.
When the population size (N) is very large, the formula can be written as:
:
(since , where )
This approximation assumes that the linear relationship between ''P'' and ''y'' holds, and also that the correlations of the weights with the errors, and with the squared errors, are both zero, i.e., ρ(w, ε) = 0 and ρ(w, ε²) = 0.
We notice that if the estimated slope is zero (B̂ = 0), then â = ȳ (i.e., the average of ''y''). In such a case, the formula reduces to
: Deff ≈ (1 + L) + (ȳ² / σ̂_y²) · L
Only if the variance of ''y'' is much larger than its mean is the right-most term close to 0, which reduces Spencer's design effect (for the estimated total) to Kish's design effect (for the ratio mean): 1 + L. Otherwise, the two formulas will yield different results, which demonstrates the difference between the design effect of the total and that of the mean.
= Park and Lee's Deff for estimated ratio-mean =
In 2001, Park and Lee extended Spencer's formula to the case of the ratio-mean (i.e., estimating the mean by dividing the estimator of the total by the estimator of the population size). It is:
:
Where:
* is the (estimated) coefficient of variation of the probabilities of selection.
Park and Lee's formula is exactly equal to Kish's formula when there is no correlation between the outcome and the selection probabilities. Both formulas relate to the design effect of the mean of ''y'' (while Spencer's Deff relates to the estimation of the total).
In general, the Deff for the total tends to be less efficient than the Deff for the ratio mean when the correlation is small, and the correlation impacts the efficiency of both design effects.
Cluster sampling
For data collected using cluster sampling we assume the following structure:
* ''m'' observations in each cluster, ''K'' clusters, and a total of n = mK observations.
* The observations have a block correlation matrix in which every pair of observations from the same cluster is correlated with an intraclass correlation ρ, while every pair from different clusters is uncorrelated. I.e., for every pair of observations y_i and y_j from the same cluster, corr(y_i, y_j) = ρ, and for two items from two different clusters, corr(y_i, y_j) = 0.
* An element from any cluster is assumed to have the same variance: Var(y_i) = σ².
When clusters are all of the same size ''m'', the design effect, proposed by Kish in 1965 (and later revisited by others), is given by:
: Deff = 1 + (m - 1)ρ
It is sometimes also denoted as .
In various papers, when cluster sizes are not equal, the above formula is also used with the average cluster size in place of the fixed cluster size.[Kish, L. (1987). Weighting in . The Survey Statistician, June 1987. (This paper doesn't seem to be available online, but it is referenced in several places as the original source of this formula.)] In such cases, Kish's formula (using the average cluster size) serves as a conservative (upper bound) estimate of the exact design effect.
Alternative formulas exist for unequal cluster sizes. Follow-up work has discussed the sensitivity of using the average cluster size under various assumptions.
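The equal-cluster-size formula can be checked with a simple random-effects simulation, in which a shared cluster effect induces the intraclass correlation ρ (all parameter values are illustrative):

```python
import random

random.seed(1)

# Equal-size clusters with intraclass correlation rho, induced by a
# shared cluster effect: y = u_cluster + e, with Var(u) = rho * sigma^2
# and Var(e) = (1 - rho) * sigma^2.
K, m = 50, 5
rho, sigma2 = 0.2, 1.0
n = K * m
reps = 10_000

means = []
for _ in range(reps):
    total = 0.0
    for _ in range(K):
        u = random.gauss(0, (rho * sigma2) ** 0.5)
        total += sum(u + random.gauss(0, ((1 - rho) * sigma2) ** 0.5)
                     for _ in range(m))
    means.append(total / n)

mu = sum(means) / reps
var_hat = sum((x - mu) ** 2 for x in means) / reps

# Ratio against the SRS variance of the mean, sigma^2 / n:
deff = var_hat / (sigma2 / n)
assert abs(deff - (1 + (m - 1) * rho)) < 0.1
```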
Unequal selection probabilities with cluster sampling
In his paper from 1987, Kish proposed a combined design effect that incorporates both the effect of weighting for unequal selection probabilities and the effect of cluster sampling:[Gabler, Siegfried, Sabine Häder, and Peter Lynn. Design Effects for Multiple Design Samples. No. 2005-12, ISER Working Paper Series, 2005.]
: Deff = [n · Σ_i w_i² / (Σ_i w_i)²] · [1 + (m̄ - 1)ρ]
With notations similar to above.
This formula received a model-based justification, proposed in 1999 by Gabler et al.
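A sketch of the combined design effect, assuming Kish's weighting term times the clustering term (with weights w_i, an average cluster size, and intraclass correlation ρ); the helper names are our own:

```python
# Kish's weighting design effect times the clustering design effect;
# helper names are our own.
def deff_weighting(weights):
    n = len(weights)
    return n * sum(w ** 2 for w in weights) / sum(weights) ** 2

def deff_combined(weights, mean_cluster_size, rho):
    return deff_weighting(weights) * (1 + (mean_cluster_size - 1) * rho)

# With equal weights, only the clustering term remains:
assert abs(deff_combined([1, 1, 1, 1], 4, 0.05) - 1.15) < 1e-12

# Unequal weights inflate the combined effect further:
assert deff_combined([1, 1, 1, 3], 4, 0.05) > 1.15
```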
Stratified sampling with unequal selection probabilities and cluster sampling
In 2000, Liu and Aragon proposed a decomposition of the unequal selection probabilities design effect for different strata in stratified sampling. In 2002, Liu et al. extended that work to account for stratified samples where each stratum has its own set of unequal selection probability weights. The cluster sampling is either global or per stratum. Similar work was done by Park et al. in 2003.
Uses
Deff is primarily used for several purposes:[Cochran, W. G. (1977). Sampling Techniques (3rd ed.). Nashville, TN: John Wiley & Sons. ]
* When developing the design, to evaluate its efficiency: i.e., whether there is potentially "too much" increase in variance due to some decision, or whether the new design is more efficient (e.g., as in stratified sampling).
* As a way of guiding sample size (overall, per stratum, per cluster, etc.).
* When evaluating potential problems with a post-hoc weighting analysis (e.g., from non-response adjustments). There is no universal rule of thumb for which design effect value is "too high", but the literature suggests levels at which the design effect is likely to draw some attention.
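One common use in sample-size planning is via the effective sample size, n_eff = n / Deff; a minimal sketch (the function names are our own):

```python
# Effective sample size, and the Deff-inflated sample size needed to
# match a given SRS precision; function names are our own.
def effective_sample_size(n, deff):
    return n / deff

def required_sample_size(n_srs, deff):
    return n_srs * deff

# With Deff = 1.5, a complex sample of 1,500 carries the information of
# 1,000 SRS observations, and matching the precision of an SRS of 1,000
# requires drawing 1,500 units under the complex design.
assert effective_sample_size(1500, 1.5) == 1000
assert required_sample_size(1000, 1.5) == 1500
```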
In his 1995 paper, Kish proposed the following categorization of when Deff is, and is not, useful:
* Design effect is ''unnecessary'' when: the source population is closely i.i.d., or when the sample was drawn as a simple random sample. It is also less useful when the sample size is relatively small (at least partially, for practical reasons), and when only descriptive statistics (i.e., point estimation) are of interest. It is also suggested that if standard errors are needed for only a handful of statistics, it may be acceptable to ignore Deff.
* Design effect is ''necessary'' when: averaging sampling errors for different variables measured on the same survey; when averaging the same measured quantity from several surveys over a period of time; when extrapolating from the error of simple statistics (e.g., the mean) to more complex ones (e.g., regression coefficients); when designing a future survey (but with proper caution); and as an aiding statistic to identify glaring issues with the data or its analysis (ranging from mistakes to the presence of outliers).
When planning the sample size, work has been done to correct the design effect so as to separate the interviewer effect (measurement error) from the effects of the sampling design on the sampling variance.
While Kish originally hoped the design effect could be as agnostic as possible to the underlying distribution of the data, the sampling probabilities, their correlations, and the statistics of interest, follow-up research has shown that these do influence the design effect. Hence, careful attention should be paid to these properties when deciding which Deff calculation to use, and how to use it.
History
The term "design effect" was introduced by Leslie Kish in 1965 in his book ''Survey Sampling''. In his paper from 1995,[Kish, Leslie. "Methods for design effects." Journal of Official Statistics 11.1 (1995): 55]
Kish mentions that a similar concept, termed the "Lexis ratio", was described at the end of the 19th century. The closely related intraclass correlation was described by Fisher in 1950, while computations of ratios of variances were already published by Kish and others from the late 1940s to the 1950s. One of the precursors of Kish's definition was the work done by Cornfield in 1951.[Cochran, William G. "Modern methods in the sampling of human populations." American Journal of Public Health and the Nation's Health 41.6 (1951): 647–668.][Park, Inho, and Hyunshik Lee. "Design effects for the weighted mean and total estimators under complex survey sampling." Survey Methodology 30.2 (2004): 183–193. Statistics Canada, Catalogue No. 12-001.]
In his original book from 1965, Kish proposed the general definition of the design effect: the ratio of the variances of two estimators, one from a sample with some design and the other from a simple random sample. In the same book, Kish proposed a formula for the design effect of cluster sampling (with intraclass correlation), as well as the famous design effect formula for unequal probability sampling. These are often known as "Kish's design effect", and were later merged into a single formula.
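As an illustration, the two classic formulas attributed to Kish can be sketched in plain Python (the function names below are illustrative, not from any survey library): the cluster-sampling form Deff = 1 + (m − 1)ρ, where m is the cluster size and ρ the intraclass correlation, and the unequal-weighting form Deff = n·Σw²/(Σw)².

```python
def kish_deff_weights(weights):
    """Kish's design effect for unequal probability sampling (weighting):
    Deff = n * sum(w_i^2) / (sum(w_i))^2."""
    n = len(weights)
    total = sum(weights)
    return n * sum(w * w for w in weights) / (total * total)

def deff_cluster(cluster_size, icc):
    """Design effect for (equal-size) cluster sampling:
    Deff = 1 + (m - 1) * rho, with m the cluster size and
    rho the intraclass correlation."""
    return 1.0 + (cluster_size - 1.0) * icc

# Equal weights give Deff = 1: no variance inflation relative to SRS.
print(kish_deff_weights([1, 1, 1, 1]))  # 1.0
# Unequal weights inflate the variance: 4 * 28 / 64 = 1.75.
print(kish_deff_weights([1, 1, 1, 5]))  # 1.75
# Clusters of size 20 with a modest ICC of 0.05 nearly double the variance.
print(deff_cluster(20, 0.05))  # 1.95
```

Note how even a small intraclass correlation produces a substantial design effect once clusters are moderately large, which is why cluster surveys often need considerably larger samples than an SRS of the same nominal size.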
See also
* Variance inflation factor (VIF). VIF and Deff are similar concepts, in that both are ratios of the variance of an estimator of some parameter under alternative models.
* Effective sample size
References
{{reflist}}
Further reading
The Impact of Typical Survey Weighting Adjustments on the Design Effect: A Case Study
Medical statistics
Design of experiments