Glossary of Experimental Design

A glossary of terms used in experimental research.


Concerned fields

* Statistics
* Experimental design
* Estimation theory


Glossary

* Alias: When the estimate of an effect also includes the influence of one or more other effects (usually high-order interactions), the effects are said to be aliased (see confounding). For example, if the estimate of effect ''D'' in a four-factor experiment actually estimates (''D'' + ''ABC''), then the main effect ''D'' is aliased with the 3-way interaction ''ABC''. Note: This causes no difficulty when the higher-order interaction is either non-existent or insignificant.
* Analysis of variance (ANOVA): A mathematical process for separating the variability of a group of observations into assignable causes and setting up various significance tests.
* Balanced design: An experimental design where all cells (i.e. treatment combinations) have the same number of observations.
* Blocking: A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. Note: The reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization.
* Center Points: Points at the center value of all factor ranges.
* Coding Factor Levels: Transforming the scale of measurement for a factor so that the high value becomes +1 and the low value becomes -1 (see scaling). After coding all factors in a 2-level full factorial experiment, the design matrix has all orthogonal columns. Coding is a simple linear transformation of the original measurement scale. If the "high" value is ''X''h and the "low" value is ''X''L (in the original scale), then the coding transformation takes any original ''X'' value and converts it to (''X'' − ''a'')/''b'', where ''a'' = (''X''h + ''X''L)/2 and ''b'' = (''X''h − ''X''L)/2. To go back to the original measurement scale, take the coded value, multiply it by ''b'', and add ''a'': ''X'' = ''b'' × (coded value) + ''a''. For example, if the factor is temperature with a high setting of 65°C and a low setting of 55°C, then ''a'' = (65 + 55)/2 = 60 and ''b'' = (65 − 55)/2 = 5. The center point (where the coded value is 0) has a temperature of 5(0) + 60 = 60°C.
* Comparative design: A design that allows the (typically mean-unbiased) estimation of the difference in factor effects, especially the difference in treatment effects. The estimation of differences between treatment effects can be made with greater reliability than the estimation of absolute treatment effects.
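As a minimal sketch in plain Python, the coding transformation and its inverse look like this; the temperature values follow the 65°C/55°C example in the Coding Factor Levels entry:

```python
def code_level(x, x_low, x_high):
    """Map a factor value from its original scale to the coded [-1, +1] scale."""
    a = (x_high + x_low) / 2  # midpoint of the factor range
    b = (x_high - x_low) / 2  # half-range
    return (x - a) / b

def decode_level(coded, x_low, x_high):
    """Map a coded value back to the original scale: X = b * coded + a."""
    a = (x_high + x_low) / 2
    b = (x_high - x_low) / 2
    return b * coded + a

# Temperature example from the entry: low = 55 degrees C, high = 65 degrees C.
print(code_level(65, 55, 65))   # 1.0  (high setting codes to +1)
print(code_level(55, 55, 65))   # -1.0 (low setting codes to -1)
print(decode_level(0, 55, 65))  # 60.0 (the coded center point is 60 degrees C)
```

Applying `code_level` to every factor of a 2-level full factorial yields the all-orthogonal-column design matrix mentioned in the entry.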
* Confounding: A confounding design is one where some treatment effects (main or interactions) are estimated by the same linear combination of the experimental observations as some blocking effects. In this case, the treatment effect and the blocking effect are said to be confounded. Confounding is also used as a general term to indicate that the value of a main effect estimate comes from both the main effect itself and also contamination or bias from higher-order interactions. Note: Confounding designs naturally arise when full factorial designs have to be run in blocks and the block size is smaller than the number of different treatment combinations. They also occur whenever a fractional factorial design is chosen instead of a full factorial design.
* Control group: A set of experimental units to which incidental treatments are applied but not main treatments. For example, in applying a herbicide as one treatment, plots receiving that treatment might be driven over by a machine applying the herbicide, but plots not receiving the herbicide would not normally be driven over. The machine traffic is an incidental treatment. If there were a concern that the machine traffic might have an effect on the variable being measured (e.g. death of strawberry plants), then a control treatment would receive the machine traffic but no herbicide. Control groups are a way of eliminating the possibility of incidental treatments being the cause of measured effects: the incidental treatments are controlled for. Compare treatment groups. A treatment that is only the absence of the manipulation being studied is simply one of the treatments and not a control, though it is now common to refer to a non-manipulated treatment as a control.
* Crossed factors: See factors below.
* Design: A set of experimental runs which allows you to fit a particular model and estimate your desired effects.
* Design matrix: A matrix description of an experiment that is useful for constructing and analyzing experiments.
* Design of Experiments (DOE): A systematic, rigorous approach to engineering problem-solving that applies principles and techniques at the data-collection stage so as to ensure the generation of valid, defensible, and supportable engineering conclusions.
* Design Point: A single combination of settings for the independent variables of an experiment. A design of experiments results in a set of design points, and each design point is executed one or more times, with the number of repetitions based on the required statistical significance for the experiment.
* Effect (of a factor): How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect. A treatment effect may be assumed to be the same for each experimental unit, by the assumption of treatment-unit additivity; more generally, the treatment effect may be the average effect. Other effects may be block effects. (For a factor A with two levels, scaled so that low = -1 and high = +1, the effect of A has a mean-unbiased estimator that is evaluated by subtracting the average observed response when A = -1 from the average observed response when A = +1 and dividing the result by 2; division by 2 is needed because the -1 level is 2 scaled units away from the +1 level.)
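The two-level estimator in the parenthetical above can be sketched directly; the response values below are hypothetical, chosen only to illustrate the arithmetic:

```python
def main_effect(levels, responses):
    """Estimate a main effect from coded levels (-1/+1) and observed responses.

    Effect = (mean response at +1 - mean response at -1) / 2; the divisor 2
    reflects the two coded units separating the -1 and +1 levels.
    """
    hi = [y for a, y in zip(levels, responses) if a == +1]
    lo = [y for a, y in zip(levels, responses) if a == -1]
    return (sum(hi) / len(hi) - sum(lo) / len(lo)) / 2

# Hypothetical coded column A and responses y from a small 2-level experiment.
A = [-1, +1, -1, +1]
y = [10.0, 14.0, 12.0, 16.0]
print(main_effect(A, y))  # 2.0: mean at +1 is 15, mean at -1 is 11, (15-11)/2
```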
* Error: Unexplained variation in a collection of observations. See errors and residuals in statistics. Note: Experimental designs typically require understanding of both random error and lack-of-fit error.
* Experimental unit: The entity to which a specific treatment combination is applied. For example, an experimental unit can be a
** PC board
** silicon wafer
** tray of components simultaneously treated
** individual agricultural plant
** plot of land
** automotive transmission
** living organism or part of one
* Factors: Process inputs that an investigator manipulates to cause a corresponding change in the output. Some factors cannot be controlled by the experimenter but may affect the responses. These uncontrolled factors should be measured and used in the data analysis if their effect is significant. Note: The inputs can be discrete or continuous.
** Crossed factors: Two factors are crossed if every level of one occurs with every level of the other in the experiment.
** Nested factors: A factor "A" is nested within another factor "B" if the levels or values of "A" are different for every level or value of "B". Note: Nested factors or effects have a hierarchical relationship.
* Fixed effect: An effect associated with an input variable that has a limited number of levels, or in which only a limited number of levels are of interest to the experimenter.
* Interaction: Occurs when the effect of one factor on a response depends on the level of another factor or factors.
* Lack-of-fit error: Error that occurs when the analysis omits one or more important terms or factors from the process model. Note: Including replication in a designed experiment allows separation of experimental error into its components: lack of fit and random (pure) error.
* Model: A mathematical relationship which relates changes in a given response to changes in one or more factors.
* Nested Factors: See factors above.
* Orthogonality: Two vectors of the same length are orthogonal if the sum of the products of their corresponding elements is 0. Note: An experimental design is orthogonal if the effects of any factor balance out (sum to zero) across the effects of the other factors.
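The orthogonality of factor columns in a 2-level full factorial can be checked with the sum-of-products definition above; a small sketch:

```python
from itertools import product

def dot(u, v):
    """Sum of products of corresponding elements; zero means orthogonal."""
    return sum(a * b for a, b in zip(u, v))

# Coded design matrix of a 2^3 full factorial: one column per factor.
runs = list(product([-1, +1], repeat=3))
A = [r[0] for r in runs]
B = [r[1] for r in runs]
C = [r[2] for r in runs]

# Every pair of factor columns has dot product zero, so the design is orthogonal.
print(dot(A, B), dot(A, C), dot(B, C))  # 0 0 0
```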
* Paradigm: A model created given the basic design, the hypothesis, and the particular conditions for the experiment.
* Random effect: An effect associated with input variables chosen at random from a population having a large or infinite number of possible values.
* Random error: Error that occurs due to natural variation in the process. Note: Random error is typically assumed to be normally distributed with zero mean and a constant variance. Note: Random error is also called experimental error.
* Randomization: A schedule for allocating treatment material and for conducting treatment combinations in a designed experiment such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in the subsequent runs. Note: The importance of randomization cannot be overstressed. Randomization is necessary for conclusions drawn from the experiment to be correct, unambiguous, and defensible.
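A randomized run order can be produced by shuffling the treatment combinations; a minimal sketch (the fixed seed is only for reproducibility of this illustration, not a recommendation for real experiments):

```python
import random
from itertools import product

# Treatment combinations of a 2^3 factorial, in standard (systematic) order.
standard_order = list(product([-1, +1], repeat=3))

# Shuffle so that the conditions in one run neither depend on nor predict
# the conditions of adjacent runs.
rng = random.Random(42)  # fixed seed: sketch-only, for a repeatable example
run_order = standard_order[:]
rng.shuffle(run_order)

print(len(run_order))  # 8 runs, same combinations in a randomized sequence
```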
* Regression discontinuity design: A design in which assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold.
* Replication: Performing the same treatment combination more than once. Note: Including replication allows an estimate of the random error independent of any lack-of-fit error.
* Resolution: In fractional factorial designs, "resolution" describes the degree to which estimated main effects are aliased (or confounded) with estimated higher-order interactions (2-factor interactions, 3-factor interactions, etc.). In general, the resolution of a design is one more than the smallest-order interaction that is aliased with some main effect. If some main effects are confounded with some 2-factor interactions, the resolution is 3. Note: Full factorial designs have no confounding and are said to have resolution "infinity". For most practical purposes, a resolution 5 design is excellent and a resolution 4 design may be adequate. Resolution 3 designs are useful as economical screening designs.
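The aliasing behind resolution can be made concrete with the D = ABC example from the Alias entry: in a half-fraction of a 2^4 design generated by setting column D equal to the ABC interaction, D and ABC are indistinguishable. A minimal sketch:

```python
from itertools import product

# 2^(4-1) fractional factorial: full factorial in A, B, C, with generator D = ABC.
base = list(product([-1, +1], repeat=3))
design = [(a, b, c, a * b * c) for a, b, c in base]

D = [row[3] for row in design]
ABC = [a * b * c for a, b, c, _ in design]

# The D column is identical to the ABC interaction column, so any estimate of
# the main effect D actually estimates (D + ABC): D is aliased with ABC.
print(D == ABC)  # True
```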
* Response(s): The output(s) of a process. Sometimes called dependent variable(s).
* Response surface: A designed experiment that models the quantitative response, especially for the short-term goal of improving a process and the longer-term goal of finding optimum factor values. Traditionally, response surfaces have been modeled with quadratic polynomials, whose estimation requires that every factor have three levels.
* Rotatability: A design is rotatable if the variance of the predicted response at any point x depends only on the distance of x from the design center point. A design with this property can be rotated around its center point without changing the prediction variance at x. Note: Rotatability is a desirable property for response surface designs (i.e. quadratic model designs).
* Scaling factor levels: Transforming factor levels so that the high value becomes +1 and the low value becomes -1.
* Screening design: A designed experiment that identifies which of many factors have a significant effect on the response. Note: Typically screening designs have more than 5 factors.
* Test plan: A written document that gives a specific listing of the test procedures and sequence to be followed.
* Treatment: A specific combination of factor levels whose effect is to be compared with other treatments.
* Treatment combination: The combination of the settings of several factors in a given experimental trial. Also known as a run.
* Treatment group: See control group.
* Variance components: Partitioning of the overall variation into assignable components.
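The partitioning of variation described in the ANOVA and variance components entries can be sketched for the one-way case, where the total sum of squares splits exactly into between-group and within-group parts; the data below are hypothetical:

```python
def anova_partition(groups):
    """Partition total variation into between-group and within-group sums of squares."""
    all_obs = [y for g in groups for y in g]
    grand = sum(all_obs) / len(all_obs)  # grand mean over all observations
    ss_total = sum((y - grand) ** 2 for y in all_obs)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((y - sum(g) / len(g)) ** 2 for g in groups for y in g)
    return ss_total, ss_between, ss_within

# Hypothetical responses from three treatment groups of two observations each.
groups = [[10.0, 12.0], [14.0, 16.0], [9.0, 11.0]]
ss_t, ss_b, ss_w = anova_partition(groups)
print(ss_t, ss_b, ss_w)  # 34.0 28.0 6.0: SS_total = SS_between + SS_within
```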


See also

* Glossary of probability and statistics
* Notation in probability and statistics
* Glossary of clinical research
* List of statistical topics


References

* This article incorporates public-domain material from the National Institute of Standards and Technology.