The theory of conjoint measurement (also known as conjoint measurement or additive conjoint measurement) is a general, formal theory of continuous
quantity. It was independently discovered by the French economist
Gérard Debreu (1960) and by the American mathematical psychologist
R. Duncan Luce and statistician
John Tukey.
The theory concerns the situation where at least two natural attributes, ''A'' and ''X'', non-interactively relate to a third attribute, ''P''. It is not required that ''A'', ''X'' or ''P'' are known to be quantities. Via specific relations between the levels of ''P'', it can be established that ''P'', ''A'' and ''X'' are continuous quantities. Hence the theory of conjoint measurement can be used to quantify attributes in empirical circumstances where it is not possible to combine the levels of the attributes using a side-by-side operation or
concatenation
. The quantification of psychological attributes such as attitudes, cognitive abilities and utility is therefore logically plausible. This means that the scientific measurement of psychological attributes is possible. That is, like physical quantities, a magnitude of a psychological quantity may possibly be expressed as the product of a
real number
and a unit magnitude.
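As a purely hypothetical numerical sketch of the idea (not part of the theory's formal machinery), suppose the levels of ''A'' and ''X'' do contribute additively to ''P''. The resulting table of ''P'' values then exhibits the order structure the theory exploits, such as the independence (single cancellation) condition; the values of f and g below are invented solely for illustration:

```python
# Hypothetical additive conjoint structure: P(a, x) = f(a) + g(x).
# In practice only the ORDER of the P cells is observed; the f and g
# values here are invented solely to generate a well-behaved table.
f = {"a1": 1.0, "a2": 2.5, "a3": 4.0}  # assumed contributions of levels of A
g = {"x1": 0.5, "x2": 1.5, "x3": 3.0}  # assumed contributions of levels of X

P = {(a, x): f[a] + g[x] for a in f for x in g}

def single_cancellation_holds(P, A, X):
    """Independence: the order of any two A-levels must not depend on X."""
    for a1 in A:
        for a2 in A:
            orders = [P[(a1, x)] >= P[(a2, x)] for x in X]
            if any(orders) and not all(orders):
                return False
    return True

print(single_cancellation_holds(P, list(f), list(g)))  # True for an additive table
```

Because the table was generated additively, the check necessarily succeeds; with real ordinal data, a failure would count as evidence against additive structure.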
Application of the theory of conjoint measurement in psychology, however, has been limited. It has been argued that this is due to the high level of formal mathematics involved and to the theory's inability to account for the "noisy" data typically discovered in psychological research. It has been argued that the
Rasch model is a stochastic variant of the theory of conjoint measurement; however, this has been disputed (e.g., Karabatsos, 2001; Kyngdon, 2008). Order-restricted methods for conducting probabilistic tests of the cancellation axioms of conjoint measurement have been developed in the past decade (e.g., Karabatsos, 2001; Davis-Stober, 2009).
The theory of conjoint measurement is different from, but related to,
conjoint analysis, which is a statistical experimental methodology employed in
marketing
to estimate the parameters of additive utility functions. Different multi-attribute stimuli are presented to respondents, and different methods are used to measure their preferences about the presented stimuli. The coefficients of the utility function are estimated using alternative regression-based tools.
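As a minimal sketch of the regression step (all attribute levels, stimuli and ratings below are hypothetical, and a real study would use respondents' observed preferences), an additive utility function can be fit to ratings by dummy-coded least squares:

```python
import numpy as np

# Minimal sketch of the regression step in conjoint analysis.
# All levels, stimuli and ratings below are hypothetical.
levels_A = ["a1", "a2", "a3"]
levels_X = ["x1", "x2"]
stimuli = [(a, x) for a in levels_A for x in levels_X]

ratings = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # invented ratings

def row(a, x):
    # Dummy coding: drop the first level of each attribute, keep an intercept.
    return [1.0,
            1.0 if a == "a2" else 0.0,
            1.0 if a == "a3" else 0.0,
            1.0 if x == "x2" else 0.0]

D = np.array([row(a, x) for a, x in stimuli])
coef, *_ = np.linalg.lstsq(D, ratings, rcond=None)
print(coef)  # intercept followed by the part-worth estimates
```

With these invented ratings the fit is exact; the estimated coefficients are the part-worth utilities of the non-baseline attribute levels.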
Historical overview
In the 1930s, the
British Association for the Advancement of Science established the Ferguson Committee to investigate the possibility of psychological attributes being measured scientifically. The British physicist and measurement theorist
Norman Robert Campbell was an influential member of the committee. In its Final Report (Ferguson, ''et al.'', 1940), Campbell and the Committee concluded that because psychological attributes were not capable of sustaining concatenation operations, such attributes could not be continuous quantities. Therefore, they could not be measured scientifically. This had important ramifications for psychology, the most significant of these being the creation in 1946 of the ''operational theory of measurement'' by Harvard psychologist
Stanley Smith Stevens. Stevens' non-scientific theory of measurement is widely held as definitive in psychology and the behavioural sciences generally.
Whilst the German mathematician
Otto Hölder (1901) anticipated features of the theory of conjoint measurement, it was not until the publication of Luce & Tukey's seminal 1964 paper that the theory received its first complete exposition. Luce & Tukey's presentation was algebraic and is therefore considered more general than Debreu's (1960)
topological work, the latter being a special case of the former. In the first article of the inaugural issue of the ''Journal of Mathematical Psychology'', Luce & Tukey proved that via the theory of conjoint measurement, attributes not capable of concatenation could be quantified. N.R. Campbell and the Ferguson Committee were thus proven wrong. That a given psychological attribute is a continuous quantity is a logically coherent and empirically testable hypothesis.
Appearing in the next issue of the same journal were important papers by
Dana Scott (1964), who proposed a hierarchy of cancellation conditions for the indirect testing of the solvability and Archimedean
axioms, and David Krantz (1964), who connected the Luce & Tukey work to that of Hölder (1901).
Work soon focused on extending the theory of conjoint measurement to involve more than just two attributes. Krantz (1968) and
Amos Tversky
(1967) developed what became known as
polynomial conjoint measurement, with Krantz (1968) providing a schema with which to construct conjoint measurement structures of three or more attributes. Later, the theory of conjoint measurement (in its two-variable, polynomial and ''n''-component forms) received a thorough and highly technical treatment with the publication of the first volume of ''Foundations of Measurement'', which Krantz, Luce, Tversky and philosopher
Patrick Suppes cowrote (Krantz ''et al.'', 1971).
Shortly after the publication of Krantz ''et al.'' (1971), work focused upon developing an "error theory" for the theory of conjoint measurement. Studies were conducted into the number of conjoint arrays that supported only single cancellation and both single and double cancellation. Later enumeration studies focused on polynomial conjoint measurement. These studies found that it is highly unlikely that the axioms of the theory of conjoint measurement are satisfied at random, provided that more than three levels of at least one of the component attributes have been identified.
Joel Michell (1988) later identified that the "no test" class of tests of the double cancellation axiom was empty. Any instance of double cancellation is thus either an acceptance or a rejection of the axiom. Michell also wrote at this time a non-technical introduction to the theory of conjoint measurement which contained a schema for deriving higher-order cancellation conditions based upon Scott's (1964) work. Using Michell's schema, Ben Richards (Kyngdon & Richards, 2007) discovered that some instances of the triple cancellation axiom are "incoherent" as they contradict the single cancellation axiom. Moreover, he identified many instances of triple cancellation which are trivially true if double cancellation is supported.
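A brute-force check of the double cancellation condition on a small conjoint array can be sketched as follows; the array here is hypothetical and additively generated, so the axiom necessarily holds:

```python
from itertools import product

# Hypothetical 3 x 3 conjoint array, additively generated; P[i][j] pairs
# level a_i of attribute A with level x_j of attribute X.
P = [[fa + gx for gx in (0.5, 1.5, 3.0)] for fa in (1.0, 2.5, 4.0)]

def double_cancellation_holds(P):
    """If P[a][y] >= P[b][x] and P[b][z] >= P[c][y], then P[a][z] >= P[c][x]."""
    n = len(P)
    idx = range(n)
    for a, b, c in product(idx, repeat=3):
        for x, y, z in product(idx, repeat=3):
            if P[a][y] >= P[b][x] and P[b][z] >= P[c][y]:
                if not P[a][z] >= P[c][x]:
                    return False
    return True

print(double_cancellation_holds(P))  # True: additivity implies double cancellation
```

Adding the two antecedent inequalities and cancelling the shared terms yields the consequent, which is why an additive array can never violate the condition; empirical arrays, by contrast, may fail it.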
The axioms of the theory of conjoint measurement are not stochastic, and given the ordinal constraints placed on data by the cancellation axioms, order-restricted inference methodology must be used. George Karabatsos and his associates (Karabatsos, 2001) developed a
Bayesian Markov chain Monte Carlo methodology for
psychometric applications. Karabatsos & Ullrich (2002) demonstrated how this framework could be extended to polynomial conjoint structures. Karabatsos (2005) generalised this work with his multinomial Dirichlet framework, which enabled the probabilistic testing of many non-stochastic theories of
mathematical psychology. More recently, Clintin Davis-Stober (2009) developed a frequentist framework for order-restricted inference that can also be used to test the cancellation axioms.
Perhaps the most notable (Kyngdon, 2011) use of the theory of conjoint measurement was in the
prospect theory
proposed by the Israeli-American psychologists
Daniel Kahneman
and
Amos Tversky
(Kahneman & Tversky, 1979). Prospect theory was a theory of decision making under risk and uncertainty which accounted for choice behaviour such as the
Allais Paradox
. David Krantz wrote the formal proof to prospect theory using the theory of conjoint measurement. In 2002, Kahneman received the
Nobel Memorial Prize in Economics for prospect theory (Birnbaum, 2008).
Measurement and quantification
The classical / standard definition of measurement
In
physics
and
metrology
, the standard definition of measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit magnitude of the same kind (de Boer, 1994/95; Emerson, 2008). For example, the statement "Peter's hallway is 4 m long" expresses a measurement of a hitherto unknown length magnitude (the hallway's length) as the ratio of that length to the unit (the metre in this case). The number 4 is a real number in the strict mathematical sense of this term.
For some other quantities, it is ratios between attribute ''differences'' that are invariant. Consider temperature, for example. In familiar everyday instances, temperature is measured using instruments calibrated in either the Fahrenheit or Celsius scales. What are really being measured with such instruments are the magnitudes of temperature differences. For example,
Anders Celsius defined the unit of the Celsius scale to be 1/100 of the difference in temperature between the freezing and boiling points of water at sea level. A midday temperature measurement of 20 degrees Celsius is thus a ratio of two differences: the difference between the midday temperature and the freezing point of water, divided by the Celsius unit (itself a difference, one hundredth of the interval between the freezing and boiling points).
Formally expressed, a scientific measurement is:
:''Q'' = ''r'' × [''Q'']
where ''Q'' is the magnitude of the quantity, ''r'' is a real number and [''Q''] is a unit magnitude of the same kind.
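The "ratio of differences" reading of the Celsius example can be verified with simple arithmetic; the kelvin values below serve only as an arbitrary underlying scale and are not part of the definition:

```python
# Arithmetic sketch of the "ratio of differences" reading of Celsius.
# The kelvin values are used only as an arbitrary underlying scale.
T_freeze = 273.15   # freezing point of water
T_boil   = 373.15   # boiling point of water
T_midday = 293.15   # a hypothetical midday temperature

unit = (T_boil - T_freeze) / 100.0        # the Celsius unit, itself a difference
celsius = (T_midday - T_freeze) / unit    # ratio of two temperature differences
print(celsius)  # approximately 20.0
```

The result is independent of the underlying scale's zero point, which is exactly why interval-scale attributes admit measurement via differences even when no natural zero is available.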