Choice modelling attempts to model the decision process of an individual or segment via revealed preferences or stated preferences made in a particular context or contexts. Typically, it attempts to use discrete choices (A over B; B over A, B & C) in order to infer positions of the items (A, B and C) on some relevant latent scale (typically "utility" in economics and various related fields). Indeed, many alternative models exist in econometrics, marketing, sociometrics and other fields, including utility maximization, optimization applied to consumer theory, and a plethora of other identification strategies which may be more or less accurate depending on the data, sample, hypothesis and the particular decision being modelled. In addition, choice modelling is regarded as the most suitable method for estimating consumers' willingness to pay for quality improvements in multiple dimensions.


Related terms

There are a number of terms which are considered to be synonyms with the term choice modelling. Some are accurate (although typically discipline- or continent-specific) and some are used in industry applications, although considered inaccurate in academia (such as conjoint analysis). These include the following:

# Stated preference discrete choice modelling
# Discrete choice
# Choice experiment
# Stated preference studies
# Conjoint analysis
# Controlled experiments

Although disagreements in terminology persist, it is notable that the academic journal intended to provide a cross-disciplinary source of new and empirical research into the field is called the Journal of Choice Modelling.


Theoretical background

The theory behind choice modelling was developed independently by economists and mathematical psychologists. The origins of choice modelling can be traced to Thurstone's research into food preferences in the 1920s and to random utility theory. In economics, random utility theory was then developed by Daniel McFadden, and in mathematical psychology primarily by Duncan Luce and Anthony Marley. In essence, choice modelling assumes that the utility (benefit, or value) that an individual derives from item A over item B is a function of the frequency with which he or she chooses item A over item B in repeated choices. Because of his use of the normal distribution, Thurstone was unable to generalise this binary choice into a multinomial choice framework (which required the multinomial logistic regression rather than a probit link function), which is why the method languished for over 30 years. However, from the 1960s through the 1980s the method was axiomatised and applied in a variety of types of study.
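The practical advantage of the logistic over the probit link can be stated compactly: under random utility theory with independent extreme-value (Gumbel) errors, the multinomial choice probability has a closed form, whereas with normal errors it does not.

```latex
% Random utility: U_i = V_i + \varepsilon_i, with \varepsilon_i i.i.d. Gumbel.
% The probability of choosing alternative i from choice set C is then
P(i \mid C) = \frac{e^{V_i}}{\sum_{j \in C} e^{V_j}}
% No comparable closed form exists when the \varepsilon_i are normal (probit),
% which is what blocked Thurstone's multinomial generalisation.
```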


Distinction between revealed and stated preference studies

Choice modelling is used in both revealed preference (RP) and stated preference (SP) studies. RP studies use the choices already made by individuals to estimate the value they ascribe to items – they "reveal their preferences – and hence values (utilities) – by their choices". SP studies use choices made by individuals under experimental conditions to estimate these values – they "state their preferences via their choices". McFadden successfully used revealed preferences (made in previous transport studies) to predict the demand for the Bay Area Rapid Transit (BART) system before it was built. Luce and Marley had previously axiomatised random utility theory but had not used it in a real-world application; furthermore, they spent many years testing the method in SP studies involving psychology students.


History

McFadden's work earned him the Nobel Memorial Prize in Economic Sciences in 2000. However, much of the work in choice modelling had for almost 20 years been proceeding in the field of stated preferences. Such work arose in various disciplines, originally transport and marketing, due to the need to predict demand for new products that were potentially expensive to produce. This work drew heavily on the fields of conjoint analysis and design of experiments, in order to:

# Present to consumers goods or services that were defined by particular features (attributes) that had levels, e.g. "price" with levels "$10, $20, $30"; "follow-up service" with levels "no warranty, 10 year warranty";
# Present configurations of these goods that minimised the number of choices needed in order to estimate the consumer's utility function (decision rule).

Specifically, the aim was to present the minimum number of pairs/triples etc. of (for example) mobile/cell phones in order that the analyst might estimate the value the consumer derived (in monetary units) from every possible feature of a phone. In contrast to much of the work in conjoint analysis, discrete choices (A versus B; B versus A, B & C) were to be made, rather than ratings on category rating scales (Likert scales). David Hensher and Jordan Louviere are widely credited with the first stated preference choice models. They remained pivotal figures, together with others including Joffre Swait and Moshe Ben-Akiva, and over the next three decades helped develop and disseminate the methods in the fields of transport and marketing. However, many other figures, predominantly working in transport economics and marketing, contributed to theory and practice and helped disseminate the work widely.


Relationship with conjoint analysis

Choice modelling suffered from the outset from a lack of standardisation of terminology, and all the terms given above have been used to describe it. However, the largest disagreement has proved to be geographical: in the Americas, following industry practice there, the term "choice-based conjoint analysis" has come to dominate. This reflected a desire that choice modelling (1) reflect the attribute and level structure inherited from conjoint analysis, but (2) show that discrete choices, rather than numerical ratings, be used as the outcome measure elicited from consumers. Elsewhere in the world, the term discrete choice experiment has come to dominate in virtually all disciplines. Louviere (marketing and transport) and colleagues in environmental and health economics came to disavow the American terminology, claiming that it was misleading and disguised a fundamental difference between discrete choice experiments and traditional conjoint methods: discrete choice experiments have a testable theory of human decision-making underpinning them (random utility theory), whilst conjoint methods are simply a way of decomposing the value of a good using ''statistical'' designs from numerical ratings that have no ''psychological'' theory to explain what the rating-scale numbers mean.


Designing a choice model

Designing a choice model or discrete choice experiment (DCE) generally follows these steps:

# Identifying the good or service to be valued;
# Deciding on what attributes and levels fully describe the good or service;
# Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program;
# Constructing the survey, replacing the design codes (numbers) with the relevant attribute levels;
# Administering the survey to a sample of respondents in any of a number of formats including paper and pen, but increasingly via web surveys;
# Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory.


Identifying the good or service to be valued

This is often the easiest task, typically defined by:

* the research question in an academic study, or
* the needs of the client (in the context of a consumer good or service)


Deciding on what attributes and levels fully describe the good or service

A good or service, for instance a mobile (cell) phone, is typically described by a number of attributes (features). Phones are often described by shape, size, memory, brand, etc. The attributes to be varied in the DCE must be all those that are of interest to respondents. Omitting key attributes typically causes respondents to make inferences (guesses) about those missing from the DCE, leading to omitted-variable problems. The levels must typically include all those currently available, and are often expanded to include those that are possible in the future – this is particularly useful in guiding product development.
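As a concrete illustration, the attribute-and-level structure can be written down directly, and the number of possible product configurations (the full factorial, discussed in the next section) is simply the product of the level counts. This is a minimal sketch with hypothetical levels for the phone example:

```python
from itertools import product

# Hypothetical attributes and levels for a mobile phone DCE
attributes = {
    "brand":  ["Apple", "Samsung"],
    "shape":  ["bar", "flip", "slider"],
    "size":   ["small", "medium", "large"],
    "memory": ["64GB", "128GB", "256GB", "512GB"],
}

# The full factorial: every combination of one level per attribute
full_factorial = list(product(*attributes.values()))

print(len(full_factorial))   # 2 * 3 * 3 * 4 = 72 configurations
print(full_factorial[0])     # ('Apple', 'bar', 'small', '64GB')
```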


Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program

A strength of DCEs and conjoint analyses is that they typically present a subset of the full factorial. For example, a phone with two brands, three shapes, three sizes and four amounts of memory has 2×3×3×4 = 72 possible configurations. This is the full factorial, and in most cases it is too large to administer to respondents. Subsets of the full factorial can be produced in a variety of ways, but in general they have the following aim: to enable estimation of a certain limited number of parameters describing the good: main effects (for example, the value associated with brand, holding all else equal), two-way interactions (for example, the value associated with this brand and the smallest size, or that brand and the smallest size), etc. This is typically achieved by deliberately confounding higher-order interactions with lower-order interactions. For example, two-way and three-way interactions may be confounded with main effects. This has the following consequences:

* The number of profiles (configurations) is significantly reduced;
* A regression coefficient for a given main effect is unbiased if and only if the confounded terms (higher-order interactions) are zero;
* A regression coefficient is biased in an unknown direction and with an unknown magnitude if the confounded interaction terms are non-zero;
* No correction can be made at the analysis stage to solve the problem, should the confounded terms be non-zero.

Thus, researchers have repeatedly been warned that design involves critical decisions concerning whether two-way and higher-order interactions are likely to be non-zero; making a mistake at the design stage effectively invalidates the results, since the hypothesis of higher-order interactions being non-zero is untestable. Designs are available from catalogues and statistical programs. Traditionally they had the property of orthogonality, whereby all attribute levels can be estimated independently of each other.
This ensures zero collinearity, and can be explained using the following example. Imagine a car dealership that sells both luxury cars and used low-end vehicles. Using the utility-maximisation principle and assuming an MNL model, we hypothesise that the decision to buy a car from this dealership is the sum of the individual contribution of each of the following to the total utility:

* Price
* Marque (BMW, Chrysler, Mitsubishi)
* Origin (German, American)
* Performance

Using multinomial regression on the sales data, however, will not tell us what we want to know. The reason is that much of the data is collinear, since cars at this dealership are either:

* high-performance, expensive German cars, or
* low-performance, cheap American cars.

There is not enough information, nor will there ever be enough, to tell us whether people are buying cars because they are European, because they are BMWs, or because they are high performance. This is a fundamental reason why RP data are often unsuitable and why SP data are required. In RP data these three attributes always co-occur, and in this case are perfectly correlated. That is: all BMWs are made in Germany and are of high performance. These three attributes – origin, marque and performance – are said to be collinear or non-orthogonal. Only in experimental conditions, via SP data, can performance and price be varied independently and their effects decomposed. An experimental design (below) in a choice experiment is a strict scheme for controlling and presenting hypothetical scenarios, or choice sets, to respondents. For the same experiment, different designs could be used, each with different properties. The best design depends on the objectives of the exercise. It is the experimental design that drives the experiment and the ultimate capabilities of the model. Many very efficient designs exist in the public domain that allow near-optimal experiments to be performed. For example, the Latin square 16^17 design allows the estimation of all main effects of a product that could have up to 16^17 (approximately 295 followed by eighteen zeros) configurations. Furthermore, this could be achieved within a sample frame of only around 256 respondents. A much smaller example is a 3^4 main-effects design, which would allow the estimation of main-effects utilities from 81 (3^4) possible product configurations, ''assuming all higher-order interactions are zero''. A sample of around 20 respondents could model the main effects of all 81 possible product configurations with statistically significant results. Some examples of other experimental designs commonly used:

* Balanced incomplete block designs (BIBD)
* Random designs
* Main effects
* Higher-order interaction designs
* Full factorial

More recently, efficient designs have been produced. These typically minimise functions of the variance of the (unknown but estimated) parameters. A common function is the D-efficiency of the parameters. The aim of these designs is to reduce the sample size required to achieve statistical significance of the estimated utility parameters. Such designs have often incorporated Bayesian priors for the parameters, to further improve statistical precision. Highly efficient designs have become extremely popular, given the costs of recruiting larger numbers of respondents. However, key figures in the development of these designs have warned of possible limitations, most notably the following.
Design efficiency is typically maximised when good A and good B are as different as possible: for instance, every attribute (feature) defining the phone differs across A and B. This forces the respondent to trade across price, brand, size, memory, etc.; no attribute has the same level in both A and B. This may impose a cognitive burden on the respondent, leading him or her to use simplifying heuristics ("always choose the cheapest phone") that do not reflect his or her true utility function (decision rule). Recent empirical work has confirmed that respondents do indeed use different decision rules when answering a less efficient design compared to a highly efficient design. It is worth reiterating, however, that small designs that estimate main effects typically do so by deliberately confounding higher-order interactions with the main effects. This means that unless those interactions are zero in practice, the analyst will obtain biased estimates of the main effects. Furthermore, he or she has (1) no way of testing this, and (2) no way of correcting it in analysis. This emphasises the crucial role of design in DCEs.
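The confounding (aliasing) problem described above can be demonstrated numerically. The sketch below uses a hypothetical half-fraction of a 2^3 factorial with defining relation I = ABC, so each main effect is aliased with a two-way interaction; when the aliased interaction is non-zero, the main-effect estimate absorbs it and is biased by exactly that amount:

```python
import numpy as np

# Half-fraction of a 2^3 design with defining relation I = ABC:
# keep only the runs where A*B*C = +1. In this fraction A is aliased
# with BC, B with AC, and C with AB.
full = np.array([[a, b, c]
                 for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
half = full[full.prod(axis=1) == 1]          # 4 of the 8 runs

A, B, C = half.T
# Hypothetical data-generating process: main effect of A = 2.0,
# plus a non-zero BC interaction = 0.5, no noise.
y = 2.0 * A + 0.5 * (B * C)

# The analyst fits main effects only (intercept + A + B + C)
X = np.column_stack([np.ones(4), A, B, C])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# The estimate for A is 2.0 + 0.5 = 2.5: biased by the aliased BC term,
# and nothing in this data can reveal or correct the bias.
print(round(coefs[1], 6))   # 2.5
```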


Constructing the survey

Constructing the survey typically involves:

* Doing a "find and replace" so that the experimental design codes (typically numbers) are replaced by the attribute levels of the good in question.
* Putting the resulting configurations (for instance, types of mobile/cell phones) into a broader survey that may include questions pertaining to the sociodemographics of the respondents. This may aid in segmenting the data at the analysis stage: for example, males may differ from females in their preferences.
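The find-and-replace step can be sketched as a simple mapping from design codes to attribute levels. The codes and levels below are hypothetical:

```python
# Hypothetical mapping from design codes to attribute levels
levels = {
    "brand":  {1: "Apple", 2: "Samsung"},
    "price":  {1: "$100", 2: "$200", 3: "$300"},
    "memory": {1: "64GB", 2: "128GB", 3: "256GB"},
}

def decode_profile(coded):
    """Replace each attribute's design code with its level label."""
    return {attr: levels[attr][code] for attr, code in coded.items()}

# One row of the experimental design, as it comes out of the catalogue
profile = decode_profile({"brand": 2, "price": 1, "memory": 3})
print(profile)   # {'brand': 'Samsung', 'price': '$100', 'memory': '256GB'}
```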


Administering the survey to a sample of respondents in any of a number of formats including paper and pen, but increasingly via web surveys

Traditionally, DCEs were administered via paper and pen methods. Increasingly, with the power of the web, internet surveys have become the norm. These have advantages in terms of cost, randomising respondents to different versions of the survey, and using screening. An example of the latter would be to achieve balance in gender: if too many males answered, they can be screened out in order that the number of females matches that of males.


Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory

Analysing the data from a DCE requires the analyst to assume a particular type of decision rule – or functional form of the utility equation, in economists' terms. This is usually dictated by the design: if a main-effects design has been used, then two-way and higher-order interaction terms cannot be included in the model. Regression models are then typically estimated. These often begin with the conditional logit model – traditionally, although slightly misleadingly, referred to as the multinomial logistic (MNL) regression model by choice modellers. The MNL model converts the observed choice frequencies (being estimated probabilities, on a ratio scale) into utility estimates (on an interval scale) via the logistic function. The utility (value) associated with every attribute level can be estimated, thus allowing the analyst to construct the total utility of any possible configuration (in this case, of car or phone). A DCE may alternatively be used to estimate non-market environmental benefits and costs.
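A minimal sketch of the conditional logit probability calculation, with hypothetical attribute data and coefficients (in a real analysis the coefficients would be estimated by maximum likelihood, not assumed):

```python
import numpy as np

# Each row is one alternative in a choice set; columns are attribute values
# (hypothetical: price in $100s, memory in units of 64GB).
X = np.array([
    [1.0, 1.0],   # phone A: $100, 64GB
    [2.0, 2.0],   # phone B: $200, 128GB
    [3.0, 4.0],   # phone C: $300, 256GB
])

beta = np.array([-1.0, 0.8])   # hypothetical utility weights

# Systematic utilities V_i = X_i . beta, then the closed-form MNL
# (conditional logit) probabilities: P(i) = exp(V_i) / sum_j exp(V_j)
V = X @ beta
P = np.exp(V) / np.exp(V).sum()

print(P.round(3), P.sum())   # probabilities over {A, B, C}, summing to 1
```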


Strengths

* Forces respondents to consider trade-offs between attributes;
* Makes the frame of reference explicit to respondents via the inclusion of an array of attributes and product alternatives;
* Enables implicit prices to be estimated for attributes;
* Enables welfare impacts to be estimated for multiple scenarios;
* Can be used to estimate the level of customer demand for alternative 'service products' in non-monetary terms; and
* Potentially reduces the incentive for respondents to behave strategically.


Weaknesses

* Discrete choices provide only ordinal data, which provides less information than ratio or interval data;
* Inferences from ordinal data, to produce estimates on an interval/ratio scale, require assumptions about error distributions and the respondent's decision rule (functional form of the utility function);
* Fractional factorial designs used in practice deliberately confound two-way and higher-order interactions with lower-order (typically main-effect) estimates in order to make the design small: if the higher-order interactions are non-zero then main effects are biased, with no way for the analyst to know or correct this ex post;
* Non-probabilistic (deterministic) decision-making by the individual violates random utility theory: under a random utility model, utility estimates become infinite;
* There is one fundamental weakness of all limited dependent variable models such as logit and probit models: the means (true positions) and variances on the latent scale are perfectly confounded. In other words, they cannot be separated.


The mean-variance confound

Yatchew and Griliches first proved that means and variances are confounded in limited dependent variable models (where the dependent variable takes any of a discrete set of values, rather than a continuous one as in conventional linear regression). This limitation becomes acute in choice modelling for the following reason: a large estimated beta from the MNL regression model, or any other choice model, can mean:

# respondents place the item high up on the latent scale (they value it highly), or
# respondents do not place the item high up on the scale BUT they are very certain of their preferences, consistently (frequently) choosing the item over others presented alongside, or
# some combination of (1) and (2).

This has significant implications for the interpretation of the output of a regression model. All statistical programs "solve" the mean-variance confound by setting the variance equal to a constant; all estimated beta coefficients are, in fact, an estimated beta multiplied by an estimated lambda (an inverse function of the variance). This tempts the analyst to ignore the problem. However, the analyst must consider whether a set of large beta coefficients reflects strong preferences (a large true beta) or consistency in choices (a large true lambda), or some combination of the two. Dividing all the estimates by one of them – typically the coefficient of the price variable – cancels the confounded lambda term from numerator and denominator. This solves the problem, with the added benefit that it provides economists with the respondent's willingness to pay for each attribute level. However, the finding that results estimated in "utility space" do not match those estimated in "willingness to pay space" suggests that the confound problem is not solved by this "trick": variances may be attribute-specific or some other function of the variables (which would explain the discrepancy). This is a subject of current research in the field.
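The cancellation of the scale term can be shown with a small numeric sketch (the coefficient values are hypothetical): every estimated coefficient is lambda times the true beta, so ratios of coefficients – the willingness-to-pay estimates – are unaffected by lambda:

```python
# Hypothetical true utility weights (price first) and a scale factor lambda
true_beta = {"price": -2.0, "brand": 1.0, "memory": 0.5}
lam = 3.7   # unknown scale (an inverse function of the error variance)

# What the MNL actually identifies: lambda * beta, not beta itself
estimated = {k: lam * b for k, b in true_beta.items()}

# Willingness to pay for each attribute: divide by the price coefficient.
# Lambda cancels, so WTP is identical whether computed from the true betas
# or from the scaled estimates.
wtp_true = {k: true_beta[k] / true_beta["price"] for k in true_beta}
wtp_est  = {k: estimated[k] / estimated["price"] for k in estimated}

print(wtp_est["brand"], wtp_est["memory"])   # -0.5 -0.25
```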


Versus traditional ratings-based conjoint methods

Major problems with ratings questions that do not occur with choice models are:

* No trade-off information. A risk with ratings is that respondents tend not to differentiate between perceived 'good' attributes and rate them all as attractive.
* Variant personal scales. Different individuals value a '2' on a scale of 1 to 5 differently. Aggregation of the frequencies of each of the scale measures has no theoretical basis.
* No relative measure. How does an analyst compare something rated a 1 to something rated a 2? Is one twice as good as the other? Again, there is no theoretical way of aggregating the data.


Other types


Ranking

Rankings tend to force the individual to indicate relative preferences for the items of interest. Thus the trade-offs between these can, as in a DCE, typically be estimated. However, ranking models must test whether the same utility function is being estimated at every ranking depth: e.g. the same estimates (up to variance scale) must result from the bottom-rank data as from the top-rank data.
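A common way to use full rankings is the rank-ordered ("exploded") logit: the ranking is treated as a sequence of best choices from shrinking sets, and its likelihood is a product of MNL probabilities. A minimal sketch with hypothetical utilities:

```python
import math
from itertools import permutations

def rank_ordered_logit_prob(utilities, ranking):
    """Probability of observing `ranking` (best first) under the
    rank-ordered logit: a product of MNL choices from shrinking sets."""
    remaining = list(ranking)
    prob = 1.0
    while len(remaining) > 1:
        denom = sum(math.exp(utilities[i]) for i in remaining)
        prob *= math.exp(utilities[remaining[0]]) / denom
        remaining.pop(0)   # "explode": remove the chosen item
    return prob

# Hypothetical systematic utilities for items 0, 1, 2
u = {0: 1.0, 1: 0.5, 2: 0.0}

# The probabilities of all 3! = 6 possible rankings sum to one
total = sum(rank_ordered_logit_prob(u, r) for r in permutations(u))
print(round(total, 6))   # 1.0
```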


Best–worst scaling

Best–worst scaling (BWS) is a well-regarded alternative to ratings and ranking. It asks people to choose their most and least preferred options from a range of alternatives. By subtracting or integrating across the choice probabilities, utility scores for each alternative can be estimated on an interval or ratio scale, for individuals and/or groups. Various psychological models may be utilised by individuals to produce best-worst data, including the MaxDiff model.
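The simplest BWS analysis is count-based: each item's score is the number of times it was chosen as best minus the number of times it was chosen as worst (often normalised by the number of appearances). A minimal sketch with hypothetical choice data:

```python
from collections import Counter

# Hypothetical BWS responses: (best, worst) picks from repeated subsets
responses = [
    ("price", "brand"),
    ("price", "memory"),
    ("memory", "brand"),
    ("price", "brand"),
]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
items = {"price", "brand", "memory"}

# Best-minus-worst score per item (here, not normalised by appearances)
scores = {i: best[i] - worst[i] for i in items}
print(sorted(scores.items()))   # [('brand', -3), ('memory', 0), ('price', 3)]
```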


Uses

Choice modelling is particularly useful for:

* Predicting uptake and refining new product development
* Estimating the implied willingness to pay (WTP) for goods and services
* Product or service viability testing
* Estimating the effects of product characteristics on consumer choice
* Variations of product attributes
* Understanding brand value and preference
* Demand estimates and optimum pricing


See also

* Consumer choice
* Discrete choice


External links

* Curated bibliography at IDEAS/RePEc