In decision theory and quantitative policy analysis, the expected value of including uncertainty (EVIU) is the expected difference in the value of a decision based on a probabilistic analysis versus a decision based on an analysis that ignores uncertainty.
Background
Decisions must be made every day in the ubiquitous presence of uncertainty. For most day-to-day decisions, various heuristics are used to act reasonably in the presence of uncertainty, often with little thought about its presence. However, for larger high-stakes decisions or decisions in highly public situations, decision makers may often benefit from a more systematic treatment of their decision problem, such as through quantitative analysis or decision analysis.
When building a quantitative decision model, a model builder identifies various relevant factors and encodes them as ''input variables''. From these inputs, other quantities, called ''result variables'', can be computed; these provide information for the decision maker. For example, in the scenario detailed below, the decision maker must decide how long before a flight's scheduled departure he must leave for the airport (the decision). One input variable is how long it takes to drive to the airport parking garage. From this and other inputs, the model can compute how likely it is that the decision maker will miss the flight, and what the net cost (in minutes) will be for various decisions.
To reach a decision, a very common practice is to ignore uncertainty: decisions are reached through quantitative analysis and model building by simply using a ''best guess'' (single value) for each input variable, and decisions are then made on the computed ''point estimates''. In many cases, however, ignoring uncertainty can lead to very poor decisions, with the estimates of the result variables often misleading the decision maker.
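As a concrete illustration, here is a minimal sketch of such a point-estimate analysis for the airport example. All names and numbers (the 45-minute airport buffer, the 240-minute cost of a missed flight, the 30-minute drive-time guess, the candidate decisions) are made-up assumptions for illustration, not values from the literature:

```python
AIRPORT_BUFFER = 45        # minutes needed for parking, security, boarding (assumed)
MISSED_FLIGHT_COST = 240   # cost, in minutes, of missing the flight (assumed)

def net_cost(slack):
    """Result variable: net cost in minutes given the slack on arrival."""
    if slack < 0:                 # arrived too late and missed the flight
        return MISSED_FLIGHT_COST
    return slack                  # minutes "wasted" waiting at the gate

def evaluate(leave_minutes_before, drive_time):
    """Compute the result variable from the decision and the inputs."""
    return net_cost(leave_minutes_before - drive_time - AIRPORT_BUFFER)

# Ignoring uncertainty: plug a single best guess in for the drive time and
# pick the decision with the lowest point estimate of net cost.
best_guess_drive_time = 30
decisions = range(60, 181, 5)   # candidate choices: leave 60-180 minutes early
d_iu = min(decisions, key=lambda d: evaluate(d, best_guess_drive_time))
# With a fixed 30-minute drive this yields d_iu = 75: cutting it as close as
# the model allows, since the point estimate sees no risk of being late.
```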
An alternative to ignoring uncertainty in quantitative decision models is to explicitly encode uncertainty as part of the model. With this approach, a probability distribution is provided for each input variable, rather than a single best guess. The variance in that distribution reflects the degree of subjective uncertainty (or lack of knowledge) in the input quantity. The software tools then use methods such as Monte Carlo analysis to propagate the uncertainty to result variables, so that a decision maker obtains an explicit picture of the impact that uncertainty has on his decisions, and in many cases can make a much better decision as a result.
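Continuing the hypothetical sketch above (reusing its evaluate() function and decisions candidates), the same model with the drive time treated as an assumed lognormal distribution, rather than a fixed 30 minutes, might look like this:

```python
import random

random.seed(0)
N = 10_000
# Assumed input distribution: median drive time of about 30 minutes
# (exp(3.4) ≈ 30) with a long right tail for traffic; a real analysis
# would elicit this distribution from the decision maker.
drive_times = [random.lognormvariate(3.4, 0.4) for _ in range(N)]

def expected_cost(leave_minutes_before):
    """Propagate input uncertainty to the result variable by averaging
    the net cost over the sampled drive times."""
    return sum(evaluate(leave_minutes_before, t) for t in drive_times) / N

# Including uncertainty: minimize *expected* net cost, not a point estimate.
d_star = min(decisions, key=expected_cost)   # a larger buffer than d_iu = 75
```

Because the lognormal tail makes a very long drive possible, and missing the flight is costly, the expected-cost-minimizing decision leaves for the airport earlier than the point-estimate decision; the value of that difference is exactly what the EVIU quantifies.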
When comparing the two approaches (ignoring uncertainty versus modeling uncertainty explicitly), the natural question to ask is how much difference it really makes to the quality of the decisions reached. In the 1960s, Ronald A. Howard proposed one such measure, the expected value of perfect information (EVPI), a measure of how much it would be worth to learn the "true" values for all uncertain input variables. While providing a highly useful measure of sensitivity to uncertainty, the EVPI does not directly capture the actual improvement in decisions obtained from explicitly representing and reasoning about uncertainty. For this, Max Henrion, in his Ph.D. thesis, introduced the ''expected value of including uncertainty'' (EVIU), the topic of this article.
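In the notation of the formalization below (decision $d \in D$, uncertain quantity $x$, utility $U$), the standard definition of EVPI is the gap between the expected utility of deciding after $x$ is revealed and the expected utility of the best decision made beforehand:
: $\mathrm{EVPI} = \mathrm{E}_x\!\left[\max_{d \in D} U(d, x)\right] - \max_{d \in D} \mathrm{E}_x\!\left[U(d, x)\right]$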
Formalization
Let
: $d \in D$ be the decision variable, chosen from the set $D$ of possible decisions,
: $x \in X$ the uncertain quantity, a random variable with probability density $f(x)$, and
: $U(d, x)$ the utility (payoff) of selecting decision $d$ when $x$ turns out to be the true value.
When not including uncertainty, the optimal decision is found using only $\mathrm{E}[x]$, the expected value of the uncertain quantity; that is,
: $d_{iu} = \arg\max_{d \in D} U(d, \mathrm{E}[x]).$
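To tie the formalization together, the following self-contained sketch estimates $d_{iu}$, the uncertainty-aware decision $d^*$, and their expected-utility gap, which is the EVIU per the verbal definition in the lead. The asymmetric loss function and the lognormal distribution for $x$ are purely illustrative assumptions:

```python
import random

random.seed(1)
xs = [random.lognormvariate(0.0, 0.75) for _ in range(20_000)]   # samples of x
mean_x = sum(xs) / len(xs)                                       # estimate of E[x]

def utility(d, x):
    # Assumed asymmetric loss: undershooting x costs ten times more than
    # overshooting it, so the best decision under uncertainty hedges upward.
    return -(10.0 * (x - d)) if d < x else -(d - x)

decisions = [i / 20 for i in range(161)]     # candidate decisions d in [0, 8]

def expected_utility(d):
    return sum(utility(d, x) for x in xs) / len(xs)

d_iu = max(decisions, key=lambda d: utility(d, mean_x))    # ignores uncertainty
d_star = max(decisions, key=expected_utility)              # includes uncertainty

eviu = expected_utility(d_star) - expected_utility(d_iu)   # nonnegative by construction
print(f"d_iu={d_iu:.2f}  d*={d_star:.2f}  EVIU={eviu:.3f}")
```

Here $d_{iu}$ sits near $\mathrm{E}[x]$, while $d^*$ is pushed well above it by the asymmetric penalty; the EVIU is the expected utility recovered by making that shift.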