"Why Most Published Research Findings Are False" is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in ''PLOS Medicine''. It is considered foundational to the field of metascience.
In the paper, Ioannidis argued that a large number, if not the majority, of published medical research papers contain results that cannot be replicated. In simple terms, the essay states that scientists use hypothesis testing to determine whether scientific discoveries are significant. Statistical significance is formalized in terms of probability, with its ''p''-value measure being reported in the scientific literature as a screening mechanism. Ioannidis posited assumptions about the way people perform and report these tests; he then constructed a statistical model which indicates that most published findings are likely false positive results.
While the general arguments in the paper recommending reforms in scientific research methodology were well received, Ioannidis drew criticism for the validity of his model and for his claim that the majority of scientific findings are false. Responses to the paper suggest lower false positive and false negative rates than what Ioannidis puts forth.
Argument
Suppose that in a given scientific field there is a known baseline probability that a result is true, denoted by P(True). When a study is conducted, the probability that a positive result is obtained is P(+). Given these two factors, we want to compute the conditional probability P(True | +), which is known as the positive predictive value (PPV). Bayes' theorem allows us to compute the PPV as:

P(True | +) = (1 − β) P(True) / [(1 − β) P(True) + α (1 − P(True))]

where α is the type I error rate (false positives) and β is the type II error rate (false negatives); the statistical power is 1 − β. It is customary in most scientific research to desire α = 0.05 and β = 0.2. If we assume P(True) = 0.1 for a given scientific field, then we may compute the PPV for different values of α and β:
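The PPV computation described above can be sketched in a few lines of Python. This is a minimal illustration of the Bayes'-theorem form, assuming PPV = (1 − β)·P(True) / [(1 − β)·P(True) + α·(1 − P(True))]; the function name and printed values are illustrative, not from the essay itself.

```python
def ppv(p_true, alpha, beta):
    """Positive predictive value: P(True | positive result) via Bayes' theorem."""
    true_positives = (1 - beta) * p_true      # power (1 - beta) times the prior
    false_positives = alpha * (1 - p_true)    # type I rate times prior of a false claim
    return true_positives / (true_positives + false_positives)

# Customary thresholds alpha = 0.05, beta = 0.2, with a baseline prior of 0.1:
print(round(ppv(0.1, 0.05, 0.2), 3))   # 0.64
# Lower power (beta = 0.8) drags the PPV down sharply:
print(round(ppv(0.1, 0.05, 0.8), 3))   # 0.308
```

Even at the customary α = 0.05 and β = 0.2, a prior of 0.1 yields a PPV of only 0.64, so roughly one in three positive findings would be false; with underpowered studies the PPV falls below one half.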
However, the simple formula for PPV derived from Bayes' theorem does not account for bias in study design or reporting. Some published findings would not have been presented as research findings if not for researcher bias. Let