In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results. The study of publication bias is an important topic in metascience.
Despite similar quality of execution and design, papers with statistically significant results are three times more likely to be published than those with null results. This unduly motivates researchers to manipulate their practices to ensure statistically significant results, such as by data dredging.
Many factors contribute to publication bias.
For instance, once a scientific finding is well established, it may become newsworthy to publish reliable papers that fail to reject the null hypothesis. Most commonly, investigators simply decline to submit results, leading to non-response bias. Investigators may also assume they made a mistake, find that the null result fails to support a known finding, lose interest in the topic, or anticipate that others will be uninterested in the null results. Attempts to find unpublished studies often prove difficult or are unsatisfactory (Rothstein, Sutton and Borenstein, ''Publication bias in meta-analysis: prevention, assessment and adjustments'', Wiley, 2005). In an effort to combat this problem, some journals require that authors preregister their methods and analyses ''prior'' to collecting data, with organizations like the Center for Open Science.
Other proposed strategies to detect and control for publication bias include ''p''-curve analysis and disfavoring small and non-randomized studies because of their high susceptibility to error and bias.
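As a rough illustration of the idea behind ''p''-curve analysis, the sketch below applies its simplest variant, a binomial test for right skew among statistically significant ''p''-values; the ''p''-values themselves are hypothetical, and a real analysis would use the full procedure described by the method's authors.

```python
# Minimal sketch of the core idea behind p-curve analysis (illustrative only).
# Assumption: `p_values` holds reported p-values of statistically significant
# results (p < .05) collected from a literature of interest.
from scipy.stats import binomtest

p_values = [0.003, 0.011, 0.024, 0.041, 0.008, 0.019, 0.032, 0.002]  # hypothetical data

significant = [p for p in p_values if p < 0.05]

# If the studied effects were all null (and there were no selective reporting),
# significant p-values would be uniform on (0, .05), so about half would fall
# below .025. A surplus of very small p-values (right skew) suggests the set of
# findings carries evidential value; a left skew can indicate p-hacking.
below_025 = sum(p < 0.025 for p in significant)
result = binomtest(below_025, n=len(significant), p=0.5, alternative="greater")
print(f"{below_025}/{len(significant)} significant p-values below .025, "
      f"binomial test p = {result.pvalue:.3f}")
```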
Definition
Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested and on the significance and direction of the effects detected. The subject was first discussed in 1959 by statistician Theodore Sterling, referring to fields in which "successful" research is more likely to be published. As a result, "the literature of such a field consists in substantial part of false conclusions resulting from errors of the first kind in statistical tests of significance". In the worst case, false conclusions could become canonized as true if the publication rate of negative results is too low.
One effect of publication bias is sometimes called the file-drawer effect, or file-drawer problem. The term suggests that negative results, those that do not support the initial hypotheses of researchers, are often "filed away" and go no further than the researchers' file drawers, leading to a bias in published research. The term "file drawer problem" was coined by psychologist Robert Rosenthal in 1979.
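Rosenthal's 1979 paper also proposed a rough "fail-safe N": the number of unpublished null-result studies that would have to be sitting in file drawers before a combined result became non-significant. A minimal sketch, assuming one-tailed ''Z''-scores combined with Stouffer's method and purely hypothetical values:

```python
# Hedged sketch of Rosenthal's "fail-safe N" for the file-drawer problem.
# Assumption: `z_scores` are one-tailed Z-values from k published studies,
# combined with Stouffer's method; the values are hypothetical.
import math

z_scores = [2.1, 1.8, 2.5, 1.7, 2.9]   # hypothetical published results
k = len(z_scores)
z_sum = sum(z_scores)

z_alpha = 1.645                         # one-tailed critical value at alpha = .05

# Number N of unpublished studies averaging Z = 0 that would drag the combined
# Stouffer Z, sum(Z_i) / sqrt(k + N), down to exactly z_alpha:
fail_safe_n = (z_sum / z_alpha) ** 2 - k
print(f"Combined Z = {z_sum / math.sqrt(k):.2f}; "
      f"fail-safe N ≈ {max(0.0, fail_safe_n):.1f} filed-away null studies")
```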
Positive-results bias, a type of publication bias, occurs when authors are more likely to submit, or editors are more likely to accept, positive results than negative or inconclusive results. Outcome reporting bias occurs when multiple outcomes are measured and analyzed but reported selectively, depending on the strength and direction of their results. A generic term coined to describe these post-hoc choices is HARKing ("hypothesizing after the results are known").
Evidence
There is extensive meta-research on publication bias in the biomedical field. Investigators following clinical trials from the submission of their protocols to ethics committees (or regulatory authorities) until the publication of their results observed that those with positive results are more likely to be published. In addition, studies often fail to report negative results when published, as demonstrated by research comparing study protocols with published articles.
The presence of publication bias has also been investigated in meta-analyses. The largest such analysis investigated the presence of publication bias in systematic reviews of medical treatments from the Cochrane Library. The study showed that statistically significant positive findings are 27% more likely to be included in meta-analyses of efficacy than other findings. Results showing no evidence of adverse effects have a 78% greater probability of inclusion in safety studies than statistically significant results showing adverse effects. Evidence of publication bias was also found in meta-analyses published in prominent medical journals.
Meta-analyses (reviews) have also been performed in the fields of ecology and environmental biology. In a study of 100 meta-analyses in ecology, only 49% tested for publication bias. Although multiple tests have been developed to detect publication bias, most perform poorly in ecology because of high levels of heterogeneity in the data and because observations are often not fully independent.
, "No trial published in China or Russia/USSR found a test treatment to be ineffective."
Impact on meta-analysis
Where publication bias is present, published studies are no longer a representative sample of the available evidence. This bias distorts the results of meta-analyses and systematic reviews. For example, evidence-based medicine is increasingly reliant on meta-analysis to assess evidence.
Meta-analyses and systematic reviews can account for publication bias by including evidence from unpublished studies and the grey literature. The presence of publication bias can also be explored by constructing a funnel plot, in which the estimate of the reported effect size is plotted against a measure of precision or sample size. The premise is that the scatter of points should reflect a funnel shape, indicating that the reporting of effect sizes is not related to their statistical significance. However, when small studies fall predominantly on one side (usually the side of larger effect sizes), asymmetry ensues, and this may be indicative of publication bias.
Because an inevitable degree of subjectivity exists in the interpretation of funnel plots, several tests have been proposed for detecting funnel plot asymmetry. These are often based on linear regression, including the popular Egger's regression test, and may adopt a multiplicative or additive dispersion parameter to adjust for the presence of between-study heterogeneity. Some approaches may even attempt to compensate for the (potential) presence of publication bias, which is particularly useful for exploring its potential impact on meta-analysis results.
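As an illustration only, the sketch below builds a funnel plot from hypothetical effect sizes and standard errors and runs a simple version of Egger's regression test (standardized effect regressed on precision); a real analysis would normally rely on dedicated meta-analysis software rather than this hand-rolled version.

```python
# Hedged sketch: funnel plot data and Egger's regression test for asymmetry.
# Assumption: `effects` and `std_errors` are hypothetical study-level effect
# estimates and their standard errors.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

effects = np.array([0.45, 0.38, 0.52, 0.30, 0.61, 0.25, 0.70, 0.55])
std_errors = np.array([0.10, 0.12, 0.15, 0.08, 0.20, 0.09, 0.25, 0.18])

# Funnel plot: effect size against precision (1/SE); with no publication bias,
# points should scatter symmetrically around the pooled effect in a funnel shape.
precision = 1.0 / std_errors
plt.scatter(effects, precision)
plt.xlabel("Effect size")
plt.ylabel("Precision (1/SE)")
plt.title("Funnel plot (illustrative)")

# Egger's test: regress the standardized effect (effect/SE) on precision (1/SE).
# An intercept significantly different from zero suggests funnel-plot asymmetry.
res = stats.linregress(precision, effects / std_errors)
t_stat = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_stat), df=len(effects) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_val:.3f}")
```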
In ecology and environmental biology, a study found that publication bias affected the magnitude of reported effects and the apparent statistical power of studies. The prevalence of publication bias distorted confidence in meta-analytic results, with 66% of initially statistically significant meta-analytic means becoming non-significant after correcting for publication bias. Ecological and evolutionary studies consistently had low statistical power (15%), with a four-fold exaggeration of effects on average (Type M error rate = 4.4).
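The Type M ("magnitude") error can be approximated by simulation in the spirit of Gelman and Carlin's design analysis; the sketch below assumes a hypothetical true effect of 0.1 measured with standard error 0.2, chosen only to illustrate how low power inflates the estimates that reach significance.

```python
# Hedged sketch of a Type M (exaggeration ratio) calculation by simulation.
# Assumption: a hypothetical true effect of 0.1 with standard error 0.2.
import numpy as np

rng = np.random.default_rng(0)
true_effect, se = 0.1, 0.2

estimates = rng.normal(true_effect, se, size=1_000_000)
significant = estimates[np.abs(estimates) > 1.96 * se]   # results reaching p < .05

power = significant.size / estimates.size
exaggeration = np.mean(np.abs(significant)) / true_effect  # Type M error (exaggeration ratio)
print(f"Power ≈ {power:.2f}, Type M exaggeration ratio ≈ {exaggeration:.1f}")
```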
The presence of publication bias can also be detected by time-lag bias tests, where time-lag bias occurs when larger or statistically significant effects are published more quickly than smaller or non-significant effects. It can manifest as a decline in the magnitude of the overall effect over time. The key feature of time-lag bias tests is that, as more studies accumulate, the mean effect size is expected to converge on its true value.
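One informal way to look for such a pattern is a cumulative meta-analysis in which studies are pooled in order of publication year and the inverse-variance-weighted mean is recomputed at each step; a steady early decline that later levels off is consistent with time-lag bias. A minimal sketch with hypothetical data:

```python
# Hedged sketch of a cumulative (by publication year) fixed-effect meta-analysis,
# used informally to look for time-lag bias. All numbers are hypothetical.
import numpy as np

years   = np.array([2001, 2003, 2005, 2008, 2011, 2015, 2019])
effects = np.array([0.80, 0.65, 0.55, 0.40, 0.35, 0.33, 0.31])
ses     = np.array([0.30, 0.28, 0.22, 0.15, 0.12, 0.10, 0.08])

order = np.argsort(years)
weights = 1.0 / ses[order] ** 2                      # inverse-variance weights

for i in range(1, len(order) + 1):
    w, e = weights[:i], effects[order][:i]
    pooled = np.sum(w * e) / np.sum(w)               # fixed-effect pooled estimate
    print(f"up to {years[order][i - 1]}: pooled effect = {pooled:.2f}")
# If larger effects are published first, the cumulative estimate drifts downward
# before converging on the underlying value as later studies accumulate.
```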
Compensation examples
Two meta-analyses of the efficacy of reboxetine as an antidepressant demonstrated attempts to detect publication bias in clinical trials. Based on positive trial data, reboxetine was originally approved as a treatment for depression in many countries in Europe and the UK in 2001 (though in practice it is rarely used for this indication). A 2010 meta-analysis concluded that reboxetine was ineffective and that the preponderance of positive-outcome trials reflected publication bias, mostly due to trials published by the drug manufacturer Pfizer. A subsequent meta-analysis published in 2011, based on the original data, found flaws in the 2010 analyses and suggested that the data indicated reboxetine was effective in severe depression (see Reboxetine § Efficacy). Examples of publication bias are given by Ben Goldacre and Peter Wilmshurst.
In the social sciences, a study of published papers exploring the relationship between corporate social and financial performance found that "in economics, finance, and accounting journals, the average correlations were only about half the magnitude of the findings published in Social Issues Management, Business Ethics, or Business and Society journals".
One example cited as an instance of publication bias is the refusal by ''The Journal of Personality and Social Psychology'' (the original publisher of Bem's article) to publish attempted replications of Bem's work claiming evidence for precognition.
An analysis comparing studies of gene-disease associations originating in China to those originating outside China found that those conducted within the country reported a stronger association and a more statistically significant result.
Risks
John Ioannidis argues that "claimed research findings may often be simply accurate measures of the prevailing bias." He lists the following factors as making a paper with a positive result more likely to enter the literature and negative-result papers more likely to be suppressed:
* The studies conducted in a field have small sample sizes.
* The effect sizes in a field tend to be smaller.
* There is both a greater number and lesser preselection of tested relationships.
* There is greater flexibility in designs, definitions, outcomes, and analytical modes.
* There are prejudices (financial interest, political, or otherwise).
* The scientific field is hot and there are more scientific teams pursuing publication.
Other factors include experimenter bias and white hat bias.
Remedies
Publication bias can be contained through better-powered studies, enhanced research standards, and careful consideration of true and non-true relationships. Better-powered studies refer to large studies that deliver definitive results or test major concepts and lead to low-bias meta-analysis. Enhanced research standards, such as the pre-registration of protocols, the registration of data collections, and adherence to established protocols, are other techniques. To avoid false-positive results, the experimenter must consider the chances that they are testing a true or non-true relationship. This can be undertaken by properly assessing the false positive report probability based on the statistical power of the test, and by reconfirming (whenever ethically acceptable) established findings of prior studies known to have minimal bias.
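As a rough formalization of that last consideration (a standard calculation, not tied to any particular study): if π is the prior probability that the tested relationship is real, α the significance level, and 1 − β the statistical power, the false positive report probability can be written as

\[
\mathrm{FPRP} = \frac{\alpha\,(1-\pi)}{\alpha\,(1-\pi) + (1-\beta)\,\pi},
\]

so low power or a low prior probability of a true relationship pushes up the chance that a published positive result is false.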
Study registration
In September 2004, editors of prominent medical journals (including the ''New England Journal of Medicine'', ''The Lancet'', ''Annals of Internal Medicine'', and ''JAMA'') announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public clinical trials registry database from the start. Furthermore, some journals (e.g. ''Trials'') encourage publication of study protocols in their journals.
The World Health Organization (WHO) agreed that basic information about all clinical trials should be registered at the study's inception and that this information should be publicly accessible through the WHO International Clinical Trials Registry Platform. Additionally, the public availability of complete study protocols, alongside reports of trials, is becoming more common.
Megastudies
In a megastudy, a large number of treatments are tested simultaneously. Given the inclusion of different interventions in the study, a megastudy's publication likelihood is less dependent on the statistically significant effect of any specific treatment, so it has been suggested that megastudies may be less prone to publication bias.[Tkachenko, Y., Jedidi, K. A megastudy on the predictability of personal information from facial images: Disentangling demographic and non-demographic signals. Sci Rep 13, 21073 (2023). https://doi.org/10.1038/s41598-023-42054-9] For example, an intervention found to be ineffective would be easier to publish as part of a megastudy as just one of many studied interventions. In contrast, it might go unreported due to the file-drawer problem if it were the sole focus of a contemplated paper. For the same reason, the megastudy research design may encourage researchers to study not only the interventions they consider more likely to be effective but also those interventions that researchers are less sure about and that they would not pick as the sole focus of the study due to the perceived high risk of a null effect.
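As a purely illustrative toy model of this argument (every probability below is an assumption chosen for the example, not an estimate from the literature), one can compare how many null results reach the literature when each intervention is written up on its own versus when all are reported together in a megastudy:

```python
# Toy simulation of the argument above; all probabilities are assumptions
# chosen for illustration, not estimates from the literature.
import numpy as np

rng = np.random.default_rng(1)
n_interventions = 1000
power, alpha = 0.8, 0.05
p_true_effect = 0.3                  # assumed share of interventions with a real effect
p_pub_sig, p_pub_null = 0.9, 0.2     # assumed standalone publication rates

has_effect = rng.random(n_interventions) < p_true_effect
significant = np.where(has_effect,
                       rng.random(n_interventions) < power,
                       rng.random(n_interventions) < alpha)

# Standalone papers: publication depends on whether the result was significant.
published_standalone = rng.random(n_interventions) < np.where(significant, p_pub_sig, p_pub_null)
share_null_standalone = np.mean(~significant[published_standalone])

# Megastudy: all interventions are reported together, significant or not.
share_null_megastudy = np.mean(~significant)

print(f"Share of null results in the published record: "
      f"standalone ≈ {share_null_standalone:.2f}, megastudy ≈ {share_null_megastudy:.2f}")
```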
External links
* Register of clinical trials conducted in the US and around the world, maintained by the National Library of Medicine, Bethesda
* ''Journal of Negative Results in Biomedicine''
* ''The All Results Journals''
* ''Journal of Articles in Support of the Null Hypothesis''
* Psychfiledrawer.org: archive for replication attempts in experimental psychology