Citation impact is a measure of how many times an academic journal article, book, or author is cited by other articles, books, or authors. Citation counts are interpreted as measures of the impact or influence of academic work and have given rise to the field of bibliometrics or scientometrics, specializing in the study of patterns of academic impact through citation analysis. The journal impact factor, the two-year average ratio of citations to articles published, is a measure of the importance of journals. It is used by academic institutions in decisions about academic tenure, promotion and hiring, and hence also used by authors in deciding which journal to publish in. Citation-like measures are also used in other fields that do ranking, such as Google's PageRank algorithm, software metrics, college and university rankings, and business performance indicators.


Article-level

One of the most basic citation metrics is how often an article is cited in other articles, books, or other sources (such as theses). Citation rates depend heavily on the discipline and the number of people working in it. For instance, many more scientists work in neuroscience than in mathematics, and neuroscientists publish more papers than mathematicians; hence, neuroscience papers are cited much more often than papers in mathematics. Similarly, review papers are cited more often than regular research papers because they summarize results from many papers. This may also be why papers with shorter titles receive more citations, since such papers usually cover a broader area.


Most-cited papers

The most-cited paper in history is a paper by Oliver Lowry describing an assay to measure the concentration of proteins. By 2014 it had accumulated more than 305,000 citations. The 10 most-cited papers all had more than 40,000 citations each, and entry into the top 100 required 12,119 citations by 2014. Of the more than 58 million items in Thomson Reuters' Web of Science database, only 14,499 papers (~0.026%) had more than 1,000 citations as of 2014.


Journal-level

The simplest journal-level metric is the journal impact factor (JIF), the average number of citations that articles published by a journal in the previous two years have received in the current year, as calculated by Clarivate. Other companies report similar metrics, such as the CiteScore (CS), based on Scopus. However, a very high JIF or CS is often built on a small number of very highly cited papers. For instance, most papers in ''Nature'' (impact factor 38.1, 2016) were cited only 10 or 20 times during the reference year. Journals with a lower impact factor (e.g. ''PLOS ONE'', impact factor 3.1) publish many papers that are cited 0 to 5 times but few highly cited articles. Journal-level metrics are often misinterpreted as measures of journal quality or article quality, but using non-article-level metrics to determine the impact of a single article is statistically invalid. Moreover, studies of methodological quality and reliability have found that "reliability of published research works in several fields may be decreasing with increasing journal rank", contrary to widespread expectations. Citation distributions in journals are skewed because a very small number of articles drives the vast majority of citations; for this reason some journals, such as those of the American Society for Microbiology, have stopped publicizing their impact factors. Citation counts mostly follow a lognormal distribution, except for the long tail, which is better fit by a power law. Other journal-level metrics include the Eigenfactor and the SCImago Journal Rank.
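To make the impact-factor arithmetic concrete, here is a minimal sketch in Python; the journal, article counts, and per-article citation counts are all invented for illustration, and the second half shows why a mean-based metric can be dominated by a few highly cited papers:

```python
import statistics

# Journal impact factor (JIF) for year Y: citations received in Y to items
# the journal published in Y-1 and Y-2, divided by the number of citable
# items from those two years. All figures below are invented.

def journal_impact_factor(citations_to_prev_two_years, citable_items):
    return citations_to_prev_two_years / citable_items

# Hypothetical journal: 280 articles over two years, 600 citations this year.
print(round(journal_impact_factor(600, 280), 2))  # 2.14

# Why the mean misleads: one blockbuster paper can dominate the average.
per_article = [0, 0, 1, 1, 2, 2, 3, 4, 5, 250]   # invented per-article counts
print(statistics.mean(per_article))    # 26.8  (impact-factor-style mean)
print(statistics.median(per_article))  # 2.0   (what a typical article gets)
```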


Author-level

Total citations, or the average citation count per article, can be reported for an individual author or researcher. Many other measures have been proposed, beyond simple citation counts, to better quantify an individual scholar's citation impact. The best-known measures include the h-index and the g-index. Each measure has advantages and disadvantages, spanning from bias to discipline-dependence and limitations of the citation data source. Counting the number of citations per paper is also employed to identify the authors of citation classics. Citations are distributed highly unequally among researchers. In a study based on the Web of Science database across 118 scientific disciplines, the top 1% most-cited authors accounted for 21% of all citations. Between 2000 and 2015, the proportion of citations that went to this elite group grew from 14% to 21%. The highest concentrations of ‘citation elite’ researchers were in the Netherlands, the United Kingdom, Switzerland and Belgium. Note that 70% of the authors in the Web of Science database have fewer than five publications, so the most-cited authors among the roughly four million included in this study constitute a tiny fraction.
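As an illustration of the two best-known measures, the sketch below computes an h-index and a g-index from a list of per-paper citation counts; the counts are invented, and the definitions used are the standard ones (the largest h such that h papers have at least h citations each, and the largest g such that the g most-cited papers together have at least g squared citations):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the g most-cited papers total at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

papers = [25, 19, 12, 9, 7, 4, 3, 2, 1, 0]  # invented per-paper citation counts
print(h_index(papers))  # 5 -- five papers have at least 5 citations each
print(g_index(papers))  # 9 -- the top 9 papers total 82 >= 81 citations
```

The gap between the two values reflects their design: the g-index rewards a few very highly cited papers that the h-index, by construction, ignores.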


Alternatives

An alternative approach to measuring a scholar's impact relies on usage data, such as the number of downloads from publishers, and analyzes citation performance, often at the article level. As early as 2004, ''The BMJ'' published the number of views for its articles, which was found to be somewhat correlated with citations. In 2008 the ''Journal of Medical Internet Research'' began publishing views and tweets. These "tweetations" proved to be a good indicator of highly cited articles, leading the author to propose a "Twimpact factor", the number of tweets an article receives in the first seven days after publication, as well as a "Twindex", the rank percentile of an article's Twimpact factor. In response to growing concerns over the inappropriate use of journal impact factors in evaluating scientific outputs and scientists themselves, the Université de Montréal, Imperial College London, PLOS, eLife, EMBO Journal, The Royal Society, ''Nature'' and ''Science'' proposed citation distribution metrics as an alternative to impact factors.
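A minimal sketch of the two Twitter-based measures described above; the tweet counts are invented, and the percentile-rank convention is one plausible reading of "rank percentile", not a published specification:

```python
# Twimpact factor: tweets an article receives in its first seven days.
# Twindex: the article's rank percentile among a set of Twimpact factors.
# All counts below, and the percentile convention, are assumptions.

def twindex(twimpact, reference_twimpacts):
    """Percentile rank: share of reference articles with a Twimpact factor
    at or below this article's, scaled to 0-100."""
    at_or_below = sum(1 for t in reference_twimpacts if t <= twimpact)
    return 100.0 * at_or_below / len(reference_twimpacts)

first_week_tweets = [0, 1, 1, 2, 3, 5, 8, 13, 40, 150]  # one journal's articles
print(twindex(13, first_week_tweets))  # 80.0 -- tweeted more than most peers
```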


Open access publications

Open access (OA) publications are accessible without cost to readers, so they might be expected to be cited more frequently. Some experimental and observational studies have found that articles published in OA journals do not receive more citations, on average, than those published in subscription journals; other studies have found that they do. The evidence that author-self-archived ("green") OA ''articles'' are cited more than non-OA articles is somewhat stronger than the evidence that ("gold") OA ''journals'' are cited more than non-OA journals. Two reasons for this are that many of today's top-cited journals are still only hybrid OA (the author has the option to pay for gold OA), and that many pure author-pays OA journals are either of low quality or outright fraudulent "predatory journals" that prey on authors' publish-or-perish pressure, thereby lowering the average citation counts of OA journals.


Recent developments

An important recent development in research on citation impact is the discovery of ''universality'', or citation impact patterns that hold across different disciplines in the sciences, social sciences, and humanities. For example, it has been shown that the number of citations received by a publication, once properly rescaled by its average across articles published in the same discipline and in the same year, follows a universal log-normal distribution that is the same in every discipline. This finding suggests a ''universal citation impact measure'' that extends the h-index by rescaling citation counts and re-ranking publications accordingly; however, computing such a universal measure requires extensive citation data and statistics for every discipline and year. Social crowdsourcing tools such as Scholarometer have been proposed to address this need. Kaur et al. proposed a statistical method to evaluate the universality of citation impact metrics, i.e., their capability to compare impact fairly across fields; their analysis identifies universal impact metrics such as the field-normalized h-index.

Research suggests that the impact of an article can be partly explained by superficial factors rather than by its scientific merits alone. Field-dependent factors are an issue not only when comparisons are made across disciplines but also when different fields of research within one discipline are compared. In medicine, for instance, the number of authors, the number of references, the article length, and the presence of a colon in the title all influence impact; in sociology, the number of references, the article length, and the title length are among the relevant factors. Scholars have also been found to engage in ethically questionable behavior in order to inflate the number of citations their articles receive.

Automated citation indexing has changed the nature of citation analysis research, allowing millions of citations to be analyzed for large-scale patterns and knowledge discovery. The first example of automated citation indexing was CiteSeer, later followed by Google Scholar. More recently, advanced models for a dynamic analysis of citation aging have been proposed; the latter model can even be used as a predictive tool for determining the citations a corpus of publications might obtain at any point in its lifetime. Some researchers also propose that the journal citation rate on Wikipedia, alongside the traditional citation index, "may be a good indicator of the work's impact in the field of psychology." According to Mario Biagioli: "All metrics of scientific evaluation are bound to be abused. Goodhart's law [...] states that when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it."
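To illustrate the rescaling behind the universality result described above, the sketch below divides each paper's raw citation count by the average for its discipline and year, after which counts from different fields sit on a common scale; the disciplines and counts are invented:

```python
from statistics import mean

# Raw citation counts grouped by (discipline, year); all numbers invented.
corpus = {
    ("mathematics", 2010): [2, 3, 5, 8, 40],
    ("neuroscience", 2010): [10, 15, 25, 40, 200],
}

def rescale(counts):
    """Divide each count by the field-year average. Per the universality
    result, rescaled counts follow roughly the same log-normal in every field."""
    avg = mean(counts)
    return [c / avg for c in counts]

for field_year, counts in corpus.items():
    print(field_year, [round(x, 2) for x in rescale(counts)])
# A mathematics paper with 8 citations and a neuroscience paper with 40
# both rescale to ~0.69 of their field average, so they become comparable.
```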

