

Björn Brembs 1*, Katherine Button 2 and Marcus Munafò 3

1 Institute of Zoology – Neurogenetics, University of Regensburg, Regensburg, Germany
2 School of Social and Community Medicine, University of Bristol, Bristol, UK
3 UK Centre for Tobacco Control Studies and School of Experimental Psychology, University of Bristol, Bristol, UK

Abstract

Most researchers acknowledge an intrinsic hierarchy in the scholarly journals ("journal rank") that they submit their work to, and adjust not only their submission but also their reading strategies accordingly. On the other hand, much has been written about the negative effects of institutionalizing journal rank as an impact measure. So far, contributions to the debate concerning the limitations of journal rank as a scientific impact assessment tool have either lacked data or relied on only a few studies. In this review, we present the most recent and pertinent data on the consequences of our current scholarly communication system with respect to various measures of scientific quality (such as utility/citations, methodological soundness, expert ratings, or retractions). These data corroborate previous hypotheses: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether, in favor of a library-based scholarly communication system, will ultimately be necessary. This new system will use modern information technology to vastly improve the filter, sort, and discovery functions of the current journal system.

Introduction

Science is the bedrock of modern society, improving our lives through advances in medicine, communication, transportation, forensics, entertainment, and countless other areas. Moreover, today's global problems cannot be solved without scientific input and understanding. The more our society relies on science, and the more our population becomes scientifically literate, the more important the reliability of scientific research becomes.

Scientific research is largely a public endeavor, requiring public trust. Therefore, it is critical that public trust in science remains high. In other words, the reliability of science is not only a societal imperative, it is also vital to the scientific community itself. However, every scientific publication may in principle report results which prove to be unreliable, either unintentionally, in the case of honest error or statistical variability, or intentionally, in the case of misconduct or fraud. Even under ideal circumstances, science can never provide us with absolute truth. In Karl Popper's words: "Science is not a system of certain, or established, statements" (Popper, 1995). Peer review is one of the mechanisms which have evolved to increase the reliability of the scientific literature.

At the same time, the current publication system is being used to structure the careers of the members of the scientific community by evaluating their success in obtaining publications in high-ranking journals. The hierarchical publication system ("journal rank") used to communicate scientific results is thus central, not only to the composition of the scientific community at large (by selecting its members), but also to science's position in society. In recent years, the scientific study of the effectiveness of such measures of quality control has grown.

Retractions and the Decline Effect

A disturbing trend has recently gained wide public attention: the retraction rate of articles published in scientific journals, which had remained stable since the 1970s, began to increase rapidly in the early 2000s, from 0.001% of the total to about 0.02% (Figure 1A). In 2010 we saw the creation and popularization of a website dedicated to monitoring retractions ( ), while 2011 has been described as "the year of the retraction" (Hamilton, 2011).
