More often than not, when the corrupting influence of impact
factors on science is discussed, fingers are pointed at Nature and Science, as if
these two scientific journals had invented impact factors and were the
main culprits in debasing science. This is not even remotely true. Rather, the
finger should be pointed at the academic community, and at no one
else. In fact, there are even limits to how much the academic community can
be blamed, as I argue in my next post.
Nature and Science, and especially the former, are
the best known and most sought after of what has come to be known as the
“glamour” journals in science. There are, of course, other “glamour journals”,
as well as ones that aspire to that status, but none has reached quite the
same standing as these two. It is therefore perhaps not surprising that they
should bear the brunt of the blame.
But it would be hard for even the enemies of these two
journals not to acknowledge that both have done a remarkably good job in
selecting outstanding scientific papers over so many years. Journals do not
gain prestige by impact factors alone; if they did, their prestige would fall, and their impact factors along with it. I myself have little doubt
that the editors of both journals are hard-working, conscientious people,
striving to do the best by themselves, by their journal and by science. One way
of measuring their success is through impact factors, which are a guide to how
often papers in the journal are cited. Impact factors are blind to the quality,
readability, or importance of a paper. They are simply one measure – among
others – of how well a journal is doing and how wide an audience it is
reaching. One could equally use other measures, for example advertising
revenue or some kind of Altmetric rating. Impact factors just happen to be one
of those measures. And let us face it, no editor of any journal would be
satisfied with low impact factors for their journal; if s/he says otherwise,
s/he lies. The unending search for better impact factors really boils down to
human behaviour – the desire to convince oneself and others that one is doing a
good job, to esteem oneself and gain the esteem of others. Editors of journals
are no different. Like the rest of us, they aspire to succeed and be seen – by
their employers, their colleagues and the world at large – to have succeeded. Is it any wonder that they celebrate their impact factors?
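For readers unfamiliar with the arithmetic behind the metric: the standard two-year impact factor, as reported in the Journal Citation Reports, works out roughly as (taking 2014 purely as an illustrative year)

\[
\mathrm{IF}_{2014} = \frac{\text{citations received in 2014 by items the journal published in 2012 and 2013}}{\text{number of citable items the journal published in 2012 and 2013}}
\]

so an impact factor of 30 means only that the journal’s average recent paper was cited about 30 times that year – a statement about citations in the aggregate, not about the merit of any single paper.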
To the editors of these journals – and to the rest of us –
impact factors are therefore a guide to how successful they have been. I see
nothing wrong with that, and find it hard to blame them for competing against
each other to attain the highest impact factor. In other words, there is
nothing really wrong with impact factors, except the uses to which they are
put, and they are put to undesirable uses by the academic community, not by the
journals.
In spite of the sterling service both have done to science, by
publishing so many good papers, it is also true that they have published some
pretty bad ones. In fact, of the ten or so worst papers I have read in my
subject, one (in my judgment the worst) was published in Nature Neuroscience, one in Nature,
and one in Science. I have also read many mediocre papers in
these journals, as well as in others aspiring to the same status, such as the Proceedings of the National Academy of
Sciences and Current Biology.
This is not surprising; the choice of papers is a matter of judgment, and at
these journals that judgment is made by humans; they are bound to
get it wrong sometimes, and apparently often do. By Nature’s own admission in an editorial some time ago, there are
also “gems” in its pages which do not get much notice. Hence one finds not only
some bad or mediocre papers in these journals but unnoticed good ones as well.
Retraction rates in both journals are not much better or worse than in other
journals, although retraction rates apparently correlate with impact factors:
the higher the impact factor, the more frequent the retractions. But it would
of course be entirely wrong to blame the journals themselves, or their
referees, if they publish papers which subsequently have to be retracted. The
blame for that must lie with the authors.
“Send it to a
specialized journal” (euphemism for “Your paper won’t help our impact factor”)
I recently had an interesting experience of how they can
also be wrong in their judgment, at least their judgment of the general
interest in a scientific work (of course, the greater the general interest, the
higher their impact factor is likely to be). We sent our paper on “The experience of mathematical beauty and its neural correlates” first to Nature, which rejected it without
review, stating that “These editorial judgements are based on such
considerations as the degree of advance provided, the breadth of potential
interest to researchers and timeliness” (somewhere in that sentence, probably at “breadth of potential
interest”, they are implicitly saying that our paper does not have the breadth
of potential interest – in other words, that it will not do much to improve their impact
factor). We then sent it to Science, which again returned it without
sending it out for review, saying that “we feel that the scope and focus of
your paper make it more appropriate for a more specialized journal.”
(Impact factors playing a role again here, at least implicitly, because, of
course, specialized articles will appeal to a minority and will not enhance the
impact factor of a journal, since they are likely to be cited less often,
and then only by that minority).
Finally, going several steps down a very steep ladder, we
sent it to Current Biology, which
also returned it without sending it out to referees for in-depth review,
writing that “…our feeling is that the work you describe
would be better suited to a rather more specialized journal than Current
Biology” (my translation: it will do nothing for our impact
factor, since only a limited number of workers are likely to read and cite it).
The paper was finally published in Frontiers in Human Neuroscience (after a very rigorous review).
Given that this paper has, as of this writing, been viewed over 71,000 times in
just over 2.5 months, and that it has been viewed even in war-torn countries and regions
(Syria, Libya, Ethiopia, Iraq, Kashmir, Crimea, Ukraine), it would seem that
our article was of very substantial interest to a very wide range of people all
over the world; very few papers in
neuroscience, and I daresay in science generally, achieve the status of being viewed
so many times over such a brief period. On this count, then, I cannot say that the
judgment that the paper should be sent to a specialized journal, or that its
breadth of potential interest was limited, inspires much confidence.
“We only want to
publish the most influential papers”
It is of course a bit rich for these journals to pretend
that they are not specialized. I doubt that any biologist reading the
biological papers in Nature or Science would comprehend more than one
paper in any issue, and that is being generous. In fact, very often what makes
their papers comprehensible are the News and Views sections in the same issue, a practice that some other journals are taking up, though somewhat more timidly. By any standard,
Nature and Science and all the other journals that pretend otherwise are in
fact highly specialized journals.
Be that as it may, they are only pursuing a policy that many
other journals also pursue. Consider this letter from eLife, a recently instituted open access journal, which I have
seen written about as if it were a welcome answer to Nature.
Well, within 24 hours and after an internal review, they returned a
(different) paper I had sent them, saying that “The consensus of the
editors is that your submission should not be considered for in-depth peer
review”, adding
prissily “This is not meant as a
criticism of the quality of the data or the rigor of the science, but merely
reflects our desire to publish only the most influential research”,
apparently without realizing that research can only be judged to have been
influential retrospectively, sometimes years after it has been published. But
what does “influential” research amount to? Research which is cited many times,
thereby boosting – you guessed it – the impact factor of the journal. Indeed, eLife (which has also published some interesting articles) even has a section in its regular
email alerts that is intended for the media – which of course helps publicize a
paper and boost – you guessed correctly again – the impact factor!
So why
single out Nature and Science, when so many journals are also
pursuing impact factors with such zeal? It is just that Nature and Science are
better at it. And their higher impact factors really mean that the papers they
select for publication are being cited more often than those selected by other
journals with aspirations to scientific glamour.
So, instead
of pointing fingers at them, let us direct the enquiry at ourselves, while
acknowledging that both journals, in spite of all their blemishes and warts,
have done a fairly good job for science in general.
In my next post, I will discuss why impact factors – however repellent the uses to which we put them – are here to stay.