I had intended to follow up my earlier post
on impact factors (in defense of Nature and Science) with this one some weeks
ago but could not get to it until now. This is fortunate for, in the meantime,
my colleague David Colquhoun has written
an excellent post about impact factors and such like, with which I agree
completely. In fact, David is using facts and figures to give teeth to what we
all know – that impact factors are deeply flawed and that no self-respecting
academic institution or individual should take the slightest notice of them
when judging an individual for a position or for a grant application. Above
all, papers should be judged by their content – which of course implies reading
them, not always an easy thing to do it seems. A particularly important point
that David makes is that the importance and influence of a paper may emerge
years after its publication. This is why I regard the reason given by the open-access journal e-Life for declining my paper – that “we only want to publish the most influential papers” – as among the silliest, indeed the most stupid, I have ever received from a journal; a journal, moreover, which is viewed by some as a rival to the so-called “glamour” journals. I suspect that the editors
of Nature and Science have had much too much experience to write anything quite
so silly.
It is now generally acknowledged in the
academic community that there is something unsavoury about impact and allied
factors. Indeed, there is the San Francisco Declaration on Research Assessment (DORA), a sort of code of honourable conduct for the academic community. I have signed the DORA
declaration – even though I detected a few cynical signatures there – and
applaud its aims. Given the large number of academics and those from allied
trades who have also done so, it is obvious that we all know (or most of us do) that
impact factors and the allied statistics regularly collected to classify scientists
and their standing are dangerous because so flawed. Yet in spite of this
knowledge, impact factors continue to be used and abused on an unprecedented scale. Therefore the more interesting question, it seems to me, is why this
should be so. That, I think, is the question that the academic community must
address.
Here to stay because they serve deep-seated human needs
I believe that, whatever their flaws, impact
factors are here to stay because they serve at least two needs, as I stated in
my last post on the question in respect of editors. They increase editors' self-esteem and allow them, in the endless and often silent competition that
goes on between journals, to declare to others, “we are better than you”. Hence
Nature sent out emails some time ago
stating that “To celebrate our impact factor…” we are doing such and such. They
were the producers and consumers of their own propaganda, declaring in the same
breath to themselves and to others that they are better than the competition.
What applies to editors applies equally to
individuals. When a scientist declares “I have published 20 papers in Nature and 15 in Science”, s/he is also saying “I am a better scientist [than those of you who cannot publish in these high-impact journals]”, without actually having to spell it out in words, perhaps even without thinking about it. In this way, they give notice to their
colleagues, both superior and inferior, that they are better. They too are producers and consumers of their
own propaganda, since it elevates their self-esteem on the one hand and the
esteem accorded them on the other. In other words, impact factors serve the
same purposes for both editors and contributors. And the desire for esteem is
deeply ingrained, and any means of obtaining it – of which impact factors are one – is bound to enjoy great success. Impact factors, in brief, are not going
to be easily brushed aside by pointing out their flaws, when they cater for
much deeper needs.
Two examples
This was dramatically illustrated for me
during a lecture I attended fairly recently, given not by an aspirant to a job
but by a senior professor. Slide after slide had emblazoned on it Nature or Science in conspicuous letters at least four times the size of the
title of the article displayed, the face of the professor growing ever more
smug with each additional slide. The irony is that I remember nothing of his
lecture, save this and the title of the lecture. And that, of course, was part
of the intended effect. For I now remember that the eminent professor publishes
in “glamour” journals. He must therefore be good, which is what I assume he was
trying to tell us.
Another example: Some four years ago, an
advertisement for a research fellow from a very distinguished institution
stated that “the successful candidate will have a proven ability to publish in
high impact journals”. Nothing about attracting research grants or, better
still, attacking important problems. Presumably, if someone publishes in
glamour journals, s/he is worthy of consideration because they must be good.
When I made my disapproval of the wording
known to the Director of the institution (whom I knew vaguely), he looked at me as if I had come from another planet, or perhaps were someone who had seen better days and had not kept up with the world. What he said in defence of his preposterous ad confirmed this. I was shocked.
Actually, I should not have been, because
he was right and I was wrong.
Centuries of impact factors
There is in essence nothing new in impact factors: although the metric itself has been with us for only about 15 years or so, such measures have, in one guise or another, been with us for centuries. Does such “in
your face” advertising differ radically from putting some grand title after
one’s name? Fellows of the Royal Society have for centuries put FRS after their
name, to indicate to all their somewhat elevated status. In France, those who
have a Légion d’Honneur do
it even more conspicuously, by permanently wearing a badge on their lapel
to announce their higher status. In Britain, in official ceremonies, guests are
often asked to wear their decorations – to signify their status to others.
Impact factors, as used by scientists, are just another means of declaring
status. The difference between these displays of superiority and impact factors
is simply a difference in the means of delivery, nothing else.
It all really boils down to the same thing.
So, if impact factors were to be banished by some edict from the scientific nomenklatura, their place would be taken by some other measure. And what replaces them may in fact be worse.
Impact factors as short-cuts
Impact factors appeal to another feature of the human mind, namely the tendency to acquire knowledge through short-cuts. Where we do not understand easily, or cannot be bothered, or are sitting on a committee judging dozens of applicants for a top position, impact factors become a welcome aid.
How
does saying that “s/he has published in high impact factor journals” differ
from a common phraseology used by many (not only trade book authors), along the
lines of, “Dr. X, the Nobel Prize physicist, has affirmed that the war in Mont
Blanc is totally unjustified from a demographic point of view”, when there is
not the slightest guarantee that a Nobel Prize in physics qualifies its
recipient to give informed views on such wars? Once again, the difference
between this and impact factors is simply a difference in means of delivery. Like
impact factors, this one short-cuts the thinking process and, since similar
phraseologies are used so often, we must assume that most of us, when handling
issues outside our competence, welcome the comfort of a label that apparently
guarantees superior knowledge or ability and hence soothes our ignorance.
There was a hilarious correspondence years
ago (so my memory of it is somewhat blurred) in, I think, The Times, about the nuclear deterrent. Someone, anxious to
convince readers of the wisdom of maintaining our nuclear deterrent, wrote that
he knew of two Nobel laureates who had come to the conclusion that we should
keep our nuclear deterrent. In response, Peter Medawar (the Nobel laureate, if
I may say so) wrote back: “If we are going to conduct this dialogue by the
beating of gongs, let me say that I know of 5 Nobel laureates who think that we
should not keep our nuclear deterrent”.
A recent paper states that “… publication in journals with high impact
factors can be associated with improved job opportunities, grant success, peer
recognition, and honorific rewards, despite widespread acknowledgment that
impact factor is a flawed measure of scientific quality and importance”. It
is obvious why this is so; the habit of short-cutting the thinking process,
common to most, is wonderfully aided by impact factors. I know of only one
outstanding scientist (you guessed it, he was a Nobel laureate) to whom impact
factors and all other short-cuts made not the slightest difference in assessing
the suitability of a person; he searched for the evidence by reading the papers;
that was his sole guide and authority. He was, of course, a loner; almost all
of us are vulnerable to being impressed.
It is totally fatuous to assume that, sitting on a committee that is trying to appoint a new research fellow, I would not – like countless others, and absent the process of assessing all the evidence – be impressed by someone who has published 10 papers in Nature and 20 in Science, and would not show a preference for him over someone who has published entirely in those “specialized” journals which the editors of glamour journals so cynically advise those whose papers they reject without review to publish in.
Hence, for editors, administrators, and ordinary and extraordinary scientists alike, impact factors serve deep-seated human needs: to classify people easily and painlessly, to declare one's superiority to others, and to soothe oneself with the belief that one is actually better. There is no way in which pointing out all the flaws attached to impact factors could overcome these more basic needs.
One can point to other similarities, where
what is considered to be flawed nevertheless is durable and highly successful.
Take, for example, posh addresses, say in Belgrave Square in London or Avenue
Foch in Paris, or Park Avenue in New York. There is absolutely no guarantee
that those inhabiting such addresses are in any way better than those who do
not, although they may be (and often are) far richer. This is not unlike those
who advertise incessantly that they have published in the glamour journals.
There is no evidence whatever that those who publish in them are better
scientists. But giving an address such as Science
on a paper is like giving an address in Belgravia. It works like magic. Once
again, the difference lies only in the means of delivery.
So, let us sink back and get used to it… impact
factors are here to stay, and for a very long time. No good blaming editors of
journals for it, no good blaming scientists, no good blaming administrators. It all has to do with the workings and habits of
the mind, as well as with its needs.
The only way to deal with the crushing and
highly undesirable influence of impact factors in science is to prohibit their
use in judging candidates and research grants and indeed the merit of
individuals. That would be a great help, and some research organizations are
actually implementing such procedures, with what success I cannot guess. But I fear that even that will not be
enough. To get rid of them completely, one must change human nature. And that is
not currently in our gift.
1 comment:
Fascinating. Heard a talk recently on how NERC has introduced unconscious bias training for their Peer Review College, to encourage academics to examine their decision-making process in appointments or research evaluation http://www.nerc.ac.uk/latest/news/nerc/bias/