But The “Tibetan Monks’ Journal of Cultural Experimentation” Said. . .Taking a Second Look At Your Source (‘Impact Factors’)

Have you ever found an article so fabulous, new, and ground-breaking that you can’t wait to blog on it, and share with the entire blogosphere this knowledge it seems only you–for the moment–possess?

Perhaps it’s the discovery you’ve always longed to hear about, that exercise kills brain cells, or that sugar soda shrinks tumor growth, or maybe you’ve just read that there’s a new pill that takes off pounds without your ever having to diet or leave your couch, as, perhaps, some Botswanan Journal of Dietetic Innovations has published in what it terms ‘ground-breaking research.’ 

Well, here’s the unpleasant truth. You still can’t believe everything you read–even if it’s in a scientific journal.

Realizing that some journals are more equal than others is part of growing up–and part of doing valid research.

So, with an idea hatched as far back as 1955, one Dr. Eugene Garfield figured that the number of citations for a given journal was a crucial part of the heuristic for determining the journal’s relevance in the field–and its reliability, too, in a sense.

The concept–and its formulation–would revolutionize analysis of journal significance.

Henry Small, chief scientist of Thomson Reuters, said of the idea:

Garfield’s vision was ahead of its time. He saw the potential of citation searching for researchers, but it took 40 years for the technology to advance to the point that allowed his vision to be fully realized.

And with Garfield’s vision of citation searching and evaluating was born the concept of the journal impact factor (or simply “IF” to those in the know), often interpreted as representative of the relative importance of a journal within its field. Journals with higher numbers are frequently designated more relevant–and perhaps more reliable–than those with lower factors.

Applied to science and technology and social science journals, the journal impact factor measures how frequently the ‘average article’ in a journal has been cited in any particular year, and it’s useful in giving a sense of the significance of the journal and its citations.

Dr. Garfield founded the Institute for Scientific Information, which computed these factors and is now owned by Thomson Reuters.

So IFs are calculated every year by the Institute for Scientific Information, and published in Thomson Reuters’ Journal Citation Reports, which provides data for more than 5,900 science and technology journals and data from over 1,700 social science journals.

The formula for the calculation is open and available to all, and this explanation can be found verbatim across websites and articles, as if it were perfectly self-explanatory: The value is found by

dividing the number of citations in the current year to articles published in the previous two years by the total number of articles published in the two previous years.

And perhaps everyone but me has got it, but I, being more of a visual learner, was helped greatly by the figure below.

Dr. Eugene Garfield, the brains behind this whole operation, shared the following on a page dedicated to explaining the system.

Calculation for journal impact factor
A= total cites in 1992
B= 1992 cites to articles published in 1990-91 (this is a subset of A)
C= number of articles published in 1990-91
D= B/C = 1992 impact factor
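For anyone who'd rather see the arithmetic spelled out than stare at the letters, here is the same calculation as a tiny Python sketch. The numbers are made up for illustration (they are not a real journal's counts), but the division is exactly Garfield's D = B/C:

```python
def impact_factor(cites_to_prev_two_years, articles_prev_two_years):
    """Journal impact factor for a given year: citations that year to
    articles published in the previous two years (Garfield's B), divided
    by the number of articles published in those two years (C)."""
    return cites_to_prev_two_years / articles_prev_two_years

# Hypothetical example in the spirit of Garfield's 1992 illustration:
B = 420   # 1992 citations to articles the journal published in 1990-91
C = 150   # number of articles the journal published in 1990-91
print(round(impact_factor(B, C), 3))  # prints 2.8
```

Note that A (total cites in 1992) never enters the division at all; B is just the subset of A that points back at the previous two years' articles.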

Why did citations get to be the arbiter of greatness?

Well, Henry Small, if you remember him, chief scientist of Thomson Reuters, said of the premise, “Citations are an acknowledgement of intellectual debt. [Cataloging the results] lets researchers instantly recognize works that are well regarded by their peers. That way, they know they are basing their work on quality research.”

Now, this may all raise the question for many readers. . . Who cares? I don’t blame you for asking.

But I have a ready answer for you, never fear.

First off, let’s say you run a research library, and you’ve got only a given budget. You’d likely want to spend that budget on the most influential journals in a given area.

The IF is made for you. Without much thinking [and the lack of thinking is not recommended, but it happens sometimes], you can figure out the best journals to purchase without having to do any advanced math and possibly hurting yourself. It’s a librarian’s dream. [Truthfully, there’s much more to selecting important journals than just looking at IF, but let’s leave that for another day.]

Then, well, if you’re thinking of publishing a scientific paper, why not go for the gold in terms of IF? Getting a paper into Science (31.364 in 2010) or Nature (36.101 in 2010), with their high IFs, will clearly do your career more good than an acceptance in, say, the Journal of Plant Nutrition (2010 IF of 0.726; May 2012 highlight: “Effect of foliar application of urea, molybdenum, benzyladenine, sucrose and salicylic acid on yield, nitrogen metabolism of radish plants and quality of edible roots”).

But let’s just say you’re not publishing this year, for some reason or another. Anyone doing research really needs to know the reliability of the journals in which they’re finding their articles. If I’m searching out new psychotherapeutic techniques, I can have a better feeling about the reliability of the data, in general, if I seek out an article from, say, the Psychological Bulletin (11.975 in 2011) as opposed to the South African Journal of Psychology (0.164).

Really, the IF is a broad generalization–but at times like these the point may still hold.

Just something to think about when you find an article that sounds too good to be true (“Diabetes journal finds that a diet of straight carbs helps stabilize blood sugar”) in a journal you’ve never heard of (The Norwegian We-Oppose-Peer-Review Diabetes Journal).

And for the record. . .

In 2012,

the oncology journals with the highest IFs were:

  • CA: A Cancer Journal for Clinicians (94.262)
  • Nature Reviews Cancer (37.178)
  • Cancer Cell (26.925)

the psychiatry ones were:

  • Molecular Psychiatry (15.47)
  • American Journal of Psychiatry (12.759)
  • Archives of General Psychiatry (10.782)

and top honors in psychology went to:

  • Annual Review of Psychology (22.75)
  • Behavioral and Brain Sciences (21.952)
  • Archives of General Psychiatry (I guess it got double-billing) (10.782)

Examples of journals with low impact factors have been cataloged by Julhash U Kazi.

Look–to be frank, the IF is far from perfect. Dr. Garfield himself says it so eloquently:

Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation.

As far as I can tell from that, he’s saying: well, the best part about it is that it’s already around.

I’m willing to go with that.

I had some pretty innovative ideas for posts from that Journal of Plant Nutrition–but I’ll just have to double-check them, I guess.

In a stroke of good luck, I’ve got access to the Chilean Journal of Soil Science and Plant Nutrition. Its impact factor is only 0.595–but if I add together the IFs of the two journals, I’m really getting somewhere.

Of interest:

PLoS ONE: Is a High Impact Factor a Blessing or a Curse?

Medical Journal Impact Factors 2012

Top 50 psychology journals – ranked by JCR impact factor
