Journal impact factors are much abused. Originally developed to help librarians make rational decisions about subscriptions, they are increasingly used to judge the worth of a scientist’s output. If we can place a paper in a high-impact-factor journal, we bask in the reflected glory of those who have gone before us, whether our paper is really any good or not. On the other hand, if we publish in lower-impact-factor journals, it’s guilt by association.
If you write grant applications, or have to apply for tenure or promotion, someone is likely to look at the impact factors (IFs) of the journals you have published in, particularly if the papers were published relatively recently and haven’t had time to accumulate many citations. They are especially likely to do so if they aren’t experts in your field and aren’t sure about the quality of the journals you publish in. Like it or not, you are going to have to run the IF gauntlet. The problem is that IFs vary widely by field. What you need to do is provide some perspective to the people reading your file so that they don’t assume that the standards of their field apply to yours.
I recently reviewed a grant application whose author found a nice way to address this issue: Each journal in the Thomson Reuters database is assigned to one or more categories based on the area(s) of science it covers. For each category, the Journal Citation Reports provides a median impact factor as well as an aggregate impact factor, the latter being the impact factor you would calculate for all the articles published in the journals concerned, as if they came from a single journal. To put the impact factor of a particular journal in perspective, you compare it to either the median or the aggregate impact factor of the category (or categories) the journal belongs to.
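To make the distinction between the two statistics concrete, here is a minimal Python sketch using invented numbers for a hypothetical three-journal category. The definition used, citations in a given year to items from the previous two years divided by citable items published in those two years, is the standard two-year IF; the journal names and counts are made up for illustration.

```python
from statistics import median

# (citations, citable_items) over the two-year window -- invented data
journals = {
    "Journal A": (900, 300),   # IF = 3.0
    "Journal B": (200, 200),   # IF = 1.0
    "Journal C": (50, 100),    # IF = 0.5
}

# Per-journal impact factors, then the category median
ifs = {name: cites / items for name, (cites, items) in journals.items()}
median_if = median(ifs.values())

# Aggregate IF: pool all citations and all items, as if one big journal
total_citations = sum(cites for cites, _ in journals.values())
total_items = sum(items for _, items in journals.values())
aggregate_if = total_citations / total_items

print(f"median IF    = {median_if:.2f}")     # 1.00
print(f"aggregate IF = {aggregate_if:.2f}")  # 1150/600 = 1.92
```

The gap between the two numbers (1.00 versus 1.92 here) is one reason to be explicit about which statistic you are quoting: the aggregate IF is pulled up by large, highly cited journals in a way the median is not.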
If you’re going to do this, I would suggest that you, first, be consistent about which statistic you use and, second, give this statistic for all the categories that a given journal belongs to. This will avoid accusations that you are cherry-picking statistics.
For example, my most recent paper was published in Mathematical Modelling of Natural Phenomena (MMNP), a journal with an impact factor of 0.8, which doesn’t seem impressive on the surface. This journal has been classified by Thomson Reuters as belonging to the following categories:
| Category | Median IF | Aggregate IF | Quartile |
|---|---|---|---|
| Mathematical & Computational Biology | 1.5 | 2.5 | 4 |
| Mathematics, Interdisciplinary Applications | 1.1 | 1.5 | 3 |
| Multidisciplinary Sciences | 0.7 | 5.3 | 2 |
This, I think, puts Mathematical Modelling of Natural Phenomena in perspective: It’s not a top-of-the-table journal, but its 0.8 impact factor isn’t ridiculously small either.
A closely related strategy would be to indicate which quartile of the impact-factor distribution a journal occupies within its category. This information is also available in the Journal Citation Reports, and I have provided these data for MMNP in the table above.
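If you want to sanity-check a quartile claim yourself, a JCR-style placement can be approximated by ranking a category’s journals by IF and cutting the ranked list into four equal parts. The sketch below uses invented category IFs, and the quartile rule (ceiling of 4 × rank / n) is my reading of how rank-based quartiles are commonly assigned, not an official JCR formula.

```python
# Invented IFs for a hypothetical eight-journal category
category_ifs = [5.1, 3.2, 2.4, 1.6, 1.1, 0.9, 0.8, 0.4]

def quartile(journal_if, all_ifs):
    """Quartile (1-4) of journal_if among all_ifs, ranked from highest IF."""
    ranked = sorted(all_ifs, reverse=True)
    rank = ranked.index(journal_if) + 1   # 1 = highest IF in the category
    return -(-4 * rank // len(ranked))    # ceil(4 * rank / n)

print(quartile(0.8, category_ifs))  # -> 4 (bottom quartile in this category)
```

The same 0.8 IF can land in different quartiles depending on the category it is compared against, which is exactly what the MMNP table shows.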
The main point I’m trying to make is that, if at all possible, you should provide an interpretation of your record and not let others impose an interpretation on your file. If you are in a position to fight the IF fire with fire, i.e., with category data from the Journal Citation Reports, it may be wise to do so.
All of that being said, some of the statistics for MMNP shown above demonstrate how crazy IF statistics are. If we look at the quartile placement of this journal in its different categories, it ranges from the 2nd quartile, which suggests a pretty good journal, to the 4th, which makes the journal look pretty weak. In an ideal world, I would not suggest that you include such flaky statistics in your grant applications. But we don’t live in an ideal world. Referees and grant panel members discuss IFs all the time, so if you can tell a positive story based on an analysis of IFs, it’s just smart to do so.