Journal impact factor: more trouble than it’s worth

This post was written by David Beales.

When Karen and I kick back and talk about libraries over a large Yogurito on the rocks (Japan’s #1 yogurt-based liqueur), one of the themes we find ourselves returning to is that librarians put ourselves under too much pressure. By claiming expertise in so many disparate fields, we can’t possibly do more than bluff our way through many of them. Our answer is that we need to take the pressure off and give each other permission to not know stuff. Karen and I have both written about parts of our job that we’re not very good at. But there is another way to take the pressure off for other people: by not building something up just because you’re an expert. Here’s an example:

Don’t worry if you don’t know about journal impact factors

I’ve taught classes on bibliometrics and impact measures. I’ve helped faculty to find their h-index and understand what it means. I can explain Eigenfactor metrics without having to look at the user guide. I am one of the few librarians I know who really gets “impact factor” and all those other journal and article metrics, and I don’t use a single one to make decisions. I don’t think there are many librarians who do.
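For readers curious about the mechanics behind those metrics, here is a minimal sketch of the two most common ones, the h-index and the two-year journal impact factor. The citation numbers are invented for illustration, and this is only the textbook arithmetic, not how Clarivate or any database actually computes its published figures.

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h


def two_year_impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years


# Invented example: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))              # four papers have >= 4 citations
print(two_year_impact_factor(300, 120))       # 300 citations to 120 items
```

Even this toy version hints at the limitations discussed below: both numbers compress a whole citation distribution into a single figure, which is exactly why they invite misuse.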

I was leading a webinar this week with colleagues from other CSU campuses on the method I have developed (with my colleague, Nikki DeMoville) for measuring the value of our ejournal packages when negotiating with publishers. I’m really proud of the work that we have done so far and we’ve had some great feedback. One of the questions which came up was whether we should include the impact factor of each journal as ammunition in our negotiation. My answer was “no”. Librarians aren’t comfortable enough with what the impact factor means to argue the toss with publishers or to make cancellation decisions based on it. And we don’t all need to be.

To be honest, I was glad that the question had come up because I’d done a lot of background reading on the subject and I didn’t want it to go to waste (I’ve kept a brief bibliography). I searched for all of the case studies of journal cancellation projects that I could find in Serials Review and in The Serials Librarian in the last five years. Only one of those projects gathered impact factors for each journal, and even then the authors only describe them as being available in the information they gave to faculty.

Libraries have a role if we want one

If you’ve already read the San Francisco Declaration on Research Assessment you’ll know of the growing concern in the academic community about the distorting effect the “journal impact factor” has on scientific research. There are faculty who want advice, and librarians who offer training are pushing on an open door. Impact factors are a great outreach tool; just don’t beat yourself up if you don’t use these metrics yourself. They won’t help you make decisions.

If you’ve ever had the time to do any reading about impact factors you’ll have read all about their limitations and their flaws. You’ll have read how the Eigenfactor has tried to address this but has problems of its own. Read enough about metrics and you’ll have enough evidence to discount any metric you disagree with…and to doubt any that you do agree with.

So, let’s take the pressure off. You’ve got enough on your plate without worrying about impact factors.

P.S. In a future blog post, I will look at the difference between theory and practice in the use of impact factors and altmetrics in collection management. A cliffhanger ending if ever I saw one.

Photo by Eric Fischer