July 11, 2016

Hate journal impact factors? New study gives you one more reason

Citation lists are key to calculating journal impact factors.
Scientists have a love-hate relationship with the journal impact factor (JIF), the measurement used to rank technical journals by prestige. They have come to use it not only to decide where to submit research papers but also to judge their peers, influencing who wins jobs, tenure, and grants. All that from a single, easy-to-read number.

And yet a journal’s impact factor is dismissed by many as useless or even destructive to the scientific community. In an attempt to shed some light, a group of researchers and journal editors today released a data set and analysis of the citation counts used to calculate this magical number. And their conclusions are likely to delight critics of the metric.

Calculating the impact factor might seem straightforward. It is just the average number of citations that a journal’s articles from the previous 2 years received in a given year. Nature, for example, currently has a JIF of 41.456, which is generally interpreted to mean that Nature articles published over the past 2 years have been cited, on average, about 41 times each.
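To make the arithmetic concrete, here is a minimal sketch in Python of that two-year average (the journal, its article count, and its citation count are invented for illustration; the real calculation also depends on how Thomson Reuters decides which items count as citable):

```python
# Toy two-year journal impact factor (JIF) calculation.
# All numbers are invented; real JIF values depend on how
# "citable items" are defined and matched in the citation database.

def impact_factor(citations_this_year: int, articles_prev_two_years: int) -> float:
    """Citations received this year by articles from the previous
    two years, divided by the number of those articles."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 850 articles published in 2014-2015,
# cited 34,000 times in 2016.
print(impact_factor(citations_this_year=34_000,
                    articles_prev_two_years=850))  # 40.0
```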

But that number is easily misinterpreted. For example, if the citation counts of articles were like the heights of people, then the average number would be informative. Men are taller than women, on average, and indeed you can do better than random at predicting the height of people knowing nothing more than their sex. But for the articles published in any given journal, the distribution of citations is highly skewed. A small fraction of influential papers get most of the citations, whereas the vast majority of papers get few or none at all. So the average number of citations is often highly misleading.
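The effect of that skew is easy to see with a toy simulation (a sketch only; the log-normal citation counts below are invented and are not the study's data): when a few papers collect most of the citations, the journal-wide mean sits far above what a typical paper receives.

```python
import random

random.seed(0)

# Invented citation counts for 1,000 papers in a hypothetical journal:
# most papers are cited a handful of times, a few "hits" are cited heavily
# (a log-normal shape, chosen only to mimic heavy skew).
citations = [int(random.lognormvariate(mu=1.0, sigma=1.5)) for _ in range(1000)]

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]
share_below_mean = sum(c < mean for c in citations) / len(citations)

print(f"mean (JIF-like average): {mean:.1f}")
print(f"median paper:            {median}")
print(f"papers below the mean:   {share_below_mean:.0%}")
# With numbers like these the mean is several times the median,
# and well over half of the papers sit below the journal-wide average.
```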

In particular, the JIF has “two problems,” says Lucas Carey, a cell biologist at Pompeu Fabra University in Barcelona, Spain, who was not involved with the study. One is that “it is meaningless as a predictive measure,” he says, meaning that publishing a paper in a high-impact journal does not necessarily mean that it is more likely to be cited. The other problem, he says, is that “the way in which [the JIF] is calculated is opaque.”

The opacity comes from Thomson Reuters, the private company that does the calculating. It makes money by selling access to the Web of Science, a detailed journal database, and the citation data it curates are not public. It takes work to get those citation data, which must be gleaned from the reference sections of papers. Journals use a variety of formats, and some require citations to be greatly abbreviated, making it difficult to identify the article being cited. Typographic errors and outright mistakes also add to the data-cleaning challenge.
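As a rough illustration of why that matching is hard (the reference strings below are invented, and this is not Thomson Reuters' actual procedure), the same paper can appear in reference lists in forms that only loosely resemble one another:

```python
import difflib
import re

# Three invented reference strings, all pointing to the same fictional
# article but formatted according to different journal styles.
references = [
    "Smith, J. & Lee, K. (2014) Citation patterns in biology. J. Informetrics 8, 112-120.",
    "J Smith, K Lee, Journal of Informetrics 8 (2014) 112.",
    "Smith J, Lee K. Citation patterns in biology. J Informetr. 2014;8:112-20.",
]

def normalize(ref):
    """Lowercase and strip punctuation so superficial style differences
    (commas, ampersands, abbreviation dots) matter less."""
    return re.sub(r"[^a-z0-9 ]+", " ", ref.lower())

# Pairwise similarity of the normalized strings.
for i in range(len(references)):
    for j in range(i + 1, len(references)):
        score = difflib.SequenceMatcher(
            None, normalize(references[i]), normalize(references[j])).ratio()
        print(f"ref {i} vs ref {j}: similarity {score:.2f}")

# Even for the same article the scores fall well short of 1.0, and a
# heavily abbreviated or mistyped citation can look as different from
# the full reference as a citation to another paper entirely.
```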

So why don't journal publishers cooperate with each other, using the Thomson Reuters database to calculate their own impact factors? That idea crystallized at a meeting last year at the United Kingdom's Royal Society, says Stephen Curry, a structural biologist at Imperial College London. "It wasn’t hard to convince people," he says, and "the core group coalesced organically in the months following." The 11 journals taking part in today's data release are Science, eLife, The EMBO Journal, the Journal of Informetrics, the Proceedings of the Royal Society B, three journals published by the Public Library of Science, and Nature along with two of its sister journals. In 2013 and 2014, those journals published more than 366,000 research articles and 13,000 review articles. The team then combed through the Thomson Reuters database to count all citations to those articles in 2015. Vincent Larivière, an expert on journal citations at the University of Montreal in Canada, led the analysis of the data.

The results give more ammunition to JIF critics. The citation distributions were so skewed that up to 75% of the articles in any given journal had lower citation counts than the journal's average. So trying to use a journal’s JIF to forecast the impact of any particular paper is close to guesswork. The analysis also revealed a large number of flaws in the Thomson Reuters database, with citations that could not be matched to known articles. "We hope that this analysis helps to expose the exaggerated value attributed to the JIF and strengthens the contention that it is an inappropriate indicator for the evaluation of research or researchers," the team concludes in a preprint paper posted to bioRxiv.

The team reached out to Thomson Reuters with the analysis in hand, says co-author Bernd Pulverer, chief editor of The EMBO Journal in Heidelberg, Germany. “The discussion was actually rather constructive and Thomson Reuters wanted to continue the dialogue,” he told ScienceInsider. But “while they agreed to essentially all the key points we made, they did not want to change anything that would collapse journal rankings, as they see this as their key business asset.”

"The authors are correct to point out that JIF should only be used as an aid to understand the impact of a journal," says James Pringle, Thomson Reuters' head of Industry Development and Innovation, IP & Science in Philadelphia, Pennsylvania. "JIF is a reflection of the citation performance of a journal as a whole unit, not as an assembly of diverse published items." As for the errors in its database, "Thomson Reuters' editorial team continuously works with publishers, identifying ways to improve item-level matching, providing authors with simple processes to correct their records, contributing to cross-industry initiatives and encouraging use of standard identifiers."

Will the data release start a revolution? Unlikely, says David Smith, an academic publishing expert in Oxfordshire, U.K., who contributes regularly to the influential Scholarly Kitchen blog. "These results, from a limited population of journals, are interesting and worthy of further and larger investigation," he says, but citation data from far more journals are needed. "One question here is whether high-impact journals have fundamentally different citation profiles." But even if the data released today are representative of the whole scientific community, Smith contends that no single number can capture the worth of a scientist's work. “JIF isn’t the problem,” he says. “It’s the way we think about scholarly progress that needs the work.”

Author: John Bohannon
Email: <gonzo@aaas.org>
Twitter: <@bohannonscience>
Source: <http://www.sciencemag.org/>
