December 15, 2016

CiteScore: Flawed But Still A Game Changer

Last Thursday, Elsevier announced CiteScore metrics, a free citation reporting service for the academic community. The primary metric promoted by this service is also aptly named CiteScore and is similar, in many ways, to the Impact Factor.

Construction of the CiteScore metric
Both CiteScore and the Impact Factor are journal-level indicators built around a ratio of citations to documents. The specifics of how each indicator is constructed, however, make them different enough that they should not be considered substitutes (a rough sketch of the two calculations follows the list below).
  1. Observation Window. CiteScore is based on citations made in a given year to documents published in the past three years. The Impact Factor is based on citations made in a given year to documents published in the past two years.
  2. Sources. CiteScore is based on citations from about 22,000 sources. The Impact Factor is based on citations from approximately 11,000 selected sources.
  3. Document Types. CiteScore includes all document types in its denominator. The Impact Factor is limited to papers classified as articles and reviews (also known as “citable items”).
  4. Updates. CiteScore will be calculated monthly. The Impact Factor is calculated annually.
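To make these construction differences concrete, here is a minimal sketch, in Python, of how the two ratios could be computed side by side. The function names and all of the document and citation counts are invented for illustration; they are not Scopus or Clarivate figures, and the sketch ignores the fact that the two services also draw citations from different source lists.

    # Minimal sketch of the two journal-level ratios (all numbers are hypothetical).

    def citescore_style(citations_to_past_3_years, all_documents_past_3_years):
        # CiteScore: citations received in a given year to anything published in the
        # previous three years, divided by ALL documents from those years
        # (articles, reviews, editorials, news, letters, etc.).
        return citations_to_past_3_years / all_documents_past_3_years

    def impact_factor_style(citations_to_past_2_years, citable_items_past_2_years):
        # Impact Factor: citations received in a given year to items published in the
        # previous two years, divided by "citable items" only (articles and reviews).
        return citations_to_past_2_years / citable_items_past_2_years

    # A hypothetical journal: 5,000 citations to its three-year back file of 600
    # documents, of which 400 are articles or reviews; 3,500 of those citations
    # point to the 270 articles and reviews falling in the two-year window.
    print(round(citescore_style(5000, 600), 1))       # 8.3
    print(round(impact_factor_style(3500, 270), 1))   # 13.0

The same journal can therefore carry very different numbers under the two schemes, driven largely by what sits in the denominator.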
After its launch, it didn’t take long for critics to voice their objections to CiteScore.

Writing for Nature, Richard Van Noorden illustrated that many high Impact Factor journals performed very poorly in CiteScore, the result of including non-research material (news, editorials, letters, etc.) in its denominator. Based on their CiteScore rank, top medical journals, like The New England Journal of Medicine and The Lancet, and general multidisciplinary science journals, like Nature and Science, rank well below mid-tier competitors. This kind of head-scratching ranking creates bad optics for the validity of the CiteScore metric.

Ludo Waltman, at Leiden University, praised CiteScore for being more transparent and consistent about the calculation of its scores, but voiced reservations over how treating all documents equally creates a strong bias against publications that serve to provide a forum for news and discussion. Similarly, the researchers at Eigenfactor.org argued that CiteScore tried to solve a problem by creating another one:
[J]ournals that produce large amounts of front matter are probably receiving a bit of an extra boost from the Impact Factor score. But since these items are not cited all that much, the boost is probably not very large in most cases. The CiteScore measure eliminates this boost, but in our opinion goes too far in the other direction by counting all of these front matter pieces as regular articles.
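A quick back-of-the-envelope calculation, using invented numbers and ignoring the different citation windows, illustrates the asymmetry the Eigenfactor group describes.

    # Hypothetical journal: 200 articles/reviews drawing 5,000 citations, plus
    # 300 front-matter pieces (editorials, news, letters) drawing 300 citations.
    citable_items, citable_cites = 200, 5000
    front_matter, front_cites = 300, 300

    # Impact-Factor-style: front-matter citations count in the numerator, but the
    # pieces themselves are excluded from the denominator, a modest boost.
    if_style = (citable_cites + front_cites) / citable_items
    # 26.5, versus 25.0 if front-matter citations were excluded as well

    # CiteScore-style: every document counts in the denominator.
    cs_style = (citable_cites + front_cites) / (citable_items + front_matter)
    # 10.6

On these invented numbers the front-matter "boost" is about six percent, while counting every piece in the denominator cuts the score by more than half, which is the over-correction the quote above describes.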
CiteScore subject rankings also exposed some high-profile misfits. For example, the Annual Review of Plant Biology ranked 4th among General Medicine journals. While Scopus tweeted that they were working on fixing some journal classifications, cases like this also create bad optics for their new metric. More user-testing would have caught high-profile irregularities like this before launch.

Other critics worried that getting into the metrics business put Elsevier in a conflict of interest. Moving beyond idle speculation, Carl Bergstrom and Jevin West demonstrated in a series of scatterplots how Elsevier journals generally benefited from CiteScore relative to competitors' journals.

Taken together, it doesn’t appear that the CiteScore indicator can be considered a viable alternative to the Impact Factor.

Until recently, Scopus produced an Impact Factor-like metric, the Impact per Publication (IPP), which was limited to Articles, Reviews, and Conference Papers. I asked Wim Meester, Head of Product Management for Content Strategy at Elsevier, why they stopped producing the IPP. Wim responded that calculating the IPP was a time-consuming process, but that the metric will still be calculated and reported independently on Leiden University's Journal Indicators web service.

Abandoning a more reasonable metric for a quick, dirty, and overtly biased one makes me wonder whether Elsevier’s decision reflects a different marketing strategy for Scopus that has nothing to do with building a better performance indicator.

The Impact Factor metric is reported in Clarivate’s (formerly Thomson Reuters) annual Journal Citation Reports (JCR), a product that is sold to subscribing publishers and institutions. While the calculations that go into generating the JCR each year are enormous and time-consuming, the information contained in the report disseminates almost instantaneously upon publication. Each June, within hours of its release, publishers and editors extract the numbers they need. Impact Factors get refreshed on journal web pages and shared widely with non-subscribers. Some web hosts even use JCR data as clickbait to sell ads.

In contrast, Elsevier has adopted a different business model: make the metrics free but charge for access to the underlying dataset. To me, CiteScore appears to have been developed with publishers and editors in mind, not librarians and bibliometricians. Each journal has its own score card, which is updated monthly (yes, monthly). No more waiting until mid-June to get a performance metric on how your journal performed last year. While an annual report may be sufficient for a librarian making yearly journal collection decisions, it is woefully insufficient for an editor who wants regular feedback on how their title is performing.

The CiteScore metric is controversial because of its overt bias against journals that publish a lot of front matter. Nevertheless, for most academic journals, CiteScore will provide rankings that are similar to the Impact Factor. The free service provides raw citation and document counts, along with two field-normalized metrics, the SNIP and the SJR. As the underlying Scopus dataset is available only by subscription, the marketing push will come from editors at institutions without current access.

In the past, products like Scopus were marketed with librarians and administrators in mind. The launch of CiteScore may be an attempt to market directly to editors.

Author: Phil Davis
Twitter: <@scholarlykitchn>
Source: <https://scholarlykitchen.sspnet.org/>
