December 14, 2015

Dead Metrics

Scholarly metrics come, but some never go - even when they should. Here are three examples of dead metrics: scholarly metrics that are still around yet are infrequently used and of little or no practical value to researchers or librarians.

[Image: Meaningless metric.]
1. The SCImago Journal Rank (SJR)

When was the last time you looked at a journal’s SJR ranking or made a decision based on it? Never? Me neither. While this metric was designed with the best of intentions and the latest bibliometric knowledge, it has never caught on.

Predatory and low-quality open-access journals included in SJR love to display the little box the service automatically generates using a supplied link. They exploit this metric and others to make themselves look more legitimate than they really are and to attract manuscripts, which they quickly accept and invoice the authors for.

2. The Eigenfactor

The Eigenfactor is a perfectly over-engineered metric, so perfect that it surpasses any practical use by everyday researchers and librarians. I think it’s a metric used chiefly by bibliometricians (and mainly to generate publications). I know of no academic library that uses this metric.

[Image: An interpretation of the Eigenfactor logo.]
Journals frequently report their Impact Factors on their websites and in their promotional materials, but have you ever seen a journal report its Eigenfactor? I have not.

The Eigenfactor is a theoretical metric that has never found a practical purpose or an audience. It is floundering.

3. The Web of Science h-index

Have you ever used Web of Science to calculate your h-index? If you did, you’re one of the very few, and you probably noticed that it took a lot of clicks to get to the datum, the process was unintuitive, and the value was lower than expected - much lower than the value Google Scholar calculates for the same metric.

For example, Web of Science calculates my h-index as 4. Google Scholar calculates my h-index as 14. When two companies calculate and supply the same metric, naturally most will report the higher of the two.

This is because Google Scholar is much less selective than Web of Science, so it records more citations, making its h-index calculation - which is based on the number of papers published and the citations to them - significantly higher than Web of Science’s.
[Image: Lots of clicking for a lower value.]
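
The h-index itself is easy to compute: it is the largest number h such that h of an author’s papers each have at least h citations. Below is a minimal sketch in Python, using made-up citation counts rather than anyone’s real records, of why a database that captures more citations reports a higher value for the same author:

def h_index(citation_counts):
    # Largest h such that h papers each have at least h citations.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical counts for the same author as seen by two databases:
selective_index = [9, 5, 4, 4, 2, 1, 0, 0]      # fewer citations recorded
inclusive_index = [20] * 14 + [3, 2]            # more documents and citations found
print(h_index(selective_index))   # 4
print(h_index(inclusive_index))   # 14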


(Also, will citation data from Thomson Reuters’ new Emerging Sources Citation Index be used to calculate researchers’ h-indexes? If so, then everyone can expect his or her Thomson Reuters h-index to go up.)

Discussion and Conclusion

Some metrics, including some of those described here, were created not to fill a need but to compete with the Impact Factor. In the process, their creators produced complex metrics, like the Eigenfactor, that betray the simplicity of the Impact Factor and function essentially as the scholarly-metrics version of a Rube Goldberg machine.
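
By contrast, the two-year Impact Factor needs almost no machinery: it is the citations a journal receives in a given year to the items it published in the previous two years, divided by the number of citable items published in those two years. A minimal sketch in Python, with invented numbers purely for illustration:

# Two-year Impact Factor for 2015 (all figures invented for illustration).
citations_2015_to_2013_2014_items = 600   # citations received in 2015 to 2013-2014 items
citable_items_2013_2014 = 200             # citable items published in 2013-2014
impact_factor_2015 = citations_2015_to_2013_2014_items / citable_items_2013_2014
print(impact_factor_2015)  # 3.0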

Based on my experience serving on tenure committees, the only metrics I have seen reported in candidates’ dossiers have been:
  1. The Impact Factor.
  2. The raw number of citations their publications had received, drawn from Google Scholar or calculated manually using various databases.

Candidates would list their publications and, after each reference, append the journal’s Impact Factor.

For raw citations, they usually stated the total number of citations as a single figure, something like “According to Google Scholar, my articles have been cited 250 times.”

[I do understand and have documented that Google Scholar is easily gamed.]

I acknowledge the weaknesses of the Impact Factor, and there are even a few journals on my lists that have legitimate Impact Factors, so it’s clearly not a measure of quality in all or even most cases.

But at least the Impact Factor is a living metric, understood by most, bearing a practical, demonstrated, and enduring value. It’s not a dead metric.

Source: <http://scholarlyoa.com/>
