19 January 2016

The relationship between journal rejections and their impact factors

Frontiers recently published a fascinating article about the relationship between impact factors (IF) and rejection rates for a range of journals. It was a neat little study designed around the perception, held by many publishers, that in order to generate high citation counts for their journals they must be highly selective and only publish the ‘highest quality’ work.

Apart from issues involved with what can be seen as wasting time and money in rejecting perfectly good research, this apparent relationship has important implications for researchers. They will often submit first to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting their research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found for the research.

The new data from Frontiers shows that this perception is most likely false. From a random sample of 570 journals (indexed in the 2014 Journal Citation Reports; Thomson Reuters, 2015), it seems that journal rejection rates are almost entirely independent of impact factors. Importantly, this implies that researchers can just as easily submit their work to less selective journals and still have the same impact factor attached to it. This finding will remain important as long as the impact factor continues to dominate assessment criteria and the way researchers evaluate each other (whether or not the IF is a good candidate for this is another debate).

I wanted to look into this a bit more to see how the pattern changes when we look at different partitions of the dataset. For example, one might think that the pattern is driven by a prevalence of low-impact and highly unselective journals. Also, in the figure reported by Frontiers, the y-axis (IF) is log-transformed for some reason – it’s not clear whether the underlying data were transformed too, but either way this distorts the reported correlation, visually or statistically, so I figured it would be good to look at the raw data again here.

Thankfully, the dataset is published via Figshare and openly available to all for analysis.
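For anyone who wants to follow along, loading the data in R might look something like the sketch below; the file name and the column names are my assumptions, so check the Figshare deposit for the actual labels:

    # Minimal sketch of loading the Frontiers dataset in R.
    # The file name and the column names "rejection_rate" and "impact_factor"
    # are assumptions -- the Figshare deposit may label them differently.
    journals <- read.csv("journal_rejection_rates.csv", stringsAsFactors = FALSE)

    # Quick sanity check on the two variables of interest
    summary(journals$rejection_rate)
    summary(journals$impact_factor)
    nrow(journals)  # should be around 570 if the full sample is present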

Relationship between rejection rates and impact factors (see here for sources).
As you can see, based on the full dataset, whether or not the impact factor is log-transformed has little effect on the overall structure of the data. What is clearer, though, is that the journals with the highest impact factors tend to have the highest rejection rates, of around 0.9 (or 90%). As Frontiers report, the correlations here are quite weak based on a combination of parametric and non-parametric tests (Pearson’s, Spearman’s and Kendall’s). However, the correlations are an order of magnitude higher than those reported by Frontiers, most likely because of the log-transformation of the raw data, which I didn’t apply here (the R script used to play with this data can be found here, by changing the file extension from .txt to .r).
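For completeness, a rough sketch of how those three correlation tests can be run in R on both the raw and the log10-transformed impact factors is shown below; it assumes the same hypothetical column names as above, and it is not the linked script itself:

    # Run the three correlation tests on the raw and log10-transformed IF.
    # Column names are assumed, as before.
    for (method in c("pearson", "spearman", "kendall")) {
      raw_cor <- cor.test(journals$rejection_rate, journals$impact_factor,
                          method = method)
      log_cor <- cor.test(journals$rejection_rate, log10(journals$impact_factor),
                          method = method)
      cat(method, " raw IF:", round(raw_cor$estimate, 3),
          " log10 IF:", round(log_cor$estimate, 3), "\n")
    }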

Importantly for researchers, there appears to be a range of journals with moderate impact factors, between 5 and 10, that have extremely low rejection rates. These are what we might refer to as ‘good’ journals, as your likelihood of being published in them will be higher. Of course, what this implies is that the impact factor is a very poor predictor of the perceived quality of work, based on the probability that it will be rejected or accepted by differently ranked journals. You might as well shoot for a journal which is 10 times more likely to accept your work than a highly selective one with an equal impact factor.

If we look at a partitioned dataset, excluding all journals with a rejection rate lower than 0.6, then a slightly different structure emerges. The correlation strengths increase, and begin to show that, among journals with higher rejection rates, rejection rate corresponds weakly to impact factor. This correlation is undoubtedly skewed by the few highlighted (and unfortunately un-named) journals, which have anomalously high impact factors for their rejection rates, way above the usual trend. Importantly though, it shows that there might be a different pattern between ‘mid-tier’ journals and those with lower rejection rates.
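A small helper along the following lines (again assuming the hypothetical column names from earlier) sketches that kind of partitioning, subsetting at a given rejection-rate cut-off and recomputing the three correlation coefficients; the same helper is reused for the 0.9 cut further down:

    # Subset to journals at or above a minimum rejection rate and
    # recompute the three correlation coefficients.
    # Column names are assumptions, as before.
    cor_above_threshold <- function(df, min_rejection) {
      sub <- df[df$rejection_rate >= min_rejection, ]
      sapply(c("pearson", "spearman", "kendall"), function(m) {
        cor(sub$rejection_rate, sub$impact_factor, method = m)
      })
    }

    cor_above_threshold(journals, 0.6)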

As above, but discarding all data with a rejection rate below 0.6.
However, if we look at just the most selective journals (i.e., above a 0.9 rejection rate), we see that there is still no correlation between rejection rate and impact factor, even for this anomalous subset of the data. This is because there are still plenty of journals out there that have very low impact factors but very high rejection rates. Comparatively, these would make a poor choice of journal if impact factor plays into your selection criteria for submission.
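The same hypothetical helper sketched above covers this stricter cut directly:

    # Correlations for the most selective journals only (rejection rate >= 0.9)
    cor_above_threshold(journals, 0.9)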

As above again, but discarding all journals with a rejection rate below 0.9.
As the Frontiers article points out, this data is good evidence against the notion that to obtain a high IF, a journal must be highly selective and reject a lot of research. This is actually really important for both publishers and researchers, as it tells us that the time and money wasted chasing higher IFs only serve to increase rejection rates, not the impact factors of journals. Furthermore, it shows that if we assume IFs measure some aspect of journal or article quality, as many do, then this quality has very little to do with the selectivity of journals based on a priori assessment.

The IF originates from the subscription-based era of publishing and was originally intended to help librarians to select journals worth purchasing. It neither reflects the actual number of citations for a given article nor its scientific quality. At ScienceOpen, we believe that alternative metrics that measure “impact”, “relevance” and “quality” at the article level and by various other means will replace the IF sooner or later. This is why ScienceOpen supports the San Francisco Declaration on Research Assessment (DORA), and why we report altmetric scores for every article within our archive.

What would be really cool in the future is to expand upon this dataset. Adding journal names would be an obvious benefit for researchers, so they could see which journals might be better candidates for submission. Another dimension could be to include aggregated journal altmetric scores, which would allow us to explore whether highly selective journals get the most social attention for the research they publish. Another aspect to investigate might simply be the number of articles published against the rejection rate. Either way, it’s a really nice dataset to explore some of the more detailed aspects of publishing, and we thank the Frontiers team for publishing it.

Author: Jon Tennant
Twitter: <@Protohedgehog>
Source: <http://blog.scienceopen.com/>
