Apart from the issues involved in what can be seen as wasting time and money by rejecting perfectly good research, this apparent relationship has important implications for researchers. They tend to submit first to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting the research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found.
The new data from Frontiers shows that this perception is most likely false. From a random sample of 570 journals (indexed in the 2014 Journal Citation Reports; Thomson Reuters, 2015), it seems that journal rejection rates are almost entirely independent of impact factors. Importantly, this implies that researchers can just as easily submit their work to less selective journals and still have the same impact factor assigned to it. This relationship will remain important while the impact factor continues to dominate assessment criteria and how researchers evaluate each other (whether or not the IF is a good candidate for this is another debate).
I wanted to look into this a bit more to see how the pattern changes when we look at different partitions of the dataset. For example, one might think that it is driven by a prevalence of low-impact, highly unselective journals. Also, in the figure reported by Frontiers, the y-axis (IF) is log-transformed; it’s not clear whether the underlying data were also transformed before the correlation was calculated, but either way this distorts how the relationship reads, visually or statistically, so I figured it would be good to look at the raw data again here (a quick sketch of how to do that follows the figure below).
Thankfully, the dataset is published via Figshare and openly available to all for analysis.
Relationship between rejection rates and impact factors (see here for sources).
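For anyone who wants to reproduce this from the raw values, a minimal sketch is below, assuming the Figshare file has been downloaded locally as a CSV. The file name and column names are placeholders rather than the dataset’s actual labels, so adjust them to whatever the downloaded file uses.

```python
# Minimal sketch: re-plot rejection rate vs. impact factor on untransformed axes.
# "frontiers_rejection_rates.csv" and the column names below are assumptions,
# not the actual labels in the Figshare file.
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

df = pd.read_csv("frontiers_rejection_rates.csv")  # hypothetical local copy of the Figshare data

x = df["rejection_rate"]   # assumed column name
y = df["impact_factor"]    # assumed column name

# Pearson correlation on the raw (untransformed) values
r, p = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")

# Scatter plot with a linear y-axis, avoiding the visual compression
# that a log-transformed IF axis introduces
plt.scatter(x, y, alpha=0.5)
plt.xlabel("Rejection rate")
plt.ylabel("Impact factor (2014 JCR)")
plt.title("Rejection rate vs. impact factor (raw values)")
plt.show()
```

Because the IF distribution is heavily right-skewed, a rank-based measure such as Spearman’s rho is also worth reporting alongside Pearson’s r; the next sketch does both.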
Importantly for researchers, there appears to be a range of journals with impact factors between 5 and 10 (i.e., moderate) that have extremely low rejection rates. These are what we might refer to as ‘good’ journals, as your likelihood of being published with them is higher. Of course, this implies that the impact factor is a very poor predictor of the perceived quality of work, based on the probability that it will be rejected or accepted by differently ranked journals. You might as well shoot for a journal that is ten times more likely to accept your work than a highly selective one with an equal impact factor.
If we partition the dataset by excluding all journals with a rejection rate below 0.6, a slightly different structure emerges. The correlation strengthens, showing that among journals that already have high rejection rates, those rates correspond weakly to impact factors. This correlation is undoubtedly skewed by the few highlighted (and unnamed, unfortunately) journals with anomalously high impact factors for their rejection rates, well above the usual trend. Importantly, though, it shows that there might be a different pattern between ‘mid-tier’ journals and those with lower rejection rates (a quick sketch of this partitioning follows the figures below).
As above, but discarding all journals with a rejection rate below 0.6.
As above again, but discarding all journals with a rejection rate below 0.9.
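The partitioning itself is just a filter on the rejection-rate column. A rough sketch, reusing the (assumed) DataFrame from the snippet above and reporting both Pearson and Spearman correlations at the 0.6 and 0.9 cut-offs used in the figures:

```python
# Sketch of the partitioned analysis: correlate rejection rate with impact factor
# for the full sample and for the subsets above each threshold. Column names are
# the same assumptions as in the previous snippet.
from scipy import stats

for threshold in (0.0, 0.6, 0.9):
    subset = df[df["rejection_rate"] >= threshold]
    pearson_r, pearson_p = stats.pearsonr(subset["rejection_rate"], subset["impact_factor"])
    spearman_r, spearman_p = stats.spearmanr(subset["rejection_rate"], subset["impact_factor"])
    print(f"rejection rate >= {threshold}: n = {len(subset)}, "
          f"Pearson r = {pearson_r:.3f} (p = {pearson_p:.3g}), "
          f"Spearman rho = {spearman_r:.3f} (p = {spearman_p:.3g})")
```

Comparing the two coefficients within each partition also gives a quick sense of how much the handful of outlying high-IF journals is driving the apparent strengthening of the correlation.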
The IF originates from the subscription-based era of publishing and was originally intended to help librarians to select journals worth purchasing. It neither reflects the actual number of citations for a given article nor its scientific quality. At ScienceOpen, we believe that alternative metrics that measure “impact”, “relevance” and “quality” at the article level and by various other means will replace the IF sooner or later. This is why ScienceOpen supports the San Francisco Declaration on Research Assessment (DORA), and why we report altmetric scores for every article within our archive.
What would be really cool in the future is to expand upon this dataset. Adding journal names would be an obvious benefit for researchers, so they could see which journals might be better candidates for submission. Another dimension could be to include aggregated journal altmetric scores, which would allow us to explore whether highly selective journals get the most social attention for the research they publish. Another aspect to investigate might simply be the number of articles published against the rejection rate. Either way, it’s a really nice dataset to explore some of the more detailed aspects of publishing, and we thank the Frontiers team for publishing it.
Author: Jon Tennant
Twitter: <@Protohedgehog>
Source: <http://blog.scienceopen.com/>