The journal Impact Factor and alternative metrics
A variety of bibliometric measures have been developed to supplant the Impact Factor and better assess the impact of individual research papers.
Thomson Reuters claims it has “never advocated” the use of the impact factor for the “analysis of individual research artefacts or people”.
The impact factor is a poor measure of a journal's quality, and academics say it should either be overhauled or done away with entirely.
The focus on impact of published research has created new opportunities for misconduct and fraudsters, says Mario Biagioli.
The goal is to avoid contributing further to the inappropriate focus on journal Impact Factors.
Senior staff at Nature, Science and other journals want to end inappropriate use of the measure.
Analysis finds citation rankings can be very misleading.
A possible replacement for the Journal Impact Factor.
Citation indicators addressing total impact, co-authorship, and author positions offer complementary insights about impact. This article shows that a composite score including six citation indicators identifies extremely influential scientists better than single indicators.
Google Scholar is great, but its inclusiveness and mix of automatically updated and hand-curated profiles means you should never take any of its numbers at face value.
A longitudinal and cross-disciplinary comparison.
The rhetoric of “excellence” is pervasive across the academy. It is used to refer to research outputs as well as researchers, theory and education, individuals and organisations, from art history to zoology. But what does “excellence” mean? Does it in fact mean anything at all? And is the pervasive narrative of excellence and competition a good thing?
A researcher collaborating with many groups will normally have more papers (and thus more citations and a higher h-index) than a researcher working alone or in a small group. When analyzing an author's research merit, it is therefore not enough to consider only the collective impact of the published papers; it is also necessary to quantify the author's share in that impact. For this quantification, I propose the I-index, defined as an author's percentage share in the total citations that his or her papers have attracted.
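A minimal sketch of how such a percentage share might be computed. The blurb does not specify how credit is divided among co-authors, so this example makes the simplifying assumption of an equal split per paper; the function name and input format are illustrative, not taken from the proposal itself.

```python
def i_index(papers):
    """Percentage share of total citations attributable to one author.

    `papers` is a list of (citations, n_authors) pairs, one per paper
    authored by the researcher. Assumption (not stated in the source):
    each paper's citations are credited equally to all its co-authors.
    """
    total = sum(citations for citations, _ in papers)
    if total == 0:
        return 0.0
    share = sum(citations / n_authors for citations, n_authors in papers)
    return 100.0 * share / total

# Three hypothetical papers: (citations, number of authors)
papers = [(100, 4), (50, 2), (10, 1)]
print(i_index(papers))  # → 37.5
```

With 160 total citations and an equal-split credit of 60, the author's share is 37.5%, whereas raw citation counts alone would credit the full 160.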
Altmetrics have gained momentum and are meant to overcome the shortcomings of citation-based metrics. This article sheds some light on the dangers associated with the new “all-in-one” altmetric score.
As part of our Event Data work we’ve been investigating where DOI resolutions come from.
Or 'how to tweet your way to honour and glory'.
The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here …
A 40-year longitudinal cross-validation of citations, downloads, and peer review in astrophysics
If Thomson Reuters can calculate Impact Factors and Eigenfactors, why can’t they deliver a simple median score?
Why does the impact factor continue to play such a consequential role in academia? Alex Rushforth and Sarah de Rijcke look at how considerations of the metric enter in from early stages of research…
Citation counts are not purely a reflection of scientific merit and the impact factor is, in fact, auto-correlated.