The Complex Ecosystem of Hyperprolific Authors
This paper presents a systematic review of the literature on hyperprolific authorship to examine how it is defined, investigated, and perceived across disciplines.
The nation avoided far more in medical spending and lost productivity than it spent on testing for, buying, and delivering the 2021 vaccines.
The Spark grants scheme, run by the Swiss National Science Foundation (SNSF), anonymises applications. This has led to a more diverse range of winners, particularly younger scientists and those new to SNSF funding.
Reform efforts may need to reconsider the usefulness of value-led strategies.
Embracing uncertainty could improve peer review processes.
When evidence-based policymaking is so often mired in disagreement and controversy, how can we know if the process is meeting its stated goals?
A more nuanced balance between the use of metrics and peer review in research assessment might be needed.
For research funders seeking to minimize bias in their selection process, removing applicants’ institutional affiliations from their submissions could help address a common disparity: disproportionate funding going to those at the most prestigious places.
Major platforms such as the Web of Science, widely used to generate metrics and evaluate researchers, are proprietary. More than 30 research and funding organizations call for the community to commit to platforms that instead are free for all, more transparent about their methods, and without restrictions about how the data can be used.
The European Research Council (ERC) introduced a more inclusive application form for applicants this year to give researchers on all career pathways a fair chance to demonstrate their excellence.
The European Research Council is revamping its project evaluation process from 2024 in line with the EU-wide push for a less prescriptive approach to evaluating scientific impact.
The UK Government’s research evaluation system encourages a higher quantity and lower quality of work from academics, according to a recent paper.
The targeted research Missions set up under Horizon Europe are turning three years old this year, and their ambitious logic is facing its first test in an upcoming review at the midpoint of the EU's €95.9 billion research programme.
In a landmark decision this week, the European Research Council (ERC) announced changes to its application forms and evaluation procedures that will be implemented starting with the 2024 calls for proposals.
Why the greatest scientific experiment in history failed, and why that's a great thing.
Dominant approaches to research quality rest on the assumption that academic peers are the only relevant stakeholders in its assessment. In contrast, impact assessment frameworks recognize a large and heterogeneous set of actors as stakeholders.
China created a research evaluation system based on publications indexed in the SCI and on the Journal Impact Factor, which helped China become the largest contributor to the scientific literature and raised the standing of its universities in global rankings.
More than resource allocations, evaluations of funding applications have become central instances for status bestowal in academia. Much attention in past literature has been devoted to grasping the status consequences of prominent funding evaluations.
The Swiss National Science Foundation's 'narrative' template seeks evidence of applicants' wider contributions to science.
From a research data repository's perspective, offering research data management services in line with the FAIR principles is becoming increasingly important. However, no globally established and trusted approach to evaluating FAIRness exists to date. This article applies five different available FAIRness evaluation approaches to selected data archived in the World Data Center for Climate (WDCC).
This article explores how factors relating to grades and grading affect the correctness of choices that grant-review panels make among submitted proposals. It seeks to identify interventions in panel design that may be expected to increase the correctness of choices.
LERU published a position paper, "A Pathway towards Multidimensional Academic Careers", providing a LERU framework for assessing researchers' careers. The report elaborates on three perspectives that form the basis of the framework for the assessment of researchers.
Current programme evaluations do not adequately measure the skills and characteristics of individuals and collectives doing transdisciplinary research.
The focus on a narrow set of metrics leads to a lack of diversity in the types of leader and institution that win funding.
The role they play in evaluations for graduate school admissions, fellowships and jobs can be baffling.