Can ChatGPT Evaluate Research Environments?
Can Large Language Models (LLMs) support or validate expert evaluations? ChatGPT-4o mini scores correlated positively with expert scores in almost all 34 field-based Units of Assessment (UoAs).
The Time Has Come for Big Changes to Improve Research Funding
The Complex Ecosystem of Hyperprolific Authors
This paper presents a systematic review of the literature on hyperprolific authorship to examine how it is defined, investigated, and perceived across disciplines.
Can AI Support the Assessment of REF Research Environments?
Science Funding Needs Fixing - but Not Through Chaotic Reforms
Investing in vaccination more than paid off for U.S., study finds
The nation avoided far more in medical spending and lost productivity than it spent on testing, buying, and delivering the 2021 vaccines.
Blind Application Process Helps Swiss Spark Grants Back Unconventional Research
The Spark grants scheme, run by the Swiss National Science Foundation (SNSF), anonymises applications. This has meant a more diverse range of winners, particularly younger scientists and those new to SNSF funding.
Unanswered Questions in Research Assessment - Whose Values Lead Value-led Approaches?
Reform efforts may need to reconsider the usefulness of value-led strategies.
Randomisation Can Resolve the Uncertainty at the Heart of Peer Review
Embracing uncertainty could improve peer review processes.
Unearthing 'hidden' Science Would Help to Tackle the World's Biggest Problems
Processing Horizon Europe Grants is Taking 23 Days Longer Than Horizon 2020
A Model of Faulty and Faultless Disagreement for Post-hoc Assessments of Knowledge Utilization in Evidence-based Policymaking
When evidence-based policymaking is so often mired in disagreement and controversy, how can we know if the process is meeting its stated goals?
Research Evaluation Should Be Pragmatic, Not a Choice Between Peer Review and Metrics
A more nuanced balance between the use of metrics and peer review in research assessment might be needed.
Anonymizing Research Funding Applications Could Reduce ‘Prestige Privilege’
For research funders seeking to minimize bias in their selection process, removing applicants’ institutional affiliations from their submissions could help address a common disparity: disproportionate funding going to those at the most prestigious places.
Researchers need ‘open’ bibliographic databases, new declaration says
Major platforms such as the Web of Science, widely used to generate metrics and evaluate researchers, are proprietary. More than 30 research and funding organizations call for the community to commit to platforms that instead are free for all, more transparent about their methods, and without restrictions about how the data can be used.
Is ChatGPT Corrupting Peer Review? Telltale Words Hint at AI Use
The European Research Council Has Changed How It Evaluates Applicants. Here's Why...
The European Research Council (ERC) introduced a more inclusive application form for applicants this year to give researchers on all career pathways a fair chance to demonstrate their excellence.
Research Lobbies Cheer European Research Council Rollout of 'Inclusive' Evaluation Rules
The European Research Council is revamping its project evaluation process from 2024 in line with the EU-wide push for a less prescriptive approach to evaluating scientific impact.
REF Pushes Academics to Churn out Lower Quality Research, New Study Shows
The UK Government’s research evaluation system encourages a higher quantity and lower quality of work from academics, according to a recent paper.
Horizon Europe Missions Gear Up for Their First Evaluation
The targeted research Missions set up under Horizon Europe are turning three years old this year, and their ambitious logic is facing its first test in an upcoming review at the midpoint of the EU's €95.9 billion research programme.
European Research Council Announces Plan to Update Its Evaluation System
In a landmark decision this week, the European Research Council (ERC) announced changes to its application forms and evaluation procedures that will be implemented starting with the 2024 calls for proposals.
The Rise and Fall of Peer Review
Why the greatest scientific experiment in history failed, and why that's a great thing.
Enriching Research Quality: A Proposition for Stakeholder Heterogeneity
Dominant approaches to research quality rest on the assumption that academic peers are the only relevant stakeholders in its assessment. In contrast, impact assessment frameworks recognize a large and heterogeneous set of actors as stakeholders.
China's Research Evaluation Reform: What Are the Consequences for Global Science?
China created a research evaluation system based on publications indexed in the SCI and on the Journal Impact Factor, which helped China become the largest contributor to the scientific literature and raised the standing of its universities in global rankings.
Stress-Inducing and Anxiety-Ridden: A Practice-Based Approach to the Construction of Status-Bestowing Evaluations in Research Funding
More than mere resource allocation, evaluations of funding applications have become central instances of status bestowal in academia. Much attention in the past literature has been devoted to grasping the status consequences of prominent funding evaluations.
Swiss Funder Unveils New CV Format to Make Grant Evaluation Fairer
The Swiss National Science Foundation's 'narrative' template seeks evidence of applicants' wider contributions to science.
Recommendations for Discipline-Specific FAIRness Evaluation Derived from Applying an Ensemble of Evaluation Tools
From a research data repository's perspective, offering research data management services in line with the FAIR principles is becoming increasingly important. However, no globally established and trusted approach to evaluating FAIRness exists to date. This article applies five different available FAIRness evaluation approaches to selected data archived in the World Data Center for Climate (WDCC).
Designing Grant-Review Panels for Better Funding Decisions: Lessons from an Empirically Calibrated Simulation Model
This article explores how factors relating to grades and grading affect the correctness of choices that grant-review panels make among submitted proposals. It seeks to identify interventions in panel design that may be expected to increase the correctness of choices.
A Pathway Towards Multidimensional Academic Careers
LERU published a position paper, "A Pathway towards Multidimensional Academic Careers", setting out a LERU framework for assessing researchers' careers. The report elaborates on three perspectives that form the basis of the framework for research assessment.