Empathy and Grit - Not Just Publication Records - Should Be Considered in Researcher Assessment
Critics of current methods for evaluating researchers’ work say a system that relies on bibliometric parameters favours a ‘quantity over quality’ approach, and undervalues achievements such as social impact and leadership.
Citations Systematically Misrepresent the Quality and Impact of Research Articles: Survey and Experimental Evidence from Thousands of Citers
Citations are ubiquitous in evaluating research, but how exactly they relate to what they are thought to measure is unclear. This article investigates the relationships between citations, quality, and impact using a survey with an embedded experiment.
This paper presents a simple model of the lifecycle of scientific ideas that points to changes in scientist incentives as the cause of scientific stagnation. It explores ways to broaden how scientific productivity is measured and rewarded: academic search engines such as Google Scholar could measure which contributions explore newer ideas, and university administrators and funding agencies could use these new metrics in research evaluation.
Scientists Call for Reform on Rankings and Indices of Science Journals
Researchers are used to being evaluated on indices such as the impact factors of the journals in which they publish and their number of citations. A team of 14 natural scientists from nine countries is now rebelling against this practice, arguing that obsessive use of such indices is damaging the quality of science.
The Acceptability of Using a Lottery to Allocate Research Funding: a Survey of Applicants
The Health Research Council of New Zealand is the first major government funding agency to use a lottery to allocate research funding, for its Explorer Grant scheme. A recent survey examines how well the measure is accepted.
Games Academics Play and Their Consequences: How Authorship, H-Index and Journal Impact Factors Are Shaping the Future of Academia
Research is a highly competitive profession where evaluation plays a central role. Yet such evaluations are often done in inappropriate ways that are damaging to individual careers, and to the profession.
Growing evidence suggests that evaluating researchers’ careers against narrow definitions of excellence is restricting diversity in academia, both in the development of its labour force and in its approaches to addressing societal challenges. Recommendations are suggested for the Marie Skłodowska-Curie Actions.
Scientific Output Scales with Resources. A Comparison of US and European Universities
A recent study finds a strong correlation between university revenues and their volume of publications and (field-normalized) citations. These results demonstrate empirically that international rankings are by and large measures of institutional wealth and can therefore be interpreted only alongside a measure of resources.
The Evaluative Inquiry: a New Approach to Research Evaluation
This article outlines the four principles that shape a new, less standardised approach to research assessment called the "evaluative inquiry": employing versatile methods; shifting the contextual focus away from the individual; practising knowledge diplomacy; and favouring ongoing engagement over open-and-shut reporting.
"Excellence R Us": University Research and the Fetishisation of Excellence
The rhetoric of "excellence" is pervasive across the academy. It is used to refer to research outputs as well as researchers, theory and education, individuals and organizations, from art history to zoology. But does "excellence" actually mean anything?
The Hong Kong Principles for Assessing Researchers: Fostering Research Integrity
The primary goal of research is to advance knowledge. For that knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous and transparent at all stages of design, execution and reporting. The authors developed the Hong Kong Principles (HKP) with a specific focus on the need to drive research improvement through ensuring that researchers are explicitly recognized and rewarded for behavior that leads to trustworthy research.
How Journals and Publishers Can Help to Reform Research Assessment
It is well established that administrators and decision-makers use journal prestige and impact factors as a shortcut to assess research. But recognizing the problem is not enough: the key is identifying specific approaches that publishers can take to address these concerns.
What Words Are Worth: National Science Foundation Grant Abstracts Indicate Award Funding
Can word patterns from grant abstracts predict National Science Foundation (NSF) funding? The data describe a clear relationship between word patterns and funding magnitude: Grant abstracts that are longer than the average abstract, contain fewer common words, and are written with more verbal certainty receive more money.
Is Blinded Review Enough? How Gendered Outcomes Arise Even Under Anonymous Evaluation
Blinded review is an increasingly popular approach to reducing bias and increasing diversity in the selection of people and projects. We explore the impact of blinded review on gender inclusion in research grant proposals submitted to the Gates Foundation from 2008 to 2017. Despite blinded review, female applicants received significantly lower scores.