Part one of a four-part series on major barriers to equitable decision-making in hiring, review, promotion, and tenure processes that commonly result from biased thinking in academia. Part one delves into objective comparisons.
Gender and Other Potential Biases in Peer Review: Cross-sectional Analysis of 38 250 External Peer Review Reports
The Swiss National Science Foundation (SNSF) examined whether the gender of applicants and peer reviewers, among other factors, influences the peer review of grant proposals submitted to a national funding agency.
Value People As Well As Papers to Improve Research Culture
As scientists, we try to make sure our research is rigorous so that we can avoid costly errors. We should take the same approach to tackle issues in research culture, says Professor Christopher Jackson.
Empathy and Grit - Not Just Publication Records - Should Be Considered in Researcher Assessment
Critics of current methods for evaluating researchers’ work say a system that relies on bibliometric parameters favours a ‘quantity over quality’ approach, and undervalues achievements such as social impact and leadership.
Citations Systematically Misrepresent the Quality and Impact of Research Articles: Survey and Experimental Evidence from Thousands of Citers
Citations are ubiquitous in evaluating research, but how exactly they relate to what they are thought to measure is unclear. This article investigates the relationships between citations, quality, and impact using a survey with an embedded experiment.
This paper presents a simple model of the lifecycle of scientific ideas that points to changes in scientist incentives as the cause of scientific stagnation. It explores ways to broaden how scientific productivity is measured and rewarded: academic search engines such as Google Scholar could measure which contributions explore newer ideas, and university administrators and funding agencies could use these new metrics in research evaluation.
Scientists Call for Reform on Rankings and Indices of Science Journals
Researchers are used to being evaluated based on indices such as the impact factors of the scientific journals in which they publish and their number of citations. A team of 14 natural scientists from nine countries is now rebelling against this practice, arguing that the obsessive use of indices is damaging the quality of science.
The Acceptability of Using a Lottery to Allocate Research Funding: A Survey of Applicants
The Health Research Council of New Zealand is the first major government funding agency to use a lottery to allocate research funding for their Explorer Grant scheme. A recent survey examines how well the measure is accepted.
Games Academics Play and Their Consequences: How Authorship, H-Index and Journal Impact Factors Are Shaping the Future of Academia
Research is a highly competitive profession where evaluation plays a central role. Yet such evaluations are often done in inappropriate ways that are damaging to individual careers, and to the profession.
Growing evidence suggests that evaluating researchers' careers on the basis of narrow definitions of excellence is restricting diversity in academia, both in the composition of its labour force and in its approaches to addressing societal challenges. The article offers recommendations for the Marie Skłodowska-Curie Actions.
Scientific Output Scales with Resources. A Comparison of US and European Universities
A recent study finds a strong correlation between universities' revenues and their volume of publications and (field-normalized) citations. These results demonstrate empirically that international rankings are by and large measures of institutional wealth and can therefore be interpreted only alongside a measure of resources.
The Evaluative Inquiry: a New Approach to Research Evaluation
This article outlines the four principles that shape a new, less standardised approach to research assessment called the "evaluative inquiry": employing versatile methods; shifting the contextual focus away from the individual; practising knowledge diplomacy; and favouring ongoing engagement over open-and-shut reporting.
"Excellence R Us": University Research and the Fetishisation of Excellence
The rhetoric of "excellence" is pervasive across the academy. It is used to refer to research outputs as well as researchers, theory and education, individuals and organizations, from art history to zoology. But does "excellence" actually mean anything?
The Hong Kong Principles for Assessing Researchers: Fostering Research Integrity
The primary goal of research is to advance knowledge. For that knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous and transparent at all stages of design, execution and reporting. The authors developed the Hong Kong Principles (HKP) with a specific focus on the need to drive research improvement by ensuring that researchers are explicitly recognised and rewarded for behaviour that leads to trustworthy research.