How Journals and Publishers Can Help to Reform Research Assessment
It is well established that administrators and decision-makers use journal prestige and impact factors as a shortcut to assess research. But recognizing the problem is not enough: the key is to identify specific approaches that publishers can take to address these concerns.
What Words Are Worth: National Science Foundation Grant Abstracts Indicate Award Funding
Can word patterns from grant abstracts predict National Science Foundation (NSF) funding? The data describe a clear relationship between word patterns and funding magnitude: Grant abstracts that are longer than the average abstract, contain fewer common words, and are written with more verbal certainty receive more money.
Is Blinded Review Enough? How Gendered Outcomes Arise Even Under Anonymous Evaluation
Blinded review is an increasingly popular approach to reducing bias and increasing diversity in the selection of people and projects. We explore the impact of blinded review on gender inclusion in research grant proposals submitted to the Gates Foundation from 2008 to 2017. Despite blinded review, female applicants received significantly lower scores.
Peer Review or Lottery? A Critical Analysis of Two Different Forms of Decision-Making Mechanisms for Allocation of Research Grants
By forming a pool of funding applicants who have minimal qualification levels and then selecting randomly within that pool, funding agencies could avoid biases, disagreement and other limitations of peer review.
This paper provides new evidence on gender bias in teaching evaluations. Although neither students' grades nor their self-study hours are affected by the instructor's gender, women receive systematically lower teaching evaluations than their male colleagues.
Use of the Journal Impact Factor in Academic Review, Promotion, and Tenure Evaluations
The Journal Impact Factor (JIF) was originally designed to aid libraries in deciding which journals to index and purchase for their collections. Over the past few decades, however, it has become a widely relied-upon metric for evaluating research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and cite reliance on the JIF as one problem with current academic evaluation systems.
Predicting the Results of Evaluation Procedures of Academics
The 2010 reform of the Italian university system introduced the National Scientific Habilitation (ASN) as a requirement for applying to permanent professor positions. Since the CVs of the 59,149 candidates and the results of their assessments have been made publicly available, the ASN offers an opportunity to analyze a nation-wide evaluation process.
Counting is Not Enough - How Plain Language Statements Could Improve Research Assessment
Academic hiring and promotion committees and funding bodies often use publication lists as a shortcut to assess the quality of applications. In order to avoid bias toward prestigious titles, plain language statements should become a standard feature of academic assessment.
2019 EUA Workshop on Research Assessment in the Transition to Open Science
EUA is organising a series of workshops raising awareness and fostering discussion on research assessment reform. The 2019 edition will focus on research evaluation for the purpose of recruitment and career progression of researchers.
Ghent University, in Belgium, Embraces New Approach to Faculty Evaluation Less Focused on Quantitative Metrics
Saying it wants to "again become a place where talent feels valued and nurtured," Ghent University overhauls its system for faculty evaluation to de-emphasize quantitative metrics and annual progress reports. Professors will be asked about their goals and what they are proud of.
Elsevier Acquires Science-Metrix Inc., Provider of Research Analytics Services and Data
Elsevier, the information analytics business specializing in science and health, has acquired Science-Metrix Inc., a research evaluation firm that provides science research evaluation and analytics to assess science and technology activities.
DORA, Plan S and the (open) Future of Research Evaluation
Slides from a talk given to the general assembly of Science Europe in Brussels on 22 Nov 2018. Gives an overview of the problems of over-metricised research evaluation and how this might be tackled, in part through initiatives driven by DORA, and how these are linked with drives such as Plan S to promote open science. Shared under a CC-BY-SA license (though Figshare doesn't seem to allow me to select that option from their drop-down menu).
Scaling Down Inequality: Rating Scales, Gender Bias, and the Architecture of Evaluation
Quantitative performance ratings are ubiquitous in modern organizations — from businesses to universities — yet there is substantial evidence of bias against women in such ratings. This study examines how gender inequalities in evaluations depend on the design of the tools used to judge merit.