Games Academics Play and Their Consequences: How Authorship, H-Index and Journal Impact Factors Are Shaping the Future of Academia
Research is a highly competitive profession where evaluation plays a central role. Yet such evaluations are often done in inappropriate ways that are damaging to individual careers, and to the profession.
Growing evidence suggests that evaluating researchers' careers on the basis of narrow definitions of excellence is restricting diversity in academia, both in the development of its workforce and in its approaches to addressing societal challenges. Recommendations are offered for the Marie Skłodowska-Curie Actions.
Scientific Output Scales with Resources. A Comparison of US and European Universities
A recent study finds a strong correlation between university revenues and both their volume of publications and their (field-normalized) citations. These results demonstrate empirically that international rankings are by and large measures of institutional wealth and can therefore be interpreted only alongside a measure of resources.
The Evaluative Inquiry: a New Approach to Research Evaluation
This article outlines the four principles that give shape to a new, less standardised approach to research assessment called "evaluative inquiry": employing versatile methods; shifting the contextual focus away from the individual; knowledge diplomacy; and favouring ongoing engagement ahead of open-and-shut reporting.
"Excellence R Us": University Research and the Fetishisation of Excellence
The rhetoric of "excellence" is pervasive across the academy. It is used to refer to research outputs as well as researchers, theory and education, individuals and organizations, from art history to zoology. But does "excellence" actually mean anything?
The Hong Kong Principles for Assessing Researchers: Fostering Research Integrity
The primary goal of research is to advance knowledge. For that knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous and transparent at all stages of design, execution and reporting. The authors developed the Hong Kong Principles (HKP) with a specific focus on the need to drive research improvement through ensuring that researchers are explicitly recognized and rewarded for behavior that leads to trustworthy research.
How Journals and Publishers Can Help to Reform Research Assessment
It is well established that administrators and decision-makers use journal prestige and impact factors as a shortcut for assessing research. But recognizing the problem is not enough; the key is identifying specific approaches that publishers can take to address these concerns.
What Words Are Worth: National Science Foundation Grant Abstracts Indicate Award Funding
Can word patterns from grant abstracts predict National Science Foundation (NSF) funding? The data describe a clear relationship between word patterns and funding magnitude: Grant abstracts that are longer than the average abstract, contain fewer common words, and are written with more verbal certainty receive more money.
Is Blinded Review Enough? How Gendered Outcomes Arise Even Under Anonymous Evaluation
Blinded review is an increasingly popular approach to reducing bias and increasing diversity in the selection of people and projects. We explore the impact of blinded review on gender inclusion in research grant proposals submitted to the Gates Foundation from 2008 to 2017. Despite blinded review, female applicants receive significantly lower scores.
Peer Review or Lottery? A Critical Analysis of Two Different Forms of Decision-Making Mechanisms for Allocation of Research Grants
By forming a pool of funding applicants who have minimal qualification levels and then selecting randomly within that pool, funding agencies could avoid biases, disagreement and other limitations of peer review.
This paper provides new evidence on gender bias in teaching evaluations. Although neither students' grades nor their self-study hours are affected by the instructor's gender, women receive systematically lower teaching evaluations than their male colleagues.
Use of the Journal Impact Factor in Academic Review, Promotion, and Tenure Evaluations
The Journal Impact Factor (JIF) was originally designed to help libraries decide which journals to index and purchase for their collections. Over the past few decades, however, it has become a widely relied-upon metric for evaluating research articles based on journal rank. Surveyed faculty often report feeling pressure to publish in journals with high JIFs and cite reliance on the JIF as one problem with current academic evaluation systems.
Predicting the Results of Evaluation Procedures of Academics
The 2010 reform of the Italian university system introduced the National Scientific Habilitation (ASN) as a requirement for applying to permanent professor positions. Since the CVs of the 59,149 candidates and the results of their assessments have been made publicly available, the ASN offers an opportunity to analyse a nation-wide evaluation process.
Counting is Not Enough - How Plain Language Statements Could Improve Research Assessment
Academic hiring and promotion committees and funding bodies often use publication lists as a shortcut for assessing the quality of applications. To avoid bias towards prestigious titles, plain language statements should become a standard feature of academic assessment.
2019 EUA Workshop on Research Assessment in the Transition to Open Science
EUA is organising a series of workshops raising awareness and fostering discussion on research assessment reform. The 2019 edition will focus on research evaluation for the purpose of recruitment and career progression of researchers.