Student Evaluations of Teaching Are Deeply Flawed
New study says student evaluations of teaching are still deeply flawed measures of teaching effectiveness, even when we assume they are unbiased and reliable.
Citations are ubiquitous in evaluating research, but how exactly they relate to what they are thought to measure is unclear. This article investigates the relationships between citations, quality, and impact using a survey with an embedded experiment.
Many of the words used by scientists when reviewing manuscripts, job candidates and grant applications - words such as incremental, novelty, mechanism, descriptive and impact - have lost their meaning.
This article proposes measures and policies which can be adopted by journals and publishers to promote good practices in data sharing.
A study suggests that gender differences in productivity and impact are explained by differences in publishing career lengths and dropout rates. This inequality in academic publishing has important consequences for institutions and policy makers.
Inappropriate scientific practices have been suggested as causes of irreproducibility. This editorial proposes that a lack of raw data, or data fabrication, is another possible cause of irreproducibility.
A new ALLEA report provides key recommendations for making digital data in the humanities sustainable. The document is designed as a practical guide to navigate the shift towards a sustainable data sharing culture.
This paper presents a simple model of the lifecycle of scientific ideas that points to changes in scientist incentives as the cause of scientific stagnation. It explores ways to broaden how scientific productivity is measured and rewarded, involving both academic search engines such as Google Scholar measuring which contributions explore newer ideas and university administrators and funding agencies utilizing these new metrics in research evaluation.
How can science–society relations be better understood, evaluated, and improved by focusing on the organizations that typically interact in a specific domain of research?
Self-archiving is a key aspect of Open Access. Read the infographic to learn more about OA repositories.
Kvarven, Strømland and Johannesson compare meta-analyses to multiple-laboratory replication projects and find that meta-analyses overestimate effect sizes by a factor of almost three. Commonly used methods of adjusting for publication bias do not substantively improve results.
In imposing travel restrictions against China during the current outbreak of the 2019 novel coronavirus disease (COVID-19), many countries are violating the International Health Regulations.
With rightwing demagogues gaining power and public debate getting nastier, many are calling for a return to a more sensible politics. But this approach has its own fatal flaws.
Mauro Ferrari says scientists should get rid of ‘disciplinary goggles’ and combine expertise to create new fields of scientific research.
This article elaborates on the role of research funding organizations in developing a FAIR funding model to support FAIR research data management throughout the funding cycle.
The Health Research Council of New Zealand is the first major government funding agency to use a lottery to allocate research funding, for its Explorer Grant scheme. A recent survey examines how well the measure is accepted.
A new study found that Registered Reports are only about half as likely as standard, non-RR research to confirm their hypotheses.
Scientists, influenced by funding priorities promoted by regional, national and transnational funding bodies, as well as by the academic mania for 'interdisciplinarity', feel compelled to develop a concrete interdisciplinary research topic and organize their research collaboratively.
Altmetrics have become an increasingly ubiquitous part of scholarly communication, although the value they indicate is contested. A recent study examined the relationship of peer review, altmetrics, and bibliometric analyses with societal and academic impact. Drawing on evidence from REF2014 submissions, it argues altmetrics may provide evidence for wider non-academic debates, but correlate poorly with peer review assessments of societal impact.
The publication output of doctoral students is increasingly used in selection processes for funding and employment in their early careers.
In recent years, the full text of papers has become increasingly available electronically, which opens up the possibility of quantitatively investigating citation contexts in more detail.
Originality has self-evident importance for science, but objectively measuring originality poses a formidable challenge.
Papers are getting more rigorous, according to a text-mining analysis of 1.6 million papers, but progress is slower than some researchers would like.
Great strides have been made to encourage researchers to archive the data created by research and to provide the systems needed to store them. It is also recognised that data are meaningless unless their provenance is preserved through appropriate metadata. Alongside this is a pressing need to ensure the quality and archiving of software: both the software that generates data (through simulation, experiment control or data collection) and the software that analyses, modifies and draws value from raw data.
Perspectives on and experiences of research culture, based on a survey of more than 4,000 researchers in the UK and globally.
An independent report published by Information Power aims to improve the transparency of Open Access (OA) prices and services.
This evaluation of Finnish research organisations, research-funding organisations, academic and cultural institutes abroad, and learned societies and academies examines the key indicators chosen to assess performance on openness. The key indicators provide insights into the competences and capacity of the research system in supporting progress towards openness. Barriers and development needs are discussed, with suggestions for improvement.
Citation metrics have value because they aim to make scientific assessment a level playing field, but urgent transparency-based changes are necessary to ensure that the data yields an accurate picture. One problematic area is the handling of self-citations.
Robert van der Vooren conducted a study, commissioned by the National Library of Sweden, on new ways of distributing publisher contract costs among Bibsam Consortium participants. The study is intended to serve as a basis for the Bibsam Consortium to make its cost distribution future-proof for fully open access publishing.