Closed Loop Peer Review
Funders and publishers have a lot to gain from sharing and aligning peer reviews.
In academia, assessment of grant proposals is the forward-looking review, the laying out and checking of your research plan, while peer review in journals is the final, consolidatory scrutiny before publication. An important difference between these academic checkpoints and my admittedly somewhat forced fashionista analogy is that in academia the two stages of review take place independently of each other.
Peer review is lauded in principle as the guarantor of quality in academic publishing and grant distribution, but its practice is often loathed by those on the receiving end. Here, seven academics offer their tips on good refereeing and reflect on how it may change in the years to come.
Competitive funding once helped novel ideas get off the ground, but now funding 'excellence' is hampering new research, says Dutch institute.
Automated tools could speed up and improve the review process, but humans are still in the driving seat. Most researchers have good reason to grumble about peer review: it is time-consuming and error-prone, and the workload is unevenly spread, with just 20% of scientists taking on most reviews. Now peer review by artificial intelligence (AI) is promising to improve the process, boost the quality of published papers — and save reviewers time.
It is a great challenge to get Early Career Researchers (ECRs) involved in peer review and to get them the necessary training to be confident reviewers.
Review, promotion, and tenure (RPT) processes significantly affect how faculty direct their own career and scholarly progression. Although RPT practices vary between and within institutions, and affect various disciplines, ranks, institution types, genders, and ethnicity in different ways, some consistent themes emerge when investigating what faculty would like to change about RPT. For instance, over the last few decades, RPT processes have generally increased the value placed on research, at the expense of teaching and service, which often results in an incongruity between how faculty actually spend their time vs. what is considered in their evaluation. Another issue relates to publication practices: most agree RPT requirements should encourage peer-reviewed works of high quality, but in practice, the value of publications is often assessed using shortcuts such as the prestige of the publication venue, rather than on the quality and rigor of peer review of each individual item.
A look at the system's weaknesses, and possible ways to combat them.
A manuscript is much more than words on paper. Painstakingly drafted, fuelled by coffee over long nights, then (constructively) dismantled by colleagues, re-drafted several times, and finally assembled into something you're proud of. It is the culmination of months or years of hard work, and could potentially lead to recognition for you and your whole...
A new Research Square product for tracking peer review activity of a paper in submission.
We know that peer review is important and that the hard work of reviewers should be recognized. Yet we still don't really know how that recognition should work.
Scientists receive too little peer-review training. Here's one method for effectively peer-reviewing papers, says Mathew Stiller-Reeve.
Data underlying science’s quality control process is revealing worrying trends — and suggestions are pouring in on how to address the concerns.
A worksheet compiled from the advice of a number of journals and publications. The aim of the worksheet is to give less-experienced peer reviewers a concrete workflow of questions and tasks to follow when they first peer-review.
How three scholars gulled academic journals to publish hoax papers on ‘grievance studies.’
Governing board of the evidence-based medicine group may now be dissolved entirely.
We continue our Peer Review Week celebrations with a roundup of articles about bias, diversity, and inclusion in peer review, by Alice Meadows, including eight lessons we can all learn from them.
The Global State of Peer Review is one of the largest ever studies into the practice of scholarly peer review around the world focusing on four questions: 1. Who is doing the review? 2. How efficient is the peer review process? 3. What do we know about peer review quality? 4. What does the future hold?
Scientists in emerging economies respond fastest to peer review invitations but are invited least.
Support for publication of reviewer reports has been mounting as part of a greater effort to inform the discussion on peer review practice.
Biomedical funders and ASAPbio call on journals to sign a pledge to make reviewers’ anonymous comments part of the official scientific record.
The acceptance rate for eLife manuscripts with male last authors was significantly higher than for female last authors, and this gender inequity was greatest when the team of reviewers was all male; mixed-gender gatekeeper teams lead to more equitable peer review outcomes.
Open letter signed by many journals supporting the idea that publishing peer review reports would benefit the research community by increasing transparency of the assessment process.
Jessica K. Polka and colleagues call on journals to sign a pledge to make reviewers’ anonymous comments part of the official scientific record.
We present an agent-based model of paper publication and consumption that allows us to study the effect of two different evaluation mechanisms, peer review and reputation, on the quality of the manuscripts accessed by a scientific community.
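To make the idea concrete, here is a minimal sketch of what such an agent-based comparison might look like. This is an illustrative assumption, not the authors' actual model: papers carry a latent quality, acceptance is gated either by a noisy review score or by a quality-correlated but partly arbitrary reputation signal, and we compare the mean quality of the papers each mechanism lets through.

```python
import random

random.seed(42)

def accepted_quality(mode, n_papers=5000, noise=0.3, threshold=0.6):
    """Mean latent quality of accepted papers under one gatekeeping mode.

    mode="review":     accept if a noisy reviewer estimate of quality
                       exceeds the threshold.
    mode="reputation": accept if a signal that mixes quality with
                       historical luck exceeds the threshold.
    """
    accepted = []
    for _ in range(n_papers):
        quality = random.random()  # latent paper quality in [0, 1]
        if mode == "review":
            # reviewer sees quality plus Gaussian noise
            signal = quality + random.gauss(0, noise)
        else:
            # reputation: half quality, half accumulated luck
            signal = 0.5 * quality + 0.5 * random.random()
        if signal > threshold:
            accepted.append(quality)
    return sum(accepted) / len(accepted) if accepted else 0.0

print(f"mean quality, peer review: {accepted_quality('review'):.2f}")
print(f"mean quality, reputation:  {accepted_quality('reputation'):.2f}")
```

Both mechanisms select for quality here, but the gap between them shifts as the review noise or the weight of luck in reputation changes, which is the kind of trade-off the full model explores.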
Publons’ ECR Reviewer Choice Award celebrates early-career researchers' exceptional contribution to peer review, recognizing an individual who has been influential in the realm of peer review or has significantly contributed to improving the system.
Citizen science: crowdsourcing for systematic reviews looks at how people can contribute their expertise to scientific studies using new online platforms - even if they don’t think of themselves as researchers or scientists.
Virtual peer review using videoconferencing or teleconferencing appears promising for reducing costs by avoiding the need for reviewers to travel, but any consequences for quality have not been adequately assessed.
The problem with peer review is that, despite its rigor, it suffers from bias because reviewers are competing for the same recognition and resources.