Journals weigh up double-blind peer review
Anonymity of authors as well as reviewers could level field for women and minorities in science.
The rate of retractions of scientific papers has been growing over the past decade, suggestive to some of a crisis of confidence in science. Can we no longer trust the scientific literature?
Nature, the pre-eminent journal for reporting scientific research, has had to retract two papers it published in January after mistakes were spotted in the figures, some of the methods descriptions were found to be plagiarised and early attempts to replicate the work failed.
In an era of large collaborations, multi-authored papers, and enormous datasets, is there still room for the single creative idea that proves to be a gamechanger?
The Winnower is another open-access online science publishing platform that employs open post-publication peer review; it aims to revolutionize science by breaking down the barriers to scientific communication through cost-effective, transparent publishing for scientists.
Scientists make much of the fact that their work is scrutinised anonymously by some of their peers before it is published. This "peer review" is supposed to spot mistakes and thus keep the whole process honest.
Most academic papers today are published only after some academic peers have had a chance to review the merits and limitations of the work. This seems like a good idea, but there is a growing movement that wants to respond to such a review process as Albert Einstein once did.
Academics have internalised research assessment to such a degree that the effects may be irreversible.
Scientists are asked to comment on static, final, published versions of papers, with virtually no potential to improve the articles. This is the state of post-publication peer review today.
Comment on a recent Nature blog entry by Richard Van Noorden
Felipe Fernández-Armesto bristles at the stifling effect of peer review.
A new generation of scientists has grown up commenting on their friends' pictures, silly posts on Facebook and favorite YouTube videos. Will this habit carry over into their scientific publishing?
Science is now able to self-correct instantly. Post-publication peer review is here to stay.
A platform that compares research journals' performance, aiming to make the peer review process more efficient.
Scientific publishing is under the spotlight at the moment. Is it time for change?
Peer review, many boffins argue, channelling Churchill, is the worst way to ensure quality of research, except all the others. The system, which relies on papers being vetted by anonymous experts prior to publication, has underpinned scientific literature for decades.
Controversial model points to benefits of more opinionated reviews.
Abstract: A semi-supervised model of peer review is introduced that is intended to overcome the bias and incompleteness of traditional peer review. Traditional approaches are reliant on human biases, while consensus decision-making is constrained by sparse information. Here, the architecture for one potential improvement (a semi-supervised, human-assisted classifier) to the traditional approach will be introduced and evaluated.
Report quality is significantly higher under the open peer review model on questions relating to comments on the methods and study design, evidence supplied to substantiate comments, and constructiveness.
Research repository launches comment platform for post-publication peer review.
Peer review is one of the oldest and most respected instruments of quality control in science and research. Peer review means that a paper is evaluated by a number of experts on the topic of the article (the peers). The criteria may vary, but most of the time they include methodological and technical soundness, scientific relevance, and presentation.
Following Nature's Future of Publishing special issue this spring, Science has just published a similar series of articles. Needless to say, there is a definite ideological bent to the articles included in both, along with more misleading information about open access.
Software experiment raises prospect of extra peer review.
At the International Congress on Peer Review and Biomedical Publication, efforts to explore the scientific literature have shifted away from peer review and into other areas, such as bias and authorship. With a dearth of available data and funding, large systematic studies of how peer review does and doesn't work aren't easy to get off the ground.