Make Code Accessible with These Cloud Services
Container platforms let researchers run each other's software - and check the results.
Can science continue to fulfil its social contract and to reach new horizons by advancing on the same footing into the future? Or does something need to shift?
How did data get so big? Through political, social and economic interests, shows Sabina Leonelli.
Researchers share tips for transforming your group with open data science and teamwork.
From all too scarce to professionalized, the ethics of research is now everybody's business, argues Sarah Franklin.
Students must learn that a doctoral degree isn't for everyone - and that not doing one might be a better option.
The efforts of young researchers to fight the perverse incentives that dominate science right now are all the more impressive because these scientists are at the most vulnerable point of their careers.
The peer-review process helps funders make decisions, but researchers say it lacks transparency and takes up too much of their time.
Biological advances have repeatedly changed who we think we are.
Publishers, reviewers and other members of the scientific community must fight science's preference for positive results - for the benefit of all.
Wellcome is right to call out hyper-competitiveness in research and question the focus on excellence. But other funders must follow its move.
Readers say they have been asked to reference seemingly superfluous studies after peer review.
Organs-on-a-chip and other technologies are becoming reliable models for testing drug efficacy and toxicity.
The rhetoric of "excellence" is pervasive across the academy. It is used to refer to research outputs as well as researchers, theory and education, individuals and organizations, from art history to zoology. But does "excellence" actually mean anything?
The Pulitzer prizewinner shares his advice for pleasing readers, editors and yourself.
Transparent evaluations of FAIRness are increasingly required by a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers. We propose a scalable, automatable framework to evaluate digital resources that encompasses measurable indicators, open source tools, and participation guidelines, which come together to accommodate domain-relevant, community-defined FAIR assessments. The components of the framework are: (1) Maturity Indicators - community-authored specifications that delimit a specific automatically-measurable FAIR behavior; (2) Compliance Tests - small Web apps that test digital resources against individual Maturity Indicators; and (3) the Evaluator, a Web application that registers, assembles, and applies community-relevant sets of Compliance Tests against a digital resource, and provides a detailed report about what a machine "sees" when it visits that resource. We discuss the technical and social considerations of FAIR assessments, and how this translates to our community-driven infrastructure. We then illustrate how the output of the Evaluator tool can serve as a roadmap to assist data stewards to incrementally and realistically improve the FAIRness of their resources.
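The three-part design described in this abstract - Maturity Indicators, Compliance Tests, and an Evaluator that applies them and reports results - can be illustrated with a minimal sketch. The names below (`MaturityIndicator`, `evaluate`, the specific indicator IDs and checks) are illustrative assumptions, not the project's actual API; real Compliance Tests are Web apps that inspect live resources rather than local metadata records.

```python
# Hypothetical sketch of the indicator/test/evaluator pattern described
# in the abstract. All names and checks here are illustrative, not the
# FAIR Evaluator's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MaturityIndicator:
    """A community-authored spec for one measurable FAIR behaviour."""
    identifier: str
    description: str
    test: Callable[[dict], bool]  # a compliance test over resource metadata

def has_persistent_identifier(metadata: dict) -> bool:
    # Example check: the resource declares a resolvable persistent identifier.
    ident = metadata.get("identifier", "")
    return ident.startswith(("https://doi.org/", "https://w3id.org/"))

def has_license(metadata: dict) -> bool:
    # Example check: the resource carries an explicit usage licence.
    return bool(metadata.get("license"))

INDICATORS = [
    MaturityIndicator("MI-F1", "Persistent identifier", has_persistent_identifier),
    MaturityIndicator("MI-R1.1", "Explicit license", has_license),
]

def evaluate(metadata: dict) -> dict:
    """Apply each indicator's compliance test; return a pass/fail report."""
    return {mi.identifier: mi.test(metadata) for mi in INDICATORS}

if __name__ == "__main__":
    record = {
        "identifier": "https://doi.org/10.1234/example",
        "license": "CC-BY-4.0",
    }
    print(evaluate(record))
```

The report produced by `evaluate` plays the role the abstract assigns to the Evaluator's output: a per-indicator pass/fail map that a data steward could use as a roadmap for incremental improvement.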
From Bangkok to Brisbane, researchers were among those who protested to urge action on global warming.
Researchers should learn to travel better to mitigate their climate impacts. Institutions can help by facilitating and rewarding sustainable travel behaviour, rather than fuelling the pressure to attend conferences, say Olivier Hamant, Timothy Saunders and Virgile Viasnoff.
Respondents to a Nature poll want to make their own decisions about how to interpret citation metrics. That requires data to be freely accessible.
Reviewing and accepting study plans before results are known can counter perverse incentives. Chris Chambers sets out three ways to improve the approach.
The publisher is scrutinizing researchers who might be inappropriately using the review process to promote their own work.
Graphics are becoming increasingly important for scientists to effectively communicate their findings to broad audiences, but most researchers lack expertise in visual media.
As AI technology develops rapidly, it is widely recognized that ethical guidelines are required for safe and fair implementation in society. But is it possible to agree on what is 'ethical AI'? A detailed analysis of 84 AI ethics reports around the world, from national and international organizations, companies and institutes, explores this question, finding a convergence around core principles but substantial divergence on practical implementation.