OpenAI's Big Lesson for Science Policy
The incredible success of Large Language Models like ChatGPT is both a scientific breakthrough and a boon for future scientific discovery. What is OpenAI's role in this?
Large language models seem startlingly intelligent. But what's really happening under the hood?
Using Twitter data and natural language processing algorithms, researchers created a new, accurate prediction model for depression and anxiety.
From a CUP Announcement: The rules are set out in the first AI ethics policy from Cambridge University Press and apply to research papers, books and other scholarly works. They include a ban on AI being treated as an 'author' of academic papers and books we publish.
Have you heard people talking about how amazing these new AI chat bots are? About how much immaculate text they can generate in a split second? It's time to talk about what they can't do.
ChatGPT might not yet give us sparkling prose. But it can free scientists up to focus on more-stimulating writing tasks.
Conversational AI is a game-changer for science. Here's how to respond.
As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.
At least four articles credit the AI tool as a co-author, as publishers scramble to regulate its use.
Ahead of the field, a peer-reviewed paper titled 'Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?' was recently published by Siobhan O'Connor, Senior Lecturer at the School of Health Sciences and an Adjunct Associate Professor at Western University.
An artificially intelligent first author presents many ethical questions—and could upend the publishing process.
The EU and US have set out a joint roadmap to find common ways to define and evaluate artificial intelligence (AI), though critics say they are still not going far enough to make sure AI protects democracy and human rights.
Anna Severin explains how her team used machine learning to try to assess the quality of thousands of reviewers' reports.
The evaluation of the Ethics and Governance of AI Initiative is presented in this report.
What makes pre-trained AI models so impressive, and potentially harmful.
The engineer says the system has the perception of, and ability to express thoughts and feelings equivalent to, a human child.
Developers of artificial intelligence must learn to collaborate with social scientists and the people affected by its applications.
The machine learning outfit's foray into pharmaceuticals could be very useful, but its grand claims should be taken with a pinch of salt.
The ELN's Sylvia Mishra writes that AI-generated fake videos, known as deepfakes, threaten to exacerbate chaos in conflict, lower nuclear thresholds and complicate nuclear weapons decision-making. The uncontrolled use and spread of this technology requires urgent attention from the nuclear policy community.
If a computer is to match the human brain by 2052, we need better ways to build it.