Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.
The release of OpenAI’s sophisticated video-generating tool Sora has been met with a mix of trepidation and excitement. Some observers worry that the technology could lead to a barrage of realistic-looking misinformation. “We’re going to have to learn to evaluate the content we see in ways we haven’t in the past,” says digital-culture researcher Tracy Harwood. Others see positive potential: such systems could help to simplify and communicate complex scientific findings, and speed up the process of illustrating papers, conference posters or presentations. But in some cases, such as reconstructions of extinct lifeforms, AI illustrations could mislead both scientists and the public. For now, many scientific journals prohibit AI-generated imagery in papers.
Nature | 5 min read & Nature | 6 min read
Researchers have laid out safety guidelines for AI-powered protein design to head off the possibility of the technology being used to develop bioweapons. The voluntary effort calls for the biodesign community to police itself and improve screening of DNA synthesis, a key step in translating proteins into actual molecules. “It’s a good start,” says global health policy specialist Mark Dybul. But he also thinks that “we need government action and rules, and not just voluntary guidance”.
Occasionally erasing part of an AI model’s ‘memory’ seems to make it better at adapting to new languages, particularly those for which not much data is available or that are linguistically distant from English. Researchers periodically reset a neural network’s embedding layer during the initial training in English. When the periodic-forgetting system was retrained on a language with a small dataset, its accuracy score dropped by only 22 points, compared with almost 33 for a standard model. “An apple is something sweet and juicy, instead of just a word,” says AI researcher and study co-author Yihong Chen, who suggests that periodic forgetting nudges the network towards this kind of concept-level understanding rather than rote memorization of individual words.
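The core training trick is simple to picture. Here is a minimal, hypothetical sketch in NumPy: a toy “model” whose embedding table is re-randomized every fixed number of steps while the rest of the network keeps its learned weights. The layer sizes, reset interval and loop are illustrative assumptions, not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 100, 16     # toy vocabulary and embedding sizes (illustrative)
RESET_EVERY = 50         # hypothetical reset interval, in training steps

# "Model": an embedding table plus body weights we pretend to train.
embeddings = rng.normal(scale=0.1, size=(VOCAB, DIM))
body_weights = rng.normal(scale=0.1, size=(DIM, DIM))

resets = 0
for step in range(1, 201):
    # ... an ordinary gradient update on embeddings and body_weights
    #     would happen here ...
    if step % RESET_EVERY == 0:
        # Periodic forgetting: re-randomize ONLY the embedding layer.
        # The body of the network keeps what it has learned, so it must
        # rely on language-general structure rather than specific tokens.
        embeddings = rng.normal(scale=0.1, size=(VOCAB, DIM))
        resets += 1

print(resets)
```

To adapt such a model to a new language, one would swap in a fresh embedding table for the new vocabulary and fine-tune, which is exactly the situation the repeated resets rehearse during pretraining.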
Reference: arXiv preprint (not peer reviewed)
Image of the week
This robot snail can heal itself when it’s damaged. The electrically conductive gel connecting the motor to the battery was designed with specific chemical bonds that knit the material back together after it is cut. (Nature | 12 min read)

This article is part of Nature Outlook: Robotics and artificial intelligence, an editorially independent supplement produced with financial support from FII Institute.
Features & opinion
Researchers should be careful about projecting ‘superhuman’ abilities onto AI systems, warn anthropologist Lisa Messeri and cognitive scientist Molly Crockett. They characterized four mindsets — AI as oracle, AI as arbiter, AI as quant and AI as surrogate — after reviewing 100 papers, preprints, conference proceedings and books. Scientists should consider these cognitive ‘traps’ before embedding AI tools in their research.
Read more: Why scientists trust AI too much — and what to do about it (Nature editorial | 6 min read)
A well-structured prompt increases the likelihood of accurate text prediction in large language models and minimizes the compounding effect of errors, says psychologist Zhicheng Lin. Here are his tips for prompt engineering:
• Break down tasks into sequential components
• Provide examples and relevant context as input
• Be explicit in your instructions
• Ask for multiple options
• Instruct the model to roleplay, for example as a writing coach or a sentient cheesecake
• Specify the response format, such as reading level and tone
• Experiment a lot
Nature Human Behaviour | 13 min read
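The tips above can be combined into a single structured prompt. Below is a hedged sketch in Python that assembles one for a writing-feedback task; the wording, template and variable names are illustrative assumptions, not taken from Lin’s article.

```python
# Build a structured prompt applying the tips: roleplay, explicit
# instructions, context as input, sequential sub-tasks, multiple
# options and a specified response format.
role = "You are a writing coach for scientific abstracts."
context = ("Context: the abstract below is for a general-science journal.\n"
           "<abstract text here>")
steps = [
    "1. List the three weakest sentences.",
    "2. Explain, in one line each, why they are weak.",
    "3. Rewrite each one, giving two alternative rewrites per sentence.",
]
format_spec = ("Respond as a numbered list, in plain English "
               "at a general-reader level and a constructive tone.")

# Separate the sections with blank lines so the model can parse them.
prompt = "\n\n".join([role, context, "\n".join(steps), format_spec])
print(prompt)
```

Running the full task as one prompt like this is still a judgment call: for longer documents, sending the numbered steps as separate, sequential messages keeps errors from compounding across sub-tasks.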
The ageing US electricity grid is struggling to keep up with skyrocketing demand from green-technology factories and the data centres that crunch the numbers for cryptocurrency, cloud computing and AI. “How were the projections that far off?” asks Jason Shaw from Georgia’s electricity regulator. “This has created a challenge like we have never seen before.” The power crunch is already delaying the closure of coal-fired plants, and it remains unclear who should pay for new power infrastructure. Some data-centre developers hope that off-grid small nuclear or fusion plants will eventually solve the problem.