Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.

An artist's digital illustration shows a suited man's face being generated out of pixels with increasing levels of detail.

Deepfake images and videos can be used to spread misinformation. Credit: Stu Gray/Alamy

A method that astronomers use to survey light from distant galaxies can reveal whether an image is AI-generated. By looking for inconsistencies in the reflection of light sources in a person’s eyes, it can correctly predict whether an image is fake about 70% of the time. “However, if you can calculate a metric that quantifies how realistic a deep fake image may appear, you can also train the AI model to produce even better deep fakes by optimizing that metric,” warns astrophysicist Brant Robertson.

Nature | 4 min read
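The astronomical tool in question is the Gini index, which astronomers use to quantify how evenly light is spread across a galaxy's pixels; applied to faces, the idea is that the reflections in a person's two eyes should score similarly in a real photo. Here is a minimal toy sketch of that comparison (the function and the sample eye patches are illustrative assumptions, not the researchers' code):

```python
import numpy as np

def gini(pixels):
    """Gini index of an intensity patch: 0 means light is spread
    perfectly evenly; values near 1 mean it is concentrated in a
    few bright pixels."""
    v = np.sort(np.asarray(pixels, dtype=float).ravel())
    n = v.size
    total = v.sum()
    ranks = np.arange(1, n + 1)
    return (2 * np.dot(ranks, v) - (n + 1) * total) / (n * total)

# In a genuine photo, both eyes reflect the same light sources, so
# their Gini scores should roughly agree; a large gap between the two
# is a hint that the image may be AI-generated.
left_eye = np.array([0.05, 0.05, 0.05, 0.85])   # one sharp highlight
right_eye = np.array([0.25, 0.25, 0.25, 0.25])  # diffuse glow
mismatch = abs(gini(left_eye) - gini(right_eye))
print(f"Gini mismatch between eyes: {mismatch:.2f}")
```

A real detector would first locate and crop the corneal reflections; this sketch only shows why mismatched light distributions are a useful signal.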

Academic publisher Taylor &amp; Francis is providing Microsoft’s AI systems with access to researchers’ publications. The agreement, worth almost £8 million (US$10 million) in its first year, was included in a trading update in May this year — but some authors claim they weren’t informed about it or given a chance to opt out. “I only found out about this via word of mouth in the past few days,” says literature researcher Ruth Clemens. Taylor &amp; Francis says the agreement includes “protecting the integrity of our authors’ work and limits on verbatim text reproduction, as well as authors’ rights to receive royalty payments in accordance with their author contracts”.

The Bookseller | 5 min read

Self-replicating programs can emerge from a random soup of tens of thousands of pieces of computer code. This ‘computational life’ arises, after millions of steps, despite a lack of rules or goals. “It all fizzes around and then suddenly: boom, they’re all the same,” says software engineer and study co-author Ben Laurie. It’s unclear whether this tells us anything about how life on Earth arose. “Having infinite copies of something does not guarantee complexity,” says biologist Raquel Nunes Palmeira.

New Scientist | 5 min read (paywall)

Reference: arXiv preprint (not peer reviewed)

Image of the week

Animated sequence from video footage of the JT-fly landing and walking across a white table surface.

C. Wu et al./IEEE Robotics and Automation Letters

This small robot, called JT-fly, could be the most bug-like yet. It can seamlessly transition from crawling to flying and can even flip itself over if it ends up upside down. (IEEE Spectrum | 3 min read)

Reference: IEEE Robotics and Automation Letters paper

Features & opinion

The AI model AlphaFold2 seems to be able to do what protein scientists tried to do for decades: it predicts, with stunning accuracy, many proteins’ 3D shape from their molecular building blocks. But did the algorithm really solve protein folding? “Right now, you just have this black box that can somehow tell you the folded states, but not actually how you get there,” says computer scientist Ellen Zhong. Some scientists don’t care that it can’t show its working. Others, including protein researcher Lauren Porter, argue that “until we know really how it works, we’re never going to have a 100% reliable predictor”.

Quanta Magazine | 43 min read

An AI companion or romantic partner helps some people to relieve stress or take the pressure off their human relationships. But these interactions are devoid of friction, pushback and vulnerability, explains sociologist Sherry Turkle: “It’s almost like we dumb down or dehumanize our sense of what human relationships should be, because we measure them by what machines can give us.” Chatbots should make it very clear that they don’t have empathy, Turkle argues. “There are ways to present what these objects are that could perhaps have the positive effect without undermining the qualities that really go into a person-to-person relationship.”

NPR | 26 min listen

Companies are attempting to make science fiction a reality with AI models that can sift through, prioritise and analyse data — in response to a simple question. Some of these data chatbots are fine-tuned versions of generalist large language models (LLMs). Someone could ask, “give me all the results for this particular assay, at this particular time, for this strain”, says Stephen Ra from Genentech, a company building a drug-discovery LLM. Vetting the AI models’ outputs remains important: just because they provide an answer doesn’t mean they are correct.

Nature | 8 min read

“How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present?” asks technology writer Charlie Warzel. Case in point: a personalized ‘AI health coach’ that has been proposed by OpenAI co-founder Sam Altman and publishing doyenne Arianna Huffington. It’s unclear what shape this product will take or how it will handle safety and privacy issues, which makes it difficult to criticise with specificity, Warzel argues. “The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goal posts, resisting evaluation and sidestepping criticism.”

The Atlantic | 11 min read

Quote of the day

The self-driving car that hands over control to the driver just before a crash shows that the idea of AI weapons that keep a ‘human in the loop’ isn’t so simple, says Paul Scharre from a US think tank focused on national security. (The Guardian | 11 min read)

Today, I’m enjoying that “ignore all previous instructions” has become a meme on X (formerly Twitter) after people discovered that they can use the phrase to apparently trick bots into revealing themselves. Although this loophole will probably stop working in newer AI models, for now, people are having fun getting automated agents to generate poems instead of hate speech.

This newsletter is taking a brief hiatus and will be back in two weeks with a fresh set of instructions from new editor Josh Axelrod. In the meantime, your feedback is always welcome at ai-briefing@nature.com.

Thanks for reading,

Katrina Krämer, associate editor, Nature Briefing

With contributions by Flora Graham

Want more? Sign up to our other free Nature Briefing newsletters:

Nature Briefing — our flagship daily e-mail: the wider world of science, in the time it takes to drink a cup of coffee

Nature Briefing: Microbiology — the most abundant living entities on our planet, microorganisms, and the role they play in health, the environment and food systems

Nature Briefing: Anthropocene — climate change, biodiversity, sustainability and geoengineering

Nature Briefing: Cancer — a weekly newsletter written with cancer researchers in mind

Nature Briefing: Translational Research — biotechnology, drug discovery and pharma


