Categories: NATURE

What should we do if AI becomes conscious? These scientists say it’s time for a plan


Some researchers worry that if AI systems become conscious and people neglect or treat them poorly, they might suffer. Credit: Pol Cartie/Sipa/Alamy

The rapid evolution of artificial intelligence (AI) has brought to the fore ethical questions that were once confined to the realms of science fiction: if AI systems could one day ‘think’ like humans, for example, would they also be able to have subjective experiences, as humans do? Would they experience suffering, and, if so, would humanity be equipped to care for them properly?

A group of philosophers and computer scientists are arguing that AI welfare should be taken seriously. In a report posted last month on the preprint server arXiv1, ahead of peer review, they call for AI companies to not only assess their systems for evidence of consciousness and the capacity to make autonomous decisions, but also to put in place policies for how to treat the systems if these scenarios become reality.

They point out that failing to recognize that an AI system has become conscious could lead people to neglect it, harming it or causing it to suffer.

Some think that, at this stage, the idea that there is a need for AI welfare is laughable. Others are sceptical, but say it doesn’t hurt to start planning. Among them is Anil Seth, a consciousness researcher at the University of Sussex in Brighton, UK. “These scenarios might seem outlandish, and it is true that conscious AI may be very far away and might not even be possible. But the implications of its emergence are sufficiently tectonic that we mustn’t ignore the possibility,” he wrote last year in the science magazine Nautilus. “The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.”

The stakes are getting higher as we become increasingly dependent on these technologies, says Jonathan Mason, a mathematician based in Oxford, UK, who was not involved in producing the report. Mason argues that developing methods for assessing AI systems for consciousness should be a priority. “It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about — that we didn’t even realize that it had perception,” he says.

People might also be harmed if AI systems aren’t tested properly for consciousness, says Jeff Sebo, a philosopher at New York University in New York City and a co-author of the report. If we wrongly assume a system is conscious, he says, welfare funding might be funnelled towards its care, and therefore taken away from people or animals that need it, or “it could lead you to constrain efforts to make AI safe or beneficial for humans”.

A turning point?

The report contends that AI welfare is at a “transitional moment”. One of its authors, Kyle Fish, was recently hired as an AI-welfare researcher by the AI firm Anthropic, based in San Francisco, California. This is the first position of its kind at a top AI firm, according to the report’s authors. Anthropic also helped to fund initial research that led to the report. “There is a shift happening because there are now people at leading AI companies who take AI consciousness and agency and moral significance seriously,” Sebo says.

Nature contacted four leading AI firms to ask about their plans for AI welfare. Three — Anthropic, Google and Microsoft — declined to comment, and OpenAI, also based in San Francisco, did not respond.

Some are yet to be convinced that AI consciousness should be a priority. In September, the United Nations High-level Advisory Body on Artificial Intelligence issued a report on how the world should govern AI technology. The document did not address the subject of AI consciousness, despite a call from a group of scientists for the body to support research assessing consciousness in machines.

“This speaks to a deeper challenge or difficulty with communicating this issue to the wider community,” Mason says.

Operating under uncertainty

Although it remains unclear whether AI systems will ever achieve consciousness — a state that’s difficult to assess even in humans and animals — uncertainty shouldn’t discourage efforts to develop protocols for evaluating the situation, Sebo says. As a preliminary step, a group of scientists last year published a checklist of criteria that could help to identify systems with a high chance of being conscious. “Even an imperfect initial framework can still be better than the status quo,” Sebo says.

Nevertheless, the authors of the latest report say that the discussion about AI welfare should not come at the expense of other important issues, such as making AI development safe for people. “You can commit to working to make AI systems safe and beneficial for all,” the authors say in the report. “Including humans, animals, and — if and when the time comes — AI systems.”


