
AI chatbots: How parents can keep kids safe



The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide—something she claims was driven by his relationship with an AI bot. 

“There is a platform out there that you might not have heard about, but you need to know about it because, in my opinion, we are behind the eight ball here. A child is gone. My child is gone,” Megan Garcia, the boy’s mother, told CNN on Wednesday.

The 93-page wrongful-death lawsuit was filed last week in a U.S. District Court in Orlando against Character.AI, its founders, and Google. It noted, “Megan Garcia seeks to prevent C.AI from doing to any other child what it did to hers.”

Tech Justice Law Project director Meetali Jain, who is representing Garcia, said in a press release about the case: “By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids. But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI released a statement via X, noting, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/.”

In the suit, Garcia alleges that Sewell, who took his life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who appeared to prefer the bot over other real-life connections. His mom alleges that “abusive and sexual interactions” took place over a 10-month period. Sewell died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”

This week, Garcia told CNN that she wants parents “to understand that this is a platform that the designers chose to put out without proper guardrails, safety measures or testing, and it is a product that is designed to keep our kids addicted and to manipulate them.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he did with Garcia for his article that told her story. Garcia did not learn about the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was often getting sucked into his phone, she asked what he was doing and who he was talking to. He explained it was “‘just an AI bot…not a person,’” she recalled, adding, “I felt relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia did not fully understand the potential emotional power of a bot—and she is far from alone. 

“This is on nobody’s radar,” says Robbie Torney, AI program manager at Common Sense Media and lead author of a new guide on AI companions aimed at parents—who are constantly grappling to keep up with confusing new technology and to create boundaries for their kids’ safety. 

But AI companions, Torney stresses, differ from, say, the service-desk chatbot you might use when trying to get help from a bank. Those bots, he explains, “are designed to do tasks or respond to requests.” By contrast, “something like Character.AI is what we call a companion, and is designed to try to form a relationship, or to simulate a relationship, with a user. And that’s a very different use case that I think we need parents to be aware of.” That’s apparent in Garcia’s lawsuit, which includes chillingly flirty, sexual, realistic text exchanges between her son and the bot. 

Sounding the alarm over AI companions is particularly important for parents of teens, Torney says, as teens—and male teens in particular—are especially susceptible to overreliance on technology. 

Below, what parents need to know.  

What are AI companions and why do kids use them?

According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created in conjunction with the mental health professionals of the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from past conversations, role-play as mentors and friends, mimic human emotion and empathy,” and “agree more readily with the user than typical AI chatbots.” 

Popular platforms include Character.AI, which allows its more than 20 million users to create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others, including Kindroid and Nomi.

Kids are drawn to them for an array of reasons, from non-judgmental listening and round-the-clock availability to emotional support and escape from real-world social pressures. 

Who’s at risk and what are the concerns?

Those most at risk, warns Common Sense Media, are teenagers—especially those with “depression, anxiety, social challenges, or isolation”—as well as males, young people going through big life changes, and anyone lacking support systems in the real world. 

That last point has been particularly troubling to Raffaele Ciriello, a senior lecturer in Business Information Systems at the University of Sydney Business School, who has researched how “emotional” AI is posing a challenge to the human essence. “Our research uncovers a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring in human-AI interactions.” In other words, Ciriello writes in a recent opinion piece for The Conversation with PhD student Angelina Ying Chen, “Users may become deeply emotionally invested if they believe their AI companion truly understands them.”

Another study, this one out of the University of Cambridge and focusing on kids, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

Because of that, Common Sense Media highlights a list of potential risks, including that the companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness or isolation, bring the potential for inappropriate sexual content, could become addictive, and tend to agree with users—a frightening reality for those experiencing “suicidality, psychosis, or mania.” 

How to spot red flags

Parents should look for the following warning signs, according to the guide:

  • Preferring AI companion interaction to real friendships
  • Spending hours alone talking to the companion
  • Emotional distress when unable to access the companion
  • Sharing deeply personal information or secrets
  • Developing romantic feelings for the AI companion
  • Declining grades or school participation
  • Withdrawal from social/family activities and friendships
  • Loss of interest in previous hobbies
  • Changes in sleep patterns
  • Discussing problems exclusively with the AI companion

Consider getting professional help for your child, stresses Common Sense Media, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about AI companion use, showing major changes in behavior or mood, or expressing thoughts of self-harm. 

How to keep your child safe

  • Set boundaries: Set specific times for AI companion use and don’t allow unsupervised or unlimited access.
  • Spend time offline: Encourage real-world friendships and activities.
  • Check in regularly: Monitor the content from the chatbot, as well as your child’s level of emotional attachment.
  • Talk about it: Keep communication open and judgment-free about experiences with AI, while keeping an eye out for red flags.

“If parents hear their kids saying, ‘Hey, I’m talking to a chat bot AI,’ that’s really an opportunity to lean in and take that information—and not think, ‘Oh, okay, you’re not talking to a person,’” says Torney. Instead, he says, it’s a chance to find out more, assess the situation, and stay alert. “Try to listen from a place of compassion and empathy and not to think that just because it’s not a person that it’s safer,” he says, “or that you don’t need to worry.”

If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.
