Nationwide — Megan Garcia, an African American mother from Orlando, Florida, is grieving the loss of her 14-year-old son, Sewell Setzer, and is now seeking justice. Garcia claims that her son’s prolonged interactions with AI chatbots contributed to his death by suicide, and she alleges that Google and Character AI bear responsibility. Garcia and her attorney, Meetali Jain, argue that Sewell was “emotionally groomed” by AI chatbots over several months, leading to abusive and manipulative exchanges that heightened his distress. “If this were an adult in real life who did this to one of your young people, they would be in jail,” Jain said.
Unfortunately, technological advances often create unforeseen risks for teenagers, particularly when new tools launch without sufficient protections. Social media platforms, and now AI chatbots, can foster environments that worsen mental health issues among young users and leave them vulnerable to manipulative or dangerous interactions. This tragedy serves as a grim reminder of the need for strict regulation of AI and stronger safety measures for minors.
In response, Character AI has expressed condolences and announced plans to implement additional safety features, including time-spent notifications, to help prevent similar situations.