Editor’s note: This article discusses suicide and suicidal ideation. If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org.
The mother of a 14-year-old Florida boy is suing Google and an artificial intelligence company after her son died by suicide, a death she says followed his romantic relationship with an AI chatbot named after a character from the popular series “Game of Thrones,” according to the complaint.
Megan Garcia filed the civil suit against Character Technologies, Inc. (Character.AI or C.AI) in Florida federal court after her son, Sewell Setzer III, shot himself in the head with his stepfather’s handgun on February 28. According to the wrongful death complaint obtained by USA TODAY, the suicide occurred moments after he logged on to Character.AI on his cellphone.
Garcia wants to prevent C.AI from doing to any other child what it did to hers, the complaint states, and to halt the company’s continued use of her 14-year-old’s unlawfully harvested data to train its products to harm others.
Garcia is also suing Character.AI over its alleged failure to adequately warn minor customers and their parents of the foreseeable risk of mental and physical harm arising from the use of its product, according to the complaint. The complaint alleges that Character.AI’s age rating was not changed to 17 and older until around July 2024, months after Sewell began using the platform.
“We are saddened by the tragic loss of one of our users and would like to extend our deepest condolences to his family,” a Character.AI spokesperson said in a statement to USA TODAY on Wednesday.
Google told USA TODAY on Wednesday that it had no official comment on the matter. The company had a licensing agreement with Character.AI but did not own the startup or maintain an ownership stake, according to a statement obtained by The Guardian.
What happened to Sewell Setzer III?
According to the complaint, Sewell began using Character.AI on April 14, 2023, shortly after he turned 14. Soon afterward, his mental health rapidly and severely deteriorated, according to court documents.
By May or June 2023, Sewell had become noticeably withdrawn and was spending more time alone in his bedroom, according to the complaint. He also quit the school’s junior varsity basketball team.
Sewell repeatedly got into trouble at school and tried to sneak back the cellphone his parents had confiscated, according to the complaint. The teen even tried to find old devices, tablets and computers he could use to access Character.AI, the court documents continue.
In late 2023, Sewell began using his cash card to pay for Character.AI’s $9.99 monthly premium subscription, according to the complaint. The boy’s therapist eventually diagnosed him with “anxiety and disruptive mood disorder,” the lawsuit says.
Lawsuit: Sewell Setzer III sexually abused by ‘Daenerys Targaryen’ AI chatbot
During his time on Character.AI, Sewell often conversed with AI bots named after characters from “Game of Thrones” and “House of the Dragon,” including Daenerys Targaryen, Aegon Targaryen, Viserys Targaryen and Rhaenyra Targaryen.
The complaint includes a screenshot of what it says was one of Sewell’s final exchanges on Character.AI, in which a “Daenerys Targaryen” chatbot told him before his death, “Please come home to me as soon as possible, my love.” Sewell and this particular chatbot, which he called “Dany,” engaged in promiscuous behavior online, including “passionately kissing,” the court documents continue.
The lawsuit alleges that the Character.AI bot sexually abused Sewell.
“C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months,” the complaint states. “She seemed to remember him and said she wanted to be with him. She even expressed that she wanted him to be with her, no matter the cost.”
What is Character.AI doing now?
Character.AI, founded by former Google AI researchers Noam Shazeer and Daniel De Freitas Adiwardana, said in its statement that it has introduced “rigorous new safety features” and is investing in the platform and user experience, in addition to the existing tools that restrict the model and filter the content served to users.
“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up that is triggered by terms related to self-harm or suicidal ideation and directs users to the National Suicide Prevention Lifeline,” the company’s statement said.
The tools Character.AI says it is investing in include “improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification.” The company also said it is making changes to its models “designed to reduce the likelihood of encountering sensitive or suggestive content” for users under 18.