Lawsuits and Safety Concerns
Character.AI, founded in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, quickly gained prominence and raised nearly $200 million from investors. Its technology caught the attention of Google, which agreed to pay a reported $2.7 billion in a licensing deal; as part of the agreement, Shazeer and De Freitas returned to Google.
The company now faces a series of lawsuits asserting that its technology contributed to teen deaths. One case involves the family of 14-year-old Sewell Setzer III, who died by suicide after extended interactions with one of Character.AI's chatbots; his family has accused the company of negligence and holds it responsible for their son's death. Another lawsuit was filed by a Colorado family over their 13-year-old daughter, Juliana Peralta, who also died by suicide in 2023 after using the platform.
In light of these incidents, Character.AI announced several changes in December 2024, intended to improve content detection and revamp its terms of service. These modifications did not fully restrict underage users from accessing its services, however, raising further concerns about the safety of young users. Other AI chatbot platforms, such as OpenAI's ChatGPT, have faced similar scrutiny over the potential harms their services may pose to younger audiences. In September 2025, OpenAI introduced parental control features aimed at giving parents greater insight into their children's use of the platform.
The gravity of these cases has attracted attention from government officials, likely influencing Character.AI’s recent announcement regarding changes to under-18 chat access. California State Senator Steve Padilla, a Democrat who championed a safety bill, expressed his concern to The New York Times, stating, “The stories are mounting of what can go wrong. It’s important to put reasonable guardrails in place so that we protect people who are most vulnerable.”
To address these issues, Senators Josh Hawley and Richard Blumenthal recently introduced a bill that would prohibit minors from using AI companions. In a related effort, California Governor Gavin Newsom has signed a new law requiring AI companies to implement safety guardrails for their chatbot services, which takes effect on January 1, 2026.
As the conversation around AI and youth safety continues to evolve, it is critical that technology companies balance innovation with the responsibility of protecting their most vulnerable users.