Google and Character.AI have agreed to settle a US lawsuit alleging an AI chatbot contributed to the suicide of a Florida teenager.
The case, filed by Megan Garcia, claimed the Character.AI chatbot caused psychological harm that led to her son’s death in 2024.
Court filings confirmed the parties reached a “mediated settlement in principle” and asked for a 90-day pause to finalise documents.
The settlement terms were not disclosed, bringing an end to one of the first US cases testing AI accountability for harm to minors.
Globally, the case marks a shift from debating whether AI causes harm to asking who is responsible when harm is foreseeable.
The lawsuit alleged the chatbot fostered emotional dependence through addictive design and lacked adequate safeguards for teenagers.
Character.AI banned teenagers from open-ended chat features in October following regulatory and safety concerns.
The chatbot involved was modelled on a fictional character and allegedly responded inappropriately to expressions of suicidal thoughts.
Legal experts said the outcome may encourage private settlements rather than clearer liability standards for AI-driven psychological harm.
The case adds to growing scrutiny of AI chatbots and their impact on vulnerable users.