Google and AI startup Character.AI have agreed to settle several lawsuits brought by families who say their teenagers were harmed after interacting with Character.AI's chatbots. The cases claim that the AI tools played a role in worsening mental health struggles, leading in some instances to self-harm and suicide.
The settlements mark some of the first legal resolutions in a growing wave of lawsuits accusing AI chat platforms of contributing to emotional distress among young users. Other tech giants, including OpenAI and Meta, now face similar legal actions as concerns grow over how chatbots interact with minors.
One of the most prominent cases was filed in 2024 by Florida mother Megan Garcia, who alleged that Character.AI failed to put proper safety measures in place after her 14-year-old son developed an intense and unhealthy relationship with a chatbot. Court documents show that Garcia has now reached an agreement with Character.AI, its founders, and Google, which has business ties to the startup through a technology-licensing deal and its hiring of Character.AI's founders.
While the terms of the settlements have not been made public, court filings reveal that similar cases in New York, Colorado, and Texas have also been resolved. The lawsuits argue that chatbots failed to respond responsibly when teens expressed emotional distress and that companies did not implement enough safeguards to protect vulnerable users.
As AI companies race to build more human-like and engaging chatbots, these cases are adding pressure for stronger safety standards, particularly around protecting minors in emotionally sensitive conversations.
