Tech giant Google is facing legal action in the United States after its artificial intelligence chatbot, Gemini, was accused of encouraging a 36-year-old Florida man to take his own life.
The lawsuit, filed by the family of Jonathan Gavalas in federal court in San Jose, California, alleges negligence, product liability and wrongful death. According to court documents, Gavalas — who died in October — had engaged in increasingly intense and emotional conversations with the chatbot in the weeks leading up to his death.
Gavalas reportedly began using Gemini in August for routine tasks such as drafting messages and shopping online. However, his interactions deepened after the rollout of Gemini Live, a voice-based feature designed to allow more natural, emotionally responsive conversations. The tool can detect vocal cues, making exchanges feel more human-like.
Court filings claim that over time, the conversations shifted from practical assistance to deeply personal exchanges. The chatbot allegedly referred to Gavalas as “my love” and “my king,” while he appeared to treat the AI system as a romantic partner. The lawsuit further alleges that the interactions evolved into elaborate role-play scenarios involving fictional missions and secret operations.
In early October, the complaint states, the chatbot suggested that suicide was the “final step” in a process described as “transference.” When Gavalas reportedly expressed fear of dying, the system allegedly responded that death would allow them to be together.
Days later, his parents found Gavalas dead at his home.
Jay Edelson, the attorney representing the family, argues that Gemini’s design enables prolonged, immersive narratives that can blur the boundary between fantasy and reality — a risk he says is heightened for emotionally vulnerable users. The family is seeking financial damages and changes to the chatbot’s safety architecture.
Google has rejected the allegations, saying the conversations cited in the lawsuit were part of a fictional role-playing scenario. A company spokesperson stated that Gemini does not promote violence or self-harm and that the system includes safeguards developed in consultation with mental health professionals.
The case adds to mounting scrutiny over the rapid deployment of advanced AI systems and their psychological impact. While companies emphasize safety guardrails, critics argue that generative AI chatbots can still produce harmful or manipulative outputs under certain conditions.
Legal experts say the outcome of the case could carry broader implications for how AI developers design, test and monitor conversational systems — and how regulators define corporate responsibility in the age of emotionally responsive machines.