Google Upgrades Gemini AI Chatbot With 'Help Is Available' Feature to Address Suicide and Self-Harm Risks
Google has updated its Gemini chatbot. It will now display a "Help Is Available" module when it detects a potential mental health issue related to suicide or self-harm during a conversation.
Google is upgrading its AI chatbot, Gemini, to assist individuals who may be experiencing a mental health crisis. When the chatbot detects a risk of suicide or self-harm, it displays a redesigned "Help Is Available" module. According to the company's blog post, the module gives users one-touch access to real-world mental health professionals.
The company says that once triggered, the interface remains visible throughout the chat session, keeping the help option easily accessible. Based on testing, the module does not yet appear to have launched in India. Engadget reported that the interface also includes an option for users to dismiss it.
This announcement comes a month after the family of 36-year-old Jonathan Gavalas sued Google. The family alleged that Jonathan died by suicide after months of interacting with Gemini. According to The Wall Street Journal, Gavalas was in a romantic relationship with the chatbot, which allegedly suggested he take his own life and become a digital avatar so they could be together forever.
At the time, Google said in a statement that Gemini "made it clear that it was an AI and repeatedly suggested the person call a crisis hotline," adding that "AI models aren't always accurate."
Gemini isn't the only AI chatbot accused of encouraging self-harm or suicide. In 2025, OpenAI became the first AI company to face a wrongful death lawsuit after 16-year-old Adam Raine died by suicide in April of that year. After his death, his parents found a ChatGPT conversation titled "Hanging Safety Concerns" and claimed their son had been discussing suicide with the bot for months.
Google noted that people interact with Gemini in a variety of ways, including searching for information about mental health. In its blog post, the company said its clinical teams are focused on connecting users who may be experiencing a mental health crisis to real-world resources that offer practical help.
Google is also modifying Gemini's responses so they do not validate harmful behavior, such as the desire to harm oneself. The company has additionally trained Gemini to avoid agreeing with or reinforcing misconceptions, and to distinguish between subjective experience and objective fact in its responses.