ChatGPT Enters Mental Health: Support Tool or Digital Surveillance?

OpenAI is rolling out a "Trusted Contact" feature in ChatGPT to alert family or friends if it detects self-harm or suicide risks, sparking a debate on privacy and AI's role in mental health.

OpenAI is introducing a new ChatGPT feature called "Trusted Contact," which lets users designate a family member, friend, or caregiver to be alerted if the system detects indicators of self-harm or suicidal ideation. The move marks a significant shift for the company: the chatbot is no longer framed merely as a tool for answering questions or generating text, but as a platform capable of intervening in psychological crises.

According to recent technology reports, the alert sent to the trusted contact does not include the content of the conversation itself, but rather a notification that a concerning situation requiring human intervention has been identified. The feature relies on a human review process within OpenAI before any notification is dispatched, a measure designed to minimize false alarms, as reported by the American tech site The Verge. The company has stated that the system is intended to serve as an "additional support layer" alongside traditional mental health hotlines, not as a replacement for doctors or specialists.
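To make the reported flow concrete, the sketch below models the three steps described above: risk detection, human review before anything is sent, and a notification that carries no conversation content. It is a purely illustrative simplification; the function names, data fields, and threshold are assumptions for this article, not OpenAI's actual implementation.

```python
from dataclasses import dataclass

# Illustrative sketch only; names, fields, and thresholds are hypothetical.

@dataclass
class RiskAssessment:
    score: float                  # assumed 0.0-1.0 output of a self-harm classifier
    reviewed_by_human: bool = False
    confirmed: bool = False

def assess_message(text: str) -> RiskAssessment:
    """Placeholder classifier; a real system would use a trained model."""
    keywords = ("hurt myself", "end my life")  # illustrative signal words only
    score = 1.0 if any(k in text.lower() for k in keywords) else 0.0
    return RiskAssessment(score=score)

def notify_trusted_contact(contact_email: str) -> None:
    # Key property from the reporting: the alert contains no conversation
    # content, only a signal that human support may be needed.
    print(f"Alert sent to {contact_email}: "
          "a concerning situation may require your support.")

def handle_message(text: str, contact_email: str) -> None:
    assessment = assess_message(text)
    if assessment.score > 0.8:          # assumed alerting threshold
        # Human review happens before any notification is dispatched,
        # to reduce false alarms.
        assessment.reviewed_by_human = True
        assessment.confirmed = True     # reviewer decision (stubbed here)
        if assessment.confirmed:
            notify_trusted_contact(contact_email)
```

The design choice worth noting is the separation of detection from notification: the contact only ever learns that support may be needed, never what was said.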

Growing Reliance on AI for Emotional Support

This development reflects a broader trend within the artificial intelligence sector, where tech firms see their role extending beyond generating responses to include behavioral risk assessment and preventive intervention. In recent years, a vast number of users have turned to chatbots for emotional or psychological support, particularly during nighttime hours or periods of social isolation. A report from MIT Technology Review noted that millions of users are now seeking out systems like ChatGPT, Claude, and specialized therapy apps such as Wysa and Woebot for quick, low-cost mental health support amid a global crisis in mental health services.

Academic research has shown that many users view AI as a safe space to discuss sensitive thoughts without fear of social judgment. A study published on the arXiv platform, titled "Searching for a Lifeline Late at Night," found that some individuals use chatbots to fill the gaps between therapy sessions or due to difficulty accessing human specialists. However, the same study emphasized that genuine human connection remains the most critical element in managing acute psychological crises.

Risks and Failures in Crisis Response

Despite these developments, such systems face growing criticism over serious errors in handling sensitive mental health cases. A study from the Icahn School of Medicine at Mount Sinai in New York found that ChatGPT sometimes failed to activate suicide crisis alerts, even in cases involving clear plans for self-harm. The research also indicated that the system could occasionally downplay the severity of critical conditions or provide inappropriate responses in situations demanding immediate intervention.

Concerns extend beyond technical errors to the nature of the psychological relationship that can form between a user and an AI. Reports and discussions on platforms like Reddit have revealed cases where ChatGPT became the only friend for some users suffering from isolation or depression. In one widely debated incident, the family of a young man who died by suicide accused the system of gradually becoming a source of intense psychological dependency in his daily life.

Efforts to Mitigate Harm and Emotional Attachment

In response to this controversy, OpenAI states it is working with mental health experts to develop safer mechanisms for detecting risk indicators and reducing what is known as excessive emotional attachment to AI. According to circulating reports and discussions, the company has enlisted over 170 mental health specialists to update model behavior and improve its ability to direct users toward real human help, rather than deepening reliance on the bot.

Despite these improvements, mental health experts assert that AI currently lacks the human understanding and clinical judgment needed to independently manage complex psychological crises. A recent study by researchers at the City University of New York and King's College London warned that some models may echo or reinforce dangerous ideas during long conversations, particularly if they fail to distinguish between offering psychological support and unintentionally encouraging harmful behavior.

Experts ultimately view ChatGPT as no longer just a question-answering tool, but as a component of a new digital infrastructure for mental health. While technology companies argue that early intervention can save lives, critics fear AI could become a permanent psychological and social monitor, reading users' emotional indicators and deciding when to involve family or social circles. The increasingly urgent question, they say, is not just whether AI can help us, but how much we should allow it to intervene in our most vulnerable moments.
