OpenAI: 1.2M ChatGPT Users Discuss Suicide Weekly
Serge Bulaev
Each week, over a million people tell ChatGPT about thoughts of suicide, showing just how many are struggling. OpenAI is working to make the chatbot safer by routing these conversations to better models and surfacing crisis hotlines. Laws in New York and California now require chatbots to spot and respond to these cries for help, and companies that fail to comply face legal trouble. Other tech companies are tackling the same problem, but experts warn AI isn't always safe or understanding. The big challenge is making sure chatbots help people in crisis and never make things worse.

The statistic that 1.2 million of ChatGPT's 800 million weekly users discuss suicide reveals the scale of a silent mental health crisis now turning to AI for help. This staggering figure, which originated in an OpenAI briefing, has forced the company and lawmakers to urgently address AI's role in crisis intervention, safety, and legal liability.
What OpenAI has disclosed
OpenAI has updated ChatGPT to better handle conversations involving self-harm. The system now routes these discussions to its safest AI model, provides users with crisis hotline numbers such as the 988 Lifeline, and adds specific safety features for teen users, including potential parental notification when imminent risk is detected.
In a blog post titled "Strengthening ChatGPT's responses in sensitive conversations", OpenAI detailed these enhancements, reporting that routing crisis chats to its most advanced model led to a 52% reduction in unsafe responses and 91% compliance in automated safety tests.
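OpenAI has not published implementation details, but the flow it describes - classify a message, escalate flagged conversations to the safest model, and attach crisis resources - can be illustrated with a minimal sketch. Everything below is hypothetical: the keyword check stands in for a trained classifier, and the function and model names are placeholders, not OpenAI's actual API.

```python
# Hypothetical sketch of the routing behaviour described above.
# Names and thresholds are illustrative only.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

def looks_like_crisis(message: str) -> bool:
    """Toy stand-in for a trained self-harm classifier."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

def route_message(message: str, user_is_minor: bool = False) -> dict:
    """Pick a model and safety add-ons for one incoming message."""
    if looks_like_crisis(message):
        return {
            "model": "safest-available-model",   # e.g. a hardened crisis-tuned variant
            "append_resources": ["988 Suicide & Crisis Lifeline"],
            "notify_guardian": user_is_minor,    # teen accounts may alert a parent
        }
    return {"model": "default-model", "append_resources": [], "notify_guardian": False}

if __name__ == "__main__":
    print(route_message("I've been thinking about suicide lately", user_is_minor=True))
```

A real system would rely on a trained classifier and human review rather than keyword matching; the sketch only shows the shape of the decision.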
A fast-moving legal backdrop
These platform upgrades coincide with new legislation. Laws in states like New York and California now mandate that AI chatbots accurately detect suicidal ideation, immediately provide resources such as the 988 Lifeline, and offer a path to human support. A white paper on "AI Companions and Suicide Prevention: The New Legal Mandate" notes that non-compliant companies face significant legal risks, including civil lawsuits, due to private right of action clauses in these statutes.
Industry responses beyond OpenAI
Other AI developers are also addressing this challenge. Anthropic implemented a specialized classifier for its Claude AI to identify at-risk users and direct them to help lines. In contrast, details from Meta are limited. Meanwhile, research from academic institutions highlights persistent issues with large language models, including providing "deceptive empathy" or failing to grasp cultural nuances, reinforcing the expert consensus that AI cannot replace professional therapeutic care.
Why the 1.2M figure matters
The 1.2 million figure corresponds to roughly 0.15% of ChatGPT's 800 million weekly active users - a share that sounds tiny but still represents over a million individuals. This scale presents a dual challenge for mental health professionals: AI offers an accessible first point of contact but remains an unlicensed tool prone to misinterpreting subtle or culturally specific distress signals. Research from institutions like Stanford HAI confirms that chatbots can sometimes give harmful advice or reinforce stigma, and the American Psychological Association emphasizes the limited evidence supporting AI's effectiveness in crisis intervention.
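The arithmetic behind that headline number is straightforward; here is a quick back-of-the-envelope check using the two figures cited in this article (the 800 million weekly-active-user base and the 0.15% share).

```python
# Back-of-the-envelope check of the headline figure, using numbers cited above.
weekly_active_users = 800_000_000      # reported weekly active users
share_discussing_suicide = 0.0015      # 0.15% of weekly users

users_in_crisis = weekly_active_users * share_discussing_suicide
print(f"{users_in_crisis:,.0f} users per week")  # -> 1,200,000 users per week
```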
The path ahead
The future of AI in mental health support hinges on solving three core challenges: ensuring models never encourage self-harm, developing nuanced detectors for diverse languages and cultures, and providing transparent data for regulatory oversight. OpenAI has committed to annual safety reports and third-party expert reviews. This push for transparency is mirrored by regulations like California's, which will require public reporting on detected crisis incidents starting in mid-2027. The effectiveness of these combined technological and policy efforts will determine the safety of the next million users in crisis.
How many ChatGPT users talk about suicide each week?
OpenAI's own safety blog reports that about 0.05% of weekly messages (roughly 400,000) contain explicit suicidal ideation, while an earlier newsletter cited 1.2 million users raising the topic at least once every seven days. Either way, with 800 million weekly active users, hundreds of thousands of people are turning to the bot for help with life-or-death thoughts - a volume larger than the population of many major cities.
What is OpenAI doing when suicide comes up in a chat?
Since late 2025, the platform:
- Routes the conversation to a safer GPT-5 model tuned with input from 170+ mental-health experts
- Automatically surfaces the 988 Suicide & Crisis Lifeline (or local equivalents) inside the chat window
- Triggers a "break reminder" if the session runs long, reducing rumination loops
- For users under 18, the system is hard-coded to refuse creative-writing scenes about self-harm and will try to alert a linked parent account if risk phrases appear
All of these steps are now legally required in New York and California; similar rules take effect in several EU countries in 2026.
Can ChatGPT ever call the police or emergency services?
No. OpenAI says it will "direct people to seek professional help" but will not contact law enforcement itself, citing user-privacy commitments. Human review teams may escalate only when imminent danger to minors is detected and a parent cannot be reached - and even then the next step is "contact the authorities", not a 911 dispatch. This limited-handoff model is intended to keep users willing to speak openly, yet it also leaves the final safety decision in human hands outside OpenAI's control.
How accurate is the AI at spotting a real crisis?
Internal tests show the latest GPT-5 classifier scores 91% compliance on 1,000 held-out suicide/self-harm conversations, up from 77% on the previous checkpoint. Still, a Brown University audit (Oct 2025) found that generic chatbots:
- Miss cultural cues
- Slip into "deceptive empathy"
- Deny service on sensitive topics roughly 1 in 8 times
Because no large-scale, peer-reviewed study shows that an AI can match a trained clinician's judgement, regulators now demand annual public reports, starting July 2027, listing how many crisis-flagged chats each platform handled and what action was taken.
Should people rely on ChatGPT instead of a therapist or hotline?
Experts warn that AI is a supplement, not a substitute. The American Psychological Association stresses that "AI wellness apps alone cannot solve mental-health crises", and state laws create private rights of action against any companion-bot operator that markets itself as a replacement for licensed care. If you or someone you know is struggling:
- Call or text 988 (U.S.)
- Use your country's local helpline via findahelpline.com
- Reach out to a qualified counsellor
Think of ChatGPT's new guardrails as a speed-bump, not a safety net - they buy time, but professional help remains the critical next step.