AI Safety, Philosophy, and Privacy
We are optimistic about AI's long-term potential in behavioral health. We are equally clear that general-purpose AI chat is not yet safe or clinically reliable enough to serve as a primary support channel in high-risk mental health and addiction situations.
In high-risk settings, trained and licensed clinicians remain the primary standard for assessment, intervention, and escalation.
Last updated: March 2026
- 1M+ weekly users with explicit suicidal indicators (company-reported estimate; OpenAI, Oct 2025)
- 560k weekly users with possible psychosis or mania indicators (company-reported estimate; OpenAI, Oct 2025)
- 21.2M U.S. adults with co-occurring mental illness and SUD (SAMHSA, 2024)
- 170+ mental-health experts consulted for sensitive-conversation updates (company-reported; OpenAI, Oct 2025)
Where AI Has Real Promise
- Administrative support, education, and routine workflow assistance for care teams
- Scalable access to structured, low-risk behavior change tools that can reinforce clinician-taught skills
- Faster iteration and personalization when strong guardrails are in place and clinical scope is clearly defined
Where We Are Still Cautious
- Suicide risk, self-harm ideation, or severe mood instability
- Psychosis, paranoia, or delusional thought patterns
- Crisis states involving alcohol, drug use, or withdrawal
- Sensitive personal disclosures without clear clinical accountability
- Cases requiring diagnostic differentiation and structured risk stratification
Why This Matters for Clinical Teams
Effective behavioral healthcare requires more than empathic-sounding responses. It requires licensed clinical judgment, scope-aware intervention, and accountability for risk. Our product boundaries are designed to respect that reality and to support, not dilute, clinician expertise.
Why We Take a Conservative Safety Position
AI chatbots are not therapists or crisis systems
Major public-health and professional bodies warn that consumer AI chat tools are not a reliable substitute for mental health treatment, especially in high-risk contexts where licensed clinical judgment is required.
Suicide and self-harm are active failure modes
In 2025, OpenAI reported very large weekly volumes of sensitive mental-health conversations (see the figures above), while independent evaluations show inconsistent model behavior in suicide-risk scenarios where a trained clinician would perform structured risk assessment and escalation.
Independent research identifies dangerous mental-health responses
Peer-reviewed work has found stigma and inappropriate outputs in critical scenarios, with a clear safety gap between credentialed mental-health professionals and AI in high-stakes use.
Delusions, paranoia, and psychosis can be amplified
Current evidence raises concern that open-ended chatbot interactions can reinforce distorted beliefs in vulnerable users rather than supporting the reality-based stabilization a clinician would provide.
Sycophancy is a design risk in psychiatric contexts
Many systems are optimized to feel agreeable. In some mental-health situations, this can validate harmful or implausible thinking rather than provide the calibrated challenge a trained clinician would use.
Privacy and sensitive disclosures require extra caution
People often share trauma, relapse details, and suicidal thinking with chatbots. Users may assume therapeutic confidentiality and documentation safeguards that consumer tools do not provide by default.
Emotional dependency can emerge
Professional guidance has flagged emotional overreliance as a potential harm, especially for people who are isolated, distressed, or in unstable states without a consistent human care team.
Co-occurring mental illness and substance use increase risk
For people navigating relapse risk, withdrawal, trauma, or severe mood symptoms, rapid human judgment and escalation are often necessary and should not be delegated to general AI chat.
How Neurture Applies This Philosophy
1. Designed to support clinician-led care, not replace licensed therapeutic relationships
2. No anonymous forums or public social feeds inside the product
3. No open-ended AI therapy chat as a front-line support model for high-risk use
4. Structured tools grounded in ACT, CBT, and mindfulness-based relapse prevention
5. Privacy-first design with encrypted on-device storage for sensitive content
6. Clear crisis boundaries and escalation to human clinical and crisis resources when immediate support is needed (see the sketch after this list)
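To make item 6 concrete, here is a minimal, hypothetical sketch of the gating pattern in TypeScript. The function name, indicator list, and resource strings are illustrative assumptions rather than Neurture's actual implementation, and a production system would rely on clinically validated screening logic, not bare keyword matching.

```typescript
// Hypothetical sketch of a hard crisis-boundary gate that runs before any
// AI-assisted feature. Names and keywords are illustrative, not Neurture's
// actual implementation.

type RouteDecision =
  | { kind: "crisis"; resources: string[] }
  | { kind: "self_help" };

// Deliberately over-inclusive: in this pattern, a false positive routes the
// user to human resources, which is the conservative failure direction.
const CRISIS_INDICATORS = ["suicide", "kill myself", "end my life", "self-harm", "overdose"];

function routeMessage(message: string): RouteDecision {
  const text = message.toLowerCase();
  const flagged = CRISIS_INDICATORS.some((term) => text.includes(term));
  if (flagged) {
    // Short-circuit: flagged input never reaches open-ended AI chat.
    return {
      kind: "crisis",
      resources: [
        "Call or text 988 (U.S. Suicide & Crisis Lifeline)",
        "Local emergency services",
      ],
    };
  }
  // Everything else proceeds only to structured, low-risk self-help tools.
  return { kind: "self_help" };
}

console.log(routeMessage("I want to end my life").kind);                      // "crisis"
console.log(routeMessage("Guide me through an urge-surfing exercise").kind);  // "self_help"
```

The point of the sketch is the failure direction: an over-inclusive gate errs toward human escalation, which is the conservative error in this context.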
Note: the large weekly sensitive-conversation figures discussed in industry reporting are company-reported estimates, not independent epidemiologic statistics.
Clinical Scope and Crisis Boundary
Neurture is a self-help tool. It does not replace emergency care, licensed therapy, or medical treatment.
When acuity, diagnostic ambiguity, or safety risk is elevated, evaluation by trained, licensed clinicians remains the conservative standard.
If someone is in crisis, call or text 988 in the U.S., or use local emergency resources. Additional options are available on our mental health resources page.
AI chat assistants can feel helpful, but they are not a safe replacement for therapy, crisis support, or substance-use treatment. Licensed clinicians, crisis counselors, and evidence-based treatment remain the safer standard in high-risk situations.
References
- American Psychological Association (2025): Health Advisory on Generative AI Chatbots and Wellness Apps for Mental Health
- WHO: Ethics and Governance of Artificial Intelligence for Health
- NIMH: Opportunities and Challenges of Developing Information Technologies
- Moore et al. (2025, FAccT): Stigma and inappropriate responses in LLM mental-health scenarios
- McBain et al. (2025, Psychiatric Services): Evaluation of LLM alignment on suicide-risk scenarios
- Campbell et al. (2025, JMIR Mental Health): Generative AI responses to suicide inquiries
- Nature (2025): Can AI chatbots trigger psychosis?
- OpenAI (Oct 27, 2025): Strengthening ChatGPT's responses in sensitive conversations
- 988 Lifeline
- SAMHSA: Co-occurring Disorders and Other Health Conditions