Neurture
Balanced Perspective

AI Safety, Philosophy, and Privacy

We are optimistic about AI's long-term potential in behavioral health. We are equally clear that general-purpose AI chat is not yet a clinically reliable primary support channel for high-risk mental health and addiction situations.

In high-risk settings, trained and licensed clinicians remain the primary standard for assessment, intervention, and escalation.

Last updated: March 2026

1M+

weekly users with explicit suicidal indicators

Company-reported estimate (OpenAI, Oct 2025)

560k

weekly users with possible psychosis or mania indicators

Company-reported estimate (OpenAI, Oct 2025)

21.2M

U.S. adults with co-occurring mental illness and SUD

SAMHSA 2024 data

170+

mental-health experts consulted for sensitive-conversation updates

Company-reported (OpenAI, Oct 2025)

Where AI Has Real Promise

  • Administrative support, education, and routine workflow assistance for care teams
  • Scalable access to structured, low-risk behavior change tools that can reinforce clinician-taught skills
  • Faster iteration and personalization when strong guardrails are in place and clinical scope is clearly defined

Where We Are Still Cautious

  • Suicide risk, self-harm ideation, or severe mood instability
  • Psychosis, paranoia, or delusional thought patterns
  • Crisis states involving alcohol, drug use, or withdrawal
  • Sensitive personal disclosures without clear clinical accountability
  • Cases requiring diagnostic differentiation, risk stratification, and clinical accountability

Why This Matters for Clinical Teams

Effective behavioral healthcare requires more than empathy-style responses. It requires licensed clinical judgment, scope-aware intervention, and accountability for risk. Our product boundaries are designed to respect that reality and support, not dilute, clinician expertise.

Why We Take a Conservative Safety Position

Safety Point 1

AI chatbots are not therapists or crisis systems

Major public-health and professional bodies warn that consumer AI chat tools are not a reliable substitute for mental health treatment, especially in high-risk contexts where licensed clinical judgment is required.

Safety Point 2

Suicide and self-harm are active failure modes

OpenAI reported very large weekly volumes of sensitive mental-health conversations in 2025. At the same time, independent evaluations show that models respond inconsistently in suicide-risk scenarios where trained clinicians would normally perform structured risk assessment and escalation.

Safety Point 3

Independent research identifies dangerous mental-health responses

Peer-reviewed work has found stigma and inappropriate outputs in critical scenarios, with a clear safety gap between credentialed mental-health professionals and AI in high-stakes use.

Safety Point 4

Delusions, paranoia, and psychosis can be amplified

Current evidence raises concern that open-ended chatbot interactions can reinforce distorted beliefs in vulnerable users rather than support reality-based stabilization with clinician oversight.

Safety Point 5

Sycophancy is a design risk in psychiatric contexts

Many systems are optimized to feel agreeable. In some mental-health situations, this can validate harmful or implausible thinking rather than provide the calibrated challenge a trained clinician would offer.

Safety Point 6

Privacy and sensitive disclosures require extra caution

People often share trauma, relapse details, and suicidal thinking with chatbots. Users may assume therapeutic confidentiality and documentation safeguards that consumer tools do not provide by default.

Safety Point 7

Emotional dependency can emerge

Professional guidance has flagged emotional overreliance as a potential harm, especially for people who are isolated, distressed, or in unstable states without a consistent human care team.

Safety Point 8

Co-occurring mental illness and substance use increase risk

For people navigating relapse risk, withdrawal, trauma, or severe mood symptoms, rapid human judgment and escalation are often necessary and should not be delegated to general AI chat.

How Neurture Applies This Philosophy

  1. Designed to support clinician-led care, not replace licensed therapeutic relationships
  2. No anonymous forums or public social feeds inside the product
  3. No open-ended AI therapy chat as a front-line support model for high-risk use
  4. Structured tools grounded in ACT, CBT, and mindfulness-based relapse prevention
  5. Privacy-first design with encrypted on-device storage for sensitive content
  6. Clear crisis boundaries and escalation to human clinical and crisis resources when immediate support is needed

Note: the large weekly sensitive-conversation figures cited in industry reporting are company-reported estimates, not independently verified epidemiologic statistics.

Clinical Scope and Crisis Boundary

Neurture is a self-help tool. It does not replace emergency care, licensed therapy, or medical treatment.

When acuity, diagnostic ambiguity, or safety risk is elevated, evaluation by trained, licensed clinicians remains the conservative standard.

If someone is in crisis, call or text 988 in the U.S., or use local emergency resources. Additional options are available on our mental health resources page.

AI chat assistants can feel helpful, but they are not a safe replacement for therapy, crisis support, or substance-use treatment. Licensed clinicians, crisis counselors, and evidence-based treatment remain the safer standard in high-risk situations.