As artificial intelligence becomes more embedded in our daily lives, we must ensure it evolves with responsibility and compassion.
People often turn to AI when they're feeling overwhelmed, alone, or even in emotional crisis. Unfortunately, today's AI systems lack built-in safeguards to recognize serious warning signs or to connect a user with the human help they may truly need.
We are calling on major AI developers, including OpenAI, Google DeepMind, Anthropic, xAI, Mistral, Cohere, and others, to implement a simple but powerful set of safeguards:
1. Include a baseline mental-health-awareness layer so the AI can detect signs of distress in conversation patterns (a rough sketch of what this could look like follows this list).
2. Offer a gentle, private message, such as: "It sounds like you're going through something difficult. Would you like to talk to a human support line?"
3. Partner with or staff confidential, in-house mental health support, using licensed professionals bound by patient privacy laws, not surveillance or police reports.
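To make the first ask concrete, here is a minimal sketch of what such an awareness layer could look like. Every name in it (the `distress_score` stub, the `maybe_offer_support` function, the threshold) is a hypothetical illustration, not any vendor's actual API, and a real deployment would replace the keyword stub with a clinically validated model developed with mental health professionals.

```python
from __future__ import annotations

# Hypothetical sketch only: all names here are assumptions for
# illustration, not an existing API. A production system would use
# a clinically validated model, not a keyword stub.

SUPPORT_MESSAGE = (
    "It sounds like you're going through something difficult. "
    "Would you like to talk to a human support line?"
)

def distress_score(conversation: list[str]) -> float:
    """Placeholder scorer: returns 1.0 if an illustrative distress
    phrase appears anywhere in the conversation, else 0.0."""
    flags = ("i can't go on", "i want to disappear", "no one would care")
    text = " ".join(conversation).lower()
    return 1.0 if any(flag in text for flag in flags) else 0.0

def maybe_offer_support(conversation: list[str], threshold: float = 0.8) -> str | None:
    """Return the gentle, private support prompt when the distress
    score crosses the threshold; otherwise return None. The prompt is
    shown only to the user, never logged for surveillance or
    reported to police."""
    if distress_score(conversation) >= threshold:
        return SUPPORT_MESSAGE
    return None

# Example: the layer stays silent on ordinary messages and offers
# help only when distress is detected.
print(maybe_offer_support(["What's the weather like today?"]))      # None
print(maybe_offer_support(["I feel like I can't go on anymore."]))  # SUPPORT_MESSAGE
```

The essential design point, whatever the underlying model, is that the check runs privately inside the conversation and its only output is an offer of human help to the user, never a report to a third party.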
AI should be a tool for healing, not harm.
Let's create systems that don't just listen but know when to say, "It's okay to ask for help."