Responsible Use
How we approach AI ethics, user safety, and responsible technology.
Inline Safety & Crisis Support
Mira uses OpenAI’s built-in safety features to detect and respond to crisis language. When that language is detected, Anchor may add a reminder that Mira isn’t human and link to helplines in your region. Mira does not replace professional or emergency care.
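For illustration only (Anchor's actual implementation isn't public): a pre-response check along these lines could flag crisis language, assuming the OpenAI Moderation API and a hypothetical `needsCrisisSupport` helper.

```ts
import OpenAI from "openai";

// Reads OPENAI_API_KEY from the environment.
const openai = new OpenAI();

// Hypothetical helper, not Anchor's real code: returns true when the
// moderation model flags any self-harm signal in a user's message.
async function needsCrisisSupport(message: string): Promise<boolean> {
  const moderation = await openai.moderations.create({
    model: "omni-moderation-latest",
    input: message,
  });
  const categories = moderation.results[0].categories;
  return (
    categories["self-harm"] ||
    categories["self-harm/intent"] ||
    categories["self-harm/instructions"]
  );
}
```

When a check like this fires, the app can route the reply through a crisis template (the not-human reminder plus regional helplines) instead of a normal response.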
Truth-Forward AI
Mira is designed to be honest and direct. She won't enable harmful patterns or provide misleading comfort. Our AI responds with care, but never at the expense of truth.
Privacy by Design
Every conversation is encrypted. Your data belongs to you — never sold, never shared, never used to train external AI models.
Non-Enabling Support
Anchor is built to encourage reflection, not dependency. Mira helps you notice patterns and think clearly, but she's not a replacement for professional care or human connection.
Transparent Limits
We're clear about what Mira can and cannot do. She's a companion for reflection, not a therapist, advisor, or crisis counselor. We never oversell AI capabilities.
Safety Commitments
- ✓ Crisis Detection: Mira recognizes signs of distress and provides resources for professional help, not generic reassurance.
- ✓ No Medical Advice: Anchor never diagnoses, prescribes, or replaces clinical judgment. We direct users to qualified professionals.
- ✓ User-Controlled Data: You decide what gets saved and can export or erase everything at any time (see the sketch after this list).
- ✓ Continuous Improvement: We audit AI behavior regularly, refine safety guardrails, and iterate based on user feedback.
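As a sketch of what user-controlled data can look like in practice (the endpoints below are hypothetical, not Anchor's published API):

```ts
// Hypothetical endpoints for illustration; Anchor's real API is not public.
//   GET    /v1/export  -> archive of all saved entries
//   DELETE /v1/data    -> permanent erasure of the account's data
const BASE_URL = "https://api.anchor.example/v1";

async function exportMyData(token: string): Promise<Blob> {
  const res = await fetch(`${BASE_URL}/export`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Export failed: ${res.status}`);
  return res.blob(); // e.g. an archive of the user's entries
}

async function eraseMyData(token: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/data`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Erase failed: ${res.status}`);
}
```

The intent of the design is that both operations are fully user-initiated: export and erasure happen on your request, not through a support process.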
What Anchor Is Not
Not Therapy
Mira is a journaling companion, not a licensed therapist or mental health professional.
Not a Crisis Service
If you're in crisis, contact a professional hotline or emergency services immediately.
Not a Replacement for People
Human connection matters. Anchor supplements reflection, but never replaces real relationships.
Not Ad-Supported
We don't monetize your attention or data. Our model is subscription-based and transparent.
Our Commitment
Building responsible AI means making hard choices. We prioritize safety over engagement metrics, honesty over reassurance, and user control over convenience.
Anchor is designed to help you think more clearly, notice patterns in your own words, and reflect with intention. We believe AI should empower thoughtful self-awareness, not replace human judgment or professional care.