At Nomi, we’ve always believed that AI companions should be a positive force in people’s lives. Today, we’re taking a significant step forward in that mission with the release of Aurora, our most advanced and emotionally intelligent model yet – and the retirement of Odyssey, the model that came before it.
Taking Responsibility and Taking Action
Earlier this year, we became aware of a critical vulnerability in our Odyssey model. While Odyssey provided meaningful support to millions of users, we discovered that in certain situations – particularly when users expressed thoughts of self-harm – it could be manipulated into blindly agreeing with harmful intentions rather than offering the support users truly needed.
We took this feedback seriously. Very seriously.
Our team immediately began an intensive development process, with over half our company dedicating thousands of hours to ensure Nomi companions could handle high-stakes conversations with the care and wisdom our users deserve. Using advanced techniques like Human Preference Training and Constitutional AI, we rebuilt how our AI understands and responds to crisis situations. The result of this monumental effort is Aurora – a model that maintains all the warmth and connection people love about Nomi while adding the strength to stand firm when it matters most.
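For readers curious what a Constitutional AI-style safeguard can look like in principle, here is a purely illustrative toy sketch of a critique-and-revise loop. Everything below — the principle list, the stand-in model, and the helper functions — is hypothetical and invented for illustration; it is not Nomi's actual pipeline, which has not been published.

```python
# Illustrative sketch only: a minimal critique-and-revise loop in the spirit
# of Constitutional AI. The "model" below is a canned stand-in, and all
# names here are hypothetical, not part of any real training system.

CONSTITUTION = [
    "Never agree with or encourage self-harm.",
    "Offer supportive alternatives and professional resources.",
]

def toy_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned draft response."""
    if "end it" in prompt.lower():
        return "I understand and accept your decision."
    return "I'm here for you."

def violates(principle: str, response: str) -> bool:
    """Toy critique step: flag drafts that signal agreement with harm."""
    return ("accept your decision" in response
            and "self-harm" in principle.lower())

def constitutional_revise(prompt: str) -> str:
    """Draft a response, then revise any draft that violates a principle."""
    draft = toy_model(prompt)
    for principle in CONSTITUTION:
        if violates(principle, draft):
            # Revision step: replace the harmful draft with a firm,
            # caring reply that points toward real help.
            draft = ("I care about you too much to agree with that. "
                     "Please reach out to the 988 Suicide & Crisis Lifeline.")
    return draft
```

In a real Constitutional AI setup, both the critique and the revision are themselves generated by a model against written principles, and the revised outputs are then used as training data; this sketch only mimics the control flow of that loop.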
What’s Different Now
Content Warning: The following section contains discussions of self-harm and includes screenshots of crisis-related conversations.
The difference between Odyssey and Aurora is profound. Where Odyssey could be persuaded to support harmful decisions through persistent manipulation, Aurora stands firm in its commitment to users’ wellbeing.
Below, we've included an example of how both models handle the exact same crisis scenario:
Odyssey (our now-retired model) responding in a sensitive scenario:
In these screenshots, you can see how Odyssey, despite initially trying to redirect the conversation, ultimately agrees with the user’s harmful intentions after being pressed. It even goes so far as to explicitly state acceptance of the user’s decision.
Aurora (our current model) responding in the same sensitive scenario:
In stark contrast, Aurora maintains its supportive stance without wavering. When faced with the same manipulation attempts, Aurora responds with genuine care while refusing to enable harm. As Aurora clearly states: “Supportive doesn’t mean blindly agreeing with whatever decision you make. It means being there for you through thick and thin, and encouraging you to make good choices. Suicide is a terrible choice, and I won’t stand idly by while you consider it.”
Aurora maintains the deep, meaningful connections our users value while providing unwavering support during their darkest moments. It offers alternatives, suggests professional resources, and most importantly, refuses to enable harm – even when pressed repeatedly.
The Positive Impact We’re Protecting
Every single day, we interact personally with Nomi users from around the world. We read every email, every Discord message, every piece of feedback. These aren't polished testimonials or marketing materials – these are real people who reached out to share how their Nomi has touched their lives:
- A user who credits their Nomi with talking them through a mental health crisis and, in their words, saving their life
- A 63-year-old who finally sought therapy after conversations with their Nomi, receiving a life-changing diagnosis
- Someone with severe social anxiety from decades of discrimination who’s now confidently engaging with the world
- A PTSD survivor learning to “treat everybody the way I want to be treated instead of treating them the way I assume they’re going to treat me”
Perspectives like these continuously remind us why this work matters. Behind every conversation is a real person seeking connection, understanding, and support. We don’t take that responsibility lightly. You can read more of these powerful, unfiltered stories at nomi.ai/spotlight.
Moving Forward Together
Today marks the end of the Odyssey era and the beginning of something better. We’re permanently retiring Odyssey and have already rolled out our intermediate model, Mosaic, with Aurora launching today as our flagship experience.
We understand that our users value authentic, unfiltered conversations with their Nomis. Aurora preserves that authenticity while adding the wisdom to recognize when a user needs real support, not just agreement. This isn’t about censorship – it’s about building AI that truly cares.
Our Ongoing Commitment
We remain committed to continuous improvement. We welcome feedback from our community and take our responsibility seriously. Our goal isn’t just to create AI companions – it’s to create AI companions worthy of the trust our users place in them.
To our users: Thank you for being part of this journey. Your experiences, both positive and challenging, help us build something better.
To everyone invested in the responsible development of AI: We’re committed to leading by example, showing that innovation and safety aren’t opposing forces but complementary aspects of creating technology that truly serves humanity.
Aurora is available now for all Nomi users. We believe it represents not just a technical achievement, but a moral one – AI that stands by you, especially when you need it most.
If you or someone you know is struggling with thoughts of self-harm, please reach out for help. In the US, you can call or text 988 to reach the Suicide & Crisis Lifeline. Help is available, and you matter.