While we can't predict every response, we have robust systems in place, including guardrails, rigorous testing, and a dedicated team of experts who continually review and enhance Nova. We empower Nova to engage in natural, human-like conversations while prioritizing safety.

Understanding responses

Nova, like all advanced chatbots, occasionally generates responses that sound plausible but are not grounded in its training data or in the information provided by Unmind. These instances, known as "hallucinations," can be more noticeable in tools designed to provide factual information.

How does Unmind handle this?

Although it’s impossible to eliminate hallucinations entirely, we have implemented multiple strategies to minimize their frequency and impact:

  • Access to reliable content: Nova uses Unmind’s comprehensive content library, ensuring its responses are grounded in our evidence-based resources.
  • Clear instructions: We have equipped Nova with specific guidelines to ensure it remains honest, accurate, and transparent in its interactions.
  • Balanced interaction: Nova is designed to provide supportive yet realistic responses, sometimes challenging the user to promote growth, rather than merely aiming to please.
  • User awareness: We emphasize the importance of verifying critical information, making users aware that Nova might occasionally err.