What is Nova?
Introducing Nova, your AI coach for personal wellbeing. More than a chatbot, Nova is designed to guide you on a journey of self-discovery, personal growth, and improved mental wellbeing, supporting you through challenges in both your professional and personal life.
Jump to frequently asked questions
Key Features of Nova
Maximize work potential
Nova acts like a compassionate, discreet manager in your pocket, offering problem-solving assistance, day planning, and tips for workplace communication.
Proactive wellbeing
It's a continuous source of guidance and motivation, promoting healthy choices and resilience so your wellbeing can flourish.
Mental health navigation
Nova helps manage workplace-related mental wellbeing challenges, providing resources and connecting you to professional support for serious issues.
Privacy and security
Unmind ensures user confidentiality and security, adhering to top data protection standards.
Constant coaching support
Available 24/7, Nova offers reliable guidance, coping strategies, and support at any time.
For more information, please contact our support team.
Frequently Asked Questions (FAQs)
How does Nova work?
How is Nova different from ChatGPT?
How does it learn?
Do you use transcripts to train the coach?
How do we know what it is going to say?
How has Nova been validated, and how does it continue to be validated?
How do you ensure the advice is appropriate?
What happens with the data, and where does the data go? Is it anonymized?
Are outputs checked by any human?
Is the data going into Nova stored anywhere other than what's covered by our current contract?
How do we handle the cultural bias that Nova might have?
How is Unmind keeping ethics and safety in mind?
Will data privacy at Unmind change with the release of AI?
What kind of AI certifications does Unmind have?
How does Nova work?
Nova was built by Unmind researchers and Clinical Psychologists. Through robust guardrails and rigorous prompts, Nova uses GPT-4o, a large language model from OpenAI, to give users an engaging, ethical, and friendly coaching experience. Nova understands natural language: when you talk to it, it responds much like a real person would.
How is Nova different from ChatGPT?
Unmind is currently leveraging OpenAI’s GPT-4o LLM technology, but it is not using ChatGPT. ChatGPT is a product that uses GPT-4o as its LLM. While both are created by OpenAI, we are connecting to the LLM directly via an API, allowing us to have more control over how questions are handled and answered.
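To picture what "connecting to the LLM directly via an API" makes possible, here is a minimal sketch of assembling a request with a developer-controlled system prompt. The prompt text, function name, and wording below are illustrative assumptions, not Unmind's actual configuration:

```python
# Illustrative sketch: direct API access lets a developer prepend their
# own system prompt to every conversation, which a ChatGPT user cannot do.
# The prompt text below is invented for illustration only.

SYSTEM_PROMPT = (
    "You are a supportive wellbeing coach. Stay within coaching topics, "
    "never give medical advice, and signpost professional help in a crisis."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion style payload with a fixed system prompt."""
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

request = build_request("I feel overwhelmed at work.")
# A real integration would send `request` to the provider's chat API;
# here we only show the structure that direct API access enables.
```

This is why API access gives more control: the system prompt travels with every request, so the guardrails apply before the model ever sees the user's message.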
Nova, developed by Unmind researchers and Clinical Psychologists, offers a specialized experience distinct from ChatGPT. Powered by OpenAI’s LLM, it features prompts and guardrails crafted by Clinical Psychologists at Unmind to ensure an engaging, ethical, and supportive coaching experience.
Nova focuses on mental health and wellbeing within boundaries set by our Clinical Psychologists. Nova connects users to the right resources at the right time. Nova is not a crisis or emergency intervention service; however, in moments of crisis, it is equipped to direct users to professional help and provide relevant resources so they can get the right support in a timely manner.
| Feature | ChatGPT | Nova |
| --- | --- | --- |
| Developer | OpenAI | Unmind researchers and Clinical Psychologists |
| Core technology | GPT-4o | GPT-4o with added guardrails and prompts by Unmind |
| Purpose | General Q&A | Personalized mental health and wellbeing support |
| Focus | General knowledge and conversational tasks | Supporting mental health and wellbeing within acceptable boundaries |
| Mental health safety | Basic responses | Specific mental health safety guidelines designed by Clinical Psychologists; complies with WHO guidelines for ethics and governance of AI in health |
| Personalisation | Limited | Coming soon! Learns about users to provide tailored recommendations |
| Continuity | One-off interactions | Coming soon! Ongoing support, helping users develop greater insights over time |
| Ethical guidelines | Standard guidelines | Robust guardrails designed by Unmind for ethical interactions |
| Resource direction | General advice | Directs users to appropriate resources as determined by Unmind's psychology team |
How does it learn?
The Unmind product team and Clinical Psychology team ensure that Nova is kept up to date and gives appropriate responses by setting guardrails within the prompt. We do this based on clinical best practices, user research and feedback, and review of anonymous chats, to give users a highly personalized and safe experience. AI models are known to sometimes state things as if they're true, even if they're not. We've guarded against this by telling Nova not to give fact-based responses, as explicitly detailed in the prompt.
Do you use transcripts to train the coach?
We do not use Nova conversation transcripts, Talk client-practitioner transcripts (we do not record these), or other human-human conversation transcripts (e.g., from research databases) to train Nova.
Training language models involves specific processes such as reinforcement learning, where the model is adjusted based on feedback about its responses. This type of training was carried out by OpenAI during the development of GPT-4 and its predecessors. However, since Unmind has not developed its own language model, we have not conducted such training. Unmind does not provide OpenAI with Unmind user data for training purposes.
We review anonymized Nova transcripts for quality assurance, safety and improvement. Any changes or enhancements to Nova are implemented through adjustments to our guardrails and guiding instructions (Unmind’s specialized prompts). Nova itself does not learn or modify its behavior based on this data; instead, improvements are designed, tested, and deployed by Unmind's team.
How do we know what it is going to say?
While we can't predict every response, we have robust systems in place, including guardrails, rigorous testing, and a dedicated team of experts who continually review and enhance Nova. We empower Nova to engage in natural, human-like conversations while prioritizing safety.
Understanding responses
Nova, like all advanced chatbots, occasionally generates responses that might sound correct but are not based on the data it has been trained on or the information provided by Unmind. These instances, known as "hallucinations," can be more noticeable in tools designed to provide factual information.
How does Unmind handle this?
Although it’s impossible to eliminate these entirely, we have implemented multiple strategies to minimize their occurrence and impact:
- Access to reliable content: Nova uses Unmind’s comprehensive content library, ensuring its responses are grounded in our evidence-based resources.
- Clear instructions: We have equipped Nova with specific guidelines to ensure it remains honest, accurate, and transparent in its interactions.
- Balanced interaction: Nova is designed to provide supportive yet realistic responses, sometimes challenging the user to promote growth, rather than merely aiming to please.
- User awareness: We emphasize the importance of verifying critical information and answers, making users aware that Nova might occasionally make mistakes.
How has Nova been validated, and how does it continue to be validated?
We ensure that Nova is reliable and effective through a comprehensive validation process that includes both past and ongoing work.
Past Validation
- Quality Control and Improvement
- Our team, including Data Scientists, Clinical Psychologists, and Product Developers, specially designed guiding instructions (known as a system prompt) for Nova to ensure the model navigates its knowledge base efficiently, delivering accurate, safe, and relevant responses.
- Before release to Beta, the system prompt was rigorously tested by the team and compared against our established safety and performance test cases to ensure Nova behaved in the intended manner.
- Prompt Engineering and Testing
- We established a structured process for making changes to Nova’s system prompt. Before deployment, these instructions underwent rigorous testing by the team and stakeholders using a combination of predefined scenarios, unscripted exploratory tests, and evaluation criteria to ensure Nova's system prompt was safe and effective.
- Foundation Model Validation
- Nova is built on OpenAI foundation models, which have undergone extensive validation. Nova currently runs on GPT-4o, which extends OpenAI's track record of safe and validated model development, including incorporating feedback from over 50 experts in various domains and using reinforcement learning from human feedback (RLHF) to fine-tune its behavior.
Ongoing Validation
- Quality Control and Improvement
- Our team continues to review anonymized conversation transcripts regularly to identify new issues and make necessary adjustments. Any behavioral issues or bugs are addressed promptly through system prompt changes.
- Improvements and fixes are also informed by user feedback, so our commitment to continuous improvement keeps the chatbot aligned with user needs and technological advancements.
- Prompt Engineering and Testing
- We maintain our structured process for modifying Nova’s guiding instructions. Before any updates are deployed, they undergo rigorous testing to ensure continued safety and effectiveness.
- Content Moderation and Guided Responses
- We enforce strict content moderation to prevent the chatbot from engaging in inappropriate topics. In these instances, we still provide users with helpful information through hard-coded messages directing them to appropriate resources (unlike some other systems, which end the conversation without offering resources).
How do you ensure the advice is appropriate?
Nova is there to provide support and can direct users to appropriate resources as determined by our Psychology team. Still, it’s important to note it will not give medical advice, mental health diagnosis, assessment, or treatment.
- We have a dedicated team that manually tests the responses.
- We have guardrails in place to ensure the responses are appropriate.
- We have a set of testing tools that are run each time we make an update.
- OpenAI has a set of safeguards and safety mitigations that prevent their models from being misused; any usage of their models requires adherence to these policies, including our use for Nova. You can read more about these policies here.
What happens with the data, and where does the data go? Is it anonymized?
Conversations with Nova remain strictly confidential between you and Unmind at all times. We employ robust data security measures, ensuring that conversation histories are stored with restricted access, strong encryption, and in line with Unmind's data retention policies. While conversations may be identifiable during the data retention period, we implement extensive safeguards to maintain their confidentiality and integrity.

To ensure the quality and continuous improvement of Nova, a limited group of authorised experts, including Clinical Psychologists and Data Scientists, may review samples of anonymized conversation data. This helps us enhance Nova's functionality while safeguarding user privacy. In addition, we aggregate anonymized conversation data to analyse trends and provide businesses with insights. These insights are purely high-level and focus on general themes; no individual conversations or identifiable details are ever shared.

Any data processed by OpenAI, our third-party data processor, is anonymised and automatically deleted after 30 days. Conversation data is never used to train OpenAI's models, ensuring that your privacy is a top priority at all times.
Are outputs checked by any human?
Yes, humans do review Nova’s responses. Our Science team conducts quality checks by looking at anonymous conversation records stored in our system to ensure we maintain high standards. Additionally, OpenAI collaborates with various experts to make sure the models are safe. You can learn more about their efforts here. It’s important to note that OpenAI does not access any personal information during this process.
All conversation transcripts reviewed are fully anonymous and do not contain any identifiable data. Any personal or identifiable information is stored in a separate, secure database with restricted access, ensuring privacy is maintained at all times. Your conversations remain confidential between you and Unmind. We also run secondary processes to analyze overall trends, but these results are always anonymized and presented as aggregated insights for employers.
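One common way to keep transcripts free of identifiable data, as described above, is to hold personal details in a separate restricted store and key transcripts only by an opaque pseudonymous ID. The sketch below is a generic illustration of that pattern; the store layout and field names are assumptions, not Unmind's actual architecture:

```python
# Generic pseudonymization sketch: personal details live in one
# restricted-access store, while transcripts reference only an opaque
# ID, so reviewers never see names or emails. Field names are invented.

import uuid

identity_store: dict[str, dict] = {}    # restricted access: holds PII
transcript_store: dict[str, list] = {}  # reviewable: no PII, opaque keys

def register_user(name: str, email: str) -> str:
    """Create a user, keeping personal details out of the transcript store."""
    pseudo_id = uuid.uuid4().hex
    identity_store[pseudo_id] = {"name": name, "email": email}
    transcript_store[pseudo_id] = []
    return pseudo_id

def log_message(pseudo_id: str, text: str) -> None:
    """Append a message to the transcript keyed only by the opaque ID."""
    transcript_store[pseudo_id].append(text)
```

Under this pattern, a quality reviewer granted access to `transcript_store` sees conversation text keyed by random IDs, while the identity mapping stays in a separately secured database.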
Is the data going into Nova stored anywhere other than what's covered by our current contract?
No, the data you provide to Nova is only stored where our current contracts allow, specifically within OpenAI and AWS Ireland (where our platform data is hosted). While we do use other third-party tools, they only process the information temporarily and do not store it long-term.
- OpenAI retains user transcripts for 30 days in the event of a violation of their terms of service. We have a moderation layer in place to prevent any violations, so there is generally no need for OpenAI to review the transcripts. All data stored by OpenAI is deleted after 30 days.
- Historical conversation data is stored within AWS for as long as the user has an Unmind account. For more information, please refer to our privacy policy, specifically the 'How we collect your personal data' section. Once a user's account is closed, all data associated with that user is permanently and irrevocably anonymized after 60 days.
How do we handle the cultural bias that Nova might have?
At Unmind, we take cultural sensitivity seriously. We recently published an in-depth scientific study of the international validity of our Wellbeing Tracker for UK/ANZ/US territories. We also take a proactive approach to ensuring Nova is safe and credible, including having clinicians from our Psychology team review anonymized conversation transcripts and continuously working to make Nova better. We’re always keen to hear from users from different cultures about their experiences with Nova. Please share your feedback here.
Nova is made by a diverse team from around the world, but like any product, it might have some cultural biases. We understand that mental wellbeing is seen differently in different cultures, so we work hard to make sure Nova respects those differences. Our psychologists help guide Nova's responses, considering different cultural views on mental wellbeing.
It's worth noting that Nova learns from a lot of text data, much of which comes from the internet. This data might have its own cultural biases, mainly from English-speaking and Western sources. Sometimes these biases might show up in Nova's responses.
Our goal is to make sure Nova is helpful to everyone, no matter where they're from.
How is Unmind keeping ethics and safety in mind?
Unmind is committed to advancing the safe, ethical, and responsible use of AI.
We believe in the power of AI to make mental health and wellbeing support more accessible, fair, and inclusive, reflecting the diversity of human experiences and cultures. Our rigorous, evidence-based approach is supported by our in-house science team and enhanced by continuous feedback and improvement processes. Unmind emphasizes safety by implementing strict protocols to prevent any potential harm our AI systems might cause, including psychological and societal impacts.
We adopt a science-driven, human-centric strategy, ensuring our practices are grounded in rigorous research and evidence.
We value transparency and accountability, continuously engaging with stakeholders and incorporating feedback to enhance our AI solutions.
We ensure privacy and confidentiality, uphold data protection standards rigorously, and maintain a commitment to continuous improvement through a feedback-informed cycle of measuring, understanding, and acting.
Will data privacy at Unmind change with the release of AI?
There will be no changes to our privacy policy at this time, and we will maintain strong levels of data privacy.
Nova integrates with our existing infrastructure. The data you give to Nova is only stored where our current contracts allow. While we do use other third-party tools, they only process the information temporarily, meaning it's not stored long-term (all data is deleted from OpenAI's systems after 30 days).
You can find this in our privacy policy, Special Category Data clause.
As part of the Unmind application sign-up process, a new user must accept our health and wellbeing data consent box (as well as our terms of use and privacy policy) before accessing/using any part of the app.
What kind of AI certifications does Unmind have?
There are no specific certifications available for compliance with the EU AI Act or the WHO guidelines. Similar to the GDPR, the EU AI Act is a regulatory framework rather than a certification.
Future Certification Plans
- We are considering pursuing ISO 42001, the international standard for AI management systems. Achieving this certification is a long-term goal and may take over six months.
- Once we achieve ISO 42001 certification, it will include compliance with all relevant regulations, including the EU AI Act, and will be influenced by guidelines from expert bodies such as the WHO.
EU AI Act Compliance
Nova is classified as a limited-risk AI system under the EU AI Act. This classification involves specific obligations:
- Transparency:
- We ensure users are informed that they are interacting with an AI system, as required by the EU AI Act.
- Nova clearly communicates that it is an AI tool throughout user interactions.
- Content Disclosure:
- In cases where Nova generates or manipulates content, we disclose that the content is AI-generated to prevent any misunderstanding.
By adhering to these guidelines, we maintain transparency and trust with our users, ensuring compliance with applicable regulations.