Purpose
Welcome to the future of innovation at Unmind. In this guide, we're excited to share how our commitment to safety and responsibility shapes the development and deployment of our AI technologies. Explore how these cutting-edge tools not only align with, but also enhance our company’s purpose and vision, ensuring we advance together in the most secure and ethical way possible.
Guiding Principles
Always be human-centric
We always put people first and are in the business of nurturing and celebrating mental health and wellbeing. We design all our AI systems with a deep understanding of human needs, keeping those needs front and center, and we believe AI should empower people in all aspects of their lives.
- We’re creating a world where mental health is celebrated, and we believe in the power of AI to help us achieve this. In line with that mission, AI should be user-friendly, equitable, accessible, and inclusive of the diversity of human experiences and cultures.
- One of our core values is “Be Human”. We celebrate uniqueness and proactively work to identify and mitigate bias to promote a fair and inclusive user experience for all.
- The use of technology for good underpins our endeavors, and we extend this belief to AI: it should be used for the benefit of humanity, including individuals, organizations, and wider cultures.
- We are aligned with and informed by the WHO Ethical Principles for the use of AI.
Always prioritize safety
Our AI systems will enhance, not diminish, our users’ and others’ mental health and wellbeing. Some highlights from our Mental Health Safety Guidelines for AI Systems include:
- Our AI systems will not engage in, encourage, or suggest ideas or behaviors that might cause societal, financial, lifestyle, psychological, or physical harm or damage.
- Our AI systems are developed to adhere to strict safety guardrails while remaining considerate of, and compassionate toward, sensitive mental health needs.
- We take a multidisciplinary approach to safety, including embedded clinicians and psychological experts within AI projects.
- We are aligned with WHO guidance on the provision and deployment of AI systems, and we endeavor to adhere to local and international legislation and regulations as they are developed and implemented. We only partner with AI developers who also adhere to these guiding principles, our internal values, and high ethical standards.
Always be powered by science
Our in-house science team ensures that everything we do is evidence-based and designed to be genuinely effective.
- In the emerging field of AI, we quickly learn from and implement new practices so that our products and services remain best in class and further our purpose and vision.
- We back this up through the rigorous research conducted by our in-house science team.
Always be learning
Continuous improvement and learning are central to everything we do, and we are committed to both in order to deliver the best products and services.
- We continuously improve AI systems based on user feedback, usage data, scientific advancements, and ethical considerations.
- We implement rigorous testing and ongoing monitoring of our AI systems to prevent potential harm and misuse, and ensure any risks or mistakes are identified and mitigated.
Always be transparent & accountable
We believe transparency and accountability are fundamental to building trust, which in turn is integral for mental health and wellbeing to flourish and is a cornerstone of our work. Our approach to AI is no different.
- We use a blend of in-house and external technologies (including foundation model providers) to ensure a robust, intelligent, and responsive user experience.
- We ensure users are informed about the nature, capabilities, and limitations of the AI system they engage with.
- We provide guidelines on how our AI systems should behave in critical domains (e.g., our Mental Health Safety Guidelines), which helps users distinguish bugs from intended behavior and enables us to streamline fixes if required.
Always be secure & private
We will never share user or client information without consent, and we uphold strict data privacy standards to ensure all your personal and sensitive data is secure and protected. We recognize how important security and privacy are for trust and effectiveness in mental health and wellbeing.
For existing customers, rest assured, AI is fully covered under your existing contract, with no changes to our robust Privacy Policy, maintaining our commitment to your data security.
- We have achieved certification in the ISO 27001 Standard and Cyber Essentials. We also comply with GDPR and HIPAA.
- When we use external proprietary models, data is processed in memory to keep the experience fast and responsive.
- All user data processed by our third-party proprietary models is stored only in memory, deleted after no more than 30 days, and never used to train external models.
Implementation Guidelines
💡 How we are putting these principles into practice:
- AI Steering Group: Senior leadership and key experts meet regularly to review current and future AI projects and uses to ensure they align with the above principles and Unmind’s values.
- Staff Training: All staff are trained and up-skilled in the applied understanding and use of AI, including ethical and responsible use and data protection (both in general and with regard to sensitive mental health information).
- Policy Reviews: We regularly review and update our AI policies to reflect relevant advances in AI technology, ethical practice, mental health, and user needs, and to ensure legislative and regulatory compliance.
- Consent & Autonomy: To build trust and effectiveness, user consent will always be obtained transparently. We give users autonomy over how they engage with AI, including how their data is shared with employers.
- Informed by Science: All AI projects are informed by the latest and best research, and staff are supported to stay up-to-date with emerging research in AI, ethical application of AI, and mental health. Our in-house clinicians and psychological experts are embedded within and work closely with product teams on all AI projects.
- Feedback: We proactively collect feedback and provide a variety of mechanisms for users to give feedback or make complaints, with well-established processes for addressing these promptly. Feedback and scientific evaluation are crucial to our Measure, Understand, Act approach and to providing effective, human-centric solutions.