The tools

Checklist: AI Ethics, Mental Health & Suicide Prevention

How do you implement AI ethically? You could start by asking yourself a series of key questions. This checklist lets you review 38 important ethical challenges to consider when using AI. It has been validated by 16 international experts specifically for the use of AI in mental health care and suicide prevention.

This open-access tool is versatile: most of the items are universal and could be used in other contexts. We hope you find it interesting and useful. Don’t hesitate to contact us if you have any comments. Read this if you want to know more about the checklist, its methodology, and more.

Description of the Autonomous Intelligent System

Objectives Describe your project’s objectives and/or rationale and describe the role and functioning of your Autonomous Intelligent System
Technology Name and describe the technologies and techniques used (e.g. supervised or unsupervised learning, machine learning, random forest, decision tree…). You can refer to the report of the AI Initiative incubated at Harvard http://ai-initiative.org/wp-content/uploads/2017/08/Making-the-AI-Revolution-work-for-everyone.-Report-to-OECD.-MARCH-2017.pdf. Mention the names of any technological intermediary or supplier allowing you to use the technology (e.g. technical provider, cloud provider)
Funding & conflict of interest Indicate all sources of funding for your project (public and private) and who might have an interest (e.g. financial, political) in your Autonomous Intelligent System
Credentials If you have noted that you or someone in your team has an expertise in relation to the Autonomous Intelligent System (e.g. in a document, a webpage, an interview), clearly indicate the name of the professional, their technical, academic or medical credentials, and their training (e.g. “Professor Smith, PhD in computer systems engineering from Harvard University. Specialist in the Online Detection of Depression”)
Target population Describe your target population and its size, or identify its subgroups and their sizes. Describe if and how the target population (and/or its subgroups) assisted in the design of your Autonomous Intelligent System.
Evidence If you made claims about your Autonomous Intelligent System’s efficacy, performance, or benefits, please justify them and provide the evidence underlying them. If you have mentioned or used scientific papers, please cite your sources
Testing If you have run your Autonomous Intelligent System under adversarial examples or worst-case scenarios, describe the type of tests used and their outcomes
Complaints Describe the process whereby users can formally complain or express their concerns about your Autonomous Intelligent System

Privacy and transparency


Responsibility Describe who will be legally accountable for your Autonomous Intelligent System’s actions or decisions
Data collection Describe what data have been collected and used (for the training, evaluation and operational phases), where they are stored, who collected the data, who will have access to the data, and what safeguards are in place to ensure secure storage
Accessibility In all documents or texts, confirm that you have used language adapted to the target users and, when relevant, accommodated any special needs some users may have.
Informed consent State whether you have obtained informed consent and, if so, how, when, and from whom. Describe its nature (formal, implied, renewable, dynamic) and include the exact wording of the consent form. Note whether you have received ethical approval from an institution (e.g. hospital, university) for your consent forms
Consent withdrawal State whether you have specified the duration of the consent and whether you have implemented consent withdrawal mechanisms (e.g. opt-out clause, unsubscribe option). Specify what happens if a user wants to stop using the AIS or delete his or her information
Access to the data State whether an individual can access any data related to him or her and obtain the data in a clear and structured export document. If this is not possible, explain why
Right to be forgotten Describe whether an individual can retrieve and erase all of his or her information, and if so, how. Describe the mechanism
Minors Note whether information concerning minors is used for the Autonomous Intelligent System. If it is, and it is intentionally collected, please indicate whether parental consent is required. If it is, and it is unintentionally collected, please describe what can be done to remove this information
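The consent-withdrawal, data-access, and right-to-be-forgotten items above all require concrete mechanisms in the system's data layer. As a minimal sketch of how such mechanisms might be wired together, assuming a hypothetical in-memory store (all names here are illustrative, not part of the protocol):

```python
# Hypothetical sketch: honouring consent withdrawal, data access,
# and erasure requests. UserStore and its methods are illustrative.

class UserStore:
    """In-memory stand-in for the system's data layer."""

    def __init__(self):
        self.records = {}   # user_id -> personal data
        self.consents = {}  # user_id -> consent still valid?

    def register(self, user_id, data):
        self.records[user_id] = data
        self.consents[user_id] = True

    def withdraw_consent(self, user_id):
        """Opt-out: stop all further processing for this user."""
        self.consents[user_id] = False

    def erase(self, user_id):
        """Right to be forgotten: delete every stored item for this user."""
        self.records.pop(user_id, None)
        self.consents.pop(user_id, None)

    def export(self, user_id):
        """Data access: return the user's data in a structured form."""
        return {"user_id": user_id, "data": self.records.get(user_id)}


store = UserStore()
store.register("u1", {"mood_scores": [3, 5, 4]})
store.withdraw_consent("u1")   # processing must stop here
store.erase("u1")              # all personal data removed
print(store.export("u1"))      # the exported record no longer contains data
```

A real implementation would also have to propagate erasure to backups, logs, and third-party processors, which is exactly what the checklist asks you to describe.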

Security of the Autonomous Intelligent System

Embedded recording mechanism If you have used a technology to monitor and record all your Autonomous Intelligent System’s decisions and actions, detail how and in what circumstances these records could be made available to authorities, external observers or auditors 
Third-parties Indicate who has access to the data (individuals and organizations), and whether identifying information about participants is included in accessible data
Data protection Detail all the measures taken to protect any sensitive and personal information
Audit trails Explain who has access to the data and when
Autonomy Explain if your system has the autonomy to take actions or make decisions on its own. If yes, detail the degree of autonomy of your Autonomous Intelligent System (e.g. partial or complete) 
Moderation Explain if your Autonomous Intelligent System requires human intervention or moderation. If yes, describe who will have access to your Autonomous Intelligent System and what guidelines will regulate their intervention
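The embedded recording mechanism and audit trail items above both presuppose that every decision the system takes is captured somewhere reviewable. A minimal sketch of such a mechanism, assuming a hypothetical append-only log (the function and field names are illustrative):

```python
# Hypothetical sketch of an embedded recording mechanism: an append-only
# log of every decision the system takes, so auditors can review it later.
import json
import time

audit_log = []  # in practice: tamper-evident storage, not a Python list

def record_decision(user_id, action, rationale):
    """Append one timestamped, structured entry per decision."""
    entry = {
        "timestamp": time.time(),
        "user_id": user_id,
        "action": action,
        "rationale": rationale,
    }
    audit_log.append(entry)
    return entry

record_decision("u42", "flag_for_review", "score above risk threshold")
print(json.dumps(audit_log[-1], indent=2))
```

Recording the rationale alongside the action is what later lets authorities or external auditors reconstruct why a decision was made, not just that it happened.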

Health-related risks


Potential Biases associated with the Autonomous Intelligent System 


Ethics If you sought ethics expertise during the design of your Autonomous Intelligent System, detail the parties involved and their contributions
Exclusion & discrimination Explain if there are risks of exclusion or discrimination related to your Autonomous Intelligent System (e.g. based on gender, race, age, religion, politics, health, sexual orientation, etc.)
Stigmatization Describe how you avoided using language, images, and other content that could stigmatize users (e.g. by referring to guidelines on safe media reporting and public messaging about suicide and mental illness)
Detection If applicable, explain any potential detection errors that might be made by your Autonomous Intelligent System (e.g. false positives, false negatives) and estimate their extent (e.g. precision, recall). Describe any potential adverse consequences for users. If applicable, describe any incidental finding made by your Autonomous Intelligent System
Data handling If applicable, describe the nature and purpose of any data manipulation (e.g. cleaning, transformation) and by whom they were performed. Describe what will be done with the metadata
Data selection Describe where the data came from, how you accessed them (e.g. through an API), and whether you think there might be a selection or sampling bias (e.g. data drawn from a single platform’s API, or a spectrum bias)
Data transformation If applicable, describe the nature and purpose of any statistical transformations applied to your data. Describe any potential bias or risk related to the data transformation (e.g. ecological fallacy, confounding factors) 
Other If you have identified other potential methodological or scientific biases, describe them and their potential ethical consequences (e.g. an excessively long consent form could undermine informed consent; a floor effect in the measurements could constrain an Autonomous Intelligent System’s ability to detect a behavior)
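The detection item above asks you to estimate the extent of false positives and false negatives via precision and recall. As a quick illustration with hypothetical counts, both can be computed directly from a confusion matrix:

```python
# Illustration of the detection errors discussed above, using
# hypothetical counts for an at-risk-user classifier.
true_positives = 40   # at-risk users correctly flagged
false_positives = 10  # users flagged who were not at risk
false_negatives = 5   # at-risk users the system missed

# Precision: of those flagged, how many were truly at risk?
precision = true_positives / (true_positives + false_positives)  # 40/50 = 0.8

# Recall: of those truly at risk, how many were flagged?
recall = true_positives / (true_positives + false_negatives)     # 40/45 ≈ 0.889

print(f"precision={precision:.3f}, recall={recall:.3f}")
```

In this context the two error types carry very different costs: a false positive may distress a healthy user, while a false negative means a person at risk goes undetected, so reporting both metrics matters.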

Why Mental Health & AI?

Mental Health has been transformed by the rise of AI and Big Data (Luxton, 2014). Professionals, researchers and companies increasingly use AI to detect at-risk individuals and depressed users, study emotions, increase motivation, improve public health strategies and the list goes on. But as promising as it might be, AI raises many complex ethical challenges, such as the difficulty of obtaining consent or the risk of divulging private information. 

The Development

We synthesized and analyzed over 40 reports, professional guidelines, and key studies on AI and ethics, collecting over 300 mentions of ethical challenges. We deduplicated the items and selected those most relevant to mental health and suicide prevention. We then invited international experts, AI developers, and researchers specialized in ethics, ICT, and health to provide feedback using the Delphi method, an approach commonly used in healthcare to gather expert opinions and reach consensus on a specific topic.

Why a checklist?

Checklists are frequently used in health care for a wide range of purposes: to support clinicians in diagnosis, to verify that a research methodology has been properly implemented, or to improve public health strategies. They are useful because they summarize key recommendations and best practices.

How does it work?

This version of the Canada Protocol is a checklist. It invites you to review 38 key ethical questions when AI is used in the context of Mental Health Care or Suicide Prevention.

You are asked to read each item and, in doing so, review your practices and how your Autonomous Intelligent System (IEEE, 2016) works.