2.7.2 Other Issues of Transparency and Explainability
Several other measures can promote transparency in the mental health and disability context, including:
- Open Procurement (for Government) – Some governments’ enthusiastic embrace of digital mental health technologies makes it important that governments are transparent about their use. The digital rights advocacy organisation Access Now recommends open procurement standards under which governments publish ‘the purpose of the system, goals, parameters, and other information to facilitate public understanding’, as well as a ‘period for public comment’, including reaching out to ‘potentially affected groups where relevant to ensure an opportunity to input’.405 Open procurement rules could be a key mechanism for addressing the risks to accountability discussed in Section 2.2.1 of this report on privatisation and accountability.
- Right to Information – Promoting a right to information would aim to ensure that individuals know about the various aspects of the use of, and their interaction with, algorithmic systems in the mental health context. Several German Federal Ministries have promoted a right to information covering the criteria, objectives and logic of a particular algorithmic decision system, and have extended that to require ‘labelling and publication obligations […] in plain language and [in ways that are] easily accessible’.406 This obligation aligns with the accessibility requirements for persons with disabilities enunciated in the Convention on the Rights of Persons with Disabilities (see below page 87). More broadly, good technology governance requires that terms of service – whether issued by governments or corporations – are accessible, clear and understandable, rather than presented in ‘legalese’ or buried in a mass of information (while acknowledging the limitations of terms of service as an adequate remedy for the broader issues raised in this report).
- Notification when Automated Decisions are Made about an Individual – This point relates specifically to AI, machine learning and other algorithmic decision systems, and is closely related to preserving individuals’ ability to opt out of such systems. Autonomy and the opportunity to consent depend upon a person knowing they are subject to automated decisions. (An example where this did not occur is the automated hiring decision affecting US citizen Mr Kyle Behm, noted in the previous section on ‘Non-Discrimination and the Prevention of Bias’, page 64.) Any automated decision process concerning a person’s mental health or disability should make clear how a person can contact a human, and should ensure that automated decisions can be checked or corrected.407
- Notification when Interacting with Automated Systems – In the mental health and disability context, people should always be made aware when they are engaging with technology rather than directly with another person. People with lived experience of crises, distress and mental health interventions have been very clear in studies that new and emerging technologies in mental health services should emphasise human connection and avoid creating isolation, loneliness and alienation.408 ‘Interacting’ is a key word in this principle: notification should not be limited to automated decisions, which may describe only the moment an action is automated, but should apply to interactions more broadly – for example, a person typing responses to a chatbot. This is not to suggest that chatbots cannot be richly crafted, and ‘weave together code and poetry, emotions and programming,’ as one commentator described it,409 but rather that notification that the chatbot is an automated system should be unambiguous, with clear information on how a person may reach a human where needed.
- Regular Reporting – This point refers to obligations placed on entities that use automated decision systems to disclose that usage.
405 Access Now (n 125), p. 32.
406 German Federal Ministry of Education and Research, the Federal Ministry for Economic Affairs and Energy, and the Federal Ministry of Labour and Social Affairs, ‘Artificial Intelligence Strategy’ (2020), p. 38.
407 European Commission, ‘Artificial Intelligence for Europe: Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee, and the Committee of the Regions’ COM (2018), p. 17.
408 Hollis et al (n 42).
409 ng (n 1).