2.5.2 Ability to Opt-Out of Automated Decision-Making
People must retain the ability to opt out of algorithmic and data-driven approaches that use data concerning their mental health. This point is closely related to issues of consent and transparency (see Section 2.7). Members of the public deserve clarity about when algorithmic technologies are being used to make decisions about them using data concerning their mental health. According to a UK government report, greater clarity ‘will help the public experience the advantages of AI, as well as to opt out of using such products should they have concerns’.367
Opting out will not always be clear-cut, given that people are likely to interact with AI systems in numerous ways. As Fjeld and colleagues write: ‘[people’s] information may be used as training data; they may be indirectly impacted by systemic deployments of AI, and they may be personally subject to automated decisions’.368 Attention is required to what it means to offer meaningful opportunities to exercise choice in opting out of data-driven technologies in mental health contexts.
One risk is that those who opt out of or discontinue interacting with these technologies will be recorded as non-compliant, or will have inferences made about them in ways that are leveraged against their interests. If a healthcare provider, employer, or educational institution uses a mental health and wellbeing program that provides incentives to engage with wearable devices and other forms of biometric monitoring, for example, reasonable steps must be taken to ensure those who choose not to participate are not stigmatised, disadvantaged, or portrayed negatively as non-compliant.
Distorted Constellations by Nwando Ebizie in Science Gallery Melbourne’s MENTAL. Photo by Alan Weedon.
- 367 Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (Report of Session 2017–19, HL Paper 100, 16 April 2018) para [58].
- 368 Fjeld et al (n 85) 54.