Introduction
Urgent public attention is needed to make sense of the expanding use of algorithmic and data-driven technologies in the mental health context. On the one hand, well-designed digital technologies that offer high degrees of public involvement can be used to promote good mental health and crisis support in communities. They can be employed safely, reliably and in a trustworthy way, including to help build relationships, allocate resources, and promote human flourishing.1
On the other hand, there is clear potential for harm. The list of ‘data harms’ in the mental health context – instances in which people are left worse off than they would have been had the activity not occurred – is growing longer.2 Examples in this report include the hacking of psychotherapeutic records and the extortion of victims, algorithmic hiring programs that discriminate against people with histories of mental healthcare, and criminal justice and border agencies weaponising data concerning mental health against individuals. Issues also arise not where technologies are misused or faulty, but where technologies like biometric monitoring or surveillance work as intended, and where the very process of ‘datafying’ and digitising individuals’ behaviour – observing, recording and logging them to an excessive degree – carries inherent harm.
Public debate is needed to scrutinise these developments. Critical attention must be given to current trends in thought about technology and mental health, including the values such technologies embody, the people driving them, and their diverse visions for the future. Some trends – for example, the idea that ‘objective digital biomarkers’ in a person’s smartphone data can identify ‘silent’ signs of pathology, or the entry of Big Tech into mental health service provision – have the potential to create major changes not only to health and social services but to the very way human beings experience themselves and their world. This possibility is also complicated by the spread of ‘fake and deeply flawed’ or ‘snake oil’ AI,3 and the tendency in the technology sector – and indeed in the mental health sciences4 – to over-claim with the promise of a ‘silver bullet’ and, inevitably, under-deliver.
Meredith Whittaker and colleagues at the AI Now research institute observe that disability and mental health have been largely omitted from discussions about AI bias and algorithmic accountability.5 This report brings them to the fore. It is written to promote basic standards of algorithmic and technological transparency and auditing, but it also takes the opportunity to ask more fundamental questions, such as whether algorithmic and digital systems should be used at all in some circumstances – and if so, who gets to govern them.6 These issues are particularly important given the COVID-19 pandemic, which has accelerated the digitisation of physical and mental health services worldwide7 and driven more of our lives online.
0.1 Structure
Part 1 charts the rise of algorithmic and data-driven technology in the mental health context. It outlines issues that make mental health unique in legal and policy terms, particularly the significance of involuntary or coercive psychiatric interventions in any analysis of mental health and technology. The section makes a case for elevating the perspectives of people with lived experience of profound psychological distress, mental health conditions, psychosocial disabilities, and so on, in all activity concerning mental health and technology.
Part 2 looks at prominent themes of accountability. Eight key themes are discussed – fairness and non-discrimination, human control of technology, professional responsibility, privacy, accountability, safety and security, transparency and explainability, and promotion of public interest. International law, and particularly the Convention on the Rights of Persons with Disabilities, is also discussed as a source of data governance.
Case studies throughout show the diversity of technological developments and draw attention to their real-life implications. Many case studies demonstrate instances of harm. This may seem overly negative to some readers. Yet, there is a lack of readily available resources that list real and potential harms caused by algorithmic and data-driven technologies in the mental health and disability context. In contrast, there is an abundance of public material promoting their benefit. This report seeks to rebalance public deliberation and promote a conversation about public good and harm, and what it would take to govern such technological initiatives responsibly. The case studies also seek to ground discussion in the actual agonies of existing technology rather than speculative worries about technology whose technical feasibility is often exaggerated in misleading and harmful ways (for example, Elon Musk’s claim that his ‘AI-brain chips will “solve” autism and schizophrenia’).8
This resource is meant for diverse audiences, including advocates and activists concerned with mental health and disability, service users and those who have experienced mental health interventions and their representative organisations, clinical researchers, technologists, service providers, policymakers, regulators, private sector actors, academics, and journalists.
0.2 How was the Report Written?
This report emerged from a two-year exploration conducted throughout 2020 and 2021. The work was undertaken by the authors, whose backgrounds span media studies, policymaking, law, data ethics, and related fields, and most of whom have had firsthand encounters with mental health services, distress or disability. The report co-ordinator, Piers Gooding, received funding as a Mozilla Fellow in 2020. With Simon Katterl, Piers led the drafting of Parts 1 and 2 of the report, with guidance from the other co-authors. The report recommendations were jointly and equally authored.
- 1 Claudia Lang, ‘Craving to Be Heard but Not Seen – Chatbots, Care and the Encoded Global Psyche’, Somatosphere (13 April 2021) http://somatosphere.net/2021/chatbots.html. Lang describes the potential for tech to ‘weave together code and poetry, emotions and programming, despair and reconciliation, isolation and relatedness in human-techno worlds.’
- 2 Joanna Redden, Jessica Brand and Vanesa Terzieva, ‘Data Harm Record – Data Justice Lab’, Data Justice Lab (August 2020) https://datajusticelab.org/data-harm-record/.
- 3 Frederike Kaltheuner (ed), Fake AI (Meatspace Press, 2021) https://fakeaibook.com (accessed 7/12/2021).
- 4 Anne Harrington, Mind Fixers: Psychiatry’s Troubled Search for the Biology of Mental Illness (W. W. Norton & Company, 2019).
- 5 Meredith Whittaker et al, Disability, Bias, and AI (AI Now, November 2019) 8.
- 6 Frank Pasquale, ‘The Second Wave of Algorithmic Accountability’, Law and Political Economy (25 November 2019) https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/.
- 7 John Torous et al, ‘Digital Mental Health and COVID-19: Using Technology Today to Accelerate the Curve on Access and Quality Tomorrow’ (2020) 7(3) JMIR Mental Health e18848.
- 8 Isobel Asher Hamilton, ‘Elon Musk Said His AI-Brain-Chips Company Could “solve” Autism and Schizophrenia’, Business Insider Australia (14 November 2019) https://www.businessinsider.com.au/elon-musk-said-neuralink-could-solve-autism-and-schizophrenia-2019-11.