2.6.3 Against Hype and ‘Techno-solutionism’
Some technology vendors and clinical experts may presume to have algorithmic and data-driven solutions for mental health care, but it is not always clear whether people impacted by psychological distress actually want or need them.379 The histories of both the mental health and computer sciences are littered with hubris and outlandish claims about scientific solutions to the longstanding and complex problems of human distress and mental ill-health. For psychiatry and neuroscience, grandiose claims in the recent past include the purported discovery of ‘breakthrough’ biological or neurological treatments that will ‘revolutionise’ care.380 For technologists, claims include being able to use AI to ‘solve’ problems ranging from crime and corruption to pollution and obesity.381 Elon Musk’s claim that his ‘AI-brain-chips company could “solve” autism and schizophrenia’382 suggests that these traditions are beginning to merge.
Such overblown claims perpetuate a form of ‘solutionism’. Solutionism, or ‘techno-solutionism’, refers to the flawed belief that every social problem has a technological fix: that simple technical solutions exist for what are in fact highly complex social issues.383 As well as perpetuating the belief that certain issues can be ‘solved’ by technology, this kind of over-hyping can easily lead to over-promising and under-delivering. On an individual level, one consequence could be to shape the treatment preferences and expectations of both individuals and mental health professionals.
When over-hyped initiatives fail, or misrepresent a ‘problem’, individuals may despair that these technological measures have not reduced their distress. On a macro level, overstating the evidence can alter how funding is directed and draw resources away from where they are needed most, increasing the possibility of technology monopolising limited resources.
There is also a risk with techno-utopian or ‘techno-optimist’ approaches that technology-driven solutions and fixes are presented as an unquestioned good. This is not to argue the opposite and reject technological approaches as wholly negative. Rather, it is to caution against presenting any digital initiative in mental healthcare as self-evidently virtuous. Such an altruistic and optimistic picture can shut down important questions about how problems are framed, and about who benefits and who loses as a result.
For example, one widely promoted claim that uncritical optimism can shield from scrutiny is that digital mental health initiatives are cost-effective. There may be some evidence supporting this claim from individual initiatives. However, Jacqueline Sin and colleagues examined claims about cost-savings in a systematic review of ‘entirely web-based interventions that provided screening and signposting for treatment, including self-management strategies, for people with [common mental disorders] or subthreshold symptoms’.384 Many interventions promised a low cost of service relative to face-to-face support, a promise then used to suggest they could be expanded and delivered to larger populations; yet the review found that ‘no data were available regarding estimated cost-effectiveness and only 1 paper included economic modeling’.385 Advocacy organisation Privacy International has likewise argued that there remains little evidence that AI will necessarily lead to more efficient healthcare systems, despite a widespread assumption, boosted by technology vendors, that this will be the case.386
Another commonly held view is that the computational monitoring, measurement and evaluation of people necessarily affords access to knowledge about individuals, including their inner states. On the contrary, computational technology may well get in the way of understanding people, including the unique experience of each new person in crisis or distress who deserves to be heard fully.
The assumption that there is always, or even often, a technological fix for any problem is highly likely to be misplaced when it comes to humane and effective support for people experiencing severe distress, trauma and mental health crises. Hence, there is a need not only to mitigate proven and potential harms, but also to establish standards of evidence robust enough to expose benefits that remain unproven and clouded by hype and solutionism, as noted previously.
- 373 Chelsea Chandler, Peter W Foltz and Brita Elvevåg, ‘Using Machine Learning in Psychiatry: The Need to Establish a Framework That Nurtures Trustworthiness’ (2020) 46(1) Schizophrenia Bulletin 11.
- 374 Ibid.
- 375 Hollis et al (n 42).
- 376 Tom Foley and James Woollard, ‘The Digital Future of Mental Healthcare and Its Workforce: A Report on a Mental Health Stakeholder Engagement to Inform the Topol Review’ (National Health Service (UK), February 2019) 31.
- 377 Simon B Goldberg et al, ‘Mobile Phone-Based Interventions for Mental Health: A Systematic Meta-Review of 14 Meta-Analyses of Randomized Controlled Trials’ (2022) 1(1) PLOS Digital Health e0000002.
- 378 Ibid.
- 379 Carr (n 53).
- 380 Harrington (n 4).
- 381 Morozov (n 11).
- 382 Hamilton (n 8).
- 383 Morozov (n 11).
- 384 Jacqueline Sin et al, ‘Digital Interventions for Screening and Treating Common Mental Disorders or Symptoms of Common Mental Illness in Adults: Systematic Review and Meta-Analysis’ (2020) 22(9) Journal of Medical Internet Research e20581.
- 385 Ibid.
- 386 Privacy International, ‘Our Analysis of the WHO Report on Ethics and Governance of Artificial Intelligence for Health’, Privacy International (20 July 2021) http://privacyinternational.org/news-analysis/4594/our-analysis-who-report-ethics-and-governance-artificial-intelligence-health.