0.3 What Recommendations Does the Report Make?
These broad recommendations seem justified based on the discussion in this report.9
1. The well-documented negative impacts of algorithmic and data-driven technology on people in extreme distress, persons with psychosocial disability, people with lived experience of mental health issues, and so on, need to be openly acknowledged and rectified by governments, business, national human rights institutions, civil society and people with lived experience of distress and disabilities working together.10
2. Authentic, active, and ongoing engagement with persons with lived experience of distress and disability and their representative organisations is required at the earliest exploratory stages in the development, procurement and deployment of algorithmic and data-driven technology that directly impacts them. This engagement is required under the Convention on the Rights of Persons with Disabilities, and is key to technology being a force for good in the mental health context. Instead of more technology ‘for’ or ‘about’ distressed and disabled people and the collection of vast amounts of data to be fed into opaque processes, these groups themselves should be steering discussions on when and how emerging technologies should be integrated into mental health and crisis responses – if at all. Genuine partnership and engagement with people with lived experience should include compensation for their time and real decision-making power that counteracts tokenisation and minimal involvement.
3. ‘Techno-solutionism’11 must be resisted – the presentation of digital initiatives in the mental health context as self-evidently virtuous and effective, and as a simple fix to the complex issues of human distress, anguish and existential pain. Proven and potential harms must be squarely acknowledged, and so must the fact that many claimed benefits remain unproven. Technology is not neutral. When new technologies are presented as technocratic and apolitical, this overlooks the significant role of human decision-making, power, finance, and social trust, which should be part of public discussion. Fundamental questions must be asked as to whether certain systems should be built at all, whether proposals are technically feasible (or merely unrealistic and over-hyped), and – if they are to be pursued – who should govern them.
4. Given the limited (and sometimes highly limited) evidence base for many algorithmic and data-driven technologies in the mental health context, standards of scientific integrity are required, developed with the active involvement of people with lived experience and disability, to serve as a mechanism for consensus. This involvement can help limit the many sensational and misleading claims about what AI and other algorithmic technology can achieve and curb their use as cheap alternatives to well-resourced face-to-face support. Government funding for digital initiatives in the mental health and disability context should be conditional on stringent evidence of safety and efficacy, and on compliance with disability-inclusive public-procurement standards.
5. There should be an immediate cessation of all algorithmic and data-driven technological interventions in the mental health context that have a significant impact on individuals’ lives and are imposed without the free and informed consent of the person concerned. Regarding algorithmic forms of diagnosis or proxy-diagnosis, the consequences of being diagnosed and pathologised in the mental health context, whether accurately or not, are often profound. Such measures should never be undertaken without the free and informed consent of the person. Among other things, informed consent processes should provide explicit details of data safety and security measures, and clarify who shall monitor compliance.
6. Governments, private companies, not-for-profits, and so on must, at a minimum, eliminate forms of mental health- and disability-based bias and discrimination from algorithmic and data-driven systems, particularly in areas such as employment, education and insurance. Such steps should extend to preventing discrimination against people who are marginalised across intersecting lines of race, gender, sexual orientation, class, and so on. Those facing discrimination must have access to an effective and accessible remedy, such as a clear avenue for complaints and legal review.
7. Ethical standards will never be enough. Robust legal and regulatory frameworks are required that acknowledge the risks of employing algorithmic and data-driven technologies in response to distress, mental health crises, disability support needs, and so on. As part of this, a legal and regulatory framework is required that effectively prohibits systems that by their very nature will be used to cause unacceptable individual and social harms and infringe human rights. This could include:
a. mandatory, publicly accessible, and contestable impact assessments for forms of automation and digitisation to determine the appropriate safeguards, including the potential for prohibiting uses that infringe on fundamental rights;
b. proportionality testing of risks against any potential benefits, ensuring opportunities to interrogate the objectives, outcomes and inherent trade-offs involved in using algorithmic systems, in a way that centres the interests of the affected groups, not just the entity using the system, such as a healthcare service or technology start-up;12
c. strengthening non-discrimination rules concerning mental health and psychosocial disability to prevent harms caused by leaked, stolen, or traded data concerning mental health and disability.13
8. Public sector accountability needs to be strengthened, including by adequately resourcing relevant institutions, which will be vital to addressing the dangers of private sector actors, not-for-profits and government agencies that (mis)use people’s data concerning mental health. This includes developing a willing and empowered state regulatory framework, as well as resources for affected people and civil society organisations to contribute proactively to enforcement. It also includes supporting the capacity-building of representative organisations of service users and persons with disabilities to effectively monitor the impact of data-driven technology on persons with lived experience of mental health crises or disability. Monitoring could include advocating for responsible and inclusive data-driven technology, interacting effectively with all key actors including the private sector, and highlighting harmful or discriminatory uses of the technology.14
9. Robust civil society responses are more likely where lived experience groups and disabled people and their representative organisations connect with other activists at the intersections of race, gender, class, and other axes of oppression, rather than viewing algorithmic and data-driven injustices purely through a mental health or disability lens. This could include collectives, nonprofit technology organisations, free and open source projects, philanthropic funders and activists with data practices and skills that help them more fully realise their missions. Those working for economic, social, racial and climate justice can share digital tools, resources and practices to help maximise their effectiveness and impact and, in turn, change the world.15
10. Interdisciplinary academic input is needed beyond disciplines like medicine, psychology, computer science and engineering, to include researchers from the humanities and social sciences. This will help address the common presentation of algorithmic and data-driven technologies as neutral – as facilitating factual, unmediated digital processing. This technocratic framing neglects matters including the significant role of power, the social and economic underpinnings of distress, unjust macroeconomic structures, Big Tech hegemony, and so on.
11. Steps must be taken to prevent the undercutting of face-to-face encounters of care and support, particularly where private sector interests are expanding into digitised responses to distress or care, and particularly where governments are pursuing digital options as cheap alternatives to well-resourced forms of support. Relations of care and support must be adequately recognised and protected. The over-emphasis on metrics and computational approaches should be resisted in appreciation of the virtues that make for a truly human life.
Echo by Georgie Pinn in Science Gallery Melbourne’s MENTAL. Photo by Alan Weedon.
- 9 An important caveat is that we are not a representative body. The authors are based in high-income countries, as the initial scope of this project looked to regulatory arrangements in the EU and US. The recommendations are not meant to be exhaustive, nor should they foreclose other strategies and recommendations, particularly by persons with lived experience of mental health crises, disabled people, and their representative organisations.
- 10 This recommendation draws from the 2021 thematic report on artificial intelligence and disability by the UN Special Rapporteur on the Rights of Persons with Disabilities, Gerard Quinn. Human Rights Council, Report of the Special Rapporteur on the Rights of Persons with Disabilities (UN Doc A/HRC/49/52, 28 December 2021) para 73, https://undocs.org/pdf?symbol=en/A/HRC/49/52.
- 11 Evgeny Morozov coined this term to describe a pervasive ideology that recasts complex social phenomena like politics, public health, education, and law enforcement as “neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!” Evgeny Morozov, To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems That Don’t Exist (Penguin UK, 2013) 5.
- 12 Alexandra Givens, ‘Algorithmic Fairness for People With Disabilities: The Legal Framework’ (Georgetown Institute for Tech Law & Policy, 27 October 2019) https://docs.google.com/presentation/d/1EeaaH2RWxmzZUBSxKGQOGrHWom0z7UdQ/present?ueb=true&slide=id.p1.
- 13 Mason Marks, ‘Algorithmic Disability Discrimination’ in Anita Silvers et al (eds), Disability, Health, Law, and Bioethics (Cambridge University Press, 2020) 242.
- 14 Human Rights Council, ‘Report of the Special Rapporteur on the Rights of Persons with Disabilities’ (n 10) para 76(g).
- 15 Language for this recommendation is borrowed from ‘Aspiration Manifesto | Aspiration’ https://aspirationtech.org/publications/manifesto. This recommendation comes with a call for caution about the potential misalignment of non-profit organisations with desired aims, and we draw attention to calls to break away from the non-profit industrial complex, including turning toward potential alternatives such as grassroots movements and worker self-directed non-profits that aim to improve the accountability and participatory nature of social movements. Jake Goldenfein and Monique Mann, ‘Tech Money in Civil Society: Whose Interests Do Digital Rights Organisations Represent?’ (2022) 0(0) Cultural Studies 1.