2.9 International Human Rights
Many people and organisations have supported international human rights law as a basis for regulating algorithmic and data-driven technologies.438 Although not without critics,439 proponents view human rights as a helpful organising framework for the design, development and use of new technologies. This includes offering factors that governments and businesses should consider in order to avoid violating human rights.440 Lorna McGregor and colleagues list some of the many human rights engaged in the growing information economy:
- automated credit scoring can affect employment and housing rights; the increasing use of algorithms to inform decisions on access to social security potentially impacts a range of social rights; the use of algorithms to assist with identifying children at risk may impact upon family life; algorithms used to approve or reject medical intervention may affect the right to health; while algorithms used in sentencing decisions affect the right to liberty.441
In each of these examples, data concerning mental health may be decisive to high stakes decisions. For example, a person may be ‘red-lighted’ in automated credit scoring systems or in social security determinations due to data generated by mental health services or inferred based on data suggesting a person is experiencing distress. The same data might be used to assess risk ascribed to a person in relation to child protection, insurance, criminal sentencing, and so on.
Human rights violations persist against people with psychiatric diagnoses and psychosocial disabilities across low-, middle- and high-income countries. In 2019, Dainius Pūras, then UN Special Rapporteur on the right of everyone to the enjoyment of the highest attainable standard of physical and mental health, commented on the ‘global failure of the status quo to address human rights violations in mental health-care systems’.442 He argued that this failure ‘reinforces a vicious cycle of discrimination, disempowerment, coercion, social exclusion and injustice’, including in the very systems designed to ‘help’. Pūras briefly raised concerns about the impact of expanding surveillance technologies on the right to health, and warned against technologies that ‘categorize an individual for commercial, political or additional surveillance purposes’.443 He did not elaborate on the rise of algorithmic and data-driven technologies in mental health settings in general, and indeed there is little research on the human rights implications of these developments.444
In 2022, the UN Special Rapporteur for the rights of persons with disabilities, Gerard Quinn, published a thematic study on artificial intelligence and its impact on persons with disabilities.445 The report was delivered to the 49th session of the Human Rights Council, and takes a human rights lens to the ways AI, machine learning and other algorithmic technologies can both enhance and threaten the rights of disabled people worldwide.
Some fundamental rights that are relevant here include, but are not limited to: the prohibition of discrimination, the right to privacy, freedom of expression, the right to health, the right to a fair trial, and the right to an effective remedy. The Convention on the Rights of Persons with Disabilities is the most relevant international human rights instrument, given its role in applying established human rights norms to the disability context, and given the strong involvement of people with lived experience of psychosocial disability in its development. Relevant sections of the Convention on the Rights of Persons with Disabilities include:
- Article 4 creates an obligation on states to ‘eliminate discrimination on the basis of disability by any person, organization or private enterprise’ (art. 4.1 (e)). As the Special Rapporteur on the rights of persons with disabilities, Gerard Quinn, notes, ‘[t]hat certainly engages the regulatory responsibilities of Governments vis-à-vis the private sector when it comes to the development and use of artificial intelligence’.446
- Article 5 prohibits disability-based discrimination
- Article 8 requires States to educate the private sector (developers and users of artificial intelligence), as well as the public sector and State institutions that use AI and other forms of algorithmic technology, in full collaboration with disabled people and artificial intelligence experts, on their obligation to provide reasonable accommodation. ‘Reasonable accommodation’ means necessary and appropriate modification and adjustments where needed in a particular case, to ensure that persons with disabilities can enjoy all human rights and fundamental freedoms on an equal basis with others
- Article 9 imposes an obligation on states to promote the design and development of accessible information technologies ‘at an early stage’ (art. 9.2 (h)). This provision, according to Quinn, ‘hints at a robust responsibility of the State to appropriately incentivize and regulate the private sector’.447
- Article 12 concerns equal recognition before the law. This article would be engaged, for example, by algorithmic risk assessments in criminal justice proceedings that incorporated a person’s mental health history.
- Article 14 guarantees liberty and security of the person and prohibits disability-based deprivations of liberty. As with Article 12, this provision would be engaged where actuarial risk-assessments are used in justifying and facilitating indefinite and preventative detention of particular individuals;448 but it may also be engaged where electronic monitoring systems – whether in the criminal justice context or in ‘care’ services – are used in ways that amount to a deprivation of liberty.
- Article 17 states that ‘[e]very person with disabilities has a right to respect for his or her physical and mental integrity on an equal basis with others’, a provision that would be engaged where digital initiatives may threaten or enhance that integrity
- Article 19 concerns ‘living independently and being included in the community’ and requires ‘access to a range of in-home, residential and other community support services, including personal assistance necessary to support living and inclusion in the community, and to prevent isolation or segregation from the community’. Efforts to build connection via digital technologies could help to promote this right (for example, assisting a person who needs to stay at home to connect with others online). Equally, efforts that inadvertently isolate or segregate might violate it (for example, where people with fewer resources can only access online rather than face-to-face support; or where residential facilities for persons with disabilities impose alienating surveillance technologies that displace expert human care and support)
- Article 22 concerns respect for privacy, and states that ‘[n]o person with disabilities … shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence or other types of communication…’ and that ‘[p]ersons with disabilities have the right to the protection of the law against such interference or attacks’. Further, governments must ‘protect the privacy of personal, health and rehabilitation information of persons with disabilities on an equal basis with others’.
- Article 25 sets out the right to health, directing that States Parties shall ‘[r]equire health professionals to provide care of the same quality to persons with disabilities as to others, including on the basis of free and informed consent’. Article 25(d) broadens the promotion of free and informed consent to include an obligation on States Parties to raise ‘awareness of the human rights, dignity, autonomy and needs of persons with disabilities through training and the promulgation of ethical standards for ... health care’. Subsection (e) prohibits ‘discrimination against persons with disabilities in the provision of health insurance, and life insurance where such insurance is permitted by national law, which shall be provided in a fair and reasonable manner’.
There is also a potential role for data-driven and algorithmic technologies in preventive monitoring to promote and protect human rights. Preventive monitoring of closed environments, like psychiatric wards, aged care homes, and disability residential facilities, for example, can help to promote Articles 16 (freedom from exploitation, violence and abuse) and 33 (national implementation and monitoring), as the following case study suggests.
CASE STUDY: Rights-Based Monitoring – Preventing Harmful Prescription
In 2018, Lisa Pont and colleagues developed computer software to analyse routinely collected prescribing data in order to monitor medicine use in 71 Australian residential aged care facilities.449 The aim was to prevent prescribing errors and medication misuse. A major concern was the excessive prescription of psychiatric drugs as a form of ‘chemical restraint’ or tranquilisation of aged care residents. Pont and colleagues’ public data initiative successfully detected high rates of psychopharmaceutical drug use in some facilities that could not be easily explained, flagging the need for regulatory investigation.
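Pont and colleagues’ software is not published in this report, but the underlying logic of this kind of rights-based monitoring can be sketched simply: compare each facility’s prescribing rate against its peers and flag statistical outliers for human review. The facility names, rates and the flagging threshold below are hypothetical illustrations, not figures from the study:

```python
# Illustrative sketch only: flag aged care facilities whose antipsychotic
# prescribing rate is an outlier relative to peer facilities. All names,
# rates and the z-score threshold are hypothetical, not from Pont et al.
from statistics import mean, stdev

# Hypothetical facility-level rates: share of residents prescribed
# antipsychotics, derived from routinely collected dispensing records.
rates = {
    "Facility A": 0.18,
    "Facility B": 0.22,
    "Facility C": 0.21,
    "Facility D": 0.47,  # unusually high: candidate for investigation
    "Facility E": 0.19,
}

def flag_outliers(rates, z_threshold=1.5):
    """Return facilities whose rate exceeds the peer mean by z_threshold SDs."""
    mu = mean(rates.values())
    sd = stdev(rates.values())
    if sd == 0:  # all facilities identical: nothing to flag
        return []
    return [name for name, r in rates.items() if (r - mu) / sd > z_threshold]

print(flag_outliers(rates))  # → ['Facility D']
```

A flag here is not a finding of wrongdoing: as in the case study, it marks an unexplained pattern that warrants regulatory investigation by a human decision-maker.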
More research is needed to identify the range of human rights engaged by algorithmic and data-driven technology in the mental health and disability context,450 and ways that such technologies can promote and protect, rather than threaten, human rights. Disability-inclusive research is critical to realising this aim. Research that actively involves people with disability in the development, design and implementation of technology – as well as its governance – will help to ensure technology is enabling rather than further disabling.
‘Human rights by design’ represents an emerging approach to design that ensures human rights are built into all elements of technology and AI development.451 The Oxford Handbook of AI Ethics identifies four pillars to human rights by design:452
- 1. Design and deliberation – the systems should be designed in ways that are compatible with human rights, and should include public consultations to properly identify any human rights risks and mitigation strategies
- 2. Assessment, testing and evaluation – technologies should be assessed, tested and evaluated, in an ongoing manner, against human rights principles and obligations
- 3. Independent oversight – systems should be subject to independent oversight, with powers of investigation and sanction where human rights are violated
- 4. Traceability, evidence and proof – systems should be designed so that their operation can be traced and evidenced, enabling accountability and redress
Human rights by design could be pursued by governments and civil society actors, including technology developers and businesses.
The UN has also developed Guiding Principles on Business and Human Rights (‘Guiding Principles’), which are relevant here given the prominent and expanding role of the private sector in generating and processing data concerning mental health and disability.453 For example, Principle 5 sets out a special duty of governments to protect against human rights abuses when they contract with private businesses. Advocacy group Access Now have drawn on this principle in calling for open government procurement, recommending that:
- When a government body seeks to acquire an AI system or components thereof, procurement should be done openly and transparently according to open procurement standards. This includes publication of the purpose of the system, goals, parameters, and other information to facilitate public understanding. Procurement should include a period for public comment, and states should reach out to potentially affected groups where relevant to ensure an opportunity to input.454
The International Labour Organisation has developed guidance and tool kits, as well as establishing an ILO Global Business and Disability Network,455 which may assist businesses and others to uphold the Guiding Principles.
Returning to human rights more generally, the potential benefits of adopting a human rights lens include the following:
- Linking harms and benefits of algorithmic and data-driven technology to particular rights, and identifying mechanisms for redressing violations or promoting particular rights;
- Engaging with the broad global movement of disabled people that organises around the CRPD to help address harms and promote benefits arising from data concerning mental health and other disabilities;
- Strengthening calls to ensure active involvement of disabled people in laws, policies and programs that concern them, in keeping with the long-standing slogan of the global disability movement, ‘Nothing about us without us’ (and more generally building the power of marginalised groups); and
- Engaging with international bodies, such as the World Health Organisation, UN human rights treaty bodies and UN Special Rapporteurs, that hold influence over policies, guidelines and other regulatory frameworks at the international, regional and national levels. National human rights institutions may also prove important.
Doing Nothing with AI by Emanuel Gollob in Science Gallery Melbourne’s MENTAL. Photo by Alan Weedon
- 437 World Health Organization (n 276) xi.
- 438 Access Now (n 253); Lorna McGregor, Daragh Murray and Vivian Ng, ‘International Human Rights Law as a Framework for Algorithmic Accountability’ (2019) 68(2) International & Comparative Law Quarterly 309; Algorithm Charter for Aotearoa New Zealand 2020.
- 439 There are important critiques of a human rights approach to the issues raised by algorithmic and data-driven technologies (see eg Floridi, 2010), including critiques of its underlying liberal ideas as being ill-equipped to challenge the supremacy of private corporations over individuals in the age of Big Data. Sebastian Benthall and Jake Goldenfein, Data Science and the Decline of Liberal Law and Ethics (SSRN Scholarly Paper No ID 3632577, Social Science Research Network, 22 June 2020) https://papers.ssrn.com/abstract=3632577. It is outside the scope of this paper to enter these important debates. Instead, this report is premised on the belief that an organised public can create space to express authentic concern for individual and group rights, which can effect institutional change. This does not preclude the need to seek other organising principles, nor to engage seriously with critics of human rights.
- 440 McGregor, Murray and Ng (n 438).
- 441 Ibid.
- 442 Human Rights Council, ‘Report of the Special Rapporteur on the Right of Everyone to the Enjoyment of the Highest Attainable Standard of Physical and Mental Health’ (n 36) para 82.
- 443 Ibid. para 76.
- 444 For notable exceptions, see Cosgrove et al (n 100); Bernadette McSherry, ‘Risk Assessment, Predictive Algorithms and Preventive Justice’ in John Pratt and Jordan Anderson (eds), Criminal Justice, Risk and the Revolt against Uncertainty (Springer International Publishing, 2020) 17 https://doi.org/10.1007/978-3-030-37948-3_2.
- 445 Human Rights Council, ‘Report of the Special Rapporteur on the Rights of Persons with Disabilities’ (n 10).
- 446 Ibid.
- 447 Ibid. para 37.
- 448 McSherry (n 444).
- 449 Lisa G Pont et al, ‘Leveraging New Information Technology to Monitor Medicine Use in 71 Residential Aged Care Facilities: Variation in Polypharmacy and Antipsychotic Use’ (2018) 30(10) International Journal for Quality in Health Care 810.
- 450 Whittaker et al (n 5).
- 451 Australian Human Rights Commission, Human rights and technology: final report (2021), 91-92.
- 452 Karen Yeung, Andrew Howes and Ganna Pogrebna, ‘AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing’ in Markus D Dubber, Frank Pasquale and Sunit Das (eds), The Oxford Handbook of AI Ethics (Oxford University Press, 2020) 77, cited in Australian Human Rights Commission, Human rights and technology: final report (2021), 92.
- 453 Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework (Guiding Principles), UN Doc. HR/PUB/11/04 (2011), available at www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf. Some commentators have called for a more robust treaty affecting the private sector – rather than just guidelines – with greater enforcement measures. D Bilchitz, ‘The necessity for a business and human rights treaty’ (2016) 1(2) Business and Human Rights Journal 203-227.
- 454 Access Now (n 242), p.32.
- 455 See www.ilo.org/global/topics/disability-and-work/ (accessed 21/12/21).