2.5 Human control of technology
Important decisions concerning mental health that are made with algorithmic technology must be subject to human control and review. Human control over technology is key to other themes discussed in this report, including maintaining safety and security, accountability, transparency, equity and non-discrimination.
The advance of algorithmic and data-driven technologies is often presented as inevitable, with thought leaders in computer science depicting automation as a force of nature propelled by unstoppable technological change.357 This view risks shifting autonomy and responsibility for such systems away from humans and onto some higher, abstract authority. Even the term ‘artificial intelligence’ can imply an external intelligence that is somehow ‘out there’. Relinquishing accountability in this way is sometimes described as the ‘substitution effect’,358 which makes it harder to ensure human control over technologies by those who design and implement them, and by those who use or are subject to them.
Civil society groups have successfully challenged the introduction of some types of AI, such as facial recognition. These groups have insisted that such technologies are not inevitable and have demonstrated the power of public input to change the direction of, or even halt, the use of certain technologies. Similar examples are emerging in the mental health context (see Section 1.5).
Emphasising human control is particularly important in the mental health context given the risk that algorithmic technologies like AI become a ‘substitute decision-maker’ over both professionals and those receiving care and support, undermining individuals’ agency and autonomy.359 One ethical risk of professional decision-support technology in healthcare is that clinicians defer to algorithmic suggestions even when they hold a contrary opinion. Overreliance on automated systems may therefore displace human agency, moral responsibility, liability and other forms of accountability. Even the use of ‘algorithm’ to describe a decision-making system has been characterised by some advocates as ‘often a way to deflect accountability for human decisions’.360
More broadly, there is a risk of normalising and accepting technologies which reinforce potentially inaccurate, unhelpful or harmful categorisations of people’s mental states (as discussed in the biometric monitoring section of this report).
CASE STUDY: Covert and Commercial Automated ‘Narcissism or Psychopathy’ Testing
Airbnb has reportedly claimed to be able to use computer modelling to determine whether a customer displays ‘narcissism or psychopathy’.361 The determination is reportedly aimed at screening out undesirable platform users, specifically prospective tenants who may damage landlords’ property.362
This case study raises questions about the appropriateness (and lawfulness) of certain claims made about individuals, including whether it is appropriate to claim that behavioural sensing can ‘reveal’ an underlying mental state or diagnosable disorder through ‘silent digital biomarkers’. The claim rests on a presumption that computer technology, combined with psychometrics, can capture reality. Even the common claim that various technologies ‘collect’ data suggests a neutral, objective information-gathering process. It seems more accurate to say that new forms of data concerning mental health are being created and generated. This creation and generation is not value-neutral; it is value-laden and often rests on multiple social and political claims that may be unstated. The nature of this data-generation, the meaning given to different types of data, and the often uncritical presentation of these methods as neutral forms of data ‘collection’ all warrant critical scrutiny, whether in technology sales materials, media coverage, government policy documents, or leading scholarly journals.
Another risk is that the push to generate ever more data to improve algorithmic and data-driven solutions can undermine human action and control by distracting from the need to change existing mental health services, policies and practices. An extraordinary amount of energy can go into generating, curating, storing and using data. This can displace action on information that already exists, particularly information highlighting problems of resourcing, discrimination, coercive practices, and other prominent issues in the politics of mental health.363
- 357 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2016).
- 358 Ian Kerr and Vanessa Gruben, ‘AIs as Substitute Decision-Makers’ (2019) 21 Yale Journal of Law & Technology 78, 80.
- 359 Ibid.
- 360 Kristian Lum and Rumman Chowdhury, ‘What Is an “Algorithm”? It Depends Whom You Ask’, MIT Technology Review (26 February 2021) https://www.technologyreview.com/2021/02/26/1020007/what-is-an-algorithm/.
- 361 Aaron Holmes, ‘Airbnb Has Patented Software That Digs through Social Media to Root out People Who Display “Narcissism or Psychopathy”’, Business Insider Australia (7 January 2020) https://www.businessinsider.com.au/airbnb-software-predicts-if-guests-are-psychopaths-patent-2020-1.
- 362 Ibid.