2.2 Accountability
– Sarah Carr252
Accountability for the impacts of algorithmic systems in mental health and disability contexts must be appropriately distributed, with adequate remedies in place. Accountability measures are needed to ensure opportunities to interrogate the objectives, outcomes and inherent trade-offs involved in using algorithmic systems, and to do so in a way that centres the interests of the user-subject and the broader public, not just the entity using the system.253 The appropriate attribution of responsibility and redress is vital not only for the individuals who are affected but also for public trust in technology-driven solutions.
CASE STUDY: Biometric Monitoring and Cognitive Impairment
In 2019, a group of researchers analysed 12 weeks of phone usage data from 113 older adults and were reportedly able to reliably identify which users had cognitive impairment by isolating aspects of phone usage that ‘strongly relate with cognitive health’.254 The authors reported on their capacity to draw from the ‘rich source of information about a person’s mental and cognitive state’ and use it to ‘discriminat[e] between healthy and symptomatic subjects’.255
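To make concrete how little specialised infrastructure such monitoring requires, the sketch below shows a generic pipeline of the kind gestured at in the case study: aggregate phone-usage features feeding a standard classifier that ‘discriminates’ between groups of users. The feature count, synthetic data and model choice are assumptions made purely for illustration; they are not the study’s actual method.

```python
# Illustrative sketch only: a hypothetical pipeline of the general kind described
# in the case study, in which aggregate phone-usage features are used to train a
# classifier that distinguishes 'healthy' from 'symptomatic' users.
# The features, data and model are assumptions, not the study's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-user features aggregated over 12 weeks of phone usage,
# e.g. mean typing speed, daily screen-unlock count, app-switching rate.
n_users = 113
X = rng.normal(size=(n_users, 3))
# Hypothetical labels: 1 = designated 'cognitively impaired', 0 = not.
y = rng.integers(0, 2, size=n_users)

# A standard off-the-shelf classifier stands in for whatever model such a
# study might use; the point is how ordinary the machinery is.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC on synthetic data: {scores.mean():.2f}")
```

The point of the sketch is not the model but its ordinariness: the accountability questions that follow do not turn on any exotic technology.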
This type of case study raises important questions about accountability, questions that could be generalised to any biometric monitoring concerning cognitive impairment and disability.
What must researchers do to consider the impact of the initiative, including potential harms to those designated as ‘cognitively impaired’? Should the methods for such monitoring be widely shared, given the ubiquity of mobile phone use in many parts of the world and the apparent ease with which private companies can collect data that ‘strongly correlate with cognitive health’? If the biometric monitoring used in the study were applied outside experimental conditions, what safeguards would be in place to allow those designated as impaired to contest that designation before it was transferred to other entities? If such technologies were deployed on a larger scale, who would be responsible in the event of an adverse outcome? For example, if an app collecting sensitive personal data relating to people’s cognitive status inadvertently released the data to a third party because it malfunctioned, or was compromised by a security flaw, who would be responsible: the company that owns the app, the individual programmer(s) who made the error, or the service that recommended or even prescribed the app?
These are among the questions that may be asked about accountability. Public efforts to promote accountability tend to suggest that different strategies are needed at different stages in the ‘lifecycle’ of algorithmic and data-driven systems, particularly during design (pre-deployment), monitoring (during deployment), and redress (after harm has occurred).256 Possible strategies include:
Design
Environmental responsibility. The ecological impact of algorithmic and data-driven technologies may seem unrelated to this report. However, the environmental toll of data-driven technologies on the planet260 can be tied to the importance of healthy ecologies in human (mental) life,261 and it constitutes an important accountability issue for those who design and deploy these systems.
Monitoring
Evaluation and auditing requirements. Minimum evaluation and auditing requirements are needed to ensure both that technologies are built in a way that is capable of being audited, and that lessons from feedback and evaluation can be used to improve systems. Some proposals include ensuring that ‘systems that have a significant risk of resulting in human rights abuses [can be subject to] independent third-party audits’;262 other approaches focus on making datasets and processes available to a range of actors who can help identify possible flaws and room for improvement.263 A minimal illustration of one form such an audit check might take appears after this list.
Ability to appeal. Individuals or groups who are the subject of decisions made using algorithmic and data-driven technologies in the mental health or disability context require mechanisms to challenge those decisions. Access Now has argued that the ability to appeal should encompass both challenging the use of an algorithmic system in the first place and appealing a decision that has been ‘informed or wholly made by an AI system’.264
Redress
Remedy for automated decision-making. As with the ability to appeal, remedies should be available for the operation of algorithmic and data-driven technology, just as they are for the consequences of human actions. Remedy typically follows from the ability to appeal, since it allows the consequences of a decision to be rectified. Various proposals exist, which often distinguish between the roles of private companies and states in ensuring a process of redress, compensation, sanctions and guarantees of non-repetition.265
Creating new regulations. There seems to be broad consensus on the need to address inadequacies in existing regulatory frameworks. Yet reform proposals vary enormously within and between countries, and across sectors from healthcare to ad tech. More deliberative work is needed to ensure that new regulations address the issues raised in the mental health and disability context.
These strategies may overlap across the categories of design, monitoring and redress.
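By way of illustration only, the following sketch shows one very simple form that an independent audit check on an automated decision system might take: comparing rates of adverse decisions across groups and flagging disproportionate outcomes. The group labels, decision log and disparity threshold are hypothetical assumptions, not a prescribed auditing standard.

```python
# Illustrative sketch only: a minimal audit-style check comparing rates of
# adverse automated decisions across two groups of subjects. The groups,
# decisions and disparity threshold are hypothetical assumptions.
from collections import defaultdict

def adverse_rate_by_group(decisions):
    """decisions: iterable of (group_label, is_adverse) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [adverse count, total count]
    for group, is_adverse in decisions:
        counts[group][0] += int(is_adverse)
        counts[group][1] += 1
    return {group: adverse / total for group, (adverse, total) in counts.items()}

def flag_disparity(rates, max_ratio=1.25):
    """Return True if the highest adverse-decision rate exceeds the lowest by more than max_ratio."""
    lowest, highest = min(rates.values()), max(rates.values())
    return lowest > 0 and highest / lowest > max_ratio

# Hypothetical audit log of automated decisions about service users.
log = [
    ("disabled", True), ("disabled", True), ("disabled", False),
    ("non-disabled", False), ("non-disabled", True), ("non-disabled", False),
]
rates = adverse_rate_by_group(log)
print(rates)
print("Disparity flagged:", flag_disparity(rates))
```

Even a check this simple presupposes that the deploying entity logs its decisions in an auditable form, which is precisely what minimum evaluation and auditing requirements aim to guarantee.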
A common point in efforts to achieve accountability is the need to avoid placing accountability on the technology itself rather than on those who design, develop and deploy it. Governments, companies and their business partners, researchers, developers, and users/subjects will have varying degrees of responsibility for harms depending on context. There is a vital role for individuals, advocates, and technical experts in flagging errors and demonstrating the adverse effects of various new algorithmic technologies, but there need to be forums and institutional mechanisms for these concerns to be raised and, where necessary, acted upon.
- 252 Carr (n 53).
- 253 Alexandra Givens, ‘Algorithmic Fairness for People with Disabilities: The Legal Framework’ (Georgetown Institute for Tech Law & Policy, 27 October 2019) https://docs.google.com/presentation/d/1EeaaH2RWxmzZUBSxKGQOGrHWom0z7UdQ/present?ueb=true&slide=id.p17&usp=embed_facebook.
- 254 Rauber, Fox and Gatys (n 163).
- 255 Ibid.
- 256 Fjeld et al (n 190).
- 257 Access Now (n 187); Australian Human Rights Commission, Human Rights and Technology - Final Report (Australian Human Rights Commission, 2021) https://tech.humanrights.gov.au/sites/default/files/2021-05/AHRC_RightsTech_2021_Final_Report.pdf; N Götzmann (ed), Handbook on Human Rights Impact Assessment (Edward Elgar Publishing, 2019).
- 258 Fjeld et al (n 190).
- 259 Treasury Board of Canada Secretariat, ‘Algorithmic Impact Assessment Tool’ (guidance, 22 March 2021) https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html.
- 260 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021).
- 261 Nikolas Rose, Rasmus Birk and Nick Manning, ‘Towards Neuroecosociality: Mental Health in Adversity’ [2021] Theory, Culture & Society 0263276420981614.
- 262 Access Now promotes the incorporation of ‘a failsafe to terminate acquisition, deployment, or any continued use if at any point an identified human rights violation is too high or unable to be mitigated’. Amnesty International and Access Now, ‘The Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems’, Toronto Declaration (2018) https://www.torontodeclaration.org/declaration-text/english/.
- 263 Fjeld et al (n 189) p.32.
- 264 Ibid p.33; cited in Fjeld et al (n 97) pp.32-33.