2.5.1 Human Review of Automated Decisions
A more targeted way to ensure human control of automated decisions is to guarantee that people subject to automated decisions drawing on data concerning their mental health can request and receive human review of those decisions. Unlike most other themes discussed in this report, the principle of human review operates after the fact. It is not an endorsement of the use of automated initiatives in the first place, some of which may warrant preventive interventions that modify or halt them. Rather, the principle ensures an ex post (after the fact) mechanism of review wherever algorithmic decision systems are used. It is guided by the rationale, noted by the European Commission’s High-Level Expert Group on Artificial Intelligence, that ‘[h]umans interacting with AI systems must be able to keep full and effective self-determination over themselves’.364
Different technologies, and the contexts in which they are used, will call for different forms of human review, including variations in the strength of the review process. Some groups have argued that human review is not merely desirable but should be recognised as a right of the data subject.365 The European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, for example, takes a robust approach, indicating that cases must be heard by a competent court if review is requested.366
- 363 See eg, Piers Gooding, A New Era for Mental Health Law and Policy: Supported Decision-Making and the UN Convention on the Rights of Persons with Disabilities (Cambridge University Press, 2017); Green and Ubozoh (n 17); Rose (n 29).
- 364 European Commission High-Level Expert Group on Artificial Intelligence (n 131) para [50].
- 365 Fjeld et al (n 85) p.54.
- 366 European Commission for the Efficiency of Justice, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (3 December 2018) p.54.