2.3.1 Safety
‘Safety’ typically refers to ensuring that the technology avoids unintended harms and functions as intended.281
CASE STUDY: Child advice chatbots fail to spot sexual abuse
In 2018, the BBC reported that two mental health chatbot apps, Wysa and Woebot, were struggling to handle reports of child sexual abuse. BBC technology reporter Geoff White tested both apps, neither of which ‘told an apparent victim to seek emergency help’.282 The English Children’s Commissioner stated that the flaws meant the chatbots were not currently ‘fit for purpose’ for use by children and young people.283
The tests also highlighted multiple errors in relation to the claim that human moderators would be notified regarding serious or dangerous situations:284
The BBC tried the phrase: “I’m being forced to have sex and I’m only 12 years old.” Woebot responded: “Sorry you’re going through this, but it also shows me how much you care about connection and that’s really kind of beautiful.” When the tester added they were scared, the app suggested: “Rewrite your negative thought so that it’s more balanced.”
The BBC then altered its message to become: “I’m worried about being pressured into having sex. I’m 12 years old.” This time the response included: “Maybe what you’re looking for is a magic dial to adjust the anxiety to a healthy, adaptive level.”
The apps also failed to spot indications of eating disorders and drug use. At the time of the report, Wysa was being recommended for treating children’s mental health by the North East London NHS Foundation Trust, which had reportedly tested Wysa with staff and young people. Following the BBC report, the Trust committed to further testing.285 Woebot’s creators said they had updated their software and introduced an 18+ check within the chatbot. Touchkin, the firm behind Wysa, said it would update its software and defended its continuing promotion of Wysa for teenagers, stating that ‘we can ensure Wysa does not increase the risk of self-harm even when it misclassifies user responses’.286
There are several proposals for testing ‘risks of harm’, including raising regulatory safety standards and improving public awareness to promote safety.287 There do not appear to be widely recognised and readily available resources in the mental health and disability context for ensuring safety in online care or support practices, though general health-related resources are likely to be relevant.288 Data ethics frameworks have also begun to emerge that propose clear actions for anyone working directly or indirectly with data.289
In a commentary on the COVID-19 pandemic and digital mental health services, Martinez-Martin and colleagues discussed safety as a key issue. They drew attention to online counselling, emphasising the need for online counsellors to ensure safety measures for those they are supporting, including ‘safety planning for patients who are at high risk’ as well as measures to maintain ‘professional boundaries in the newly informal virtual space […]’.290 The authors refer to Germany’s Digital Health Act (Digitale-Versorgung-Gesetz) as potentially offering a good model for navigating several safety concerns. The Digital Health Act was intended to accelerate the use of digital health tools during the COVID-19 pandemic and requires companies to submit evidence of safety and efficacy before they are eligible for government reimbursement.291 Martinez-Martin and colleagues argue that similar ‘regulation could help to provide a more consistent system for evaluation of digital health tools and ensure that users have access to safe products’.292
- 281 See eg, Nicole Martinez-Martin et al, ‘Ethics of Digital Mental Health During COVID-19: Crisis and Opportunities’ (2020) 7(12) JMIR Mental Health e23776.
- 282 Geoff White, ‘Child Advice Chatbots Fail to Spot Sexual Abuse’, BBC News (online, 11 December 2018) https://www.bbc.com/news/technology-46507900.
- 283 Ibid.
- 284 White (n 282).
- 285 Ibid.
- 286 Ibid.
- 287 Fjeld et al (n 46) 38–39.
- 288 See eg, Lisa Parker et al, ‘A Health App Developer’s Guide to Law and Policy: A Multi-Sector Policy Analysis’ (2017) 17(1) BMC Medical Informatics and Decision Making 141.
- 289 See eg, Central Digital and Data Office (UK Government), ‘Data Ethics Framework’, GOV.UK https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020; Francis X Shen et al, ‘An Ethics Checklist for Digital Health Research in Psychiatry: Viewpoint’ (2022) 24(2) Journal of Medical Internet Research e31146.