1.3.1 AI-based Suicide Alerts and Self-harm Surveillance
Government agencies, social media companies, not-for-profits, health services, and others have begun using machine learning and artificial intelligence in suicide prevention, including in efforts to pre-emptively identify people who may self-harm.52 In some cases, these technologies appear to have been used to activate police powers to detain people for the purposes of involuntary psychiatric intervention.
CASE STUDY: AI-Based Suicide Alerts at Facebook/Meta
In November 2018, a Facebook employee in Texas reportedly alerted police in the Indian state of Maharashtra about a 21-year-old man who had posted a suicide note on his profile. The intervention came after Facebook expanded its pattern recognition software to detect users expressing suicidal intent. Mumbai police reportedly attended the young man’s home;53 under the Mental Healthcare Act 2017 (India), police have the power to authorise involuntary psychiatric intervention. In 2018, Facebook reported that it had conducted over 1,000 ‘wellness checks’ involving the dispatch of first responders.54
Facebook/Meta’s algorithmic responses also encourage peer responses from within the person’s network of contacts by drawing their attention to the person’s apparent distress.55 These measures were developed after some form of consultation with suicide attempt survivors and experts in suicide prevention (though few details are available).56 Facebook/Meta provides some information about the algorithmic process behind the interventions,57 and has described the ethical issues with which programmers grappled.58
However, there remains little information about what precisely is meant by a ‘wellness check’ (including whether location data are shared with first responders). Nor is there publicly available research on the accuracy, scale or effectiveness of the initiative. What Facebook/Meta does with the information following each apparent crisis is also unclear.
Police appear to be the first responders undertaking ‘wellness checks’. Facebook/Meta has therefore drawn criticism for failing to grapple with the reality of anti-Black racism in the US and the prevalence of police violence in encounters with distressed individuals, particularly Black and Indigenous people and people of colour. For example, Joshua Skorburg and Phoebe Friesen write:
While [Facebook/Meta’s wellness checks] may seem like a positive contribution to public health on Facebook’s behalf, it is becoming increasingly clear that police wellness checks can do more harm than good. Between 2015 and August 5, 2020, 1,362 people who were experiencing mental health issues were killed by police in the United States. This remarkable number constitutes 23 percent of police fatalities in that time.59
The US is by no means alone in such patterns of police violence.60
From a legal and regulatory perspective, suicide prediction in medical systems is governed by health information laws, medical practice and clinical governance regimes, and research regulations that require transparency and peer review. Flawed as these frameworks may be, AI-based suicide prediction on social media platforms, as Mason Marks points out, ‘typically occurs outside the healthcare system where it is almost completely unregulated, and corporations often maintain their prediction methods as proprietary trade secrets’.61 To remedy this, Marks recommends several steps to improve people’s safety, privacy and autonomy, including:62
- making prediction methods more transparent, and giving users unambiguous opportunities to opt-out and delete prediction information;
- protecting consumer privacy and minimising the risk of exploitation, by ensuring suicide predictions cannot be used for advertising or be shared with third parties (such as insurance companies, employers or immigration authorities); and
- monitoring ongoing prediction programs for safety and efficacy through independent data monitoring committees.
The use of individual and population-level monitoring in efforts to prevent suicide, and efforts to promote such use, are likely to increase in coming years. In December 2020, the administrator of the US National Suicide Prevention Lifeline recommended that the US Government authorise geo-location systems to pinpoint the exact location of all callers by 2022.63 Leah Harris has criticised this recommendation, warning that ‘Mad and disabled advocates who have experienced mental health crisis intervention, and even some crisis service providers, worry that geolocation would serve to further entrench coercion in mental health and crisis response systems, replicating problematic aspects of [the US emergency services line] 911’.64 The impact of automated surveillance of callers on rates of involuntary psychiatric intervention, police involvement in crises, citizens’ willingness to contact such services, and so on, remains unknown.
- 51 Sarah Carr, ‘“AI Gone Mental”: Engagement and Ethics in Data-Driven Technology for Mental Health’ (2020) 0(0) Journal of Mental Health 1.
- 52 Marks, Artificial Intelligence Based Suicide Prediction (n 45).
- 53 Vijay K Yadav, ‘Mumbai Cyber Cops Log into Facebook to Curb Suicides’, Hindustan Times (online, 5 November 2018) https://www.hindustantimes.com/mumbai-news/mumbai-cyber-cops-log-into-facebook-to-curb-suicides/story-SMd03alcW0SUBzRJlmdDZJ.html.
- 54 Norberto Nuno Gomes de Andrade et al, ‘Ethics and Artificial Intelligence: Suicide Prevention on Facebook’ (2018) 31(4) Philosophy & Technology 669.
- 55 Catherine Card, ‘How Facebook AI Helps Suicide Prevention’, Facebook Newsroom (10 September 2018) https://newsroom.fb.com/news/2018/09/inside-feed-suicide-prevention-and-ai/.
- 56 Gomes de Andrade et al (n 54).
- 57 Card (n 55).
- 58 Gomes de Andrade et al (n 54).
- 59 Joshua August Skorburg and Phoebe Friesen, ‘Mind the Gaps: Ethical and Epistemic Issues in the Digital Mental Health Response to Covid-19’ (2021) 51(6) Hastings Center Report 23.
- 60 See, eg, Piers Gooding, ‘“The government is the cause of the disease and we are stuck with the symptoms”: deinstitutionalisation, mental health advocacy and police shootings in 1990s Victoria’ in G Goggin, L Steele and R Cadwallader (eds), Normality and Disability: Intersections among Norms, Law, and Culture (Routledge, 2018) 100–110; Anthony J O’Brien et al, ‘The Nature of Police Shootings in New Zealand: A Comparison of Mental Health and Non-Mental Health Events’ (2021) 74 International Journal of Law and Psychiatry 101648.
- 61 Marks, Artificial Intelligence Based Suicide Prediction (n 45).
- 62 Ibid.
- 63 Vibrant Emotional Health, 988 Serviceable Populations and Contact Volume Projections (Vibrant, December 2020) https://www.vibrant.org/wp-content/uploads/2020/12/Vibrant-988-Projections-Report.pdf.
- 64 L Harris, ‘The New National Mental Health Crisis Line Wants to Track Your Location’, Disability Visibility Project (19 April 2021) https://disabilityvisibilityproject.com/2021/04/19/the-new-national-mental-health-crisis-line-wants-to-track-your-location.