2.1.1 Ad-Tech and Predictive Public Health Surveillance
Data and analytics in advertising can exploit behavioural biases, enabling consumer exploitation and political targeting on an unprecedented scale.199 Private companies can use sensitive information about people who are potentially in distressed states to manipulate them into buying certain services or products.
CASE STUDY: Facebook/Meta ad-tech identifying when children feel ‘worthless’ and ‘insecure’
In 2017, Australian media reported that Facebook systems could target Australian and New Zealander children as young as 14 years old and help advertisers to exploit them when they were most vulnerable.200 This included identifying when the children felt ‘worthless’, ‘stressed’, ‘anxious’ and ‘insecure’, and in ‘moments when young people need a confidence boost’. The document offering these capacities to advertisers was authored by two senior Australian executives in roles described as ‘Facebook Australia’s national agency relationship managers’.201
Facebook denied that it let advertisers target children and young people based on their emotional state, and claimed that it had ‘an established process’ to review such research but that this particular project ‘did not follow that process’.202 Facebook reasserted that it had a policy against advertising to ‘vulnerable users’.203
Four years later, in April 2021, the civil society organisation Reset Australia reported that Facebook was harvesting children’s data and on-selling it to advertisers seeking to target children interested in extreme weight loss, alcohol or gambling.204
Conversely, a similar approach to targeted advertising can be used in public health initiatives that seek to direct people who may be in distress to particular mental health services. The aim of such initiatives is to provide ‘pre-emptive’ support, helping people access services, particularly those who may be averse to approaching the formal health system, as the following example shows.
CASE STUDY: Predictive prevention and targeted advertising
A report by the PHG Foundation describes a ‘digital wellbeing service’ in London:205
Good Thinking is a digital wellbeing service rolled out across the city as part of the Healthy London Partnership; it uses data-driven marketing techniques to target advertisements for digital services to people who may be experiencing mental health issues. This targeting is based on people’s use of online search engines and social media platforms, thereby proactively identifying those who may benefit from services and who would not necessarily self-present to the health system. Those who display patterns of searching or social media posts consistent with early predictors of mental health decline [for example, sleep deprivation, isolation, alcohol consumption] are targeted with subtle advertisements around their personal issue. If the user engages with the advertisement, they are filtered through to a digital service containing recommended and approved apps for their specific problem. All this is done without the health system requiring access to raw data or any personal information about the user and the citizen is not aware they are engaging with the health system.
The intent of the Good Thinking targeting service, which is described as a ‘precision prevention initiative’ by its developers, is to provide benefit to individuals in apparent distress. This aim aligns with the UK Government’s Green Paper on ‘Advancing our health: prevention in the 2020s’, which describes a move towards ‘proactive, predictive and personalised prevention’.206
Serious questions may be raised about such programs, given that they largely target people outside the formal healthcare system who have not consented to, or necessarily intentionally sought out, health care services. Although the advertisements for Good Thinking are targeted at those who appear to be searching for support in relation to distress, one component of the targeted digital advertising ‘[t]argets users whose behaviour, demographic and location suggests they are a potential service users [sic] – a “passive” audience’.207
Data-based targeting and automated profiling were discussed by the World Health Organization in its report, Ethics and Governance of Artificial Intelligence for Health, which noted their ambiguous potential:
AI can be used for health promotion or to identify target populations or locations with “high-risk” behaviour and populations that would benefit from health communication and messaging (micro-targeting) […] Micro-targeting can also, however, raise concern, such as that with respect to commercial and political advertising, including the opaqueness of processes that facilitate micro-targeting. Furthermore, users who receive such messages may have no explanation or indication of why they have been targeted. Micro-targeting also undermines a population’s equal access to information, can affect public debate and can facilitate exclusion or discrimination if it is used improperly by the public or private sector.208
A prominent example of a data-driven preventive health monitoring initiative that faced a severe public backlash was the ‘suicide watch radar’ app that was trialled in the UK.
CASE STUDY: Crisis Surveillance and the ‘Suicide Watch Radar’ App
In 2014, the UK charity Samaritans abandoned its ‘suicide watch radar’ app, which enabled users to monitor other users’ accounts for distressing messages. The project aimed to direct emergency responders to those in crisis. However, public campaigners argued that the tool breached users’ privacy by collecting, processing and sharing sensitive information about their emotional and mental health.209 Dan McQuillan commented on the program:
Thanks to the inadequate involvement of service users in its production, it ignored the fact that the wrong sort of well-meaning intervention at the wrong time might actually make things worse, or that malicious users could use the app to target and troll vulnerable people.210
Automated profiling such as the Samaritans’ ‘Radar’ app clearly engages the new generation of data protection laws. The EU’s General Data Protection Regulation (GDPR), for example, defines profiling as ‘any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’.211 Other parts of the GDPR bear on how such profiling may be used. Article 22, for example, states that ‘the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’.212 Bernadette McSherry has noted that predicting aspects of a person’s mental health appears likely to fall within the ambit of this Article.213
- 198 Nicole Martinez-Martin, ‘Chapter Three - Trusting the Bot: Addressing the Ethical Challenges of Consumer Digital Mental Health Therapy’ in Imre Bárd and Elisabeth Hildt (eds), Developments in Neuroethics and Bioethics (Academic Press, 2020) 63 http://www.sciencedirect.com/science/article/pii/S2589295920300138.
- 199 Sam Levin, ‘Facebook Told Advertisers It Can Identify Teens Feeling “insecure” and “Worthless”’, The Guardian (online, 1 May 2017) http://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens.
- 200 Ibid.
- 201 Ms Smith, ‘Facebook Able to Target Emotionally Vulnerable Teens for Ads’ [2017] Network World (Online) https://www.proquest.com/trade-journals/facebook-able-target-emotionally-vulnerable-teens/docview/1893625693/se-2?accountid=12372.
- 202 ‘Comments on Research and Ad Targeting’, About Facebook (30 April 2017) https://about.fb.com/news/h/comments-on-research-and-ad-targeting.
- 203 Ibid.
- 204 Conor Duffy, ‘Facebook Harvests Teenagers’ Data and On-Sells It to Advertisers for Targeted Alcohol, Vaping Ads, Report Finds’, Australian Broadcasting Corporation (online, 27 April 2021) https://www.abc.net.au/news/2021-04-28/facebook-instagram-teenager-tageted-advertising-alcohol-vaping/100097590.
- 205 PHG Foundation, Citizen Generated Data and Health: Predictive Prevention of Disease (University of Cambridge, November 2020) https://www.phgfoundation.org/documents/cgd-predictive-prevention-of-disease.pdf.
- 206 Cabinet Office and Department of Health & Social Care, Advancing Our Health: Prevention in the 2020s – Consultation Document (2019) https://www.gov.uk/government/consultations/advancing-our-health-prevention-in-the-2020s/advancing-our-health-prevention-in-the-2020s-consultation-document, accessed 24 September 2019.
- 207 Healthy London Partnership, The Good Thinking Journey: How the First-Ever City-Wide Digital Mental Wellbeing Service Helped a Quarter of a Million Londoners (September 2019) https://www.healthylondon.org/wp-content/uploads/2019/09/Good-Thinking_How-the-first-ever-city-wide-digital-mental-wellbeing-Sept-2019.pdf.
- 208 World Health Organization, Ethics and Governance of Artificial Intelligence for Health (World Health Organization, 28 June 2021) 13.