2.1.2 Privacy and Monetisation of Sensitive Personal Data
Privacy issues are compounded by the increasing monetisation of health and other forms of personal data. Online mental health initiatives are emerging in an internet increasingly dominated by profit-driven information flows. Some mental health websites or apps, and affiliated third-party companies, treat the personal data of users as a commodity and track them for marketing or other commercial purposes.214 This may occur as an explicit business decision by a private company that provides direct-to-consumer services for those in distress, or may occur inadvertently where a service provider is unaware of how third-party trackers operate on its platform. ‘Third-party trackers’, sometimes described as ‘tracking cookies’ or simply ‘trackers’, are elements of a website created by parties other than the developers of the website the person is currently visiting; they include providers of advertising, analytics and tracking services.215
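To illustrate the mechanism at its most basic, the sketch below lists the domains of externally hosted scripts, images and iframes embedded in a page, any of which may belong to an advertising or analytics provider. It is a minimal illustration only, not the auditing methodology used by Privacy International; the page URL is a hypothetical placeholder, and a full audit would also need to capture resources loaded dynamically by scripts and the cookies they set.

```python
# Minimal sketch: list the third-party hosts referenced by a web page.
# Illustrative only - not Privacy International's tooling or methodology.
# The URL below is a hypothetical placeholder.
# Requires: pip install requests beautifulsoup4
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://example.org/depression-self-help"  # hypothetical page


def third_party_hosts(page_url: str) -> set:
    """Return hosts of externally loaded scripts, images and iframes whose
    domain differs from the page's own domain."""
    page_host = urlparse(page_url).netloc
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    hosts = set()
    for tag in soup.find_all(["script", "img", "iframe"], src=True):
        host = urlparse(tag["src"]).netloc
        # A non-empty host that differs from the page's own host indicates a
        # third-party resource; many (though not all) such resources are the
        # advertising or analytics trackers described above.
        if host and host != page_host:
            hosts.add(host)
    return hosts


if __name__ == "__main__":
    for host in sorted(third_party_hosts(PAGE_URL)):
        print(host)
```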
CASE STUDY: Privacy International finds top mental health websites sell visitor information to third parties and breach the GDPR
In 2019, Privacy International analysed 136 popular web pages related to depression across France, Germany and the UK.216 The web pages were chosen to reflect those that people would realistically find when searching for help online. The authors found that over three quarters of the pages contained third-party trackers for marketing purposes, which could enable targeted advertising and marketing by large companies such as Google/Alphabet, Amazon and Facebook/Meta.
Most websites, according to the authors, failed to comply with the EU General Data Protection Regulation in upholding individuals’ privacy (acknowledging that the UK is no longer part of the EU). In 2020, a follow-up study found that 31.8% (42 out of 132) of the tested webpages had reduced the number of third parties with whom users’ data was shared. However, Privacy International concluded that ‘[g]enerally, most websites analysed haven’t taken action to limit data sharing [meaning] […] personal data are still shared for advertising purposes’ with hundreds of third parties and with no clear indication of the potential consequences.217
Knowledge of a user’s distress could, at a minimum, allow companies to advertise specific treatments, services, or financial products, as noted previously. It could also be sold to other interested parties, such as insurers, as discussed later in the report (see Non-Discrimination and Equity). Some have suggested that the sale of data concerning individuals’ health will prove more lucrative than the sale of particular health products. Nick Couldry and Ulises Ali Mejias have argued that this is evident in Amazon’s recent moves in the US to open an online pharmacy:
- Amazon Pharmacy’s promise of 80 per cent discounts suggests that the US retailer sees opportunities not in realising immediate profits, but in extracting a more valuable resource: data about the most intimate details of our lives.218
Not only may data extraction be used to predict a person’s distress in order to match them to an advertised product; it may also serve the broader function of shaping the person’s experience and behaviour in order to direct them to existing advertisements or products. Zuboff refers to this shaping of human experience and behaviour when highlighting the emergence of ‘behavioral futures markets’.219 This may be evident in the Cerebral app, noted above, which reportedly pushed platform users toward shorter appointments and more prescriptions in ways that potentially ‘accelerat[ed] the psychiatric prescribing cascade’.220
CASE STUDY: ‘Practice Fusion’ and Clinical Decision Support Software that Unlawfully Boosted Opioid Prescribing
The United States (US) government recently settled a case with a company called ‘Practice Fusion’, which produced clinical decision support software used by doctors when prescribing medication for patients, and which was found to have received kickbacks from a pharmaceutical company intended to drive up opioid prescribing.221 Megan Prictor explains that ‘[t]he payments were for creating an alert in the [electronic health record] designed to increase the prescription of extended-release opioid medication (and hence the sale of Purdue’s products) to treat patients’ pain symptoms’.222 She notes:
The court heard that Purdue Pharma’s marketing staff helped to design the software alert, which ignored evidence-based clinical guidelines for patients with chronic pain… The alert was triggered in clinical practices some 230 million times between 2016 and 2019 and resulted in additional prescriptions of extended-release opioids numbering in the tens of thousands, causing untold human harm. Most of the prescriptions were paid for by federal healthcare programmes.223
The fraud was uncovered through a US government investigation that had originally examined separate unlawful conduct by the company concerning falsely obtained government certification for its software. The company had failed to meet certification requirements, which in turn had led software users inadvertently to make false claims for government incentive payments. Software users – presumably various healthcare providers – had attested that the software complied with government regulations when in reality it did not.224
It is clear that technologies are now being designed to push ‘users’ towards services or products aligned with business interests tied to the technology,225 or to enforce conditional welfare and social benefit rules in government-funded services in ways that erode care,226 as will be discussed later in the report.
Privacy International’s finding that mental health websites sell visitor information to third parties highlights a striking fact: it is becoming harder to access mental health support without that access being digitally recorded in some way. The likelihood of such information moving beyond the discrete and relevant digital repositories of one service is increased by the massive and interconnected flow of data in today’s communication ecosystem. A report for the Consumer Policy Resource Centre notes the implications of the ease with which data can be transported:
- consumers may well start to avoid accessing important healthcare services and support if they feel that companies or governments cannot be trusted with that information, or that they may be disadvantaged by that information in future. For example, insurer MLC was found to have excluded a consumer from mental health coverage in life insurance due to her accessing mental health services for the sexual abuse she suffered as a child in the mid-1980s.227
This concern extends to accessing services in person, given that smartphone location-tracking can potentially reveal the frequency and types of healthcare services an individual accesses.228 Some mental health initiatives that introduce a major digitalised or virtual component have explicitly prioritised privacy as a key component of appropriate support.
CASE STUDY: Privacy by Design in Digital Support in a Refugee Camp
In 2018, researchers at the Data & Society research institute released a report entitled ‘Refugee Connectivity: A Survey of Mobile Phones, Mental Health, and Privacy at a Syrian Refugee Camp in Greece’.229 The authors demonstrated ways in which phones were essential to aid, survival and well-being. The survey design simultaneously employed two distinct methodologies: one concerned with mobile connectivity and mental health, and a second concerned with mobile connectivity and privacy. The project was supported by the International Data Responsibility Group. The research was premised on the view that privacy can be essential to easing distress and supporting mental health, both in terms of receiving support and in the lives of refugees and asylum seekers more generally, particularly those at risk of persecution.
Go Mental by Josh Muir in Science Gallery Melbourne’s MENTAL. Photo by Alan Weedon.
- 209 Jamie Orme, ‘Samaritans Pulls “Suicide Watch” Radar App over Privacy Concerns’, the Guardian (7 November 2014) http://www.theguardian.com/society/2014/nov/07/samaritans-radar-app-suicide-watch-privacy-twitter-users.
- 210 McQuillan (n 150).
- 211 Regulation (EU) 2016/679 of the European Parliament and the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1, Art 4 “Definitions”.
- 212 Ibid Art 22.
- 213 Bernadette McSherry, ‘Computational Modelling, Social Media and Health-Related Datasets: Consent and Privacy Issues’ (2018) 25(4) Journal of Law and Medicine 894.
- 214 Molly Osberg and Dhruv Mehrotra, ‘The Spooky, Loosely Regulated World of Online Therapy’, Jezebel (online, 19 February 2020) https://jezebel.com/the-spooky-loosely-regulated-world-of-online-therapy-1841791137.
- 215 Michal Wlosik and Michael Sweeney, ‘First-Party & Third-Party Cookies: What’s the Difference?’, Clearcode (2 November 2018) https://clearcode.cc/blog/difference-between-first-party-third-party-cookies/. Third-party trackers are mainly used for tracking and online-advertising purposes but can also provide certain services, such as live chats. They monitor users’ behaviour across various online sources such as apps, smartphones, websites, smart TVs and so on, and may be used for a variety of reasons, from connecting social media platforms, to analytics of how a user interacts with a website, to marketing. They allow a third party to collect, monitor and use data related to a user’s interaction with a specific online tool. ‘First-party cookies’, by contrast, are elements of a website developed by the website creators or operators that provide the same functions but are operated and utilised by the creators/operators themselves. According to Privacy International, which appears to have undertaken the most comprehensive research on this issue, third-party elements on mental health websites are mainly used for advertising and marketing purposes. Privacy International (n 29).
- 216 Privacy International, Your Mental Health for Sale? (n 188).
- 217 Privacy International, ‘Mental Health Websites Don’t Have to Sell Your Data. Most Still Do.’, Privacy International (7 October 2021) http://privacyinternational.org/report/3351/mental-health-websites-dont-have-sell-your-data-most-still-do [accessed 14/07/21].
- 218 Nick Couldry and Ulises Ali Mejias, ‘Big Tech’s Latest Moves Raise Health Privacy Fears’, Financial Times (online, 7 December 2020) https://www.ft.com/content/01d4452c-03e2-4b44-bf78-b017e66775f1.
- 219 Zuboff (n 128).
- 220 ‘ADHD Drugs Are Convenient To Get Online. Maybe Too Convenient’ (n 137).
- 221 United States Attorney’s Office, District of Vermont, ‘Justice Department Announces Global Resolution of Criminal and Civil Investigations with Opioid Manufacturer Purdue Pharma’ (Press Release, 21 October 2020) https://www.justice.gov/usao-vt/pr/justice-department-announces-global-resolution-criminal-and-civil-investigations-opioid-0.
- 222 Megan Prictor, ‘Clinical Software and Bad Decisions: The “Practice Fusion” Settlement and Its Implications’ [2022] Journal of Bioethical Inquiry (Online First, 11/4/2022).
- 223 Ibid.
- 224 Ibid.
- 225 Some commentators have raised concerns that these technologies may unintentionally lead to an increase in the use of forced interventions where behavioural data indicate suicidality. Cosgrove et al (n 134) 620.
- 226 Alexandra Mateescu, Electronic Visit Verification: The Weight of Surveillance and the Fracturing of Care (Data & Society, November 2021) https://datasociety.net/library/electronic-visit-verification-the-weight-of-surveillance-and-the-fracturing-of-care/.
- 227 Brigid Richmond, A Day in the Life of Data: Removing the Opacity Surrounding the Data Collection, Sharing and Use Environment in Australia (Consumer Policy Resource Centre, 2019) 37.