1.4 Biometric Monitoring Technologies
Biometric monitoring technologies represent a somewhat ‘extreme’ technology for the purposes of this report – compared to, say, teletherapy – given that the insights such technologies purport to reveal about a person’s health, body, cognition, affective state and so on create challenging ethical, social, legal and political issues.98 Biometric monitoring technologies use sensors in devices, including smartphones, wearables and connected devices, cameras and even pills, to remotely generate data concerning a person’s biology, physiology or behaviour.
There are various ways to describe biometric monitoring technologies. Computer scientists may refer to ‘context sensing’, ‘personal sensing’, or ‘mobile sensing’. In mental health settings, several prominent psychiatrists and psychologists have begun to refer to ‘digital phenotyping’, particularly in relation to the assessment of behaviour, mood and cognition through biometric data generated by devices, such as smartphones and Fitbits.99 An advertisement for Mindstrong, a prominent direct-to-consumer app company, describes the practice this way:
- How you passively use your smartphone—typing, swiping, scrolling—is a new way to measure things like your stress, mental health symptoms, and well-being. If you’re typing more slowly—even by a millisecond—it might mean there’s a change. You can track your measurements in the mobile app, and they’re shared with your clinical team so they can provide you with more personalized care.100
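To make concrete what such passive measurement involves, the following is a minimal, hypothetical sketch of keystroke-dynamics analysis: computing inter-keystroke latencies from timestamped touch events and comparing a session against a person’s own baseline. This is illustrative only; Mindstrong’s actual, proprietary pipeline is not public, and every name and threshold here is an assumption.

```python
# Hypothetical sketch of keystroke-dynamics 'digital phenotyping'.
# Not Mindstrong's actual pipeline; field names and logic are illustrative only.
from statistics import mean, stdev

def inter_key_latencies(timestamps_ms):
    """Latency (ms) between consecutive keystrokes."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def deviation_from_baseline(session_ms, baseline_sessions_ms):
    """z-score of a session's mean latency against the user's own history."""
    baseline_means = [mean(inter_key_latencies(s)) for s in baseline_sessions_ms]
    mu, sigma = mean(baseline_means), stdev(baseline_means)
    session_mean = mean(inter_key_latencies(session_ms))
    return (session_mean - mu) / sigma if sigma else 0.0

# Example: a session typed noticeably slower than the user's baseline
baseline = [[0, 180, 355, 540], [0, 170, 350, 520], [0, 190, 370, 545]]
slow_session = [0, 240, 490, 760]
print(round(deviation_from_baseline(slow_session, baseline), 2))
```

Even in this toy form, the feature of such systems that concerns the commentators discussed below is visible: the ‘measurement’ is inferred from behaviour the person never consciously reported.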
CASE STUDY: Biometric Monitoring in Mental Health Settings
From 2021, up to 20,000 Australian high school students will have their phone data monitored for up to five years in an attempt to track how mental health issues develop in adolescence. According to the researchers, ‘[t]he study aims to discover how we can use smartphones to deliver preventive interventions on a large scale’. The study makes use of ‘[c]omprehensive, technology-assisted data collection and analysis […] to determine what triggers the development of mental health symptoms’.103 The authors report that no study of mental health apps has occurred at this scale anywhere in the world.104 The children and young people involved will interact with game-based apps, have their movement and location tracked, and be asked specific questions about their state of mind, including whether they have contemplated suicide.
Proponents view biometric monitoring with therapeutic aims as a reasonable method of real-time tracking that, for the right people, may enhance the therapeutic alliance between mental health practitioners and individuals seeking help. This enthusiasm is typically accompanied by an acknowledgement that some patients will not want to use, or will not gain from, such measures. Similar forms of behavioural tracking hold appeal for actors outside of formal mental health services, including in education, the military, the insurance industry, and the criminal justice system.105
Biometric technology has also started to appear in video monitoring and surveillance in acute psychiatric settings, in ways that do not involve on-body sensors.
CASE STUDY: Algorithmic Video Monitoring and Surveillance in Psychiatric Settings
In 2020 in England, a trial (the ‘Oxehealth Trial’) was undertaken on the use of ‘digital assisted observations’ in a psychiatric ward.106 Nurses used the monitoring to take 15-minute and hourly night-time ‘clinical observations’ of patients in six individual bedrooms over a four-month period.107 The sensors used by researchers were wall-mounted video cameras, combined with ‘computer vision, signal processing and AI software’ that enabled nurses to track patients’ locations and movements (‘physical monitoring’) and to record their heart and respiratory rates (‘physiological’ or ‘vital sign monitoring’).108 Physiological monitoring using the Oxehealth system allows nurses to access ‘real-time spot measurements of pulse rate and breathing rate without them having to enter the room’.109 These measurements are ‘displayed on a screen in the nursing station or on handheld tablet computers’.110 The software also generates long-term information in the form of ‘a timeline summarising the patient’s location (in bed, elsewhere in their room, etc) for a day or a week … to help characterise the patient’s behaviour during that time interval’.111
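The general technique behind camera-based vital-sign monitoring is remote photoplethysmography: tiny periodic fluctuations in skin pixel intensity are isolated and their dominant frequency read off as a pulse rate. The sketch below illustrates that general idea on synthetic data; it is not Oxehealth’s proprietary software, and the parameters are assumptions.

```python
# Minimal sketch of camera-based pulse estimation (remote photoplethysmography).
# Illustrative of the general technique only; not the Oxehealth system.
import numpy as np

FPS = 30  # assumed camera frame rate

def estimate_pulse_bpm(frame_means, fps=FPS):
    """Estimate pulse from the mean skin-pixel intensity of each video frame."""
    signal = frame_means - np.mean(frame_means)        # remove DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency bins (Hz)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.5)             # plausible pulse: 42-210 bpm
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60

# Synthetic 20-second recording: a 1.2 Hz (72 bpm) pulse buried in noise
t = np.arange(20 * FPS) / FPS
frame_means = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(round(estimate_pulse_bpm(frame_means)))  # ~72
```

The same logic applied to slower intensity changes across the torso yields a breathing rate; the clinical products add motion tracking and alerting on top of this signal-processing core.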
Other forms of biometric sensing go ‘beneath the skin’: ingestible sensors have been integrated with psychopharmaceutical pills.
CASE STUDY: ‘Smart Pills’, ‘Digital Pills’, and Ingestible Sensors
In 2017, the US Food and Drug Administration (‘FDA’) approved a so-called ‘digital pill’.112 ‘Abilify MyCite’, as it is commercially named, integrates a pill with an electronic sensor. According to the FDA, ‘Abilify MyCite’ is aimed at ‘the treatment of schizophrenia, acute treatment of manic and mixed episodes associated with bipolar I disorder and for use as an add-on treatment for depression in adults’.113 When a person swallows the pill, the sensor activates upon contact with stomach fluid. Information concerning the nature and timing of ingestion is then transmitted via a patch worn on the skin to a linked device, such as a smartphone. Family members, clinicians and other third parties can, with the person’s consent, access the information through a web-based portal. The smartphone/tablet app can also track ‘self-reported measures of rest and mood’. The pills are advertised as ‘targeting the problem of medication adherence’.114 Digital pills have also been approved for use by regulatory bodies in China and the European Union.
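The data flow just described can be pictured as a simple event pipeline with consent-gated access. The sketch below is a hypothetical model only: the actual Abilify MyCite data formats and access controls are not public, and every field name and rule here is an assumption.

```python
# Hypothetical model of the digital-pill data flow described above.
# Field names and consent logic are illustrative, not Abilify MyCite's actual schema.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IngestionEvent:
    """Emitted when the sensor activates in stomach fluid, relayed via a skin patch."""
    patient_id: str
    drug: str
    ingested_at: datetime

@dataclass
class PatientPortal:
    """Web portal: third parties see events only if the patient has consented."""
    authorized_viewers: set[str] = field(default_factory=set)
    events: list[IngestionEvent] = field(default_factory=list)

    def record(self, event: IngestionEvent) -> None:
        self.events.append(event)  # transmitted from patch -> phone -> portal

    def view(self, viewer: str) -> list[IngestionEvent]:
        if viewer not in self.authorized_viewers:
            raise PermissionError(f"{viewer} lacks patient consent")
        return self.events

portal = PatientPortal(authorized_viewers={"dr_lee"})
portal.record(IngestionEvent("patient-1", "aripiprazole", datetime.now()))
print(len(portal.view("dr_lee")))   # 1
# portal.view("insurer") would raise PermissionError
```

Even in this idealised form, the design question the critics below press is visible: the consent gate sits at the viewing stage, after the ingestion data has already been generated and transmitted.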
Although biometric technology remains relatively exploratory in the mental health context, its use is expanding. For example, the company that produces the Oxehealth system used in the British trial reports that it is ‘relied on by one in three English mental health trusts as well as acute hospitals, care homes, skilled nursing facilities, prisons and police forces in the UK and Europe’.115
1.4.1 Power and Justice in the Biometric and Digital Turn
The rise of biometric monitoring in mental health care is being debated on several fronts. This includes contested claims about what ‘digital markers’ of behaviour can reveal.116 Even the term ‘digital phenotyping’ is disputed, and terminology remains unsettled. David Mohr and colleagues raise concerns that the term fails to convey the reality that the practice constitutes surveillance over intimate aspects of a person’s life.117 Mohr and colleagues, who ultimately endorse the potential value of the technology, state:
- [W]hat might the term digital phenotyping signal mean to those whose data are being used? That such sensing is medical and scientific, perhaps? That it is complex? It does not convey to the average person that we are engaging in a sensitive form of surveillance: collecting large amounts of data, and using those data to understand deeply personal things, such as how they sleep, where they go, how and when they communicate with others, or whether they may be experiencing a mental health condition.118
The authors call for language that is more transparent about the intent and practice behind this technology, arguing the term ‘personal sensing’ is more appropriate.
Other commentators have drawn attention to deeper issues of justice and power. The use of biometric technologies to purportedly infer a person’s mental state or characteristics, and their use in pervasive forms of monitoring and surveillance, have raised particular concern.119 Leah Harris warns of biometric technologies developed by psychiatric or psychological professionals being used as forms of social control over marginalised individuals, not just in mental health settings but also in prisons and other sites of carceral control, including in the ‘community’.120 Harris refers to Michel Foucault’s theorisation of the Panopticon, discussing the way ‘power is based on both the ability to observe others and the knowledge obtained through that observation’.121 The Panopticon was an architectural design and idea developed in the eighteenth century by Jeremy Bentham; its purpose was to allow the continuous observation of prisoners in confinement. For Foucault, panopticism is a surveillance mechanism used to exert disciplinary power throughout society by professionals, bureaucracies, government agencies, market actors and so on, by allowing for an ‘absolute and constant visibility surrounding the bodies of individuals’.122
Toward the end of his life, Foucault conceptualised a shift in Western societies away from the dominance of disciplinary environments, such as large-scale psychiatric institutions, toward systems of constant external surveillance. He wrote, ‘[o]ne also sees the spread of disciplinary procedures, not in the form of enclosed institutions, but as centres of observation disseminated throughout society’.123 He charts these societal shifts toward forms of control that are less costly and complex to manage.124 Harris relates panopticism to biometric monitoring in the mental health context, warning that ‘[t]here is always an inherent power imbalance between the “omnipresent” and “invisible” watchers and their “permanently visible” subjects’, and that such imbalances have been expressed historically in psychiatry through its role in governing a marginalised and oppressed group.125 Harris’s framing has commonalities with broader critiques of the information economy in the current era, including Shoshana Zuboff’s prominent characterisation of ‘surveillance capitalism’.126
EXPLAINER: Zuboff’s ‘Surveillance Capitalism’
Shoshana Zuboff describes surveillance capitalism as a market-driven process that turns personal thoughts, experiences and behaviours into data, which is then commodified for marketing purposes.127 The process relies on expanding surveillance: collecting data not just from what a person ‘posts’ online, but from the ‘behavioral surplus’ that emerges from how a person uses their digital technology. Biometric data, usage rates, the manner in which a person expresses themselves: all are converted into data that can be extracted and sold. That data is then on-sold with claims that it has predictive value for how someone may behave. Zuboff explains this extraction process in the context of a diabetes app:
You download a diabetes app, it takes your phone, it takes your microphone, it takes your camera, it takes your contacts. Maybe it helps you manage your diabetes a little bit, but it’s also just a part of this whole supply-chain dynamic for behavioral surplus flows. The stuff that they’re taking from you has nothing to do with the diabetes functionality for which you downloaded the app. Absolutely nothing. It’s simply siphoning off data to third parties for other revenue streams that are part of these surveillance capitalists’ ecosystems.128
However, the market incentives that form under surveillance capitalism go beyond prediction, towards shaping or controlling behaviour, or, as Zuboff describes it, the creation of ‘monitoring and compliance regimes’.129 That is, digital technologies can be integrated with other incentives to ensure behaviours that comply with business objectives, such as sharing additional data or maintaining engagement in order to retain access to the full benefits of the technology. One example is the ‘internet of things’, in which digital technologies and data are integrated with everyday objects, such as a Google Home device or a car. A failure to share data may disable features of ‘smart devices’ in the home; if car payments run late, the vehicle can be remotely disabled from operating. Choice remains, but with significant trade-offs, and individual consumers in this setting have little bargaining power compared to large digital platforms.130 Taken to their conclusion, the broader implication of these market incentives is the construction of a society in ‘perpetual compliance’ with business interests.131
How might surveillance capitalism operate in the mental health context? Various critical accounts have been offered. Lisa Cosgrove and colleagues state:
- Mental health apps that use digital phenotyping and other surveillance technologies position people as unwitting profit-makers; they take individuals at their most vulnerable and make them part of a hidden supply chain for the marketplace.132
Examples of such data extraction are included throughout this paper. Jonah Bossewitch warns of the ‘arrival of surveillance psychiatry’ and queries its role in the growing information economy, whereby ‘huge pools of data are being used to train algorithms to identify signs of mental illness’:133
- Researchers are claiming they can diagnose depression based on the color and saturation of photos in your Instagram feed and predict manic episodes based on your Facebook status updates. Corporations and governments are salivating at the prospect of identifying vulnerability and dissent. The emphasis on treating risk rather than disease predates the arrival of big data, but together they are now ushering in an era of algorithmic diagnosis based on the data mining of our social media and other digital trails.134
One challenge for advocates will be to correctly identify the business models of the companies generating or processing such data. Without transparency on this matter, which companies will not necessarily provide, observers may be left to speculate. One obvious business model is targeting platform users with commercial products, as the next example suggests.
CASE STUDY: ‘Cerebral’ – app company accused of ‘accelerating the psychiatric prescribing cascade’
A 2022 Bloomberg investigation of the popular mental health app ‘Cerebral’ found evidence that the app led to overtreatment that generated increased sales of home-delivered psychopharmaceutical prescriptions.135 ‘Cerebral’ does not involve biomonitoring, but it highlights a business model that others in the industry are likely to follow, regardless of how data is generated. The Cerebral app provides a platform connecting users to a therapist and a psychiatric nurse practitioner for a monthly cost.136 Former Cerebral employees reported to journalists that the company prized quantity over quality: more patient visits, shorter appointments, and more prescriptions.137 Concerns were raised about the app ‘accelerating the psychiatric prescribing cascade’ for people seeking amphetamines prescribed for ADHD.138
We will discuss private sector interests and the role of data concerning mental health in the information economy throughout the report.
Returning to biometric monitoring, others have raised concerns that people who use algorithmic interpretations of data concerning emotions are misled about the extent to which such systems can ‘capture’ the reality of emotional experiences.139 Victoria Hollis and colleagues point to a survey of people (n=188) who showed strong interest in automatic stress and emotion tracking, in which ‘many respondents expected these systems to provide objective measurements for their emotional experiences’ despite this simply not being possible.140 This framing effect, which tech vendors often exaggerate, can even change the way people construe their own emotions. In a second, mixed-methods study with 64 participants, Hollis and colleagues examined how algorithmic sensor feedback influences emotional self-judgements.141 ‘Despite users reporting strategies to test system outputs, users still deferred to feedback and their perceived emotions were significantly influenced by feedback frames’, with some users even ‘overr[iding] personal judgments, believing the system had access to privileged information about their emotions’.142
Similarly, Lisa Parker and colleagues, in their survey of the messaging of mental health apps, argued that prominent apps tend to over-medicalise states of distress and may over-emphasise ‘individual responsibility for mental well-being’.143 As a broad comment, the user/survivor/ex-patient movement and others have advanced reasons to demedicalise approaches to supporting people in distress, which would seemingly extend to caution about framing personal mental crises as medical problems amenable to digital technological solutions.144 The framing effects of biometric monitoring often go unremarked, but the studies noted above suggest these effects can alienate people from their own self-perceptions. For their part, Hollis and colleagues argue that such framing effects should be acknowledged and used in ways that promote agency and help individuals more actively construe their personal experiences.145
Concerns raised by Harris, Bossewitch and others move beyond questions of how to make particular technologies like biometric monitoring more equitable or ethical (for example, by ensuring datasets adequately cover diverse communities and accommodate distinct ways of being and self-presenting). Instead, their questions relate to law and political economy, asking whether these technologies are creating a market for surveillance in the mental health context that perpetuates, and even extends, the worst power imbalances, inequities and harms of current mental health practices.146 Kaitlin Costello and Diana Floegel, for example, argue that the ‘link between the carceral state and mental healthcare in the United States is alarming’ and that biometric monitoring technologies ‘are poised to only further strengthen that link, despite calls to the contrary’.147 More fundamentally, this new ensemble of AI and mental health looks set to change what it is to be considered well or unwell.148
Moving the Frame from ‘What does the technology do?’ to ‘Who is benefiting and who is not?’
One analytical strategy to help counter these negative possibilities is to shift the emphasis away from the technology itself and toward questions of who is benefiting from the push for these technologies and – perhaps more importantly – who is losing. This framing challenges the common presentation of computational monitoring and evaluation as naturally being in people’s interests on the basis that ‘the more we know the more we can help’. Such an optimistic view can easily dovetail with widely held assumptions about the legitimacy and unquestioned benefit of monitoring persons experiencing distress, lived experience and disability. As Sharon Snyder and David Mitchell have argued, ‘[o]ne of the primary oppressions experienced by disabled people is that they are marked as perpetually available for all kinds of intrusions, public and private’.149
The broad group of critical commentators raising concerns about biometric monitoring draw attention to the potential intrinsic harms of processes of computational observation and measurement. Just as the ‘medical gaze’ has been used as a concept to critique the biomedical and individualistic framing of distress and other human experiences, some commentators have considered the potential harms of the ‘data gaze’. The ACLU, for example, describes a potential ‘nightmare scenario’ in which a ‘data gaze’ extends to omnipresent AI-powered monitoring and surveillance:
- the consistent tracking of our every conscious and unconscious behavior that, combined with our innate social self-consciousness, turns us into quivering, neurotic beings living in a psychologically oppressive world in which we’re constantly aware that our every smallest move is being charted, measured, and evaluated against the like actions of millions of other people — and then used to judge us in unpredictable ways.150
These concerns were not raised about the mental health context in particular, though they resonate with the concerns discussed in this section.
Others have raised concerns about the subtle harms caused by the way technological surveillance abstracts the human body, which is then reassembled through a series of data flows.151 Jathan Sadowski has argued that the abstraction of ‘datafication’ is itself a form of violence.152 Extending these critiques to the disability context, Jackie Leach Scully and Georgia Van Toorn have argued that the broader ‘datafication’ of the human body will delineate increasingly rigid boundaries between normality and disability.153 This impulse to quantify and distinguish embodied difference, they argue, ‘diverts attention from the realities of disabled lives, at a time when disability scholars and activists are arguing for more rather than less attention to the lived experience of disability’.154 LLana James, writing on algorithmic racism and the impacts of the digital turn on other marginalised groups, has discussed how datafication can undermine the imperative to ‘act on the reliable narrator’ (that is, to listen to the person or population affected and to how they articulate their needs).155 Instead, dominant narratives about technology insist on new and alternative ways to undertake expert observation and monitoring using data-driven technology.156 In the disability context, including the mental health context, the use of automation risks diverting attention from the experienced reality of disabled lives.157
If these concerns are taken seriously, the use of technologies like AI to make assumptions and judgements about who we are, and who we will become, is much more than a potential invasion of privacy; it is an existential threat to human autonomy and to the ability to explore, develop and express our identities. It potentially normalises surveillance in a way reminiscent of the nineteenth-century asylum as a state-authorised site of control over disabled lives, but using twenty-first-century techniques of ubiquitous observation and computational ‘processing’. Grappling with these possibilities will be a necessary part of any discussion of the potential harms and public benefits afforded by technology in the mental health context in general, and biometric monitoring in particular.
1.4.2 Governing the Future of Biometric Monitoring in Mental Health Settings
Biometrics more generally are the subject of a growing field of research, practice, advocacy, activism and law reform.158 In healthcare, the COVID-19 pandemic has accelerated the international adoption of biomonitoring, surveillance, and other public health monitoring and security technologies, whether by states, private entities or individuals.159
In the mental health context, scholarship exploring the legal, ethical, social and political concerns raised by biometric technologies is emerging.160 More work is clearly required. Later themes discussed in this report will engage with some of the questions directly relevant to biometrics. Such questions include whether those deemed through biometric monitoring to be ‘cognitively impaired’, ‘mentally disordered’, ‘suicidal’, or likely to become any of those things, will be informed that such attributions have been made. Will they be able to opt out of the monitoring process in the first place? Will they be able to contest such labels before data are transferred to others? Given the purported ease with which mobile phone data-points can be used for automated profiling to determine cognitive impairment,161 are there sufficient safeguards to govern whether or how this should occur? More pointedly, should moratoria apply to some forms of biometric monitoring and surveillance in the mental health and disability context on the basis that they are fundamentally harmful or inconsistent with human rights? How would such a decision be made? What role is currently being played by the psychiatric and psychological sciences in advancing such technologies? What role should they play?
This is a critical moment to reflect on how the current choices being made in various institutions concerning ‘digital mental health’ – across research, services, policies and programming – might affect future approaches to distress, anguish, mental crises and so on. To conclude Part 1, we turn to the glaring omission from these choices of the very people for whom the technologies are purportedly designed.
- 98 Lisa Cosgrove et al, ‘Digital Phenotyping and Digital Psychotropic Drugs: Mental Health Surveillance Tools That Threaten Human Rights’ (2020) 22(2) Health and Human Rights Journal 33; Amba Kak, Regulating Biometrics: Global Approaches and Urgent Questions (AI Now Institute, 1 September 2021) https://ainowinstitute.org/regulatingbiometrics.pdf; Nicole Martinez-Martin et al, ‘Data Mining for Health: Staking out the Ethical Territory of Digital Phenotyping’ (2018) 1(1) npj Digital Medicine 68.
- 99 Thomas R Insel, ‘Digital Phenotyping: Technology for a New Science of Behavior’ (2017) 318(13) JAMA 1215.
- 100 Mindstrong, ‘How it works’ (website) https://mindstrong.com/how-it-works [accessed 02/02/2021].
- 101 David C Mohr, Katie Shilton and Matthew Hotopf, ‘Digital Phenotyping, Behavioral Sensing, or Personal Sensing: Names and Transparency in the Digital Age’ (2020) 3(1) npj Digital Medicine 1.
- 102 Ibid.
- 103 Black Dog Institute, ‘The Future Proofing Study’ https://www.blackdoginstitute.org.au/research-centres/future-proofing [accessed 15/07/2021].
- 104 Ibid.
- 105 Kak (n 98).
- 106
- 107
- 108
- 109 Ibid 38.
- 110 Ibid 39.
- 111 Ibid 37-38.
- 112 Food and Drug Administration (US), ‘FDA News Release: FDA approves pill with sensor that digitally tracks if patients have ingested their medication, New tool for patients taking Abilify’, 13 November 2017 https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm584933.htm
- 113 Ibid.
- 114 Craig M Klugman et al, ‘The Ethics of Smart Pills and Self-Acting Devices: Autonomy, Truth-Telling, and Trust at the Dawn of Digital Medicine’ (2018) 18(9) The American Journal of Bioethics 38.
- 115 Oxehealth, ‘About Us’ (website) https://www.oxehealth.com/about-us [accessed 26/08/2021].
- 116 Phoebe Friesen, ‘Digital Psychiatry: Promises and Perils’ (2020) 27(1) Association for the Advancement of Philosophy and Psychiatry 2; Mohr, Shilton and Hotopf (n 101); Eric S Swirsky and Andrew D Boyd, ‘Adherence, Surveillance, and Technological Hubris’ (2018) 18(9) The American Journal of Bioethics 61.
- 117 Mohr, Shilton and Hotopf (n 101).
- 118 Ibid.
- 119 Jonah Bossewitch, ‘Brave New Apps: The Arrival of Surveillance Psychiatry’, Mad In America (9 August 2019) https://www.madinamerica.com/2019/08/brave-new-apps-the-arrival-of-surveillance-psychiatry/; Leah Harris, ‘The Rise of the Digital Asylum’, Mad In America (15 September 2019) https://www.madinamerica.com/2019/09/the-rise-of-the-digital-asylum/.
- 120 Harris, ‘The Rise of the Digital Asylum’ (n 119); Leah Harris, ‘The New National Mental Health Crisis Line Wants to Track Your Location’, Disability Visibility Project (19 April 2021) https://disabilityvisibilityproject.com/2021/04/19/the-new-national-mental-health-crisis-line-wants-to-track-your-location/.
- 121 Harris, ‘The Rise of the Digital Asylum’ (n 119).
- 122 Michel Foucault, Psychiatric power: Lectures at the Collège de France (Palgrave Macmillan, 2006), p.52.
- 123 Michel Foucault, Discipline and punish: The birth of the prison (Vintage Books, 1995) p.212.
- 124 Etienne Paradis-Gagné and Dave Holmes, ‘Gilles Deleuze’s Societies of Control: Implications for Mental Health Nursing and Coercive Community Care’ (online ahead of print) Nursing Philosophy e12375.
- 125 Harris, ‘The Rise of the Digital Asylum’ (n 119).
- 126 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Profile Books, 2019).
- 127 Ibid.
- 128 Noah Kulwin, ‘Shoshana Zuboff Talks Surveillance Capitalism’s Threat to Democracy’, Intelligencer (24 February 2019) https://nymag.com/intelligencer/2019/02/shoshana-zuboff-q-and-a-the-age-of-surveillance-capital.html.
- 132 Lisa Cosgrove et al, ‘Psychology and Surveillance Capitalism: The Risk of Pushing Mental Health Apps during the COVID-19 Pandemic’ (2020) 60(5) Journal of Humanistic Psychology 611, 620.
- 133 Bossewitch (n 119).
- 134 Ibid.
- 135 ‘ADHD Drugs Are Convenient To Get Online. Maybe Too Convenient’, Bloomberg.com (online, 11 March 2022) https://www.bloomberg.com/news/features/2022-03-11/cerebral-app-over-prescribed-adhd-meds-ex-employees-say.
- 136 ‘How Mental Health Apps Can Accelerate the Psychiatric Prescribing Cascade’, Lown Institute (18 March 2022) https://lowninstitute.org/how-mental-health-apps-can-accelerate-the-psychiatric-prescribing-cascade/.
- 137 ‘ADHD Drugs Are Convenient To Get Online. Maybe Too Convenient’ (n 135).
- 138 ‘How Mental Health Apps Can Accelerate the Psychiatric Prescribing Cascade’ (n 136).
- 139 Victoria Hollis et al, ‘On Being Told How We Feel: How Algorithmic Sensor Feedback Influences Emotion Perception’ (2018) 2(3) Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 114:1-114:31.
- 140 Ibid.
- 141 Ibid.
- 142 Ibid.
- 143 Lisa Parker et al, ‘Mental Health Messages in Prominent Mental Health Apps’ (2018) 16(4) The Annals of Family Medicine 338.
- 144 China Mills and Eva Hilberg, ‘The Construction of Mental Health as a Technological Problem in India’ (2020) 30(1) Critical Public Health 41. Nev Jones, for example, has examined the impact of other ways of scientifically framing mental distress, including genetic and neurobiological causal attributions of psychiatric disorder, which she warns can undercut the agency of people in distress and the nuance of the individual experiences. Nev Jones, ‘Agency, Biogenetic Discourse and Psychiatric Disorder’, Somatosphere (18 September 2012) http://somatosphere.net/2012/agency-biogenetic-discourse-and-psychiatric-disorder.html/.
- 145 Hollis et al (n 139).
- 146 Harris, ‘The Rise of the Digital Asylum’ (n 119). This broader point was made by Frank Pasquale (see above n 6).
- 147 Kaitlin L Costello and Diana Floegel, ‘“Predictive Ads Are Not Doctors”: Mental Health Tracking and Technology Companies’ (2020) 57(1) Proceedings of the Association for Information Science and Technology e250.
- 148 Dan McQuillan, ‘Mental Health and Artificial Intelligence: Losing Your Voice’ (12 November 2018) openDemocracy https://www.opendemocracy.net/en/digitaliberties/mental-health-and-artificial-intelligence-losing-your-voice-poem/.
- 149 Sharon L Snyder and David T Mitchell, Cultural Locations of Disability (University of Chicago Press, 2006) p.628.
- 150 Jay Stanley, The Dawn of Robot Surveillance: AI, Video Analytics, and Privacy (American Civil Liberties Union, 2019) 36.
- 151 Kevin D. Haggerty and Richard V. Ericson, ‘The Surveillant Assemblage’ (2000) 51 British Journal of Sociology 611; R.E. Smith, Rage inside the machine: The prejudice of algorithms, and how to stop the internet making bigots of us all (Bloomsbury Academic, 2019).
- 152 Jathan Sadowski, Too Smart: How Digital Capitalism Is Extracting Data, Controlling Our Lives, and Taking Over the World (MIT Press, 2020) 46.
- 153 Jackie Scully and Georgia Van Toorn, ‘Datafying Disability: Ethical Issues in Automated Decision Making and Related Technologies – AABHL 2021’ (19 November 2021) http://www.aabhlconference.com/3563.
- 154 Ibid.
- 155 LLana James, ‘Race-Based COVID-19 Data May Be Used to Discriminate against Racialized Communities’, The Conversation (15 September 2020) http://theconversation.com/race-based-covid-19-data-may-be-used-to-discriminate-against-racialized-communities-138372.
- 156 Schulich Law, Algorithmic Racism, Healthcare & The Law: ‘Race Based’ Data Another Trojan Horse? (19 September 2020) https://www.youtube.com/watch?v=PveOVJYIu3I.
- 157 Scully and Van Toorn (n 153).
- 158 Kak (n 98).
- 159 ‘Covid-19 Is Accelerating the Surveillance State’, The Strategist (17 November 2020) https://www.aspistrategist.org.au/covid-19-is-accelerating-the-surveillance-state/; ‘Homo Deus Author Yuval Harari Shares Pandemic Lessons from Past and Warnings for Future’, South China Morning Post (online, 1 April 2020) https://www.scmp.com/news/china/article/3077960/homo-deus-author-yuval-harari-shares-pandemic-lessons-past-and-warnings.
- 160 See e.g. Cosgrove et al (n 98); Bossewitch, ‘The Rise of Surveillance Psychiatry and the Mad Underground’ (n 133); Harris, ‘The Rise of the Digital Asylum’ (n 119).
- 161 Jonas Rauber, Emily B Fox and Leon A Gatys, ‘Modeling Patterns of Smartphone Usage and Their Relationship to Cognitive Health’ [2019] arXiv:1911.05683 [cs, stat] http://arxiv.org/abs/1911.05683.