2.4.1 Non-discrimination and the Prevention of Bias
The potential for algorithmic bias or discrimination is a well-documented issue. Much public discussion in this area has focused on gender, race and socio-economic inequality.304 However, disability, including mental health and psychosocial disability, ‘has been largely omitted from the AI-bias conversation’.305 Whittaker and colleagues have argued that ‘patterns of marginalization [concerning disability] are imprinted in the data that shapes AI systems, and embed these histories in the logics of AI’.306 For example, Ben Hutchinson and colleagues at Google demonstrated that social attitudes casting disability as bad and even violent – particularly in regard to mental health – were encoded in AI systems designed to detect hate speech and to identify negative/positive sentiment in written text.307 The ‘machine-learned model to moderate conversations’, according to Hutchinson and colleagues, classifies texts which mention disability, and particularly references to mental health, as more ‘toxic’, while ‘a machine-learned sentiment analysis model rates texts which mention disability as more negative’.308 Such studies highlight how biased datasets create biased algorithms, which can have major consequences for people’s lives, as the next example shows.
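To make the mechanism concrete, the sketch below shows how this kind of bias can be probed through simple sentence perturbation: a pretrained sentiment classifier scores near-identical sentences that differ only in whether they mention disability or mental health. This is an illustrative sketch only; it assumes the Hugging Face `transformers` library and its default pretrained sentiment model, and the template and phrases are hypothetical rather than those used by Hutchinson and colleagues.

```python
# Illustrative sketch: probing an off-the-shelf sentiment classifier for
# disability- and mental health-related bias via sentence perturbation.
# Assumes the Hugging Face `transformers` package; the template and phrases
# below are hypothetical, not those used in the studies cited above.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default pretrained model

TEMPLATE = "I am a person{}."
PHRASES = {
    "baseline": "",
    "mentions disability": " with a disability",
    "mentions mental health": " with a mental health condition",
}

for label, phrase in PHRASES.items():
    text = TEMPLATE.format(phrase)
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.97}
    print(f"{label:25s} {text!r:50s} -> {result['label']} ({result['score']:.2f})")

# If sentences that merely mention disability or a mental health condition are
# scored systematically more negative than the otherwise identical baseline,
# the model has absorbed the kind of bias described above.
```

Perturbation tests of this kind measure only one narrow form of model bias; they say nothing about how a score is subsequently used in decisions about people, which is the concern of the case studies that follow.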
CASE STUDY: Disability Discrimination in Automated Hiring Programs
Mr Kyle Behm was a high-achieving university student in the US.309 He was refused a minimum-wage job after reportedly being ‘red-lighted’ by the automated personality test he had taken as part of his job application. Mr Behm had previously accessed mental health services and had been diagnosed with a mental health condition. He only became aware of the ‘red-lighting’ after being informed by a friend who happened to work for the employer. Mr Behm applied for several other minimum-wage positions but was again seemingly ‘red-lighted’ following automated personality testing. Mr Behm’s father, a lawyer, publicised the widespread use of the job applicant selection program and launched a class-action suit alleging that the hiring process was unlawful. He argued that the personality test was equivalent to a medical examination, the use of which for hiring purposes would be illegal under the Americans with Disabilities Act of 1990 (‘ADA’). In November 2017, the US retailer Lowe’s announced a change to online application processes for retail employees ‘to ensure people with mental health disabilities can more readily be considered for opportunities with Lowe’s’.310
This case study is revealing. Mr Behm was seemingly harmed due to data to which he was never given access. Nor does it appear that Mr Behm had an easily accessible opportunity to contest, explain or investigate the test outcome. Cathy O’Neil argues that this type of algorithmic ‘red-lighting’ has the potential to ‘create an underclass of people who will find themselves increasingly and inexplicably shut out from normal life’.311
One response to biased algorithmic systems has been to focus on creating unbiased datasets. Datasets could be made more diverse, the argument goes, to capture diverse human experiences. This would avoid negative consequences for people who, through the various human and circumstantial complexities of their lives, are considered ‘statistical outliers’ whom algorithmic decision systems are ill-equipped to serve. This approach is certainly warranted in some circumstances, where good quality and representative data can help avoid biased or discriminatory outcomes.
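As a purely illustrative sketch of what this dataset-focused response involves in practice, the snippet below compares subgroup representation in a hypothetical training dataset against a reference population figure. The column name, data and reference prevalence are invented for illustration.

```python
# Illustrative sketch: a simple representation audit of a (hypothetical)
# training dataset. Column name, data and reference figure are invented.
import pandas as pd

# Hypothetical training data with a self-reported attribute column.
train = pd.DataFrame({
    "applicant_id": range(1, 11),
    "psychosocial_disability": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
})

# Hypothetical reference prevalence in the relevant population (assumed 20%).
reference_prevalence = 0.20

observed = train["psychosocial_disability"].mean()
print(f"Observed share in dataset: {observed:.0%}")
print(f"Reference prevalence:      {reference_prevalence:.0%}")

if observed < reference_prevalence:
    print("Group appears under-represented; a model trained on this data may "
          "treat its members as 'statistical outliers'.")
```

Checks of this kind can surface under-representation, but as the following paragraphs argue, they do not address discrimination built into how data is used.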
However, the aim of creating unbiased datasets will be insufficient in many situations. Meredith Broussard criticises this approach as being commonly ‘technochauvinist’ in nature.312 Technochauvinism refers here to the false view that technological solutions provide ‘appropriate and adequate fixes to the deeply human problem of bias and discrimination’.313
Representative and high-quality datasets will be important in some instances, but Broussard’s criticism suggests that there is a second major category of discrimination at play: namely, discrimination perpetuated by human systems and institutions using data concerning mental health. Examples might include insurance companies discriminating against people based on data showing that they accessed mental health services at one time,314 or police and border authorities discriminating against people based on non-criminal data concerning their engagement with mental health services, as the next example shows.
CASE STUDY: Discrimination by Human Systems: Police, Surveillance, and Mental Health
In 2018, the Florida state legislature in the US authorised the collection and digitisation of certain types of student mental health data and its distribution through a statewide police database.315 The purported aim was to prevent gun violence. The health-related information would reportedly be combined with social media monitoring activities, the precise nature of which was undisclosed. Journalists later reported that the type of information under consideration included ‘more than 2.5 million records related to those who received psychiatric examinations’ under the Florida Mental Health Act of 1971.316 The Department was reportedly considering including ‘records for over 9 million children in foster care, diagnosis and treatment records for substance abusers… and reports on students who were bullied and harassed because of their race or sexual orientation’.317
This example suggests that no amount of ‘unbiased datasets’ will offset the discriminatory premise of various digital initiatives, which are designed to intervene in the lives of persons with lived experience and disability on an unequal basis with others.
Discriminatory impacts are also more likely as algorithmic and data-driven technologies are applied in settings affecting marginalised populations, including settings in which there are broader constraints on individuals’ agency and cumulative effects of disadvantage. Affected groups could include ethnic and racial minorities, people facing involuntary psychiatric intervention, families or individuals facing housing insecurity, returning veterans, people with addiction, the previously or presently incarcerated, and migrants and asylum-seekers.318 LLana James points to these concerns when she asks: ‘How will data labelled as Black, poor, disabled or all three impact a person’s insurance rates?’319 James, writing in Canada, argues that current laws do not appear to protect health service recipients and patients, and calls for updated data laws to protect against ‘the perils of monetized data and the discriminatory algorithms they are generating’.320
Regarding insurance, Access Now have expanded on James’ point concerning exclusion and insurance-based discrimination, noting that:
- [i]nsurance actors have for some time perceived digital forensics as an economical means of constructing more informed risk assessments regarding social behaviour and lifestyles. This type of granular data on driving skills sets and perhaps on attitudinal traits around the driving task (derived from AI assisted driving technology) could allow the insurers to more accurately metricise risk. For an individual, the consequences are fairly obvious in rising premium costs or even in some cases no access to insurance. However, for society the long-term impacts may be less apparent in that it may result in cohorts of people being deemed uninsurable and therefore denied access to the roads.321
This point is concerning in the mental health context. The emergence in recent years of partnerships between insurance companies and mental health technology companies,322 and of other insurance company initiatives concerning mental health-related data, warrants scrutiny.323
The likelihood of disability-based discrimination will be compounded when data scientists, technologists, tech entrepreneurs, clinical innovators, and so on, are not aware of the potential for discrimination using data concerning mental health. Consider the following claims being made in the ‘emotion recognition’ industry in China.
CASE STUDY: Claims of the ‘Emotion Recognition’ Industry in China
Advocacy group Article 19 recently surveyed 27 Chinese companies whose emotion recognition technologies are being trialled in three areas: public security, driving safety, and educational settings.324 Companies like Taigusys Computing and EmoKit refer to autism, schizophrenia and depression as conditions they can diagnose and monitor using ‘micro-expression recognition’. The authors of the Article 19 report argued that data harms concerning mental health remain unaddressed, as does the lack of robust scientific evidence for these technologies:
While some emotion recognition companies allege they can detect sensitive attributes, such as mental health conditions and race, none have addressed the potentially discriminatory consequences of collecting this information in conjunction with emotion data… Firms that purportedly identify neurological diseases and psychological disorders from facial emotions fail to account for how their commercial emotion recognition applications might factor in these considerations when assessing people’s emotions in non-medical settings, like classrooms.325
AI Now Institute, an interdisciplinary research centre examining artificial intelligence and society, have called for a ban on the use of technology designed to recognise people’s emotions in ‘important decisions that impact people’s lives and access to opportunities’.326 This scope would surely extend to automated forms of diagnosis or proxy-diagnosis of cognitive impairments or mental health conditions. The proposed ban could apply to decisions concerning hiring, workplace assessment, insurance pricing, pain assessments or education performance. AI Now base their recommendation on two concerns: 1) the often-baseless scientific foundations of emotion recognition, and 2) the potential for bias and discrimination in the resulting decisions.327
Emotion or ‘affect’ recognition technology, such as facial recognition technology, raises important issues for this report. At least three points relating to non-discrimination and the mental health context are worth noting:
- First, traditional facial emotion recognition approaches based on ‘basic emotions theory’ have been discredited as pseudoscientific.328
- Second, and relatedly, there appear to be strong grounds to call for a moratorium on the use of affect technologies like facial recognition in any important decisions concerning mental health, including imputing intellectual and cognitive impairments or psychiatric diagnoses. Not only are the scientific foundations of such approaches generally spurious, which is probably reason enough to justify a moratorium, but the potential for discrimination based on impairment ascribed in this way is very poorly understood.
- Third, few people in the broader debate about affect or emotion recognition technologies, such as facial recognition, appear to be engaging with the expansion of behavioural sensing, ‘digital phenotyping’ and other forms of biometric monitoring and surveillance in the mental health context. The scientific basis of claims being made about behavioural sensing is currently being explored in the fields of psychiatry and psychology, with studies appearing in leading psychiatric and psychology journals. This warrants greater dialogue between those working on affect recognition technology in the mental health context and those engaged in broader societal debates about facial recognition and other biometric technologies.329
There are important differences between the motives and claims of commercial actors who are promoting affect or emotion recognition technology and those of mental health researchers conducting biometric monitoring in clinical studies, and there are major differences in the regulatory frameworks affecting each. In general, health research is far more tightly regulated, with more entrenched infrastructure for upholding ethical research involving humans. Although not without serious problems, including the interference of private industry with academic research330 and the growing reliance of universities on the private sector to fund research,331 the scholarly health research infrastructure appears better developed for the purposes of ethical oversight than many private sector uses of affect recognition technologies, including where those technologies are sold to government agencies such as police agencies and border authorities.
Nevertheless, there are many examples of overlap between commercial and clinical activities in the digital mental health context,332 and public scrutiny is required of clinical or scholarly claims about what behaviour can convey about a person’s inner world. It is not possible in this report to examine in detail the claims being made about psychiatric biometric monitoring. Instead, the aim in this section is to draw attention to the poorly understood potential for discrimination and bias in the use of such technologies in the mental health and disability context.
Concerns about discrimination need not relate to algorithmic systems. For example, a coalition of Australian organisations representing people with lived experience and psychosocial disability called for a suspension of a national electronic health records scheme, citing fears of discrimination if personal mental health histories were stolen, leaked or sold.333 Previous case studies cited throughout the report demonstrate how such discrimination might occur.
One important step toward preventing harms caused by data concerning mental health and disability – whether leaked, stolen or traded – is to strengthen non-discrimination rules concerning mental health and psychosocial disability.334 National discrimination laws may require amendment to ensure that discrimination on mental health grounds by online businesses is covered.335 Such amendments would be consistent with the goals and legislative history of anti-discrimination laws and would remove ambiguity regarding the status of websites, social media platforms and other online businesses.336 Remedies for individuals aggrieved by discriminatory behaviour and practices are also likely to require strengthening, including by ensuring substantive, verifiable and auditable standards of non-discrimination in the use of algorithmic and data-driven technologies.
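By way of illustration only, an ‘auditable standard’ of the kind envisaged here could involve routine disparate-impact testing of automated decisions. The sketch below compares selection rates between applicants who have and have not disclosed a psychosocial disability, using the ‘four-fifths’ rule of thumb from US employee-selection guidance as one conventional benchmark. The data, field names and threshold choice are hypothetical and illustrative, not a prescribed legal test.

```python
# Illustrative sketch: a disparate-impact check on automated hiring outcomes.
# The data are invented; the 'four-fifths' threshold is one conventional
# benchmark from US employee-selection guidance, used here purely as an
# example of a verifiable, auditable non-discrimination standard.
from dataclasses import dataclass

@dataclass
class Applicant:
    disclosed_psychosocial_disability: bool
    selected: bool

def selection_rate(applicants, disclosed):
    """Share of applicants in the given group selected by the automated tool."""
    group = [a for a in applicants if a.disclosed_psychosocial_disability == disclosed]
    return sum(a.selected for a in group) / len(group) if group else 0.0

# Hypothetical outcomes produced by an automated screening tool.
applicants = (
    [Applicant(True, s) for s in [True, False, False, False, False]] +
    [Applicant(False, s) for s in [True, True, True, False, False]]
)

rate_disclosed = selection_rate(applicants, disclosed=True)
rate_not_disclosed = selection_rate(applicants, disclosed=False)
impact_ratio = rate_disclosed / rate_not_disclosed if rate_not_disclosed else float("nan")

print(f"Selection rate (disclosed):     {rate_disclosed:.0%}")
print(f"Selection rate (not disclosed): {rate_not_disclosed:.0%}")
print(f"Impact ratio: {impact_ratio:.2f} (flag for review if below 0.80)")
```

Such checks are only one component of a substantive standard; they would need to sit alongside access to the underlying data, rights of explanation and contestation, and effective remedies of the kind described above.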
- 303 James (n 157).
- 304 Virginia Eubanks, Automating Inequality (Macmillan, 2018); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016).
- 305 Whittaker et al (n 1) p.8.
- 306 Whittaker et al (n 1) p.8.
- 307 Ben Hutchinson et al, ‘Social Biases in NLP Models as Barriers for Persons with Disabilities’ [2020] arXiv:2005.00813 [cs] http://arxiv.org/abs/2005.00813.
- 308 Ibid.
- 309 This account draws from: O’Neil, Weapons of Math Destruction (n 304).
- 310 Letter from Lowes and Bazelon Center for Mental Health Law (n 269).
- 311 Cathy O’Neil, ‘How Algorithms Rule Our Working Lives | Cathy O’Neil’, The Guardian (online, 1 September 2016) https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives.
- 312 Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (MIT Press, 2018).
- 313 Fjeld et al (n 85) p.48.
- 314 Victorian Equal Opportunity & Human Rights Commission, Fair-Minded Cover: Investigation into Mental Health Discrimination in Travel Insurance. (2019).
- 315 Scott Travis, ‘Florida Wants to Amass Reams of Data on Students’ Lives’, sun-sentinel.com (2019) https://www.sun-sentinel.com/local/broward/parkland/florida-school-shooting/fl-ne-school-shooting-database-deadline-20190709-i4ocsmqeivdmrhpauhyaplg52u-story.html.
- 316 Ibid.
- 317 Ibid.
- 318 Eubanks (n 304).
- 319 James (n 157).
- 320 Ibid.
- 321 Martin Cunneen, Martin Mullins and Finbarr Murphy, ‘Artificial Intelligence Assistants and Risk: Framing a Connectivity Risk Narrative’ (2020) 35(3) AI & SOCIETY 625, 627.
- 322 See eg. AIA Australia, ‘AIA AND MENTEMIA: Practical tips and techniques to help you take control of your mental wellbeing’ https://www.aia.com.au/en/individual/mentemia.html (accessed 14/10/22).
- 323 For research on the political economy of insurance technology, or ‘insurtech’ see http://www.jathansadowski.com/ (accessed 14/10/21).
- 324 Article 19, ‘Emotional Entanglement: China’s Emotion Recognition Market and Its Implications for Human Rights’ (January 2021) 19 https://www.article19.org/wp-content/uploads/2021/01/ER-Tech-China-Report.pdf.
- 325 Ibid.
- 326 Kate Crawford et al, AI Now 2019 Report (AI Now Institute, December 2019) p.6 https://ainowinstitute.org/AI_Now_2019_Report.html.
- 327 Ibid.
- 328 For review of evidence, see: Lisa Feldman Barrett, How Emotions Are Made: The Secret Life of the Brain (Houghton Mifflin Harcourt, 2017) 13–24.
- 329 See eg. Cosgrove et al (n 69); Friesen (n 76); Mohr, Shilton and Hotopf (n 72); Martinez-Martin et al (n 69).
- 330 Joanna Moncrieff, The Bitterest Pills: The Troubling Story of Antipsychotic Drugs (Palgrave Macmillan, 2013).
- 331 Cris Shore and Laura McLauchlan, ‘“Third Mission” Activities, Commercialisation and Academic Entrepreneurs’ (2012) 20(3) Social Anthropology 267; K Philpott, L Dooley, C O’Reilly and G Lupton, ‘The entrepreneurial university: examining the underlying academic tensions’ (2010) 31 Technovation 161–70.
- 332 Adam Rogers, ‘Star Neuroscientist Tom Insel Leaves the Google-Spawned Verily for ... a Startup?’ (11 May 2017) Wired https://www.wired.com/2017/05/star-neuroscientist-tom-insel-leaves-google-spawned-verily-startup/.
- 333 ‘Joint Letter to Minister Hunt – My Health Record: Call to Suspend My Health Record Roll Out’, Letter from Shauna Gaebler et al, 7 August 2018 http://being.org.au/2018/08/joint-letter-to-minister-hunt-my-health-records/.