Foreword
Tech-heavy healthcare interventions relying on algorithmic systems and big data are increasingly widespread, prone to attention-grabbing headlines, and woefully underregulated. Public considerations of these tools and systems too often take the corporations developing these interventions at their word, both about their capabilities and their intentions, and too rarely consider the experience or needs of the people on whom the technology is meant to be used. Public policy discussions focus more on cost-saving and resource management than on the implications of medical and carceral surveillance.
News coverage of algorithmic health systems rarely, if ever, discusses the individual and social harms of algorithmic datafication — the fact of being rendered into statistical formulae for the training of machine learning models. And awareness in the general public is more often concerned with the convenience and ease of life supposedly offered by these systems and their purveyors than with any negative externalities for disabled people, especially people with lived experience of psychosocial disability.
Whether it’s Amazon’s forays into the prescription drug marketplace or the biometric monitoring of its “Halo” product; Facebook/Meta’s many interventions into how to manipulate or capitalize on the mental states of its users; or the many cases of online advertisers, mental health apps, and even suicide watch services surveilling users without their consent, there are numerous cases of predatory design and implementation of algorithmic systems that are supposedly meant to help those in crisis. And throughout these events, what has been missing again and again is any input, oversight, or regulatory control from the people whose lives are most directly affected by the social and technological systems of mental healthcare: The recipients of said care.
This report aims to change that.
This work takes shape from many years of research, writing, and collaboration in the contexts of advocacy, public policy, and the development of algorithmic systems in healthcare settings. The authors of this report not only work within and report on these fields, but in most cases have direct lived experience of mental health care and of a variety of algorithmically-mediated systems within it. From their multiple valences of expertise, the authors present a clear and evidentially sound argument for how we should understand the algorithmic healthcare ecosystem’s intents and impacts, along with strong recommendations for what needs to be done to achieve much-needed change.
Too much of the current shape of these discussions has been and continues to be determined by those who have declared that the problems and needs of people with lived experience of psychosocial disability can be understood from some purely objective vantage, and that they have the perfect technological solutions. This paradigm of advocating for and developing algorithmic mental health tools has in turn exacerbated harms done to those in crisis, by not recognising either the particularities of their circumstances, or the systemic stigma these systems can reinforce. In this report, the authors present an alternative framework in which every step of the design, training, implementation and regulation of algorithmic healthcare systems would be done with the direct involvement of people who know the worst things that these systems can do, because they’ve lived it.
In particular, the lived experiential expertise of the authors informs one of the most crucial interventions offered in this report: The recommendation that people must have the ability to opt out of automated decision-making without fear of stigma or punishment.
Too often the normalization of new technosocial contexts means that anyone who resists that context is seen as an outlier or a radical — a Luddite, in the most pejorative sense. When the tendency toward normative judgement and othering is combined with Western society’s pervasive stigma around mental health, those who resist algorithmically-driven mental health surveillance — no matter the reason — are regarded with even greater suspicion and potential scorn. To directly challenge this multivalent stigmatization and create more just and equitable outcomes for people with lived experience of psychosocial disability, designers and adjudicators of these systems must not only allow users to opt out of the algorithmic collection and use of their data, but also accept that some innovations may need to be abolished and avoided entirely.
The work of embedding algorithmic tools in any aspect of human life is necessarily interdisciplinary, combining machine learning systems development, social science interventions, mental health experience, various elements of the humanities, and public policy. In this report, authors Bossewitch, Brown, Gooding, Harris, Horton, Katterl, Myrick, Ubozoh, and Vasquez put forward a crucial lens through which these seemingly disparate fields can first recognize their need to collaborate, and then center and heed the voices of those most at risk of being harmed.
Their recommendations and understandings pave the way for a much-needed reconsideration and reformulation of how we think about AI, healthcare, and public life.
August 2022
Damien Patrick Williams is an assistant professor in Philosophy and Data Science at the University of North Carolina at Charlotte. He holds a PhD in Science, Technology, and Society from Virginia Tech in the United States, and researches how technologies such as algorithms, machine intelligence, and biotechnologies are impacted by the values, knowledge systems, philosophical explorations, social structures, and even religious beliefs of human beings. He is especially concerned with how the knowledge and experience of marginalised peoples affect the technosocial structures of human societies. Damien is a member of the Project Advisory Committee for the Center for Democracy and Technology's Project on Disability Rights and Algorithmic Fairness, Bias, and Discrimination, and of the Disability Inclusion Fund's Tech & Disability Stream Advisory Committee. More about his research can be found at A Future Worth Thinking About Dot Com.