2.7.1 Open-Source Data and Algorithms
Open-source principles promote making code, data and algorithms freely available for modification and redistribution. These principles can foster a collaborative, inclusive and community-minded approach to technology development, help distribute the benefits of technology more evenly across regions and populations, and guard against the monopolies or concentrations of power associated with particular technologies. Positive examples of open-source technologies exist in the mental health context, such as apps that place a premium on transparency.
CASE STUDY: LAMP (Learn, Assess, Manage, Prevent) – an open-source and freely available app
LAMP is a freely available and open-access app developed by a group of researchers to support ‘clinicians and patients […] at the intersection of patient demands for trust, control, and community and clinician demands for transparent, data driven, and translational tools’.398 The LAMP platform, according to John Torous and the researchers who led the initiative, ‘evolved through numerous iterations and with much feedback from patients, designers, sociologists, advocates, clinicians, researchers, app developers, and philanthropists’.399 The authors state:
As an open and free tool, the LAMP platform continues to evolve as reflected in its current diverse use cases across research and clinical care in psychiatry, neurology, anesthesia, and psychology. […] The code for the LAMP platform is freely shared […] to encourage others to adapt and improve on our team’s efforts.400
The app can reportedly be customised to each person to fit their personal care goals and needs, and research is underway to link the app to options for peer support.401 To promote input by people with lived experience of mental health interventions, the researchers used ‘guided survey research, focus groups, structured interviews, and clinical experience with apps in the mental health care settings, [in a process of seeking] early and continuous input from patients on the platform’.402
Open science principles also carry risks, insofar as technologies may be repurposed for harmful ends. These risks may be particularly acute where biometric monitoring is concerned. Consider the following comment by academic psychiatrist Neguine Rezaii, discussing her biometric monitoring research:
[w]hen I published my paper on predicting schizophrenia, the publishers wanted the code to be openly accessible, and I said fine because I was into liberal and free stuff. But then what if someone uses that to build an app and predict things on weird teenagers? That’s risky. […] [Open science advocates] have been advocating free publication of the algorithms. [My prediction tool] has been downloaded 1,060 times so far. I do not know for what purpose…403
Some technologies developed with good intentions, which may at first appear suitable for open access, could be repurposed in unexpected and harmful ways and may need to be withheld from publication. For example, researchers might develop an algorithmic tool to quickly identify social media users who appear to be LGBTQI+ young people, so that specifically designed crisis support can be directed to them. Such a tool carries inherent risks, including the possibility of its use by bad actors hostile to LGBTQI+ people. Platform regulation that adequately combats online abuse, harassment and vilification would be the clearest path to addressing this risk (and would carry broader mental health benefits). In lieu of better platform regulation, one option for addressing the tension raised by open science principles is to establish disclosure processes through which algorithms can be submitted to validation or certification agencies that serve, in effect, as auditing and accountability bodies, in ways that respect that some algorithmic technologies should not be made open source.404
- 390 European Commission High-Level Expert Group on Artificial Intelligence (n 135) p.18.
- 391 Til Wykes and Stephen Schueller, ‘Why Reviewing Apps Is Not Enough: Transparency for Trust (T4T) Principles of Responsible Health App Marketplaces’ (2019) 21(5) Journal of Medical Internet Research e12390.
- 392 Ibid.
- 393 Ibid.
- 394 Fjeld et al (n 88) pp.42-43.
- 395 Julia Amann et al, ‘Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective’ (2020) 20(1) BMC Medical Informatics and Decision Making 310.
- 396 Select Committee on Artificial Intelligence (n 370).
- 397 Fjeld et al (n 88) p.43.
- 398 John Torous et al, ‘Creating a Digital Health Smartphone App and Digital Phenotyping Platform for Mental Health and Diverse Healthcare Needs: An Interdisciplinary and Collaborative Approach’ (2019) 4(2) Journal of Technology in Behavioral Science 73.
- 399 Ibid.
- 400 Ibid.
- 401 Ibid.
- 402 Ibid p.75.
- 403 David Adam, ‘Machines Can Spot Mental Health Issues—If You Hand over Your Personal Data’, MIT Technology Review (online, 13 August 2020) https://www.technologyreview.com/2020/08/13/1006573/digital-psychiatry-phenotyping-schizophrenia-bipolar-privacy/.
- 404 The Institute of Electrical and Electronics Engineers (IEEE) has proposed disclosure processes through which algorithms may be submitted to validation or certification agencies that can effectively serve as auditing and accountability bodies, in ways that respect that some algorithmic technologies should not be made open source. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (First Edition, 2019).