Social Media Platforms as Public Health Arbiters: The Ethical Implications of Facebook’s Suicide Prevention Algorithm

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

The emergence of Facebook’s suicide prevention algorithm has prompted discussion about the role that social media platforms should play in public health intervention. Concerns have been raised about an entity that is not a legitimate public health authority collecting and acting on the personal health information of its users, particularly sensitive data such as an individual’s mental health status. Mental illnesses remain stigmatized, despite continued efforts in some areas of the world to normalize these conditions. Depending on a user’s geographic location, false positives for suicide risk generated by the algorithm could have severe consequences. This chapter seeks to develop this debate by examining the ethical implications of Facebook’s suicide prevention algorithm from privacy, legal, and cultural perspectives.
Original language: English
Title of host publication: AI in eHealth: Human Autonomy, Data Governance & Privacy in Healthcare
Editors: Marcelo Corrales Compagnucci, Michael Wilson, Mark Fenwick, Nikolaus Forgó, Till Bärnighausen
Place of Publication: Cambridge
Publisher: Cambridge University Press
Publication status: Accepted/In press - 2021
Series: Cambridge Bioethics and Law
