Policing Platforms: The Ethics of AI-Powered Content Moderation
This paper sets out a philosophical framework for governing harmful speech on social media. It argues that platforms have an enforceable moral duty to combat various forms of harmful speech through their content moderation systems, and it identifies several underlying duties that together determine the content and stringency of this responsibility. The paper then confronts the objection that it is morally impermissible to use automated systems to moderate harmful content, given the propensity of AI to generate false positives and false negatives. After explaining why this objection is not decisive, it concludes by sketching some implications for legal regulation.
Date: 2 November 2022, 13:00 (Wednesday, 4th week, Michaelmas 2022)
Venue: Please register to receive details
Speaker: Dr Jeff Howard (UCL)
Organiser contact email address: aiethics@philosophy.ox.ac.uk
Host: Dr Charlotte Unruh (University of Oxford)
Part of: Ethics in AI Lunchtime Seminars
Booking required?: Required
Booking url: https://forms.office.com/Pages/ResponsePage.aspx?id=G96VzPWXk0-0uv5ouFLPkUbXexlJuMhCiksodiLwh4ZUOExBUlRNNFoxUVAwTkZZSlkxNjE3MTVMOC4u
Cost: Free
Audience: Public
Editors: Marie Watson, Lauren Czerniawska