This paper sets out a philosophical framework for governing harmful speech on social media. It argues that platforms have an enforceable moral duty to combat various forms of harmful speech through their content moderation systems, and it pinpoints several underlying duties that together determine the content and stringency of this responsibility. It then confronts the objection that it is morally impermissible to use automated systems to moderate harmful content, given the propensity of AI to generate false positives and false negatives. After explaining why this objection is not decisive, the paper concludes by sketching some implications for legal regulation.