Generative AI has taken the world by storm over the last nine months, from artistic tools that may upend the creative economy, to an AI-powered ‘copilot for the web’ that just might threaten to kill you if you don’t do what it says. Recent (often prescient) work helps to map the potential harms of AI systems that generate text and images on demand, but can moral philosophy add a useful lens to help us understand which risks should concern us most, and which we can (for now) discount? For example, how should we weigh and respond to the risks of manipulative but narrow dialogue agents against those of hypothetical future systems with more general capabilities? And how will the prospect of governing algorithmic systems with natural language prompts affect long-standing debates in machine ethics? This talk takes the first steps towards developing a ‘generative (AI) ethics’, offering suggestions for how moral philosophy can help us understand, prioritise, and reduce the risks posed by recent advances in AI.