Generative AI has taken the world by storm over the last nine months, from artistic tools that may upend the creative economy, to an AI-powered ‘copilot for the web’ that just might threaten to kill you if you don’t do what it says. Recent (often prescient) work helps to map the potential harms of AI systems that generate text and images on demand, but can moral philosophy add a useful lens to help us understand which risks should concern us most, and which we can (for now) discount? For example, how should we weigh and respond to the risks of manipulative but narrow dialogue agents against those of hypothetical future systems with more general capabilities? And how will the prospect of governing algorithmic systems with natural language prompts affect long-standing debates in machine ethics? This talk takes first steps towards developing a ‘generative (AI) ethics’, offering suggestions for how moral philosophy can help us understand, prioritise, and reduce the risks posed by recent advances in AI.