Recent advancements in the performance of large language models are driving renewed concern about AI safety and existential risk, igniting debate about the near-term and long-term priorities for AI ethics research as well as philanthropic giving. In this talk, I challenge conventional AI risk narratives as motivated by an anthropocentric, distorted and narrowed vision of intelligence, one that reveals more about ourselves and our past than about the future of AI. I argue for an anti-deterministic reconception of the relationship between AI and existential risk that more fully accounts for human responsibility, freedom and possibility.