Recent advancements in the performance of large language models are driving renewed concern about AI safety and existential risk, igniting debate about the near-term and long-term priorities for AI ethics research as well as philanthropic giving. In this talk, I challenge conventional AI risk narratives as motivated by an anthropocentric, distorted and narrowed vision of intelligence that reveals more about ourselves and our past than about the future of AI. I argue for an anti-deterministic reconception of the relationship between AI and existential risk, one that more fully accounts for human responsibility, freedom and possibility.