Recent advancements in the performance of large language models are driving renewed concern about AI safety and existential risk, igniting debate about the near-term and long-term priorities for AI ethics research as well as philanthropic giving. In this talk, I challenge conventional AI risk narratives as motivated by an anthropocentric, distorted and narrowed vision of intelligence, one that reveals more about ourselves and our past than about the future of AI. I argue for an anti-deterministic reconception of the relationship between AI and existential risk that more fully accounts for human responsibility, freedom and possibility.