Recent advancements in the performance of large language models are driving renewed concern about AI safety and existential risk, igniting debate about the near-term and long-term priorities for AI ethics research as well as philanthropic giving. In this talk, I challenge conventional AI risk narratives as motivated by an anthropocentric, distorted, and narrow vision of intelligence that reveals more about ourselves and our past than about the future of AI. I argue for an anti-deterministic reconception of the relationship between AI and existential risk, one that more fully accounts for human responsibility, freedom, and possibility.