“Realism” in international relations is constructed from past experience of what is likely and what is possible in the world. AIs may change this sense of the possible tremendously – shifting both the ways that countries can compete and undermine each other, and the deals that might become possible. On top of that, AI itself will become a strategic asset – and target – of great value.
This talk will argue for why AI could become so powerful, sketch the dangers intrinsic to AI and to its misuse by bad actors, and discuss how the world could be transformed by these technologies.
Stuart Armstrong’s research at the Future of Humanity Institute centres on the safety and possibilities of Artificial Intelligence (AI), how to define the potential goals of AI and map humanity’s partially defined values into it, and the long-term potential for intelligent life across the reachable universe. He has been working with people at FHI and other organisations, such as DeepMind, to formalise AI desiderata in general models so that AI designers can include these safety methods in their designs. His collaboration with DeepMind on “Interruptibility” has been mentioned in over 100 media articles.
Stuart Armstrong’s past research interests include comparing existential risks in general, including their probability and their interactions; anthropic probability (how the fact that we exist affects our probability estimates around that key fact); decision theories that are stable under self-reflection and anthropic considerations; negotiation theory and how to deal with uncertainty about your own preferences; computational biochemistry; fast ligand screening; and parabolic geometry. His Oxford DPhil was on the holonomy of projective and conformal Cartan geometries.
A sandwich lunch will be served at 12.45.