Compute North vs. Compute South: The Uneven Possibilities of Compute-based AI Governance Around the Globe
Formerly known as the ‘Cyber Strategy & Technology Studies Working Group’
Postgraduate students, fellows, staff and faculty from any discipline are welcome. This group aims to foster frequent interdisciplinary critical dialogue across Oxford and beyond about the political impacts of emerging technologies.
Please contact Elisabeth Siegel at elisabeth.siegel@politics.ox.ac.uk in advance to participate or with any questions. Remote attendance is possible, but in-person attendance is prioritized (and refreshments are provided).
Discussion topics will be finalized and optional readings will be sent out a week in advance. You do not currently have to be affiliated with the University of Oxford to attend and participate in discussions.

Abstract: Governments have begun to view AI compute infrastructures, including advanced AI chips, as a geostrategic resource. This is partly because "compute governance" is believed to be emerging as an important tool for governing AI systems. In this governance model, states that host AI compute capacity within their territorial jurisdictions are likely to be better placed to impose their rules on AI systems than states that do not. In this study, we provide the first attempt at mapping the global geography of public cloud GPU compute, one particularly important category of AI compute infrastructure. Using a census of hyperscale cloud providers' cloud regions, we observe that the world is divided into "Compute North" countries that host AI compute relevant for AI development (i.e., training), "Compute South" countries whose AI compute is more relevant for AI deployment (i.e., running inference), and "Compute Desert" countries that host no public cloud AI compute at all. We generate potential explanations for the results using expert interviews, discuss the implications for AI governance and technology geopolitics, and consider possible future trajectories.
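To make the abstract's three-way framing concrete, the following is a minimal illustrative sketch of how a country-level classification might be derived from census-style data on cloud regions. The GPU tiers, country names, and region data below are assumptions chosen for illustration only; they do not reflect the study's actual census, thresholds, or methodology.

```python
# Illustrative sketch only: GPU tiers and example data are assumptions,
# not the study's actual census or classification criteria.

# GPU types commonly associated with large-scale training vs.
# inference-oriented serving (assumed tiers, not an official taxonomy).
TRAINING_GPUS = {"H100", "A100"}
INFERENCE_GPUS = {"T4", "L4", "V100"}

# Hypothetical census rows: country -> set of GPU types offered across its
# public cloud regions. An empty set means no GPU-equipped cloud region.
EXAMPLE_CENSUS = {
    "Country A": {"H100", "A100", "T4"},
    "Country B": {"T4", "L4"},
    "Country C": set(),
}


def classify(gpu_types: set) -> str:
    """Assign a country to Compute North, Compute South, or Compute Desert
    based on the GPU types available in its public cloud regions."""
    if gpu_types & TRAINING_GPUS:
        return "Compute North"   # hosts training-relevant AI compute
    if gpu_types & INFERENCE_GPUS:
        return "Compute South"   # hosts deployment/inference-oriented compute
    return "Compute Desert"      # hosts no public cloud AI compute


if __name__ == "__main__":
    for country, gpus in EXAMPLE_CENSUS.items():
        print(f"{country}: {classify(gpus)}")
```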

About the speaker: Boxi Wu is a DPhil student at the OII. Their research examines the social and political impacts of AI, with a particular focus on the materiality of AI infrastructure and its implications for AI ethics and governance. They have previously worked as a policy researcher, lecturer, and strategy consultant. Prior to their DPhil, they completed their MSc at the OII and spent four years at DeepMind on the Responsible AI team, working on the ethical and societal implications of frontier AI models, spanning both LLMs and multimodal models. In this role, they advised teams on ethical risks and mitigations and led internal ethics and safety governance forums, most recently around the release of Google DeepMind's Gemini models.
Date: 14 November 2024, 15:00
Venue: Nuffield College, New Road, OX1 1NF
Speaker: Boxi Wu (Oxford Internet Institute, University of Oxford)
Organising department: Department of Politics and International Relations (DPIR)
Organisers: Elisabeth Siegel (University of Oxford), Changing Character of War Centre (CCW)
Organiser contact email address: elisabeth.siegel@politics.ox.ac.uk
Part of: Oxford Technology & Security Nexus
Booking required?: Required
Audience: Public
Editors: Elizabeth Robson, Elisabeth Siegel