Artificial Intelligence: Agentic capital, intelligence inequalities, and alignment
Members of the University only.
Join Kevin Vallier, Professor of Philosophy at the University of Toledo, and Thomas Simpson, Alfred Landecker Professor of Values and Public Policy at the Blavatnik School of Government, for a seminar on agentic capital, intelligence inequalities, and alignment.
AI will transform social order, yet we lack a general theory of how it impacts politics and the economy. In this event, Professor Vallier offers such a theory. AI’s transformative potential arises from its status as agentic capital: capital that can act and spawn autonomously. Professor Vallier will outline his paper that asks: How does agentic capital transform market structure, and what governs the resulting distribution of economic and political power?
Markets are currently pre-agentic. Agents are too unreliable for unsupervised deployment, so humans stay in the loop. But agents will soon make better spawning choices than humans. Given the breadth of tasks AI can perform, even small efficiency gains should induce superlinear agent growth. Intelligence will thus expand until physical infrastructure becomes the binding constraint: chips, energy, and queues. Expect the economy to transform in stages.
Once this process has begun, political and economic power will flow to compute owners, who will have a tremendous impact on outcomes through the accumulation of intelligence inequalities, or differential concentrations of agentic capital. These concentrations have dynamic properties owing not merely to AI's ability to self-replicate but also to its ability to self-modify. Even the agents' utility functions are not fixed parameters but strategic variables. This has stark consequences for human equality, creating vast intelligence inequalities between persons.
Agentic capital theory also has stark consequences for alignment. The field focuses too much on aligning individual agents, but personality engineering will fail even with good agents, since bad agents will outcompete them. Alignment must address the social dimensions of AI ecosystems: interaction and replication. It requires an AI constitution.
Please note this is an in-person event and is open to members of the University of Oxford only. Please use your University email address when registering.
Date:
3 March 2026, 16:30
Venue:
Blavatnik School of Government, Radcliffe Observatory Quarter, OX2 6GG
Venue Details:
In person only
Speakers:
Professor Kevin Vallier (University of Toledo),
Professor Tom Simpson (Blavatnik School of Government)
Organising department:
Blavatnik School of Government
Organiser:
Blavatnik School of Government (University of Oxford)
Organiser contact email address:
events@bsg.ox.ac.uk
Booking required?:
Required
Booking url:
https://www.bsg.ox.ac.uk/events/artificial-intelligence-agentic-capital-intelligence-inequalities-and-alignment
Audience:
Members of the University only
Editor:
Anna Ulshofer