AI is inescapable, from its mundane uses online to its increasingly consequential decision-making in courtrooms, job interviews, and wars. Its very ubiquity can breed public resignation: a sense that the technology is our shared fate.
As economist Maximilian Kasy shows in The Means of Prediction, artificial intelligence, far from being an unstoppable force, is decisively shaped by human decisions: choices made, to date, by the ownership class that steers its development and deployment. The book clearly and accessibly explains the fundamental principles on which AI works and, in doing so, reveals that the real conflict isn’t between humans and machines, but between those who control the machines and the rest of us.
The Means of Prediction offers a powerful vision of the future of AI: a future shaped not by the technology itself, but by the technology’s owners. Amid a deluge of debates about technical details, new possibilities, and social problems, Kasy cuts to the core issue: who controls AI’s objectives, and how is this control maintained? The answer lies in what he calls “the means of prediction,” the essential resources required for building AI systems: data, computing power, expertise, and energy. In a world defined by inequality, one of humanity’s most consequential technologies has been, and will continue to be, steered by those already in power.
In this book talk, Kasy will discuss the book’s framework for understanding AI’s capabilities and designing its public oversight, and will present its compelling case for democratic control over AI’s objectives as the answer to mounting concerns about the technology’s risks and harms.