AI is inescapable, from its mundane uses online to its increasingly consequential decision-making in courtrooms, job interviews, and wars. The ubiquity of AI is so great that it might produce public resignation—a sense that the technology is our shared fate.
As economist Maximilian Kasy shows in The Means of Prediction, artificial intelligence, far from being an unstoppable force, is decisively shaped by human decisions—choices made to date by the ownership class that steers its development and deployment. The book clearly and accessibly explains the fundamental principles on which AI works, and, in doing so, reveals that the real conflict isn’t between humans and machines, but between those who control the machines and the rest of us.
The Means of Prediction offers a powerful vision of the future of AI: a future not shaped by technology, but by the technology’s owners. Amid a deluge of debates about technical details, new possibilities, and social problems, Kasy cuts to the core issue: who controls AI’s objectives, and how is this control maintained? The answer lies in what he calls “the means of prediction,” or the essential resources required for building AI systems: data, computing power, expertise, and energy. In a world already defined by inequality, one of humanity’s most consequential technologies has been and will be steered by those already in power.
In this book talk, Kasy will discuss the book’s framework for understanding AI’s capabilities and designing its public control, and will make the case for democratic control over AI objectives as the answer to mounting concerns about AI’s risks and harms.