State actors increasingly use machine-learning tools to make decisions that significantly affect people’s lives. The worry that human agency is being eclipsed has, in turn, given rise to assertions of a novel ‘right to a human decision’ – roughly, a right not to be subject to fully automated decision-making. This talk explores how such a right might be justified by identifying three possible approaches. The first grounds the right to a human decision in the notion of human dignity; the second in the desire to be judged by an agent with the capacity to grasp and explain moral reasons; and the third justifies the right on the grounds that it is constitutive of, and contributes to, the exercise of certain democratic values. On the basis of this discussion, I suggest three things. First, justifying a ‘right to a human decision’ is, despite its intuitive appeal, a surprisingly involved – though not impossible – enterprise. Second, such a right may give us considerably more than we bargained for. Finally, what appeals to such a right seek to accomplish may, in many cases, be better achieved by broader principles of justice that cannot be reduced to individual rights. In the end, working out what we owe to one another as we continue to make AI increasingly powerful and prevalent requires working out not just what our values are, but also why we hold them.