Personalised Medicine and AI Art: Parallels and Contrasts in Privacy, Control, and Moral Rights

This paper compares and contrasts two prominent novel uses of AI, synthetic data in medicine and AI art, in order to develop a typology of how the building and dissemination of AI models may threaten individuals' ability to control flows of information, and of how such ethical claims should be weighed against the benefits these models may bring. While these questions will also play out in the legal sphere, through the GDPR and/or intellectual property law, this paper focuses on how best to articulate and reconcile the ethical claims in play.

Much machine learning depends on high-quality datasets on which to build and test models. Questions about privacy and rights of control arise both on the ingest side and on the output side. Where machine learning requires the ingestion of large datasets containing information that is personal, confidential, or otherwise bound up with individuals' agency, it can be asked whether individualised consent is required. Model outputs may also give rise to individualised complaints, even though those outputs are shaped not just by a particular individual's data but by the data of thousands of others. For example, a completely synthetic medical dataset could nonetheless threaten individuals' privacy if it allows inferences to be drawn about the real-world data from which it was generated; an AI art model could produce works that an artist regards as derivative of their individual style.
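To make the synthetic-data point concrete, the following is a minimal, hypothetical sketch, not drawn from the paper or from any particular system, of one way such inferences can arise. It assumes an overfit generator whose synthetic records sit unusually close to the real training records, and uses a simple distance-to-closest-record comparison as the attacker's inference; all names, sizes, and thresholds are illustrative assumptions.

```python
# Hypothetical illustration: "fully synthetic" data can still leak information.
# If a generator overfits, synthetic records lie suspiciously close to the real
# training records, letting an attacker infer that particular individuals were
# in the source dataset. All figures here are assumed for illustration.

import numpy as np

rng = np.random.default_rng(0)

# "Real" patient records: 200 individuals, 5 numeric attributes (e.g. lab values).
real = rng.normal(size=(200, 5))

# An overfit "synthetic" release: mostly freshly sampled noise, plus near-copies
# of some real records with only tiny perturbations added.
fresh = rng.normal(size=(160, 5))
copied = real[:40] + rng.normal(scale=0.01, size=(40, 5))
synthetic = np.vstack([fresh, copied])

def distance_to_closest_record(targets, release):
    """For each target record, the distance to its nearest synthetic record."""
    diffs = targets[:, None, :] - release[None, :, :]
    return np.linalg.norm(diffs, axis=2).min(axis=1)

# The attacker compares records known to be in the training data ("members")
# against comparable records that were not ("non-members").
members = real[:40]
non_members = rng.normal(size=(40, 5))

d_members = distance_to_closest_record(members, synthetic)
d_non_members = distance_to_closest_record(non_members, synthetic)

print(f"median distance to closest record, members:     {np.median(d_members):.3f}")
print(f"median distance to closest record, non-members: {np.median(d_non_members):.3f}")
# A markedly smaller distance for members signals that the synthetic release
# effectively reveals who was in the source dataset, even though no real
# record is published verbatim.
```

The point of the sketch is only that "synthetic" does not automatically mean "anonymous": the privacy claim attaches to what can be inferred from the release, not to whether the released records are literal copies.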