As generative AI becomes more deeply integrated into many human activities, we may face a systematic credit-blame asymmetry: we may not deserve full credit or praise for the valuable outputs we create with generative AI (e.g., when we do not contribute sufficient skill or effort), yet we may be entirely blameworthy for harmful outputs (e.g., due to negligence or recklessness). How might patterns of praise and blame change, however, if we use personalised AI that is trained on our own past outputs, created solely by us? In this talk, I present recent theory and evidence on this question, drawing on data from the US, UK, China, and Singapore.