Technological innovations in healthcare, perhaps now more than ever, present decisive opportunities for improvements in diagnostics, treatment, and overall quality of life. Artificial intelligence and big-data processing, in particular, stand to transform healthcare systems as we have known them. But what effect do these technologies have on human agency and moral responsibility in healthcare? And how can patients, practitioners, and the general public best respond when responsibility becomes obscured? In this project, I investigate the social and ethical challenges raised by emerging medical technologies, specifically the ways in which artificially intelligent systems may enhance or threaten responsibility in the delivery of healthcare. I suggest that if our ability to locate responsibility is threatened, we face a serious dilemma. On the one hand, it might seem that we should exercise extreme caution, or even restraint, in deploying state-of-the-art systems, thereby forgoing benefits such as improved quality of care. On the other hand, we might need to loosen our commitment to locating moral responsibility when patients come to harm. What is clear, at least, is that the shift toward artificial intelligence and big data demands a corresponding shift in our expectations about how, if at all, notions of agency and responsibility can be located in emerging models of healthcare.