Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tension(s) between ideas within a theme, as they emerged in the collective discussions, complemented by insights from NHNAI network researchers.
Complexity on Health #3: Improving healthcare and medicine without undermining professionals’ agency and autonomy
Participants largely acknowledge that health technologies (including AI) can support health professionals in medical decision making (and may even perform better in some tasks). Similarly, they highlight that automating certain tasks may free up time for the human dimensions of caregiving and healthcare (for instance with caregiving robots). Some participants also point out that AI and digital technologies can facilitate access to healthcare and health-related information, notably for prevention and preventive care, especially in more isolated or poorer areas. The idea also emerges that digital technologies can improve medical training (e.g. with virtual or augmented reality).
There is, however, also broad consensus in the discussions that AI and health technology should contribute to a more humanized healthcare system. In general, machines should not replace humans. In particular, tasks pertaining to medical decision-making, communication, and caregiving should remain human. Although it is true that health professionals and caregivers often lack time and are exhausted, and that healthcare systems are under high pressure, AI technologies may not constitute the right or primary answer to these major issues.
Participants also insist that health professionals and caregivers should remain in charge of decision making, and that overdependence on such technologies may prove harmful in the long run (deskilling, loss of resilience should the technologies become unavailable). Importantly, the (moral) responsibility for medical decision making should remain in the hands of humans.
Insights from the NHNAI academic network:
A. Cooperation, independence and responsibility
Based on insights from Fernand Doridot, associate professor in ethics and philosophy of sciences and technologies (ICAM – Catholic University of Lille, ETHICS EA7440, France), and Brian P. Green, professor in AI ethics and director of technology ethics at the Markkula Center for Applied Ethics (Santa Clara University, USA).
The dangers of automation bias and deskilling
Despite its advantages in healthcare, AI also carries risks, such as the “deskilling” of professionals. Having grown too accustomed to relying on AI, doctors and nurses risk losing important skills. Their ability to question recommendations emanating from AI, even when their own clinical judgment diverges, may also be blunted.[1] This overconfidence in AI-produced results reflects a more general “automation bias”, whereby recommendations issued by AI are treated as more reliable than human judgment, even in cases where human intervention would be more appropriate.[2] This situation can lead caregivers to make serious errors, whether by following misleading recommendations or by neglecting important elements when the machine offers no guidance.[3] The overall resilience of the healthcare system could thus be weakened by the progressive inability of professionals to deal autonomously with complex or novel situations, such as rare pathologies or AI system malfunctions.
The professionals and other actors concerned should therefore know the limits of the technologies they use, and a healthy skepticism toward those technologies should be part of their training.
Responsibility attribution
Despite the gains brought by AI in data analysis and diagnostics, automation also raises important ethical questions, such as the need for human professionals to continue to shoulder responsibility for medical decisions and to weigh their moral implications, especially when those decisions directly affect patients’ lives.[4]
Preserving human responsibility, however, is not without difficulties. For instance, automated systems will make mistakes, and the humans “responsible” for those machines could easily be made scapegoats. “Operator error” is often the excuse of first resort when a machine fails, even if the real blame lies in an extremely complex system of interactions that no individual could reasonably be expected to understand or be responsible for. Moreover, contradicting a machine carries a risk that health professionals could become more and more reluctant to take, especially given the aforementioned automation bias, which is likely to intrude on and disempower healthcare providers, their patients, and others. These actors may come to see a computer recommendation as something they are unable to dispute, knowing that if they oppose it and are wrong, they will be held liable and possibly punished.
It will thus be key to acknowledge the work that is genuinely performed by machines. It is problematic if health practitioners take all the blame whenever anything goes wrong: they would become “the fall guys” for complex systems that no individual can reasonably be held responsible for.
Independence of judgement and AI as a complement
It is only a matter of time before AI systems become standard practice in many areas of medicine; using anything less than the medical standard would then be viewed as backward, or even as grounds for malpractice. We should not think of AI as arriving as an alien imposition on the medical field. Rather, it should arrive because there are certain problems that AI can solve better, and this is to be judged from within healthcare practice, by practitioners themselves.
We must therefore stress the need for healthcare staff to be trained in independent judgment, including the ability to deviate from AI decisions when necessary. The integrity of healthcare can only be sustained if AI complements, rather than completely replaces, human expertise.
[1] López, G., Valenzuela, O., Torres, J., Troncoso, A., Martínez-Álvarez, F., & Riquelme, J. C. (2020). A conceptual framework for intelligent decision support systems (iDSS) in medical decision making. Decision Support Systems, 130, 113232.
[2] Skitka, L. J., Mosier, K., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006.
[3] Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
[4] Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
B. Human contact and self-care mechanisms
One aspect of healthcare is often overlooked: the self-care mechanisms that the brain-body relationship activates when a person feels cared for. These mechanisms are at stake in certain placebo effects which, without downplaying the importance and impact of pharmacological treatments, highlight the remarkable capacity of human bodies to engage mechanisms of self-repair and pain reduction that increase well-being. This placebo effect is often gated by the encounter between the person’s beliefs and a particular clinical context or contact with a human practitioner, and it has been shown to engage specific brain systems in placebo-responsive individuals.
Because this effect draws on patients’ recognition of the agency of caring medical practitioners (“it is a human like me who is helping me”), it is important to preserve the human bond and interaction in healthcare, including human touch (as when the doctor auscultates the body through physical contact), eye contact with the doctor, and conversation with the health practitioner. Such bonds and interactions are indispensable to keeping these placebo mechanisms active in the broader process of fostering medical and psychological well-being.