Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Complexity on Health #3: Improving healthcare and medicine without undermining professionals’ agency and autonomy

Participants largely acknowledge that health technologies (including AI) can support health professionals in medical decision-making (they may even perform better at some tasks). Similarly, they highlight that automating certain tasks may free up time for the human dimensions of caregiving and healthcare (for instance with caregiving robots). Some participants also point out that AI and digital technologies can facilitate access to healthcare and health-related information, notably for preventive care (especially in more isolated or poorer areas). The idea also emerges that digital technologies can improve medical training (e.g. with virtual or augmented reality).

There is, however, also broad consensus in the discussions that AI and health technology should contribute to a more humanized healthcare system. In general, machines should not replace humans. In particular, tasks pertaining to medical decision-making, communication, and caregiving should remain human. Although it is true that health professionals and caregivers often lack time and are exhausted, and that healthcare systems are under high pressure, AI technologies may not constitute the right or primary answer to these major issues.

Participants also insist that health professionals and caregivers should remain in charge of decision-making, and that overdependence on such technologies may prove harmful in the long run (deskilling, loss of resilience should the technologies become unavailable). Importantly, (moral) responsibility for medical decision-making should remain in the hands of humans.

The following ideas can be found in the global and local syntheses, downloadable here:

  • AI and health technologies can improve medicine and health care:
    • (Global – Health) Acknowledging the positive contribution of health technologies to healthcare
  • AI and health technologies should not lead to dehumanization of healthcare and medicine:
    • (Global – Health) Privileging AI cooperation and support instead of human replacement
  • Risk of overdependence and of problems with responsibility:
    • (Global – Health) Preserving human agency and autonomy (in healthcare)
    • (Global – Health) Never believing we can delegate (moral) responsibility to machines
    • (Global – Health) Fostering literacy and critical thinking
Insights from the NHNAI academic network:

One aspect of healthcare is often overlooked: the mechanisms of self-care that the brain-body relationship activates when a person feels cared for. These frequently overlooked mechanisms are at stake in certain placebo effects which, though sometimes invoked to downplay the importance and impact of pharmacological treatments, highlight the remarkable capacity of human bodies to engage mechanisms of self-repair and pain reduction that increase human well-being. This placebo effect is often gated by the encounter between the person’s beliefs and a given clinical context or contact with a human practitioner, and has been shown to engage specific brain systems in the placebo-responsive population. Because this effect relies on patients’ recognition of the agency of caring medical practitioners (“it’s a human like me that is helping me”), it is important to keep the human bond and interaction in healthcare (including human touch, as when the doctor examines the body through physical contact, eye contact with the doctor, and conversation with the health practitioner), so as to keep these placebo mechanisms active in the broader process of furthering medical and psychological well-being.

Despite its advantages in healthcare, AI also carries risks, such as the “deskilling” of professionals. Having grown too accustomed to relying on AI, doctors and nurses risk losing important skills. Their ability to question recommendations emanating from AI, even in the event of divergent clinical judgment, may also be blunted (López et al., 2020). This overconfidence in the results produced by AI reflects a more general “automation bias”, whereby recommendations issued by AI are considered more reliable, even in cases where human intervention would be more relevant (Skitka, Mosier, & Burdick, 1999). This situation can lead caregivers to make serious errors, either by following misleading recommendations or by neglecting important elements in the absence of guidance from the machine (Parasuraman & Riley, 1997). The overall resilience of the healthcare system could thus be weakened by the progressive inability of professionals to deal autonomously with complex or novel situations, such as rare pathologies or AI system malfunctions.

Despite the gains AI brings in data analysis and diagnostics, automation also raises important ethical questions, such as the need for human professionals to continue to shoulder responsibility for medical decisions and weigh up their moral implications, especially in cases of direct impact on patients’ lives (Floridi & Cowls, 2019).

We must therefore stress the need for healthcare staff to be trained to exercise independent judgment and to deviate from AI recommendations when necessary. The integrity of healthcare can only be sustained if AI complements, rather than completely replaces, human expertise.

Academic References:

  • Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1).
  • López, G., Valenzuela, O., Torres, J., Troncoso, A., Martínez-Álvarez, F., & Riquelme, J. C. (2020). A conceptual framework for intelligent decision support systems (iDSS) in medical decision making. Decision Support Systems, 130, 113232.
  • Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
  • Skitka, L. J., Mosier, K., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006.

What justifies this “should,” the idea that health professionals “should” retain their agency and autonomy? What if AI gives better care, better health outcomes, and is more caring towards patients? If this is done wrongly, machines will do all the work, but humans will take all the blame when anything goes wrong. Humans become “the fall guys” for complex systems that no individual can reasonably be held responsible for.

This issue of machines and moral responsibility is a huge one, because machines will make mistakes and the humans “responsible” for those machines could easily be made scapegoats. “Operator error” is often the excuse of first resort when a machine fails, even if the real blame lies in an extremely complex system of interactions that no individual could reasonably be expected to understand or be responsible for.

Concerned actors and professionals should know the limits of the technology they are using and maintain a healthy skepticism toward it. Despite this, automation bias is likely to intrude and disempower healthcare providers, their patients, and others, who may come to see a computer recommendation as something they cannot dispute; and if they do oppose it and turn out to be wrong, they risk being held liable and possibly punished.

It is only a matter of time before AI systems are standard practice in many areas of medicine. Using anything less than the medical standard would be viewed as backward, or even as grounds for malpractice. We should not think of AI as an alien imposition on the medical field; rather, it is arriving because there are certain problems that AI can solve better than humans can, and these tools therefore ought to be used for those purposes.