Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page expresses the tension(s) between ideas within each theme, as they emerged in the collective discussions, complemented by insights from NHNAI network researchers.
Complexity on Health #2: Improving healthcare and medicine without losing sight of persons
Participants largely acknowledge that health technologies (including AI) can support health professionals in medical decision making (they may even perform better in some tasks). Similarly, they highlight that automating certain tasks may free up more time for the human dimensions of caregiving and healthcare (for instance with care-giving robots). Some participants also point out that AI and digital technologies can facilitate access to healthcare and health-related information, notably for prevention and preventive care (especially in more isolated or poorer areas). The idea also emerges that digital technologies can improve medical training (e.g. with virtual or augmented reality).
It is however also largely consensual in discussions that AI and health technology should contribute to a more humanized healthcare system. They should not lead us to lose sight of the fact that patients are persons who should be treated with a comprehensive approach, one making room for all relevant dimensions and firmly rooted in empathy and human relationships. The latter are key to the healing process and the doctor-patient relationship. In general, machines should not replace humans. In particular, tasks pertaining to medical decision-making, communication and caregiving should remain human. Although it is true that health professionals and caregivers often lack time and are exhausted, and that healthcare systems are under high pressure, AI technologies may not constitute the right or primary answer to these major issues.
In this perspective, many participants warn against the danger of overfocusing on what can be measured and quantified and of reducing patients to their data (with the risk of medicine and healthcare becoming overly prescriptive and coercive). Patients must be recognized in their singularity and diversity.
Insights from NHNAI academic network:
The risk of moving from the liberation of care – where technology supports caregiving by relieving routine burdens – to the erosion of care, where the essential relational and emotional aspects of caregiving are diminished or lost, raises important ethical concerns. According to Joan Tronto’s ethics of care (Tronto, 2013), caregiving cannot be seen as a simple set of tasks to be streamlined; it must rather be seen as a relational practice involving attention, responsibility and responsiveness to the unique needs of individuals. As such, the challenges and emotional labor inherent in caregiving, however difficult, are at the heart of its meaning and cannot be entirely handled by machines. In a similar vein, Michel Foucault warns in The Birth of the Clinic (Foucault, 2003) that medicine’s emphasis on quantification and control can reduce patients to data and strip them of their individuality and humanity. An over-reliance on AI could reinforce this trend and transform healthcare into a more prescriptive and impersonal practice. According to Neumann et al. (2011) and Decety et al. (2014), empathy and communication are essential to patient satisfaction and outcomes. As Sherry Turkle and Noel Sharkey point out (Turkle, 2011; Sharkey, 2008), these are qualities that AI and robot caregivers cannot replicate. Such technologies, while useful for routine tasks, are therefore unlikely to replace the deep emotional and relational dimensions required for meaningful care.
Academic References:
- Decety, J., & Lamm, C. (2014). The empathic brain and its dysfunction in psychopathologies. Nature Reviews Neuroscience, 7(1), 735–748.
- Foucault, M. (2003). The birth of the clinic: An archaeology of medical perception. London: Routledge.
- Neumann, M., Edelhäuser, F., Tauschel, D., Fischer, M. R., Wirtz, M., Woopen, C., … & Scheffer, C. (2011). Empathy decline and its reasons: A systematic review of studies with medical students and residents. Academic Medicine, 86(8), 996–1009.
- Sharkey, N. (2008). The ethical frontier of robotics. Science, 322(5909), 1800–1801.
- Tronto, J. C. (2013). Caring democracy: Markets, equality, and justice. New York: New York University Press.
- Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
Medical AI might be better able to deal with humans as individual cases than any human can, simply because it can truly absorb the volume of data specific to each particular individual.
AI can be vastly more patient and empathetic than any human can ever be: never growing tired, needing a break, getting bored, etc. AI bots for companionship and counseling are in some ways already superhuman (and that raises many problems opposite to the one suggested here). The key question then becomes: what does a human in particular bring to the medical relationship, and why is that important?
Similar to what is mentioned above, humans are vital to the medical system, but their exact role in relation to AI, especially when AI might be “more human” than humans can be, remains in question. If a fully automated hospital were possible and achieved superior medical outcomes to one staffed by humans, what use would there be in going to the human-staffed hospital? What benefit is there to patients if the people working there are more gruff, less skilled, and slower? We can remind ourselves of the beneficial opportunities for growth that come along with adversity, but that seems like a difficult thing to assert when human health and lives are at stake.
This question of the balance between humanity and efficiency is perhaps the most central question regarding the use of AI in healthcare. What do humans bring to healthcare besides our expertise? And does that additional factor outweigh the efficiency, accuracy and other improvements that AI may bring? Surely the warmth and care that humans can bring will be appreciated, but the healthcare system currently does not focus on that – can it be re-emphasized?
Theologically speaking, humans are made in the image of a God who is both love and logos (Divine “Word,” but also logic and reason). If AI takes the Logos away from us, then we should “double down” on the “love” side of things, or we face being replaced entirely. This would require a completely revolutionary shift in our understanding of human behavior and culture.