Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page expresses the tension(s) between ideas within a given theme, as they emerged in the collective discussions, and is then complemented by insights from NHNAI network researchers.
Complexity on Health #4: Ensuring fairness and equity with AI and health technologies
Participants largely acknowledge that health technologies (including AI) can support health professionals in medical decision making (and may even perform better in some tasks). Similarly, they highlight that automating certain tasks may free up more time for the human dimensions of caregiving and healthcare (for instance with caregiving robots). Some participants also point out that AI and digital technologies can facilitate access to healthcare and health-related information, notably for preventive care and health promotion (especially in more isolated or poorer areas). The idea also emerges that digital technologies can improve medical training (e.g. with virtual or augmented reality).
Participants also recognize that advances in AI and neuroscience in the healthcare field may enable us to increase our physical and mental capacities (notably with neurological prostheses or implanted brain-machine interfaces). These technologies could also prevent the loss of capacity associated with aging.
However, participants also warn against the risk that the benefits and disadvantages of AI and health technologies may be unfairly distributed. While the potential to improve the lives of the most vulnerable is enormous, many participants worry about the risk of access inequalities (due to a lack of financial resources, but also of digital literacy or reliable infrastructure). Notably, human contact and relationship in healthcare should not become a luxury to which the less favored are denied access. The same type of question arises with respect to access to enhancement technologies.
Insights from the NHNAI academic network:
The use of sensitive data (such as electronic medical records or genomic data) by AI devices in healthcare raises ethical concerns, particularly regarding the protection and ownership of this data. Indeed, this information is often collected by private companies, with no possibility for patients to retain real control over its use (Rumbold & Pierscionek, 2017). The monetization of this data plays a growing role in the economic model of healthcare innovation (Murdoch & Detsky, 2013). Companies use it to develop medical algorithms and personalized treatments, and also generate revenue from it via partnerships with health systems and insurers (Terry, 2012). The benefits of AI therefore accrue primarily to companies rather than to patients or healthcare systems. This situation fuels fears of a confiscation of innovations for the benefit of wealthy populations and institutions, as well as an exacerbation of socioeconomic inequalities (Powles & Hodson, 2017). To remedy this, new regulatory frameworks are needed to ensure a fair distribution of benefits.
Academic References:
- Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7(4), 351-367.
- Rumbold, J. M., & Pierscionek, B. K. (2017). The ownership and use of human genomic data. European Journal of Human Genetics, 25(2), 200-207.
- Murdoch, T. B., & Detsky, A. S. (2013). The inevitable application of big data to health care. JAMA, 309(13), 1351-1352.
- Terry, N. P. (2012). Protecting patient privacy in the age of big data. Journal of Law, Medicine & Ethics, 40(1), 7-17.