Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page expresses the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Transversal complexity #3: Becoming more efficient without threatening the core of what makes us human

The global-transversal idea “Relying on technology to improve our lives” highlights the fact that AI and automation technologies could help us save time for essential activities, such as relationships or anything that fosters human flourishing, by delegating tedious tasks to machines. It also points out that AI and NS outcomes may allow us to enhance our physical and mental abilities, improving our performance and efficiency.

The global-transversal idea “Seeking for self-improvement” expresses the claim that seeking self-improvement and progress, and maximizing efficiency, is a core part of human nature.

Nevertheless, as the global-transversal idea “Preserving and intensifying what makes us human and fostering human flourishing” warns, it may prove destructive to seek augmentation and improved efficiency and performance uncritically and systematically. Doing so could lead us to sacrifice aspects that are essential for humans, such as autonomy, creativity, and relationships, or to negate limits and vulnerabilities that are at the heart of what it means to be human (mortality and affectability, for instance).

Insights from NHNAI academic network:
A. From the perspective of cognitive science

Juan R. Vidal, associate professor in cognitive neuroscience (UCLy (Lyon Catholic University), UR CONFLUENCE: Sciences et Humanités (EA 1598), Lyon, France)

From the cognitive science point of view, seeking self-improvement does not exist as such in human behavior unless it is attached to a goal-oriented action within a broad temporal context (e.g., ensuring access to food, water, and shelter). This goal carries a value for the human that motivates (or not) further learning and the development of certain capacities and behaviors. Humans think they maximize their efficiency, but, as Herbert Simon noted, humans have a bounded rationality, and thus limited capacities to truly maximize their thought processes and behavior. Humans rather “satisfice” their behavior in order to become as satisfied as quickly as possible, which is not the same as maximizing their capacities. This bias also applies to the use of technology, and it is strongly potentiated by AI. Yet, as has been shown, it also dramatically reduces the learning possibilities of the person and, ultimately, their freedom to act in the world. So seeking self-improvement should resonate with the possibility of increasing (embodied) learning and the possibilities for future learning (keeping doors open) rather than with accelerating certain performances that further down the line deprive humans of learning, and thus of adapting to changing conditions (if we consider that human adaptability greatly depends on the capacity to learn new behaviors and thoughts to face new problems).

B. From the philosophical, anthropological and theological perspective

Based on insights from Brian P. Green, professor in AI Ethics and Director of Technology Ethics at the Markkula Center for Applied Ethics (Santa Clara University, USA), and Nathanaël Laurent, associate professor in philosophy of biology (Université de Namur, ESPHIN, Belgium)

In a general manner, the tension between improving ourselves and preserving what makes us human can be discussed in the context of a book published in March 2024 by Éditions du Cerf (Paris) entitled “The human being at the center of the world: For a humanism of the present and future times. Against the new obscurantisms.” Daniel Salvatore Schiffer sums up one of its key messages[1]:

In short: the insidious and gradual erosion, if not evaporation, of the human being, in all his anthropological complexity (to use a key concept in Edgar Morin’s philosophico-sociological edifice), to the benefit of a world that is all too often alienated, directive and reductive. It is a totalitarianism that ignores itself or does not speak its name, and so, in the face of increasingly Manichean thinking, it advances masked, sly and silent, but all the more dangerous for the freedom of the mind, of speech and thought, if not of conscience!

This evaporation of what it means to be human is highly threatening. In fact, we cannot know what is of value in us if we do not know what and who we are.

The core of our human nature can be interpreted Biblically as love because we are made in the image of a God who is love (1 John 4:8) who commands us to love (Lev. 19:18, Deut. 6:4-5, Matthew 22:35–40, Mark 12:29–33, Luke 10:27) – even our enemies (Matt. 5:43-44) – and by that love become more fully human and divine. However, from the first chapter of John’s Gospel we also know that God is Logos, word and reason, and that therefore the universe is rational, meaningful, and grounded in the most profound wisdom.

If, then, we have a dual nature (at least dual, if not much more) as loving and logical creatures, then AI presents an opportunity and a threat to us in these two key areas. We can use AI to help us learn new truths and gain new wisdom about the universe, to better care for each other, and to build peace around the world. Or we can abuse AI to replace our thinking abilities, leaving us mindless, and to stunt our ability to love or, even worse, turn our love into hate. We are already seeing these evil uses of AI move into society, in the form of generative AI used to cheat in school, and AI algorithms driving social media and app engagement through content that appeals to addiction, vice, and disdain for others.

This opportunity and threat of AI goes right to the core of our being, and thus demonstrates the validity of the existential angst that AI instinctively raises in some people. Indeed, it should raise this angst – or at least concern – in all of us.

Insofar as AI can help us become more logical and loving beings, then it is a blessing to humanity. Insofar as it makes us less logical and less loving it will be a curse. While these two assumptions about humanity are grounded theologically, there are good reasons to assume that it is not merely a theological grounding: it is also psychological, anthropological, sociological, philosophical, ethical, and more. There is an intuitive sense – and rational case to be made – that these features of humanity are legitimately near the core of human identity, and are therefore concerns regarding our engagement with AI.

Lastly, an empirical case for the importance of autonomy and agency can be made from the data collected in this project itself. With four major thematic syntheses covering education, democracy, and health, coming from every country involved in the project, and with dozens of claims and ideas made, this is clearly a topic of preeminent importance.

Regarding autonomy and agency, AI threatens both. Because AI automates agency, it effectively delegates that agency from some humans to other humans using AI as the implement (recalling C.S. Lewis, who said the same of technology in general, as a distilled form of nature, in chapter 3 of The Abolition of Man). Whoever controls these agential AIs therefore has the power to disempower other people through automated systems.

This is only one way that AI might remove our autonomy and agency. Another is that we might be deskilled – both technically and morally – and through that lose our own ability to be full moral agents. Whether we are being actively disempowered by others or are instead disempowering ourselves passively or through inaction, AI presents a genuine threat that must be met with great care and urgency.

Remembering that autonomy and agency are at the core of what it means to be human also reminds us that responsibility is ours as well. We have responsibility for our actions, whether small or large, whether we are choosing to empower or disempower ourselves, whether we are acting through commission or omission, or acting directly or through intermediaries – human or AI. Responsibility rests with those humans making decisions, even if AI ultimately executes those decisions, once or a billion times.

[1] Salvatore Schiffer, D. (ed.), L’humain au centre du monde : Pour un humanisme des temps présents et à venir. Contre les nouveaux obscurantismes, Les éditions du Cerf, 2024, ISBN: 9782204162661 (our translation). https://www.opinion-internationale.com/2024/03/09/lhumain-au-centre-du-monde-un-livre-a-lire-sous-la-direction-de-daniel-salvatore-schiffer_119419.html