Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tensions between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Transversal complexity #1: What bonds, what interactions with machines?

Some participants point out that, as AI progresses, we will tend to develop machines (robots, conversational agents) capable of imitating or simulating behaviors and capacities specific to humans and living beings, such as empathy, assertiveness, and emotional and affective life. As a result, it will become increasingly tempting to grow emotionally attached to this type of machine capable of simulating relational capacities (such as artificial companions, assistants, or personal-care robots).

These discussions also raise the question of the rights to be granted to advanced robots or intelligent systems.

At the same time, many contributions to the discussions emphasize the importance of not losing sight of the specificity of the living and the human in relation to machines. Machines are not conscious, do not feel emotions, and cannot be wise, creative, critical, autonomous, or capable of spirituality in the usual sense of these terms, which implies rootedness in lived experience, in a biological body. At best, they can simulate convincing behaviors in these registers (notably through conversation), behaviors that human beings or other living beings would display in given circumstances.

From this point of view, many participants agree that AI cannot be a subject of law. The question is widely described as speculative or science-fictional, though not uninteresting.

Thus, the discussions quite widely express the need to resist the (increasingly real and powerful) temptation to perceive certain robots or AI systems as genuine persons and to try to connect with them affectively (as one would with a human, or even with another living being). We must resist the temptation to substitute interactions with machines for genuine human relationships.

The following ideas can be found in the global and local syntheses, downloadable here:

  • AI systems and machines cannot be confused with humans and therefore cannot be endowed with rights similar to those of humans.
    • (Global – Democracy) Preserving the specificity of human beings (compared to machines)
    • (France – Democracy) Undesirable: The recognition of a legal personality for AIs is not desirable
    • (France – Democracy) Desirable: Algorithms remain tools (1 extract)
    • (USA – Democracy) Machines are to serve humanity, therefore humanity must maintain appropriate control of AI
    • (France – Democracy) The complex question of the legal status of artificial intelligence is widely debated
  • AI systems should not replace human relationships
    • (Global – Transversal) Preserving empathy, human contact and human relationships
  • AI systems will increasingly exhibit behaviors that enable/encourage the human tendency to connect with and become attached to them.
    • (Portugal – Democracy) Humans and machines may bond
    • (Portugal – Democracy) Artificial intelligence will tend to mimic human abilities
Insights from the NHNAI academic network:

Based on insights from Brian P. Green (professor of AI ethics and director of technology ethics at the Markkula Center for Applied Ethics, Santa Clara University, USA), Mathieu Guillermin (associate professor in ethics of new technologies, UCLy (Lyon Catholic University), UR CONFLUENCE: Sciences et Humanités (EA 1598), Lyon, France), and Nathanaël Laurent (associate professor in philosophy of biology, Université de Namur, ESPHIN, Belgium).

It is more than legitimate to marvel at recent developments in AI technologies, which have enabled programs such as ChatGPT and other large language models to sustain a convincing conversation with humans. These performances may deeply impact human relationships and the interactions humans have with machines.

As noted in many thematic areas of the NHNAI project, relationships are of great importance in human life, and their protection and enhancement should be a serious concern of all those working with AI systems and their effects. In general, AI systems should assist and not replace humans, especially in relationships. As social creatures, we are, theologically, made in the image of a relational Triune God who is love itself; but this is also a philosophical and empirical point, and a logically necessary one. Humanity cannot live alone, and anything that erodes our relationships is a risky and dangerous thing. AI must be used to strengthen human relationships, whether familial, friendly, economic, political, or otherwise. AI that damages relationships attacks a core part of what it means to be human.

A.   With AI, we do not create a radically new kind of entity

However, this sense of wonder must be for the right reasons. After all, these successes have nothing to do with the creation of new forms of life, new intelligent beings that we would call “AIs.” It is just as dizzying, if not more so, to realize that humankind has been able to build machines, artifacts capable of simulating or reproducing intelligent behavior (convincing behavior that could have come from humans), with absolutely no life, no lived experience, no consciousness, but through pure mechanisms (inert mechanisms, yet dazzlingly complex and miniaturized).

In addition to demystifying machine learning (including deep learning, based on artificial neural networks)[1], it is also crucial to remember that all programs (from the most traditional and conventional to the most advanced AI programs produced by machine learning) run on computers or similar machines that are not (or are far less) programmable themselves. What a machine like a computer does is transform material configurations to which humans have associated precise meanings (a series of magnets on a hard drive symbolizes a sequence of 0s and 1s, itself associated, for example, with a sequence of words or a sequence of numbers coding the colors of pixels in an image) into new material configurations associated with other meanings (for example, a new series of words, a modified image, or a description of the image). This type of machine, designed to transform material configurations into others according to what these configurations signify, is not new. The computer can be seen as the culmination of a long evolutionary history of information techniques and technologies, probably dating back to the very beginnings of writing. From this perspective, the abacus can be seen as an ancestor of the computer (the mechanical transformation of configurations symbolizing, for example, numbers to be added, into configurations symbolizing the result of the addition).

So, strictly speaking, there are no meanings, images, words, or numbers in computers, let alone emotions or consciousness. They are, however, fantastic machines for mechanically manipulating (with incredible efficiency and precision) countless material configurations to which we humans attach meaning. A series of magnets on a computer hard drive will cause different pixels on the screen to emit different colors, which for us are more than just tiny sources of colored light: they become texts telling us about feelings, or images of faces expressing this or that emotion. But the computer only processes information by mechanically and automatically manipulating magnets (or other hardware configurations). This makes it all the more breathtaking to see what we can get computers to do with programs derived from machine learning techniques.
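To make the point of the two preceding paragraphs concrete, here is a minimal sketch in Python (purely illustrative, with an arbitrary example word and color): the machine only transforms configurations of bits, and the meanings, a word or a pixel color, exist only in the encoding conventions humans have chosen.

```python
# Purely illustrative sketch: a computer state is just a configuration of
# bits; its "meaning" lies entirely in human-chosen encoding conventions.

text = "sympathy"
bits = " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))
print(bits)  # the raw configuration: nothing but 0s and 1s

# The very same kind of bit pattern can encode a pixel color instead of a word:
pixel = (255, 0, 0)  # convention: (red, green, blue) intensities
print(" ".join(f"{channel:08b}" for channel in pixel))

# A "computation" is a mechanical rule mapping one configuration to another.
# Uppercasing, for instance, is mere byte arithmetic, applied with no grasp
# of what the word means:
shouted = bytes(b - 32 if 97 <= b <= 122 else b for b in text.encode("ascii"))
print(shouted.decode("ascii"))  # SYMPATHY
```

Nothing in these transformations involves the feeling the word names; the association between magnetic (or transistor) states and “sympathy” is entirely on our side.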

B.   But AI, like any technology, shapes what we are and how we live

Acknowledging these powers of computers should never come without a clear understanding that computers and AI systems are not entities emerging apart from us. As we just saw, they are nothing like science-fiction AIs that become conscious and autonomous in a strong sense. However, there is another crucial sense in which AI systems are not apart from us: they are not mere tools that we could mobilize only when we need them and that would otherwise remain quietly and neutrally on the shelf. Technology deeply transforms us. It shapes and mediates our ways of being and of living together.

Bruno Latour’s sociological view can help us grasp this important point. For him, the ‘social’ is an associative composition[2]. A situation is seen as a ‘hybrid collective’ made up of human and non-human interactants. Neither objects nor subjects, these interactants are themselves envisaged as relational networks. A digital application, for example, cannot be envisaged without its designers, or the maintenance staff, or the user interface, or of course without its presumed users and intended uses. But users may well hijack this use to adapt it to their own experiential context. An AI like ChatGPT is a composite formed by all the human authors who generated the texts that trained the model, plus all the designers of the model, plus all the agents who filter the AI’s productions, plus all the users and the expected and unforeseeable contexts of use.

C.   The imitation capabilities of AI systems are a deep game changer

Large language models like ChatGPT speak to us convincingly (with credible affective or emotional content). We can also try to automatically analyze emotions and feelings in what people say, or in videos capturing bodily or facial expressions. These new technologies open up the possibility of ever richer and more interesting interactions with machines, with modalities that reproduce or simulate a growing number of characteristics of interactions and relationships between living beings in general, and between humans in particular. To properly consider the consequences and challenges of these new possibilities for interaction with machines, several points need to be emphasized.
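As an illustration of such automated affect analysis, here is a minimal sketch assuming the Hugging Face transformers library and its default English sentiment model (the library and pipeline are real; the input sentence is invented for illustration):

```python
# Minimal sketch of automated sentiment ("emotion") analysis, assuming the
# Hugging Face `transformers` library is installed. The pipeline downloads a
# default English sentiment classifier on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I feel so alone since my husband passed away.")
print(result)  # e.g. [{'label': 'NEGATIVE', 'score': 0.99}]
# The output is a statistical label with a confidence score, inferred from
# patterns in training text: the system classifies an expression of grief
# without experiencing anything of it.
```

The gap the sketch makes visible is the one this section insists on: the classifier maps word patterns to labels, and nothing in it feels what the words express.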

a.       AI’s extreme usefulness and uniformization issues

Before looking at the stakes of human (and life) imitation per se, it is important to point out that these imitation capabilities deeply transform the manner in which we interact with machines. This interaction can be rendered extremely fluid and easy, compared with the digital skills normally required to use a computer. Now, more and more tasks can be launched and driven by voice control in natural language. This also means that digital systems will no doubt become even more ubiquitous than they already are.

In this perspective, a first issue we must circumvent to maximize the positive outcomes of AI technologies is not the problem of the human appearance of an object, or of the objectification/datafication of a human. Based on Latour’s insights (humans and their technology form an intricate network of interactants from which humans cannot be isolated), what is important to avoid is that AI systems lead to a uniformization of human lives and become an impediment to human creativity. Standardized forms of mediation between AI systems and the humans interacting with them could overwhelm and threaten the possibility of learning and innovating in concrete local situations. Local learning resulting from uncontrolled interactions with the environment is just as crucial as standardized data recording and processing systems. This is a point Amitav Ghosh has made[3], for example, about the problem of climate change:

For those who carefully observe the environment in which they live, clues to long-term change sometimes come from unexpected sources. (…) The people who pay the most attention to ecological change are more often than not on the margins; the relationships they have with the soil, the forest or the water are barely mediated by technology.

b.       Never hiding who’s who (or what’s what)[4]

Returning to the question of the human appearance of machines per se, and contrary to what behaviorist approaches might suggest (in connection with the famous Turing test), it seems important, first, to maintain a distinction between simulating a behavior resulting from a lived experience and producing this same behavior while actually undergoing that lived experience. What can we say, for example, about a machine that expresses words of compassion to an elderly person at the prospect of the end of life? This cannot be confused with the same words uttered by a person capable of experiencing his or her finitude, feeling and sympathizing in a shared lived experience. If AI technology is properly understood, a machine emitting words of sympathy must not be described as a machine having such feelings. Rather, it is interesting to look at what kind of human will, feelings, and intentions are really involved. Latour’s analysis is deeply illuminating in this perspective, as it leads us to consider AI systems as part of a network of human and non-human interactants, in this case organized to automatically utter words of sympathy. Human intention exists here, but it is extremely general, remote, and abstract: it is that of the developers and other persons involved in the decision to build the system. Such feelings, will, and intentions are radically different from those of a singular person expressing her sympathy to someone she is in direct contact with. The value of the uttered words cannot even be compared.

c.       The problem of treating human-looking machines as machines

Secondly, it’s also important to say that simply acknowledging that machines are just machines, and treating them as pure tools, is not necessarily the answer to every problem. Indeed, from this perspective and in all likelihood, artificial companions (as in Spike Jonze’s 2013 film Her) will be built and programmed to find their place in a market and therefore behave in a way that satisfies the user (for example, who would want an artificial companion that might betray or leave its human?). We will therefore be faced with systems that are perceived as objects, as possessions, but which will derive all their specific appeal from their ability to resemble a genuine person, to manifest an appearance of humanity, personality or life. Gradually becoming accustomed to the combination of these two characteristics could prove extremely destructive for humanity. It could be tantamount to gradually developing a capacity to feel comfortable with slavery: “Where there is no ‘other,’ but only the appearance of an other at our disposal, concurrent with the absence of the demand that would be exercised upon one’s own self-gift by confrontation with a true other, we risk being conditioned in a dangerous talent for exploitation.”[5]

In the same vein, this combination of object or tool status and personal appearance can also lead us to become accustomed to a consumer attitude towards other people, gradually reducing our tolerance of behavior of theirs that disturbs us. It is not impossible that the constant presence of artificial companions, whose disturbing behaviors will be perceived as defects (by virtue of their status as tools or objects), will surreptitiously lead us to view genuine people who disturb us in the same way, “as simply faulty human beings, viewing them with the same sort of idle dissatisfaction that we would feel with a robot that did not deliver the set of behaviors and reactions that we wanted to consume.”[6]

This may lead us to reconsider the question of what rights should be granted to robots and AI systems. Admittedly, their status as machines means that we can legitimately refuse to consider them subjects of law. This does not mean, however, that we should let everyone do as they please with them, as we might with a table. A regulatory framework may be desirable in this area, if only to prevent the development of behaviors or habits that are extremely toxic for human beings and other living beings.

All these factors encourage us to reflect deeply on why we are developing machines increasingly capable of presenting the appearance of humans or other living beings, and on what we can really gain from such technologies.

[1] Learn more about machine learning in a complexity expert’s contribution to democracy: https://nhnai.org/focus-on-nexuses-of-complexity-democracy/

[2] See: https://www.erudit.org/fr/revues/cs/2022-n4-cs07915/1098602ar.pdf

[3] A. Ghosh, La malédiction de la muscade. Une contre-histoire de la modernité, Wildproject 2024, pp. 170-171 (our translation).

[4] In the following sub-sections, we draw on the work of the AI Research Group of the Centre for Digital Culture (Culture and Education) and its book “Encountering Artificial Intelligence: Ethical and Anthropological Investigations,” Journal of Moral Theology 1 (Theological Investigations of AI), 2023; especially chapter 4. https://doi.org/10.55476/001c.91230

[5] Ibid., p. 119.

[6] Ibid., p. 121. The full sentence reads: “Is it possible that we will no longer see this as a glimpse of a wider array of humanity, that we will not struggle toward a charitable response? Perhaps instead, we may come to think of these others as simply faulty human beings, viewing them with the same sort of idle dissatisfaction that we would feel with a robot that did not deliver the set of behaviors and reactions that we wanted to consume.”