Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.
Transversal complexity #1: What bonds, what interactions with machines?
Some participants point out that, with the progress of AI, we will tend to develop machines (robots, conversational agents) capable of imitating or simulating behaviors and capacities specific to humans and living beings, such as empathy, assertiveness, and emotional and affective life. As a result, it will become increasingly tempting to grow emotionally attached to machines capable of simulating relational capacities (such as artificial companions, assistants, or personal-care robots).
These discussions also raise the question of the rights to be granted to advanced robots or intelligent systems.
At the same time, many contributions to the discussions emphasize the importance of not losing sight of the specificity of the living and the human in relation to machines. Machines are not conscious, do not feel emotions, and cannot be wise, creative, critical, or autonomous; nor are they capable of spirituality in the usual sense of these terms, which implies rootedness in lived experience, in a biological body. At best, they can simulate convincing behaviors in these registers (notably through conversation), behaviors that human beings or other living beings would exhibit in given circumstances.
From this point of view, many participants agree that AI cannot be a subject of law. The question is widely described as speculative or science-fictional, though not without interest.
Thus, the discussions quite widely express the need to resist the (increasingly real and powerful) temptation to perceive certain robots or AI systems as genuine persons and to try to bond with them affectively (as one would with a human, or even with another living being). We must likewise resist the temptation to substitute interactions with machines for genuine human relationships.
Insights from the NHNAI academic network:
It’s more than legitimate to marvel at recent developments in AI technologies, which have enabled programs such as ChatGPT and other large language models to hold a convincing conversation with humans. However, this sense of wonder must be for the right reasons. These successes have nothing to do with the creation of new forms of life, new intelligent beings that we would call AIs. It is just as dizzying, if not more so, to realize that humankind has been able to build machines, artifacts capable of simulating or reproducing intelligent behavior (convincing behavior that could have come from humans), with absolutely no life, no lived experience, no consciousness, but with pure mechanisms (inert mechanisms, yet dazzlingly complex and miniaturized).

Beyond demystifying machine learning (including deep learning, based on artificial neural networks), it’s also crucial to remember that all programs (from the most traditional and conventional to the most advanced AI programs produced by machine learning) run on computers or on similar machines that are not (or are less) programmable. What a machine like a computer does is transform material configurations to which humans have associated precise meanings (a series of magnets on a hard drive symbolizes a sequence of 0s and 1s, itself associated, for example, with a sequence of words or a sequence of numbers coding the colors of pixels in an image) into new material configurations associated with other meanings (for example, a new series of words, a modified image, or a description of the image).

This type of machine, designed to transform material configurations into others according to what those configurations signify, is not new. The computer can be seen as the culmination of a long evolutionary history of information techniques and technologies, probably dating back to the very beginnings of writing. From this perspective, the abacus can be seen as an ancestor of the computer: a mechanical transformation of configurations symbolizing, for example, numbers to be added, into configurations symbolizing the result of the addition.
So, strictly speaking, there are no meanings, images, words or numbers in computers, let alone emotions or consciousness. They are, however, fantastic machines for mechanically manipulating (with incredible efficiency and precision) countless material configurations to which we humans attach meaning. A series of magnets on a computer hard drive will cause different pixels on the screen to emit different colors, and for us these will be more than tiny sources of colored light: they will become texts telling us about feelings, or images of faces expressing this or that emotion. But the computer only processes information by mechanically and automatically manipulating magnets (or other hardware configurations).
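This point can be made concrete with a small illustrative sketch (ours, not taken from the discussions; Python is an arbitrary choice here). The function below adds two binary numbers by blindly applying rewrite rules to the characters '0' and '1'. Nothing in the procedure "knows" about numbers or addition: that the input strings symbolize quantities and that the output symbolizes their sum is a convention held entirely by the human reader, just as with the abacus mentioned above.

```python
# A minimal sketch (illustrative only): binary addition as pure symbol
# manipulation. The characters '0' and '1' are shuffled by fixed rules;
# that they "mean" numbers, and that the procedure "means" addition, is a
# convention that exists only for the human observer.

# Rewrite rules for one column: (bit_a, bit_b, carry_in) -> (sum_bit, carry_out)
RULES = {
    ('0', '0', '0'): ('0', '0'),
    ('0', '0', '1'): ('1', '0'),
    ('0', '1', '0'): ('1', '0'),
    ('0', '1', '1'): ('0', '1'),
    ('1', '0', '0'): ('1', '0'),
    ('1', '0', '1'): ('0', '1'),
    ('1', '1', '0'): ('0', '1'),
    ('1', '1', '1'): ('1', '1'),
}

def add_symbols(a: str, b: str) -> str:
    """Transform two configurations of '0'/'1' symbols into a third one."""
    width = max(len(a), len(b))
    a, b = a.rjust(width, '0'), b.rjust(width, '0')
    carry, result = '0', []
    # Process columns right to left, exactly as a ripple-carry adder would.
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        sum_bit, carry = RULES[(bit_a, bit_b, carry)]
        result.append(sum_bit)
    if carry == '1':
        result.append('1')
    return ''.join(reversed(result))

# To us, this reads "5 + 3 = 8"; to the procedure, it is only symbol shuffling.
print(add_symbols('101', '011'))  # -> '1000'
```

The same holds, at vastly greater scale and speed, for the magnet and pixel configurations described above.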
This makes it all the more breathtaking to see what we can get computers to do with programs derived from machine learning techniques. Large language models like ChatGPT speak to us convincingly (with credible affective or emotional content). We can also try to automatically analyze emotions and feelings in what people say, or in videos capturing bodily or facial expressions. These new technologies open up the possibility of ever richer and more interesting interactions with machines, with modalities that reproduce or simulate a growing number of characteristics of interactions and relationships between living beings in general, and between humans in particular.
To properly consider the consequences and challenges of these new possibilities for interaction with machines, several points need to be emphasized[1]. First, and contrary to what behaviorist approaches might suggest (in connection with the famous Turing test), it seems important to maintain a distinction between simulating a behavior that results from a lived experience and exhibiting that same behavior while actually undergoing the lived experience. What can we say, for example, about a machine that expresses words of compassion to an elderly person at the prospect of the end of life? These cannot be confused with the same words uttered by a person capable of experiencing his or her own finitude, of feeling and sympathizing in a shared lived experience.
Secondly, it’s also important to recognize that simply acknowledging that machines are just machines, and treating them as pure tools, is not necessarily the answer to every problem. Indeed, from this perspective and in all likelihood, artificial companions (as in Spike Jonze’s 2013 film Her) will be built and programmed to find their place in a market, and therefore to behave in a way that satisfies the user (who, for example, would want an artificial companion that might betray or leave its human?). We will therefore be faced with systems that are perceived as objects, as possessions, but which derive all their specific appeal from their ability to resemble a genuine person, to manifest an appearance of humanity, personality, or life. Gradually becoming accustomed to the combination of these two characteristics could prove extremely destructive for humanity. It could be tantamount to gradually developing a capacity to feel comfortable with slavery: “Where there is no ‘other,’ but only the appearance of an other at our disposal, concurrent with the absence of the demand that would be exercised upon one’s own self-gift by confrontation with a true other, we risk being conditioned in a dangerous talent for exploitation.”[2]
In the same vein, this combination of object or tool status and personal appearance can also lead us to adopt a consumer attitude towards other people, gradually reducing our tolerance for behavior of theirs that disturbs us. It’s not impossible that the constant presence of artificial companions, whose disturbing behaviors will be perceived as defects (by virtue of their status as tools or objects), will surreptitiously lead us to view genuine people who disturb us in the same way, “as simply faulty human beings, viewing them with the same sort of idle dissatisfaction that we would feel with a robot that did not deliver the set of behaviors and reactions that we wanted to consume.”[3]
This may lead us to reconsider the question of what rights should be granted to robots and AI systems. Admittedly, their status as machines means that we can legitimately refuse to consider them as subjects of law. This does not mean, however, that we should let everyone do as they please with them, as we might with a table. A regulatory framework may be desirable in this area, if only to prevent the development of behaviors or habits that are extremely toxic for human beings and other living beings.
All these factors encourage us to reflect deeply on why we are developing machines increasingly capable of presenting the appearance of humans or other living beings, and on what we really stand to gain from such technologies.
[1] Here we largely draw on chapter 4 of “Encountering Artificial Intelligence: Ethical and Anthropological Investigations.”
[2] Ibid., p. 119.
[3] Ibid., p. 121. The full sentence reads: “Is it possible that we will no longer see this as a glimpse of a wider array of humanity, that we will not struggle toward a charitable response? Perhaps instead, we may come to think of these others as simply faulty human beings, viewing them with the same sort of idle dissatisfaction that we would feel with a robot that did not deliver the set of behaviors and reactions that we wanted to consume.”
It would also be interesting to adopt Bruno Latour’s sociological view that the ‘social’ is an associative composition[1]. A situation is seen as a ‘hybrid collective’ made up of human and non-human interactants. Neither objects nor subjects, these interactants are themselves envisaged as relational networks. A digital application, for example, cannot be envisaged without its designers, its maintenance staff, its user interface, or, of course, its presumed users and intended uses. Yet users may well hijack an intended use and adapt it to their own experiential context. An AI like ChatGPT is a composite formed by all the human authors who generated the texts that trained the model, plus all the designers of the model, plus all the agents who filter the AI’s productions, plus all the users and the expected and unforeseeable contexts of use.
From this point of view, it makes little sense to talk about the human appearance of an object, or about the objectification/datafication of a human. What matters is to avoid the standardisation of forms of mediation (human or non-human operators) overwhelming the genericity of local learning sites. Local learning resulting from uncontrolled interactions with the environmental fabric is just as crucial as standardised data recording and processing systems. This is what Amitav Ghosh has expressed[2], for example, regarding the problem of climate change:
“For those who carefully observe the environment in which they live, clues to long-term change sometimes come from unexpected sources. (…) The people who pay the most attention to ecological change are more often than not on the margins; the relationships they have with the soil, the forest or the water are barely mediated by technology.”
[1] One reference could be: https://www.erudit.org/fr/revues/cs/2022-n4-cs07915/1098602ar.pdf
[2] A. Ghosh, La malédiction de la muscade. Une contre-histoire de la modernité, Wildproject, 2024, pp. 170-171 (my translation).