Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Complexity on Education #1: AI and NS in education with respect to human development

Inclusion, personalization and human relationships

Participants highlight the benefits that AI can bring to education, starting with digitization and online school platforms, which make online teaching materials accessible to anyone, facilitating instruction outside class hours, enabling pupils and students to extend the subjects seen in class, and making it easier to catch up on lessons when absent. Digital technologies also allow for online discussions and debate forums that could make it easier for people who are very shy or uncomfortable speaking in public to express themselves.

More specifically, participants also recognize that AI can be of great help in education. AI-assisted translation and language-learning systems, especially when coupled with conversational bots with speech-to-text and text-to-speech capabilities, are becoming more accessible. They can be of great help, for instance because language learning partly requires oral practice (conversational bots possibly being more effective than language books). Such tools may even prove indispensable for people with language difficulties or for deaf or hearing-impaired persons (as mentioned in Kenya and France).

For participants, chatbots like ChatGPT, when used wisely, could be a formidable pedagogical tool and a valuable aid to learning that complements the teacher. This complementarity between AI and the teacher was emphasized several times in the discussions, notably with respect to the personalization of learning. AI makes it possible to personalize learning paths according to each student’s pace, level and abilities. As it is physically and cognitively impossible for the teacher to take into account all the specificities of each student, AI gives him or her an overall view and helps identify students in difficulty who are in greater need of support.

But participants also recognize that AI’s contribution to education (more inclusion, more access…) very often comes at the expense of face-to-face interaction and human contact, a concern that was almost unanimous in the discussions. The availability of online learning materials can also have the negative effect of encouraging students to invest less time in classroom activities, or even prompting some to drop out and home-school, given that everything is now available online and within everyone’s reach. As raised in several countries, including Portugal, there is also a risk that younger people, having become accustomed to this new format of online relationships, will content themselves with these virtual contacts and start ignoring their relational, emotional and physical needs, to the point of becoming distant and cold in contact with others.

In general, participants converge on the idea that undermining human relationships in face-to-face interactions threatens education as a whole. Only in face-to-face interactions can empathy, emotion, mutual and reciprocal understanding – in short, the encounter with the other – genuinely come into play. Face-to-face interactions are essential when it comes to learning how to be, how to know and how to act. The presence of a teacher and the transmission of his or her passion and emotions play an important role in the learner’s motivation and attention, and therefore in his or her learning. School is thus not just a place for learning, but also a place for sharing, meeting new people, and learning to live together, to help society flourish. Through face-to-face interaction, we confront each other, learn social codes and pass on values. Digital education, or education that takes place too much behind screens, can ultimately reinforce individualism and selfishness, which would constitute a major hindrance to community life and a threat to social cohesion.

Moreover, even if they recognize that AI can render didactic material more accessible and enhance learning processes, participants also worry about the risk of exacerbating inequalities. Indeed, AI might be accessible and beneficial only to wealthy socio-economic groups or individuals, notably as AI programs need expensive resources and infrastructures that some populations currently lack. In addition, AI programs are not free of biases, and this could perpetuate discrimination and stigma, especially when some cultures and populations are underrepresented in training databases (rendering AI tools less effective for them, in addition to direct discrimination issues).

The following ideas can be found in the global and local syntheses downloadable here

  • (Education – Global) Fostering social inclusion thanks to AI technologies
  • (Education – Global) Using AI and NS to better teach and learn
  • (Education – Global) Preserving human relationships and in-person interactions
  • (Education – Global) Not exacerbating social and economic inequalities with AI
Insights from NHNAI academic network:
A. Avoiding the disinvestment in human relationships and the commodification of the human being

Based on insights from Brian P. Green (professor in AI ethics, Director of Technology Ethics at the Markkula Center for Applied Ethics, Santa Clara University, USA) and Laura Di Rollo (research engineer in cognitive sciences for the NHNAI project, UCLy (Lyon Catholic University), UR CONFLUENCE: Sciences et Humanités (EA 1598), Lyon, France).

In her book “Alone Together” (2011),[1] Sherry Turkle is concerned that young people are no longer investing in human relationships, and that more is expected of technologies than of humans. Education-focused relationships are among the most important relationships we have as humans. Most people can remember someone who taught them something, whether a parent, a friend, or a teacher at school. These educational relationships are vital to our humanity, and AI puts them at risk in two ways in particular: 1) as a distraction from learning (such as with recommendation algorithms on social networks and other digital platforms, which are optimized for grabbing attention), and 2) as a replacement for learning (for instance with generative AI tools that children and students may use to breeze through their assignments).

Humans need each other, especially for education. With degraded socialization, the human brain suffers stress-like symptoms, entailing not only a reduction in the capacities of the nervous system, but more clearly an impoverishment in the quality and diversity of experiences, which leads to a certain loss of overall freedom of thought and action throughout life. In order to become genuine human beings, children should not be raised by screens and algorithms, but by other genuine human beings. In this perspective, one can wonder about the right time and place for introducing advanced technological tools to children, students and teachers. In order to judge adequately the interest and added value of technology in a given activity, teachers should first be capable of giving class without any major technological device. If teachers learn their craft with a high level of dependence on technology from the start, this calls into question the reliability of their understanding of the cycle of learning through human interaction.

In addition to the issue of disinvestment in human relationships, digital technologies also present the risk of commodifying human beings, i.e. reducing them to mere objects. Indeed, as Sherry Turkle (2011) points out, the risk is that our “self” is transformed into an online “object-self,” where we treat each other more and more like objects and in an expeditious manner. The most telling example is certainly email. Emails are a cognitive load in themselves, but sometimes they are messages from friends or colleagues that we say we need to “deal with” or “get rid of” so we can cross them off our to-do list, as if we were talking about emptying our wastepaper basket.

Ultimately, the danger is that we lose the feeling of being alive, the way of being-in-the-world that preserves a certain dignity and authenticity, and that only human relationships and face-to-face contact can provide. AI has the potential to be a weapon of mass destruction against the world’s educational systems. It needs to be disarmed and instead harnessed as a source of power to help humans become better people, rather than harming us by enabling the worst parts of our nature. It thus seems necessary to strike a balance so as to benefit from what AI can bring us while preserving those precious human contacts that largely define our humanity, notably through certain attributes. The human voice is to Sherry Turkle what the face is to Levinas.[2] For Turkle, it is in the voice that the range of human emotions and the singularity of beings are transmitted and heard. For Levinas, it is through the face that the other appears to me in his or her fragility, vulnerability and singularity, which calls for an ethical injunction to protect and not to harm. The face is an interface that enables us to enter into a relationship with others, and through them, with humanity. This raises the question of whether the danger threatening humanity, with relationships mostly at a distance and mostly faceless, is not indifference to the other, and with it, the loss of concern for humanity.

[1] Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books, New York.

[2] Lévinas, E. (1984). Éthique et infini. Le Livre de Poche.

B. Escaping the rise of inequality: Solidarity and Relationships

Based on insights from Brian P. Green (professor in AI ethics, Director of Technology Ethics at the Markkula Center for Applied Ethics, Santa Clara University, USA), Nathanaël Laurent (associate professor in philosophy of biology, Université de Namur, ESPHIN) and Federico Giorgi (post-doctoral researcher in philosophy, Université de Namur, ESPHIN, Belgium).

AI as a driver of social and economic inequality is an inescapable question, because AI will reduce the value of labor and increase the value of capital, thus driving wealth away from workers and towards owners of AI. How to prepare students today for the strange world of tomorrow, in which labor might have no value and only those who already own wealth will retain it, is an unsolved problem of gargantuan proportions.

Students need to know that a strange future is approaching and to be aware of AI and neuro-technology as developing technologies that can affect their futures. Additionally, the uncertainty that will be sparked by these revelations should not be allowed to overpower the growing importance of particularly human pursuits such as seeking ethics, justice, and creating a more caring world. While intellectual labor might be, in some cases, automatable, caring relationships between families and friends can never be automated. Particular human relationships are not fungible and therefore AI can never replace them. The value of family and friends should be re-emphasized and the study of what makes good relationships should be a key part of the revision of education.

The overall concern that AI could become a tool of exclusion against the less affluent segments of the population does not appear to be tied specifically to any one type of technology. Rather, it potentially arises whenever a new scientific discovery is made that can improve the living conditions of a significant portion of the population. If, for instance, a highly effective but expensive treatment for a serious illness were to be commercialized in the future, the same risk of exclusion would apply to those who lack the financial means to afford it.

The participants’ reflections therefore raise very broad questions, but ones that are no less relevant to the concrete reality experienced daily by millions of people—namely, the relationship between ethics and economics. Can we still maintain today that economic science should enjoy absolute autonomy from any proposals for regulation aimed at limiting the devastating effects of inequality? Or is it necessary to challenge such an economistic view, as proposed by thinkers such as Jean Ladrière, Amartya Sen, and Martha Nussbaum (Caltagirone, 2017)?

The contributions of participants in the NHNAI debates once again seem to confirm that economic and technological development cannot be separated from a moral evaluation of the risks of exclusion that digitalization entails for those who lack access to new technologies.