Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page expresses the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Complexity on Education #1: AI and NS in education with respect to human development

Support, automation and cognitive development

Participants in societal discussions recognize the advantages of using AI in education. First, AI can help us be more productive and efficient, because some tasks are easier and faster to complete with AI (such as producing summaries and taking notes for students, or proofreading for teachers). Automation can also be a means of relieving teachers who are exhausted by tiring tasks (permanently) or of ensuring continuity when they have a health problem (temporarily). Moreover, AI and automation allow us to save time that can be devoted to other activities in which we exercise our humanity, or to focus on other essential things such as relationships (as evoked in France and Portugal). Another point is that AI can release us from repetitive or uninteresting tasks, allowing us to focus on more demanding tasks that require intense intellectual activity and may be more interesting or stimulating.

However, participants are also worried about the risk of cognitive impoverishment and loss of autonomy with AI. Delegation through automation implies being dispossessed of certain knowledge (know-how) and becoming machine-dependent. We lose autonomy when we are no longer able to carry out a task by ourselves, without a machine. Moreover, by freeing ourselves from a task, we no longer call upon the cognitive capacities and the brain areas needed to carry it out (as is the case with the systematic use of GPS, which impoverishes the activity of brain areas associated with spatial orientation and memory). On top of that, certain cognitive faculties need practice to develop (such as problem-solving and creativity), notably through trial and error, since we also learn from our mistakes. Relying too much on AI to get answers may prevent us from practicing enough. Finally, some tasks we judge uninteresting or of “lower level” may prove key for the development of important cognitive faculties or values (such as patience and maturity).

The following ideas can be found in the global and local syntheses downloadable here:

  • (Education – Global) Using AI and NS to better teach and learn
  • (Education – Global) Using AI to free time for human flourishing
  • (Education – Global) Using AI to improve performance and innovation
  • (Education – Global) Preventing the risk of cognitive impoverishment
  • (Education – Global) Preserving human autonomy
Insights from the NHNAI academic network:

Based on insights from Juan R. Vidal (associate professor in cognitive neuroscience, UCLy (Lyon Catholic University), UR CONFLUENCE : Sciences et Humanités (EA 1598), Lyon, France), Laura Di Rollo (research engineer in cognitive sciences for the NHNAI project, UCLy (Lyon Catholic University), UR CONFLUENCE : Sciences et Humanités (EA 1598), Lyon, France) and Brian P. Green (professor in AI ethics, Director of Technology Ethics at the Markkula Center for Applied Ethics, Santa Clara University, USA).

A. Escaping the law of “least effort”

Although there are several potentially beneficial uses of AI in education that can enhance learning (e.g., using ChatGPT to generate questions about the lesson before an exam, or to provide initial ideas for starting a writing project), it may be very tempting for students to generalize its use to as many of their academic tasks as possible. Technology such as AI makes some tasks easier and appeals to the principle of “least effort”, which may be detrimental to cognitive development. This is particularly well illustrated by a study[1] suggesting that excessive use of generative AIs like ChatGPT among students is likely to increase procrastination and memory loss and to harm academic results.[2]

Learning new (intellectual and practical) skills requires practice and, often, repetition in order to increase the efficiency and quality of actions with regard to their long-term goal. Practice (with repetition) is not possible without making efforts and often facing frustration when the expected goal is not quite achieved. If the use of technological devices and AI short-circuits these important learning steps, the individual will not acquire the new capacities and knowledge, and will thus be impoverished. It is therefore important to evaluate the use of AI through this “effort-for-learning” lens: effort should not be viewed as a waste of time, but rather as the time needed to learn and retain knowledge (be it abstract or concrete know-how). Moreover, making efforts also contributes to sense-making in learning, which is important for a person’s identity.

It is thus important to think of the use of technology and AI as a means to potentiate the learning of human capacities as such, and not solely through the lens of maximizing evaluation scores in the education system. We should use AI as a complementary tool that does not prevent us from making cognitive efforts. For instance, AI could be used to remind us of things we need to do, rather than simply to do them for us, thereby depriving us of experiences that enable us to grow and flourish. AI could be used as a motivator instead of only or mainly as a facilitator of complex tasks (tasks that may be necessary for learning, especially in the long term). It is teachers’ and trainers’ responsibility to encourage learners to strike a balance between technological assistance and personal effort, in order to preserve learning and cognitive development, and to limit as far as possible the sources of distraction that technology can represent.

Indeed, a better understanding of how we learn and how we are influenced by our environment and our practices fosters the view of a human being whose freedom to flourish depends on the capacity to control interactions with all aspects of the environment, especially with technological devices that capture attention very efficiently and deprive people of the freedom to pay attention to what is happening around them. Neuroscience allows us to better understand the constraints and mechanisms of human behaviour and thought. It provides grounds for taking action to avoid or domesticate interaction with mind-monopolizing artefacts.

[1] Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21(1), 10.

[2] However, this study does not only highlight the causal relationship between excessive ChatGPT use and cognitive impoverishment. It also shows a causal relationship between excessive use of ChatGPT and time pressures and high workload levels. So AI or technology alone may not be the sole triggers of cognitive impoverishment; their inclusion in a socio-economic model that overvalues production and consumption, efficiency and speed also appears to be causally involved.

B. The importance of the body and of lived experience in learning

Neuroscience reveals that the human brain does not really behave like a computer. Contrary to ideas associated with a computational view of the mind, knowledge is not the result of abstract calculations specified by software that the brain executes. According to a more embodied view, cognition, knowledge and sense-making are largely enabled by body-brain interaction, by the proactive engagement of the embodied mind through ongoing interactions between the nervous system, the body and the environment. Reducing these interactions, especially those with the social environment, amounts to impoverishing the learning experience, with a certain loss of overall sense-making and of a globally integrated understanding of knowledge.

A consequence of such embodied views is that learning “knowledge with meaning,” or “knowledge that has a sense for the individual,” has something to do with bodily know-how. Digital technology and AI (especially conversational AI) mostly deliver knowledge through written (or audio-transformed) text. The sensorimotor manipulations or body movements involved do not go beyond using our fingers to tap or scroll on screens (an impoverished interaction with the environment). Spending a lot of time on screen work, even for “learning,” may fail to stimulate sufficiently the coupling between the nervous system and the body. It assimilates learning to what machines do: information storage in a pre-allocated space. As a consequence, cognitive and learning processes might be impoverished. Accordingly, it may prove crucial to find, during the school years, an equilibrium between time spent in front of screens and activities that stimulate the body more directly at a sensorimotor level (in the effort of doing, in a more varied and extended range of lived experiences).

In any case, AI should not reinforce the power of attraction and capture that screens and digital tools exert on children’s attention and activity time. Nor should AI, under the pretext of optimizing learning, lead to a reduction in the richness of lived experiences. Human knowledge is an experiential (bodily) process more than an algorithmic information process. When we reduce the richness and variety of experiences, we inevitably reduce the quality of knowledge. AI tools may offer the opportunity to get through assignments and tasks faster, but at the expense of the richness and variety of lived experience. One may make an analogy with movie trailers: they may constitute good syntheses of a movie’s content, but they will never exhaust the experience of watching the full movie. Who would want to speed up movie watching in order to watch “more movies more efficiently”? In many contexts, machine-like processes of optimization cannot meet the human thirst for rich, varied, and high-quality experiences (the quenching of which requires accepting extended, unoptimized stretches of time).

C. Preventing the loss of skills: critical thinking and creativity

The use of AI brings with it the risk of deskilling. Some skills seem acceptable to lose, for example skills related to outdated technologies, but other skills seem intrinsic to our humanity: skills necessary for survival, for living in society, or rational skills for relating to truth. Exactly how we can determine which skills we should continue teaching and which skills we can accept losing is a somewhat open question, but there do seem to be skills that we should not lose.

In any case, the important role of the experience of making efforts should always be kept in mind. Learning is not a passive process, which is why effort is part of the natural process of learning, especially when what is learned has a certain degree of complexity. Complex knowledge is not a matter of “load” but rather of “relations” between ideas. Establishing these links intrinsically requires more effort than merely retaining the information. Sense-making also follows this path. Effort, though less efficient, is a guarantee for the processes of knowledge and skill acquisition. Therefore, an important question is: when does the use of AI as a tool become a substitute for the human thought-action process of knowledge and skill acquisition? When, how and why do students make this substitution? Answering this question might guide the development of strategies and contextual adaptations in educational systems to prevent this substitution from happening.

In this respect, we may need to reinvent assignments and activities that cannot be easily solved by AI tools but instead require students to call upon their creativity and critical thinking. Moreover, valuing such activities could motivate students to engage more deeply in the learning process and be more willing to complete tasks on their own.[1]

However, many of the commenters around the world expressed concern that AI might harm our creativity, our critical thinking, our mental development, our social development and so on. These threats should be taken seriously, avoided if possible, and, if they start coming true, halted quickly.

Nevertheless, education is not purely about practical, useful skills – it is also about enjoying the more abstract or theoretical aspects of life and pondering the deep mysteries and meanings of the universe. If AI can take away some of the drudgery of life and make us better able to enjoy higher pursuits, as well as other enjoyable human pursuits, then this could be a good outcome.

[1] Ibid.