Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tension(s) between ideas within a given theme, as they emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Transversal complexity #4: Supporting without undermining human decision-making

Many participants in the collective discussions acknowledge that AI technologies can support human decision-making in various domains (and can even perform better at some tasks). They can help us organize the vast amount of information we must deal with (especially on social networks and the internet) and contribute to enhancing the quality of this information (fact-checking, fighting against (deep) fake news, …). They may help prevent or manage various problems and crises (ensuring better security in public spaces through more efficient surveillance, detecting fraud or corruption, anticipating epidemics or the vagaries of the weather and climate change, …).

However, it was also widely expressed that AI support for decision-making can raise extremely acute difficulties. First, it may become difficult to preserve independent human decision-making, including the possibility of sometimes diverging from the machine's recommendations (for instance on the basis of human reflection and trained intuition). This may become particularly problematic for professionals to whom we delegate and grant authority, with the risk that this delegation of authority shifts from professionals to machines (this worry was expressed about the doctor-patient relationship, but could probably also apply to the learner-teacher relationship in education). In addition, chains and patterns of responsibility can suffer dilution and obfuscation. From this perspective, one should never lose sight of the fact that only human beings, thanks to their awareness and critical thinking, are able to make ethical choices and take responsible decisions. Humans are therefore solely responsible for technological orientations and for the consequences of AI uses.

In addition, as the discussions in the field of democracy emphasized, the involvement of (generative) AI in the processing, management and editorialization of our informational landscape raises worrisome issues, with serious risks of undermining and impeding collective intelligence. Biased and/or unfair algorithms may automatically and silently propagate discrimination, or create informational and cognitive bubbles that isolate individuals in uniform informational landscapes. (Generative) AI can facilitate and foster the production and dissemination of (deep) fake news. AI can damage our ability to find accurate, trusted and sourced information, sowing mistrust among uninformed citizens and compromising sound democratic choices and pluralism.

Insights from NHNAI academic network:

Nathanaël Laurent (associate professor in philosophy of biology, Université de Namur, ESPHIN) and Federico Giorgi (post-doctoral researcher in philosophy, Université de Namur, ESPHIN, Belgium)

The thesis that it might be possible to program an algorithm to make ethical decisions on our behalf is sometimes referred to as algorethics. Beyond the many critical issues that such a perspective understandably raises—some of which are highlighted by the participants in the NHNAI debate—it is interesting to note how this kind of project tends to reduce the morality of an action to the agent's intention to align their behavior with a set of ethical principles.

This deontological view of ethics, although not without supporting arguments, appears somewhat reductive, as it does not give sufficient consideration to the outcomes of the action taken (Cabitza, 2021). When reflecting on how a new technology ought to be used, it therefore seems more appropriate to adopt a consequentialist approach, in which the moral character of an action is evaluated primarily on the basis of the consequences it produces.