Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Complexity on democracy #6: The democratic challenge of regulation

A clear consensus emerges on the fact that powerful new technologies such as AI require governance and regulation. It is crucial to encourage a reasoned use of AI technology (including video surveillance, algorithms, big data, and social media), always under human control. We need to implement updated normative tools and legal rights for citizens (a multidisciplinary concern); to develop and implement ethical codes for professional groups (e.g., web developers); and to take special care of vulnerable groups (preventing the automation of discrimination, for instance).

However, part of the exchanges also highlights that regulation raises many acute issues, making it a very difficult challenge. One can, for instance, mention the topic of social media moderation: who is the right actor? AI technologies may contribute, but what is the place of humans? Such a topic raises very fundamental questions about truth, democracy, and legitimacy. More broadly, regulation of AI is challenging for several reasons: the pace of technological development; the obfuscation of patterns of responsibility (with digital technologies in general, and with machine learning in particular); the often easy access to powerful tools (in the hands of ill-intentioned actors, technologies such as image or facial recognition can become extremely harmful); and the global scale of research and development (with the diversity of value systems around the world, as well as constellations of conflicts of interest), among other factors.

To cope with the challenge of AI regulation, many participants insist on the importance of fostering digital literacy and critical thinking.

The following ideas can be found in the global and local syntheses, downloadable here:

  • (Global – Democracy) Setting limits, control and regulation of AI to preserve democracy
  • (Global – Democracy) Taking into account vulnerable people and contributing to human rights, social and political inclusion
  • (Global – Democracy) Being aware of challenges regulation raises
  • (Global – Democracy) Fostering literacy and critical thinking to preserve and strengthen democracy

Insights from NHNAI academic network:

A. From the lawyer’s point of view

Yves Poullet, professor in Law of New Information and Communication Technologies (Université de Namur, ESPHIN – CRIDS, Belgium)

In light of the depth of the challenge of AI regulation, we might recall some basic principles of law, notably the importance of the rule of law as a fundamental principle for ensuring a vibrant democracy. The rule of law means that any limitation of our liberties, or any measure to prevent the risk of such limitation, must go through legislative measures that are expressed clearly and comprehensibly, duly published, strictly proportionate in content to their purpose, and acceptable within a democratic society.

In terms of the content of AI regulation, transparency about the functioning of AI systems and the purposes pursued by the data controller should be reinforced, together with the right to contest the use of one’s data (notably to protect persons’ autonomy). In the same vein, we must assert the accountability of AI developers. This accountability principle requires them to carry out a multidisciplinary and multi-stakeholder assessment of the applications they are developing and of the risks these entail.

Furthermore, it is the responsibility of states to set up forums where society can openly discuss the ethical aspects of certain major public innovations.

B. Open societal discussions on ethical questions

Based on insights from Brian P. Green, professor in AI Ethics and Director of Technology Ethics at the Markkula Center for Applied Ethics (Santa Clara University, USA), and Mathieu Guillermin, associate professor in ethics of new technologies (UCLy (Lyon Catholic University), UR CONFLUENCE: Sciences et Humanités (EA 1598), Lyon, France)

This resonates with the question of where intervention to “protect” people from AI should occur. Should we rely on individuals being educated enough to protect themselves? On politicians being educated enough to protect citizens? On businesses knowing enough? Or on the engineers making the product? All stakeholders involved need a say in their own realms of action. No single group can be responsible for everything, because the problem of AI literacy and control is too complex and requires many points of intervention to direct it towards the good.

Some things should be automated and others not; how do we know which is which, and what is our rationale for making this distinction? We need a “why” for determining what is legitimately automatable and what is not. Collectively exploring this “why” question, the question of our needs, may prove extremely tricky. As our civilization rapidifies, there would seem to be no way of opposing the force of delegation through AI automation, because humans simply cannot be fast enough; we already see this in areas such as high-frequency trading and cyber offense and defense. When we ask what can be delegated and what cannot, the question is not only about what is technically feasible; it also means asking why.

This question about the “why” pushes us into the domain of evaluative reflection, of values and interests. As mentioned by some participants in the discussions, this reflection may prove difficult, as values and interests can be highly divergent. However, it may be useful to adopt a nuanced approach. Although there can clearly be strong disagreements in moral and ethical matters, this does not necessarily mean that common ground is impossible. As a first approximation, there seem to be some foundational values to build from. Some authors suggest five values that could be universal: survival, reproduction, living in society, educating the young, and seeking truth.[1] These values could be called objective insofar as they appear reasonable to a wide variety of people and can be defended by logic, in this case by proof by contradiction (reductio ad absurdum): rejecting them undermines the very conditions under which any values could be upheld and debated at all.

In addition, the existence of strong disagreements does not in itself mean that there are strong divergences between the values people uphold. Very often, values are shared but can enter into tension, and people then disagree about the priority to give some over others (security versus privacy protection, individual freedom versus the common good, etc.). This means that we should always reflect on our disagreements and on what they bear upon (there may be more agreement than we believe at first sight, and more ground for constructive divergence).

This highlights the importance of reinforcing the capability of all actors to participate in these open societal discussions. As we have just seen, this demands fostering critical thinking. It also requires cultivating technological and digital literacy, so that discussions are as informed as possible.

[1] https://arxiv.org/pdf/2311.17017