Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas nevertheless stand in tension, manifesting subtleties and challenges that must not be overlooked. Each page expresses the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Transversal complexity #2: Setting limits and regulation, even if it could prove challenging

(based on the eponymous global-transversal idea: Setting limits and regulation, even if it could prove challenging)

There is a strong need for regulation and norms to ensure that AI and NS technologies deliver positive outcomes. Norms and regulation are key to building trust and protecting people when deploying new technologies. AI should comply with human values (fairness, non-bias, …) and should be human-centric (aiming at human flourishing). AI and NS technologies should benefit everyone (it is crucial to fight against the exclusion of poor and vulnerable persons).

However, regulation raises many acute issues that make it a very difficult challenge. Among these, one can cite the pace of technological development; the obfuscation of patterns of responsibility (with digital technologies in general and with machine learning in particular); the often "easy" access to powerful tools (in the hands of ill-intentioned actors, technology such as image/facial recognition can become extremely harmful); the global scale of research and development (with a diversity of value systems around the world as well as constellations of conflicts of interest); and the difficulty of enforcing regulations in such a diverse and international context.

Broadly speaking, regulation should foster reasoned and sound uses of AI and NS technologies. Nevertheless, identifying what is reasoned and sound and what is not can prove extremely difficult (take the case of social media moderation, for instance: who is the right actor? Or the case of health technologies, with grey areas between curative and enhancement uses: who can decide whether a pathology requires or justifies the use of a given health technology?). Stakeholders, professionals, citizens, and economic/industrial actors should all be involved in regulation processes.

Insights from NHNAI academic network:

AI should not only respect human values, nor be exclusively centred on the human being: how can we aim for the fulfilment of a living species whose existence depends on countless interdependencies with other living species and with its terrestrial environment? Limits and regulation could then come from a decentred approach, such as the one introduced by Aldo Leopold one hundred years ago:

“By extending ‘the boundaries of the community to include soil, water, plants and animals, or collectively, the earth’, Leopold’s Land Ethic not only goes beyond the boundaries of humanity (the ordinary boundaries of morality), it becomes that of a mixed community, including diverse populations of different species. This should enrich our understanding of the variety of duties within the biotic community.”[1]

With these values in mind, important questions arise:

  • What becomes of humanism when it takes into account the whole relational network of existence on our planet?
  • What becomes of scientific projects and technological advances if we try to render them compatible with the interdependences that make living experiences possible?
  • More specifically, what are the potential benefits of AI for the earth in its globality (the globality of all its interactions and the complexity of apprehending them)?

[1] Larrère, C. (2010). Les éthiques environnementales. Natures Sciences Sociétés, Vol. 18(4), 405-413. https://shs.cairn.info/revue-natures-sciences-societes-2010-4-page-405?lang=fr.