Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page expresses the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.

Complexity on democracy #1: What place for data and AI in public services and the management of collective life?

The content of the discussions shows that many participants recognize the value of AI technologies in improving public services by making them more accessible (through digitization) and more efficient (thanks to the automation of certain tasks, e.g. administrative ones). AI and digital technologies also seem to be seen as promising for facilitating democratic life and political decision-making (notably through data analysis to better understand currents within public opinion).

Nevertheless, many participants also point to the importance of not pushing humans into the background, and of not subjecting people entirely to algorithms. There was much discussion about the importance of keeping algorithms in their place, as tools meant to serve and cooperate with humans (not to replace them entirely). Collective (democratic) life requires preserving (or even increasing) empathy and relationships between humans. The automation and digitization of public services is not necessarily, in itself, beneficial for everyone: some populations may find it difficult to access digital tools, and algorithms may contain biases and automate certain forms of discrimination. It is therefore important that decision-making (at the political or public-service level) remains under human control.

Automation and the use of data in the conduct of public affairs can therefore be a source of great progress, but must not come at the expense of humans (or of certain more vulnerable groups). The AI technologies deployed must be reliable (disappointing the hopes raised by announcements of digitalization may further undermine trust in governments) and display strong levels of fairness, accountability and transparency (to ensure trust-building and social acceptance).

On a more fundamental level, many participants claim a kind of right not to be reduced to their digital data.

The following ideas can be found in the global and local syntheses, downloadable here:

  • AI and digital technologies can improve public services and democratic processes, but only if used correctly:
    • (Global – Democracy) Acknowledging the positive (potential) impact of AI on human life while asking the right questions
    • (Global – Democracy) Privileging AI cooperation and support instead of human replacement
  • Decision-making must remain under human control:
    • (Global – Democracy) Preserving human responsibility on ethical choices/decision-making
    • (Global – Democracy) Taking into account vulnerable people and contributing to human rights, social and political inclusion
    • (Global – Democracy) Preserving empathy, human contact and relationships
  • Right not to be reduced to one’s data:
    • (Global – Democracy) Recognizing that human persons exceed the sole measurable dimensions
  • Risk of undermining trust in case of low reliability, unfairness or lack of transparency and accountability:
    • (Global – Democracy) Preventing AI from undermining humans’ critical thinking, decision-making abilities, and collective intelligence

Insights from NHNAI academic network:

This nexus of complexity, particularly with its focus on the intelligent use of data while resisting any data fetishism and any reduction of people to digital data, ties in with one of the strong axes of Pope Francis’ positioning on AI, in connection with resistance against what he calls the “technocratic paradigm”: “Fundamental respect for human dignity means refusing to allow the uniqueness of the person to be identified by a set of data. Algorithms must not be allowed to determine how we understand human rights, to set aside the essential values of compassion, mercy and forgiveness, or to eliminate the possibility of an individual changing and leaving behind the past.”[1]

With this in mind, it is important to strengthen our collective acculturation to digital technology. Indeed, the notion of an algorithm can easily convey the idea of an absence of bias, and of an enhanced rationality or objectivity by comparison with human judgment (after all, algorithms are logical-mathematical procedures that leave no room for arbitrariness or human subjectivity). Yet this connotation masks a much more nuanced reality.

The basic intuition is valid: if a bias or discrimination is explicitly programmed, it will “show up” in the program, and the programmer can be called to account. However, this transparency does not necessarily hold for AI programs obtained through so-called machine learning. Without wishing to join the ranks of commentators who present these programs as black boxes (we can watch the calculations being made; nothing is hidden or invisible in principle), it is important to understand that they can very easily include biases and lead to discrimination that is difficult to detect by looking directly at the program’s content.

Indeed, the general idea behind machine learning is to bypass the limitations of our ability to explicitly write programs for complex tasks. For example, we can easily write a program that distinguishes black from white monochrome images: all it takes is a few simple calculations on the numbers encoding the color of the pixels. But what calculations can we make on these same numbers to obtain a program that distinguishes between images of many different everyday objects? At this stage, we can try to go a step further by writing a program with “holes”, or rather “free parameters”, i.e. the outline of a program capable of performing many different logical-mathematical operations (multiplication by coefficients, additions, other more complex operations) and chaining them together in a multitude of ways. The details of the operations are determined by setting the parameters to particular values. The idea of machine learning is that, with a bit of luck (and above all a lot of skill and astuteness), there exists a set of parameters that will produce an efficient program for the task that has resisted us until now (e.g. classifying images of everyday objects). We then try to find this famous set of parameters (or at least a satisfying set of parameters) automatically, with another program that tests a large number of parameter settings, groping around more or less efficiently. A very effective way of guiding this automatic parameter-setting program is to give it numerous examples of the task at hand (i.e. numerous images already classified according to what they depict). If all goes well, the result is a correctly parameterized program that reproduces the examples (we say we have learned a model or trained an algorithm… but it is still automatic parameterization).
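To make this idea of automatic parameter-setting concrete, here is a minimal, purely illustrative Python sketch (a toy classifier invented for this page, not any system discussed above): a program template with two free parameters is tuned by blind trial and error against labeled examples.

    import random

    # A "program with holes": decides 1 or 0 depending on two free parameters.
    def program(x, weight, threshold):
        return 1 if weight * x > threshold else 0

    # Labeled examples of the task (here: small numbers -> 0, large numbers -> 1).
    examples = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]

    def accuracy(weight, threshold):
        hits = sum(program(x, weight, threshold) == y for x, y in examples)
        return hits / len(examples)

    # "Groping around": test many parameter settings and keep the best one.
    best_w, best_t, best_acc = 0.0, 0.0, 0.0
    for _ in range(1000):
        w = random.uniform(-5, 5)
        t = random.uniform(-20, 20)
        acc = accuracy(w, t)
        if acc > best_acc:
            best_w, best_t, best_acc = w, t, acc

    # The "trained model" is just the template plus the parameters found.
    print(f"weight={best_w:.2f}, threshold={best_t:.2f}, accuracy={best_acc:.0%}")

Nothing here is more than logical-mathematical computation, yet the resulting behavior was never written down explicitly by anyone; it emerges from the examples and the search.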

With this basic understanding of machine learning, it is easier to see how a “successful” learning process can still lead to a highly problematic program. If we guide automatic parameterization with data that is biased at the outset (reflecting sexist or racist discrimination, for example), successful learning will lead to a program that reproduces these biases or discriminations.[2] Similarly, if we “train” a program on unrepresentative sets of examples (for example, because certain groups or minorities are absent from the data), it is very possible that the program will not work equally well for all the people who use it or are subjected to it.
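A small illustration of this point, using deliberately biased hypothetical data constructed for this page: if the “historical” labels were produced by discriminatory decisions, an automatic parameterization that faithfully reproduces those labels will learn the discrimination itself, even though no bias is written anywhere in the code.

    import random

    # Hypothetical biased "historical" hiring data: group A was hired from
    # skill 0.3 upward, group B only from skill 0.8 upward.
    random.seed(0)
    data = []
    for _ in range(1000):
        group = random.choice(["A", "B"])
        skill = random.random()
        hired = int(skill > (0.3 if group == "A" else 0.8))
        data.append((group, skill, hired))

    # "Successful" learning: per group, find the skill threshold that best
    # reproduces the historical labels.
    def learned_threshold(group):
        rows = [(s, h) for g, s, h in data if g == group]
        def acc(t):
            return sum(int(s > t) == h for s, h in rows) / len(rows)
        return max((i / 100 for i in range(101)), key=acc)

    print("learned threshold, group A:", learned_threshold("A"))  # close to 0.3
    print("learned threshold, group B:", learned_threshold("B"))  # close to 0.8
    # The model reproduces the unequal treatment present in its examples.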

To go even further, we could question our preconceived ideas about what it means to be rational or intelligent, about how we can/should go about developing ideas that deserve to be called knowledge, that deserve to be held as true. It is certainly tempting to think that we gain in rationality or intelligence by purging our inference procedures of subjective judgments, choices, trade-offs, questions of value, etc. This vision encourages the idea that algorithms and learning machines have a head start, since they are ultimately based solely on logical-mathematical computations on data. However, recent history and philosophy of science (since at least the second half of the 20th century) have shown us the limits of such a purely algorithmic or procedural conception of rationality and intelligence. Any scientific approach, even the most experimental, inevitably relies on human judgments and arbitrations (concerning the basic vocabulary to be used, the major methodological orientations, the objectives to be achieved… but also concerning fundamental intuitions such as the idea that empirical observation does not systematically deceive us).[3] Computer programs are no exception to this indispensability of human judgment (which corpus of examples? which type of program with free parameters? which automatic parameterization procedure?…). These kinds of judgments or arbitrations are not made “arbitrarily” (in the sense that everyone could do as they please in their own corner). A great deal of skill and experience is required, and it will never only be a matter of applying criteria or procedures in a purely neutral or objective way.

To be intelligent or rational is, of course, to be able to apply criteria, procedures or algorithms correctly (objectively or neutrally), but it is also, and perhaps above all, to be able to judge the quality of criteria and procedures, to have a reflexive and critical attitude towards what we are doing… and therefore to be able to judge and arbitrate fallibly, to make mistakes sometimes, to correct oneself, to evolve (and to help each other in this respect, to collaborate in good faith)… Being intelligent in this sense is something fundamentally alive, something that each of us can only undertake rooted in our own lived experience (with all the richness but also the limits that this entails)[4] and in healthy collaboration with others. An intelligent machine cannot replace this (what could such a replacement mean if not an obliteration of intelligence?). The better question is: how can the machine help us to be more intelligent, to deepen the lived experiences that make us wiser?

[1] Message of His Holiness Pope Francis for the 57th World Day of Peace, 1 January 2024, https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html

[2] One example among many others (here with generative AI): https://restofworld.org/2023/ai-image-stereotypes/

[3] Philip Kitcher, Science, Truth and Democracy, New York, NY: Oxford University Press, 2001, ISBN 0-19-514583-6. Mathieu Guillermin, « Non-neutralité sans relativisme ? Le rôle crucial de la rationalité évaluative ». In: Laurence Brière, Mélissa Lieutenant-Gosselin, Florence Piron (eds.), Et si la recherche scientifique ne pouvait pas être neutre ?, Éditions Science et bien commun, 2019, 315-338. https://scienceetbiencommun.pressbooks.pub/neutralite/chapter/guillermin/

[4] See for instance: François Laplantine, The Life of the Senses: Introduction to a Modal Anthropology, Routledge (Sensory Studies), 2020, 176 p., ISBN 9781472531964.

To complete this discussion of machine learning and biases, I recommend the point of view of our Belgian expert Antoinette Rouvroy:

“Machine learning and, more generally, the ability of machines to make us aware of regularities in the world that can only be detected in large numbers, is intended to increase our individual and collective intelligence by giving us access to a ‘stereo-reality’ that is both analogue and digital, and that can improve the way we govern ourselves and coordinate our behaviour in a sustainable way (provided, however, that we recognise that algorithms are just as ‘biased’ in their own way as human decision-makers, even if these ‘biases’ are not easy to detect because they seem to be ‘reabsorbed’ in the hidden layers of neural networks).”[1]

[1] https://www.pointculture.be/articles/focus/gouvernementalite-algorithmique-3-questions-antoinette-rouvroy-et-hugues-bersini/

AI can only ever do what it is told to do, even if that is not what we actually want it to do. Those with control over AI need to be responsive to those who are subject to their power, whether they are business people, government officials, engineers, and so on.

Because AI is a centralizing technology (centralizing data, compute, and human talent), it disempowers those who are not centered. In this way, AI is antidemocratic. But democratic societies can control antidemocratic influences if they are smart enough to perceive them and determine how to keep them on the democratic “leash.”

When it comes to tech literacy and critical thinking, these are incredibly important issues and also raise the question of where the intervention to “protect” people from AI should occur. Should we rely on individuals to be educated enough to protect themselves? Or on politicians to be educated enough to protect citizens? Or on businesses to know enough? Or on the engineers making the product? All involved stakeholders need a say in their own realms of action. No one group can be responsible for all because the problem of AI literacy and control is too complex and needs to have many points of intervention to direct it towards good.

Someone will have to make the choices about what fair and unbiased mean, and this will not likely be without controversy. The use of AI to reinforce democratic processes is an interesting idea, also likely fraught with controversy, but perhaps capable of doing things never before possible with democracy, like giving surveys to entire populations and finding what “the people” really think about many political issues, with uncertainty bars around the results, and so on. A new form of democracy might be possible. That does not mean it will be any better, but it might be worth doing a pilot study and experimenting with it.
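To give a concrete (and purely hypothetical) sense of what “uncertainty bars” around population-scale survey results could look like, here is a minimal Python sketch estimating support for a policy with a 95% confidence interval; the numbers are invented for illustration.

    import math

    # Hypothetical survey results: respondents and how many were in favour.
    n = 100_000
    in_favour = 56_200

    p = in_favour / n                    # estimated share of support
    se = math.sqrt(p * (1 - p) / n)      # standard error of a proportion
    margin = 1.96 * se                   # half-width of a 95% confidence interval

    print(f"support: {p:.1%} +/- {margin:.2%}")  # e.g. 56.2% +/- 0.31%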

The relationship between AI and efficiency is a really important issue. We want to use AI to become more efficient at good things and, at the same time, use AI to make bad things less efficient. Can AI make it easier to help people? Can AI be used to catch corruption? What other good things can AI help with, and what bad things can AI help to stop?

Thus, this is the question: some things should be automated and others not; how do we know which is which, and what is our rationale for making this distinction? We need a “why” for determining what is legitimately automatable and what is not. And yet, as our civilization rapidifies, there would seem to be no opposing this force of delegation to automation, because humans simply cannot be fast enough. We already see this in areas such as high-frequency trading and cyber offense and defense. The next question then becomes: what can be delegated and what not, and WHY?

As a first approximation, there are some foundational values to build from. There are objective values that are reasonable to a wide variety of people because they can be established by logic, in this case proof by contradiction (reductio ad absurdum). This paper presents the proof and suggests five universal values: survive, reproduce, live in society, educate the young, seek the truth: https://arxiv.org/pdf/2311.17017

Theologically speaking, God is a creator and Creation is like God’s artifact. Analogously, humans are artificers and we create our own artifacts – artifacts that are now growing in scale to encompass whole swaths of the Earth and near-Earth space. This ability to create artifacts was given to us by God to help remedy our human condition, to make the burdens of life more bearable, from stone tools to artificial intelligence. Technology exists to serve humanity, as a divine gift, and yet we can also abuse these gifts and do terrible things to each other with them. We need to recapture the sense of artifacts and technology as meant to support holiness, or they instead will be used for devilry. We are here to lovingly serve each other using technology to enhance the gifts of nature, not force some humans to serve others with technology as the whip at their backs.