Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page presents the tension(s) between ideas within a theme, as they emerged in the collective discussions, complemented by insights from NHNAI network researchers.
Complexity on democracy #2: AI at the service of human collective intelligence
Many participants point out that policy and decision making must remain grounded in human interaction and in collective reflection and deliberation. There is a large consensus against government by machines (technocracy), and a large consensus that AI should not replace humans in decision making, particularly in the key field of collective political decisions. On the contrary, human relationships and empathy are key for collective decision making and should be preserved and reinforced.
In this respect, digital tools have already provided tremendous possibilities for information exchange and collective debate at unprecedented geographic scales and temporal pace. With the internet and social networks, information sharing has become extremely liberalized.
Nevertheless, this liberalization of our collective information landscape has also created the problem of having too much information available, and with it the need to editorialize information more efficiently. In this respect, discussions reflect serious worries about recommendation algorithms that can reinforce biases and isolate certain groups by creating echo chambers and information bubbles. These processes can even be exploited for deliberate manipulation. In any case, this weakens our collective relationship to truthfulness in policy and societal debates, thus diminishing instead of enhancing our collective intelligence capacities and our ability to be genuine, autonomous persons in our civic lives.
Some participants highlight in this respect the problem of media hype and the tendency to fall for sensationalism (including hype and sensationalism about AI itself), which aggravates the problem of information editorialization at a time when responsible journalism is more necessary than ever.
In general, participants insist upon the need to foster critical thinking so that we can better navigate our information landscapes and support our collective intelligence and our policy- and decision-making abilities. AI could be of great help in this respect, for instance by contributing to improving the quality of information or by supporting the fight against (deep) fake news and their dissemination (social network moderation).
Insights from NHNAI academic network:
Based on insights from Brian P. Green (professor in AI ethics, director of technology ethics at the Markkula Center for Applied Ethics, Santa Clara University, USA), Mathieu Guillermin (associate professor in ethics of new technologies, UCLy (Lyon Catholic University), UR CONFLUENCE: Sciences et Humanités (EA 1598), Lyon, France), Nathanaël Laurent (associate professor in philosophy of biology, Université de Namur, ESPHIN, Belgium), and Yves Poullet (professor in law of new information and communication technologies, Université de Namur, ESPHIN – CRIDS, Belgium).
The health of our democratic societies partly rests upon the quality of the information landscape and of citizens’ collective intelligence. Both are deeply impacted by digital and AI technologies.
A. AI, information landscape and collective intelligence
Given the enormous amount of content available on the internet (even restricted to digital platforms), (at least partly) automated editorialization of information is inevitable. AI tools for profiling users and recommending content to them are thus key pieces of technology. However, we must question the criteria and purposes of these profiling and recommendation operations. As Gérald Bronner explains,[1] the liberalization of our information landscapes, combined with an economic model based on free access, leads to fierce competition to capture as much of users’ attention as possible. Recommendation algorithms are designed to push forward content that will lead users to stay connected (thereby ensuring maximal exposure to personalized advertising and the most efficient data collection). This is very different from recommendation systems that would promote flourishing-conducive content (which can often be less attractive at first sight).
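The contrast can be made concrete with a minimal sketch. The following Python fragment is purely illustrative (the items, scores, and the blending weight alpha are all invented assumptions, not a description of any real platform): it contrasts a ranking that optimizes predicted engagement alone with one that also weighs a hypothetical quality or flourishing signal.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g. estimated probability the user keeps scrolling
    quality_score: float         # e.g. a hypothetical editorial/community quality signal

catalog = [
    Item("outrage clip", predicted_engagement=0.9, quality_score=0.2),
    Item("in-depth explainer", predicted_engagement=0.4, quality_score=0.9),
    Item("celebrity rumor", predicted_engagement=0.8, quality_score=0.1),
]

# Attention-economy ranking: maximize time spent on the platform.
by_engagement = sorted(catalog, key=lambda i: i.predicted_engagement, reverse=True)

# Alternative ranking: blend engagement with the quality signal.
# Choosing alpha is itself a human, value-laden arbitration, not a neutral computation.
alpha = 0.6
by_blend = sorted(
    catalog,
    key=lambda i: alpha * i.quality_score + (1 - alpha) * i.predicted_engagement,
    reverse=True,
)

print([i.title for i in by_engagement])  # "outrage clip" comes first
print([i.title for i in by_blend])       # "in-depth explainer" comes first
```

The point of the sketch is that both rankings are equally "algorithmic"; what differs is the human choice of objective.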
Profiling and recommendation systems can in particular lead to (unintended or intended) deleterious effects in the political domain. Echo chambers can lead to strong polarization of public opinion. Digital content can be tailored to exploit recommendation systems and echo chambers. This is in particular true of deep fake news, produced more and more easily with generative AI tools. Furthermore, the concentration of revenues and economic power in the hands of large platforms might lead to a concentration of political power, especially in terms of influence upon public opinion. This can deeply weaken the ground and basic conditions of possibility of democratic societies, for instance by threatening the organization of free and transparent elections. Echo chambers and (deep) fake news can even be employed as weapons of political destabilization in geostrategic conflicts. Recommendation and profiling systems could also be used by authoritarian regimes to reinforce their control over populations. At the same time, AI technology may help fight these threats. We could talk about a kind of AI war,[2] defensive systems combating offensive ones with the information landscape as a battleground. AI systems can be trained to detect deep fake images or videos. It could be possible to develop recommendation and editorialization systems that limit the virality of fake news.
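One way such "defensive" editorialization could work is sketched below, under strong assumptions: the detector is a stand-in placeholder (real fake-news or deep-fake detection is a hard, imperfect classification problem), and the threshold and cap values are invented for illustration. The idea is simply to stop amplifying items a classifier flags as likely fakes, rather than to delete them.

```python
def fake_probability(item_text: str) -> float:
    """Placeholder for a trained fake-news / deep-fake detector (hypothetical)."""
    suspicious_markers = ("shocking", "they don't want you to know")
    return 0.9 if any(m in item_text.lower() for m in suspicious_markers) else 0.1

def viral_boost(share_count: int, item_text: str) -> float:
    """Amplification granted by the feed; capped when content looks fake."""
    base = 1.0 + share_count / 100.0  # naive popularity-driven amplification
    if fake_probability(item_text) > 0.5:
        return min(base, 1.2)  # cap virality instead of amplifying further
    return base

print(viral_boost(500, "Shocking footage they don't want you to know about"))  # 1.2
print(viral_boost(500, "City council publishes budget report"))                # 6.0
```

Here again, where to set the detection threshold and the cap, and what counts as "fake", are human arbitrations with direct political stakes.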
Globally speaking, we can expect AI to help us improve our information landscape and our collective intelligence (recommendation of more flourishing-conducive content, fight against fake news, etc.), but this will largely depend on our ability to encourage the development of the right technology and the adoption of its most positive uses. This in particular means fostering digital and ethical literacy, to enable concerned actors (from developers to users) to establish adequate conditions. We could for instance mention the necessary reflection on the economic model behind digital technologies and the issues raised by the mirage of gratuity.
More fundamentally, we may also fruitfully reflect upon the meaning of expressions such as “right technology” and “positive uses”. Using AI to support human intelligence and flourishing rather than stifle them is another version of the “balancing” question that runs through several themes of the discussions. If we want AI to support adult humans in being “adults,” and oppose uses of AI that turn us into dependent “infants” with AI as our “parent,” there is a lot more to say about which sorts of support are good and which are bad. Part of the question touches upon refining our understanding of what this collective or human intelligence is that we expect AI to improve.
B. What does it mean to foster human collective intelligence?
It can prove fruitful to question our preconceived ideas about what it means to be rational or intelligent, and about how we can or should go about developing ideas that deserve to be called knowledge, that deserve to be held as true. It is certainly tempting to think that we gain in rationality or intelligence by purging our inference procedures of subjective judgments, choices, trade-offs, questions of value, and so on. This vision encourages the idea that algorithms and learning machines have a head start, since they are ultimately based solely on logical-mathematical computations on data. Endowed with superior neutrality, algorithms could thus support humans in purging the pollution of their subjectivity so as to improve their rationality. This view may also lead us to grant strong credit to the algorithmic governmentality we evoked in another nexus of complexity.[3]
However, recent history and philosophy of science (since at least the second half of the 20th century) has shown us the limits of such a purely algorithmic or procedural conception of rationality and intelligence. Any scientific approach, even the most experimental, inevitably relies on human judgments and arbitrations (concerning the basic vocabulary to be used, the major methodological orientations, the objectives to be achieved, but also fundamental intuitions such as the idea that empirical observation does not systematically deceive us).[4] Computer programs are no exception to this indispensability of human judgment. Even in the case of machine learning, humans must for instance arbitrate about the quality of the corpus of examples, about the type of free-parameter program that will be tuned automatically, or about the automatic parameterization procedure itself.[5] These kinds of judgments or arbitrations are not made “arbitrarily” (in the sense that everyone could do as they please in their own corner). A great deal of skill and experience is required, and it will never be only a matter of applying criteria or procedures in a purely neutral or objective way.
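Even a toy machine-learning workflow makes these judgment points visible. The following minimal Python sketch (the data is simulated and every numeric setting is an illustrative assumption) annotates where human arbitration enters a fully “automatic” training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Human arbitration 1: what counts as a good corpus of examples?
# Here we merely simulate data; in practice, deciding what to collect,
# how to label it, and what to exclude is a value-laden editorial choice.
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Human arbitration 2: which family of free-parameter programs?
# We pick logistic regression; nothing in the data itself forces this choice.
def predict(w, X):
    return 1.0 / (1.0 + np.exp(-(X @ w)))

# Human arbitration 3: the automatic tuning procedure itself.
# Loss function, learning rate, and number of steps are all set by us.
w = np.zeros(3)
lr, steps = 0.1, 500
for _ in range(steps):
    p = predict(w, X)
    w -= lr * X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss

accuracy = ((predict(w, X) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The training loop is indeed purely computational, but it only runs inside a frame of prior human decisions, which is precisely the point made above.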
To be intelligent or rational is, of course, to be able to apply criteria, procedures or algorithms correctly (objectively or neutrally), but it is also, and perhaps above all, to be able to judge the quality of criteria and procedures, to have a reflexive and critical attitude towards what we are doing… and therefore to be able to judge and arbitrate fallibly, to make mistakes sometimes, to correct oneself, to evolve (and to help each other in this respect, to collaborate with good will)… Being intelligent in this sense is something fundamentally alive, something that each of us can only undertake rooted in our own lived experience (with all the richness but also the limits that this entails)[6] and in healthy collaboration with others.
This collective and relational dimension of human intelligence is of paramount importance and leads us back to the topic of democracy as relying on a robust intersubjective space for deliberation. I become more intelligent when I interact with other people, for instance because they use different categorizations (or use mine differently). Democracy and collective deliberation are more than the blind concatenation of individual opinions, with predominance granted to those accepted by the majority. It is first and foremost a way of living and flourishing together. AI systems, as smart or “intelligent” as they may be, cannot be expected to replace or automate this form of deep collective human intelligence. That would in no way be a support to humans but rather a kind of obliteration of their life and intelligence. The key question is thus: how can the machine help us to be more intelligent? As increasingly pervasive actors in our social environment (we may say that we form techno-social or hybrid systems), digital technologies (including AI) not only inform us but also transform us. We must reflect upon this transformation and where we would like it to lead us. How can digital technology contribute to deepening the life experiences that make us wiser and more experienced? What type of AI systems and digital services will genuinely foster our collective and human intelligence?
[1] Gérald Bronner, Apocalypse cognitive, Paris: Presses Universitaires de France, 2021.
[2] https://www.latribune.fr/opinions/tribunes/lutte-contre-la-desinformation-la-guerre-des-intelligences-artificielles-997066.html
[3] See: AI and digital technologies for public services and democratic life.
[4] Philip Kitcher, Science, Truth, and Democracy, New York, NY: Oxford University Press, 2001, ISBN 0-19-514583-6. Mathieu Guillermin, “Non-neutralité sans relativisme ? Le rôle crucial de la rationalité évaluative,” in Laurence Brière, Mélissa Lieutenant-Gosselin, and Florence Piron (eds.), Et si la recherche scientifique ne pouvait pas être neutre ?, Éditions Science et bien commun, 2019, pp. 315–338. https://scienceetbiencommun.pressbooks.pub/neutralite/chapter/guillermin/
[5] For more details, see the expertise input in the nexus of complexity entitled: AI and digital technologies for public services and democratic life, especially section B. Are algorithms more neutral than humans?
[6] See for instance: François Laplantine, The Life of the Senses: Introduction to a Modal Anthropology, Routledge (Sensory Studies), 2020, 176 pp., ISBN 9781472531964.