Being human in the time of neuroscience and artificial intelligence involves carefully exploring the nexuses of complexity where valid ideas are nevertheless in tension, manifesting subtleties and challenges that must not be overlooked. Each page expresses the tension(s) between ideas within each theme that emerged in the collective discussions, complemented by insights from NHNAI network researchers.
Complexity on democracy #3: Ensuring safety and security without undermining fundamental rights
Some participants acknowledge the interest of using AI technologies to improve safety and security (enhanced video-surveillance capabilities, increased ability to forecast and manage crises such as epidemics or natural disasters).
At the same time, the discussions clearly manifest worries about fundamental rights and privacy protection, especially mind privacy (already an issue with profiling algorithms, and even more so when neuroscience is added to the picture). Weakening privacy and blurring the limits between the public and private spheres may notably impede freedom of thought and expression as well as democratic and social life. In addition, participants insist that improvements in security and safety should not be achieved at the expense of the most vulnerable, who may encounter more difficulties in asserting their rights. In general, persons should never be reduced to their data.
Insights from NHNAI academic network:
Based on insights from Federico Giorgi (post-doctoral researcher in philosophy, Université de Namur, ESPHIN, Belgium), Brian P. Green (professor in AI ethics and Director of Technology Ethics at the Markkula Center for Applied Ethics, Santa Clara University, USA), Nathanaël Laurent (associate professor in philosophy of biology, Université de Namur, ESPHIN, Belgium) and Yves Poullet (professor in law of new technologies of information and communication, Université de Namur, ESPHIN – CRIDS, Belgium).
A. Privacy, a cornerstone of democracy
Privacy protection is a key component of collective life, especially in democratic societies. The right to keep some things secret, to keep them outside of the public sphere, is fundamental. As the Belgian philosopher Corentin de Salle recalls, privacy is extremely important for several basic reasons:[1]
First, to preserve people’s dignity. Out of decency, you might say. Secondly, because revealing things that should remain secret makes people vulnerable. It can undermine their authority if they have responsibilities. It makes it more difficult for them to assume the social role they must play in their professional lives. It can also lead to their weaknesses being revealed, enabling unscrupulous people to exploit them to manipulate, defraud, steal their identity or do them harm. Finally, protecting privacy is important because everyone needs a refuge, a place where they can recharge their batteries without worrying about what they say, do or think. (…)
Moreover, privacy “is not a fundamental freedom alongside other freedoms, but a condition of other freedoms. In particular, freedom of expression and freedom of movement. [As Yves Poullet puts it, if I know] that I am constantly being spied on, I will no longer dare to express myself as I wish, even in more intimate and private settings. If I feel controlled at all times, how can I move around as I wish?”[2] With emerging neurotechnologies providing new powers of analysis and manipulation of brain functioning, privacy issues may become even more acute, with the possibility of undermining our mental integrity and psychological identity. It may be time to recognize ‘neuro-rights’, as certain countries have already done.
Another way of looking at the foundation of the right to privacy is the power differential between the individual and the state. Because knowledge is power, and the state has vastly more knowledge and power than the individual, the state should be made more transparent to the individual (freedom of information about the government, narrowly scoped government secrecy) and the individual more opaque to the state (right to privacy). Digital technology and AI systems extend this problem of power asymmetry: AI is a power that can be controlled by states, but also by other organizations, and these organizations should likewise be made more transparent to the public, with the public likewise protected from them through privacy rights.
The desire for public safety via surveillance is, of course, in tension with the right to privacy noted above. The balance between safety and privacy is highly contextual and will vary from place to place, but in general the transparency of the government (or powerful organization) side of the equation can be enhanced in order to protect individuals even as they are more surveilled. It is also important to note that privacy should never be considered from a purely individualistic standpoint. Consider profiling and recommendation technology: our profiles are deduced not only from our own data but from big data in which our data are mixed with data about other people. This means that our individual decision to allow the collection and processing of our data by AI applications also implicates other people: our data might be used to profile people who refused the collection and processing of theirs. In fact, behind the exploitation of people’s (personal) data lies a global question about the type of social and economic model we want to live in, one that goes beyond the sole question of states’ surveillance of their citizens.
B. Surveillance capitalism
In this respect we can evoke Shoshana Zuboff’s book The Age of Surveillance Capitalism (2018). Zuboff, an emerita professor at Harvard Business School known for her research on technology in the workplace, set herself a large task: to create a set of terms that capture what is happening around modern tech companies. She argues that surveillance capitalism makes money by collecting, processing, and analyzing people’s behavioral data, using methods that encourage “radical indifference,” a way of observing without witness. This sets it apart from industrial capitalism, which profits from exploiting natural resources and labor. Surveillance companies found a wealth of information in the data they gathered for their own use, and they realized they could sell this “data exhaust” to advertisers. For them, the people behind the data are mere accessories.
Zuboff sees the resulting economic structures as something entirely new: a rogue form of capitalism. Whereas earlier companies relied on “primitive accumulation,” surveillance companies like Facebook and Google depend on ongoing “digital dispossession,” a concept she borrows from David Harvey. Each of us is constantly made legible and profitable for these companies. More than government surveillance that aims to limit free will, Zuboff worries that these companies will harness human free will to achieve their own goals, relying on the predictable outcomes we provide.
For Zuboff, this creates a troubling situation with respect to the core idea of modern liberalism: the individual. She views surveillance capitalism as an extension of B. F. Skinner’s research in psychology, in which people are seen as nothing more than their behaviors and reflexes. Skinner wanted to improve social unity and workplace efficiency, regardless of individual choice. Zuboff highlights examples that show how surveillance capitalism relates to this kind of behaviorism, such as the development of biometrics and Rosalind Picard’s research on affective computing for autistic users, which was later taken up by surveillance startups. All of this, she argues, shows that surveillance capitalism is gradually undermining our essential right to personal freedom.
[1] De Salle C., Tellier S., De Cooman J., Petit N., Duquenne E., Lombardo A., Hublet L. & Leduc P. (2018) La vie privée à l’ère des big data, Les Études du Centre Jean Gol, p. 9. https://www.cjg.be/les-etudes-du-cjg-la-vie-privee-a-lere-des-big-data/
[2] Ibid.