What is computational propaganda, and how can it be countered by design without compromising freedom of expression, the rule of law, and justice?

I have addressed these questions in a paper written with Marlijn Heijnen.


(The full paper is available here; below is an abstract.)

“Computational propaganda” (CP), also known as “cognitive warfare”, refers to the use of algorithms (e.g., AI bots that mimic people) to purposefully distribute misleading information over social media and other internet platforms in order to influence how citizens think and act. It includes the spread of disinformation, misinformation, conspiracy theories, and hate speech by a variety of actors: single-issue pressure groups, right-wing and left-wing extremist groups, terrorist groups, but also organised crime groups and even sovereign governments. CP may undermine democratic life by gradually weakening the epistemic capacities of the recipients of this information and unduly influencing public discourse.

However, countermeasures to computational propaganda, such as shutting down websites and accounts, possibly through covert intelligence operations, are also known to threaten core principles of democracy, e.g., privacy, freedom of speech, governments’ accountability, and the rule of law. Moreover, since it would be impossible to identify deceptive information by hand and stop it before it goes viral, state actors (e.g., policy officers) or platform owners may use AI-based early warning systems equipped with autonomous capabilities to counter deceptive information. However, AI systems create additional ethical risks: new categories of mistakes, bias amplification, and unjust harm to vulnerable individuals and groups.

In this paper we do two things. First, we explore how the current value tensions surrounding democratic principles in counter-CP measures may be loosened by applying a Value-sensitive Design approach, according to which new technologies, in this case systems to counter computational propaganda, are designed from their early stages to fulfil all relevant values, through an interdisciplinary engagement of multiple stakeholders.

Second, we discuss how the additional ethical risks introduced by AI systems may be addressed by design for meaningful human control (MHC), and more specifically MHC as reason-responsiveness, including the so-called tracking and tracing conditions.

The challenge of adhering to democratic principles from the point of view of the tracking condition for MHC (design for reason-responsiveness) is how to design a counter-CP system that remains responsive to the values of freedom of expression and privacy of producers of online information while at the same time promoting the reliability and truthfulness of the information circulated. The challenge from the point of view of the tracing condition for MHC (design for human responsibility) is how to design counter-CP systems that make it possible a) to hold producers of online information responsible for attempts to distribute false information with the intention to destabilise democratic procedures in other countries; b) to keep potential recipients of this information responsible, i.e., able and motivated, for the critical scrutiny of this information; and c) to hold the governments and companies that enforce counter-CP measures accountable for complying with liberal-democratic principles and procedures in their (counter)actions.

You can read the full paper here.

The paper was presented at the NATO Symposium on Meaningful Human Control in Information Warfare, organised by Jurriaan van Diggelen and Mark Draper for the STO Human Factors and Medicine Panel.