Join our Symposium at IACAP/AISB-25 in Twente on July 1st 2025
Giorgia Pozzi (TU Delft) and I have organized a one-day symposium to be held at IACAP/AISB-25 in Twente on July 1st 2025, titled “Bridging Justice and Meaningful Human Control in Medical AI”. The idea came from a paper Giorgia and I have just finished, in which we ask what it means to keep medical AI under control, who should be in control (doctors? patients?), and how to make this control “meaningful”. Specifically, we analyze how epistemic justice is key to preventing structures of power from reducing doctors’ and patients’ real control. Below are a longer description and the program of the day; you may register for the conference and get all practical info here.
Bridging Justice and Meaningful Human Control in Medical AI Symposium, IACAP/AISB-25
📅 July 1, 2025 (conference: July 1–3)
📍 University of Twente, Enschede, NL
Overview
The deployment of AI systems in healthcare (in diagnosis, treatment recommendations, risk assessment, surgery, and other domains) raises questions about their responsible use. Due to their self-learning properties and epistemic limitations (e.g., opacity), AI systems may create so-called responsibility gaps (Matthias, 2004). Meaningful human control, developed as a way to counter the emergence of these gaps (Mecacci et al., 2024), requires that AI systems be responsive to the reasons of relevant agents and that human agents remain ultimately responsible for the systems’ behavior.

But what if unjust power dynamics prevent possibly relevant agents from rendering their reasons accessible to the AI system? How can vulnerable social groups – but also medical professionals – maintain their epistemic and moral agency under conditions of (epistemic) injustice (see, e.g., Pozzi, 2023)? In other words, to what extent does meaningful human control require (epistemic) justice? This also connects to the debate on patient empowerment, that is, the question of the extent to which AI can give patients more or less power over their health and well-being.

This symposium aims to bring together scholars working on justice, responsibility, meaningful human control, and empowerment in AI to explore fruitful points of intersection between these domains, with a specific focus on medical technologies. The topic urges us to bring together scholars working on the ethics of AI, the epistemology of AI, law, medical ethics, the design-engineering perspective, and their crossovers.
Questions of Interest
- Which epistemically unjust (structural) mechanisms can prevent relevant agents from offering their reasons to an AI-mediated healthcare system?
- Are participatory approaches to technological development enough to counter these injustices and the lack of control ensuing from them? Which other political concepts or measures are needed to make sense of and address issues of power, justice, or control in medical AI?
- To what extent is it desirable to give patients more control and responsibility for their health? And to what extent would this responsibility empower them as opposed to unjustly burden (some of) them? How are empowerment and (epistemic) justice related in AI-mediated healthcare?
- What can we learn from the analysis of specific case studies or applications? To what extent can different kinds of AI applications improve or worsen human control, justice or patients’ empowerment in different healthcare domains?
- Are there any novel challenges or opportunities raised by GenAI/LLMs for human control, responsibility, justice, and empowerment in healthcare?
- Which lessons about control, justice, and empowerment in healthcare can be learnt from the introduction of AI in other high-stakes domains?
Program
11:00-11:05: Welcome
11:05-11:30: Giorgia Pozzi & Filippo Santoni de Sio: Epistemic justice through meaningful human control in medical AI
11:30-12:00: Sanaa Abrahams & Giulio Mecacci: Epistemic Justice in Healthcare: Enhancing Meaningful Human Control through Responsibility Distribution
12:00-12:30: Keynote by Lily Frank
12:30-13:00: Panel discussion among speakers
——
14:00-14:30: Eliana Bergamin: Contributory injustice and epistemic calcification in AI-driven healthcare: the marginalization of emotions in decision-making and knowledge production
14:30-15:00: Keynote by Patrik Hummel
15:00-15:30: Panel discussion
——
16:00-16:30: Jacqueline Kernahan and Atay Kozlovski: Designing for Meaningful Human Control Over LLM Tools in Healthcare – A Case Study
16:30-17:00: Gabriele Nanino: I’ll answer you to death: LLMs’ epistemic recklessness in the medical sector
17:00-17:30: Panel discussion