Some reflections while preparing a new course at TU/e: some of my oldest ideas still hold. Moral dilemmas about crash avoidance are a good conversation starter but not the way to go, my 2017 paper is still good, and I should have read JafariNaimi’s paper long ago.
It was in 2015 that I started working on the ethics of self-driving cars. I had just begun working on issues of control and responsibility with AI and robots, and my colleague Jeroen van den Hoven invited me to join an informal meeting with another TU Delft professor: Bart van Arem, an expert in traffic engineering and a pioneer in the development of self-driving cars or, as he would prefer to call them, automated driving systems. Bart was very interested in the ethical and policy issues raised by the development of self-driving cars – especially the balance between safety and innovation – but he was also concerned about a public debate that had been hijacked by the so-called moral dilemmas of programming cars for unavoidable crashes, made very popular by the Moral Machine experiment and by many popular media pieces and videos by, among others, Patrick Lin. Bart thought that there was much more to the ethics of developing and introducing self-driving cars than that.
I shared Bart’s concern, and so we started talking and collaborating. That was for me the beginning of many years of work on this topic, work that led me, among other things, to write a white paper for the Dutch Ministry of Transportation, to co-lead the project Meaningful Human Control over Automated Driving Systems with Bart (and the traffic psychologist Marjan Hagenzieker), to design a new course in Ethics of Transportation at TU Delft, to “export” that course to the Politecnico di Milano, and to become a member and Rapporteur of an EU independent expert group that delivered a report on the ethical issues of driverless mobility.
Now, ten years later, my first course in my new role as professor at TU Eindhoven will be a big engineering ethics course for Bachelor students. It is a massive course with more than one thousand students (sic!), co-taught by many teachers, and I will teach a track that uses the ethics of self-driving cars as a case study for learning the basics of tech ethics. Students are always keen on the moral dilemmas of programming cars for unavoidable crashes, and the trolley problem is always a good way to draw them in. So that will be the start. But the main message of the course will remain the one that Bart van Arem and I bonded over ten years ago: there is much more than that to the ethics of self-driving cars – and to the ethics of technology more generally. So I will assign three readings for that part of the course. The first two are my own: the EU report on Safety, Data Ethics and Responsibility with self-driving cars, and my Killing by Autonomous Vehicles and the Legal Doctrine of Necessity, which uses a legal-philosophical approach to uncover the many ethical and societal issues highlighted by trolley-like problems, beyond the simplified question: who do you think should be killed (by a self-driving car)?
The third paper is one I had always planned to read and – shame on me – never did until last week: Nassim JafariNaimi’s Our Bodies in the Trolley’s Path, or Why Self-driving Cars Must *Not* Be Programmed to Kill, a beautiful plea for moving away from an abstract “algorithmic morality” and back to the moral richness of the concrete lived moral experiences of people in the real world, with their complex (care) relations; and for taking the trolley-like thought experiments with self-driving cars not literally, but rather as an opportunity “to rethink mobility, as it connects to the design of our cities, the well-being of our communities, and the future of our planet”. I finally read it, almost ten years later, and I enjoyed it a lot. As they say: better late than never.