Self-driving cars and ethical dilemmas: a legal-philosophical perspective

I just heard that my new paper on the ethics of programming self-driving cars has been accepted for publication in Ethical Theory and Moral Practice. The paper is titled "Killing by autonomous vehicles and the legal doctrine of necessity".

The main question addressed in the paper is: how should (fully) “autonomous vehicles” (AVs, or self-driving cars) be programmed to behave in the event of an unavoidable accident in which the only available choice is between causing different kinds of harm to different objects or persons? I am sure that even non-specialists will have encountered at least one media report like this on the so-called “trolley problem” applied to self-driving cars…

What’s new about my paper? Well, Bonnefon et al. (2016) have approached the issue from the perspective of experimental ethics; before them, Gerdes and Thornton (2015) had tackled it from the perspective of machine ethics. I think that neither experimental ethics nor philosophical ethics is at the moment able to offer car manufacturers and policy makers any clear guidance on this issue. Therefore, in order to make some progress, in this paper I propose taking a legal-philosophical approach and starting with a critical look at how the law has already regulated similarly difficult choices in other emergency scenarios. In particular, I propose considering the legal doctrine of necessity as it has been elaborated in Anglo-American jurisprudence and case law. The doctrine of necessity seems a promising starting point, as it regulates emergency cases in which human agents have intentionally caused damage to life and property in order to avoid other damages and losses, when avoiding all evils is deemed impossible.

My paper has a twofold goal: to offer a rational reconstruction of some major principles and norms embedded in Anglo-American law and jurisprudence on necessity; and to start assessing which, if any, of these principles and norms can be used to derive reasonable guidelines for the ethical regulation of the programming of AVs in emergency scenarios in which serious damage to property or life is unavoidable.

You can read a final draft of my paper here.

(And hey, if you think that there is much more to the ethics of self-driving cars than trolley problem dilemmas – well, I am with you: you may be interested in my white paper on ethics and self-driving cars for the Dutch Ministry of Infrastructure and Environment.)

