Hearing on the ban of Autonomous Weapon Systems at the Belgian Parliament

On December 6th I will join the Commission for National Defense of the Belgian Chamber of Representatives to give an expert opinion on a motion to ban the research, development and use of fully Autonomous Weapon Systems. The motion was presented by Benoit Hellings and Wouter De Vriendt (DOC 54 2219/001 and 002) and is available on the website www.dekamer.be under “documenten”.

Posted in accountability gap, autonomous weapon systems (AWS), Ethics of autonomous systems, ethics of technology, international law, killer robots, responsibility gap, responsible innovation, robot ethics, Value-sensitive design

The “Evaluation Schema” for the ethical use of robots in security is available


I have been collaborating for almost two years with a pool of experts coordinated by Markus Christen from the University of Zurich; we have written a White Paper including an “Evaluation Schema” for the assessment of the ethical risks of robot systems to be deployed in security applications. The report is finally done and available!

You can read the report here

The research project was commissioned and funded by Armasuisse Science + Technology, the technology center of the Swiss Federal Department of Defence, Civil Protection and Sport.

The research was conducted by a team of the Center for Ethics of the University of Zürich (principal investigator: PD Dr. Markus Christen; research assistant: Raphael Salvi) with the support of an international expert team. This team consisted of Prof. Thomas Burri (University of St. Gallen; focus on chapter 3 of part 2), Major Joe Chapa (United States Air Force Academy, Department of Philosophy; focus on chapter 2), Dr. Filippo Santoni de Sio (Delft University of Technology, Department Ethics/Philosophy of Technology; focus on chapters 1 and 4), and Prof. John Sullins (Sonoma State University, Department of Philosophy; focus on chapter 4).

 

Posted in accountability gap, armed drones, autonomous weapon systems (AWS), drone warfare, drones, Ethics, ethics of Artificial Intelligence (AI), Ethics of autonomous systems, ethics of technology, international law, killer robots, military robots, policing robots, rescue robots, responsibility gap, responsible innovation, robot ethics

The Project Meaningful Human Control over Automated Driving Systems has started!

On October 11th 2017 we had an internal kickoff meeting of our NWO-MVI project Meaningful Human Control over Automated Driving Systems! All the members of the research team were present, including the three new postdocs: Giulio Mecacci (philosophy), Daniel Heikoop (behavioural science) and Simeon Calvert (traffic engineering). In early December we will also have a kickoff with the policy and industry partners. More info on the project and the partners here!

 

Posted in accountability gap, Ethics of autonomous systems, responsible innovation, robot ethics, Self-driving cars, Value-sensitive design

My interview on killer robots in the New Scientist

I have been interviewed by the New Scientist for an “Ethics Issue” addressing the “ten scientific dilemmas that will shape our future”. I was asked to weigh in on the question: “Should we give robots the right to kill?” Here’s a screenshot of my interview page:

New scientist interview page

And here below are the covers of the magazine’s issue and of the ethics section:

New scientist July 8 cover

New scientist cover ethics part

Posted in accountability gap, armed drones, autonomous weapon systems (AWS), drones, Ethics, ethics of Artificial Intelligence (AI), Ethics of autonomous systems, ethics of technology, killer robots, military robots, moral responsibility, Responsibility, responsibility gap, robot ethics, Robotics, Technology

Call for two postdocs in our project Meaningful Human Control over Automated Driving Systems

There is a call open for two postdoc positions at TU Delft – one in philosophy and one in behavioural science – within the NWO project Meaningful Human Control over Automated Driving Systems, which I will run with Bart van Arem and Marjan Hagenzieker.  Please find the two calls at the following links (NB the deadline for the behavioural science post will be extended!):
Posted in behavioural science, Ethics of autonomous systems, robot ethics, Self-driving cars, Value-sensitive design

Marginal responsibility and somnambulistic killings

My co-authored article titled Pushing the Margins of Responsibility: Lessons from Parks’ Somnambulistic Killing (with Ezio Di Nucci) has just been accepted for publication in Neuroethics.

The paper can be read here.

I have been interested in hard cases in the theory of responsibility for many years now, and I have always been intrigued by the tragic story of Kenneth Parks, the Canadian man who, in a state of somnambulism, drove 23 kilometres to the home of his parents-in-law, entered their house, strangled his father-in-law into unconsciousness and stabbed his mother-in-law to death. My original source of inspiration on the topic of responsibility and somnambulism was Bernard Williams’ paper The Actus Reus of Dr Caligari, and I had already briefly discussed Parks’ case (and Caligari) in my book Per Colpa di chi and in another article co-written with Ezio: Who is Afraid of Robots. In this paper, however, we try to give more depth and structure to our reflection on moral responsibility for things done while in a state of (partial) unconsciousness.

In Responsibility from the Margins (2015) David Shoemaker has argued against a binary approach to responsibility by claiming that it leaves out instances of what he calls “marginal agency”, cases where agents seem to be “eligible for some responsibility responses but not others” (4). In this paper we endorse and extend Shoemaker’s approach by presenting and discussing one more case of marginal agency not yet covered by Shoemaker or by the other literature on moral responsibility: the case of Kenneth Parks. While we agree with Parks’ legal acquittal, in this paper we address the issue of his moral responsibility. In Parks’ case it seems difficult to say that he is morally responsible for killing his mother-in-law; but it also seems unsatisfactory to simply equate his story with a random accident, for which no normative evaluation of the agent would be appropriate.

Our main claim is that Parks, while both falling short of fully satisfying our three conditions for moral responsibility – status, voluntary action, causation – and falling short of manifesting any bad will in any of the senses mapped by Shoemaker, may still be seen as responsible in some way for his behavior. Parks, we argue, is a full moral agent whose clouded consciousness is both temporary and partial (status condition); his actions are not involuntary but goal-directed (voluntary action condition); and the death of Parks’ mother-in-law is non-deviantly causally dependent on his actions (causation condition). That is why, we argue, while not open to standard moral reactions like blame and condemnation, Parks can still be legitimately open to some resentment, at least from those who have suffered because of his actions.

While admittedly unique, Parks’ case highlights one important general lesson: as generally competent moral agents we may legitimately be called to answer for actions we carried out without any bad will, or even without any awareness, insofar as our actions directly affect the wellbeing of others.



The Cabinet of Dr Caligari, here with the beautiful music by Supershock

Posted in consciousness and moral responsibility, marginal agency, marginal responsibility, moral responsibility, Responsibility, Strawson and reactive attitudes

Self-driving cars and ethical dilemmas: a legal-philosophical perspective

I just heard that my new paper on the ethics of the programming of self-driving cars has been accepted for publication in Ethical Theory and Moral Practice. The paper is titled: Killing by autonomous vehicles and the legal doctrine of necessity.

The main question addressed in the paper is: how should (fully) “Autonomous Vehicles” (AVs, or self-driving cars) be programmed to behave in the event of an unavoidable accident in which the only choice open is between causing different damages to different objects or persons? I am sure that even non-specialists will have encountered at least one media report like this on the so-called “trolley problem” applied to self-driving cars…

What’s new about my paper? Well, Bonnefon et al. (2016) have approached the issue from the perspective of experimental ethics; before them, Gerdes and Thornton (2015) had tackled it from the perspective of machine ethics. I think that neither experimental ethics nor philosophical ethics is at the moment able to offer car manufacturers and policy makers any clear indication for addressing this issue; therefore, in order to take some steps forward, in this paper I suggest taking a legal-philosophical approach and starting with a critical look at how the law has already regulated similarly difficult choices in other emergency scenarios. In particular, I propose to consider the legal doctrine of necessity as it has been elaborated in Anglo-American jurisprudence and case law. The doctrine of necessity seems a promising starting point, as it regulates emergency cases in which human agents have intentionally caused damage to life and property in order to avoid other damages and losses, when avoiding all evils is deemed impossible.

My paper has a twofold goal: to offer a rational reconstruction of some major principles and norms embedded in Anglo-American law and jurisprudence on necessity; and to start assessing which, if any, of these principles and norms can be used to derive reasonable guidelines for regulating the programming of AVs in emergency scenarios in which some serious damage to property or life is unavoidable.

You can read a final draft of my paper here.

(and hey, if you think that there is much more than trolley problem dilemmas to the ethics of self-driving cars – well, I am with you: you may be interested in my white paper on ethics and self-driving cars for the Dutch Ministry of Infrastructure and Environment)

Posted in Ethics, ethics of Artificial Intelligence (AI), Ethics of autonomous systems, ethics of technology, killer robots, robot ethics, Robotics, Self-driving cars, Technology, trolley problem

Ethics of Self-Driving Cars: from Delft…to Ottawa

Last week I had a great experience giving a talk at the “Future of Driving” symposium, organized at TU Delft by the Electrotechnische Vereeniging. It was very nice, as usual, to discuss ethical and societal issues with students, industry representatives like Maarten Sierhuis (Nissan) and Serge Lambermont (Delphi), and governmental representatives (Edwin Nas, Dutch Ministry of Infrastructure).

And – this Sunday I am flying to Ottawa, where I will be participating in a Roundtable on Cooperative Mobility Systems and Automated Driving organized by the International Transport Forum and hosted by the Canadian Ministry of Transport – exciting!


Posted in Ethics of autonomous systems, robot ethics, Robotics, Self-driving cars, Technology

White paper on the Ethics of Self-driving Cars for the Dutch Ministry of Infrastructure and Environment

In the context of the Knowledge Agenda Automated Driving, the Dutch Ministry of Infrastructure and the Environment (Rijkswaterstaat) commissioned me to write a white paper on ethical issues in automated driving, to provide a basis for discussion and some recommendations on how to take this subject into account when deploying automated vehicles.

The paper can be read here.

Here is an abstract:

In this paper I present, discuss, and offer some recommendations on major ethical issues raised by the introduction of automated driving systems (ADS), aka self-driving cars, on public roads. The recommended methodology is that of Responsible Innovation and Value-Sensitive Design. The concept of “meaningful human control” is introduced and proposed as the basis for a policy approach which prevents morally unacceptable risks to human safety and anticipates issues of moral and legal responsibility for accidents. The importance of the individual rights to safety, access to mobility and privacy is also highlighted.

Posted in accountability gap, Ethics, Ethics of autonomous systems, ethics of technology, responsibility gap, responsible innovation, robot ethics, Robotics, Self-driving cars, Technology, trolley problem, Value-sensitive design

Appointments August-October 2016

Hi, here are some interesting meetings/conferences I will be participating in over the coming two months. See you there!

  • Oct 7th: I will give a keynote lecture on the Ethics of Human Enhancement at the International Society for Prosthetics and Orthotics (ISPO) conference in Rotterdam (NL)
  • Oct 10th: I will give an invited presentation at the Institut d’Histoire et de Philosophie des Sciences et des Techniques, Université Paris 1 Panthéon-Sorbonne (France)
Posted in care robots, Ethics, ethics of Artificial Intelligence (AI), ethics of human enhancement, ethics of technology, Robotics, Self-driving cars, talks and presentations, Technology