Call for two postdocs in our project Meaningful Human Control over Automated Driving Systems

There is a call open for two postdoc positions at TU Delft – one in philosophy and one in behavioural science – within the NWO project Meaningful Human Control over Automated Driving Systems, which I will run with Bart van Arem and Marjan Hagenzieker. Please find the two calls at the following links (NB the deadline for the behavioural science post will be extended!):
Posted in behavioural science, Ethics of autonomous systems, robot ethics, Self-driving cars, Value-sensitive design

Marginal responsibility and somnambulistic killings

My co-authored article titled Pushing the Margins of Responsibility: Lessons from Parks’ Somnambulistic Killing (with Ezio Di Nucci) has just been accepted for publication in Neuroethics.

The paper can be read here.

I have been interested in hard cases in the theory of responsibility for many years now, and I have always been intrigued by the tragic story of Kenneth Parks, the Canadian man who, in a state of somnambulism, drove 23 kilometres to the home of his parents-in-law, entered their house, strangled his father-in-law into unconsciousness, and stabbed his mother-in-law to death. My original source of inspiration on the topic of responsibility and somnambulism was Bernard Williams’ paper The Actus Reus of Dr Caligari, and I had already briefly discussed Parks’ case (and Caligari) in my book Per Colpa di chi and in another article co-written with Ezio, Who is Afraid of Robots. In this paper, however, we try to give more depth and structure to our reflection on moral responsibility for things done while in a state of (partial) unconsciousness.

In Responsibility from the Margins (2015) David Shoemaker argues against a binary approach to responsibility, claiming that it leaves out instances of what he calls “marginal agency”: cases where agents seem to be “eligible for some responsibility responses but not others” (4). In this paper we endorse and extend Shoemaker’s approach by presenting and discussing a further case of marginal agency not yet covered by Shoemaker or by the rest of the literature on moral responsibility: the case of Kenneth Parks. While we agree with Parks’ legal acquittal, in this paper we address the issue of his moral responsibility. In Parks’ case it seems difficult to say that he is morally responsible for killing his mother-in-law; but it also seems unsatisfactory simply to equate his story with a random accident, for which no normative evaluation of the agent would be appropriate.

Our main claim is that Parks, while falling short both of fully satisfying our three conditions for moral responsibility – status, voluntary action, causation – and of manifesting bad will in any of the senses mapped by Shoemaker, may still be seen as responsible in some way for his behavior. Parks, we argue, is a full moral agent whose clouded consciousness is both temporary and partial (status condition); his actions are not involuntary and are goal-directed (voluntary action condition); and the death of Parks’ mother-in-law is non-deviantly causally dependent on his actions (causation condition). That is why, we argue, even though Parks is not open to standard moral reactions like blame and condemnation, he can still be legitimately open to some resentment, at least from those who have suffered because of his actions.

While admittedly unique, Parks’ case highlights one important general lesson: as generally competent moral agents, we may legitimately be called to answer for actions we carried out without any bad will, or even without any awareness, insofar as those actions directly affect the wellbeing of others.


The Cabinet of Dr Caligari

The Cabinet of Dr Caligari, here with the beautiful music by Supershock

Posted in consciousness and moral responsibility, marginal agency, marginal responsibility, moral responsibility, Responsibility, Strawson and reactive attitudes

Self-driving cars and ethical dilemmas: a legal-philosophical perspective

I just heard that my new paper on the ethics of the programming of self-driving cars has been accepted for publication in Ethical Theory and Moral Practice. The paper is titled: Killing by autonomous vehicles and the legal doctrine of necessity.

The main question addressed in the paper is: how should (fully) “Autonomous Vehicles” (AVs, or self-driving cars) be programmed to behave in the event of an unavoidable accident in which the only choice open is between causing different damages to different objects or persons? I am sure that even non-specialists will have encountered at least one media report like this on the so-called “trolley problem” applied to self-driving cars…

What’s new about my paper? Well, Bonnefon et al. (2016) have approached the issue from the perspective of experimental ethics; before them, Gerdes and Thornton (2015) had tackled it from the perspective of machine ethics. I think that neither experimental ethics nor philosophical ethics is at the moment able to offer car manufacturers and policy makers any clear indication for addressing this issue; therefore, in order to take some steps forward, in this paper I suggest taking a legal-philosophical approach and starting with a critical look at how the law has already regulated similarly difficult choices in other emergency scenarios. In particular, I propose to consider the legal doctrine of necessity as it has been elaborated in Anglo-American jurisprudence and case law. The doctrine of necessity seems a promising starting point, as it regulates emergency cases in which human agents have intentionally caused damage to life and property in order to avoid other damages and losses, when avoiding all evils is deemed impossible.

My paper has a twofold goal: to offer a rational reconstruction of some major principles and norms embedded in Anglo-American law and jurisprudence on necessity; and to start assessing which of these principles and norms, if any, can be used to derive reasonable guidelines for solving the ethical issue of the regulation of the programming of AVs in emergency scenarios in which some serious damage to property and life is unavoidable.

You can read a final draft of my paper here.

(and hey, if you think that there is much more to the ethics of self-driving cars than trolley-problem dilemmas – well, I am with you: you may be interested in my white paper on ethics and self-driving cars for the Dutch Ministry of Infrastructure and Environment)

Posted in Ethics, ethics of Artificial Intelligence (AI), Ethics of autonomous systems, ethics of technology, killer robots, robot ethics, Robotics, Self-driving cars, Technology, trolley problem

Ethics of Self-Driving Cars: from Delft…to Ottawa


This past week I had a great experience giving a talk at the “Future of Driving” symposium, organized at TU Delft by the Electrotechnische Vereeniging. It was very nice, as usual, to discuss ethical and societal issues with students, industry representatives like Maarten Sierhuis (Nissan) and Serge Labermont (Delphi), and governmental representatives (Edwin Nas, Dutch Ministry of Infrastructure).

And – this Sunday I am flying to Ottawa, where I will be participating in a Roundtable on Cooperative Mobility Systems and Automated Driving organized by the International Transport Forum and hosted by the Canadian Ministry of Transport – exciting!


Posted in Ethics of autonomous systems, robot ethics, Robotics, Self-driving cars, Technology

White paper on the Ethics of Self-driving Cars for the Dutch Ministry of Infrastructure and Environment

In the context of the knowledge agenda on automated driving, Rijkswaterstaat (the Dutch Ministry of Infrastructure and Environment) commissioned me to write a white paper on ethical issues in automated driving, to provide a basis for discussion and some recommendations on how to take this subject into account when deploying automated vehicles.

The paper can be read here.

Here is an abstract:

In this paper I present, discuss, and offer some recommendations on some major ethical issues raised by the introduction on public roads of automated driving systems (ADS), aka self-driving cars. The recommended methodology is that of Responsible Innovation and Value-Sensitive Design. The concept of “meaningful human control” is introduced and proposed as the basis for a policy approach that prevents morally unacceptable risks to human safety and anticipates issues of moral and legal responsibility for accidents. The importance of the individual rights to safety, access to mobility, and privacy is also highlighted.

Posted in accountability gap, Ethics, Ethics of autonomous systems, ethics of technology, responsibility gap, responsible innovation, robot ethics, Robotics, Self-driving cars, Technology, trolley problem, Value-sensitive design

Appointments August-October 2016

Hi, here are some interesting meetings and conferences I will be participating in over the coming two months. See you there!

  • Oct 7th: I will give a keynote lecture on the Ethics of Human Enhancement at the International Society for Prosthetics and Orthotics (ISPO) conference in Rotterdam (NL)
  • Oct 10th: I will give an invited presentation at the Institut d’Histoire et de Philosophie des Sciences et des Techniques, Université Paris 1 Panthéon-Sorbonne (France)
Posted in care robots, Ethics, ethics of Artificial Intelligence (AI), ethics of human enhancement, ethics of technology, Robotics, Self-driving cars, talks and presentations, Technology

Our “Drones and Responsibility” can be pre-ordered!

Thrilled to announce that our co-edited Routledge book “Drones and Responsibility: Legal, Philosophical, and Socio-technical Perspectives on Remotely Controlled Weapons” can now be pre-ordered on Amazon and Bol!

In the meanwhile, you can read our introduction here.

Posted in accountability gap, armed drones, autonomous weapon systems (AWS), criminal responsibility, drone warfare, drones, Ethics, ethics of technology, international law, killer robots, military ethics, military robots, moral responsibility, Responsibility, responsibility gap, robot ethics, Robotics, targeted killings, Technology, UAV

Web lecture on the ethics of care robots

Here is a web lecture (12 min) on the ethics of care robots that I recorded for the TU Delft MOOC (online course) Responsible Innovation: Ethics, Safety and Technology.

The lecture is based on my co-authored paper When Should We Use Care Robots? The Nature-of-Activities Approach (with Aimee van Wynsberghe).

Enjoy…!


Posted in care robots, Ethics, medical ethics, responsible innovation, robot ethics, Robotics, Technology, Value-sensitive design, web lecture

Technological Doping?

“Technological doping? Responsible Innovation in Sport Engineering” is the title of the talk that I will give in Amsterdam, on April 8th, at the Science and Engineering Conference on Sports Innovations (SECSI).

While more and more cases of biomedical doping are discussed in the media (tennis star Maria Sharapova‘s being only the latest in a series of scandals), little is said about the ethics of technological enhancement in sport. This is striking, given that when we talk about performance enhancement, concern for the health of users is not the only one raised by the public and by professional ethicists (fairness, authenticity, and responsibility-shifting are others), so that many of the arguments against biomedical doping – both the sound and the flawed ones – may soon be turned into arguments against “technological doping”. Is this good news? Below, in the abstract of my talk, are some preliminary reflections (short summary: confusing and arbitrary regulation is in no one’s interest, so let’s start a systematic reflection). Speaking of the ethics of enhancement, my co-authored chapter “Why Less Praise for Enhanced Performance?” is finally coming out in an Oxford University Press collection on cognitive enhancement.

Here’s the abstract of my talk in Amsterdam:

How should the use of high-tech products be regulated in competitive sport? What should count as “technological doping”? In recent years there have been strict bans on the use of high-tech products in competitive sport, like the Union Cycliste Internationale’s ban on ultra-aerodynamic bicycles and the Fédération Internationale de Natation’s ban on Speedo’s LZR Racer swimsuit after the Beijing Olympic Games. However, sport federations do not seem to possess solid criteria for drawing a clear line between legitimate and illegitimate uses of technology in sport: whereas it can easily be seen that the (concealed) use of an engine in cycling is both unfair and against the spirit of that sport, it is much more difficult to establish clearly why the use of ultra-aerodynamic bicycles or high-tech swimming suits should count as technological fraud.

This lack of clarity about the guiding principles behind the regulation of sport technology is worrisome, as it may lead to unpredictable and potentially arbitrary restrictive regulations; these may in turn curb future research and investment in sport technologies, reduce their positive externalities, and make competitive sport less attractive to the public.

In this paper, starting from my experience as a researcher on the ethics of human enhancement, and based on the general principles of “socially responsible innovation” [1], I propose a methodology for developing an applied ethics of technological enhancement in sport. This methodology is based on three key elements: a) theoretical research on values and technological enhancement in different sports; b) interdisciplinary collaboration between researchers and private and public stakeholders; and c) a strong design perspective.

Theoretical research. Existing research [2] shows that humans attach different values to competitive performance: fairness and safety, but also a sense of authentic agency and of a meaningful contribution to the final result; performance-enhancing technologies are viewed with suspicion insofar as they negatively affect one or more of these values. Philosophical and psychological research is required to identify, define, and map the different values entailed in different sports.

Interdisciplinarity. Academic researchers cannot do this research alone: they need input and feedback from sport associations, and they need to interact with designers and private companies in order to understand which products can promote which values, in which sport, and under which conditions.

Design perspective. Academic researchers, designers, sport associations, policy-makers, and private companies should interact from the very early stages of their respective research in order to elaborate specific design guidelines for the creation of technological products, and policies that really promote the values of different sports.

The paper makes a call to designers, private partners, and policy makers for collaboration on joint research projects on the applied ethics of technological sport enhancement, and proposes a methodology to start this collaboration.

REFERENCES

[1] van den Hoven, J. (2013). “Value Sensitive Design and Responsible Innovation”, in R. Owen, J. Bessant and M. Heintz (eds), Responsible Innovation – Managing the Responsible Emergence of Science and Innovation in Society. West Sussex: John Wiley, 75-84.
[2] Santoni de Sio, F., Faber, N.S., Savulescu, J. & Vincent, N.A. (2016). “Why Less Praise For Enhanced Performance? Moving Beyond Responsibility-shifting, Authenticity, and Cheating to a Nature of Activities Approach”, in F. Jotterand & V. Dubljevic (eds.), Cognitive Enhancement: Ethical and Policy Implications in International Perspectives. Oxford: Oxford University Press.

Posted in doping, Ethics, ethics of enhancement, responsible innovation, SECSI conference 2016, sport engineering, technological doping, Technology

Talk on Care Robots at the University of Copenhagen


Tomorrow I will fly to Copenhagen, where, on Thursday March 3rd, I will give a seminar on the ethics of care robots at the Faculty of Public Health.

I will talk about the use of therapeutic companionship robots like the Paro robot by patients with mental disabilities such as dementia. The ethical debate has so far focused mainly on the psychological benefits of Paro on the positive side, and on the risks of loss of human contact, infantilisation, and deception of patients on the negative side (here is a nice summary of the current ethical debate on Paro by Amanda Sharkey). In this talk, I will try to analyse the role of the patient’s consent. This topic hasn’t been much discussed, probably because it is implicitly assumed that people with mental disabilities like dementia simply cannot express any valid consent to their treatment. I will challenge this assumption by looking, among other things, at the recent legal debate on consent to sexual intercourse by mentally disordered people, where it has been convincingly argued that, while falling short of being fully rational and autonomous agents, mentally disordered people may still express, under certain conditions, some valid consent to activities that they see as desirable. If something similar can be argued in relation to the use of companionship robots by mentally disordered persons, then this might be an additional argument in favour of this use.

I look forward to this discussion for two reasons. First, the seminar will be at the Faculty of Public Health, so I am expecting the audience to be particularly interested in medical-ethics issues. Second, this will also be a reunion with my colleague and friend Ezio Di Nucci, with whom I’m editing a book on Drones and Responsibility. Hey, by the way, the book is almost done and will be out in the coming spring (here you can read a draft of the introduction)!


Posted in care robots, Ethics, informed consent, medical ethics, Mental disorder, Paro robot, Robotics