Consider the following situation:
You are a trolley driver, driving your trolley on a nice sunny day, when you notice five men working on the street ahead, right in your way. You blow the horn several times, but then notice that they are all wearing earphones while working and hence can't hear you approaching. Frustrated, you reach for the brake, and to your shock, you find that it isn't working at all. Your mind goes blank, and you are frightened at the mere thought of what is going to happen to the five workmen when the trolley hits them.
However, you suddenly notice a small diversion just before the workmen, onto which you can divert your trolley easily and quickly, thereby saving all five! What a relief!
But the relief is only momentary: as you consider taking the diversion, you find that it isn't clear either. One man is working on it, and if you turn the wheel to take the diversion, his death is certain.
There you sit, on your speeding trolley with its failed brake, with two roads to choose from: one that will kill five men, and another that will kill one man, but with an active choice involved on your part.
What’ll you do?
The above hypothetical problem presents us with a millennia-old debate: in some ways, the same as that between pro-life and pro-choice activists, between the supporters and critics of utilitarianism, and between public interest and private rights. The question, in one sense, tests utilitarianism at its extremity: by pushing its pleasure-and-pain calculus to questions of survival and death, the death here being caused deliberately.
This particular hypothetical situation is a philosophical thought experiment introduced by Philippa Foot in 1967, and the two possible courses of action signify two different viewpoints of moral reasoning. One course of action, saving the five men by killing one, subscribes to 'consequentialist moral reasoning'. The other viewpoint, 'categorical moral reasoning', holds that killing even a single person is wrong, no matter how grave the situation that dictates such killing.
What would you do? Save the five workmen by diverting the trolley, right?
Statistically, when asked the above question, people tend to subscribe to consequentialist moral reasoning and opt to save five lives at the cost of one. However, when the hypothetical is modified so that the required degree of active involvement is significantly greater, people do seem to change their minds:
Suppose you're a doctor specialising in organ transplantation. One day, five road-accident victims are sent to you, all in immediate need of transplants. You have everything but the healthy organs to transplant. The five patients need five different organs: one needs a liver, another a heart, another a kidney, and so on.
While pondering how to arrange five healthy organs in such a short span of time, you suddenly remember that a healthy person is sitting in the visiting room, having come in just for a routine check-up.
Would you kill the person in the visiting room and use his organs to save the five?
The above variation of the trolley problem was introduced by Judith Jarvis Thomson in 1985, and it makes the distinction between the two choices clear: the choice is not between killing five and killing one, but between 'letting five die' and 'killing one'. Here, statistically speaking, most people find it difficult to stick to consequentialist moral reasoning, because even though we hold the notion that the greater good is the right thing to pursue, we also hold the deep-rooted conviction that certain basic, inalienable rights of a human being cannot be taken away under any circumstances.
The above set of problems highlighting ethical dilemmas in decision-making is loosely and informally termed 'trolleyology', or the 'trolley problem', in general. Although originally a philosophical thought experiment, it finds extensive application in a number of fields, from courtroom arguments to policy drafting and legislation to the design of automated vehicles.
The trolley problem holds a specific significance for the automation industry, among others. Automation, by its very nature, requires decisions to be predetermined, pre-coded in the form of predefined algorithms. A predefined algorithm implies a predefined course of action, which directly translates into the requirement of sticking to a precise ethical code. And sticking to an ethical code, in a programmed way, isn't easy in real-world situations. What should a self-driving car do when forced to choose between hitting an old man and a young man? Five people or one bystander? Traffic rules or car safety? These are some of the difficult choices AI designers face while designing automated devices.
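The difficulty of hard-coding such a choice can be made concrete with a small sketch. Everything below is hypothetical and purely illustrative (the `Outcome` class and `choose` function are invented for this post, not taken from any real vehicle software); it merely shows how the two moral frameworks discussed above translate into two different pieces of code:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A possible course of action and its predicted consequences."""
    action: str
    expected_deaths: int
    requires_active_swerve: bool

def choose(outcomes: list[Outcome], consequentialist: bool) -> Outcome:
    """Pick an outcome under one of the two moral frameworks.

    A consequentialist policy simply minimises expected deaths.
    A categorical policy refuses any *active* intervention that kills,
    preferring inaction whatever the cost.
    """
    if consequentialist:
        return min(outcomes, key=lambda o: o.expected_deaths)
    passive = [o for o in outcomes if not o.requires_active_swerve]
    if passive:
        return passive[0]
    # No passive option exists; fall back to minimising harm.
    return min(outcomes, key=lambda o: o.expected_deaths)

# The original trolley dilemma, encoded as two outcomes:
dilemma = [
    Outcome("stay on track", expected_deaths=5, requires_active_swerve=False),
    Outcome("take the diversion", expected_deaths=1, requires_active_swerve=True),
]
```

The point of the sketch is that the programmer must commit to one branch in advance: `choose(dilemma, consequentialist=True)` takes the diversion, while `choose(dilemma, consequentialist=False)` stays on track, and nothing in the code itself can say which commitment is right.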
Some MIT students and academics recently came together to build a simulation platform for this issue, and the result was the 'Moral Machine'. The Moral Machine presented users with a set of hypothetical situations representing ethical dilemmas and required them to make choices, and the results were even more unsettling.
The opinions of the users were as diverse as they would have been 2,000 years ago.
People from various backgrounds made various choices: people from Eastern countries largely chose to save the innocent, people from Western and American countries preferred inaction, and people from Latin American countries chose to save the young and the able. People generally chose to save humans over animals, and lives over property. Social sensitisation played a role, but not uniformly.
The point is, individual opinion varies a great deal when it comes to making ethical choices, choosing between holding on and letting go, deliberating over the meaning and purpose of life and human action. Does that mean the design of artificial intelligence shouldn't have a uniform code of conduct either? Should AI and automation ethics be allowed to vary from place to place and society to society? And, importantly, in this era of globalisation, can we even afford such differences, when geographical and societal distance has proved to be little more than an illusion?
This particular debate, between consequentialism and categorical moral reasoning, has been alive probably since the time of the ancient Greeks, and we still aren't over it. That it has persisted so long certainly signifies its difficulty, but more than that, it signifies its continued relevance to society. And now, in the wake of automated devices and artificially intelligent computers, and the multitude of policy and legal changes they call for, it is more relevant than ever before.
Read the second part here.
This post first appeared here.
ABOUT THE AUTHOR
ANSHUMAN SAHOO
‘Passionate!’ That’s the only word he uses to describe himself. Questioning assumptions. Challenging hypocrisies. Making the planet a better place to live in. Can be found at www.anshumansahoo.com.