
Part II: So, who is liable?

In the previous post (Part I: Automation & Ethics – Applying Trolleyology), we introduced this article: the inherent ethical dilemma in the trolley problem and its relevance to the current scenario of automated devices and self-driving cars. We discussed certain situations, including MIT's Moral Machine, and our discussion was primarily ethical. In this post, we carry that discussion forward from the ethical to the legal, examining some of the significant legal implications of the issues discussed so far.

We discussed ethics in the previous post and observed that people take varying stands when it comes to categorising what exactly is ethical and what is unethical. Despite these personal biases and differences of opinion, every society has certain ethical standards, and it enforces them by punishing the unethical. Theft is punished, trespassing is punished, defamation is punished; killing is punished, too, be it of five workmen or a single one. Confusion over moral reasoning is not a valid excuse in the eyes of the law, and the law will punish what it ‘considers’ to be wrong.

But what if AI commits a wrong? Whom do we hold responsible when an artificially intelligent device or self-driving car causes damage or injury? Will the coder/designer/manufacturer be liable? Or the owner? Or the person who customised it? A big question, eh?

Legal personhood of AI

To examine the legal liability of AI, we can start by examining the possibility of AI having a legal personality in the first place. Can an artificially intelligent device or robot be considered a juristic person in the eyes of the law? If not, what would its legal status be? Can it still be treated merely as movable property despite the fact that it has intelligence and the ability to learn things in its own way?

The logical starting point would be to consider the possibility of AI falling within the definition of mere property, like a watch or a remote control. When it comes to products, the manufacturer is generally held liable under the principle of strict liability. However, with recent developments, there has been a shift in judicial approach, and liability is imposed only in cases where the defendant manufacturer/designer could reasonably have foreseen the damage or has failed to take reasonable care.

Following this line of reasoning, we can see that the manufacturer or designer of an artificially intelligent being or device can rarely foresee the course of action its creation will take, and hence should be exempted from strict liability – which means AI does not fall strictly within the definition of property per se.

The next point of examination is whether AI could be a person – the option at the opposite end of the spectrum. This requires little or no examination, in my opinion. Every one of us would agree on this point: although artificially intelligent devices possess some degree of intelligence, they can in no way be considered ‘equivalent’ to humans.

So AI does not qualify as a human, and neither does it fall within the category of property. What is left? A middle way. The Romans invented one when confronted with the question of whether slaves were persons. They did not want to confer complete personhood upon slaves, but they also understood that slaves had minds of their own and would not always follow their masters. So they invented a new legal status: ‘quasi-persons’.

On a sliding scale, with humans like you and me on one end, non-living things like trees, pens, and pencils on the other, and corporations and other juristic persons somewhere in the middle, AI also finds a place somewhere in the middle. Where exactly in the middle? Well, that is a difficult question to answer objectively. Some tests, like the Legal Turing Test, have been proposed to address it, but they tend to fail when it comes to practical applicability. The best bet would be to consider the issue on a case-by-case basis, depending on the facts of the case, such as the intention of the designer, the foreseeability of the damage, et cetera.

Legal liability of AI

Now that we have an answer of sorts regarding the legal personhood of AI, we can proceed to discuss the legal liability of artificially intelligent devices. Gabriel Hallevy from Israel makes the job easier for us with his work. A little background for non-law people here: criminal liability has two basic prerequisites – mens rea (the criminal intent, or guilty mind) and actus reus (the criminal act). In other words, a person can be held criminally liable only when both elements are present – that is, he has criminal intent and commits an act to actualise that intent.

There can be a number of instances involving various combinations of these elements, and each calls for a different degree of liability. Of those, Hallevy points out three possible scenarios applicable to AI. The first is perpetration via another: the situation where a mentally deficient person or an animal, who inherently lacks mens rea and is hence innocent, commits a crime under the instructions of someone else; here, the instructor is held liable. The second is natural probable consequence, which turns on whether the programmer could naturally have foreseen the probable consequence of the AI's actions, and assigns liability accordingly. The third is direct liability, and this one is special: under this set of circumstances, the AI itself is liable – not the owner.
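Purely as an illustration of how these three models carve up the space of facts – and emphatically not as a legal tool – here is a minimal sketch in Python. Every name in it (the CaseFacts fields, the classify_liability function, and the ordering of the checks) is hypothetical and invented for this post.

```python
# A hypothetical, highly simplified formalisation of Hallevy's three
# liability models. Field names and the order of the checks are invented
# for illustration only; real legal analysis is nothing like this mechanical.

from dataclasses import dataclass

@dataclass
class CaseFacts:
    instructed_by_human: bool        # was the AI merely executing someone's instructions?
    harm_foreseeable_by_maker: bool  # could the programmer/designer foresee the harm?
    ai_acted_autonomously: bool      # did the AI form and execute the course of action itself?

def classify_liability(facts: CaseFacts) -> str:
    if facts.instructed_by_human:
        # Model 1: perpetration via another – the instructing human is liable.
        return "perpetration via another: the instructor is liable"
    if facts.harm_foreseeable_by_maker:
        # Model 2: natural probable consequence – liability follows foreseeability.
        return "natural probable consequence: the programmer/designer is liable"
    if facts.ai_acted_autonomously:
        # Model 3: direct liability – the AI itself is treated as the liable party.
        return "direct liability: the AI itself is liable"
    return "no clear model applies on these facts"

print(classify_liability(CaseFacts(False, False, True)))
# -> direct liability: the AI itself is liable
```

The sketch only makes the structural point: the three models are mutually exclusive classifications driven by who instructed, who could foresee, and who acted.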

AI liability is no longer just a theoretical discussion or philosophical deliberation. Back in 2008, the Indian judiciary faced the case of Avnish Bajaj v State (famously known as the Baazee.com case), wherein the court recognised the automation involved in code and software for which the owner could not be held liable. Time and again, courts across the globe have come across such cases and have admitted that, unfortunately, certain questions are not yet ripe for a decision.

So, if the AI is liable in certain cases to the exclusion of all other stakeholders, including the owner himself, the next big question is: whom do we punish? Against whom do we pass judgment? Whom do we accuse? The conventional theories of punishment obviously are not going to apply to AI – we cannot expect retribution or reform from punishing an AI system. So, whom do we punish?

Well, nobody knows the answer for now.

This post first appeared here.




Part I: Automation & Ethics – Applying Trolleyology

Consider the following situation:

You are a trolley driver, driving a trolley on a nice sunny day, when you notice five men working on the street in front of you, in your way. You blow the horn multiple times, but then notice that they are all wearing earphones while working and hence cannot hear you approaching. Frustrated, you reach for the brake, and to your shock, you find that it is not working at all. Your mind goes blank, and you are frightened at the mere thought of what is going to happen to the five workmen when the trolley hits them.

However, you suddenly notice that there is a small diversion just before the workmen onto which you can divert your trolley easily and quickly, thereby saving the five workmen! What a relief!

But the relief is only momentary – as you consider taking the diversion, you find that the diversion is not clear, either. There is one man working on it, and if you turn the steering to take the diversion, his death is certain.

There you sit, on your speeding trolley with a failed brake, with two roads to choose from: one that will kill five men, and another that will kill one man – but only through an active choice on your part.

What’ll you do?

The above hypothetical problem presents a millennia-old debate: the same as that between pro-life and pro-choice activists, between the supporters and critics of utilitarianism, and also, in some way, between public interest and private rights. The question here, in one sense, tests utilitarianism at its extremity, by pushing its pleasure-and-pain calculus to questions of survival and death – the death here being caused deliberately.

This particular hypothetical situation is a philosophical thought experiment introduced by Philippa Foot in 1967 – and the two probable courses of action signify two different modes of moral reasoning. The course of action that saves the five men by killing one subscribes to what is called ‘consequentialist moral reasoning’. The other viewpoint, called ‘categorical moral reasoning’, holds that killing even a single person is wrong, no matter how grave the situation that dictates such killing.

What would you do? Save the five workmen by diverting the trolley, right?

Statistically, when asked the above question, people tend to subscribe to consequentialist moral reasoning and go for saving five lives at the cost of one. However, when the hypothetical situation is modified into one where the required degree of active involvement is significantly greater, people do seem to change their minds:

Suppose you are a doctor specialising in organ transplantation. One day, five road-accident victims in immediate need of organ transplants are brought to you. You have everything you need except healthy organs to transplant. Each of the five needs a different organ: one needs a liver, another a heart, another a kidney, and so on.

While pondering how to arrange five healthy organs in such a short span of time, you suddenly remember that there is a healthy person sitting in the waiting room who just came in for a routine check-up.

Would you kill the person in the waiting room so as to use his organs to save the five?

The above variation of the trolley problem was introduced by Judith Jarvis Thomson in 1985, and it sharpens the distinction between the two choices: the choice is NOT between killing five and killing one, but between ‘letting five die’ and ‘killing one’. Here, statistically speaking, most people find it difficult to stick to consequentialist moral reasoning – because even though we hold the notion that the greater good is the right thing to pursue, we also hold a deep-rooted conviction that a human being has certain basic, inalienable rights that cannot be snatched away under any circumstances.

The above set of problems highlighting ethical dilemmas in decision-making is informally, and loosely, termed ‘trolleyology’, or ‘the trolley problem’. Although originally a philosophical thought experiment, it finds extensive application in a number of fields, from courtroom situations to policy drafting and legislation to the design of automated vehicles.

The trolley problem holds specific significance for the automation industry, inter alia. Automation, by its very nature, requires things to be predetermined – pre-coded by way of predefined algorithms. And a predefined algorithm implies a predefined course of action – which directly translates into the requirement of sticking to a precise ethical code. Sticking to an ethical code in a programmed way is not easy when it comes to real-world situations. What should a self-driving car do when it must choose between hitting an old man and a young man? Five people or one bystander? Traffic rules or passenger safety? These are some of the difficult choices that AI designers face while designing automated devices.
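To make the point concrete, here is a minimal, purely illustrative sketch in Python of what ‘pre-coding’ an ethical rule might look like. Everything in it – the Outcome record, the two policy functions, and the example outcomes – is hypothetical and invented for this post; a real autonomous-driving stack is nothing like this simple.

```python
# A deliberately simplified, hypothetical sketch of how an ethical rule
# might be "pre-coded" into an automated decision. All names are invented
# for illustration; real autonomous-driving systems are far more complex.

from dataclasses import dataclass

@dataclass
class Outcome:
    path: str                   # e.g. "stay_on_course" or "divert"
    expected_casualties: int    # predicted harm if this path is taken
    requires_active_harm: bool  # does choosing it actively redirect harm onto someone?

def consequentialist_choice(options: list[Outcome]) -> Outcome:
    # Consequentialist rule: minimise total expected casualties,
    # regardless of whether the harm is actively caused.
    return min(options, key=lambda o: o.expected_casualties)

def categorical_choice(options: list[Outcome]) -> Outcome:
    # Categorical rule: never actively redirect harm onto someone;
    # prefer options that do not require active harm, even at greater cost.
    passive = [o for o in options if not o.requires_active_harm]
    return passive[0] if passive else min(options, key=lambda o: o.expected_casualties)

if __name__ == "__main__":
    trolley = [
        Outcome("stay_on_course", expected_casualties=5, requires_active_harm=False),
        Outcome("divert", expected_casualties=1, requires_active_harm=True),
    ]
    print(consequentialist_choice(trolley).path)  # -> "divert"
    print(categorical_choice(trolley).path)       # -> "stay_on_course"
```

Whichever policy function the designer wires in, the ethical stance is fixed at design time: unlike the human driver in the thought experiment, the car does not get to deliberate at the moment of crisis.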

Some MIT students and academics recently came forward to design a simulation platform for this issue, and the result was the ‘Moral Machine’. The Moral Machine presented users with a set of hypothetical situations representing ethical dilemmas and required them to make choices – and the results were, if anything, even more unsettling.

The users' opinions were as diverse as they would have been 2,000 years ago.

People from various backgrounds made various choices – people from Eastern countries largely went for sparing the innocent, people from Western countries and America leaned towards inaction, and people from Latin American countries went for sparing the young and the fit. People generally chose to save humans over animals, and lives over property. Social sensitisation played a role, but not uniformly.

The point is, individual opinion varies a great deal when it comes to making ethical choices – choosing between holding on and letting go, deliberating over the meaning and purpose of life and human action. Does that mean the design of artificial intelligence should not have a uniform code of conduct either? Should AI and automation ethics be allowed to vary from place to place and society to society? And importantly, in this era of globalisation, is it even possible to afford such differences, when geographical and societal distance has proved to be little more than an illusion?

This particular debate, between consequentialism and categorical moral reasoning, has probably been alive since the time of the Greeks – and we still are not over it. The fact that it has persisted so long certainly signifies its difficulty – but more than that, it signifies its continued relevance for society. And now, in the wake of automated devices and artificially intelligent computers – and the multitude of policy and legal changes they call for – it is more relevant than ever before.

Read the second part here.

This post first appeared here.


ABOUT THE AUTHOR

ANSHUMAN SAHOO

‘Passionate!’ That’s the only word he uses to describe himself. Questioning assumptions. Challenging hypocrisies. Making the planet a better place to live in. Can be found at www.anshumansahoo.com.