Posted in AI Law

Part II: So, who is liable?

In the previous post (Part I: Automation & Ethics: Applying Trolleyology), we introduced this article's central theme: the inherent ethical dilemma in the trolley problem and its relevance to the current scenario of automated devices and self-driving cars. We examined certain situations, including MIT's Moral Machine, and our discussion was primarily ethical. In this post, we'll carry that discussion forward from the ethical to the legal, examining the significant legal implications of the issues discussed beforehand.

We discussed ethics in the previous post and observed that people take varying stands when it comes to categorising what exactly is ethical and what is unethical. Despite these personal biases and differences of opinion, every society holds certain ethical standards and enforces them by punishing the unethical. Theft is punished, trespassing is punished, defamation is punished; killing is punished too, be it of five workmen or a single one. Confusion over moral reasoning is no excuse in the eyes of the law, and the law will punish what it 'considers' to be wrong.

But what if AI commits a wrong? Whom do we hold responsible when an artificially intelligent device or a self-driving car causes damage or injury? Will the coder, designer, or manufacturer be liable? Or the owner? Or the person who customised it? Big question, eh.

Legal personhood of AI

To examine the legal liability of AI, we can start by asking whether AI can have a legal personality in the first place. Can an artificially intelligent device or robot be considered a juristic person in the eyes of the law? If not, what would its legal status be? Can it still be treated merely as movable property despite possessing intelligence and the ability to learn things its own way?

The logical starting point is to consider whether AI falls within the definition of mere property, like a watch or a remote controller. When it comes to products, the manufacturer is traditionally held liable under the principle of strict liability. With recent developments, however, there has been a shift in the judicial approach: liability is imposed only where the defendant manufacturer or designer could reasonably have foreseen the damage or has failed to take reasonable care.

Following this line of reasoning, we can see that the manufacturer or designer of an artificially intelligent device can rarely foresee the course of action its creation will take, and hence should be exempted from strict liability. AI, then, does not fall strictly within the definition of property per se.

The next point of examination is whether AI could be a person, the option at the opposite end of the spectrum. This requires little or no examination, in my opinion. Nearly everyone would agree that although artificially intelligent devices possess some degree of intelligence, they can in no way be considered 'equivalent' to humans.

So AI doesn't qualify as a human, and neither does it fall within the categorisation of property. What's left? A middle way. The Romans invented one when confronted with the question of whether slaves were persons. They didn't want to confer complete personhood upon slaves, but they also understood that slaves had minds of their own and wouldn't always follow the master. So they created a new legal status: 'quasi-persons'.

Imagine a sliding scale with humans like you and me at one end; non-living things like trees, pens, and pencils at the other; and corporations and other juristic persons somewhere in the middle. AI also finds a place somewhere in that middle. Where exactly? That's a difficult question to answer objectively. Tests like the Legal Turing Test have been proposed to address it, but they falter in practical application. The best bet is to consider the issue on a case-by-case basis, depending on facts such as the intention of the designer, the foreseeability of the damage, et cetera.

Legal liability of AI

Now that we have an answer of sorts regarding the legal personhood of AI, we can proceed to discuss the legal liability of artificially intelligent devices. Gabriel Hallevy of Israel makes the job easier for us with his work. A little background for non-law readers: criminal liability has two basic prerequisites, mens rea (the criminal intent, or guilty mind) and actus reus (the criminal act). In other words, a person can be held criminally liable only when both elements are present: he has criminal intent and commits an act to actualise that intent.

These elements can combine in a number of ways, and each combination calls for a different degree of liability. Of the many, Hallevy points out three models that can apply to AI. The first is perpetration via another: when a mentally deficient person or an animal, which inherently lacks mens rea and is hence innocent, commits a crime under the instructions of someone else, the instructor is held liable; an AI could be treated as just such an innocent agent. The second is the natural probable consequence model, under which the programmer or user is held liable if the offence was a natural and probable consequence of the AI's programming or use, one they could reasonably have foreseen. The third is direct liability, and this one is special: under this set of circumstances, the AI itself is liable, not the owner.
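To make the distinction between the three models concrete, here is a minimal sketch in Python of the decision logic described above. It is only an illustration, not a statement of any jurisdiction's law; the scenario fields, names, and the order of the checks are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """A toy model of an AI-related offence (all fields are assumptions)."""
    instructed_by_human: bool    # did a human direct the AI to act?
    foreseeable_by_maker: bool   # was the harm a natural, probable consequence?
    ai_acted_autonomously: bool  # did the AI form and execute the 'intent' itself?

def liable_party(incident: Incident) -> str:
    """Map an incident to one of Hallevy's three liability models (sketch only)."""
    if incident.instructed_by_human:
        # Perpetration via another: the AI is an innocent agent, like an
        # animal acting on instructions; the human instructor is liable.
        return "instructor"
    if incident.foreseeable_by_maker:
        # Natural probable consequence: the programmer or user could
        # reasonably foresee the offence arising from the AI's use.
        return "programmer or user"
    if incident.ai_acted_autonomously:
        # Direct liability: mens rea and actus reus are attributed
        # to the AI itself.
        return "the AI itself"
    return "undetermined"

# Example: a self-driving car causes harm with no human instruction
# and no foreseeable defect.
print(liable_party(Incident(False, False, True)))  # -> "the AI itself"
```

Note that the sketch checks the models in order, reflecting the intuition that human direction or foreseeability, where present, displaces the AI's own liability; whether courts would actually order the inquiry that way is an open question.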

AI liability is no longer just a theoretical discussion or philosophical deliberation. Way back in 2008, the Indian judiciary faced the case of Avnish Bajaj v. State (famously known as the Baazee.com case), wherein the court recognised the automation involved in code and software for which the owner could not be held liable. Time and again, courts across the globe have come across such cases and admitted that, unfortunately, certain questions aren't ripe for a decision yet.

So if, in certain cases, the AI is liable to the exclusion of all other stakeholders, including the owner himself, the next big question arises: whom do we punish? Against whom do we pass judgment? Whom do we accuse? The conventional theories of punishment obviously aren't going to apply to AI; we can hardly expect retribution or reform from punishing an AI system. So, whom to punish?

Well, nobody knows the answer for now.

This post first appeared here.


ABOUT THE AUTHOR

ANSHUMAN SAHOO

‘Passionate!’ That’s the only word he uses to describe himself. Questioning assumptions. Challenging hypocrisies. Making the planet a better place to live in. Can be found at www.anshumansahoo.com.

A project by Law Matters Centre for Research, Education, and Social Action (LaMCRESA).
