Introduction
In August 2017, the Union Ministry of Commerce and Industry constituted an Artificial Intelligence Task Force to embed AI in the legal, political, and economic spheres, with the aim of charting a considered path towards India’s target of becoming the world’s leading AI-rich economy. The task force’s report lays out a roadmap for how AI can drive development not only in the economy but also in other socio-economic domains, and it identifies areas of particular relevance to India such as Fintech, Healthcare, Education, and Retail/Customer Engagement. However, although the report identifies factors supporting AI adoption and names the government ministries and agencies that could facilitate this development, it does not adequately address the social, ethical, and technological issues that underpin the use of AI.
Currently, there is no legal framework in India for ascertaining the criminal liability of AI entities. With the increasing use of AI-based products and services in the country, there is a need for a structured mechanism to determine the liability of an AI entity, its maker, its seller, and its user. In this article, the authors deal with the actus reus element as it concerns AI.
AI and the Actus Reus
We usually ascribe criminal liability to an act done by an individual; generally, acts done by natural persons attract penalties or imprisonment. It therefore becomes difficult to ascertain liability in the case of AI, because it can be considered neither a natural person nor has it been accorded the status of an artificial person. One recourse is to attribute the crime committed by an AI to a human, because only then can the elements of the crime (i.e., actus reus and mens rea) be fulfilled. We can borrow the master-servant principle from the law of torts to attribute liability: when a servant commits an offence in the course of employment, the master is treated as having committed it himself and is obliged to answer the indictment (respondeat superior). The rationale is to oblige the master to ensure that his servants do not violate the law. In the case of AI, there are multiple actors with different obligations, which can vary with the circumstances, as discussed below.
Firstly, as to the liability of the producer, that is, the coder or maker of the AI entity, he can be made liable to the extent that he created an inherently faulty AI that does not obey its master’s or user’s will, or where he programmed the AI to do something criminal, in which case he alone can be made liable for that act. The producer’s responsibility attaches to the making of the AI’s software and hardware because he knows all of its technical and mechanical specifications and is the mastermind behind its education and training. The code he writes is the brain of the AI; it is the core of, and key to, everything the AI is capable of doing, and it therefore gives the producer the power to influence the AI in any manner he wants.[1] If a certain act is committed because of an inherent malfunction, it is the producer and his deputies who are at fault and who should be held liable.
But one can conceive of a case where the AI commits a wrong because of a latent defect that was not foreseeable by the maker or coders when they wrote the code. Who should then be held responsible? Can the burden be cast upon the user or supervisor to foresee such a malfunction? Even if the owner is primarily associated with the user or supervisor, it is too early at this point in the analysis to rule out the owner as a probable defendant; in deciding liability, the owner will be one of the indispensable actors in future cases.
The notion that an outsider may be a person who influences the AI’s decision-making in some way may seem self-evident in determining criminal liability, but there must be proof to support the claim. For instance, a person X has an AI-operated servant robot that serves chilled beverages to guests on command. Another person Y, a bitter enemy of X, hacks into the robot and makes it put poison in one of the beverages, and a guest dies. Who will be held liable for the death of the guest? Here the outsider has taken control of the AI and interfered with its decision-making, but the claim will be hard to prove because Y may have erased all traces of the hacking and of his influence over the robot.
Another possibility is an outsider who creates an environment in which the AI misreads the situation in a way that results in a wrongful act. Such an outsider could be a hacker who controls the AI and has the means to change its code so that it acts in a certain way, or a third party who presents a new idea to the AI. For example, the ‘Random Darknet Shopper’, a bot programmed to shop on the darknet for an art exhibition, went out of control and started buying ecstasy pills along with other illegal items, without any instruction from its programmers. Another example is ‘Tay’, a chatbot developed to learn how older teenagers converse, which was shut down just a few hours after launch because it began to display abusive, sexist behaviour.
Secondly, regarding the liability of the seller: if he makes no changes to the AI programme and sells it without fraud, that is, he gives a clear description of the programme and its terms of sale, then he will not be liable.
Thirdly, there is the liability of the user, master, or controller, the person who operates the AI. If the AI entity commits an offence, we must then ask whether the chain of causation has been broken by the AI. If the user or operator had criminal intent and programmed the AI to perform the actus reus, he can be made liable.
The user or operator should also be held liable if he apprehends that the AI entity has criminal intention, or that it may commit a crime in the future, and still takes no measures to prevent such an act. But if he successfully proves in court that the offence committed by the AI entity was not the result of his conduct or his breach of duty, that he could not reasonably have foreseen such an act, and that the AI made the decision to commit it on its own, then the chain of causation between him and the AI entity breaks. A dilemma arises as to what is “reasonably foreseeable” and what qualifies as the “proximate cause”, because this will depend on the jurisprudence and customs of the society. For example, in a country like the USA, where people are technologically advanced, the threshold of reasonable foreseeability may be higher than in a country like India, where the general population is less educated and less technologically advanced. Judicial interpretation differs from country to country; it usually tracks the prudence of the ordinary person there, which in turn depends on the conditions in which that person was brought up.
There is another conundrum in ascertaining the chain of causation. If an AI entity commits an offence, it is almost impossible to determine whether it did so of its own intention or at the will of its master, because the AI may have arrived at the act itself as it learns through machine learning or reinforcement learning. For example, if an AI-driven car kills a couple crossing the road, it becomes very difficult to ascertain whether this was the master’s will or the AI’s response to a stimulus. This poses a serious challenge to the use of AI in automobiles in its present form.
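To make this point about learned behaviour concrete, the following is a minimal, purely illustrative sketch (not drawn from the article or any real system) of tabular Q-learning in Python, with an entirely hypothetical environment and reward signal. It shows why a reinforcement-learning agent’s eventual action is derived from values accumulated through trial and error rather than from any instruction the programmer wrote, which is precisely what makes it hard to trace a specific act back to a specific human will.

```python
# Hypothetical sketch of tabular Q-learning: the learned policy emerges
# from rewards observed during training, not from an explicit rule coded
# by the programmer.
import random

N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.9, 0.2, 500

# The Q-table starts at zero: no action is prescribed in advance.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    next_state = (state + action) % N_STATES
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for _ in range(EPISODES):
    state = 0
    for _ in range(20):
        # Epsilon-greedy choice: the agent sometimes explores at random.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # The value estimate is updated from experience, not hand-written rules.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# The action ultimately chosen reflects accumulated experience, which is why
# it cannot be traced to a single human instruction.
print("Learned action in state 0:", max(range(N_ACTIONS), key=lambda a: Q[0][a]))
```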
To determine the liability of the user, we can borrow the full control test[2] from the law of torts, under which we analyse whether the master has control over the activities of the servant. If the master controls the manner in which the work is to be done, then the master has full control over the servant and shall be vicariously liable. We can also apply other tests, such as those developed for corporate criminal liability, to trace the culpability of the user.
Conclusion
In India, we need lawyers, jurists, programmers, and engineers to come together and design a model framework for determining the criminal liability of AI entities. The most efficient measure could be to impose an absolute supervisory duty to oversee an AI’s actions and prevent it from committing wrongful acts. Even so, outsiders and third parties have means of manipulating AI, which weakens this basis of liability, since it can be invoked only when the act was foreseeable from the defendant’s position; we can hold liable only those persons who could have foreseen such harm being done through the use of AI. Alternatively, another option, which may seem radical and regressive but could be effective and efficient for the time being, is to ban or restrict AI technology in its current form and wait until an effective framework to govern such entities can be prepared.
[1] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (OUP 2014) 35-37.
[2] Yewen v Noakes (1880) 6 QBD 530; Dharangadhara Chemical Works Ltd v State of Saurashtra AIR 1957 SC 264.
ABOUT THE AUTHORS
Ashima Joshi
Ashima is a third-year student at National Law University, Odisha.
Mudit Burad
Mudit is a fourth-year student at National Law University, Jodhpur.