The term “Artificial Intelligence” (“AI”) signifies a system designed to take cues from its environment and, on the basis of such inputs, assess risks, solve problems and make predictions. Recent major developments in the field of Artificial Intelligence have the potential to significantly affect various human rights, specifically the right against discrimination. An AI system is susceptible to reflecting persisting biases, which may lead to the violation of the universally guaranteed right to non-discrimination. Human rights are generally governed by international treaties and conventions, which are ratified by member nations. Furthermore, most nations enact their own domestic laws prohibiting unfair treatment of any class of individuals. However, when the discrimination is carried out by an AI, enforcing equality falls into a grey area of law. The aim of this article is to analyse how and why such discrimination occurs in the context of workforce recruitment and the prediction of future criminality.

Discrimination is broadly classified into direct and indirect. The former occurs when an individual is denied an opportunity on grounds such as sex or religion, in violation of explicit laws or rules prohibiting such behaviour. Indirect discrimination, on the other hand, takes place when laws or rules seem neutral on the surface but result in discrimination in practice. Human rights treaties generally do not require “intention” as an essential element of discrimination; unintentional discrimination is equally prohibited by these treaties. While the factual scenarios discussed here do not indicate a possibility of direct discrimination, there remains a possibility of indirect discrimination through Artificial Intelligence systems. While referring to ‘discrimination’ in this piece, the author shall limit her analysis to indirect discrimination alone.
Understanding the process: The Alteration Algorithm
While programming an algorithm, a programmer may promote some kind of discrimination without realizing it: even where discrimination is not intended, it may nevertheless be the result. To understand how such discrimination might occur, it is important to examine the term ‘algorithm’ in the context of an AI system. An algorithm can be defined as a “set of instructions or commands to perform a certain task”. The algorithm enables the AI to analyse data from past cases and use that data to predict the possible results in new cases. Here, “predict” entails that once an algorithm is provided certain data, it becomes capable of inferring other patterns or characteristics. Despite the progress in data processing that algorithms have brought about, the concerns they raise may outweigh any advantage. If the machine is provided biased data, then the results inferred by the algorithm shall also be biased. Moreover, since algorithms are created by humans, it is almost inevitable that the biases held by the programmer are reflected in the algorithm, whether intentionally or unintentionally. When seeking information from job candidates, algorithms acquire data which include non-anonymized information relating to personal characteristics such as religion, race and sexual orientation. Once such data is acquired, the algorithm can be altered or designed in a manner which enables it to exclude candidates with particular characteristics, thus violating the basic requirements of fair treatment.
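The claim that biased input data yields biased inferences can be illustrated with a deliberately simple sketch. All resumes, words and hiring labels below are invented for illustration; real screening systems are far more complex, but the mechanism is the same: a model trained on skewed historical decisions learns weights that reproduce the skew.

```python
# Toy illustration (all data invented): a scoring "algorithm" learns word
# weights from past, biased hiring decisions and reproduces that bias.
from collections import Counter

# Historical resumes and outcomes; the hired set over-represents one style
# of language, mirroring a skewed applicant pool.
past = [
    ("executed captained engineering", True),
    ("executed led projects", True),
    ("collaborated supported engineering", False),
    ("collaborated volunteered projects", False),
]

def train(past):
    """Weight each word by how often it appears in hired vs. rejected resumes."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in past:
        (hired if was_hired else rejected).update(text.split())
    return {w: hired[w] - rejected[w] for w in set(hired) | set(rejected)}

def score(weights, resume):
    return sum(weights.get(w, 0) for w in resume.split())

weights = train(past)
# Two equally qualified candidates, differing only in word choice:
a = score(weights, "executed engineering projects")
b = score(weights, "collaborated engineering projects")
assert a > b  # the model penalizes the word style absent from past hires
```

No protected characteristic appears anywhere in the data; the bias enters entirely through which historical examples were labelled as successes.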
Instances of Discrimination by a biased AI
It is also possible for Artificial Intelligence systems to be biased as a result of incomplete or unrepresentative data, and a classic example of the same can occur during criminal data collection. If a crime database is dominated by a certain race, gender or a specific set of features, then the algorithm is likely to create incorrect matches for persons of that race or gender. For example, the Swedish police introduced a project for deporting individuals without authorization, which included document checks on public transportation. It is contended that non-white Swedes were disproportionately targeted by the police, and the consequence of such a practice may be the creation of a biased crime database. If an AI system were trained on a database containing such biased data, the algorithm could well reproduce the bias. A well-known example of discrimination by AI arose when Amazon used a computer program to screen candidates for engineering jobs. The program, however, excluded women candidates from consideration. It had been fed data from resumes submitted to Amazon over the preceding decade, most of which came from male applicants. Consequently, the system favoured applicants who used “masculine language” and prioritized male applicants over female applicants. Another important example is that of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). COMPAS is an algorithm which assesses a questionnaire consisting of 137 questions related to involvement in criminal activities and interpersonal relationships. It is used to predict a variety of outcomes, providing estimates for recidivism, violence and so on. The algorithm has been criticized for placing persons of colour at a disadvantage and for favouring persons of lighter complexion.
This indicates that certain data can act as “proxies” for a protected characteristic, even if that characteristic has never been entered as a data point.
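The proxy mechanism can be made concrete with a minimal, entirely hypothetical example. Here a made-up postcode field stands in for any feature that happens to correlate with group membership; the protected characteristic itself is discarded before “training”, yet the predictions still split along group lines.

```python
# Toy illustration of a "proxy" feature (all values invented): the protected
# characteristic is never shown to the model, yet a correlated field lets it
# reproduce the same distinction.
from collections import defaultdict

data = [
    # (postcode, protected_group, past_decision) -- hypothetical records
    ("A1", "group_x", 0), ("A1", "group_x", 0), ("A1", "group_x", 0),
    ("B2", "group_y", 1), ("B2", "group_y", 1), ("B2", "group_y", 1),
]

# "Train" only on postcode -> decision; the protected field is discarded.
decisions = defaultdict(list)
for postcode, _group, decision in data:
    decisions[postcode].append(decision)
predict = {pc: round(sum(ds) / len(ds)) for pc, ds in decisions.items()}

# The predictions nevertheless separate the two groups perfectly, because
# in this data the postcode is a perfect proxy for group membership.
assert predict["A1"] == 0 and predict["B2"] == 1
```

This is why merely deleting the protected column from a dataset does not, by itself, prevent indirect discrimination.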
Legal Instruments to enforce equal treatment: A distant dream?
The most relevant legal instruments for combating algorithmic biases are non-discrimination law and data protection law. However, in India, the Personal Data Protection Bill, 2019 is not sufficient to overcome these shortcomings of Artificial Intelligence. Data protection law covers personal data within its ambit; however, decisions made by algorithms are not within its purview, and the Bill has not laid down any provisions requiring the actions taken by an AI to be explainable. Predictive models also fall outside the scope of data protection laws, as the results of these models do not relate to identifiable individuals. Consequently, it is important to introduce new legislation and regulations; however, such legal changes can only be implemented slowly, in contrast to the rapid development of technology.

Article 15 of the Constitution of India prohibits vertical as well as horizontal discrimination. This means that the State as well as the citizens of the country are prohibited from discriminating against any citizen on the grounds laid down in clauses (1) and (2) respectively. The question that therefore arises is whether an AI can discriminate between two citizens, thereby invoking clause (2) of Article 15. As the previous sections have shown, the programming of AI systems is such that their functioning may very well result in the discrimination of a citizen. In India, a significant number of companies employ AI systems; in this context, can Article 15 be invoked if the functioning of an AI system results in discrimination? It has been suggested that Article 15(2) can, in fact, be invoked in such instances, as the term ‘shop’ under clause (2) has been extended to include within its ambit corporations, companies or individuals offering goods or services in India.
One of the goals of instruments such as the General Data Protection Regulation and Convention 108 is to protect persons against unfair or illegal discrimination. To this end, these rules lay down provisions relating to certain forms of automated individual decision-making. Other measures include the Algorithmic Accountability Act introduced in the US Congress and the algorithmic transparency bill passed by the New York City Council. Therefore, in order to combat bias by AI systems, India too needs to enact legislation aimed directly at prohibiting discrimination by algorithm.
Who is to be blamed?
Gabriel Hallevy, in his article ‘The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control’, has put forth three models of criminal liability. Under the Perpetration-via-Another Liability Model, the AI is an innocent agent that does not possess human attributes; it is treated as an instrument used by the real perpetrator to commit the offence. Under the Natural-Probable-Consequence Liability Model, it is assumed that the programmers or users involved in the daily activities of the AI do not intend to commit a crime with its assistance. For this model to apply, the programmers or users are merely required to be aware that an offence would be the natural and probable result of their actions. This model, however, does hold the AI liable where it did not act as an innocent agent. Lastly, the Direct Liability Model, as the name suggests, provides for the direct liability of the AI agent and places less importance on the programmers or users. According to this model, there is no reason not to attribute criminal liability to an AI agent if it fulfils the requirements of actus reus and mens rea.

It has been argued by many that AI agents should have some form of legal personality, considering the large role they play in society. In this context, AI agents have often been compared to juridical persons such as corporations. However, an important distinction between a company with legal personality and an AI agent with legal personality is that a company’s actions have to be carried out by its representatives, meaning that decisions are ultimately taken by human beings. An AI agent, on the other hand, acts autonomously depending on how it has been programmed; there is no representative which acts on its behalf.
The Uncertain future of AI
If the discriminatory patterns of algorithms are not removed, the use of AI will cause more harm to society than benefit. Algorithms are more often than not believed to be bias-free. However, as has been shown, this is not the reality, and there will always exist a possibility that an algorithm is discriminatory. Further, there is no provision for accountability if and when such discrimination by an Artificial Intelligence system occurs. There is a pressing need to spread awareness of the discriminatory risks posed by AI in the criminal justice system, in employment processes and elsewhere. In addition to policy changes and awareness, it is also important to design algorithms which do not take into consideration any protected characteristics, such as race, religion and sexuality. Statistical methods can help identify the features which act as proxies for these characteristics, and such features can then be controlled to remove biases from the algorithm. Lastly, it is important to ensure transparency: information related to the functionality of AI systems is often not revealed by developers on grounds of intellectual property law. It is therefore extremely important to strike a balance between transparency and intellectual property protection. Transparency can be introduced by enacting legislation directly aimed at keeping a check on discrimination caused by algorithms, as has been done in Europe and in certain parts of the United States. While it has been suggested that an AI algorithm can never be completely unbiased, certain policy solutions can reduce such bias to the greatest extent possible. Such solutions include bringing more diversity into tech development and drawing new perspectives from disciplines outside of technology.
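The statistical identification of proxy features mentioned above can be sketched in a few lines. This is a minimal illustration on invented data: one candidate feature (a hypothetical postcode zone) correlates with the protected group while another (years of experience) does not, and a simple correlation threshold flags the former as a potential proxy. Real audits use more robust measures, but the idea is the same.

```python
# Minimal sketch (invented data): flag features that correlate strongly with
# a protected characteristic as potential proxies.
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

protected = [0, 0, 0, 1, 1, 1]  # hypothetical group membership labels
features = {
    "postcode_zone":    [0, 0, 1, 1, 1, 1],  # tracks the group -> proxy
    "years_experience": [3, 7, 5, 4, 6, 5],  # roughly independent of it
}

# Flag any feature whose correlation with the protected label is strong.
proxies = [name for name, vals in features.items()
           if abs(pearson(vals, protected)) > 0.5]
assert proxies == ["postcode_zone"]
```

Once flagged, such features can be dropped, transformed, or constrained during training so that the model cannot lean on them as stand-ins for the protected characteristic.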


Any queries can be addressed via mail at [email protected] (kindly mention “Query – Blog” in the subject line).

Maharashtra National Law University Mumbai Post Box No: 8401 Powai, Mumbai – 400 076 Tel: 022-25703187, 022-25703188 Email: [email protected]
