The Non-Discrimination Ombudsman’s observations on artificial intelligence’s effects on equality

The Non-Discrimination Ombudsman has responded to an enquiry from the Ministry of Social Affairs and Health on ethics in the era of artificial intelligence (AI). The enquiry is part of the “Ethics in the Era of Algorithms” report by the Ministry of Social Affairs and Health.

In the response, the Ombudsman discusses both the risks of discrimination related to the use of artificial intelligence and the potential of AI to promote the realisation of equality. The Ombudsman also highlights the need for proactive impact assessment and supervision in the use of artificial intelligence. As the use of various AI systems and algorithmic decision-making increases constantly, the significance of equality questions in their utilisation grows as well.

Digitalisation and use of AI have the potential to promote equality

Digitalisation, artificial intelligence and automated decision-making centrally involve the question of their effects on non-discrimination and equality. Digitalisation and the use of AI have the potential to promote equality: for example, by making decision-making more equal; by providing solutions that support the realisation of the rights of various groups, such as people with disabilities and older people; and, more generally, by identifying in data the factors essential for promoting equality and using them to direct various actions. At the same time, one must take into account AI’s potential to cause discrimination and, on a significant scale, to intensify the structural discrimination already present in societal structures and decision-making.

Authorities, employers and education providers have a statutory obligation to promote equality (sections 5–7 of the Non-discrimination Act, duties to promote), and they should therefore approach the utilisation of artificial intelligence specifically from the perspective of promoting equality. As an example: are investments in artificial intelligence made to ensure that people entitled to various social allowances know how to apply for them, or is AI focused on monitoring allowances claimed in violation of the rules?

Pursuant to section 5 of the Non-discrimination Act, authorities have a duty to promote equality and to evaluate the effects of their actions on equality. In its activities, an authority must aim to eliminate obstacles to the realisation of equality and to change circumstances that prevent it (Government proposal HE 19/2014 vp, p. 61).

The duty to promote equality requires effective assessment of equality impacts at the design and procurement stages of an AI system and throughout its use.

The use of artificial intelligence also involves risks of discrimination and negative effects on equality

The risks of discrimination posed by artificial intelligence and its potential negative effects on non-discrimination, equality and other human rights are discussed in several international and European reports (by UN institutions, the European Commission, the Council of Europe, and the European Union Agency for Fundamental Rights).

An AI system’s negative effects on equality, and especially discrimination through automated decision-making, can stem from many kinds of circumstances, such as biases in the data used or poorly selected features of the model. The more opaque and complex the operation of the AI or algorithm is, the more difficult it is to prevent discrimination and to detect and rectify discriminatory effects. Discrimination can also be indirect. Although even rule-based artificial intelligence can involve discrimination, preventing risks and detecting discrimination that has occurred is easier there than with “learning” AI. For this reason, continuous assessment of impacts on fundamental rights and equality must be adopted as a basic principle in the use of learning AI.
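The statement does not prescribe any particular testing method, but the continuous monitoring described above can in practice include simple group-level outcome checks. The sketch below is purely illustrative: the record fields ("group", "approved") and the idea of comparing approval rates between groups are assumptions for the example, not requirements drawn from the Ombudsman’s statement or from legislation.

```python
# Illustrative sketch only: monitoring automated decisions for group-level
# disparities in outcomes. A large gap is not proof of discrimination on its
# own, but it flags outcomes that should be examined further (for example,
# for indirect discrimination).
from collections import defaultdict


def approval_rates(records):
    """Compute the share of positive decisions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["approved"]:
            positives[record["group"]] += 1
    return {group: positives[group] / totals[group] for group in totals}


def parity_gap(rates):
    """Largest difference in approval rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)


if __name__ == "__main__":
    # Hypothetical decision log from an automated system.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    rates = approval_rates(decisions)
    print("Approval rates per group:", rates)
    print("Parity gap:", parity_gap(rates))
```

A check of this kind is only one narrow signal; as the statement stresses, it would need to be complemented by assessment of the data, the model’s features and the context of use.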

Negative effects of artificial intelligence on equality can also arise from how the use of AI is targeted and from which people or groups benefit from the AI system.

The effects of artificial intelligence on discrimination and equality must be evaluated during the design and procurement of AI systems, separately for each context of use. It is also essential to conduct continuous impact assessment during use. In view of section 5 of the Non-discrimination Act and sections 6 and 22 of the Constitution of Finland, authorities should not develop or use automated decision-making in a way that conflicts with the duty to safeguard fundamental and human rights or with the duty to promote equality.

Under the Non-discrimination Act, employers and education providers also have a duty to promote equality and must therefore, through effective measures, prevent any negative effects of artificial intelligence on equality (sections 6 and 7 of the Non-discrimination Act).

The Non-Discrimination Ombudsman finds that equality and non-discrimination are key ethical perspectives that must be taken into account in the use of artificial intelligence. There is already some legal practice on the subject. In Finland, this includes the decision of the National Non-Discrimination and Equality Tribunal concerning credit decisions based on statistical factors linked to discriminatory grounds.

The operation of artificial intelligence must be supervised and tested regularly to detect and prevent discrimination

Finland needs a broader evaluation and societal discussion of what kind of legislation on the use of artificial intelligence should be prepared for both the public and private sectors in order to safeguard the realisation of fundamental and human rights, including equality. When developing such legislation, more comprehensive and precise obligations on impact assessment and transparency need to be considered.

Artificial intelligence should not be taken into use if risks of discrimination are observed in the system, its operating context or its methods. The Non-Discrimination Ombudsman emphasises the essential role of proactive impact assessment and constant supervision. Although the Non-discrimination Act imposes a general obligation on authorities, employers and education providers to assess the equality effects of their operations, the content of the obligation and the circle of those bound by it should possibly be clarified and extended in other legislation, specifically as regards the use of AI. In addition, the applicability of the Non-discrimination Act to the use of artificial intelligence is not adequately recognised or known.

Transparency in the use of artificial intelligence is essential for assessing equality impacts, preventing discrimination, intervening in it and implementing effective supervision. This transparency should be ensured through regulation.

It is also important to ensure that actors who design or deploy AI systems have sufficient knowledge and competence to carry out an assessment of equality impacts. This must be ensured especially in the case of public authorities.

From the perspective of non-discrimination and equality, the essential question is not only whether a given way of utilising AI in a given context is legally discriminatory. The use of artificial intelligence must also benefit everyone equally. The use of AI may reinforce the power structures and inequalities already present in society. Its effects in various contexts of use, especially within public authorities, must therefore be reviewed from a broader perspective of societal equality than legal discrimination alone.

Given the far-reaching effects of artificial intelligence, adequate basic understanding of and training on it must be ensured for everyone. The diversity of the experts developing artificial intelligence must also be ensured.

More of the Non-Discrimination Ombudsman’s views are presented in detail in the statement (in Finnish) submitted to the Ministry of Justice on the memorandum concerning the need for general legislation on automated decision-making in administration (VVTDno-2020-620).
 

09.02.2021