Did you know that discrimination related to the use of artificial intelligence and algorithms is supervised by the Non-Discrimination Ombudsman?

I recently tweeted about the use of artificial intelligence (AI) and discrimination under the same title. The reactions I received revealed that many were unaware of the Non-Discrimination Ombudsman's mandate to supervise discrimination in the use of AI. In this blog article, I discuss some of the links between AI and discrimination and explain how the Non-Discrimination Ombudsman can intervene even in this kind of discrimination.

Discriminatory effects in the use of AI, and in automated algorithmic decision-making in particular, may derive from a variety of factors: errors or deficiencies in the training data, poorly selected predictor variables or selection criteria, or an algorithm that is directly built to give significance to grounds for discrimination, such as age, language or gender. For example, facial recognition algorithms have been reported to identify white people more accurately, because the AI has been trained predominantly on white faces. Consequently, dark-skinned people are more often subjected to false suspicions and unnecessary measures on a discriminatory basis. AI used in recruitment may lead to hiring decisions that discriminate on the basis of gender, for example, if the algorithm has learned from earlier recruitment practices that were one-sided in terms of gender.
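To make this concrete, here is a minimal sketch in Python with entirely made-up data and names of my own (it does not describe any real system): when biased historical decisions are used as training data and a ground for discrimination such as gender is included as a predictor variable, the model simply learns to repeat the bias.

```python
# A purely hypothetical sketch: historical hiring decisions that favoured one
# gender are used as training data, and because gender is also given to the
# model as a predictor variable, the model learns the bias directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

gender = rng.integers(0, 2, size=n)      # 0/1, illustrative labels only
skill = rng.normal(50.0, 10.0, size=n)   # identically distributed for both genders

# Biased historical outcome: only gender 0 was ever hired.
hired = ((skill > 45) & (gender == 0)).astype(int)

# Gender is included among the predictor variables.
X = np.column_stack([gender, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The trained model reproduces the one-sided practice for new applicants.
pred = model.predict(X)
for g in (0, 1):
    print(f"gender {g}: predicted hire rate {pred[gender == g].mean():.2f}")
```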

In addition, even if the AI were not allowed to give direct significance to specific grounds for discrimination, and this information were removed from the data, the AI might still find in the data personal information that is strongly associated with those grounds. For example, information on language and place of residence may in some situations correlate strongly with racial or ethnic origin. Even without the developers or users of the AI intending or wishing it, the AI may, by combining personal data, still indirectly end up making discriminatory predictions and conclusions on its own.
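Continuing the same hypothetical sketch, this proxy effect can be illustrated by removing the protected attribute from the model's inputs and leaving in a feature that merely correlates with it, such as postcode; the discriminatory outcome survives the removal.

```python
# The same hypothetical setup, but the protected attribute is now removed from
# the inputs. A correlated feature (postcode, standing in for residential
# segregation) acts as a proxy, so the biased outcome survives the removal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)  # protected attribute, never shown to the model
# Postcode agrees with group membership 90% of the time (an assumed correlation).
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)
income = rng.normal(3000.0, 500.0, size=n)

# Biased historical decisions: only group 0 was approved.
approved = ((income > 2800) & (group == 0)).astype(int)

# The model only ever sees postcode and income.
X = np.column_stack([postcode, income])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Approval rates still differ sharply by the (hidden) protected group.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```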

However, it is always the responsibility of humans to ensure that AI does not bring about discrimination. In terms of responsibility for discrimination, it is irrelevant whether the discriminatory treatment resulted from the actions of a person or of an algorithm. The party responsible for the system must ensure that its operation complies with the Non-Discrimination Act in all situations.

The Non-Discrimination Act, which prohibits and defines discrimination, also applies to the use of AI. The act leaves the list of grounds for discrimination open. Depending on the case, discrimination may be based on such personal grounds as place of residence or social status. Discrimination based on gender is prohibited under the Equality Act.

The Non-Discrimination Ombudsman is tasked with supervising the prohibition of discrimination. If suspicions arise about discrimination in the use of AI, a complaint may be submitted to the Ombudsman, or we can investigate the matter on our own initiative. The Ombudsman does not have decision-making power, but may promote reconciliation between the parties or, if necessary, take the matter to the National Non-Discrimination and Equality Tribunal or to court.

The Non-Discrimination Ombudsman took a case concerning automated decision-making in lending to the National Non-Discrimination and Equality Tribunal in 2017. The automation was based on a system in which loan applicants were scored on the basis of their place of residence, gender, mother tongue and age. In its decision, the Tribunal concluded that discrimination had taken place and imposed a significant conditional fine on the party found guilty of discrimination. 

To prevent and detect discrimination, the operation of AI must be monitored and tested on a regular basis. An impact assessment must be performed even before a system is deployed. Various operators – including companies, research institutes and organisations – have started to produce tools and sets of questions to serve as a basis for such assessments. It is important to use them and to develop them further.
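As a loose illustration of what such regular monitoring might involve, the following sketch compares decision rates across groups in a batch of logged decisions. The log format, the group labels and the single summary ratio are assumptions of mine for illustration, not a prescribed or official testing method; real impact assessments are, of course, much broader.

```python
# A sketch of one possible recurring check: compare positive-decision rates
# across groups in a batch of logged decisions (illustrative assumption only).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, decision) pairs, with decision in {0, 1}."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Lowest group rate divided by the highest (1.0 means equal rates)."""
    return min(rates.values()) / max(rates.values())

# Example batch from a hypothetical decision log.
batch = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(batch)
print(rates)                # roughly {'A': 0.67, 'B': 0.33}
print(parity_ratio(rates))  # 0.5, i.e. group B is selected half as often
```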

The risk of discrimination must also be identified and prevented effectively in connection with the risk and impact assessments required by data protection regulation. Various tools have also been developed to support transparency. The greatest possible transparency of AI systems is essential for preventing and supervising discrimination and for implementing the obligation to promote equality.

The Non-Discrimination Act obliges authorities, employers and education providers to promote equality. This obligation also applies to the use of AI. In other words, an equality impact assessment must be carried out already when the use of artificial intelligence is being planned. The obligation to promote equality also covers cooperation with private operators and the outsourcing of activities to them. For example, authorities must take effective measures to ensure that they do not acquire and use discriminatory AI systems created by the private sector. The equality impacts of the use of AI must also be considered as part of equality planning.

As the use of AI and algorithmic decision-making increases, the significance and number of equality issues will grow. The Ombudsman has recently discussed the topic with a number of different actors, such as the Financial Supervisory Authority and the largest banks (non-discrimination in credit scoring), the Central Chamber of Commerce and the Finnish Commerce Federation (non-discrimination in marketing and pricing, corporate responsibility in the use of AI), the Data Protection Ombudsman (cooperation between the two Ombudsman offices, impact assessments) and several ministries (impact assessment in AI projects, consideration of AI in equality planning).

In our society, general views on what is fair and good vary, and this discussion is naturally also reflected in the use of AI. In the future, the national courts, the EU and the European Court of Human Rights will undoubtedly provide interesting guidance on weighing the proportionality of the impacts of AI use from the perspective of the prohibition of discrimination. It is quite impossible to provide general guidelines that would apply to all situations: discriminatory effects must be assessed on a case-by-case basis, taking into account, for example, the context, purpose and impacts of the use of AI.

However, it is clear that AI can easily become discriminatory if the people developing it, and those commissioning it for their services, are not aware of the risks of discrimination involved.
 
I also see plenty of opportunities in the use of artificial intelligence for promoting equality. Hopefully, in the future, the development of AI systems will focus specifically on promoting equality.

Additional information:
Further information on the use of AI and the prevention of discrimination can be found, for example, in the following Council of Europe publications:

Discrimination, artificial intelligence and algorithmic decision-making

Unboxing artificial intelligence – 10 steps to protect human rights