Carmen Mazijn & Vincent Ginis

Artificial intelligence algorithms are increasingly being used to make difficult decisions. Their opaque functioning should not become a pretext for sweeping complex social issues under the carpet.

Today, many seminars, projects, and workshops are being organised in which researchers, ethicists, entrepreneurs, and policymakers take a closer look at the potential and limitations of AI. Let’s take advantage of this focus and not shy away from the difficult questions: Many algorithms are being rolled out precisely to avoid very complicated discussions.

One trend is clear: Artificial intelligence algorithms are affecting people’s lives in an increasingly profound way. They model data to make predictions about many aspects of a person’s life, and they do so very efficiently. By collecting a person’s data on a large scale and comparing it with data from other people, these algorithms are also used to make recommendations that affect what a person can and cannot do. This phenomenon occurs across the spectrum from almost trivial choices to life-defining decisions. For example, such algorithms increasingly decide who is offered a particular advertisement or discount, who gets a loan or insurance, who is invited to a job interview, who gets into a particular school, and, in the most extreme cases, even who gets parole (Makhlouf et al., 2021).

Automation and standardisation

Two elements are typically used to justify why people are increasingly being judged by a computer rather than by a human: automation and standardisation. Firstly, digital automation makes it possible to handle hundreds of decisions with a few mouse clicks. The algorithm quickly makes a first selection, so humans come into play only in the second round and need to deal with a much smaller amount of data. Secondly, it has been argued that standardisation by means of algorithms leads to more equal treatment. People have prejudices and are therefore unable to treat other people fairly; an algorithm, the argument goes, can provide the solution.

Both arguments have their merits, but they also have a downside. Take the argument of efficiency through automation. People who receive a negative decision from the algorithm are never looked at again. Certain errors the algorithm makes, so-called false negatives, therefore go unnoticed and, when the algorithm is applied repeatedly, even become a self-fulfilling prophecy. Efficiency thus has a very dark side.
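
To make this feedback loop concrete, here is a minimal, purely illustrative Python sketch; the scores, the cut-off, and the success model are all invented, and real screening systems are of course far more complex. The point is only that a rule whose rejections never generate outcome data can never detect its own false negatives:

    import random

    random.seed(0)

    def would_succeed(score):
        # Ground truth, unknown to the system: higher scores succeed more often.
        return random.random() < score / 100

    cutoff = 60            # the automated screening rule: reject below this score
    observed = []          # outcomes the system actually gets to see
    hidden_successes = 0   # false negatives: rejected people who would have succeeded

    for _ in range(10_000):
        score = random.randint(0, 99)
        if score >= cutoff:
            observed.append(would_succeed(score))
        else:
            # Rejected applicants never produce an outcome label,
            # so their successes stay invisible to the system.
            hidden_successes += would_succeed(score)

    print(f"Accepted applicants who succeeded: {sum(observed)} of {len(observed)}")
    print(f"Rejected applicants who would have succeeded (never seen): {hidden_successes}")
    # Retraining on `observed` alone cannot surface these errors: the collected
    # data contains no evidence against the cut-off, so the rule confirms itself.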

The argument of standardisation cannot completely legitimise AI either. After all, in complicated situations, algorithms cannot take enough of the context into account. The complexity of the situation is lost in the outcome, in which uncertainties or gaps in the input data have often disappeared entirely. Moreover, a bias in one widely used algorithm may become dominant throughout society, whereas many individual decisions based on people’s own personal biases could largely cancel each other out.
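
The contrast can be illustrated with a toy simulation; all numbers below are invented and the set-up is deliberately simplistic. One shared systematic bias shifts every decision in the same direction, whereas many individually varying biases largely average out:

    import random

    random.seed(1)

    N = 100_000  # number of decisions to be made

    # Scenario 1: one widely used model with a single systematic bias.
    shared_bias = 0.5
    shared_errors = [shared_bias] * N

    # Scenario 2: many individual decision-makers, each with their own bias,
    # randomly assigned to cases.
    judges = [random.gauss(0, 0.5) for _ in range(1_000)]
    individual_errors = [random.choice(judges) for _ in range(N)]

    print(f"Mean error, one shared biased model: {sum(shared_errors) / N:+.2f}")
    print(f"Mean error, many individual biases:  {sum(individual_errors) / N:+.2f}")
    # The shared bias never cancels; the many individual biases roughly do,
    # even though any single decision can still be just as unfair.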

The computer says no

However, the preceding arguments (i.e., automation and standardisation) conceal a third, underexposed reason why algorithms are so widely used: by using algorithms, certain complicated discussions can easily be swept under the carpet. It is easier to hide a decision behind the opacity of an algorithm than to explain why that decision was actually made.

For example, take the automatic grading of students’ written assignments, a capability that schools are already using (González-Calatayud et al., 2021). An algorithm can be created to look for certain patterns in the data and compare them with the work of students who previously submitted excellent assignments. Unfortunately, the developer often cannot explain exactly what these patterns are or why one person gets a good grade while another does not. Thus, in this situation, no one has to think about or explain why certain aspects result in a positive grade and others do not. Worse still, the algorithm’s decision might be based primarily on the student’s name or gender.
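
A deliberately tiny, made-up illustration of how this can happen: when past grades happen to correlate with students’ names, a naive pattern-matcher will identify the names as the most ‘informative’ feature, even though they say nothing about the quality of the work. The data and the word-counting model below are fabricated for the example:

    from collections import Counter

    # Invented training data: past assignments and the grades they received.
    # The student's name appears in the text and happens to correlate with the grade.
    past_assignments = [
        ("anna the experiment shows a clear trend in the data", "good"),
        ("anna results are consistent with the hypothesis", "good"),
        ("bram the experiment shows a clear trend in the data", "weak"),
        ("bram results are consistent with the hypothesis", "weak"),
    ]

    # Count how often each word occurs under each grade.
    counts = {"good": Counter(), "weak": Counter()}
    for text, grade in past_assignments:
        counts[grade].update(text.split())

    # Words whose frequency differs most between the two grades are exactly
    # the 'patterns' a naive model would rely on.
    vocabulary = set(counts["good"]) | set(counts["weak"])
    signal = {w: counts["good"][w] - counts["weak"][w] for w in vocabulary}
    for word, score in sorted(signal.items(), key=lambda kv: -abs(kv[1]))[:3]:
        print(f"{word:>10}: {score:+d}")
    # Only 'anna' (+2) and 'bram' (-2) carry any signal; every content word
    # appears equally often in both classes and scores 0.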

A second example is a text editor that predicts which word a user might want to write next. Here too, the prediction is based on what other people have written previously. Yet the developer cannot explain why one specific word was suggested and not another. The algorithm might suppress certain words or amplify their use without a clear reason. In this way, the language of students (and perhaps even their ideas) becomes uniform and less creative.
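
As a rough sketch of the mechanism (with an invented mini-corpus), consider a predictor that always proposes the most frequent continuation seen in other people’s text. It will steer every writer towards the same majority word and never offer the rarer, equally valid alternatives:

    from collections import Counter, defaultdict

    # Invented mini-corpus standing in for 'what other people have written'.
    corpus = (
        "the results are very promising . "
        "the results are very interesting . "
        "the results are very promising . "
        "the results are quite surprising ."
    ).split()

    # Count which word follows which (a bigram model, the simplest predictor).
    next_words = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current][following] += 1

    def suggest(word):
        # Always propose the single most common continuation.
        return next_words[word].most_common(1)[0][0]

    print(suggest("very"))  # 'promising': the majority choice always wins;
                            # 'interesting' occurs in the corpus but is never suggested.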

Thirdly, artificial intelligence algorithms are used to label pictures. These algorithms are trained on an immense number of labelled example images, but after training, even the creators of the algorithm cannot say with certainty which elements of a picture determined the predicted label. This practice has led to racist labelling, including the labelling of Black people’s hairstyles as ‘unprofessional’ (Boult, 2016). In these cases, it is crucial to understand the algorithm and not hide behind the black box.

The uncomfortable questions

How can we move forward? After all, we are not opposed to efficient tools that can assist us in a complex world. However, we think we should dare to ask uncomfortable questions more often. When developing an algorithm that makes judgements about people, it is important to ask whether the algorithm is really necessary and what purpose it serves. Then, we must critically examine which characteristics of a person may legitimately be linked to the decision and which are totally irrelevant. This is not a decision that should be made only by the designers of an algorithm. The discussion is not easy, but we must dare to have it in our communities and in society.

Subsequently, we must also question the internal operation of these algorithms and thus strive for greater transparency. After all, everyone is entitled to a clear explanation of why a certain decision was made, including in education (Khosravi et al., 2022). Such decisions should not be obscured by a ‘black box’ that sometimes not even the designer fully understands. The European Union is working on new regulations to make models and algorithms transparent, fair, and privacy-friendly (European Commission, 2021). Researchers are also addressing this issue on legal, social, scientific, and technological levels.

The enchanting efficiency with which self-learning algorithms pick up certain patterns in our data and use them to make recommendations means that many people willingly allow themselves to be led and guided by them. However, the gap between designers, users, and citizens is widening. Hence, we call on policymakers, citizens, and, importantly, teachers to critically embrace self-learning algorithms, to educate young people, and to encourage them to question the entire pipeline of decisions based on self-learning algorithms. Only then will these algorithms become useful tools that serve society as a whole.

References

Boult, A. (2016, April 8). Google under fire over ‘racist’ image search results for ‘unprofessional hair’. The Telegraph. https://www.telegraph.co.uk/technology/2016/04/08/google-under-fire-over-racist-image-search-results-for-unprofess/

European Commission. (2021, April 21). Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence [Press release]. https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

González-Calatayud, V., Prendes-Espinosa, P., & Roig-Vila, R. (2021). Artificial intelligence for student assessment: A systematic review. Applied Sciences, 11(12), 5467.

Khosravi, H., Buckingham Shum, S., Chen, G., Conati, C., Gašević, D., Kay, J., … Tsai, Y.-S. (2022). Explainable Artificial Intelligence in education. Computers and Education: Artificial Intelligence, 3, 100074.

Makhlouf, K., Zhioua, S., & Palamidessi, C. (2021). On the applicability of machine learning fairness notions. ACM SIGKDD Explorations Newsletter, 23(1), 14-23.