
The badly brought up artificial intelligence


2021.02.25

In an earlier article, I wrote about an imagined situation in which an artificial intelligence (AI) designed to organize the manufacture of paperclips turns everything into little paperclips, destroying the planet in the process. How could such a thing happen, even in theory? Because the AI had been badly taught: it did not see the 'big picture', and as far as it was concerned only paperclip manufacturing counted. But should we worry about such things happening in the real world? I will give two examples to show that the problem already exists today: a sexist HR application and a racist criminal-justice application!
In 2014, Amazon set out to develop an AI application to assess job candidates. The system graded applicants from one to five stars based on their CVs, much like the product recommendations of an online store. The problem was that the algorithm turned out to favour men over women for numerous technical positions, such as software developer. At first sight this was surprising, because the documents contained no information about gender. However, the algorithm had been trained on the job applications Amazon had received in earlier years, and among those the successful candidates were overwhelmingly men. On that basis, someone unaware of the impact of historical gender-based discrimination might mistakenly conclude that men are more suitable programmers than women. AI is extremely effective at uncovering hidden connections, and it successfully 'learnt' complex correlations that were associated not with programming skill but with the gender of the applicant. The use of certain words and the mention of typical sports and hobbies correlate closely with gender, and in the historical training data these could be matched with the candidate's eventual success or failure, as the sketch below illustrates. In other words, the AI-based algorithm merely reinforced our own prejudices. Despite all their efforts, the developers were unable to eliminate this distortion completely, and the application was effectively withdrawn from HR processes in 2018.
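To make the proxy mechanism concrete, here is a minimal toy sketch. It does not reproduce Amazon's actual system; the data, the hobby feature and the bias strength are all invented. It trains a simple classifier on historically biased hiring decisions without ever showing it the gender field, and the gender-correlated hobby term still ends up carrying the penalty.

```python
# Toy illustration of proxy bias: gender is never given to the model,
# yet a hobby term that correlates with gender absorbs the historical bias.
# All data and feature names are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hidden attribute (deliberately NOT used as a feature): 0 = man, 1 = woman.
gender = rng.integers(0, 2, size=n)

# Observable CV features.
coding_skill = rng.normal(0, 1, size=n)  # genuinely job-relevant
mentions_netball = (rng.random(n) < np.where(gender == 1, 0.7, 0.05)).astype(float)  # gender proxy

# Biased historical labels: skill matters, but men were favoured outright.
hired = (coding_skill + 1.0 * (gender == 0) + rng.normal(0, 0.5, n) > 0.8).astype(int)

X = np.column_stack([coding_skill, mentions_netball])  # gender itself is excluded
model = LogisticRegression().fit(X, hired)

print("coefficients [skill, proxy hobby]:", model.coef_[0])
# The proxy-hobby coefficient comes out strongly negative: the model has
# effectively reconstructed the gender bias from an innocuous-looking CV term.
```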
In 2019, a research team pointed out another interesting phenomenon related to this. According to their analysis, an online job advertisement worded in a completely gender-neutral way reached far fewer women than men. The reason is not much of a mystery. Young women, a prime target group for advertisers, account for 70-80% of online shoppers (could this itself be the result of another hidden distortion?). Social media platforms therefore price advertisements aimed at this group higher, on the grounds of simple supply and demand. Since the objective of HR advertising algorithms is to optimize costs as well as to recruit successfully, it comes as no surprise that the algorithms learnt that it is more cost-effective to target men than women.
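The cost logic can be shown with a back-of-the-envelope calculation; the prices and response rates below are invented purely for illustration. Even with identical genuine interest per impression, an optimizer that maximizes applicants per unit of budget drifts toward the cheaper audience.

```python
# Back-of-the-envelope illustration of why a cost-optimizing ad algorithm
# drifts toward the cheaper audience. All numbers are invented.
cost_per_1000_impressions = {"men": 4.0, "women": 7.0}  # women cost more to reach
application_rate = {"men": 0.002, "women": 0.002}       # identical interest per impression

budget = 10_000.0
for group in cost_per_1000_impressions:
    impressions = budget / cost_per_1000_impressions[group] * 1000
    applicants = impressions * application_rate[group]
    print(f"{group}: {applicants:.0f} expected applicants for the whole budget")
# Spending the whole budget on men yields ~5000 expected applicants versus
# ~2860 for women, so a cost-per-applicant optimizer shifts impressions to men.
```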
The case of COMPAS, an AI application used in American courts, is no less problematic. Courts use this program to estimate, for example, how likely a given offender is to reoffend. However, an extremely detailed analysis in 2016 confirmed the intuition that the algorithm gives results distorted by race. The fundamental problem, misunderstood by many, is not that the algorithm on average assigns a higher risk of recidivism to black defendants than to white ones; the analysts examined how the algorithm's predictions compared with the outcomes, that is, what proportion of those released actually committed further crimes. Over the two-year tracking period, among defendants who did not reoffend, the software had classified 45% of black defendants as likely recidivists but only 23% of white defendants; conversely, among those who did go on to reoffend, it had misclassified 48% of white defendants as low risk but only 28% of black defendants. A good algorithm should be race neutral when its predictions are compared with the facts, even if the frequency of offending is not distributed equally between the different groups.
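The distinction the analysts drew can be made precise with per-group error rates. The sketch below uses a tiny made-up set of records, not the actual COMPAS data: it computes the false positive rate (labelled high risk but did not reoffend) and false negative rate (labelled low risk but did reoffend) separately for each group. A race-neutral algorithm would show roughly equal rates across groups.

```python
# Per-group false positive / false negative rates, the quantities at the heart
# of the 2016 COMPAS analysis. The records below are made up for illustration.
from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True,  False), ("black", True,  True),  ("black", False, True),
    ("black", True,  False), ("black", False, False), ("black", True,  True),
    ("white", False, False), ("white", False, True),  ("white", True,  True),
    ("white", False, True),  ("white", False, False), ("white", True,  False),
]

counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
for group, high_risk, reoffended in records:
    c = counts[group]
    if not reoffended:
        c["neg"] += 1
        c["fp"] += high_risk        # flagged high risk despite not reoffending
    else:
        c["pos"] += 1
        c["fn"] += not high_risk    # rated low risk despite reoffending

for group, c in counts.items():
    print(f"{group}: false positive rate {c['fp'] / c['neg']:.0%}, "
          f"false negative rate {c['fn'] / c['pos']:.0%}")
```

With this toy data the output mirrors the qualitative pattern described above: the black group gets the higher false positive rate, the white group the higher false negative rate, even though neither rate refers to how often each group actually reoffends.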

The above examples show that we generally consider an AI algorithm efficient if it delivers better results than we could attain with people and a sensible investment of resources. However, this interpretation of efficiency does not rule out the possibility that the algorithm discriminates against certain groups.

Given the problems outlined above, many would conclude that AI is simply not suitable for such tasks, because in certain situations it is unacceptably unfair. We expect decisions taken in a 'mathematical' way to be fair, fairer than we could be, but to achieve this we would first have to formulate the concept of fairness mathematically. In a recruitment procedure, it might be socially most just to pick the five people to be employed at random from a hundred applicants; from the individual's point of view, however, this is highly unfair, because it ignores the skills of and the energy invested by each candidate. On gender and race there is an emerging consensus that discrimination should be avoided, but other cases may be far more complicated. The data sets we use to teach AI already contain our own injustices and biases. Ideally, the training data should represent not the current situation but the state of affairs we judge to be fair!

The examples so far concern cases where we taught the AI with curated data, that is, we supervised the teaching process. There are, however, technologies where we leave the teaching to the environment. These methods are very important in situations where we do not have a large quantity of labelled historical data at our disposal. In such cases the possibility of distortion remains, only it lies not in the old data but in the environment. The chatbot Tay, introduced by Microsoft on Twitter in 2016, is an unsettling example. Tay learnt and expanded its knowledge from its chats, and it did not even take a single day for the naive Tay to become a foul-mouthed racist. Microsoft quickly pulled the plug on Tay.
So, what can we take away from all this? Perhaps that it will be awfully difficult to realize Asimov's First Law of Robotics as long as we ourselves are unable to formulate unambiguously what is good and what is bad:

'A robot may not injure a human being or, through inaction, allow a human being to come to harm.'

Gábor Hraskó

Head of CCS Division, INNObyte
